Front cover

IBM System Storage DS8870
Architecture and Implementation

Dual IBM POWER7 based controllers with up to 1 TB of cache
400 GB SSDs with Full Disk Encryption support
Improved power supplies and extended Power Line Disturbance

Bertrand Dufrasne
Pere Alcaide
Peter Kimmel
Istvan Paloczi
Akin Sakarcan
Roland Wolf

ibm.com/redbooks

International Technical Support Organization

IBM System Storage DS8870 Architecture and Implementation

January 2013

SG24-8085-00

Note: Before using this information and the product it supports, read the information in “Notices” on page xi.

First Edition (January 2013)

This edition applies to the DS8870 with Licensed Machine Code (LMC) 7.7.xx.xx (bundle version 87.xx.x).

© Copyright International Business Machines Corporation 2013. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents
Notices
Trademarks

Preface
The team who wrote this book
Now you can become a published author, too!
Comments welcome
Stay connected to IBM Redbooks

Part 1. Concepts and architecture

Chapter 1. Introduction to the IBM System Storage DS8870
1.1 Introduction to the DS8870
1.1.1 Features of the DS8870
1.2 The DS8870 controller options and frames
1.3 DS8870 architecture and functions overview
1.3.1 Overall architecture and components
1.3.2 Storage capacity
1.3.3 Supported environments
1.3.4 Configuration flexibility
1.3.5 Copy Services functions
1.3.6 Resource Groups for copy services scope limiting
1.3.7 Service and setup
1.3.8 IBM Certified Secure Data Overwrite
1.4 Performance features
1.4.1 Sophisticated caching algorithms
1.4.2 Solid State Drives
1.4.3 Multipath Subsystem Device Driver
1.4.4 Performance for System z
1.4.5 Performance enhancements for IBM Power Systems

Chapter 2. IBM System Storage DS8870 models
2.1 The System Storage DS8870
2.2 Model overview
2.3 DS8870 Disk drive options
2.4 Additional licenses that are needed

Chapter 3. DS8870 hardware components and architecture
3.1 Frames: DS8870
3.1.1 Base frame: DS8870
3.1.2 Expansion frames
3.1.3 Rack operator panel
3.2 DS8870 architecture overview
3.2.1 The IBM POWER7 processor-based server
3.2.2 Peripheral Component Interconnect Express adapters
3.2.3 Storage facility processor complex
3.2.4 Processor memory and cache management
3.2.5 Flexible service processor and system power control network
3.3 I/O enclosures


3.3.1 DS8870 I/O enclosure
3.3.2 Host adapters
3.3.3 Device adapters
3.4 Disk subsystem
3.4.1 Disk enclosures
3.4.2 Disk drives
3.5 Power and cooling
3.6 Management console network
3.6.1 Ethernet switches

Chapter 4. RAS on IBM System Storage DS8870
4.1 Names and terms for the DS8870 storage system
4.2 RAS features of DS8870 CEC
4.2.1 POWER7 Hypervisor
4.2.2 POWER7 processor
4.2.3 AIX operating system
4.2.4 CEC dual hard disk drive rebuild
4.2.5 Cross Cluster communication
4.2.6 Environmental monitoring
4.2.7 Resource deallocation
4.3 CEC failover and failback
4.3.1 Dual operational
4.3.2 Failover
4.3.3 Failback
4.3.4 NVS and power outages
4.4 Data flow in DS8870
4.4.1 I/O enclosures
4.4.2 Host connections
4.4.3 Metadata checks
4.5 RAS on the HMC
4.5.1 Microcode updates
4.5.2 Call Home and Remote Support
4.6 RAS on the disk system
4.6.1 RAID configurations
4.6.2 Disk path redundancy
4.6.3 Predictive Failure Analysis
4.6.4 Disk scrubbing
4.6.5 Smart Rebuild
4.6.6 RAID 5 overview
4.6.7 RAID 6 overview
4.6.8 RAID 10 overview
4.6.9 Spare creation
4.7 RAS on the power subsystem
4.7.1 Components
4.7.2 Line power loss
4.7.3 Line power fluctuation
4.7.4 Power control
4.7.5 Emergency power off
4.8 RAS and Full Disk Encryption
4.8.1 Deadlock recovery
4.8.2 Dual platform Tivoli Key Lifecycle Manager servers
4.9 Other features
4.9.1 Internal network


4.9.2 Remote support
4.9.3 Earthquake resistance

Chapter 5. Virtualization concepts
5.1 Virtualization definition
5.2 The abstraction layers for disk virtualization
5.2.1 Array sites
5.2.2 Arrays
5.2.3 Ranks
5.2.4 Extent Pools
5.2.5 Logical volumes
5.2.6 Space Efficient volumes
5.2.7 Allocation, deletion, and modification of LUNs/CKD volumes
5.2.8 Logical subsystem
5.2.9 Volume access
5.2.10 Virtualization hierarchy summary
5.3 Benefits of virtualization
5.4 zDAC - z/OS FICON discovery and Auto-Configuration
5.5 EAV V2 - Extended Address Volumes

Chapter 6. IBM System Storage DS8000 Copy Services overview
6.1 Copy Services
6.2 FlashCopy and FlashCopy SE
6.2.1 Basic concepts
6.2.2 Benefits and use
6.2.3 FlashCopy options
6.2.4 FlashCopy SE-specific options
6.2.5 Remote Pair FlashCopy
6.3 Remote Mirror and Copy
6.3.1 Metro Mirror
6.3.2 Global Copy
6.3.3 Global Mirror
6.3.4 Metro/Global Mirror
6.3.5 Multiple Global Mirror sessions
6.3.6 Thin provisioning enhancements on open environments
6.3.7 GM and MGM improvement because of collision avoidance
6.3.8 z/OS Global Mirror
6.3.9 z/OS Metro/Global Mirror
6.3.10 Summary of Remote Mirror and Copy function characteristics
6.3.11 Consistency group considerations
6.3.12 GDPS on zOS environments
6.3.13 Tivoli Storage Productivity Center for Replication functionality
6.4 Resource Groups for copy services

Chapter 7. Architectured for Performance
7.1 DS8870 hardware: Performance characteristics
7.1.1 Vertical growth and scalability
7.1.2 DS8870 Fibre Channel switched interconnection at the back-end
7.1.3 Fibre Channel device adapter
7.1.4 Eight-port and four-port host adapters
7.2 Software performance: Synergy items
7.2.1 Synergy on System p
7.2.2 Synergy on System z
7.3 Performance considerations for disk drives

7.4 DS8000 superior caching algorithms
7.4.1 Sequential Adaptive Replacement Cache
7.4.2 Adaptive Multi-stream Prefetching
7.4.3 Intelligent Write Caching
7.5 Performance considerations for logical configuration
7.5.1 Workload characteristics
7.5.2 Data placement in the DS8000
7.5.3 Data placement
7.6 I/O Priority Manager
7.6.1 Performance policies for open systems
7.6.2 Performance policies for System z
7.7 IBM Easy Tier
7.7.1 IBM Easy Tier operating modes
7.7.2 IBM Easy Tier Statement of Direction
7.8 Performance and sizing considerations for open systems
7.8.1 Determining the number of paths to a LUN
7.8.2 Dynamic I/O load-balancing: SDD
7.8.3 Automatic port queues
7.8.4 Determining where to attach the host
7.9 Performance and sizing considerations for System z
7.9.1 Host connections to System z servers
7.9.2 Parallel Access Volume
7.9.3 z/OS Workload Manager: Dynamic PAV tuning
7.9.4 HyperPAV
7.9.5 PAV in z/VM environments
7.9.6 Multiple Allegiance
7.9.7 I/O priority queuing
7.9.8 Performance considerations on Extended Distance FICON
7.9.9 High Performance FICON for z

Part 2. Planning and installation

Chapter 8. DS8870 Physical planning and installation
8.1 Considerations before installation
8.1.1 Who should be involved
8.1.2 What information is required
8.2 Planning for the physical installation
8.2.1 Delivery and staging area
8.2.2 Floor type and loading
8.2.3 Overhead cabling features
8.2.4 Room space and service clearance
8.2.5 Power requirements and operating environment
8.2.6 Host interface and cables
8.2.7 Host adapter Fibre Channel specifics for open environments
8.2.8 FICON specifics on zOS environment
8.2.9 Best practice for host adapters
8.2.10 WWNN and WWPN determination
8.3 Network connectivity planning
8.3.1 Hardware Management Console and network access
8.3.2 IBM Tivoli Storage Productivity Center
8.3.3 DS command-line interface
8.3.4 Remote support connection
8.3.5 Remote power control


8.3.6 Storage area network connection
8.3.7 Tivoli Key Lifecycle Manager server for encryption
8.3.8 Lightweight Directory Access Protocol server for single sign-on
8.4 Remote mirror and copy connectivity
8.5 Disk capacity considerations
8.5.1 Disk sparing
8.5.2 Disk capacity
8.5.3 DS8000 Solid-State drive considerations
8.6 Planning for growth

Chapter 9. DS8870 HMC planning and setup
9.1 Hardware Management Console overview
9.1.1 Storage HMC hardware
9.1.2 Private Ethernet networks
9.2 Hardware Management Console software
9.2.1 DS Storage Manager GUI
9.2.2 Command-line interface
9.2.3 DS Open application programming interface
9.2.4 Web-based user interface
9.3 HMC activities
9.3.1 HMC planning tasks
9.3.2 Planning for microcode upgrades
9.3.3 Time synchronization
9.3.4 Monitoring DS8870 with the HMC
9.3.5 Call Home and remote support
9.4 HMC and IPv6
9.5 HMC user management
9.6 External HMC
9.6.1 External HMC benefits
9.7 Configuration worksheets
9.8 Configuration flow

Chapter 10. IBM System Storage DS8000 features and license keys
10.1 IBM System Storage DS8000 licensed functions
10.1.1 Licensing
10.1.2 Licensing: cost structure
10.2 Activating licensed functions
10.2.1 Obtaining DS8000 machine information
10.2.2 Obtaining activation codes
10.2.3 Applying activation codes by using the GUI
10.2.4 Applying activation codes by using the DS CLI
10.3 Licensed scope considerations
10.3.1 Why you have a choice
10.3.2 Using a feature for which you are not licensed
10.3.3 Changing the scope to All
10.3.4 Changing the scope from All to FB
10.3.5 Applying an insufficient license feature key
10.3.6 Calculating how much capacity is used for CKD or FB

Part 3. Storage configuration

Chapter 11. Configuration flow
11.1 Configuration worksheets
11.2 Configuration flow

Chapter 12. Configuring IBM Tivoli Storage Productivity Center 5.1 for DS8000
12.1 Introducing IBM Tivoli Storage Productivity Center 5.1
12.2 IBM Tivoli Storage Productivity Center Architecture
12.3 Adding a DS8000 storage system with IBM Tivoli Storage Productivity Center 5.1
12.4 Exploring DS8870 with IBM Tivoli Storage Productivity Center 5.1 web-based GUI

Chapter 13. Configuration by using the DS Storage Manager GUI
13.1 DS Storage Manager GUI overview
13.1.1 Accessing the DS GUI
13.1.2 DS GUI Overview window
13.2 User management by using the DS GUI
13.3 Logical configuration process
13.4 Examples of configuring DS8000 storage
13.4.1 Defining a storage complex
13.4.2 Creating arrays
13.4.3 Creating ranks
13.4.4 Creating Extent Pools
13.4.5 Configuring I/O ports
13.4.6 Configuring logical host systems
13.4.7 Creating fixed block volumes
13.4.8 Creating volume groups
13.4.9 Creating LCUs and CKD volumes
13.4.10 Additional tasks on LCUs and CKD volumes
13.5 Other DS GUI functions
13.5.1 Easy Tier
13.5.2 I/O Priority Manager
13.5.3 Checking the status of the DS8000
13.5.4 Exploring the DS8000 hardware

Chapter 14. Configuration with the DS Command-Line Interface
14.1 DS Command-Line Interface overview
14.1.1 Supported operating systems for the DS CLI
14.1.2 User accounts
14.1.3 User management by using the DS CLI
14.1.4 DS CLI profile
14.1.5 Configuring DS CLI to use a second HMC
14.1.6 Command structure
14.1.7 Using the DS CLI application
14.1.8 Return codes
14.1.9 User assistance
14.2 Configuring the I/O ports
14.3 Configuring the DS8000 storage for FB volumes
14.3.1 Creating arrays
14.3.2 Creating ranks
14.3.3 Creating Extent Pools
14.3.4 Creating FB volumes
14.3.5 Creating volume groups
14.3.6 Creating host connections
14.3.7 Mapping open systems host disks to storage unit volumes
14.4 Configuring DS8000 storage for CKD volumes
14.4.1 Create arrays
14.4.2 Ranks and Extent Pool creation
14.4.3 Logical control unit creation
14.4.4 Creating CKD volumes

14.4.5 Resource Groups
14.4.6 Performance I/O Priority Manager
14.4.7 Easy Tier
14.5 Metrics with DS CLI
14.6 Private network security commands
14.7 Copy Services commands

Part 4. Maintenance and upgrades

Chapter 15. Licensed machine code
15.1 How new microcode is released
15.2 Bundle installation
15.3 Concurrent and non-concurrent updates
15.4 Code updates
15.5 Host adapter firmware updates
15.6 Loading the code bundle
15.7 Post-installation activities
15.8 Summary

Chapter 16. Monitoring with Simple Network Management Protocol
16.1 Simple Network Management Protocol overview
16.1.1 SNMP agent
16.1.2 SNMP manager
16.1.3 SNMP trap
16.1.4 SNMP communication
16.1.5 SNMP Requirements
16.1.6 Generic SNMP security
16.1.7 Message Information Base
16.1.8 SNMP trap request
16.1.9 DS8000 SNMP configuration
16.2 SNMP notifications
16.2.1 Serviceable event that uses specific trap 3
16.2.2 Copy Services event traps
16.2.3 I/O Priority Manager SNMP
16.2.4 Thin Provisioning SNMP
16.3 SNMP configuration
16.3.1 SNMP preparation
16.3.2 SNMP configuration from the HMC
16.3.3 SNMP configuration with the DS CLI

Chapter 17. Remote support
17.1 Introduction to remote support
17.1.1 Suggested reading
17.1.2 Organization of this chapter
17.1.3 Terminology and definitions
17.2 IBM policies for remote support
17.3 VPN rationale and advantages
17.4 Remote connection types
17.4.1 Asynchronous modem
17.4.2 IP network
17.4.3 IP network with traditional VPN
17.4.4 AOS
17.5 DS8870 support tasks
17.5.1 Call Home and heartbeat: outbound

17.5.2 Data offload: outbound
17.5.3 Code download: inbound
17.5.4 Remote support: inbound and two way
17.6 Remote connection scenarios
17.6.1 No connections
17.6.2 Modem only
17.6.3 VPN only
17.6.4 Modem and network with no VPN
17.6.5 Modem and traditional VPN
17.6.6 AOS with VPN versus a modem
17.7 Assist On-site
17.7.1 AOS components
17.7.2 AOS Security
17.7.3 Support Session modes
17.7.4 Port Forwarding and AOS Gateway for the DS8870
17.7.5 AOS Best Practices for DS8870
17.8 Further remote support enhancements
17.9 Audit logging

Chapter 18. DS8870 Capacity upgrades and CoD
18.1 Installing capacity upgrades
18.1.1 Installation order of upgrades
18.1.2 Checking how much total capacity is installed
18.2 Using Capacity on Demand
18.2.1 What is Capacity on Demand
18.2.2 Determining whether a DS8870 includes CoD disks
18.2.3 Using the CoD storage

Appendix A. Tools and service offerings
Planning and administration tools
Capacity Magic
Disk Magic
Storage Tier Advisor Tool
IBM Tivoli Storage FlashCopy Manager
IBM Service offerings
IBM Global Technology Services: Service offerings
IBM STG Lab Services: Service offerings

Abbreviations and acronyms

Related publications
IBM Redbooks publications
Other publications
Online resources
How to get IBM Redbooks publications
Help from IBM

Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.

Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at:
http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:

AIX® CICS® Cognos® DB2® DS4000® DS6000™ DS8000® Easy Tier® Enterprise Storage Server® ESCON® FICON® FlashCopy® GDPS® Global Technology Services® HACMP™ HyperSwap® i5/OS™ IBM SmartCloud™ IBM® IMS™ iSeries® NetView® Power Architecture® POWER Hypervisor™ Power Systems™ POWER6+™ POWER6® POWER7 Systems™ POWER7® PowerHA® PowerPC® POWER® ProtecTIER® pSeries® Real-time Compression™ Redbooks® Redpapers™ Redbooks (logo)® RMF™ S/390® Storwize® System i® System p® System Storage DS® System Storage® System x® System z10® System z® SystemMirror® TDMF® Tivoli® XIV® z/OS® z/VM® z10™ z9® zEnterprise® zSeries®

The following terms are trademarks of other companies:

Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

Preface

This IBM® Redbooks® publication describes the concepts, architecture, and implementation of the IBM System Storage® DS8870 storage system. The book provides reference information to assist readers who need to plan for, install, and configure the DS8870.

The IBM System Storage DS8870 is the most advanced model in the IBM DS8000® lineup and is equipped with IBM POWER7® based controllers. Various configuration options are available that scale from dual 2-core systems up to dual 16-core systems with up to 1 TB of cache.

The DS8870 is equipped with high-density storage enclosures that are populated with 24 small-form-factor SAS-2 drives or storage enclosures for 12 large-form-factor nearline SAS drives. The DS8870 storage subsystems also can be equipped with Solid-State Drives (SSDs). All disk drives in the DS8870 storage system have the Full Disk Encryption (FDE) feature. The DS8870 can automatically optimize the use of each storage tier, particularly SSD drives, through the IBM Easy Tier® feature, which is available for no extra fee. For more information about Easy Tier, see IBM System Storage DS8000: Easy Tier Concepts and Usage, REDP-4667.

The DS8870 also features enhanced 8 Gbps device adapters and host adapters. Connectivity options, with up to 128 Fibre Channel/FICON® ports for host connections, make the DS8870 suitable for multiple server environments in open systems and IBM System z® environments.

The DS8870 supports advanced disaster recovery solutions, business continuity solutions, and thin provisioning. The DS8870 also can be integrated in an LDAP infrastructure.

For more information about host attachment and interoperability for the DS8000 series, see IBM System Storage DS8000: Host Attachment and Interoperability, SG24-8887.

For more information about DS8000 Copy Services functions, see IBM System Storage DS8000: Copy Services for Open Environments, SG24-6788, and IBM System Storage DS8000: Copy Services for IBM System z, SG24-6787.

For information about specific features, see the following publications:
IBM System Storage DS8000: Easy Tier Concepts and Usage, REDP-4667
IBM System Storage DS8000: Priority Manager, REDP-4760
IBM System Storage DS8000: Copy Services Resource Groups, REDP-4758
IBM System Storage DS8700 Disk Encryption Implementation and Usage Guidelines, REDP-4500
IBM System Storage DS8000: LDAP Authentication, REDP-4505

The team who wrote this book

This book was produced by a team of specialists from around the world working for the International Technical Support Organization (ITSO), San Jose Center.

Bertrand Dufrasne is an IBM Certified IT Specialist and Project Leader for System Storage disk products at the ITSO, San Jose Center. He has worked at IBM in various IT areas. He has written many IBM Redbooks publications and has developed and taught technical workshops. Before joining the ITSO, he worked for IBM Global Services as an Application Architect. He holds a Masters degree in Electrical Engineering.

Pere Alcaide joined IBM in 1996 and is working as a Support Center Representative after working as a System Services Representative. Since 2005, his main focus has been IBM DS8000 Family Series hardware. His areas of expertise include planning, design, and analysis of IBM storage solutions. He holds a degree in Technical Telecommunications Engineering from the Universitat Politècnica de Catalunya, Barcelona.

Peter Kimmel is an IT Specialist and ATS team lead of the Enterprise Disk Solutions team at the European Storage Competence Center (ESCC) in Mainz, Germany. He joined IBM Storage in 1999 and since then worked with all the various ESS (Enterprise Storage Server®) and System Storage DS8000 generations, with a focus on architecture and performance. He has co-authored several DS8000 IBM Redbooks publications. Peter holds a Diploma (MSc) degree in Physics from the University of Kaiserslautern.

Istvan Paloczi is a Test Engineer working for the DS8000 manufacturing in Vac, Hungary. He has three years of experience in high-end disk testing. His focus is hardware and software qualification for all the supported DS8000 releases in the manufacturing environment, test process optimization, and new product implementation. He also has collaborated with the IBM testing team and DS8K DA Development team. He holds a Masters degree in Electrical Engineering, with a specialization in Telecommunication, from the University of Pecs, Hungary.

Akin Sakarcan is working as a Storage Client Technical Specialist (CTS) for Systems & Technology Group in IBM Turkey. He has 10 years of experience in the server and storage field. He has been working at IBM for three years as the subject matter expert and systems software blackbelt for IBM storage solutions. He holds a degree in Electrical Engineering.

Roland Wolf is a Certified IT Specialist in Germany. He has worked at IBM for 26 years and has extensive experience with high-end disk-storage hardware in System z and Open Systems. He works in Technical Sales Support. His areas of expertise include performance analysis and disaster recovery solutions in enterprises by using the unique capabilities and features of the IBM disk storage servers. He has contributed to various IBM Redbooks publications, including ESS, DS8000 Architecture, and DS8000 Copy Services. He holds a Ph.D. in Theoretical Physics.

Many thanks to the following people who helped with equipment provisioning and preparation:

Mike Schneider
Dietmar Schniering
Günter Schmitt
Uwe Heinrich Müller
Uwe Schweikhard
Jörg Zahn
Hans-Joachim Sachs
Edwin Weinheimer
Roland Beisele
Stephan Schorn
Stephan Weyrich
Seven Bittlingmayer
IBM Systems Lab Europe, Mainz, Germany

Special thanks to the Enterprise Disk team manager, Bernd Müller, and the ESCC director, Klaus-Jürgen Rünger, for their continuous interest and support regarding the ITSO Redbooks projects.

Thanks to the following people for their contributions to this project:

John Bynum
Allen Wright
John Cherbini
Allen Marin
David Sacks
Stephen Manthorpe
Craig Gordon
John Elliott
Brian Rinaldi
Dale H Anderson
Chip Jarvis
Jared Minch
Jeff Steffan
David Whitworth
Kavita Shah
Jason Piepelman
Thomas Fiege
Bjoern Wesselbaum
Hubertus Wegel
Volker Kiemes
Juan Brandenburg
Kai Jehnen
IBM

Now you can become a published author, too!

Here's an opportunity to spotlight your skills, grow your career, and become a published author—all at the same time! Join an ITSO residency project and help write a book in your area of expertise, while honing your experience using leading-edge technologies. Your efforts will help to increase product acceptance and customer satisfaction, as you expand your network of technical contacts and relationships. Residencies run from two to six weeks in length, and you can participate either in person or as a remote resident working from your home base.

Find out more about the residency program, browse the residency index, and apply online at:
http://www.ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us! We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways:
- Use the online Contact us review Redbooks form found at: http://www.ibm.com/redbooks
- Send your comments in an email to: redbooks@us.ibm.com
- Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. HYTD Mail Station P099, 2455 South Road, Poughkeepsie, NY 12601-5400

Stay connected to IBM Redbooks

- Find us on Facebook: http://www.facebook.com/IBMRedbooks
- Follow us on Twitter: http://twitter.com/ibmredbooks
- Look for us on LinkedIn: http://www.linkedin.com/groups?home=&gid=2130806
- Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter: https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
- Stay current on recent Redbooks publications with RSS Feeds: http://www.redbooks.ibm.com/rss.html

Part 1. Concepts and architecture

This part of the book gives an overview of the IBM System Storage DS8000 concepts and architecture. The following topics are included:
- Introduction to the IBM System Storage DS8870, the latest member of the DS8000 series
- Overview of the IBM System Storage DS8870 models
- Detailed information about the DS8870 hardware components and architecture
- Presentation of the RAS features of the IBM System Storage DS8870
- A review of the DS8000 virtualization concepts
- An overview of the IBM System Storage DS8000 Copy Services
- A discussion of the DS8870 performance features


Chapter 1. Introduction to the IBM System Storage DS8870

This chapter introduces the features and functions of the IBM System Storage DS8000 series and its newest member, the DS8870 storage system. More information about functions and features is provided in subsequent chapters.

This chapter covers the following topics:
- The advantages of a POWER7 based storage system
- DS8870 architecture and functions overview
- Performance features

Previous models, such as the DS8700 and DS8800, are described in the IBM Redbooks publication IBM System Storage DS8000 Architecture and Implementation, SG24-8886.

1.1 Introduction to the DS8870

IBM has a wide range of product offerings that are based on open standards and share a common set of tools, interfaces, and innovative features. The System Storage DS8000 family is designed as a high-performance, high-capacity, and resilient series of disk storage systems. It is designed to support the most demanding business applications with its exceptional all-around performance and data throughput.

The DS8870 (as shown in Figure 1-1) is IBM's fifth-generation high-end disk system in the DS8000 series. Customers expect a high-end storage subsystem to include the following features:
- High performance
- High availability
- Cost efficiency
- Energy efficiency
- Scalability
- Business Continuity and Data Protection functions

The DS8870 offers high availability, multiplatform support, including System z, and simplified management tools to help provide a cost-effective path to an on-demand world.

Figure 1-1 DS8870 Base Frame

Combined with world-class business resiliency and encryption features, the storage server provides a unique combination of high availability, performance, and security. The DS8000 architecture is server-based. Powerful POWER7 processor-based servers manage the cache to minimize disk I/Os and to maximize performance and throughput. The DS8870 is equipped with encryption-capable disk drives or Solid-State Drives (SSDs). You need to pair it only with an IBM Tivoli® Key Lifecycle Manager to encrypt data on the DS8870.

The DS8870 is tremendously scalable. It is available with different processor options that range from dual 2-core systems up to dual 16-cores, and cache configurations are available that range from 16 GB up to 1 TB. IBM proves the performance of its storage systems by publishing standardized benchmark results. For more information about benchmark results of the DS8870, see this website:
http://www.storageperformance.org

1.1.1 Features of the DS8870

The DS8870 offers the following features:

- Storage virtualization that is offered by the DS8000 series allows organizations to allocate system resources more effectively and better control application quality of service. These features can help simplify the storage environment by consolidating multiple storage systems.

- The POWER7 processors have enough power to implement sophisticated caching algorithms. The server architecture of the DS8870, with its powerful POWER7 processors, makes it possible to manage large caches with small cache segments of 4 KB (and hence large segment tables) without the need to partition the cache. These algorithms and the small cache segment size optimize cache hits. Therefore, the DS8870 provides excellent I/O response times. Sequential Adaptive Replacement Cache is a caching algorithm that allows you to run different workloads, like sequential and random workloads, without negatively affecting each other. For example, sequential workload does not fill up the cache and does not affect cache hits for random workload. The Adaptive Multi-stream Prefetching (AMP) caching algorithm can dramatically improve sequential performance, reducing times for backup, processing for business intelligence, and streaming media. Intelligent Write Caching (IWC) improves the cache algorithm for random writes. Write data is always protected by maintaining a copy of write-data in Non-volatile Storage until the data is destaged to disks.

- The DS8870 supports a broad range of disk drives, starting from fast 400 GB SSDs and fast 146 GB 15K RPM SAS disk drives, to high-capacity nearline-SAS 3-TB drives, which covers a wide range of performance needs. The DS8000 series improves the cost structure of operations and lowers energy consumption through a tiered storage environment.

- High-density storage enclosures offer a considerable reduction in footprint and energy consumption. A new power supply system that is based on direct current uninterruptible power supply (DC-UPS) is much more energy-efficient than the Power Distribution System of previous models. This feature makes the DS8870 the most energy-efficient model in the DS8000 series. The DS8870 is designed to comply with the emerging ENERGY STAR specifications. ENERGY STAR is a joint program of the US Environmental Protection Agency and the US Department of Energy and helps us all to save money and protect the environment through energy efficient products and practices. For more information, see this website:
http://www.energystar.gov

- 8-Gbps host adapters (HAs): The DS8870 model offers enhanced connectivity with four- and eight-port Fibre Channel/FICON host adapters that are in the I/O enclosures that are directly connected to the internal processor complexes. The 8-Gbps Fibre Channel/FICON host adapter also supports FICON attachment to IBM System zEC12, IBM zEnterprise® 196 (z196), IBM System z114, and IBM System z10®. Each port can be configured by the user to operate as a Fibre Channel port, a FICON port, or a Fibre Channel port that is used for mirroring.
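Because each port's protocol personality is set by the user, switching a port between FICON and FCP is a configuration action rather than a hardware change. The following DSCLI sketch is illustrative only (the port ID is hypothetical):

dscli> lsioport
dscli> setioport -topology ficon I0030

The lsioport command lists each I/O port with its current topology, and setioport changes the named port to FICON. Ports that carry Metro Mirror or Global Mirror traffic stay in the scsi-fcp topology, because remote mirroring uses the FCP protocol.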

- Peripheral Component Interconnect Express (PCI Express Generation 2) I/O enclosures: To improve I/O Operations Per Second (IOPS) and sequential read/write throughput, the I/O enclosures are directly connected to the internal servers with point-to-point PCI Express cables.

- High Performance FICON for System z (zHPF): zHPF is z/OS®'s new I/O architecture. Step-by-step, z/OS's access methods were updated to use the new I/O commands. Recent enhancements to zHPF include Extended Distance capability, zHPF List Pre-fetch support for IBM DB2® and utility operations, and zHPF support for sequential access methods. All DB2 I/O is now zHPF capable. The DS8870 is at the most up-to-date support level for zHPF. zHPF is an optional feature of the DS8870. For more information, see "Performance for System z" on page 21.

- T10 DIF support: The Data Integrity Field standard of SCSI T10 enables end-to-end data protection from the application or host HBA down to the disk drives. The DS8870 supports the T10 DIF standard.

- I/O Priority Manager: This optional feature provides application-level quality of service (QoS) for workloads that share a storage pool. This feature provides a way to manage QoS for I/O operations that are associated with critical workloads and gives them priority over other I/O operations that are associated with non-critical workloads. In a z/OS environment, the I/O Priority Manager allows increased interaction with the host side. For more information, see 7.6, "I/O Priority Manager" on page 193.

- Easy Tier: This included feature enables automatic dynamic data relocation capabilities. It supports a combination of three classes of storage tiers (nearline, SSDs, and Enterprise Disks). Data areas that are accessed frequently are moved to higher tier disks, for example, to SSDs. Infrequently accessed data areas are moved to lower tiers. Easy Tier optimizes the usage of each tier, especially the utilization of SSDs. No manual tuning is required. The auto-balancing algorithms also provide benefits when homogeneous disk pools are used to eliminate hot spots on disk arrays. Easy Tier also allows several manual data relocation capabilities (extent pools merge, rank depopulation, volume migration). Easy Tier can also be used when encryption support is turned on. Configuration flexibility and overall storage cost-performance can greatly benefit from this feature. For more information, see 7.7, "IBM Easy Tier" on page 197.

- Storage Tier Advisor Tool: This tool is used with the Easy Tier facility to help clients understand their current disk system workloads. The tool also provides guidance on how much of their existing data is better-suited for the various drive types (spinning disk or Solid-State Drives).

- Storage Pool Striping (rotate extents): This feature provides a mechanism to distribute a volume's or LUN's data across many RAID arrays and across many disk drives. Storage Pool Striping helps maximize performance without special tuning and greatly reduces hot spots in arrays.

- Large volume support: The DS8870 supports LUN sizes up to 16 TB. This configuration simplifies storage management tasks. For z/OS, Extended Address Volumes (EAVs) with sizes up to 1 TB are supported.

- Dynamic Volume Expansion: This feature simplifies management by enabling easier, online volume expansion (for Open Systems and System z) to support application data growth, and to support data center migration and consolidation to larger volumes to ease addressing constraints.

- Active Volume Protection: This feature prevents the deletion of volumes still in use.

- Thin Provisioning: This feature allows the creation of over-provisioned devices for more efficient usage of the storage capacity for Open Systems, thus enabling improved cost effectiveness. Copy Services is now available for Thin Provisioning. (A DSCLI sketch follows this list.)

- Quick Initialization: This feature provides fast volume initialization (for Open Systems LUNs and CKD volumes) and therefore allows the creation of devices, making them available when the command completes.

- FlashCopy®: FlashCopy is an optional feature that allows the creation of volume copies (and data set copies for z/OS) nearly instantaneously. Different options are available to create full copies, incremental copies, or copy-on-write copies. The IBM FlashCopy SE capability enables more space efficient utilization of capacity for copies. The concept of Consistency Groups provides a means to copy several volumes consistently, even across several DS8000 systems. FlashCopy can be used to perform backup operations parallel to production or to create test systems. FlashCopy is also supported by z/OS backup functions, such as DFSMS and DB2 BACKUP SYSTEM. FlashCopy can be managed with the help of IBM's FlashCopy Manager product from within certain applications like DB2, Oracle, SAP, or Microsoft Exchange. For more information, see Chapter 6, "IBM System Storage DS8000 Copy Services overview" on page 141.

- Full Disk Encryption (FDE): The Full Disk Encryption (FDE) can protect business-sensitive data by providing disk-based hardware encryption that is combined with a sophisticated key management software (IBM Tivoli Key Lifecycle Manager). The Full Disk Encryption is available for all disks and drives, including SSDs. Because encryption is done by the disk drive, it is transparent to host systems and can be used in any environment, including z/OS. The DS8000 requires an isolated key server in encryption configurations. The isolated key server that is currently defined is an IBM System x® server. The following specific features of disk encryption key management help address Payment Card Industry Data Security Standard (PCI-DSS) requirements:
  – Encryption deadlock recovery key option: When enabled, this option allows the user to restore access to a DS8000 when the encryption key for the storage is unavailable because of an encryption deadlock scenario.
  – Dual platform key server support: This support is important if key servers on z/OS share keys with key servers on Open Systems. Dual platform key server support allows two server platforms to host the key manager with either platform operating in either clear key or secure key mode.
  – Recovery key Enabling/Disabling and Rekey data key option for the Full Disk Encryption (FDE) feature: Both of these enhancements can help clients satisfy Payment Card Industry (PCI) security standards.
  For more information, see IBM System Storage DS8700 Disk Encryption Implementation and Usage Guidelines, REDP-4500.

- Resource Groups: This feature is a policy-based resource scope-limiting function that enables the secure use of Copy Services functions by multiple users on a DS8000 series storage subsystem. Resource Groups are used to define an aggregation of resources and policies for configuration and management of those resources. The scope of the aggregated resources can be tailored to meet each hosted customers' Copy Services requirements for any operating system platform that is supported by the DS8000 series. For more information, see IBM System Storage DS8000 Resource Groups, REDP-4758.
In co-operation with z/OS’s Data Mover another option is available for z/OS: Global Mirror for z/OS. which can help reduce the need for channel extenders configurations by increasing the number of read commands in flight. Metro Mirror. and z/OS Metro/Global Mirror business continuity solutions are designed to provide the advanced functionality and flexibility that is needed to tailor a business continuity environment for almost any recovery point or recovery time objective. 8 IBM System Storage DS8870 Architecture and Implementation . which indicates its implementation of IPv6 mandatory core protocols and the ability to interoperate with other IPv6 implementations. thus granting it support for the USGV6 profile and testing program. you can establish a FlashCopy relationship in which the target is a remote mirror Metro Mirror primary volume that keeps the pair in the full duplex state. The DS8870 Data Studio GUI has the same look and feel as the GUIs of other IBM storage products. The logo program provides conformance and interoperability test specifications that are based on open standards to support IPv6 deployment globally. The DS8000 series is certified as meeting the requirements of the IPv6 Read Logo program. Three-site options are available by combining Metro Mirror and Global Mirror.Remote Mirroring options: The DS8870 provides the same remote mirroring options as previous models of the DS8000 family. “Configuration by using the DS Storage Manager GUI” on page 323. which allows single sign-on functionality. can simplify user management by allowing the DS8000 to rely on a centralized LDAP directory rather than a local user repository. Asynchronous copy (Global Mirror) is supported for unlimited distances. For more information. see Chapter 13. z/OS Global Mirror. The IBM DS8000 can be configured in native IPv6 environments. Metro/Global Mirror. thus making it easier for a storage administrator to work with different IBM storage products. the US National Institute of Standards and Technology tested IPv6 with the DS8000. IBM’s GDPS® provides an automated disaster recovery solution. Furthermore. For more information. Another important feature for z/OS Global Mirror (two-site) and z/OS Metro/Global Mirror (three-site) is Extended Distance FICON. The DS8870 provides an improved Data Studio GUI management interface to configure the DS8870 or query status information. REDP-4505. Remote Pair FlashCopy: By using this feature. Synchronous remote mirroring (Metro Mirror) is supported up to 300 km. The Copy Services can be managed and automated with IBM Tivoli Storage Productivity Center for Replication. For z/OS environments. LDAP authentication support. Global Copy. Global Mirror. see IBM System Storage DS8000: LDAP Authentication.

5-inch drives. drawing air for cooling from the front of the system and exhausting hot air at the rear. and DS8800. Improved high-density frame design The DS8870 can support 1536 drives in a small footprint (base frame and three expansion frames) that support high density and helping to preserve valuable raised floor space in data center environments.5-inch drives can be intermixed in the same frame (although not within the same disk enclosure). Improved configuration options The DS8870 also provides a Business Class configuration option.536 SFF drives with the third expansion frame (total of four frames). The DS8700 offers a dual 2-core processor complex. The DS8870 uses the simultaneous multithreading (SMT) capabilities of the POWER7 architecture. the DS8870 base frame supports up to 240 drives. a fully configured DS8870 uses up to 20% less power than its predecessor model DS8800. Coupled with an improved cooling implementation and small form factor SAS. DS8700. the DS8870 also supports 3. Chapter 1. The Business Class option offers an entry level configuration cost. cache. a dual 4-core processor complex. see 3.1. Air-flow system The air-flow system allows optimal horizontal cool down of the storage system. DS8300. Compared to the POWER6® processor in the DS8800.056 drives and up to 1.5-inch disk drives with up to 120 drives with the base frame. 2. up to 288 drives with one expansion frame. Adding a first expansion frame allows up to 576 drives. By using the SFF. The DS8870 includes the following features: IBM POWER7 processor technology The DS8870 features the IBM POWER7 processor-based server technology for high performance. and storage enhancement to be performed concurrently. a dual 8-core processor complex. the POWER7 processor can deliver up to three times more throughput in I/O operations per second (IOPS) in transaction-processing workload environments. IBM System Storage DS8100. As an alternative. The 2. For more information. 528 drives with two expansion frames. Compared with its predecessors.2 The DS8870 controller options and frames The IBM System Storage DS8870 adds Models 961 (base frame) and 96E (expansion unit) to the 242x machine type family. the DS8870 is designed to provide capabilities for the combination of price and efficiency.2. sequential workloads can receive as much as 60% bandwidth improvement. Introduction to the IBM System Storage DS8870 9 . The Business Class model can be upgraded nondisruptively to a standard configuration. Additionally. without disrupting applications. “Power and cooling” on page 53. A second expansion frame brings the total to up to 1.5-inch and 3.5. High-density storage enclosures The DS8870 provides storage enclosure support for 24 small form factor (SFF) 2. or 768 drives with three expansion frames (total of four frames). This option helps improve the storage density for disk drive modules (DDMs) as compared to previous enclosures. or a dual 16-core processor complex. Non-disruptive upgrade path A nondisruptive upgrade path for the DS8870 configurations and more Model 96E expansion frames allows processor.5-inch drives in 2U of rack space.0 Enterprise drives. The DS8870 is designed for hot and cold aisle data center design.

IBM POWER7 processor technology The DS8870 uses IBM POWER7 processor technology. Each PCIe connection operates at a speed of 2 GBps in each direction. There are up to 16 PCIe connections from the processor complexes to the I/O enclosures. see Chapter 3. or a dual 16-core processor complex. For more information. see Chapter 3.1. Device adapters The DS8870 offers four-port Fibre Channel Arbitrated Loop (FC-AL) Device Adaptors (DA). high-speed PCI Express (PCIe) connections to the I/O enclosures to communicate with the device adapters and host adapters. It maximizes the throughput of the processor core by offering an increase in core efficiency. SMT4 enables four instruction threads to run simultaneously in each POWER7 processor core. The hardware also is optimized to provide higher performance. For more information. DS8700. 10 IBM System Storage DS8870 Architecture and Implementation . see Chapter 4. DS8300.3. They are optimized for SSD technology and designed for long-term support for scalability growth. Internal PCIe-based fabric The DS8870 uses direct point-to-point. a dual 4-core processor complex.55 GHz. connectivity. and reliability. This architecture ensures that the DS8870 can use a stable and well-proven operating environment that offers optimal availability. For more information. These multithreading capabilities improve the I/O throughput of the DS8870 storage server. throughput.1 Overall architecture and components For more information about the available configurations for the DS8870. which allow up to a 400% improvement in performance over previous generations (DS8700). The DS8870’s P7 processor runs at 3. “DS8870 hardware components and architecture” on page 33. The DS8870 offers a dual 2-core processor complex. a dual 8-core processor complex. 1. “DS8870 hardware components and architecture” on page 33. This technology features copper interconnect and implements an on-chip L3 cache that uses eDRAM. and DS8800 models. and scalability over previous DS8000s. see Chapter 2. “IBM System Storage DS8870 models” on page 23. All adapters provide improved IOPS.3 DS8870 architecture and functions overview The DS8870 offers continuity concerning the fundamental architecture of their predecessors the DS8100. “RAS on IBM System Storage DS8870” on page 59. An enhancement with the POWER7 processor is the addition of the Simultaneous Multi-Threading SMT4 mode. A processor complex also is referred to as a storage server or Central Electronics Complex. The POWER7 processor chip is fabricated by using the IBM 45 nm Silicon-On-Insulator (SOI) technology. These capabilities complement the POWER® server family to provide significant performance enhancements.

You do not need to use encryption. Chapter 1. The DAs connect to the controller cards in the storage enclosures by using FC-AL with optical short wave multi-mode interconnection.Switched Fibre Channel arbitrated loop The DS8870 uses a switched Fibre Channel arbitrated loop (FC-AL) architecture as the back-end for its disk interconnection. see Chapter 8. The Fibre Channel Interface Controller cards (FCIC) offer a point-to-point connection to each drive and device adapter so that there are four paths available from the DS8000 processor complexes to each disk drive. They provide up to 100 times the throughput and 10 times lower response time than 15K rpm spinning disks. However. “DS8870 Physical planning and installation” on page 223): 146 GB and 300 GB (15K rpm) Enterprise disk drives for high performance requirements 600 GB and 900 GB (10K rpm) disk drives for standard performance requirements 3 TB (7200K rpm) Nearline-SAS disk drives for large-capacity requirements 400 GB Solid-State Drives (SSDs) for the highest performance demands All drives in the DS8870 are now Full Disk Encryption (FDE)-capable drives. “DS8870 Physical planning and installation” on page 223. Solid-State Drives are the best choice for I/O-intensive workloads. For more information. Easy Tier can place data in the storage tier that best suits the access frequency of the data. For more information. Highly accessed data can be moved nondisruptively to a higher tier. see Chapter 3. see Chapter 8. Easy Tier Easy Tier enables the DS8870 to automatically balance I/O access to disk drives to avoid hot spots on disk arrays. Easy Tiering also can benefit homogeneous disk pools as it can move data away from over-utilized disk arrays to under-utilized arrays to eliminate hot spots and peaks in disk response times. to SSD disks while cold data or data that is primarily accessed sequentially is moved to a lower tier (for example. “DS8870 hardware components and architecture” on page 33. to Nearline disks). for example. They also use less power than traditional spinning disks. if you want to encrypt your data. you need at least two key servers. Introduction to the IBM System Storage DS8870 11 . Disk drives The DS8870 offers the following disk drives to meet the requirements of various workload and configurations (for more information. However. such as the Tivoli Key Lifecycle Manager or IBM Security Key Lifecycle Manager for z/OS.

IBM intends to deliver the following capabilities: Advanced Easy Tier capabilities on selected IBM storage systems.1 now includes replication management capabilities that are designed to support hundreds of replication sessions across thousands of data volumes. including the IBM System Storage DS8000. or legal obligation to deliver any material. It is designed to provide centralized. “DS8870 hardware components and architecture” on page 33. For more information. Each of the ports on a DS8870 host adapter can also independently be configured to Fibre Channel protocol (FCP) or FICON. For more information. automated. IBM Tivoli Storage Productivity Center IBM Tivoli Storage Productivity Center is a storage resource management application that is available for DS8000 management and other storage systems. 12 IBM System Storage DS8870 Architecture and Implementation . Host adapters Each DS8870 Fibre Channel adapter offers four or eight 8 Gbps Fibre Channel ports. The information that is presented here is intended to outline our general product direction and should not be relied on in making a purchasing decision. Additionally. and file system capacity utilization information in a heterogeneous environment. The development. such as RAID protection and remote mirroring. 1 All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice. The information is for informational purposes only and may not be incorporated into any contract.Statement of general direction1 IBM intends to further enhance Easy Tier and extend it to servers and applications to better use SSDs to better cache the most appropriate data. IBM Tivoli Storage Productivity Center 5. high-density SSDs. This placement will be made by communicating important information about current workload activity and application performance requirements. see Chapter 3.1 for DS8000” on page 307. 4-. Easy Tier will manage the solid-state storage as a large and low latency cache for the hottest data. or functionality. and management of storage area network (SAN) attached devices. and simplified management of complex and heterogeneous storage environments. IBM Tivoli Storage Productivity Center 5. Each 8-Gbps port independently auto-negotiates to 2-. monitoring alerts. the port can be used for mirroring. see Chapter 12. It also provides over 400 enterprise-wide reports. Easy Tier will also preserve advanced disk system functions. “Configuring IBM Tivoli Storage Productivity Center 5. This information is not a commitment. policy-based action. and policy-based management. or 8-Gbps link speed. A new high-density flash storage module for the IBM System Storage DS8870. IBM Tivoli Storage Productivity Center is designed to help improve capacity utilization of storage systems by adding intelligence to data protection and retention practices. If configured for FCP protocol. which is designed to leverage direct-attached solid-state storage on selected AIX® and Linux servers. It extends existing management of a single storage system. and timing of any features or functionality that are described for our products remains at the sole discretion of IBM. The new module will accelerate performance to another level with cost-effective.1 provides a wealth of storage resource management tools. release. performance monitoring. and represent goals and objectives only. 
An application-aware storage application programming interface (API) to help deploy storage more efficiently by enabling applications and middle ware to direct more optimal placement of data. It also supports open and z/OS-attached volumes. providing capabilities such as storage reporting. it provides storage device configuration. code. promise. monitoring.

Storage Hardware Management Console for the DS8000
The Hardware Management Console (HMC) is the focal point for maintenance activities. The HMC is a dedicated workstation that is physically located inside the DS8000. It can proactively monitor the state of your system and notify you and IBM when service is required. It can also be connected to your network to enable centralized management of your system by using the IBM System Storage DS® Command-Line Interface (DSCLI). The HMC supports the IPv4 and IPv6 standards. An external management console is available as an optional feature. The console can be used as a redundant management console for environments with high availability requirements. For more information, see Chapter 9, "DS8870 HMC planning and setup" on page 251.

Tivoli Key Lifecycle Manager isolated key server
The Tivoli Key Lifecycle Manager software performs key management tasks for IBM encryption-enabled hardware, such as the IBM System Storage DS8000 Series family. Tivoli Key Lifecycle Manager provides, protects, stores, and maintains encryption keys that are used to encrypt information that is written to, and decrypt information that is read from, encryption-enabled disks. The DS8870 storage systems ship with Full Disk Encryption (FDE) drives. To configure a DS8870 to use encryption, two Tivoli Key Lifecycle Manager key servers are required. An Isolated Key Server (IKS) with dedicated hardware and non-encrypted storage resources is required and can be ordered from IBM. The IBM Security Key Lifecycle Manager for z/OS (ISKLM) is available for z/OS environments. However, to avoid deadlock situations where you cannot start your key server because it runs on an encrypted DS8000, you also should have a dedicated Tivoli Key Lifecycle Manager on a stand-alone server.

IBM Standby Capacity on Demand offering for the DS8870
Standby Capacity on Demand (CoD) provides standby on-demand storage for the DS8000 that allows you to access the extra storage capacity whenever the need arises. With CoD, IBM installs more CoD disk drive sets in your DS8000. At any time, you can logically configure your CoD drives concurrently with production. You are automatically charged for the additional capacity. A DS8870 can have up to six Standby CoD drive sets (96 drives).

1.3.2 Storage capacity

The physical storage capacity for the DS8870 is contained in the disk drive sets. A disk drive set contains 16 Disk Drive Modules (DDMs), which have the same capacity and the same revolutions per minute (rpm). SSDs and Nearline drive sets are available in half sets (8) or full sets (16) of disk drives or DDMs. Up to 1,536 drives can be installed in a base frame with up to three expansion frames. The available drive options provide industry class capacity and performance to address enterprise application and business requirements. DS8000 storage capacity can be configured as RAID 5, RAID 6, RAID 10, or as a combination of these RAID levels, depending on the drive type. For more information, see 2.3, "DS8870 Disk drive options" on page 29, and Chapter 3, "DS8870 hardware components and architecture" on page 33.
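To connect these concepts, the following hedged DSCLI sketch shows the usual configuration chain from a drive set's array site down to host volumes (all IDs and names are hypothetical, and a balanced configuration would spread pools across both rank groups):

dscli> lsarraysite -l
dscli> mkarray -raidtype 5 -arsite S1
dscli> mkrank -array A1 -stgtype fb
dscli> mkextpool -rankgrp 0 -stgtype fb fb_pool_0
dscli> chrank -extpool P0 R1
dscli> mkfbvol -extpool P0 -cap 100 -name open_#h 1000-1003

The sequence creates a RAID 5 array on one array site, formats it as a fixed-block rank, assigns the rank to an extent pool, and carves four 100 GB LUNs from that pool.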

1.3.3 Supported environments

The DS8000 offers connectivity support across a broad range of server environments, including IBM Power Systems™, System z and System x servers, servers from Oracle and Hewlett-Packard, and non-IBM Intel and AMD-based servers. The DS8000 supports over 90 platforms. For the most current list of supported platforms, see the DS8000 System Storage Interoperation Center at this website:
http://www.ibm.com/systems/support/storage/config/ssic/index.jsp

A Host Attachment and Interoperability IBM Redbooks publication that provides answers to the proper supported environments is available at this website:
http://www.redbooks.ibm.com/Redbooks.nsf/RedbookAbstracts/sg248887.html?Open

This rich support of heterogeneous environments and attachments, along with the flexibility to easily partition the DS8000 storage capacity among the attached environments, can help support storage consolidation requirements and dynamic environments.

1.3.4 Configuration flexibility

The DS8000 series uses virtualization techniques to separate the logical view of hosts onto Logical Unit Numbers (LUNs) from the underlying physical layer, thus providing high configuration flexibility. For more information about virtualization, see Chapter 5, "Virtualization concepts" on page 105.

Dynamic LUN/volume creation, deletion, and expansion
LUNs can be created and deleted non-disruptively, which gives a high degree of flexibility in managing storage. When a LUN is deleted, the freed capacity can be used with other free space to form a new LUN of a different size. A LUN can also be dynamically increased in size (see the sketch after this section).

Large LUN and large count key data volume support
You can configure LUNs and volumes to span arrays, which allows for larger LUN sizes of up to 16 TB in Open Systems, which greatly reduces the number of volumes that are managed. This configuration also is used for some file systems. Copy Services are not supported for LUN sizes greater than 2 TB. The maximum Count Key Data (CKD) volume size is 1,182,006 cylinders (1 TB). This capability is referred to as Extended Address Volumes (EAV) and requires z/OS 1.12 or later. EAV uses a z/OS volume type that is called 3390 Model A.

T10 DIF support
A modern storage system, such as the DS8870, includes many components that perform error checking, often by checksum techniques, in its RAID components, or by media scrubbing. This checking is done between different components within the I/O path: system buses, memory, Fibre Channel adapters. Errors can be detected and in some cases corrected. But more often, there is demand for an end-to-end data integrity checking solution (from the application to the disk drive).
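A small hedged illustration of dynamic volume expansion with the DSCLI (the volume ID and capacity are hypothetical):

dscli> chfbvol -cap 200 1000

The volume continues to serve I/O while its capacity grows to 200 GB. Reducing a volume's capacity is not supported, so expansion should be planned deliberately.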

The ANSI T10 standard provides a way to check the integrity of data that is read and written from the host bus adapter to the disk and back through the SAN fabric. This check is implemented through the data integrity field (DIF) defined in the T10 standard. This support adds protection information that consists of CRC (Cyclic Redundancy Checking), LBA (Logical Block Address), and host application tags to each sector of FB data on a logical volume. The DS8870 supports the T10 DIF standard for FB volumes that are accessed by the FCP channel of Linux on System z. You can define LUNs with an option to instruct the DS8870 to use the CRC-16 T10 DIF algorithm to store the data. You can also create T10 DIF capable LUNs for operating systems that do not yet support this feature (except for IBM i).

Simplified LUN masking
The implementation of volume group-based LUN masking simplifies storage management by grouping some or all worldwide port names (WWPNs) of a host into a Host Attachment. Associating the Host Attachment to a Volume Group allows all adapters within the Host Attachment access to all of the storage in the Volume Group (see the DSCLI sketch after this section).

Flexible LUN-to-LSS association
With no predefined association of arrays to LSSs on the DS8000 series, users can put LUNs or CKD volumes into Logical Subsystems (LSSs) and make best use of the 256 address range, particularly for System z.

Thin provisioning features
The DS8000 provides two types of space efficient volumes: Track Space Efficient volumes and Extent Space Efficient volumes. These volumes feature over-provisioning capabilities that provide more efficient usage of the storage capacity and reduced storage management requirements. Track Space Efficient volumes are intended as target volumes for FlashCopy. FlashCopy, Metro Mirror, and Global Mirror of Thin Provisioned volumes are supported on a DS8870 storage system.

VMware VAAI support
VMware's vStorage APIs for Array Integration (VAAI) feature offloads specific storage operations to disk arrays for highly improved performance and efficiency. With VAAI, VMware vSphere can perform key operations faster and use less CPU, memory, and storage bandwidth. The DS8870 supports the VAAI primitives Atomic Test-and-Set (ATS), also known as Compare and Write, for hardware-assisted locking, and Clone Blocks (Extended Copy or XCOPY) for hardware-assisted move or cloning.

Maximum values of logical definitions
The DS8000 features the following maximum values for the major logical definitions:
- Up to 255 logical subsystems (LSSs)
- Up to 65,280 logical devices
- Up to 16 TB Logical Unit Numbers (LUNs)
- Up to 1,182,006 cylinders (1 TB) Count Key Data (CKD) volumes
- Up to 130,560 Fibre Connection (FICON) logical paths (512 logical paths per control unit image) on the DS8000
- Up to 1,280 logical paths per Fibre Channel (FC) port
- Up to 8,192 process logins (509 per SCSI-FCP port)
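Expressed in DSCLI terms, LUN masking comes down to two objects, as in this hedged sketch (the WWPN, volume range, and names are hypothetical):

dscli> mkvolgrp -type scsimask -volume 1000-1003 aix_vg1
dscli> mkhostconnect -wwname 10000000C9A1B2C3 -hosttype pSeries -volgrp V11 aix_port1

Each mkhostconnect entry ties one host WWPN to the volume group, so repeating the command for each of the host's adapter ports gives all of its paths access to the same set of volumes.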

1.3.5 Copy Services functions

For IT environments that cannot afford to stop their systems for backups, the DS8870 provides a fast replication technique that can provide a point-in-time copy of the data in a few seconds or even less. This function is called FlashCopy.

For data protection and availability needs, the DS8870 provides Metro Mirror, Global Mirror, Metro/Global Mirror, Global Copy, and z/OS Global Mirror, which are Remote Mirror and Copy functions. These functions provide storage mirroring and copying over large distances for disaster recovery or availability purposes. These functions are also available and are fully interoperable with previous models of the DS8000 family.

For more information about Copy Services, see the following resources:
- Chapter 6, "IBM System Storage DS8000 Copy Services overview" on page 141
- IBM System Storage DS8000: Copy Services for Open Systems, SG24-6788
- IBM System Storage DS8000: Copy Services for IBM System z, SG24-6787

FlashCopy
The primary objective of FlashCopy is to quickly create a point-in-time copy of a source volume on a target volume. The target volume can be a logical or physical copy of the data, with the physical copy that copies the data as a background process. The benefits of FlashCopy are that the point-in-time target copy is immediately available for use for backups or testing. The source volume also is immediately released so that applications can continue processing with minimal application downtime. In a z/OS environment, FlashCopy can also operate at a data set level. The following sections summarize the options available with FlashCopy.

Multiple Relationship FlashCopy
Multiple Relationship FlashCopy allows a source to have FlashCopy relationships with up to 12 targets simultaneously.

Incremental FlashCopy
Incremental FlashCopy provides the capability to refresh a LUN or volume that is involved in a FlashCopy relationship. When a subsequent FlashCopy is initiated, only the data that is required to make the target current with the source's newly established point-in-time is copied (see the DSCLI sketch at the end of this section).

Remote Mirror Primary FlashCopy
Remote Mirror primary FlashCopy allows a FlashCopy relationship to be established in which the target also is a remote mirror primary volume. This configuration enables a full or incremental point-in-time copy to be created at a local site. It then uses remote mirroring commands to copy the data to the remote site. While the background copy task is copying data from the source to the target, the remote mirror pair moves into a copy pending state.

Consistency Groups
FlashCopy Consistency Groups can be used to maintain a consistent point-in-time copy across multiple LUNs or volumes, or even multiple DS8000 systems.

Remote Pair FlashCopy
Remote Pair FlashCopy improves resiliency solutions by ensuring data synchronization when a FlashCopy target is also a Metro Mirror source. This configuration keeps the local and remote site consistent, which facilitates recovery, and reduces link bandwidth utilization. It also supports IBM HyperSwap®.
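As a hedged DSCLI illustration (the storage image ID and volume pair are hypothetical), an incremental FlashCopy relationship is created once and then refreshed:

dscli> mkflash -dev IBM.2107-75ABC01 -record -persist 1000:1100
dscli> resyncflash -dev IBM.2107-75ABC01 -record -persist 1000:1100

The -record and -persist options keep change recording active, so resyncflash copies only the tracks that changed since the previous point-in-time copy.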

Inband commands over remote mirror link
In a remote mirror environment, inband FlashCopy allows commands to be issued from the local or intermediate site and transmitted over the remote mirror Fibre Channel links for execution on the remote DS8000. This configuration eliminates the need for a network connection to the remote site solely for the management of FlashCopy.

IBM FlashCopy SE
The IBM FlashCopy SE feature provides a space efficient copy capability that can greatly reduce the storage capacity that is needed for point-in-time copies. Only the capacity that is needed to save pre-change images of the source data is allocated in a copy repository. This configuration enables more space-efficient utilization than is possible with the standard FlashCopy function. Furthermore, less capacity can mean fewer disk drives and lower power and cooling requirements, which can help reduce costs and complexity. FlashCopy SE can be especially useful in the creation of temporary copies for tape backup, online application checkpoints, or copies for disaster recovery testing. For more information about FlashCopy SE, see IBM System Storage DS8000 Series: IBM FlashCopy SE, REDP-4368.

Remote Mirror and Copy functions
The Remote Mirror and Copy functions include Metro Mirror, Global Copy, Global Mirror, and Metro/Global Mirror. z/OS Global Mirror for the System z environments also is included. As with FlashCopy, Remote Mirror and Copy functions can be established between DS8000 systems. The following sections summarize the Remote Mirror and Copy options that are available with the DS8000 series.

Metro Mirror
Metro Mirror, previously called Peer-to-Peer Remote Copy (PPRC), provides a synchronous mirror copy of LUNs or volumes at a remote site within 300 km (see the DSCLI sketch at the end of this section). When used with a supporting application, Metro Mirror Consistency Groups can be used to maintain data and transaction consistency across multiple LUNs or volumes or multiple DS8000 systems.

Global Copy
Global Copy, previously called Extended Distance Peer-to-Peer Remote Copy (PPRC-XD), is a non-synchronous, long-distance copy option for data migration and backup.

Global Mirror
Global Mirror provides an asynchronous mirror copy of LUNs or volumes over unlimited distances. The distance is often limited only by the capabilities of the network and channel extension technology that is used. A Global Mirror Consistency Group is used to maintain data consistency across multiple LUNs or volumes or multiple DS8000 systems.

Metro/Global Mirror
Metro/Global Mirror is a three-site data replication solution for Open Systems and the System z environments. Local site (Site A) to intermediate site (Site B) provides high availability replication by using synchronous Metro Mirror. Intermediate site (Site B) to remote site (Site C) provides long-distance disaster recovery replication by using asynchronous Global Mirror.

z/OS Global Mirror
z/OS Global Mirror, previously called Extended Remote Copy (XRC), provides an asynchronous mirror copy of volumes over unlimited distances for the System z. It provides increased parallelism through multiple SDM readers (Multiple Reader capability).
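To make the setup flow concrete, here is a hedged DSCLI sketch of a Metro Mirror pair (the device IDs, remote WWNN, LSS numbers, port pair, and volume IDs are all hypothetical):

dscli> mkpprcpath -dev IBM.2107-75ABC01 -remotedev IBM.2107-75XYZ01 -remotewwnn 5005076303FFC123 -srclss 10 -tgtlss 10 I0100:I0200
dscli> mkpprc -dev IBM.2107-75ABC01 -remotedev IBM.2107-75XYZ01 -type mmir 1000:1000

The first command establishes the Fibre Channel paths between the source and target logical subsystems; the second starts the synchronous pair. Substituting -type gcp would create a Global Copy (non-synchronous) relationship over the same paths.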

z/OS Metro/Global Mirror
z/OS Metro/Global Mirror is a combination of Copy Services for System z environments that uses z/OS Global Mirror to mirror primary site data to a remote location that is at a long distance, and Metro Mirror to mirror the primary site data to a location within the metropolitan area. This configuration enables a z/OS three-site high availability and disaster recovery solution.

z/OS Global Mirror also offers Incremental Resync, which can significantly reduce the time that is needed to restore a Disaster Recovery (DR) environment after a HyperSwap in a three-site z/OS Metro/Global Mirror configuration. After the Incremental Resync is performed, you can change the copy target destination of a copy relation without the need for a full copy of the data.

1.3.6 Resource Groups for copy services scope limiting

Copy services scope limiting is the ability to specify policy-based limitations on copy services requests. With the combination of policy-based limitations and other inherent volume-addressing limitations, you can control the volumes that can be in a copy services relationship, which network users or host LPARs issue copy services requests on which resources, and other copy services operations.

Use these capabilities to separate and protect volumes in a copy services relationship from each other. This ability can assist you with multi-tenancy support by assigning specific resources to specific tenants, limiting copy services relationships so that they exist only between resources within each tenant's scope of resources, and limiting a tenant's copy services operators to an operator-only role. When a single-tenant installation is managed, the partitioning capability of resource groups can be used to isolate various subsets of the environment as though they were separate tenants; for example, to separate mainframes from open servers, Windows from UNIX, or accounting departments from telemarketing. For more information, see IBM System Storage DS8000: Resource Groups, REDP-4758.

1.3.7 Service and setup

The installation of the DS8000 is performed by IBM in accordance with the installation procedure for this machine. The client's responsibility is the installation planning, retrieval and installation of feature activation codes, logical configuration, and execution.

For maintenance and service operations, the Storage Hardware Management Console (HMC) is the focus. The management console is a dedicated workstation that is physically located inside the DS8000 storage system where it can automatically monitor the state of your system. It notifies you and IBM when service is required. Generally, use a dual-HMC configuration, particularly when Full Disk Encryption is used.

The HMC also is the interface for remote services (Call Home and Remote Support), which can be configured to meet client requirements. It is possible to allow one or more of the following configurations:
- Call home on error (machine-detected)
- Connection for a few days (client-initiated)
- Remote error investigation (service-initiated)

The remote connection between the management console and the IBM Service organization is done by using a Virtual Private Network (VPN) point-to-point connection over the internet, modem, or with the new Assist On-site (AOS) feature. AOS offers more options, such as SSL security and enhanced audit logging. For more information, see 17.7, "Assist On-site" on page 483.

The DS8000 storage system can be ordered with an outstanding four-year warranty (an industry first) on hardware and software.

1.3.8 IBM Certified Secure Data Overwrite

Regulations and business prudence can require that the data must be removed when the media is no longer needed. If you used the encryption capability of the DS8870, data overwrite is not necessary. However, if you did not use encryption, you might want to take advantage of this service. An STG Lab Services Offering for the DS8000 series includes the following services:
- Multi-pass overwrite of the data disks in the storage system
- Purging of client data from the server and HMC disks

For more information about the options that are available for IBM Certified Secure Data Overwrite, contact your IBM marketing representative or IBM Business Partner.

1.4 Performance features

The IBM System Storage DS8870 offers optimally balanced performance. This feature is possible because the DS8870 incorporates many performance enhancements, such as the dual multi-core POWER7 processor complex implementation, fast 8-Gbps Fibre Channel/FICON host adapters, SSDs, and the high bandwidth, fault-tolerant point-to-point PCI Express internal interconnections. With all these components, the DS8000 is positioned at the top of the high performance category.

1.4.1 Sophisticated caching algorithms

IBM Research conducts extensive investigations into improved algorithms for cache management and overall system performance improvements. To implement sophisticated caching algorithms, it is essential to include powerful processors for the cache management. With a 4-KB cache segment size and up to 1-TB cache sizes, the tables to maintain the cache segments become large.

Sequential Prefetching in Adaptive Replacement Cache
One of the performance features of the DS8000 is its self-learning cache algorithm. This algorithm, which is used in the DS8000 series, is called Sequential Prefetching in Adaptive Replacement Cache (SARC). SARC provides the following abilities:
- Sophisticated algorithms to determine what data should be stored in cache based on recent access and the frequency needs of the hosts
- Pre-fetching, which anticipates data before a host request and loads it into cache
- Self-learning algorithms to adaptively and dynamically learn what data should be stored in cache based upon the frequency needs of the hosts

Adaptive Multi-stream Prefetching
Adaptive Multi-stream Prefetching (AMP) is a breakthrough caching technology that improves performance for common sequential and batch processing workloads on the DS8000. AMP optimizes cache efficiency by incorporating an autonomic, workload-responsive, and self-optimizing prefetching technology. While SARC is carefully dividing the cache between the RANDOM and the SEQ lists to maximize the overall hit ratio, AMP is managing the contents of the SEQ list to maximize the throughput obtained for the sequential workloads.

Intelligent Write Caching
Intelligent Write Caching (IWC) improves performance through better write cache management and destaging order of writes. IWC manages the write cache and decides what order and rate to destage to disk. It minimizes disk actuator movements on writes, so the disks can do more I/O in total. IWC also can double the throughput for random write workloads. Specifically, database workloads benefit from this new IWC cache algorithm. SARC, AMP, and IWC play complementary roles.

1.4.2 Solid State Drives

To improve data transfer rate (IOPS) and response time, the DS8870 provides support for SSDs. SSDs are high-IOPS class enterprise storage devices that are targeted at Tier 0, I/O-intensive workload applications that can use a high level of fast-access storage. SSDs feature improved I/O transaction-based performance over traditional spinning drives. The DS8870 is available with 400 GB encryption-capable SSDs. SSDs offer a number of potential benefits over hard disk drives, including better IOPS, lower power consumption, less heat generation, and lower acoustical noise. For more information, see Chapter 8, "DS8870 Physical planning and installation" on page 223.

1.4.3 Multipath Subsystem Device Driver

The Multipath Subsystem Device Driver (SDD) is a pseudo-device driver on the host system that is designed to support the multipath configuration environments in IBM products. It provides load balancing and enhanced data availability capability. By distributing the I/O workload over multiple active paths, SDD provides dynamic load balancing and eliminates data flow bottlenecks. SDD helps eliminate a potential single point of failure by automatically rerouting I/O operations when a path failure occurs. SDD is provided with the DS8000 series at no additional charge. Fibre Channel (SCSI-FCP) attachment configurations are supported in the AIX, HP-UX, Linux, Windows, and Oracle Solaris environments. If you use the multipathing capabilities of your operating system, such as AIX's MPIO, the SDD package provides a plug-in to optimize the operating system's multipath driver for use with the DS8000. For more information about SDD, see IBM System Storage DS8000: Host Attachment and Interoperability, SG24-8887.

Support for multipath: Support for multipath is included in an IBM i server as part of Licensed Internal Code and the IBM i operating system (including i5/OS™).
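For a quick view of what SDD is doing on a host, its query CLI can be used; a hedged sketch follows (output is omitted because it varies by operating system and device population):

datapath query adapter
datapath query device

The first command summarizes the state of each host adapter, and the second lists every path to each DS8000 LUN so that failed or closed paths are easy to spot.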

1.4.4 Performance for System z

The DS8000 series supports the following IBM performance enhancements for System z environments:

- Parallel Access Volumes (PAVs) enable a single System z server to simultaneously process multiple I/O operations to the same logical volume, which can significantly reduce device queue delays. This reduction is achieved by defining multiple addresses per volume. With Dynamic PAV, the assignment of addresses to volumes can be automatically managed to help the workload meet its performance objectives and reduce overall queuing. PAV is an optional feature on the DS8000 series.

- HyperPAV is designed to enable applications to achieve equal or better performance than with PAV alone, while also using fewer Unit Control Blocks (UCBs) and eliminating the latency in targeting an alias to a base. With HyperPAV, the system can react immediately to changing I/O workloads.

- Multiple Allegiance expands the simultaneous logical volume access capability across multiple System z servers. This function, along with PAV, enables the DS8000 series to process more I/Os in parallel, which improves performance and enables greater use of large volumes.

- I/O priority queuing allows the DS8000 series to use I/O priority information that is provided by the z/OS Workload Manager to manage the processing sequence of I/O operations at the adapter level. I/O Priority Manager includes the major enhancements that were described earlier in this section. It extends priority management to the disk arrays in a shared pool. For more information, see 7.6, "I/O Priority Manager" on page 193.

- High Performance FICON for z (zHPF) reduces the impact that is associated with supported commands on current adapter hardware. This configuration improves FICON throughput on the DS8000 I/O ports. The DS8000 also supports the new zHPF I/O commands for multi-track I/O operations, DB2 List-prefetch, and sequential access methods. (See the operator-command sketch at the end of this section.)

For more information about the performance aspects of the DS8000 family, see Chapter 7, "Architectured for Performance" on page 167.

1.4.5 Performance enhancements for IBM Power Systems

Many IBM Power Systems users can benefit from the following DS8000 features:
- End-to-end I/O priorities
- Cooperative caching
- Long busy wait host tolerance
- Automatic Port Queues

For more information about these performance enhancements, see Chapter 7, "Architectured for Performance" on page 167.
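Returning to the System z items above: on the host side, zHPF and HyperPAV are controlled by z/OS settings as well as by the DS8870 licensed features. A hedged sketch of the standard z/OS operator commands follows (the matching ZHPF=YES and HYPERPAV=YES statements in the IECIOSxx PARMLIB member make the settings persistent across IPLs):

SETIOS ZHPF=YES
D IOS,ZHPF
SETIOS HYPERPAV=YES
D IOS,HYPERPAV

The DISPLAY commands report whether each function is currently active on the system.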


Chapter 2. IBM System Storage DS8870 models

This chapter describes the different models of the IBM System Storage DS8870. It explains the various options for each model, in terms of the number of CPUs, cache, and number of frames that are attached, and shows how well the models scale regarding capacity and performance.

This chapter covers the following topics:
- The DS8870 Business Class model
- The DS8870 Enterprise model

2.1 The System Storage DS8870

Similar to earlier generations of the IBM System Storage DS8000 series, the DS8870 consists of one base frame, which incorporates the processor complexes, and optional expansion frames that mainly serve to host more disk drives.

The DS8870 offers unprecedented scalability in terms of CPU power and cache sizes, which allows for large, long-term growth. Upgrades from the smallest to the largest configuration in terms of CPU, cache, and storage capacity can be done with no system downtime. The processor complexes in the base frame can be upgraded with more CPUs or cache to accommodate growing performance needs that arise when more disk drives are used. These scalability and upgrade characteristics make the DS8870 the most suitable system for large consolidation projects.

The DS8870 is available in the following configurations:

DS8870 Model 961 Enterprise (standard) model
This model is available as a dual 4-way, dual 8-way, or dual 16-way processor complex with storage enclosures for up to 240 DDMs and 8 FC host adapters. The cache for this model scales between 64 GB and 1 TB, which offers a performance increase of 166% over the previous DS8800 high-end model. This standard model is optimized for performance and highly scalable configurations. At the time of this writing, up to three expansion frames can be attached to a model 961.

DS8870 Model 961 Business Class model
This configuration of the model 961 is available as a dual two-way processor complex with storage enclosures for up to 144 DDMs and four FC host adapters. A Business Class system can be configured with 16 GB or 32 GB of cache. The use of the Copy Services (or the I/O Priority Manager) features requires at least a cache size of 32 GB when the Business Class model is used. The Business Class model is meant to offer a cost-efficient way to enter the DS8000 sphere for clients who require only lower capacity or performance and who use only a small subset of the DS8870 features. Unlike the DS8800 Business Class, the DS8870 Business Class uses the same cabling scheme that is used in the Enterprise model for its disk enclosures. Therefore, later nondisruptive upgrades into the Enterprise model are possible.

Expansion frames: Expansion frames cannot be added to a DS8870 in a Business Class configuration. However, the Business Class can be upgraded with no application downtime to any Enterprise Class configuration to accommodate expansion frames.

DS8870 Model 96E
This expansion frame for the 961 model includes enclosures for more DDMs and FC adapters to allow a maximum configuration of 16 FC adapters. The added FC adapters can be installed only in the first expansion frame. The expansion frame 96E can be attached only to the 961 8-way or 16-way base frame.

2.2 Model overview

The DS8870 storage systems include the DS8870 Model 961 base frame and the associated DS8870 96E expansion frames. The Model 961 is offered with 2-core, 4-core, 8-core, or 16-core processor complexes. Table 2-1 provides a comparison of the offered DS8870 models and options.

Table 2-1 DS8870 configuration options for base and expansion frames

Model number                           Processor  Max. disk drives  Processor memory  Host adapters  9xE attach
961 Business Class                     2-core     144               16 GB             4              0
961 Business Class                     2-core     144               32 GB             4              0
961 Enterprise                         4-core     240               64 GB             8              0
961 Enterprise                         8-core     1056              128 GB            16             0-2
961 Enterprise                         8-core     1536              256 GB            16             0-3
961 Enterprise                         16-core    1536              512 GB            16             0-3
961 Enterprise                         16-core    1536              1 TB              16             0-3
96E First Expansion Frame              N/A        336               N/A               8              961
96E Second and Third Expansion Frame   N/A        480               N/A               0              961

Important: Cache size and the possible CPU options (number of cores) are not fully independent. Only certain combinations of both are allowed, as shown in Table 2-1.

Important: You cannot add an expansion model to a DS8870 two-way or four-way system. However, you can first upgrade to an eight-way system and then add expansion frames.

Important: The DS8870 Model 961 supports nondisruptive upgrades from its minimum Business Class configuration, up to a full four-frame system.

Machine type 242x
DS8870 models are associated with machine type 242x. This machine type corresponds to the length of the warranty offer that allows a 1-year, 2-year, 3-year, or 4-year warranty period (where x equals the number of years). The 96E expansion frame has the same 242x machine type as the base frame. (An illustrative DSCLI check follows at the end of this section.)

DS8870 Model 961 overview
Figure 2-1 on page 27 provides a high-level view of the front and back of the Model 961 base model, which includes space for up to 10 disk enclosures. In a maximum configuration, the base model can hold 240 SFF disk drives (see Number 1 in Figure 2-1 on page 27). The drives are installed in storage enclosures, called Gigapacks (24 drives per Gigapack), as up to 15 Small Form Factor (SFF) disk drive sets (16 drives per disk drive set). You can also install Large Form Factor (LFF) enclosures.

Expansion models (96E) can be added to the base model in a dual 8-way or 16-way system, up to a full four-frame system. Each Fibre Channel/FICON host adapter has four or eight Fibre Channel ports, which provides up to 128 Fibre Channel ports for a maximum configuration.
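The machine type and model of an installed system can be confirmed with the DSCLI. The following sketch is illustrative only; the storage image ID, WWNN, and the exact output columns are assumptions that differ by installation and DSCLI code level:

   dscli> lssi
   Name  ID                Storage Unit      Model  WWNN              State   ESSNet
   =================================================================================
   -     IBM.2107-75XY121  IBM.2107-75XY120  961    500507630XFFC29F  Online  Enabled

The storage image ID embeds the serial number, which is preserved across nondisruptive model upgrades.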

The Hardware Management Console (HMC) is stored below the drives (see Number 2 in Figure 2-1 on page 27).

The POWER7 processor-based servers (see Number 3 in Figure 2-1 on page 27) contain the processor and memory that drive all functions within the DS8870. The I/O enclosures (see Number 4 in Figure 2-1 on page 27) provide connectivity between the adapters and the storage processors. The adapters that are contained in the I/O enclosures can be either device adapters (DAs) or host adapters (HAs).

The base model contains DC-UPS power supplies (see Number 5 in Figure 2-1 on page 27). The power system in each rack is a primary power supply with associated batteries or an uninterruptible power supply (DC-UPS) with internal batteries. The DS8870 uses direct-current uninterruptible power supply (DC-UPS) units, whereas other models in the IBM System Storage DS8000 series contain primary power supply (PPS) units. The DC-UPS units improve energy efficiency. The DC-UPS provides rectified AC power distribution and power switching for redundancy, and distributes rectified line AC. The rack has two AC power cords, each one feeding a DC-UPS. If AC is not present at the input line, the output is switched to rectified AC from the partner DC-UPS. If neither AC input is active, the DC-UPS switches to 208-V DC battery power. The DS8870 power subsystem also contains a faster processor with parity-protected memory. The power subsystem in the DS8870 complies with the latest directives for the Restriction of Hazardous Substances (RoHS).

The DS8870 features an integrated rack power control (RPC) system that helps manage the efficiency of power distribution. A redundant pair of RPC adapters coordinates the power management within the storage facility (see Number 6 in Figure 2-1 on page 27). The RPC adapters are attached to the service processors in each complex, which allows them to communicate with the HMC and storage facility image logical partitions (LPARs). The RPC is also attached to the primary power system in each rack and, on some models, is indirectly connected to the fan-sense adapters and to the storage enclosures in each rack.

Figure 2-1 Base model (front and back views) of a Model 961 (four-core)

Cache upgrades are concurrent.

For the host adapters (HAs), with four or eight ports, each port can be independently set as one of the following configurations:
- FCP port, for open systems host attachment
- FCP port for Metro Mirror, Global Copy, Global Mirror, and Metro/Global Mirror connectivity
- FICON port to connect to System z hosts
- FICON port for z/OS Global Mirror connectivity

The first expansion frame can hold more host adapters and device adapters (DAs). There are no I/O enclosures installed in the second and third expansion frames. In a full four-frame installation, the 1536 drives are distributed over all of the DA pairs. For more information about DA pairs, see 3.3.3, “Device adapters” on page 47.

Figure 2-2 shows a front view of the DS8870 frames in their current maximum configuration option of four frames. Figure 2-3 shows the rear view.

Figure 2-2 DS8870 models 961/96E maximum configuration with 1536 drives: Front

Figure 2-3 DS8870 models 961/96E maximum configuration with 1536 disk drives: Rear

2.3 DS8870 Disk drive options

The DS8870 offers enterprise-class drives that feature a 6-Gbps SAS 2.0 interface. As of this writing, the following 2.5-inch enterprise drives are available for DS8870, which all support Full Disk Encryption (FDE):
- 146 GB (15,000 rpm)
- 300 GB (15,000 rpm)
- 600 GB (10,000 rpm)
- 900 GB (10,000 rpm)

These enterprise-class drives, which use a 6.35 cm (2.5-inch) form factor, provide increased density and thus increased performance per frame. These drives are built on a traditional design with spinning platters, so they are also known as mechanical drives.

An 8.89 cm (3.5-inch) SAS Nearline disk is also available and supports Full Disk Encryption. These 3 TB (7,200 rpm) Nearline drives are offered in steps of a half drive set (eight disk drives).

Encrypting the data at rest is one option. Whether the FDE-capable drives are used with or without active encryption does not impose any performance difference.

Another option, often chosen by clients, is to install a certain capacity percentage of Solid-State Drives (SSDs) in the DS8870. Initially, a 400 GB SSD (2.5-inch FDE) is available. The SSD drives that were previously listed in this section can be ordered in 16-drive installation groups (full drive set) or in eight-drive installation groups (half drive set). The suggested configuration of SSDs for an optimum price-to-performance ratio, and to come to ideally balanced configurations, is 16 drives per DA pair. For more information about SSD configurations, see 8.5.3, “DS8000 Solid-State drive considerations” on page 248.

Intermixing drives: Intermixing drives of different capacities and speeds is supported on a DA pair, but not within a storage enclosure pair.

Capacity
A summary of the capacity characteristics is listed in Table 2-2. With one DS8870 Model 96E expansion unit, the DS8870 supports up to 576 2.5-inch disk drives, and up to 16 Fibre Channel/FICON adapters, for a maximum gross capacity of up to 518 TB. With two expansion units, the DS8870 Model 961 supports up to 1056 2.5-inch disk drives, and up to 16 Fibre Channel/FICON adapters, for a maximum gross capacity of up to 950 TB. With three DS8870 Model 96E expansion units, the DS8870 Model 961 supports up to 1536 2.5-inch disk drives, and up to 16 Fibre Channel/FICON adapters, for a maximum gross capacity of currently up to 1.4 PB. The minimum capacity is achieved by installing one drive group (16 drives) of 146 GB 15K enterprise drives.

Table 2-2 Capacity comparison of device adapters, DDMs, and storage capacity: 2012

Component                                        2-way business   4/8/16-way        8-way or 16-way,  8-way or 16-way,  8-way (large cache) or 16-way,
                                                 class base frame enterprise class  one expansion     two expansion     three expansion frames
                                                                  base frame        frame             frames
DA pairs                                         1 or 2           1-4               5-8               8                 8
HDDs                                             Up to 144        Up to 240         Up to 576         Up to 1056        Up to 1536
SSDs                                             N/A              Up to 192         Up to 384         Up to 384         Up to 384
Physical capacity*, gross, 2.5-inch SFF disks    Up to 130 TB     Up to 216 TB      Up to 518 TB      Up to 950 TB      Up to 1.38 PB
Physical capacity*, net RAID-5, 2.5-inch SFF     Up to 100 TB     Up to 164 TB      Up to 399 TB      Up to 755 TB      Up to 1.11 PB
Physical capacity*, gross, 3.5-inch LFF disks    Up to 216 TB     Up to 360 TB      Up to 864 TB      Up to 1.58 PB     Up to 2.30 PB
Physical capacity*, net RAID-6, 3.5-inch LFF     Up to 130 TB     Up to 213 TB      Up to 521 TB      Up to 1.01 PB     Up to 1.51 PB

* PB/TB definition according to ISO/IEC 80000-13

Additional drive types are continuously qualified during the lifetime of the machine. For more information about comparison values for Model 961 and 96E, see the Overview of physical configurations section at this website:

http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/index.jsp?topic=%2Fcom.ibm.storage.ssic.help.doc%2Ff2c_physconfigovr_1xfg41.html

Best practice: Although the SSDs can be ordered in full sets (16 drives) or half sets (eight drives), the optimum price-to-performance ratio is achieved by using full drive sets (16).

Adding DDMs and Capacity on Demand
The DS8870 includes a linear capacity growth of up to 2.3 PB gross capacity that uses 3.5-inch Nearline disks. A significant benefit of this capacity growth is the ability to add DDMs without disruption. IBM offers Capacity-on-Demand solutions that are designed to meet the changing storage needs of rapidly growing e-business.

The Standby Capacity on Demand (CoD) offering is designed to tap into more storage and is attractive if you have rapid or unpredictable storage growth. Up to six standby CoD disk drive sets (for a total of 96 disk drives) can be concurrently field-installed into the system. To activate the sets, logically configure the disk drives for use, which is a nondisruptive activity and does not require intervention from IBM. Upon activation of any portion of a standby CoD disk drive set, you must place an order with IBM to initiate billing for the activated set. Then, you can also order replacement standby CoD disk drive sets. SSDs are not available for CoD configurations. For more information about the standby CoD offering, see the DS8870 announcement letter at this website:

http://www.ibm.com/common/ssi/index.wss

Device adapters and performance
By default, the DS8870 includes a pair of Device Adapters per 48 DDMs. If you order a system with, for example, 96 drives, you receive two Device Adapter (DA) pairs (a total of four Device Adapters). When you order 432 disk drives, you receive eight DA pairs, which is the maximum number of DA pairs. Adding more drives beyond 432 does not add DA pairs; instead, each DA loop gets bigger. Having enough DA pairs is important to achieve the high throughput level that is required by certain sequential workloads, such as data-warehouse installations.

Scalable upgrades
With the DS8870, it is possible to start with a two-way configuration with disk enclosures for 48 DDMs, and grow to a full-scale, 1536-drive, four-frame configuration concurrently. The following configurations are available:

- Two-way Business Class base with one I/O enclosure pair:
  - Enables a lower entry price by not requiring a second I/O enclosure pair
  - Total: 4 HA, 4 DA
- Four-way = two-way base + processor card feature + second I/O enclosure pair feature:
  - Enables improved performance on the base rack
  - Total: 8 HA, 8 DA
- Eight+-way base + first expansion frame:
  - Enables 4 I/O enclosure pairs and 16 host adapters and 8 device adapter pairs
  - Total: 16 HA, 16 DA
- Eight+-way base with first expansion frame + second expansion frame, which enables up to 1056 drives
- 16-way or 8-way large-cache base with first expansion frame + second expansion frame + third expansion frame, which enables up to 1536 drives

2.4 Additional licenses that are needed

Some of the IBM System Storage DS8870 features require a license key. For example, the Copy Services are licensed by the gross amount of capacity that is installed. The license key code enables the customer to download the function authorization from the Disk Storage Feature Activation website.

One feature that was described in 1.3.1, “Overall architecture and components” on page 10 is the IBM Easy Tier. Easy Tier offers the automated cross-tier relocation of heavily accessed parts of the volumes and, for the current V2.0 version, the auto-rebalancing of unevenly loaded ranks in single-tier drive pools. It also offers manual commands for the online relocation of volumes. You need a license key to enable Easy Tier; however, because Easy Tier is free of charge, it is generally configured. For more information about IBM Easy Tier, see 7.7, “IBM Easy Tier” on page 197, or see IBM System Storage DS8000 Easy Tier, REDP-4667.

To effectively use the encryption option of the FDE-capable disk drives in the DS8870, one isolated Tivoli Key Lifecycle Manager server is required. If encryption is wanted, then Feature Code #1750 should be included in the order, and encryption should be enabled at first use. Clients who do not plan to enable encryption should specify the encryption deactivation indicator feature FC1754 in their order. The Tivoli Key Lifecycle Manager isolated server can be ordered with the DS8870 (Special Order feature FC1760 per Tivoli Key Lifecycle Manager server hardware). The software licenses for Tivoli Key Lifecycle Manager are available with the product number 5608-A99. You will need two servers for redundancy and to avoid deadlock situations (the second server could also be an IBM Security Key Lifecycle Manager server for z/OS based installations).

For more information about the features and licensing options for these features, see Chapter 10, “IBM System Storage DS8000 features and license keys” on page 271.
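License keys are applied to the storage image before the related functions can be configured. As an illustration only, the following DSCLI sketch applies an activation code and then lists the active keys; the storage image ID, key value, feature names, and output columns are hypothetical placeholders that vary by installation:

   dscli> applykey -key ABCD-1234-EFGH-5678-IJKL-9012-MNOP-3456 IBM.2107-75XY121
   dscli> lskey IBM.2107-75XY121
   Activation Key          Authorization Level (TB)  Scope
   ========================================================
   Operating environment   180.0                     All
   Metro mirror            180.0                     FB

The lskey output confirms which features, such as the Operating Environment License or Copy Services, are active and for how much capacity.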

Chapter 3. DS8870 hardware components and architecture

This chapter describes the hardware components of the IBM System Storage DS8870. It provides readers with more insight into individual components and the architecture that holds them together. Although functionally the DS8870 remains similar to the DS8800, there are significant hardware differences, which are highlighted in this chapter.

This chapter covers the following topics:
- DS8870 frames
- DS8870 architecture overview
- Disk subsystem
- Host adapters
- Power and cooling
- Management console network
- Isolated Tivoli Key Lifecycle Manager server

3.1 Frames: DS8870

The DS8870 is designed for modular expansion. From a high-level view, there appear to be three types of frames that are available for the DS8870. However, on closer inspection, the frames themselves are almost identical. The only variations are the combinations of processors, I/O enclosures, storage enclosures, and disks that the frames contain.

Figure 3-1 shows a full four-frame DS8870 configuration. The left frame is a base frame (Model 961) that contains the processors. In this example, this frame features dual 8-core IBM System p® POWER7 servers (a dual 8-core system with 128-GB memory is the minimum that is required to support up to three expansion frames). The second frame is an expansion frame (96E) that contains more I/O enclosures but no additional processors. The third and fourth frames are also expansion frames (96E) that contain only disks and power units.

Each frame contains a power area with power supplies and other power-related hardware. All DS8870 frames use direct current uninterruptible power supplies (DC-UPS) with AC input. If the Extended Power Line Disturbance (ePLD) option is not available, one Battery Service Module (BSM) set per Power Supply (for the base and expansion racks) is needed. If the ePLD option is wanted, two BSM sets per Power Supply (for the main and expansion racks) are needed. Only the base frame contains rack power control (RPC) cards to control power sequencing for the storage unit.

Figure 3-1 DS8870 front view with four Frames with ePLD installed

3.1.1 Base frame: DS8870

As shown in Figure 3-1, the left side of the base frame (viewed from the front of the machine) is the frame power area. Other models in the IBM Storage DS8000 series contain primary power supply (PPS) units and individual battery sets.

RoHS compliant: The DS8870 is Restriction of Hazardous Substances (RoHS) compliant.

This configuration is used whether the configuration is dual 2-core. Intermixing HDDs and SDDs in the same disk enclosure pair is not supported. There are two types of expansion frame configurations. Each I/O enclosure can contain up to two device adapters and two host adapters. host adapters. For more information about the disk subsystem. “DS8870 Physical planning and installation” on page 223. These fans pull cool air from the front of the frame and exhaust to the rear of the frame. The I/O enclosures provide connectivity between the adapters and the processors. SSDs. In its maximum configuration. or large form factor (LFF) 3-TB nearline-SAS HDDs. The first expansion frame always contains four I/O enclosures (two pairs). Between the disk enclosures and the processor complexes are two Ethernet switches and the Storage Hardware Management Console (HMC). Disk drives are hard disk drives (HDDs) with real spinning disks or Solid State Disk drives (SSDs). The adapters that are contained in the I/O enclosures can be device adapters. The base frame can contain up to 10 disk enclosures. and up to two BSM sets. Inside the disk enclosures are cooling fans in the storage enclosure power supply units. Important: You cannot use expansion frames from previous DS8000 models as expansion frames for a DS8870 storage system. or both. Chapter 3. These groups are installed evenly across the disk enclosure pair. DS8870 hardware components and architecture 35 . Each enclosure can contain up to 24 disk drives with the Small Form Factor Storage Enclosure or 12 disk drives with the Large Form Factor Storage Enclosure. dual 8-core. HDDs are installed in groups of 16. The base frame also contains I/O enclosures. dual 4-core. A disk enclosure pair can contain small form factor (SFF) HDDs. 3. The base frame contains two processor Central Electronics Complexes (CECs). The communication path that is used for adapter-to-processor complex communication in the DS8870 consists of PCI Express Generation 2 connections. see 3.4. which are installed in pairs. the base frame can hold 240 disk drives. and whether the system includes the Extended Power Line Disturbance (ePLD) option. These POWER7 processor-based servers contain the processor and memory that drive all functions within the DS8870. or dual 16-core. “Disk subsystem” on page 48. For more information about disks.1. SSDs can be installed in groups of 16 (full disk set) or 8 (half disk set).2 Expansion frames Only dual 8-way and dual 16-way DS8870 configurations system can have expansion frames (model 96E). or both. Important: All drives are full disk encryption capable. which are installed in pairs.Each DC-UPS consists of one Control Module (DSU). The second and third expansion frames do not include I/O enclosures. see Chapter 8. These I/O enclosures provide connectivity between the adapters and the processors.

use the Data Studio Storage Manager GUI or the HMC Manage Serviceable Events menu to determine why the indicator is illuminated. The first expansion rack requires one BSM set. The first expansion frame can contain up to 14 storage enclosures (installed in pairs). the first expansion frame can contain up to 336 disk drives. two BSM sets per DC-UPS are needed. RPC cards are present only in the base frame. 36 IBM System Storage DS8870 Architecture and Implementation . The expansion frames do not contain RPC cards. depending on the configuration. If the Extended Power Line Disturbance option is available.5-inch disks that are installed or 12 large form factor (LFF) 3. If this indicator is lit solid amber. Figure 3-2 shows the operator panel for DS8870. A storage enclosure can have up to 24 small form factor (SFF) 2. The second expansion frame can contain up to 480 disk drives.5-inch disks installed. which contain the disk drives. This switch is used only for emergencies. the emergency power off switch (an EPO switch) is also accessible. In a maximum configuration. one for each line cord. The DS8870 power system consists of one DC Supply Unit (DSU) and up to two BSM sets. The EPO switch is on top of the DC-UPS units in the upper left corner of the DS8870 base frame. There also is a fault indicator. Data in non-volatile storage (NVS) is not destaged and is lost. 3. One BSM set includes one Master Module and three Slave Modules. both of these indicators are illuminated green if each line cord is supplying correct power to the frame.1. When the doors are open.3 Rack operator panel The DS8870 frame features status indicators. Each expansion frame contains two direct current uninterruptible power supplies (DC-UPSs). The status indicators can be seen when the doors are closed. The second and third expansion frames can contain up to 20 storage enclosures (installed in pairs). Figure 3-2 Rack operator window: DS8870 Each panel has two line cord indicators. Tripping the EPO switch bypasses all power sequencing control and results in immediate removal of system power. For normal operation.The left side of each expansion frame (when viewed from the front of the machine) is the frame power area.

is destaged properly to disk before power down. Chapter 3. It is not possible to shut down or power off the DS8870 from the operator panel. Figure 3-3 Emergency power off (EPO) switch: DS8870 There is no power on/off switch in the operator window because power sequencing is managed through the HMC. When the EPO is activated. Figure 3-3 shows the location of the EPO switch in the DS8870. which is known as modified data.Important: Do not trip this switch unless the DS8870 is creating a safety hazard or is placing human life at risk. DS8870 hardware components and architecture 37 . IBM must be contacted to restart the machine. This configuration ensures that all data in nonvolatile storage.

The p740 processors operate at 3.3. and the disk enclosures remain the same as in the DS8800. half-length slots One PCIe x4 Gen2 full-height. The DS8870 processor complex is based on the POWER7 technology and uses PCIe Gen2 links to communicate with the storage enclosures.2. 3. dual 4-core. is an integrated power converter unit with power monitoring and battery backup functions. DC-UPS. each card holds a maximum of eight DIMMs.27 GBps. or dual 16-core processors. The Model 961 is available with dual 2-core. There are substantial differences between this new processor complex and that of the DS8700 and DS8800. The DC-UPS is designed to provide rectified AC and battery backup to power supplies in a rack enclosure. as shown in Figure 3-4. each card holds maximum of eight DIMMs One storage cage with two drives Five PCIe x8 Gen2 full-height. The DS8870 CEC has two processor modules and each processor module has two memory riser cards with 16 DIMMs. half-length slot that is shared with the GX++ slot2 Optional 4 PCIe x8 Gen2 low-profile slots Up to two GX++ slots Four 120 mm fans Two power supplies 38 IBM System Storage DS8870 Architecture and Implementation . The DS8870 uses a new power subsystem. Figure 3-4 DS8870 configuration matrix The p740 server supports a maximum of 32 DDR3 DIMM slots and one to four memory riser cards.55 GHz. The 8-Gbps Fibre Channel Protocol (FCP) or FICON Host Adapter and 8-Gbps Device Adapter.1 The IBM POWER7 processor-based server The DS8870 uses the POWER7 p740 server technology. and features the following configuration: One system planar One to four memory riser cards.2 DS8870 architecture overview The DS8870 is built around two new processor CECs. The DS8870 CEC is a 4U-high drawer. The Direct Current Uninterruptible Power Supply. dual 8-core. The maximum memory bandwidth per processor card is 68.

Each processor socket can be configured with two cores in the minimum configuration and up to eight cores (per processor socket) in the maximum configuration.redbooks. which is found at this website: http://www. Chapter 3.com/redpieces/pdfs/redp4797.Figure 3-5 shows the Processor Complex as configured in the DS8870. see IBM Power 720 and 740 Technical Overview and Introduction. as shown in Figure 3-6. REDP-4797. Inactive cores 2 -way co nfi g Active cores 4-wa y 1 so cket con fi g 8-wa y co nfi g 1 6-wa y con fi g empty Processor sockets P7 C EC empty Processor sockets Processor sockets Processor sockets Figure 3-6 Processor sockets and cores To provide different performance configuration. DS8870 hardware components and architecture 39 .pdf The DS8870 base frame contains two processor complexes.ibm. Figure 3-5 DS8870 Processor Complex front and rear view For more information about the server hardware that is used in the DS8870. The POWER7 processor-based server features up to two processor sockets single chip module (SCM). the number of enabled processor cores and the amount of installed memory varies.

1. there are up to four important PCI adapters that are placed in specific slots of the DS8870 process complex. 1.2. 2. 3 0. 3 0. These adapters allow for point-to-point interconnections between devices by using a directly wired interface between these connection points. These adapters plug into the CEC GX+ bus and have an onboard chip (P7IOC) that supply three PCIe ports. 5 16/0. Figure 3-7 PCIe adapter locations in the processor complex 40 IBM System Storage DS8870 Architecture and Implementation . 3 Max Cache Upgrade (Single step/CEC (GB)) 64 64 128 256 512 512 N/A 4 4 4 4 4 8 16 3. The upgrade preserves the system serial number. Table 3-1 shows the seven supported configurations. 2. 1. These cards are designed to provide the connection between the CECs and I/O bays: Two single port PCIe Gen2 adapters in slot 3 and 4 Two multi-port GX++ PCIe adapters in slot 1 and 8. Depending on the configuration that is used. 2. Figure 3-7 show main PCIe adapter locations in the CEC. Table 3-1 Configuration attribute Processor configuration 2-core 2-core 4-core 8-core 8-core 16-core 16-core Cache/NVS per CEC (GB) 8/0.2 Peripheral Component Interconnect Express adapters The DS8870 processor complex uses PCIe Peripheral Component Interconnect Express adapters.5 32/1 64/2 128/4 256/8 512/16 DIMM Size (GB) Expansion Frames 0 0 0 0. A DS8870 processor complex is equipped with the following PCIe cards. the number of cores and memory can be upgraded non-disruptively. 2 0. 1.In the DS8870.

Figure 3-8 shows how the DS8870 hardware is shared between the servers. it has its own affiliated device adapters. Figure 3-8 DS8870 series architecture Chapter 3. see this website: http://www. The CEC uses the N-way symmetric multiprocessor (SMP) of the complex to perform its operations. as shown in Figure 3-8.html?Open 3.redbooks. To access the disk arrays under its management.nsf/RedbookAbstracts/tips0456. All cross-cluster communication uses the PCIe paths through the I/O enclosures.For more information about PCI Express. For fast-write data. DS8870 hardware components and architecture 41 . it features a persistent memory area for the processor complex. It records its write data and caches its read data in the volatile memory of the complex.3 Storage facility processor complex The DS8870 storage facility consists of two POWER7 740 servers. One significant change of the DS8870 is the absence of a RIO-G loop that is used for cross-cluster communication on previous models.2. The DS8870 uses the PCIe paths across the IO enclosures to provide a communications path between the CECs.com/Redbooks.ibm. The host adapters are shared between both servers. On the left side and on the right side there is one processor complex (CEC).

4 Processor memory and cache management The DS8870 includes up to 1 TB of total processor memory. The DS8870 configuration options are based on the number of processor cores and installed memory. If power is lost.3.) 4 9xE Attach Business Class 961 2-core 0 Enterprise Class 961 961 961 96E 96E 4-core 8-core 16-core N/A N/A 360 TB 2. 4. which also helps to optimize performance. 42 IBM System Storage DS8870 Architecture and Implementation .) 144 Memory (GB) 16/32 Host Adapters (max. The set of processors is referred to as a symmetric multiprocessor (SMP). All memory that is installed on any processor subset is accessible to all processors in the processor complex. which executes up to four instructions in parallel. SMT4 mode enables the POWER7 processor to maximize the throughput of the processor core by offering an increase in core efficiency. The effectiveness of a read cache depends on the hit ratio.304 TB 2. The NVS scales to the processor memory size that is selected.2. DS8870 contains volatile memory that is used as a read and write cache and non-volatile memory that is used as a write cache. The absolute addresses that are assigned to the memory are common across all processors in the processor complex.) 216 TB Disk Drives (max.1 TB The possible configurations are shown in Figure 3-9.304 TB 504 TB 720 TB 240 1536 1536 336 480 64 128/256 512/1024 N/A N/A 8 16 16 8 N/A 0 0-3 0-3 N/A N/A First Expansion Frame Second/Third Expansion Frame Figure 3-9 Configuration options for Business Class to Enterprise class Each processor complex has half of the total system memory. Like other modern caches. The following DS8870 configuration upgrades can be performed non-disruptively: Scalable processor configuration with 2. Caching is a fundamental technique for reducing I/O latency. The POWER7 processor that is used in the DS8870 can operate in simultaneous multithreading (SMT4) mode. the batteries keep the system running until data in NVS is written to the CECs internal disks. Model Processor Physical Capacity (max. 8 and 16 cores per controller Scalable cache form 16 GB . which is the fraction of requests that are served from the cache without necessitating a read from the disk (read miss).

It is an embedded controller that is based on an IBM PowerPC® processor core. The system power control network (SPCN) is used to control the power of the attached I/O subsystem.5 Flexible service processor and system power control network The Power 740 system planar contains one flexible service processor (FSP). fans. Critical events trigger appropriate signals from the hardware to the affected components to prevent any data loss without operating system or firmware involvement. and temperature. Non-critical environmental events also are logged and reported. The SPCN monitors environmental characteristics such as power. Chapter 3.2. DS8870 hardware components and architecture 43 . This ability enables the FSP to take appropriate action. Critical and non-critical environmental conditions can generate Early Power-Off Warning (EPOW) events. The FSP performs predictive failure analysis that is based on any recoverable processor errors.3. and it can monitor the operating system for loss of control. The SPCN control software and the FSP software that is run on the same PowerPC processor. The FSP can monitor the operation of the firmware during the boot process.

dual 8-core. Device adapters (DA) and host adapters (HA) are installed in the I/O enclosures.1 DS8870 I/O enclosure Each CEC (p740) has a P7IOC chip that drives two single-port PCI adapters that connect to two I/O enclosures. The I/O enclosure physical architecture is the same as in the DS8800. Each I/O enclosure has six slots.3 I/O enclosures The DS8870 base frame and first expansion frame (if installed) contain I/O enclosures. and a second GX++ PCIe adapter in slot 8. Two I/O enclosures pairs are installed in the first expansion frame. or dual 16-core). dual 4-core. There is no difference between the previous DS8800 and the new DS8870 I/O bays and HAs. depending on configuration (dual 2-core. Figure 3-10 DS8870 I/O enclosure connections to CEC A dual 8-core configuration includes two I/O enclosure pairs that are installed in the base frame. Figure 3-10 shows the DS8870 CEC to I/O enclosure connectivity (dual 8-core with first expansion frame). Other I/O enclosures connect to the first GX++ PCIe adapter in slot 1. which are installed in pairs. The DS8870 can have up to two DAs and two HAs installed in each I/O enclosure. There can be one or two I/O enclosure pairs that are installed in the base frame. The I/O enclosures provide connectivity between the processor complexes and the HAs or DAs. and two I/O enclosure pairs in the first expansion frame (if installed). as shown in Figure 3-10. Each I/O enclosure includes the following attributes: 5U rack-mountable enclosure Six PCI Express slots Default redundant hot plug power and cooling devices 44 IBM System Storage DS8870 Architecture and Implementation .3.3. which connects to I/O enclosures in the expansion frame. 3.

HA cards can be installed only in slot 1 and 4. This configuration provides much faster write performance than writing to the actual disk. With the first expansion model. Each port can be configured to operate as a Fibre Channel port or a FICON port. Chapter 3.3. Optimum availability: To obtain optimum availability and performance. Figure 3-11 shows the locations for HA cards in the DS8870 I/O enclosure. The DS8870 HA cards can have four or eight ports and support 2-Gbps. During write requests. DS8870 Host Adapters The DS8870 supports up to two FC or FICON HAs per I/O enclosure. Figure 3-11 DS8870 I/O enclosure adapter layout The DS8870 Model 961 with four I/O enclosures contains a maximum of eight 8-port host adapters (64 host adapter ports). Slots 2 and 5 are reserved and cannot be used. or 8-Gbps full-duplex data transfer over longwave or shortwave fibre links. for a total of 128 host ports. in which the data is written to volatile memory on one processor complex and preserved memory on the other processor complex. The servers manage all read and write requests to the logical volumes on the disk arrays. DS8870 hardware components and architecture 45 . Preserved memory also is called NVS. the servers use fast-write. 96E.2 Host adapters Attached host servers interact with software that is running on the complexes to access data on logical volumes.3. one HA card should be installed in each available I/O enclosure before a second HA card is installed in the same enclosure. 4-Gbps. The server then reports the write as complete before it is written to disk. another eight 8-port HAs are available by adding another 64 host adapter ports.

The DS8870 uses the Fibre Channel protocol to transmit SCSI traffic inside Fibre Channel frames. To ensure maximum data integrity. 46 IBM System Storage DS8870 Architecture and Implementation . Each Fibre Channel port supports a maximum of 509 host login IDs and 1280 paths. A port cannot be FICON and FCP simultaneously. The chart shows the host adapter positions and plugging order for four I/O enclosures.Figure 3-12 shows the preferred HA plug order for DS8870. It also uses Fibre Channel to transmit FICON traffic. Each of the ports on a DS8870 host adapter can also independently be FCP or FICON. Each 8 Gbps port independently auto-negotiates to 2. Figure 3-12 DS8870 HA plug order Fibre Channel is a technology standard that allows data to be transferred from one node to another at high speeds and great distances (up to 10 km). The type of port can be changed through the Data Studio Storage Manager GUI or by using DSCLI commands. which uses Fibre Channel frames to carry System z I/O. but it can be changed as required. Each DS8870 Fibre Channel adapter offers four or eight 8 Gbps Fibre Channel ports. This configuration allows large storage area networks (SANs) to be created. The card itself is PCIe Gen 2. The cable connector that is required to attach to this adapter is an LC type. or 8 Gbps link speed. 4. The card is driven by a new high-performance application-specific integrated circuit (ASIC). HA positions and plugging order for the four I/O enclosures are the same for the base frame and the expansion frames with I/O enclosures. it supports metadata creation and checking.

The adapter is responsible for managing.ibm. the adapter supports metadata creation and checking. To ensure maximum data integrity. to the disk enclosures. Whenever the device adapter connects to a disk. 3.com/systems/support/storage/config/ssic/index. Each adapter connects the complex to two separately switched Fibre Channel networks. The DS8870 can have up to 16 of these adapters (installed in pairs). it uses a bridged connection to transfer data. Each DS8870 DA card offers four FC-AL ports. two are used to interconnect with other disk enclosures. Fibre Channel distances The following types of HA cards are available: Longwave Shortwave With longwave. DS8870 hardware components and architecture 47 .Fibre Channel supported servers The current list of servers that are supported by Fibre Channel attachment is available at this website: http://www. Chapter 3. Each disk is attached to both switches. All ports on each card must be longwave or shortwave. and rebuilding the RAID arrays. you are limited to a distance of 500 meters (non-repeated). monitoring. These ports are used to connect the processor complexes. The adapter provides remarkable performance thanks to a high function and high performance ASIC.3. Of these 32 ports.3 Device adapters Each processor complex accesses the disk subsystem by way of 4-port Fibre Channel arbitrated loop (FC-AL) device adapters (DAs). there is no intermixing of the two types within a card.jsp Consult these documents regularly because they contain the most current information about server attachment support. through the I/O enclosures. Each network attaches disk enclosures that each contain up to 24 disks. and two interconnect to the device adapters. Each storage enclosure contains two 32-port bridges. This configuration means that all data travels through the shortest possible path. 24 are used to attach to the 24 disks in the enclosure. With shortwave. you can connect nodes at distances of up to 10 km (non-repeated).

“RAS on the disk system” on page 84. Without a drive or a dummy. or 16 disks. cooling air does not circulate properly.5-inch small form factor or 3. The interface control card has an 8 Gbps FC-AL switch with a Fibre Channel (FC) to SAS conversion logic on each disk port.4 Disk subsystem The disk subsystem consists of the following components: Device adapter pairs (installed in the I/O enclosures). Important: If a DDM is not present. The SFF and LFF enclosures are shown in Figure 3-13 on page 49. Each DDM is an industry-standard Serial Attached SCSI (SAS) disk. depending on whether it is a base or expansion frame. 8. Each DS8870 disk enclosure contains a total of 24 2.5-inch large form factor disks: SFF disks: This size allows 24 disk drives to be installed in each storage enclosure LFF disks: This size allows 12 disk drives to be installed in each storage enclosure Each disk plugs into the disk enclosure backplane.4. For more information.3.5-inch large form factor (LFF) DDMs. but contains no electronics. Device adapters are RAID controllers that access the installed disk drives.5-inch small form factor (SFF) DDMs or 12 3. The FC trunking connection provides a full 8 Gbps transfer rate from a group of drives with lower interface speeds. The backplane is the electronic and physical backbone of the disk enclosure. A dummy carrier is similar to a DDM in appearance. The installed disks. They have the same form factor as SFF DS8870 HDD disks.6. commonly referred to as disk drive modules (DDMs). The device adapter pairs connect to Fibre Channel controller cards (FCIC) in the disk enclosures. its slot must be occupied by a dummy carrier. The DDMs can be the following 2. The DS8870 also supports SSDs. 48 IBM System Storage DS8870 Architecture and Implementation . Both enclosure types can contain dummy carriers. We describe the disk subsystem components in the remainder of this section. or 20 disk enclosures. 14. SSDs also are included in disk enclosures that are partially populated with 4. see 4. The DS8870 data disks are installed in enclosures that are called disk enclosures or storage DS8870 disk enclosures Each DS8870 frame contains a maximum of 10. The FC and SAS conversion function provides speed aggregation on the FC interconnection ports. These disk enclosures are installed in pairs. This connection creates a switched Fibre Channel network to the installed disks.1 Disk enclosures enclosures. 3. or fully populated with 24 disks. SSDs and HDDs cannot be intermixed within the same enclosure pair. Each disk enclosure has a redundant pair of FCICs that provides the interconnect logic for the disk access and a SES processor for enclosure services.

Switched FC-AL technology includes the following key features: Standard FC-AL communication protocol from DA to DDMs Direct point-to-point links are established between DA and DDM Isolation capabilities in case of DDM failures. This configuration features the following key benefits: Two independent networks are available to access the disk enclosures Four access paths are available to each DDM Each DA port operates independently Double the bandwidth over traditional FC-AL loop implementations Chapter 3.Figure 3-13 DS8870 disk enclosures for SFF and LFF Switched FC-AL advantages The DS8870 uses switched FC-AL technology to link the DA pairs and the DDMs. Switched FC-AL uses the standard FC-AL protocol. but the physical implementation is different. where no cable rerouting is required when another disk enclosure is added The DS8870 architecture uses dual redundant switched FC-AL access to each of the disk enclosures. providing easy problem determination Predictive failure statistics Simplified expansion. DS8870 hardware components and architecture 49 .

Figure 3-14 DS8870 Disk Enclosure (only 16 disks are shown for simplicity) When a connection is made between the device adapter and a disk. the storage enclosure uses backbone cabling at 8 Gbps. The DDMs that are added must be of the same capacity and speed as the 16 DDMs that are already in the enclosure pair. the first arrays that are created on each DA pair include DDMs that are used as spares until the minimum number of four spares are reached. we have four effective data paths to each disk. Instead. SSD disks support only RAID 5. Depending on the RAID type. which is translated from FC to SAS to the disk drives. two new disk enclosures would be added as a pair. Remember the following RAID support guidelines: SFF and LFF disks support all RAID 5. they are used to fill up that pair of disk enclosures. or RAID 10 array by choosing one array site that is based on the data protection and performance are required. if a DS8870 had six SFF disk enclosures total and all the enclosures were fully populated with disks. there would be 144 DDMs in three enclosure pairs. Expansion Disk enclosures are added in pairs and disks are added in groups of 16. For example. they are added to the end of the loop.In Figure 3-14. you can create a RAID 5. RAID 6. A DS8870 with SFF enclosures takes three orders of 16 DDMs to fully populate a disk enclosure pair (top and bottom). The 3-TB nearline-SAS disks support only RAID 6. the FC-switched networks do not need to be broken to add the disk enclosures. eight DDMs go in one disk enclosure of the pair and the remaining eight DDMs go in the other disk enclosure. RAID 6. If 16 DDMs are ordered later. each DDM is shown as attached to two separate FCICs with bridges to the disk drive. This means that a mini-loop is created between the DA port and the disk. RAID 10 is supported only as an RPQ. By using two DAs. and RAID 10. During the configuration process. Arrays and spares Array sites that contain eight DDMs are created as the DDMs are installed. In each case. Each DA can support two switched FC networks. 50 IBM System Storage DS8870 Architecture and Implementation . If 16 DDMs were purchased.

DS8870 hardware components and architecture 51 . the upper enclosure populates one loop. Chapter 3. If all DDMs are the same capacity and speed. this number can increase depending on DDM intermix. there are 16 or 24 DDMs in each enclosure. and four are taken from the other enclosure in the pair. 8 in each disk enclosure. and the other disk enclosure of the pair is on a second switched loop. which are known as array across loops (AAL). An array site consists of eight DDMs. DDMs are purchased in groups of 16. One disk enclosure of the pair is on one FC switched loop. Arrays across loops Figure 3-15 shows the DA pair layout. four spares are sufficient. Only 16 DDMs are shown in Figure 3-16 on page 52. Figure 3-15 DS8870 switched loop layout (only 8 disks per enclosure are shown for simplicity) For the DS8870 with SFF enclosures.The intention is to have only four spares per DA pair. When fully populated. Four DDMs of the largest capacity and at least two DDMs of the fastest rpm are needed. Half of the new DDMs go into one disk enclosure and the other half is placed into the other disk enclosure of the pair. This configuration splits the array across two loops. half of the array is on each disk enclosure. and the lower enclosure populates the other loop. Each enclosure places two FC switches onto each loop. One DA pair creates two switched loops. However. Each SFF enclosure can hold up to 24 DDMs. as shown in Figure 3-16 on page 52. When a RAID array is created on the array site. Four DDMs are taken from one enclosure in the disk enclosure pair. in a disk enclosure pair.

Each loop hosts one RAID 0 array.2 Disk drives The DS8870 supports the following disk types: SSDs SAS Enterprise disks (all disks that are spinning at a speed equal or higher than 10K rpm) SAS nearline disks (all disks that are spinning at a lower speed than 10K rpm) For the DS8870. The DS8870 also disables the encryption function. When the device adapter writes a stripe of data to a RAID 5 array. 3. By splitting the workload in this manner. it sends half of the write to each switched loop. This configuration aggregates the bandwidth of the two loops and improves performance. half of the array is placed on each loop. The green indicator shows ready status and disk activity when flashing. Array site 1 in green (the darker disks) uses the four left DDMs in each enclosure. two RAID 0 arrays are created. Array site 2 in yellow (the lighter disks). The amber indicator is used with light path diagnostic tests to allow for easy identification and replacement of a failed DDM. When an array is created on each array site. 52 IBM System Storage DS8870 Architecture and Implementation . When servicing read I/O.Figure 3-16 shows the layout of the array sites. half of the reads can be sent to each loop. All drives in the DS8870 are full disk encryption capable to secure critical data. each DDM is hot pluggable and includes two indicators. uses the four right DDMs in each enclosure. If RAID 10 is used. Figure 3-16 Array across loop AAL benefits AAL is used to increase performance. each loop is worked evenly. which improves performance by balancing the workload across loops.4.

see 4. The PPS of previous models was replaced with the DC-UPS technology. REDP-4500.Table 3-2 shows the DS8870 disk configurations. each feeds a single DC-UPS. The DC-UPS distributes rectified line AC. These cards are attached to the FSP card in each processor complex. see IBM System Storage DS8700 Disk Encryption Implementation and Usage Guidelines. the output is switched to rectified AC from the partner DC-UPS. For more information. The rack features two AC power cords.2K rpm nearline-SAS For more information about SSDs.5 Power and cooling The DS8870 power and cooling system is highly redundant. For more information about encrypted drives and inherent restrictions. “RAS on the power subsystem” on page 93. the DC-UPS switches to 208V DC battery power. The RPCs also communicate with each PPS.2K rpm nearline-SAS 400 GB SSD SAS drives with Full Disk Encryption (FDE) and Encryption Standby Capacity 146 GB 15 K rpm 300 GB 15 K rpm 600 GB 10 K rpm 900 GB 10 K rpm 3 TB 7. “DS8000 Solid-State drive considerations” on page 248. see 8.5. DS8870 hardware components and architecture 53 . Chapter 3. the components of which are described in this section.7. which allows them to communicate with the Hardware HMC and the storage facility. The DC-UPS provides rectified AC power distribution and power switching for redundancy.3. Power supply To increase power efficiency the power system of the DS8870 was redesigned. If AC is not present at the input line. If no AC input is active. 3. Rack Power Control cards The DS8870 features a pair of redundant new RPC cards that are used to control certain aspects of power sequencing throughout the DS8870. Table 3-2 DS8870 drive types Serial Attached SCSI (SAS) FDE drives and SSD FDE drives 146 GB 15 K rpm 300 GB 15 K rpm 600 GB 10 K rpm 900 GB 10 K rpm 3 TB 7.

In this case. The line cord connector requirements vary widely throughout the world. In the first expansion frame. Each DC-UPS features internal fans to supply cooling for that power supply and batteries. 54 IBM System Storage DS8870 Architecture and Implementation . the PDUs supply power to the I/O enclosures and the disk enclosures. In the base frame. the system shuts down 4 seconds after a power loss. the PDUs supply power to the disk enclosures because there are no I/O enclosures or processor complexes in this frame. The line cord might not include the suitable connector for the country in which the system will be installed. and the disk enclosures. “RAS on IBM System Storage DS8870” on page 59. the PDUs supply power to the processor complexes. The disk enclosures PSUs are connected to two separate PDUs for redundancy. In the second expansion frame. If ePLD is not installed. Each disk enclosure includes two power supply units (PSUs). There are two redundant DC-UPSs in each frame of the DS8870. for redundancy. Each PDU is supplied from both DC-UPSs in the frame in which they are installed. the connector will need to be replaced by an electrician after the machine is delivered.Figure 3-17 shows DC-UPS front and rear view. Figure 3-17 DC-UPS front and rear view The line cord must be ordered specifically for the operating voltage to meet specific requirements. the I/O enclosures. The ePLD feature (an optional feature) lets the system run for up to 50 seconds without line power and then gracefully shuts down the system. For more information about why this feature might be necessary for your installation. see Chapter 4. The DC-UPS supplies 208V output power to six power distribution units (PDUs).

Figure 3-18 shows the DS8870 base frame PDUs.

Figure 3-18   DS8870 base frame power distribution units

Processor and I/O enclosure power supplies
Each processor complex and I/O enclosure features dual redundant power supplies to convert 208V DC into the required voltages for that enclosure or complex. Each enclosure also has its own cooling fans.

Disk enclosure power and cooling
For DS8870, the disk enclosures feature two PSUs for each disk enclosure. These PSUs draw power from the DC-UPS through the PDUs. There are cooling fans in each PSU. These fans draw cooling air through the front of each disk enclosure and exhaust air out the rear of the frame. Figure 3-13 on page 49 shows the DS8870 disk enclosure PSUs.

Efficient air flow: The DS8870 is designed for a more efficient air flow so that it can be installed in hot and cold aisle configurations. More data centres are moving to hot-aisle/cold-aisle designs to optimise energy efficiency, and the DS8870 is designed with complete front-to-back airflow. Benefit: greater energy efficiency, which contributes to lower energy costs.

Figure 3-19   Cold and hot aisles example

Battery service module set
A single battery enclosure is called a BSM. A group of four battery enclosures makes up a BSM set. The following types of BSMs are used:
- One Master battery enclosure, which holds three 12V batteries. The master BSM is the only BSM with a docking connector to the Direct Current Supply Unit (DSU).
- Three Slave battery enclosures, each of which holds four 12V batteries.

The BSM helps protect data in the event of a loss of external power. In the event of a complete loss of AC input power, the batteries are used to maintain power to the processor complexes and I/O enclosures for some time, sufficient to allow the contents of NVS memory (modified data that is not yet destaged to disk from cache) to be written to the disk drives internal to the processor complexes (not the storage DDMs).

Figure 3-20 shows BSM sets placed in the frame.

Figure 3-20   Battery Service Module sets

3.6 Management console network
All base frames ship with one HMC and two Ethernet switches. A notebook HMC (a Lenovo ThinkPad T520 for the DS8870, as shown in Figure 3-21) is shipped with a DS8870. DS8870 logical configuration creation and changes are performed by the storage administrator by using the GUI or DSCLI. The changes are passed to the storage system through the HMC. For more information about the HMC, see Chapter 9, “DS8870 HMC planning and setup” on page 251.

Figure 3-21   Mobile computer HMC

3.6.1 Ethernet switches
The DS8870 base frame has two 8-port Ethernet switches. Two switches are supplied to allow the creation of a fully redundant private management network. Each processor complex includes connections to each switch to allow each server to access both private networks. The switches receive power from the internal power bus and thus do not require separate power outlets. The ports on these switches are shown in Figure 3-22.

Figure 3-22   Ethernet switch ports

Important: The internal Ethernet switches that are shown in Figure 3-22 are for the DS8870 private network only. No client network connection should ever be made directly to these internal switches. These networks cannot be accessed externally, and no external connections are allowed. External client network connection to the DS8870 system is through a separate patch panel connection.

Important: The DS8870 HMC supports IPv6, the next generation of the Internet Protocol. The HMC continues to support the IPv4 standard and mixed IPv4 and IPv6 environments.

For more information, see 4.5, “RAS on the HMC” on page 80 and 9.1, “Hardware Management Console overview” on page 252.

Chapter 4. RAS on IBM System Storage DS8870

This chapter describes the reliability, availability, and serviceability (RAS) characteristics of the IBM System Storage DS8000 family of products. Several changes and enhancements were introduced with the DS8870. These changes or enhancements were made in the Central Electronics Complex (CEC) and Power Subsystem and are described in this chapter.

This chapter covers the following topics:
- Names and terms for the DS8870 storage system
- RAS features of DS8870 CEC
- CEC failover and failback
- Data flow in DS8870
- RAS on the HMC
- RAS on the disk system
- RAS on the power subsystem
- RAS and Full Disk Encryption
- Other features

4.1 Names and terms for the DS8870 storage system
It is important to understand the naming conventions that are used to describe DS8000 components and constructs to fully appreciate the discussion of RAS concepts. Although most terms were introduced in previous chapters of this book, they are repeated and summarized here because the rest of this chapter uses these terms frequently.

Storage complex
The term storage complex describes a group of DS8000s (all models) that are managed by a single management console. A storage complex can (and often does) consist of a single DS8000 storage unit (base frame plus other installed expansion frames). If your organization has one DS8000, then you have a single storage complex that contains a single storage unit.

Storage unit
The term storage unit describes a single DS8000 (base frame plus other installed expansion frames).

Base frame
The DS8870 base frame is available as a single model type (961). It is a complete storage unit that is contained within a single base frame. A base frame contains the following components:
- Power and cooling components: Direct Current Uninterruptible Power Supply (DC-UPS)
- Power control cards: Rack Power Control (RPC) and System Power Control Network (SPCN)
- Two POWER7 CECs
- Two or four I/O enclosures that contain host adapters and device adapters
- Two Gigabit Ethernet switches for the internal networks
- Hardware Management Console (HMC)
- Up to five disk enclosure pairs (10 total) for storage disks. The disk enclosures are configured for 2.5-inch or 3.5-inch disk drive modules (DDMs):
  - Using 2.5-inch DDMs: Each disk enclosure can have up to 24 disks, and the base frame can have a maximum of 240 disks.
  - Using 3.5-inch DDMs: Each disk enclosure can have up to 12 disks, and the base frame can have a maximum of 120 disks.

To increase the storage capacity, expansion frames can be added. Expansion frames can be added only to 8-way and 16-way systems. Business class (two-way systems with 16 GB or 32 GB of system memory) and enterprise class four-way systems cannot have expansion frames. However, they can be upgraded non-disruptively to an 8-way or 16-way system to accommodate expansion frames.

Expansion frame
The 96E model type is used for expansion frames in DS8870. Up to three expansion frames can be added to the DS8870 base frame. Expansion frames of previous DS8000 generations are not supported in DS8870.

All expansion frames contain the power and cooling components that are needed to run the frame. The first expansion frame contains storage disks and I/O enclosures. Subsequent expansion frames (the third or fourth frame in the overall system) contain only storage disks. Adding an expansion frame is a concurrent operation for the DS8000.

Expansion frames feature the following capacities:
- The first expansion frame can have a maximum of 336 2.5-inch DDMs or 168 3.5-inch DDMs in 14 disk enclosures.
- The second and third expansion frames can have a maximum of 480 2.5-inch DDMs or 240 3.5-inch DDMs in 20 disk enclosures.

CEC/processor complex/storage server
In the DS8870, a CEC consists of an IBM POWER server that is built on the POWER7 architecture. A CEC is also referred to as a processor complex or a storage server. The CECs run the AIX V7.1 operating system and storage-specific microcode. Each CEC can have up to 512 GB of memory (cache) and up to a 16-core processor.

The DS8870 contains two CECs as a redundant pair so that if either fails, the DS8870 fails over to the remaining CEC and continues to run the storage unit. The CECs are identified as CEC0 and CEC1. Some chapters and illustrations in this publication refer to Server 0 and Server 1. These designations are the same as CEC0 and CEC1 for the DS8870.

Hardware Management Console
The Hardware Management Console (HMC) is the management console for the DS8870 storage unit. With connectivity to the CECs, the client network, and other management systems, the HMC becomes the focal point for most operations on the DS8870. All storage configuration and service actions are managed through the HMC. Although many other IBM products use an HMC, the DS8000 HMC is unique to the DS8000 family.

4.2 RAS features of DS8870 CEC
Reliability, availability, and serviceability (RAS) are important concepts in the design of the IBM System Storage DS8870. Hardware features, software features, design considerations, and operational guidelines all contribute to make the DS8870 reliable.

At the heart of the DS8870 is a pair of POWER7 based IBM Power servers. These servers (CECs) share the load of receiving and moving data between the attached hosts and the disk arrays. However, they are also redundant so that if either CEC fails, the system fails over to the remaining CEC and continues to run the DS8000 without any host interruption. This section looks at the RAS features of the CECs, including the hardware, the operating system, and the interconnections.

4.2.1 POWER7 Hypervisor
The POWER7 Hypervisor (PHYP) is a component of system firmware that is always active, regardless of the system configuration, even when disconnected from the managed console. It operates as a hidden partition, with no processor resources assigned to it, although it requires memory to support the resource assignment to the logical partitions on the server.

The Hypervisor provides the following capabilities:
- Reserved memory partitions allow you to set aside a portion of memory to use as cache and a portion to use as non-volatile storage (NVS).
- Preserved memory support allows the contents of the NVS and cache memory areas to be protected in the event of a server reboot.
- I/O enclosure initialization control, so that when one server is being initialized, it does not initialize an I/O adapter that is in use by another server.
- Automatic reboot of a frozen partition.

The AIX operating system uses PHYP services to manage the translation control entry (TCE) tables. The operating system communicates the wanted I/O bus address to logical mapping, and the Hypervisor returns the I/O bus address to physical mapping within the specific TCE table. The Hypervisor then can perform direct memory access (DMA) transfers to the PCI adapters. The Hypervisor needs a dedicated memory region for the TCE tables to translate the I/O address to the partition memory address.

The Hypervisor also monitors the Service Processor and performs a reset or reload if it detects the loss of the Service Processor. It notifies the operating system if the problem is not corrected.
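The TCE mechanism is, in essence, a translation table between I/O bus addresses and partition memory that the Hypervisor consults at DMA time. The following Python sketch illustrates the idea with an invented, highly simplified table; it is not the PHYP implementation:

# Highly simplified illustration of a translation control entry (TCE)
# table: I/O bus addresses map to partition physical memory. Invented
# names; not the POWER Hypervisor implementation.

PAGE = 4096

class TceTable:
    def __init__(self):
        self.entries = {}   # I/O bus page number -> physical page number

    def map(self, io_bus_addr, phys_addr):
        """Record the I/O-bus-to-physical mapping the OS asked for."""
        self.entries[io_bus_addr // PAGE] = phys_addr // PAGE

    def translate(self, io_bus_addr):
        """At DMA time, find the partition memory behind an I/O address."""
        phys_page = self.entries[io_bus_addr // PAGE]
        return phys_page * PAGE + io_bus_addr % PAGE

tce = TceTable()
tce.map(io_bus_addr=0x10000, phys_addr=0x7FFF0000)
print(hex(tce.translate(0x10008)))   # -> 0x7fff0008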

4.2.2 POWER7 processor
The IBM POWER7 processor implements 64-bit IBM Power Architecture® technology and represents a leap forward in technology achievement and associated computing capability. The multi-core architecture of the POWER7 processor is matched with innovation across a wide range of related technologies to deliver leading throughput, efficiency, scalability, and RAS.

Areas of innovation, enhancement, and consolidation
The POWER7 processor represents an important performance increase in comparison with previous generations. There also is lower energy consumption and a smaller physical footprint. The POWER7 processor features the following areas of innovation, enhancement, and consolidation:
- On-chip L3 cache that is implemented in embedded dynamic random access memory (eDRAM)
- Cache hierarchy and component innovation
- Advances in memory subsystem
- Advances in off-chip signaling, which improves latency and bandwidth

The POWER7 processor features Intelligent Threads that can vary based on the workload demand. The system automatically selects whether a workload benefits from dedicating as much capability as possible to a single thread of work, or if the workload benefits more from having capability spread across two or four threads of work. With more threads, the POWER7 processor can deliver more total capacity as more tasks are accomplished in parallel. With fewer threads, those workloads that need fast individual tasks can get the performance that they need for maximum benefit.

The addition of a new simultaneous multithreading mode, SMT4, permits four instruction threads to execute simultaneously in each POWER7 processor core. SMT4 mode also enables the POWER7 processor to maximize the throughput of the processor core by offering an increase in core efficiency.

POWER7 RAS features
The following sections describe the RAS leadership features of IBM POWER7 Systems™. These features and abilities apply to the DS8870 CECs.

POWER7 processor instruction retry
As with previous generations, the POWER7 processor can perform processor instruction retry and alternate processor recovery for a number of core-related faults. With the instruction retry function, when an error is encountered in the core in caches and certain logic functions, the POWER7 processor first automatically retries the instruction. This ability significantly reduces exposure to permanent and intermittent errors in the processor core. If the source of the error was truly transient, the instruction succeeds and the system can continue as before.

POWER7 alternate processor retry
Hard failures are more difficult because permanent errors are replicated each time that the instruction is repeated. Retrying the instruction does not help in this situation because the instruction continues to fail. POWER7 processors can extract the failing instruction from the faulty core and retry it elsewhere in the system for a number of faults. The failing core is then dynamically unconfigured and scheduled for replacement. The entire process is transparent to the partition that owns the failing instruction. Systems with POWER7 processors are designed to avoid a full system outage.

POWER7 cache protection
Processor instruction retry and alternate processor retry, as described previously in this chapter, protect processor and data caches. L1 cache is divided into sets, and the POWER7 processor can deallocate all but one set before a Processor Instruction Retry is performed. In addition, faults in the Segment Lookaside Buffer (SLB) array are recoverable by the POWER Hypervisor™. The SLB is used in the core to perform address translation calculations.

The L2 and L3 caches in the POWER7 processor are protected with double-bit detect, single-bit correct error detection code (ECC). Single-bit errors are corrected before they are forwarded to the processor, and are then written back to L2 or L3. As in POWER6, the caches maintain a cache line delete capability. A threshold of correctable errors that is detected on a cache line can result in the data in the cache line being purged and the cache line being removed from further operation without requiring a reboot. An ECC uncorrectable error that is detected in the cache can also trigger a purge and delete of the cache line. This action results in no loss of operation because an unmodified copy of the data can be held in system memory to reload the cache line from main memory. Modified data is handled through Special Uncorrectable Error handling. L2 and L3 deleted cache lines are marked for persistent deconfiguration on subsequent system reboots until they can be replaced.

POWER7 single processor checkstopping
A processor checkstop would result in a system checkstop. The POWER7 processor provides single core checkstopping. This feature, included in Power 740, is the ability to contain most processor checkstops to the partition that was using the processor at the time. This feature significantly reduces the probability of any one processor affecting total system availability.

POWER7 First Failure Data Capture
First Failure Data Capture (FFDC) is an error isolation technique. FFDC ensures that when a fault is detected in a system through error checkers or other types of detection methods, the root cause of the fault is captured without the need to re-create the problem or run an extended tracing or diagnostics program. For most faults, a good FFDC design means that the root cause is detected automatically without intervention by a service representative. Pertinent error data that is related to the fault is captured and saved for analysis. In hardware, FFDC data is collected from the fault isolation registers and the associated logic. In firmware, this data consists of return codes, function calls, and so on.

FFDC check stations are carefully positioned within the server logic and data paths to ensure that potential errors can be quickly identified and accurately tracked to a field-replaceable unit (FRU). This proactive diagnostic strategy is a significant improvement over the classic, less accurate reboot-and-diagnose service approaches.

Redundant components
High opportunity components (those components that most affect system availability) are protected with redundancy and the ability to be repaired concurrently. The use of the following redundant components allows the system to remain operational:
- POWER7 cores, which include redundant bits in L1 instruction and data caches, L2 caches, and L2 and L3 directories
- Power 740 main memory DIMMs, which use an innovative ECC algorithm from IBM research that improves bit error correction and memory failures
- Redundant and hot-swap cooling
- Redundant and hot-swap power supplies
- Redundant 12X loops to the I/O subsystem

Self-healing
For a system to be self-healing, it must be able to recover from a failing component by detecting and isolating the failed component. The system then should be able to take the component offline, fix or isolate it, and then reintroduce the fixed or replaced component into service without any application disruption. Self-healing includes the following examples:
- Bit steering to redundant memory in the event of a failed memory module to keep the server operational
- Single-bit error correction by using Error Checking and Correcting (ECC) without reaching error thresholds for main, L2, and L3 cache memory
- L2 and L3 cache line delete capability
- ECC extended to inter-chip connections on fabric and processor bus
- Dynamic processor deallocation

Memory reliability, fault tolerance, and integrity
POWER7 uses ECC circuitry for system memory to correct single-bit memory failures. In POWER7, an ECC word consists of 72 bytes of data. Of these bytes, 64 are used to hold application data. The remaining 8 bytes are used to hold check bits and more information about the ECC word. This innovative ECC algorithm from IBM research works on DIMM pairs on a rank basis. An ECC word uses 18 DRAM chips from two DIMM pairs. With this ECC code, the system can dynamically recover from an entire DRAM failure (Chipkill), but it can also correct an error even if another symbol (a byte, which is accessed by a 2-bit line pair) experiences a fault. This feature is an improvement from the Double Error Detection/Single Error Correction ECC implementation that is found on the POWER6 processor-based systems.

Chipkill is an enhancement that enables a system to sustain the failure of an entire DRAM chip, and a failure on any of the DRAM chips can be fully recovered by the ECC algorithm. The system can continue indefinitely in this state with no performance degradation until the failed DIMM can be replaced.

Hardware scrubbing is a method that is used to address intermittent errors. IBM POWER processor-based systems periodically address all memory locations. Any memory locations with a correctable error are rewritten with the correct data.
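POWER7's 72-byte ECC words and Chipkill recovery are far more sophisticated, but the principle of single-bit correction can be illustrated with the classic textbook Hamming(7,4) code. The following Python example is a teaching sketch only, not the POWER7 algorithm:

# Textbook Hamming(7,4) single-error correction: four data bits are
# protected by three parity bits. Illustrates the ECC principle only;
# POWER7 uses a much stronger code over 72-byte words.

def encode(d):                    # d: four data bits
    p1 = d[0] ^ d[1] ^ d[3]       # makes XOR over positions 1,3,5,7 zero
    p2 = d[0] ^ d[2] ^ d[3]       # makes XOR over positions 2,3,6,7 zero
    p3 = d[1] ^ d[2] ^ d[3]       # makes XOR over positions 4,5,6,7 zero
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def correct(c):                   # c: seven-bit codeword, possibly damaged
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the bad bit
    if syndrome:
        c[syndrome - 1] ^= 1          # flip it back ("scrub" the error)
    return c

word = encode([1, 0, 1, 1])
word[4] ^= 1                          # inject a single-bit memory fault
assert correct(word) == encode([1, 0, 1, 1])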

The memory DIMMs also use hardware scrubbing and thresholding to determine when memory modules within each bank of memory should be used to replace modules that exceeded their threshold of error count (dynamic bit-steering). Hardware scrubbing is the process of reading the contents of the memory during idle time and checking and correcting any single-bit errors that accumulated, by passing the data through the ECC logic. This function is a hardware function on the memory controller chip and does not influence normal system memory performance.

Fault masking
If corrections and retries succeed and do not exceed threshold limits, the system remains operational with full resources, and you or your IBM service representative do not need to intervene.

Mutual surveillance
The service processor monitors the operation of the POWER Hypervisor firmware during the boot process and watches for loss of control during system operation. It also allows the POWER Hypervisor to monitor service processor activity. The service processor can take appropriate action (including calling for service) when it detects that the POWER Hypervisor firmware lost control. The POWER Hypervisor also can request a service processor repair action, if necessary.

4.2.3 AIX operating system
Each CEC is a server that is running the IBM AIX Version 7.1 operating system (OS). This OS is IBM's well-proven, scalable, and open standards-based UNIX-like OS. This version of AIX includes support for Failure Recovery Routines (FRRs). For more information about how AIX V7.1 adds to the RAS features of IBM AIX V6.1, see the IBM Redbooks publication IBM AIX Version 7.1 Differences Guide, SG24-7910. For more information about the features of the IBM AIX operating system, see this website:
http://www.ibm.com/systems/power/software/aix/index.html

4.2.4 CEC dual hard disk drive rebuild
If a simultaneous failure of the dual hard disk drives in a CEC occurs, the AIX operating system and DS8000 microcode must be reloaded. Any fault that causes the CEC to be unable to load the operating system from its internal hard disk drives leads to this service action. The DS8870 incorporates a process that is known as a rebuild, which was introduced in the previous DS8000 generation.

Before this functionality was introduced, an IBM service representative loaded multiple CDs or DVDs directly onto the CEC that was being serviced to rebuild a system. For the DS8870, there are no optical drives on the CEC; only the HMC includes a DVD drive. For a CEC dual hard disk drive rebuild, the service representative acquires the needed code bundles on the HMC, which then runs as a Network Installation Management on Linux (NIMoL) server. The HMC provides the operating system and microcode to the CEC over the DS8870 internal network, which is faster than reading and verifying from an optical disc.

All of the tasks and status updates for a CEC dual hard disk drive rebuild are done from the HMC, which is also aware of the overall service action that necessitated the rebuild. If the rebuild fails, the HMC manages the errors, including error data, and allows the service representative to address the problem and restart the rebuild. When the rebuild is complete, the server is automatically brought up for the first time during the initial machine load (IML). After the IML is successful, the service representative can resume operations on the CEC.

Overall, the rebuild process on a DS8870 is robust and straightforward, which reduces the time that is needed to perform this critical service action. An example of the first window that is shown on the HMC when this service procedure is initiated is shown in Figure 4-1.

Figure 4-1   Initial HMC window for Hard Drive Rebuild

4.2.5 Cross Cluster communication
In previous DS8000 generations, the RIO-G bus was used for Cross Cluster (XC) communication as the path to communicate between CECs. For the DS8870, the I/O enclosures are wired point-to-point and each CEC uses a PCI Express architecture. DS8870 uses the PCIe paths across the I/O enclosures to provide the communication between CECs, which simplifies the topology. This configuration means that there is no separate path between XC communications and I/O traffic.

XC communication bears sufficient data to maintain NVS and resource states that are synchronized. During normal operations, XC traffic uses a low portion of the overall available PCIe bandwidth, so it has a negligible effect on I/O performance. XC traffic also gives more flexibility because any I/O enclosure can be used for XC communication (following certain rules). Figure 4-2 shows the PCIe fabric design of the DS8870.

Figure 4-2   DS8870 PCIe fabric and I/O Enclosures

4.2.6 Environmental monitoring
Environmental monitoring that is related to power, fans, and temperature is performed by the System Power Control Network (SPCN). Environmental critical and non-critical conditions generate Early Power Off Warning (EPOW) events. Critical events (for example, a complete AC power loss) trigger appropriate signals from hardware to the affected components to prevent any data loss without operating system or firmware involvement. Non-critical environmental events are logged and reported by using Event Scan.

Temperature monitoring also is performed. If the ambient temperature rises above a preset operating range, the rotation speed of the cooling fans is increased. Temperature monitoring also warns the internal microcode of potential environment-related problems. An orderly system shutdown, which is accompanied by a service call to IBM, occurs when the operating temperature exceeds a critical level. Voltage monitoring provides a warning and an orderly system shutdown when the voltage is out of operational specification.

4.2.7 Resource deallocation
If recoverable errors exceed threshold limits, resources can be deallocated and the system remains operational, which allows deferred maintenance at a convenient time. Dynamic deallocation of potentially failing components is nondisruptive, which allows the system to continue to run.

Dynamic deallocation functions include the following components:
- Processor
- L3 cache lines
- Partial L2 cache deallocation
- PCIe bus and slots

Persistent deallocation functions include the following components:
- Processor
- Memory
- Unconfigure or bypass failing I/O adapters
- L2 cache

Persistent deallocation occurs when a failed component is detected; it is then deactivated at a subsequent reboot. Following a hardware error that is flagged by the service processor, the subsequent reboot of the server invokes extended diagnostic testing. If a processor or cache is marked for deconfiguration by persistent processor deallocation, the boot process attempts to proceed to completion with the faulty device automatically unconfigured. Failing I/O adapters are unconfigured or bypassed during the boot process.

4.3 CEC failover and failback
To understand the process of CEC failover and failback, the logical construction of the DS8870 must be reviewed. For more information, see Chapter 5, “Virtualization concepts” on page 105.

Creating logical volumes on the DS8000 works through the following constructs (see the sketch after this list):
- Storage DDMs are installed into predefined array sites.
- Array sites are used to form arrays, which are structured as Redundant Array of Independent Disks (RAID) 5, RAID 6, or RAID 10. (Restrictions apply for SSDs. For more information, see “RAID configurations” on page 84.)
- RAID arrays become members of a rank.
- Each rank becomes a member of an Extent Pool. Each Extent Pool has an affinity to either server 0 or server 1 (CEC0 or CEC1). Each Extent Pool is either open system fixed block (FB) or System z count key data (CKD).
- Within each Extent Pool, logical volumes are created. For open systems, these logical volumes are called logical unit numbers (LUNs); LUNs are used for SCSI addressing. For System z, these logical volumes are called volumes. Each logical volume belongs to a logical subsystem (LSS).
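The following Python sketch models this hierarchy as a data structure. The class and attribute names are invented for illustration; they are not DS8000 microcode or DSCLI objects:

# A minimal sketch of the DS8000 logical hierarchy described above.
# Invented names; not DS8000 microcode or DSCLI objects.

from dataclasses import dataclass, field

@dataclass
class Array:
    site: str            # predefined array site, for example "S1"
    raid_type: int       # 5, 6, or 10

@dataclass
class Rank:
    array: Array

@dataclass
class ExtentPool:
    server: int          # affinity: 0 (CEC0) or 1 (CEC1)
    storage_type: str    # "FB" (open systems) or "CKD" (System z)
    ranks: list = field(default_factory=list)
    volumes: list = field(default_factory=list)

    def create_volume(self, name):
        # Volumes (LUNs for FB, CKD volumes for System z) are carved
        # from the extents that the pool's ranks contribute.
        self.volumes.append(name)

pool = ExtentPool(server=0, storage_type="FB")
pool.ranks.append(Rank(Array(site="S1", raid_type=5)))
pool.create_volume("LUN 0x1000")   # an even-LSS volume with CEC0 affinity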

4.3.1 Dual operational
Regarding processing host data, one of the basic premises of RAS is that the DS8000 always tries to maintain two copies of the data while it is moving through the storage system. The CECs have two areas of their primary memory that are used for holding host data: cache memory and NVS. NVS is an area of the system RAM that is persistent across a server reboot.

Important: For the previous generations of DS8000, the maximum available NVS was 6 GB per server. For the DS8870, the maximum was increased to 16 GB per server.

When a host operating system issues a write to a logical volume, the DS8000 host adapter directs that write to the CEC that owns the LSS of which that logical volume is a member. It is important to remember that LSSs that have an even identifying number have an affinity with CEC0, and LSSs that have an odd identifying number have an affinity with CEC1. For open systems, the LSS membership is only significant for Copy Services. But for System z, the LSS is the logical control unit (LCU), which equates to a 3990 (a System z disk controller, which the DS8000 emulates).

When a write is issued to a volume and both CECs are operational, the write data is directed to the CEC that owns the volume. The data flow begins with the write data being placed into the cache memory of the owning CEC. The write data is also placed into the NVS of the other CEC. The NVS copy of the write data is accessed only if a write failure should occur and the cache memory is empty or possibly invalid; otherwise, it is discarded after the destaging is complete. The location of write data with both CECs operational is shown in Figure 4-3.

Figure 4-3   Write data when CECs are dual operational (CEC0 holds cache memory for even-numbered LSSs and NVS for odd-numbered LSSs; CEC1 holds cache memory for odd-numbered LSSs and NVS for even-numbered LSSs)

Figure 4-3 on page 70 shows how the cache memory of CEC0 is used for all logical volumes that are members of the even LSSs. Likewise, the cache memory of CEC1 supports all logical volumes that are members of odd LSSs. For every write that is placed into cache, a copy is placed into the NVS memory that is in the alternate CEC. Thus, the following normal flow of data for a write when both CECs are operational is used (a conceptual sketch of this flow and of failover follows Figure 4-4):
1. Data is written to cache memory in the owning CEC. At the same time, data is written to NVS memory of the alternate CEC.
2. The write operation is reported to the attached host as completed.
3. The write data is destaged from the cache memory to a disk array.
4. The write data is discarded from the NVS memory of the alternate CEC.

Under normal operation, both DS8000 CECs are actively processing I/O requests. The following sections describe the failover and failback procedures that occur between the CECs when an abnormal condition affects one of them.

4.3.2 Failover
In the example that is shown in Figure 4-4, CEC0 failed. CEC1 needs to take over all of the CEC0 functions. Because the RAID arrays are on Fibre Channel Loops that reach both CECs, they can still be accessed through the Device Adapters that are owned by CEC1. For more information about the Fibre Channel Loops, see 4.6.1, “RAID configurations” on page 84.

Figure 4-4   CEC0 failover to CEC1 (the NVS and cache of CEC1 are split to serve both the even- and odd-numbered LSSs)
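The dual-write placement from Figure 4-3 and the takeover from Figure 4-4 can be summarized in a few lines of Python. This is a conceptual sketch only; the names are invented and the destage is modeled crudely, so it does not represent DS8870 microcode:

# Conceptual sketch of the dual-CEC write flow and failover.
# Invented names; not DS8870 microcode.

class CEC:
    def __init__(self, name):
        self.name = name
        self.cache = {}   # volatile copy on the owning CEC
        self.nvs = {}     # persistent backup of the *other* CEC's writes

def host_write(lss, data, cecs):
    owner = cecs[lss % 2]             # even LSS -> CEC0, odd LSS -> CEC1
    alternate = cecs[1 - (lss % 2)]
    owner.cache[lss] = data           # copy 1: cache in the owning CEC
    alternate.nvs[lss] = data         # copy 2: NVS in the alternate CEC
    return "write complete"           # reported to the host at this point

def failover(failed, survivor):
    failed.cache.clear()              # the failed CEC's copies are gone
    failed.nvs.clear()
    # The survivor destages its NVS (its copy of the failed CEC's writes)
    # and then services all LSSs, splitting its own NVS between them.
    survivor.cache.update(survivor.nvs)
    survivor.nvs.clear()

cec0, cec1 = CEC("CEC0"), CEC("CEC1")
host_write(lss=2, data="even write", cecs=[cec0, cec1])
host_write(lss=3, data="odd write", cecs=[cec0, cec1])
failover(failed=cec0, survivor=cec1)  # CEC1 now serves all LSSs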

At the moment of failure, CEC1 includes a backup copy of the CEC0 write data in its own NVS. From a data integrity perspective, the concern is for the backup copy of the CEC1 write data, which was in the NVS of CEC0 when it failed. Because the DS8870 now has only one copy of that data (active in the cache memory of CEC1), it performs the following steps:
1. CEC1 destages the contents of its NVS (the CEC0 write data) to the disk subsystem. However, before the actual destage and at the beginning of the failover, the following tasks occur:
   a. The working CEC starts by preserving the data in cache that was backed by the failed CEC NVS. If a reboot of the single working CEC occurs before the cache data is destaged, the write data remains available for subsequent destaging.
   b. The existing data in cache (for which there is still only a single volatile copy) is added to the NVS so that it remains available if the attempt to destage fails or a server reboot occurs. This functionality is limited so that it cannot consume more than 85% of NVS space.
2. The NVS and cache of CEC1 are divided in two, half for the odd LSSs and half for the even LSSs.
3. CEC1 begins processing the I/O for all the LSSs, taking over for CEC0.

This entire process is known as a failover. After failover, the DS8000 operates as shown in Figure 4-4 on page 71. CEC1 now owns all the LSSs, which means all reads and writes are serviced by CEC1. The NVS inside CEC1 is now used for odd and even LSSs. The entire failover process should be transparent to the attached hosts.

The DS8000 can continue to operate in this state indefinitely. There is no loss of functionality, but there is a loss of redundancy. Any critical failure in the working CEC renders the DS8000 unable to serve I/O for the arrays, so the IBM support team should begin work immediately to determine the scope of the failure and to build an action plan to restore the failed CEC to an operational state.

In general, recovery actions (failover or failback) on the DS8000 do not affect I/O operation latency by more than 15 seconds. With certain limitations on configurations and advanced functions, this effect on latency is often limited to just 8 seconds or less. If you have real-time response requirements in this area, contact IBM to determine the latest information about how to manage your storage to meet your requirements.

4.3.3 Failback
The failback process always begins automatically when the DS8000 microcode determines that the failed CEC resumed to an operational state. If the failure was relatively minor and recoverable by the operating system or DS8000 microcode, the resume action is initiated by the software. If there was a service action with hardware components replaced, the IBM service representative or remote support engineer restarts the failed CEC.

For this example in which CEC0 failed, we should now assume that CEC0 was repaired and resumed. The failback begins with CEC1 starting to use the NVS in CEC0 again, and the ownership of the even LSSs being transferred back to CEC0. Normal I/O processing, with both CECs operational, then resumes. Just like the failover process, the failback process is transparent to the attached hosts.

4.3.4 NVS and power outages
During normal operation, the DS8000 preserves write data by storing a duplicate in the NVS of the alternate CEC. To ensure that this write data is not lost because of a power event, the DS8870 contains Battery Service Module (BSM) sets. The single purpose of the BSM sets is to preserve the NVS area of CEC memory in the event of a complete loss of input power to the DS8870. The design is not to move the data from NVS to the disk arrays. Instead, each CEC features dual internal disks that are available to store the contents of NVS. BSM sets keep the CECs and I/O enclosures operable long enough to write the NVS contents to the internal CEC hard disks.

If power is lost to a single DC-UPS, the ability of the other DC-UPS to keep the DS8870 running properly is not affected. Should any frame lose AC input (known as wall power or line power) to both DC-UPSs, the CECs are informed that they are running on batteries and, in case of continuous power unavailability for 4 seconds, they begin a shutdown procedure. This configuration is known as an on-battery condition. It is during this shutdown that the entire contents of NVS memory are written to the CEC hard disk drives so that the data will be available for destaging after the CECs are operational again.

Important: Unless the extended power line disturbance feature (ePLD) was purchased, BSM sets guarantee storage disk operation for up to 4 seconds in case of a power outage; after this period, the shutdown begins. The ePLD feature can be ordered so that disk operation can be maintained for 50 seconds after a power disruption.

If all the batteries were to fail (which is unlikely because the batteries are in an N+1 redundant configuration), the DS8870 would lose this NVS protection and would take all CECs offline because reliability and availability of host data are compromised.

The following sections describe the steps that are used in the event of a complete power interruption.

Power loss
When an on-battery condition shutdown begins, the following events occur (a sketch of both sequences follows the Power restored steps):
1. All host adapter I/O is blocked.
2. Each CEC begins copying its NVS data to internal disk (not the storage DDMs). For each CEC, two copies are made of the NVS data.
3. When the copy process is complete, each CEC shuts down.
4. When shutdown in each CEC is complete, the DS8000 is powered down.

Power restored
When power is restored to a DS8000 model, the following events occur:
1. The CECs power on and perform power-on self tests and PHYP functions.
2. Each CEC begins IML.
3. At a certain stage in the boot process, the CEC detects NVS data on its internal disks and begins to destage it to the storage DDMs.
4. When the battery units reach a certain level of charge, the CECs come online and begin to process host I/O.
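The two sequences can be pictured with a short sketch. Again, this is purely illustrative Python with invented names, not DS8870 firmware:

# Conceptual sketch of the on-battery NVS preservation sequence.
# Invented names; not DS8870 firmware.

class CEC:
    def __init__(self, name, nvs_data):
        self.name = name
        self.nvs_data = nvs_data          # modified data not yet destaged
        self.internal_disk_copies = []

    def copy_nvs_to_internal_disks(self, copies=2):
        # Two copies go to the CEC's internal disks, not the storage DDMs.
        self.internal_disk_copies = [list(self.nvs_data)] * copies

    def destage_preserved_nvs(self):
        if self.internal_disk_copies:
            print(f"{self.name}: destaging preserved NVS to storage DDMs")
            self.internal_disk_copies = []

def on_battery_shutdown(cecs):
    print("blocking all host adapter I/O")
    for cec in cecs:
        cec.copy_nvs_to_internal_disks()
        print(f"{cec.name}: shutting down")
    print("frame powered down")

def on_power_restored(cecs):
    for cec in cecs:
        print(f"{cec.name}: power-on self test, then IML")
        cec.destage_preserved_nvs()
    print("host I/O resumes once the batteries can cover one more outage")

cecs = [CEC("CEC0", ["write A"]), CEC("CEC1", ["write B"])]
on_battery_shutdown(cecs)
on_power_restored(cecs)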

Battery charging
In many cases, sufficient charging occurs during the power-on self test, operating system boot, and microcode boot. However, if a complete discharge of the batteries occurred (which can happen if multiple power outages occur in a short period), recharging might take up to two hours.

Important: The CECs do not come online (process host I/O) until the batteries are sufficiently charged to handle at least one outage.

4.4 Data flow in DS8870
One of the significant hardware changes for the DS8700 and DS8800 generation was in how host I/O was brought into the storage unit. Connectivity between the CEC and the I/O enclosures, which house the device adapters and host adapters, was improved by using the many strengths of the PCI Express architecture. The DS8870 continues this design.

4.4.1 I/O enclosures
The DS8870 I/O enclosure is a design that was introduced in the DS8700. The older DS8000 I/O enclosure consisted of multiple parts that required removal of the bay and disassembly for service. In later generations, the I/O enclosures use hot-swap adapters with PCI Express connections. These adapters are replaceable concurrently. Each slot can be independently powered off for concurrent replacement of a failed adapter, installation of a new adapter, or removal of an old one. In addition, the switch card can be replaced without removing the I/O adapters, which reduces the time and effort that is needed to service the I/O enclosure. For more information, see 3.2.2, “Peripheral Component Interconnect Express adapters” on page 40.

As shown in Figure 4-2 on page 68, each CEC is connected to all four I/O enclosures (base frame) or all eight I/O enclosures (expansion frame installed) through PCI Express cables. This configuration makes each I/O enclosure an extension of each server. Each I/O enclosure has N+1 power and cooling in the form of two power supplies with integrated fans. The power supplies can be concurrently replaced, and a single power supply is capable of supplying DC power to the whole I/O enclosure.

4.4.2 Host connections
Each DS8870 Fibre Channel host adapter provides four or eight ports for connection directly to a host or to a Fibre Channel SAN switch.

Single or multiple path
In DS8870, the host adapters are shared between the CECs. To illustrate this concept, Figure 4-5 shows a potential machine configuration. In this example, two I/O enclosures are shown, and each I/O enclosure has a pair of Fibre Channel host adapters. If a host has only a single path to a DS8870, as shown in Figure 4-5, it is still able to access volumes that belong to all LSSs because the host adapter (HA) directs the I/O to the correct CEC. However, if an error occurs on the host adapter (HA), host port (HP), or I/O enclosure, or in the storage area network (SAN), all connectivity would be lost. The same is true for the host bus adapter (HBA) in the attached host, making it a single point of failure as well.

Figure 4-5   A single-path host connection (a single-pathed host HBA reaching one host adapter; CEC0 owns all even LSS logical volumes and CEC1 owns all odd LSS logical volumes, connected through I/O enclosures 2 and 3 by PCI Express x4)

A more robust design is shown in Figure 4-6, in which the host is attached to separate Fibre Channel host adapters in separate I/O enclosures. This configuration allows host I/O to survive a hardware failure on any component on either path. This configuration also is important because, during a microcode update, a host adapter port might need to be taken offline.

Figure 4-6   A dual-path host connection

Important: Best practice for host connectivity is that hosts that access the DS8870 have at least two connections to host ports on separate host adapters in separate I/O enclosures.

SAN/FICON switches
Because many hosts can be connected to the DS8870, each using multiple paths, the number of host adapter ports that are available in the DS8870 might not be sufficient to accommodate all of the connections. The solution to this problem is the use of SAN switches or directors to switch logical connections from multiple hosts. In a System z environment, you need to select a SAN switch or director that also supports FICON.

A logic or power failure in a switch or director can interrupt communication between hosts and the DS8870. Provide more than one switch or director to ensure continued availability. Ports from two separate host adapters in two separate I/O enclosures should be configured to go through each of two directors. The complete failure of either director leaves half the paths still operating.

Using channel extension or FICON extension technology
For Copy Services scenarios in which single-mode fibre distance limits are exceeded, use of channel extension or FICON extension technology is required. The following site contains information about network devices that are marketed by IBM and other companies to extend Fibre Channel communication distances. They can be used with DS8000 Series Metro Mirror, Global Copy, Global Mirror, Metro/Global Mirror (MGM), and z/OS Global Mirror:
http://www.ibm.com/support/docview.wss?uid=ssg1S7003277

Support for T10 Data Integrity Field (DIF) standard
One of the firmware enhancements that the DS8870 incorporates, regarding end-to-end data integrity through the SAN, is the ANSI T10 Data Integrity Field (DIF) standard for FB volumes that are accessed by the FCP channel of Linux on System z. Until now, it was only possible to ensure data integrity within the disk system with error correction code (ECC). T10 DIF can now check end-to-end data integrity through the SAN. When data is read, the DIF is checked before leaving the DS8870 and again when received by the host system. Checking is done by hardware, so there is no performance impact. For more information about T10 DIF implementation in the DS8870, see “T10 Data Integrity Field support” on page 115.

Multipathing software
Each attached host operating system requires a mechanism to allow it to manage multiple paths to the same device. If a failure occurs in the data path between the host and the DS8870, the attached host must have a mechanism to allow it to detect that one path is gone and route all I/O requests for those logical devices to an alternative path, and preferably to load balance these requests. Also, when a failure occurs on one redundant path, it should be able to detect when the path is restored so that the I/O can again be load-balanced. The mechanism that is used varies by attached host operating system and environment, as described in the following sections. For more information about the multipathing software that might be required, see the IBM System Storage Interoperability Center (SSIC) at this website:
http://www.ibm.com/systems/support/storage/config/ssic/index.jsp
For more information, see IBM System Storage DS8000 Host Systems Attachment Guide, SC26-7917.

Open systems and SDD
In most open systems environments, the Subsystem Device Driver (SDD) is useful to manage path failover and preferred path determination. SDD is a software product that IBM supplies as an option with the DS8870 at no additional fee. SDD provides availability through automatic I/O path failover. If a failure occurs in the data path between the host and the DS8870, SDD automatically switches the I/O to another path. SDD also automatically sets the failed path back online after a repair is made. SDD also improves performance by sharing I/O operations to a common disk over multiple active paths to distribute and balance the I/O workload.

SDD is not available for every supported operating system. For multipathing under Microsoft Windows, the Subsystem Device Driver Device Specific Module (SDDDSM) is available. For the AIX operating system, there is also the Subsystem Device Driver Path Control Module (SDDPCM) for multipathing with IBM Storage devices. For more information about the SDD, see IBM System Storage DS8000: Host Attachment and Interoperability, SG24-8887.
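The failover and load-balancing behavior that SDD provides can be pictured with a small sketch. The following Python fragment is a conceptual illustration only; the class and method names are invented and do not correspond to the SDD implementation:

# Conceptual sketch of multipath failover and round-robin load
# balancing, in the spirit of SDD. Invented names; not the SDD code.

import itertools

class MultipathDevice:
    def __init__(self, paths):
        self.paths = {p: "online" for p in paths}
        self._rr = itertools.count()

    def submit_io(self, request):
        active = [p for p, state in self.paths.items() if state == "online"]
        if not active:
            raise IOError("no paths left to the logical device")
        # Round-robin over the active paths balances the I/O workload.
        path = active[next(self._rr) % len(active)]
        return f"{request} routed via {path}"

    def path_failed(self, path):
        self.paths[path] = "offline"   # future I/O avoids this path

    def path_repaired(self, path):
        self.paths[path] = "online"    # load balancing resumes on it

dev = MultipathDevice(["HA1/HP0", "HA2/HP0"])
print(dev.submit_io("write LBA 100"))
dev.path_failed("HA1/HP0")             # I/O continues on the other path
print(dev.submit_io("write LBA 101"))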

System z
In the System z environment, normal practice is to provide multiple paths from each host to a disk system. Typically, four paths are installed. The channels in each host that can access each logical control unit (LCU) in the DS8870 are defined in the hardware configuration definition (HCD) or I/O configuration data set (IOCDS) for that host. Dynamic Path Selection (DPS) allows the channel subsystem to select any available (non-busy) path to initiate an operation to the disk subsystem. Dynamic Path Reconnect (DPR) allows the DS8870 to select any available path to a host to reconnect and resume a disconnected operation, for example, to transfer data after disconnection because of a cache miss. These functions are part of the System z architecture and are managed by the channel subsystem on the host and the DS8870.

A physical FICON path is established when the DS8870 port sees light on the fiber, for example, when a cable is plugged in to a DS8870 host adapter, a processor or the DS8870 is powered on, or a path is configured online by z/OS. Logical paths are then established through the port between the host and some or all of the LCUs in the DS8870, controlled by the HCD definition for that host. This configuration happens for each physical path between a System z CPU and the DS8870. There can be multiple system images in a CPU, and logical paths are established for each system image. The DS8870 then knows which paths can be used to communicate between each LCU and each host.

Control Unit Initiated Reconfiguration
Control Unit Initiated Reconfiguration (CUIR) prevents loss of access to volumes in System z environments because of incorrect path handling. This function automates channel path management in System z environments in support of selected DS8870 service actions. CUIR is available for the DS8870 when operated in the z/OS and IBM z/VM® environments.

CUIR provides automatic channel path vary on and vary off actions to minimize manual operator intervention during selected DS8870 service actions. CUIR also allows the DS8870 to request that all attached system images set all paths that are required for a particular service action to the offline state. System images with the appropriate level of software support respond to such requests by varying off the affected paths, and either notifying the DS8870 subsystem that the paths are offline, or that it cannot take the paths offline. CUIR reduces manual operator intervention and the possibility of human error during maintenance actions, and reduces the time that is required for the maintenance. This ability is useful in environments in which there are many z/OS or z/VM systems that are attached to a DS8870.

4.4.3 Metadata checks
When application data enters the DS8870, special codes or metadata, also known as redundancy checks, are appended to that data. This metadata remains associated with the application data as it is transferred throughout the DS8870. The metadata is checked by various internal components to validate the integrity of the data as it moves throughout the disk system. It is also checked by the DS8870 before the data is sent to the host in response to a read I/O request. The metadata also contains information that is used as an additional level of verification to confirm that the data that is returned to the host is coming from the wanted location on the disk. Figure 4-7 on page 79 shows metadata along the different stages of the virtualization process. For more information about logical configuration and virtualization, see Chapter 5, “Virtualization concepts” on page 105.

Figure 4-7   Metadata and virtualization process


Since the previous version of the DS8000 was introduced, metadata size was increased to support future functionality. For more information about raw and net storage capacities, see “Capacity Magic” on page 518.

4.5 RAS on the HMC
The HMC is used to configure, manage, and maintain the DS8870. One HMC (the primary) is included in every DS8870 base frame. A second HMC (the secondary) can be ordered, and it is located external to the DS8870. Generally, you should order two management consoles to act as a redundant pair. If the HMC is not operational, it is impossible to perform maintenance, power the DS8870 up or down, perform modifications to the logical configuration, or perform Copy Services tasks, such as the establishment of FlashCopies by using the DSCLI or DS GUI.

The DS8870 HMCs work with IPv4, IPv6, or a combination of both IP standards. For more information about the HMC and network connections, see 9.1.1, “Storage HMC hardware” on page 252 and 8.3, “Network connectivity planning” on page 239.

In the DS8870, there is an orientation change in the way that CECs are serviced compared to previous generations. The HMC can move from the standard service position to a new alternate service position, as shown in Figure 4-8.

Figure 4-8   HMC standard service position (left) and HMC alternate service position (right)

DVD: Read-only media and DVD drives are still included, but the DS8870 does not use DVD-RAM media. SDHC media is used instead. SDHC is not bootable and is used to offload data collections or to save physical configurations when a system is discontinued.

4.5.1 Microcode updates
The DS8870 contains many discrete redundant components. As IBM continues to develop and improve the DS8870, new releases of firmware and licensed machine code become available that offer improvements in function and reliability. Most of the following components have firmware that can be updated:
- Direct-current uninterruptible power supply (DC-UPS)
- Host adapters
- Fibre Channel Interface Control cards (FCIC)
- Device adapters
- DDMs

DS8870 CECs have an operating system (AIX) and Licensed Machine Code (LMC) that can be updated.

Concurrent code updates
The architecture of the DS8870 allows for concurrent code updates. This ability is achieved by using the redundant design of the DS8870. In general, redundancy is lost for a short period as each component in a redundant pair is updated. The power subsystem was improved regarding power firmware updates, and some elements are updated while power redundancy is not lost; for more information, see 4.7, “RAS on the power subsystem” on page 93. For more information about microcode updates, see Chapter 15, “Licensed machine code” on page 435.

4.5.2 Call Home and Remote Support
Call Home is the capability of the HMC to contact IBM support services to report a problem, which is referred to as Call Home for service. The HMC also communicates machine-reported product data (MRPD) to IBM by way of the Call Home facility. IBM Service personnel outside of the client facility log in to the HMC to provide remote service and support. For more information about remote support and the Call Home option, see Chapter 17, “Remote support” on page 465.

Users can enable Service Event Notification via email, which means that any Call Home made by the HMC also notifies the email address that was previously configured through the HMC GUI. Figure 4-9 shows the HMC GUI option in which the email address can be configured.

Figure 4-9   Manage Serviceable Event Notification HMC GUI panel

Example 4-1 shows one notification that was received via email.

Example 4-1   Service Event notification via e-mail

REPORTING SF MTMS:    2107-961*75ZA180
REPORTING SF LPAR:    SF75ZA180ESS11
PROBLEM NUMBER:       155
PROBLEM TIMESTAMP:    Oct 10, 2012 10:11:12 AM CEST
REFERENCE CODE:       BE3400AA   <----- System Reference Code (SRC)

************************** START OF NOTE LOG **************************
BASE RACK ORDERED MTMS 2421-961*75ZA180
LOCAL HMC MTMS 4242BC5*R9K1VXK          HMC ROLE Primary
LOCAL HMC INBOUND MODE Attended         MODEM PHONE Unavailable
LOCAL HMC INBOUND CONFIG Continuous     LOCAL HMC OUTBOUND CONFIG VPN only  FTP: enabled
REMOTE HMC Single HMC
HMC WEBSM VERSION 5.3.0
HMC CE default  HMC REMOTE default  HMC PE default  HMC DEVELOPER default
2107 BUNDLE 87.0.155.0
HMC DRIVER 20120723.1
LMC LEVEL Unavailable
FIRMWARE LEVEL SRV0 01AL74094  SRV1 01AL74094
PARTITION NAME SF75ZA180ESS11
PARTITION HOST NAME SF75ZA180ESS11
PARTITION STATUS SFI 2107-961*75ZA181  SVR 8205-E6C*109835R  LPAR SF75ZA180ESS11  STATE = AVAILABLE
FIRST REPORTED TIME Oct 10, 2012 10:11:12 AM CEST
LAST REPORTED TIME Oct 10, 2012 10:11:12 AM CEST
CALL HOME RETRY #0 of 12 on Oct 10, 2012 10:11:14 AM CEST
BE3400AA
SERVICEABLE EVENT TEXT
DDM format operation failed.   <----- SRC description
FRU group HIGH  FRU class FRU
FRU Part Number 45W7457  FRU CCIN I60B
Serial Number 504BC7C3CC0D
Location Code U2107.D02.BG079WL-P1-D3
Previous PMH N/A  Prev ProbNum N/A  PrevRep Data N/A
*************************** END OF NOTE LOG ***************************

4.6 RAS on the disk system
The DS8870 was designed to safely store and retrieve large amounts of data. RAID is an industry-wide implementation of methods to store data on multiple physical disks to enhance the availability of that data. There are many variants of RAID in use today. The DS8870 supports RAID 5, RAID 6, and RAID 10. It does not support the non-RAID configuration of disks that is better known as JBOD (just a bunch of disks).

4.6.1 RAID configurations
The following RAID configurations are possible for the DS8870:
- 6+P RAID 5 configuration: The array consists of six data drives and one parity drive. The remaining drive on the array site is used as a spare.
- 7+P RAID 5 configuration: The array consists of seven data drives and one parity drive.
- 5+P+Q RAID 6 configuration: The array consists of five data drives and two parity drives. The remaining drive on the array site is used as a spare.
- 6+P+Q RAID 6 configuration: The array consists of six data drives and two parity drives.
- 3+3 RAID 10 configuration: The array consists of three data drives that are mirrored to three copy drives. Two drives on the array site are used as spares.
- 4+4 RAID 10 configuration: The array consists of four data drives that are mirrored to four copy drives.

+P indicator: The indicator +P does not mean that a single drive is dedicated to holding the parity bits for the RAID. The DS8870 uses floating parity technology such that no one drive is always involved in every write operation. The data and parity bits float between the seven or eight drives to provide optimum write performance.

No JBOD support: The DS8870 models do not include support for JBOD. There must be enough spare space to reconfigure arrays, and this reconfiguration can be done only during downtime. For example, if an online DS8870 storage is 95% loaded with RAID 6 arrays, it is not possible to complete an online reconfiguration to turn a RAID 6 array into a RAID 5 array.

For more information about the effective capacity of these configurations, see Table 8-8 on page 247. An updated version of Capacity Magic (see “Capacity Magic” on page 518) helps you to determine the raw and net storage capacities and the numbers for the required extents for each available type of RAID.
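As a rough worked example (an approximation that ignores drive formatting, spares, and extent rounding; Table 8-8 has the authoritative numbers), the usable capacity of an array can be estimated from its number of data drives:

# Rough usable-capacity estimate per array, ignoring formatting overhead
# and extent rounding. Authoritative values are in Table 8-8.

def usable_capacity_gb(drive_gb, data_drives):
    return drive_gb * data_drives

# 600 GB drives in the common 8-drive array site configurations:
print(usable_capacity_gb(600, 6))   # 6+P   RAID 5  -> 3600 GB
print(usable_capacity_gb(600, 7))   # 7+P   RAID 5  -> 4200 GB
print(usable_capacity_gb(600, 5))   # 5+P+Q RAID 6  -> 3000 GB
print(usable_capacity_gb(600, 4))   # 4+4   RAID 10 -> 2400 GB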

Important restrictions: The following restrictions apply:
- Nearline SAS drives are not supported in RAID 5 and RAID 10 configurations.
- SSD drive sets are not supported in RAID 6 or RAID 10 configurations.
The RPQ/SCORE process can be used to submit requests for RAID 10 configurations for SSD and Nearline devices. For more information, see the Storage Customer Opportunity REquest (SCORE) system page at this website:
http://iprod.tucson.ibm.com/systems/support/storage/ssic/interoperability.wss
This information is subject to change. Consult with your IBM Service Representative for the latest information about supported RAID configurations.

4.6.2 Disk path redundancy
Each DDM in the DS8870 is attached to two Fibre Channel switches. These switches are built into the disk enclosure controller cards. Figure 4-10 shows the redundancy features of the DS8870 switched Fibre Channel disk architecture.

Figure 4-10 Switched disk path connections (diagram: the disk drive modules on a storage enclosure backplane attach to two FC-AL switches, which connect to the device adapters in CEC 0 and CEC 1 and to the next storage enclosures in the dual switched loops)

Each disk has two separate connections to the backplane. This configuration allows the disk to be simultaneously attached to both FC switches. If either disk enclosure controller card is removed from the enclosure, the switch that is included in that card is also removed. However, the FC switch in the remaining controller card retains the ability to communicate with all the disks and both device adapters (DAs) in a pair. Equally, each DA has a path to each switch, so it also can tolerate the loss of a single path. If both paths from one DA fail, it cannot access the switches; however, the partner DA retains connection.

Figure 4-10 on page 85 also shows the connection paths to the neighboring Storage Enclosures. Because expansion is done in this linear fashion, adding enclosures is nondisruptive. For more information about the disk subsystem of the DS8870, see 3.4, "Disk subsystem" on page 48.

4.6.3 Predictive Failure Analysis
The storage drives that are used in the DS8870 incorporate Predictive Failure Analysis (PFA) and can anticipate certain forms of failures by keeping internal statistics of read and write errors. If the error rates exceed predetermined threshold values, the drive is nominated for replacement. Because the drive has not yet failed, data can be copied directly to a spare drive by using the technique that is described in 4.6.5, "Smart Rebuild" on page 86. This copy ability avoids the use of RAID recovery to reconstruct all of the data onto the spare drive.

4.6.4 Disk scrubbing
The DS8870 periodically reads all sectors on a disk. This reading is designed to occur without any interference with application performance. If error correcting code (ECC)-correctable bad bits are identified, the bits are corrected immediately by the DS8870. This ability reduces the possibility of multiple bad bits accumulating in a sector beyond the ability of ECC to correct them. If a sector contains data that is beyond ECC's ability to correct, RAID is used to regenerate the data and write a new copy onto a spare sector of the disk. This scrubbing process applies to array members and spare DDMs.

4.6.5 Smart Rebuild
Smart Rebuild is a feature that is designed to help reduce the possibility of secondary failures and data loss in RAID arrays. It can be used to rebuild a RAID 5 array when certain disk errors occur and a normal determination is made that it is time to use a spare to replace a failing disk drive. If the suspect disk is still available for I/O, it is kept in the array rather than being rejected as under a standard rebuild. A spare is brought into the array at the same time. The suspect disk drive and the new spare are set up in a temporary RAID 1, allowing the troubled drive to be duplicated onto the spare rather than performing a full RAID reconstruction from data and parity. The new spare is then made a regular member of the array and the suspect disk can be removed from the RAID array. The array never goes through an n-1 stage in which it would be exposed to complete failure if another drive in this array encounters errors. The result is a substantial time savings and a new level of availability that is not found in other RAID products.

Smart Rebuild is not applicable in all situations, so it is not guaranteed to be used. If there are two drives with errors in a RAID 6 configuration, or if the drive mechanism failed to the point that it cannot accept any I/O, then the standard rebuild procedure is used for the RAID. If communications across a drive fabric are compromised, such as a loop error that causes drives to be bypassed, then standard rebuild procedures are used because the suspect drive is not available for a one-to-one copy with a spare. If Smart Rebuild is not possible or would not provide the designed benefits, a standard RAID rebuild occurs.

Smart Rebuild enhancements
Smart Rebuild is considerably improved in the DS8870. DDM error patterns are now continuously analyzed in real time as part of the normal tasks that are driven by the DS8870 microcode. When certain disk errors (following specific criteria) reach a determined threshold, the device adapter (DA) microcode component starts Smart Rebuild immediately. This enhanced technique leads to considerably better loop stability. A fast response in fixing specific DDM errors is often vital to avoid a second drive failure in the same disk array, and thus to avoid data loss. The possibility of having an exposed array is reduced by shortening the time between the appearance of a specific error threshold and the triggering of Smart Rebuild, as described in the following scenarios:
- Smart Rebuild can avoid the circumstance in which a suspect DDM is rejected, because the Smart Rebuild process is started before rejection. Therefore, Smart Rebuild prevents the array from going through a standard rebuild, during which it would be susceptible to a second drive failure while exposed.
- A specific DDM error threshold is detected by the DS8870 microcode immediately because the microcode continuously analyzes drive errors.
- The DA microcode component starts Smart Rebuild without delay after the Smart Rebuild threshold criteria are met. The process does not wait for an IBM Support representative to initiate Smart Rebuild, as it did in the previous Smart Rebuild implementation. IBM Support representatives still can start Smart Rebuild on their own when necessary; for instance, when the DDM error pattern does not reach the threshold but it is considered appropriate to start the rebuild.

4.6.6 RAID 5 overview
The DS8870 supports RAID 5 arrays. RAID 5 is a method of spreading volume data plus parity data across multiple disk drives. RAID 5 provides faster performance by striping data across a defined set of DDMs. Data protection is provided by the generation of parity information for every stripe of data. If an array member fails, its contents can be regenerated by using the parity data. The DS8870 uses the idea of floating parity, meaning that there is no one storage drive in an array that is dedicated to holding parity data, which would make such a drive active in every I/O operation. Instead, the drives in an array rotate between holding the storage data and holding the parity data, thus balancing out the activity level of all drives in the array.


RAID 5 implementation in DS8870
In a DS8870, a RAID 5 array that is built on one array site contains seven or eight disks, depending on whether the array site is supplying a spare. A seven-disk array effectively uses one disk for parity, so it is referred to as a 6+P array (where the P stands for parity). The reason only seven disks are available to a 6+P array is that the eighth disk in the array site that is used to build the array was used as a spare. We refer to this configuration as a 6+P+S array site (where the S stands for spare). An eight-disk array also effectively uses one disk for parity, so it is referred to as a 7+P array.

Drive failure with RAID 5
When a disk drive module fails in a RAID 5 array, the device adapter starts an operation to reconstruct the data that was on the failed drive onto one of the spare drives. The spare that is used is chosen based on a smart algorithm that looks at the location of the spares and the size and location of the failed DDM. The rebuild is performed by reading the corresponding data and parity in each stripe from the remaining drives in the array, performing an exclusive-OR operation to re-create the data, and then writing this data to the spare drive. While this data reconstruction is occurring, the device adapter can still service read and write requests to the array from the hosts. There might be some performance degradation while the sparing operation is in progress because some DA and switched network resources are used to complete the reconstruction. Because of the switch-based architecture, this effect is minimal. Also, any read requests for data on the failed drive require data to be read from the other drives in the array, and then the DA reconstructs the data. Performance of the RAID 5 array returns to normal when the data reconstruction onto the spare device completes. The time that is taken for sparing can vary, depending on the size of the failed DDM and the workload on the array, the switched network, and the DA. The use of arrays across loops (AAL) speeds up rebuild time and decreases the impact of a rebuild.
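The exclusive-OR (XOR) reconstruction can be illustrated with a deliberately small worked example (4-bit values instead of real stripe blocks; the principle is identical). For a three-drive data stripe with parity:

  D1 = 0110, D2 = 1010, D3 = 1100
  P  = D1 XOR D2 XOR D3 = 0000

If the drive holding D2 fails, its contents are recovered from the surviving members:

  D2 = D1 XOR D3 XOR P = 0110 XOR 1100 XOR 0000 = 1010

The same relation, extended to six or seven data blocks per stripe, is what the device adapter computes for every stripe during a RAID 5 rebuild.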


4.6.7 RAID 6 overview
The DS8870 supports RAID 6 protection. RAID 6 presents an efficient method of data protection in case of double disk errors, such as two drive failures, two coincident medium errors, or a drive failure and a medium error. RAID 6 protection provides more fault tolerance than RAID 5 in the case of disk failures and uses less raw disk capacity than RAID 10. RAID 6 allows for more fault tolerance by using a second independent distributed parity scheme (dual parity). Data is striped on a block level across a set of drives, similar to RAID 5 configurations, and a second set of parity is calculated and written across all the drives, as shown in Figure 4-11.

One stripe with 5 data drives (5+P+Q):

  Drive 1   Drive 2   Drive 3   Drive 4   Drive 5     P      Q
     0         1         2         3         4       P00    P01
     5         6         7         8         9       P10    P11
    10        11        12        13        14       P20    P21
    15        16        17        18        19       P30    P31
                                                            P41

P00 = 0+1+2+3+4; P10 = 5+6+7+8+9; ... (parity on block level across a set of drives)
P01 = 9+13+17+0; P11 = 14+18+1+5; ... (parity across all drives)
P41 = 4+8+12+16

NOTE: For illustrative purposes only; implementation details may vary.

Figure 4-11 Illustration of one RAID 6 stripe

RAID 6 is best used with large-capacity disk drives because those drives have a longer rebuild time, and longer rebuild times increase the possibility that a second DDM error occurs within the rebuild window. Comparing RAID 6 to RAID 5 performance gives about the same results on reads. For random writes, the throughput of a RAID 6 array is only about two thirds of that of a RAID 5 array because of the additional parity handling. Workload planning is especially important before RAID 6 is implemented for write-intensive applications, including Copy Services targets and FlashCopy SE repositories. Yet, when properly sized for the I/O demand, RAID 6 is a considerable reliability enhancement.

Important: In previous generations, the only possible configuration for Nearline drives was RAID 6. In the DS8870, RAID 10 also can be implemented on Nearline drives with an RPQ/SCORE. From a reliability viewpoint, RAID 6 is well-suited for large-capacity disk drives.
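The two-thirds figure follows from standard write-penalty arithmetic for small random writes (a general RAID property, not a DS8870-specific measurement):

  RAID 5: read old data + read old parity + write new data + write new parity = 4 disk operations
  RAID 6: read old data + read P + read Q + write new data + write new P + write new Q = 6 disk operations

With the same drives, a RAID 6 array therefore sustains about 4/6 = 2/3 of the random-write throughput of a RAID 5 array.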


RAID 6 implementation in the DS8870
A RAID 6 array in one array site of a DS8870 can be built on one of the following configurations: In a seven-disk array, two disks are always used for parity, and the eighth disk of the array site is needed as a spare. This type of RAID 6 array is referred to as a 5+P+Q+S array, where P and Q stand for parity and S stands for spare. A RAID 6 array, consisting of eight disks, is built when all necessary spare drives are available. An eight-disk RAID 6 array also always uses two disks for parity, so it is referred to as a 6+P+Q array.

Drive failure with RAID 6
When a DDM fails in a RAID 6 array, the DA starts to reconstruct the data of the failing drive onto one of the available spare drives. A smart algorithm determines the location of the spare drive to be used, depending on the size and the location of the failed DDM. After the spare drive replaces a failed one in a redundant array, the recalculation of the entire contents of the new drive is performed by reading the corresponding data and parity in each stripe from the remaining drives in the array and then writing this data to the spare drive. During the rebuild of the data on the new drive, the DA can still handle I/O requests of the connected hosts to the affected array. Performance degradation could occur during the reconstruction because DAs and switched network resources are used to do the rebuild. Because of the switch-based architecture of the DS8870, this effect is minimal. Additionally, any read requests for data on the failed drive require data to be read from the other drives in the array, and then the DA reconstructs the data. Any subsequent failure during the reconstruction within the same array (a second drive failure, two coincident medium errors, or a drive failure and a medium error) can be recovered without loss of data. Performance of the RAID 6 array returns to normal when the data reconstruction on the spare device has completed. The rebuild time varies, depending on the size of the failed DDM and the workload on the array and the DA. The completion time is comparable to a RAID 5 rebuild, but slower than rebuilding a RAID 10 array in the case of a single drive failure.

4.6.8 RAID 10 overview
RAID 10 provides high availability by combining features of RAID 0 and RAID 1. RAID 0 optimizes performance by striping volume data across multiple disk drives at a time. RAID 1 provides disk mirroring, which duplicates data between two disk drives. By combining the features of RAID 0 and RAID 1, RAID 10 provides a second optimization for fault tolerance. Data is striped across half of the disk drives in the RAID 1 array. The same data is also striped across the other half of the array, which creates a mirror. Access to data is preserved if one disk in each mirrored pair remains available. RAID 10 offers faster data reads and writes than RAID 5 because it does not need to manage parity. However, with half of the DDMs in the group used for data and the other half used to mirror that data, RAID 10 disk groups have less capacity than RAID 5 disk groups. RAID 10 is not as commonly used as RAID 5, mainly because more raw disk capacity is needed for every gigabyte of effective capacity. A typical area of operation for RAID 10 is workloads with a high random write ratio.
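The capacity difference is simple arithmetic. With eight 600 GB drives and no spares (illustrative raw numbers that ignore formatting overhead):

  RAID 10 (4+4): 4 x 600 GB = 2400 GB of effective capacity (50% of raw)
  RAID 5 (7+P):  7 x 600 GB = 4200 GB of effective capacity (87.5% of raw)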

RAID 10 implementation in DS8870
In the DS8870, the RAID 10 implementation is achieved by using six or eight DDMs. If spares need to be allocated on the array site, six DDMs are used to make a three-disk RAID 0 array, which is then mirrored. If spares do not need to be allocated, then eight DDMs are used to make a four-disk RAID 0 array, which is then mirrored.

Drive failure with RAID 10
When a DDM fails in a RAID 10 array, the DA starts an operation to reconstruct the data from the failed drive onto one of the hot spare drives. The spare that is used is chosen based on a smart algorithm that looks at the location of the spares and the size and location of the failed DDM. Remember that a RAID 10 array is effectively a RAID 0 array that is mirrored. Thus, when a drive fails in one of the RAID 0 arrays, the failed drive can be rebuilt by reading the data from the equivalent drive in the other RAID 0 array. While this data reconstruction is going on, the DA can still service read and write requests to the array from the hosts. There might be degradation in performance while the sparing operation is in progress because DA and switched network resources are used to do the reconstruction. Because of the switch-based architecture of the DS8870, this effect is minimal. Read requests for data on the failed drive should not be affected because they can all be directed to the good RAID 1 array. Write operations are not affected. Performance of the RAID 10 array returns to normal when the data reconstruction onto the spare device completes. The time that is taken for sparing can vary, depending on the size of the failed DDM and the workload on the array and the DA. Compared to RAID 5, RAID 10 sparing completion time is faster because rebuilding a RAID 5 6+P configuration requires six reads plus one parity operation for each write, whereas a RAID 10 3+3 configuration requires only one read and one write for each write (essentially, a direct copy).

Arrays across loops and RAID 10
The DS8870, as with previous generations, implements the concept of arrays across loops (AAL). With AAL, an array site is split into two halves. Half of the site is on the first disk loop of a DA pair and the other half is on the second disk loop of that DA pair. AAL is implemented primarily to maximize performance and it is used for all the RAID types in the DS8870. However, in RAID 10, we are able to take advantage of AAL to provide a higher level of redundancy. The DS8870 RAS code deliberately ensures that one RAID 0 array is maintained on each of the two loops that are created by a DA pair. This configuration means that in the unlikely event of a complete loop outage, the DS8870 does not lose access to the RAID 10 array. This access is not lost because when one RAID 0 array is offline, the other remains available to service disk I/O. Figure 3-16 on page 52 shows a diagram of this strategy.

4.6.9 Spare creation
When the arrays are created on a DS8870, the microcode determines which array sites contain spares. The first array sites on each DA pair that are assigned to arrays contribute one or two spares (depending on the RAID option) until the DA pair has access to at least four spares, with two spares placed on each loop. A minimum of one spare is created for each array site that is assigned to an array until the following conditions are met:
- There is a minimum of four spares per DA pair.
- There is a minimum of four spares for the largest capacity array site on the DA pair.
- There is a minimum of two spares of capacity and RPM greater than or equal to the fastest array site of any capacity on the DA pair.


Floating spares
The DS8870 implements a smart floating technique for spare DDMs. A floating spare works as follows: when a DDM fails, the data it contained is rebuilt onto a spare; then, when the failed disk is replaced, the replacement disk becomes the spare. The data is not migrated to another DDM, such as the DDM in the original position that the failed DDM occupied. The DS8870 microcode takes this idea one step further. It might choose to allow the hot spare to remain where it was moved, but it can instead choose to migrate the spare to a more optimum position. This migration is done to better balance the spares across the DA pairs, loops, and disk enclosures. It might be preferable for a DDM that is in use as an array member to be converted to a spare. In this case, the data on that DDM is migrated in the background onto an existing spare by using the technique that is described in 4.6.5, "Smart Rebuild" on page 86. This process does not fail the disk that is being migrated, though it does reduce the number of available spares in the DS8870 until the migration process is complete.

The DS8870 uses this smart floating technique so that the larger or higher RPM DDMs are allocated as spares, which guarantees that a spare can provide at least the same capacity and performance as the replaced drive. If the contents of a 450 GB DDM were rebuilt onto a 600 GB DDM, approximately one-fourth of the 600 GB DDM would be wasted because that space is not needed. When the failed 450 GB DDM is replaced with a new 450 GB DDM, the DS8870 microcode most likely migrates the data back onto the recently replaced 450 GB DDM. When this process completes, the 450 GB DDM rejoins the array and the 600 GB DDM becomes the spare again. Another example is the failure of a 146 GB 15K RPM DDM that is spared onto a 600 GB 10K RPM DDM. The data has moved to a slower DDM and wastes considerable space, and the array now has a mix of RPMs, which is not desirable. When the failed disk is replaced, the replacement is the same type as the failed 15K RPM disk. Again, a smart migration of the data is performed after suitable spares become available.

Hot pluggable DDMs
Replacement of a failed drive does not affect the operation of the DS8870 because the drives are fully hot pluggable. Each disk plugs into a switch, so there is no loop break associated with the removal or replacement of a disk. In addition, there is no potentially disruptive loop initialization process.

Overconfiguration of spares
The DDM sparing policies support the overconfiguration of spares. This possibility might be of interest to certain installations because it allows the repair of some DDM failures to be deferred until a later repair action is required. Because of spare overconfiguration, if there are one or several DDMs in the state Failed/Deferred Service (not Failed), the DDMs have failed but a repair action is not immediately required. If these DDMs were array members, sparing was initiated and there were sufficient spares at the time of the failure to allow the service to be deferred. You can use the following DSCLI command to determine whether repair actions can be deferred:
lsddm -state not_normal IBM.2107-75XXXXX


An example of where repair can be deferred is shown in Example 4-2.
Example 4-2   DSCLI lsddm command shows DDM state
dscli> lsddm -state not_normal IBM.2107-75ZA571
Date/Time: September 26, 2012 13:03:29 CEST IBM DSCLI Version: 7.7.0.566 DS: IBM.2107-75ZA571
ID                            DA Pair  dkcap (10^9B)  dkuse         arsite  State
===========================================================================================
IBM.2107-D02-0774H/R1-P1-D21  0        900.0          unconfigured  S3      Failed/Deferred Service

If immediate DDM repair for DDMs in the Failed/Deferred Service state is needed, an RPQ/SCORE process can be used to submit a request to disable DDM deferred service. For more information, see the Storage Customer Opportunity REquest (SCORE) system page at this website: http://iprod.tucson.ibm.com/systems/support/storage/ssic/interoperability.wss

4.7 RAS on the power subsystem
The power subsystem in the DS8870 was redesigned compared to the previous generation of the DS8000 series family. It offers higher energy efficiency, lower power loss, and improved reliability. The DS8870 base frame requires 20% less power than the DS8800. The former Primary Power Supply (PPS) is replaced by a Direct Current Uninterruptible Power Supply (DC-UPS). RPC cards also are improved. All power and cooling components that constitute the DS8870 power subsystem are fully redundant. The key elements that allow this high level of redundancy are the two DC-UPSs per rack, which provide 2N redundancy. With this configuration, DC-UPSs are duplicated in each rack so that one DC-UPS by itself provides enough power to all components inside the rack if the other DC-UPS becomes unavailable. As described in "Battery Service Module sets" on page 95, each DC-UPS has its own battery backup function. Therefore, the battery system in the DS8870 also has 2N redundancy. The battery of a single DC-UPS preserves NVS if there is a complete power outage (as described in 4.3.4, "NVS and power outages" on page 73). The CECs, I/O enclosures, disk enclosures, and primary HMC components inside the rack all feature duplicated power supplies. A smart internal power distribution connectivity makes it possible to maintain redundant power distribution on a single power cord. If one DC-UPS power cord is pulled (equivalent to a failure in one of the customer circuit breakers), the partner DC-UPS not only keeps the overall system powered, it feeds each internal redundant power supply inside the rack. For example, if a DC-UPS power cord is pulled, the two redundant power supplies of any CEC continue to be powered. This ability gives an extra level of reliability in the unusual case of failure of multiple power elements. In addition, the internal Ethernet switches and tray fans (which are used to provide extra cooling to the internal HMC) receive redundant voltage.


4.7.1 Components
This section describes the power subsystem components of the DS8870 from a RAS standpoint.

Direct Current Uninterruptible Power Supply
There are two DC-UPS units per rack for 2N redundancy. The DC-UPS is a built-in power converter that is capable of power monitoring and integrated battery functions. It distributes full wave rectified AC (or DC voltage from batteries) to Power Distribution Units (PDUs), which then provide that power to all the separate areas of the machine. If AC is not present at the input line, the output is switched to rectified AC from the partner DC-UPS. If neither AC input is active, the DC-UPS switches to DC battery power for up to 4 seconds (if ePLD was not ordered). Each DC-UPS has internal fans to supply cooling for that power supply. The DC-UPS supports three-phase (delta or wye) and single-phase input power. Input power to feed the DC-UPS must be configured with the phase selection jumpers at the rear of the rack frame. Figure 4-12 shows the phase selection options. Special care must be taken regarding the power cord because power cables are unique for delta, wye, or single phase; the appropriate power cables must be used. For information about power cord feature codes, see the IBM publication IBM System Storage DS8870 Introduction and Planning Guide, GC27-4209.

Figure 4-12 DC-UPS input phase selection label

All elements of the DC-UPS can be replaced concurrently with customer operations. Furthermore, BSM set replacement and DC-UPS fan assembly replacement are done while the corresponding Direct Current Supply Unit (DSU) remains operational.


The following important enhancements also are available:
- Improved DC-UPS data collection.
- During a DC-UPS firmware update, the current power state is maintained so that the DC-UPS remains operational during this service operation. Because of its dual firmware image design, dual power redundancy is maintained in all internal power supplies of all frames during a DC-UPS firmware update.

Each DC-UPS unit consists of one DSU and up to two BSM sets. Figure 3-17 on page 54 shows the DSU (rear view) and BSMs (front view).

Important: If you install a DS8870 so that both DC-UPSs are attached to the same circuit breaker or the same switchboard, the DS8870 is not well-protected from external power failures. This configuration is a common cause of unplanned outages.

Direct Current Supply Unit
Each DC-UPS has a DSU, which contains the intelligence of the DC-UPS and is where the images of the power firmware reside. Its design protects the DSU from failures during a power firmware update, avoiding physical intervention or hardware replacement, except in cases of a permanent hardware failure. A DSU contains the battery chargers that are dedicated to monitoring and charging all BSM sets that are installed in the DC-UPS. The DSU periodically performs a battery test to detect defective battery cells and to ensure that they are in good condition.

Battery Service Module sets
The BSM set provides backup power to the system when the input AC power is lost. Each DC-UPS supports one or two BSM sets. As standard, there is one BSM set in each DC-UPS. If the ePLD feature is ordered, one BSM set is added. As a result, each DC-UPS then has two BSM sets (for more information, see "Power line disturbance" on page 97). All racks in the system must have the same number of BSM sets, including expansion racks without I/O enclosures. A BSM set consists of four battery enclosures. Each of these single-battery enclosures is known as a Battery Service Module (BSM); a group of four BSMs (battery enclosures) makes up a BSM set. There are two types of BSMs: the master and the slave. The master BSM is the only BSM with a docking connector to the DSU, it contains the status LEDs, and it can be physically installed only in the top position. The other three BSMs are slaves. The DS8870 BSMs feature a planned working life of five to seven years.

Power distribution unit
The power distribution units (PDUs) are used to distribute power from the DC-UPSs to the power supplies in the disk enclosures, CECs, I/O enclosures, Ethernet switches, and HMC fans. In all racks, there are six PDUs. There are two different types of PDU. One type distributes power to the CECs and I/O enclosures and is present only in a base rack or first expansion rack. The other type provides power to the disk enclosures and is installed in all racks (base frame or any expansion rack). Each disk enclosure PDU supplies power to five to seven disk enclosures. Each disk enclosure power supply plugs into two separate PDUs, which are supplied from separate DC-UPSs. A PDU module can be replaced concurrently. Figure 3-18 on page 55 shows where the PDUs are at the rear side of the frame.

Disk enclosure power supply
The disk enclosure power supply unit provides 5V and 12V power for the DDMs, and houses the cooling fans for the disk enclosure. DDM cooling on the DS8870 is provided by these integrated fans in the disk enclosures. The fans draw air from the front of the frame, through the DDMs, and then move it out through the back of the frame. The entire rack cools from front to back, enabling hot-and-cold aisles. There are redundant fans in each power supply unit and redundant power supply units in each disk enclosure. The disk enclosure power supply can be replaced concurrently. Figure 3-13 on page 49 shows a front and rear view of a disk enclosure. Important: Although the DS8870 no longer vents through the top of the frame, IBM still advises clients not to store any objects on top of a DS8870 frame for safety reasons.

CEC power supply and I/O enclosure power supply
Each processor complex and each I/O enclosure has dual redundant power supplies that convert the power provided by the PDUs into the required voltages for that enclosure or complex. Each I/O enclosure and each CEC has its own cooling fans.

Power Junction Assembly
Power Junction Assembly (PJA) is a new element that is introduced in DS8870 power subsystem. Dual PJAs provide redundant power to HMC, Ethernet switches, and HMC tray fans.

Rack Power Control card
RPCs manage the DS8870 power subsystem and provide control, monitoring, and reporting functions. RPC cards are responsible for receiving DC-UPS status and controlling DC-UPS functions. There are two RPC cards for redundancy. When one is unavailable, the remaining RPC is able to perform all RPC functions. The following RPC enhancements are available in the DS8870:
- The RPC card contains a faster processor and more parity-protected memory.
- There are two different buses for communication between each RPC card and each CEC. These buses provide redundant paths for error recovery if one of the buses fails.
- Each RPC card has two firmware images. If an RPC firmware update fails, the RPC card can still boot from the other firmware image. The new design also reduces the period during which one of the RPC cards is unavailable because of an RPC firmware update: with the dual firmware image, an RPC card is unavailable only for the few seconds that are required to boot from the new firmware image after it is downloaded. As a result, full RPC redundancy is available during most of the time that is required for an RPC firmware update.
- RPC cards can detect failures in the HMC fan tray, which facilitates isolation and repair of such failures.

System Power Control Network
The System Power Control Network (SPCN) is used to control the power of the attached I/O subsystem. The SPCN monitors environmental components such as power, fans, and temperature. Environmental-critical and noncritical conditions can generate Early Power Off Warning (EPOW) events. Critical events trigger appropriate signals from the hardware to the affected components to prevent any data loss without operating system or firmware involvement. Non-critical environmental events also are logged and reported.


4.7.2 Line power loss
The DS8870 uses an area of server memory as nonvolatile storage (NVS). This area of memory is used to hold data that has not yet been written to the disk subsystem. If line power fails, meaning that both DC-UPSs in a frame were to report a loss of AC input power, the DS8870 must protect that data. See 4.3 “CEC failover and failback” on page 69 for a full explanation of the NVS Cache operation.

4.7.3 Line power fluctuation
The DS8870 frames contain BSM sets that protect modified data in the event of a complete power loss. If a power fluctuation occurs that causes a momentary interruption to power (often called a brownout), the DS8870 tolerates this condition for approximately four seconds, rather than only milliseconds as in previous DS8000 generations. If power is not restored within that window and the ePLD feature is not installed on the DS8870 system, the DDMs are powered off and the servers begin copying the contents of NVS to the internal disks in the processor complexes. For many clients who use uninterruptible power supply (UPS) technology, brownouts are not an issue. UPS-regulated power is generally reliable, so more redundancy in the attached devices is often unnecessary.

Power line disturbance
If power at your installation is not always reliable, consider adding the ePLD feature. This feature adds one BSM set to each DC-UPS in all frames of the system. As a result, each DC-UPS in the system contains two BSM sets. Without the ePLD feature, a standard DS8870 offers about four seconds of protection from power line disturbances. Adding this feature increases the protection to 50 seconds of running on battery power before the CECs begin to copy NVS to their internal disks and then shut down.


4.7.4 Power control
All power control is done by using the HMC, which communicates sequencing information to the Service Processor in each CEC and to the RPCs. If you want to power off the DS8870, you must do so by using the GUI that is provided by the HMC or by using the DS Storage Manager. Figure 4-13 shows power control through the DS Storage Manager. If the HMC is not functional, it is not possible to control the power sequencing of the DS8870 until the HMC function is restored. Purchasing a redundant HMC avoids this problem.

Figure 4-13 DS8870 power control from DS Storage Manager

In addition, the following switches in the base frame of a DS8870 are accessible when the rear cover is open:
- Local/remote switch. It has two modes: local and remote.
- Local power on/local force power off switch. When the local/remote switch is in local mode, this switch can manually power on or force power off a complete system.

Important: These switches should never be used by DS8870 users. They can be used only under certain circumstances and as part of an action plan that is carried out by an IBM service representative.

4.7.5 Emergency power off
Each DS8870 frame has an operator panel with three LEDs that show the line power status and the system fault indicator. The LEDs can be seen when the front door of the frame is closed. On the side of the operator panel is an emergency power off (EPO) switch (as shown in Figure 4-14). This switch is red and is located inside the front door that protects the frame; it can be seen only when the front door is open.

Figure 4-14 DS8870 EPO switch

This switch is intended to remove power from the DS8870 only in the following extreme cases:
- The DS8870 has developed a fault that is placing the environment at risk, such as a fire.
- The DS8870 is placing human life at risk, such as the electrocution of a person.

Apart from these two contingencies (which are uncommon events), the EPO switch should never be used. Normally, if line power is lost, the DS8870 can use its internal batteries to destage the write data from NVS memory to persistent storage so that the data is preserved until power is restored. However, when the EPO switch is used, the battery protection for the NVS storage area is bypassed. The EPO switch does not allow this destage process to happen and all NVS cache data is immediately lost. This event most likely results in data loss. If the DS8870 needs to be powered off for building maintenance or to relocate it, always use the HMC to shut it down properly.

Important: If a critical event forces the use of the Emergency Power Off switch, IBM Support must be contacted to restart the DS8870 after the power interruption event.

4.8 RAS and Full Disk Encryption
All DDMs that are installed in a DS8870 (coming from the plant) support Full Disk Encryption (FDE), including SSDs, which means that all DDMs that can be ordered are encryption-capable. Although all DS8870s have certificates installed, encryption is optional and is activated when feature number 1750 is ordered. For more information about encryption license considerations, see 2.4, "Additional licenses that are needed" on page 32. The purpose of FDE drives is to encrypt all data at rest within the storage system for increased data integrity.

The DS8870 provides two important reliability, availability, and serviceability enhancements to FDE storage: deadlock recovery and support for dual-platform key servers. For current considerations and best practices regarding DS8870 encryption, see IBM Encrypted Storage Overview and Customer Requirements, found at this website:
ftp://index.storsys.ibm.com/whitepaper/disk/encrypted_storage_overview.pdf

4.8.1 Deadlock recovery
The DS8870 family of storage servers with FDE drives needs at least two key servers, one of them being an isolated Tivoli Key Lifecycle Manager server. A Tivoli Key Lifecycle Manager server provides a robust platform for managing the multiple levels of encryption keys that are needed for a secure storage operation. Customers can also use IBM Security Key Lifecycle Manager as one of their key servers. A second server is necessary to provide redundancy and access to the DS8870 encryption keys should the first Tivoli Key Lifecycle Manager server become unresponsive.

System z mainframes do not have local storage. Their operating system, applications, and application data are often stored on an enterprise-class storage server, such as a DS8870 storage subsystem. Thus, in a System z environment, the IBM Security Key Lifecycle Manager database (which contains the encryption keys) might be on the DS8870. The IBM Security Key Lifecycle Manager database becomes inoperable because the System z server has its OS or application data on the DS8870, and the DS8870 becomes inoperable because it must retrieve the Data Key (DK) from the IBM Security Key Lifecycle Manager database on the System z server. This configuration represents what is known as a deadlock situation. This scenario is shown in Figure 4-15 on page 101.

Figure 4-15 DS8870 deadlock scenario

The DS8870 mitigates this problem by implementing a Recovery Key (RK). The Recovery Key allows the DS8870 to decrypt the Group Key (GK) that it needs to come to full operation. The DS8870 never stores a copy of the RK on the encrypted disks, and it is never included in any service data. Use of an RK is entirely within your control; no IBM Service Representative needs to be involved.

A new client role is defined in this process: the security administrator. The security administrator should be someone other than the storage administrator so that no single user can perform recovery key actions. Setting up the RK and using the RK to boot a DS8870 requires both people to take action. The Security Administrator and the Storage Administrator might need to be physically present at the DS8870 to perform the recovery.

Important: Use the storage HMC to enter a Recovery Key.

For a complete review of the deadlock recovery process and more information about working with an RK, see IBM System Storage DS8700 Disk Encryption Implementation and Usage Guidelines, REDP-4500.

4.8.2 Dual-platform Tivoli Key Lifecycle Manager servers
The current DS8870 Full Disk Encryption solution requires the use of an IBM System x SUSE Linux-based Isolated Key Server (IKS), which operates in clear key mode. Clients have expressed a desire to run key servers that are hardware security module-based (HSM), which operate in secure key mode and are common in tape storage environments. To meet this request, the DS8870 allows propagation of keys across two separate key server platforms. Adding a z/OS Tivoli Key Lifecycle Manager (Tivoli Key Lifecycle Manager version 1 or an IBM Security Key Lifecycle Manager) Secure Key Mode server is supported by the DS8870. The current IKS is still supported to address the standing requirement for an IKS.

Key servers like the IKS, which implement a clear key design, can import and export their public and private key pair to other key servers. Servers that implement a secure key design can only import and export their public key to other key servers. With two key server platforms, there are two public keys. They are each capable of generating and wrapping two symmetric keys for the DS8870. The DS8870 stores both wrapped symmetric keys in the key repository. Either key server is capable of unwrapping these keys upon a DS8870 retrieval exchange.

For more information about the dual-platform Tivoli Key Lifecycle Manager/IBM Security Key Lifecycle Manager solution, see this website:
http://www.ibm.com/developerworks/wikis/display/tivolidoccentral/Tivoli+Key+Lifecycle+Manager
For more information about planning and deploying Tivoli Key Lifecycle Manager servers, see IBM System Storage DS8700 Disk Encryption Implementation and Usage Guidelines, REDP-4500.

4.9 Other features
There are many more features of the DS8870 that enhance reliability, availability, and serviceability. Some of these features are described next.

4.9.1 Internal network
Each DS8870 base frame contains two Gigabit Ethernet switches to allow the creation of a fully redundant management (private) network. Each CEC in the DS8870 has a connection to each switch. Each HMC also has a connection to each switch, which means that if a single Ethernet switch fails, all traffic can successfully travel from the HMCs to other components in the storage unit by using the alternate network. There are also Ethernet connections for the FSP within each CEC. If two DS8870 storage complexes are connected together, they use ports on the Ethernet switches. For more information about the DS8870 internal network, see 9.1.2, "Private Ethernet networks" on page 254.

Important: Connections to your network are made at the Ethernet patch panel at the rear of the machine. No network connection should ever be made to the DS8870 internal Ethernet switches.

4.9.2 Remote support
The DS8870 HMC can be accessed remotely by IBM Support personnel for many service tasks. IBM Support can offload service data, change configuration settings, and initiate recovery actions over a remote connection. All remote support functions are performed through the DS8870 HMC.

Restriction: There is no direct remote access to the DS8870 Storage Servers (CECs).

You decide which type of connection you want to allow for remote support. The following options are included:
- Modem-only for access to the HMC command line
- VPN for access to the HMC GUI (WebUI) or command line (SSH)
- Modem and VPN
- No access (secure account)

Remote support is a critical topic for clients who are investing in the DS8870. As more clients eliminate modems and analog phone lines from their data centers, the best remote support operations for the DS8870 can be provided through a solution that uses Assist On-Site (AOS). AOS was a window-sharing and remote desktop product. Many clients have successfully deployed the AOS gateway, which allows them complete control over what remote connections they allow to their Storage Systems. When the client allows the secure, encrypted network connection to the HMC through the AOS gateway, IBM can provide the fastest diagnostic testing and the highest level of service. Clients who are investing in the DS8870 need to know that IBM has taken great measures to provide security with its IP-based remote support offerings.

The other end of the spectrum would be those clients, perhaps government or military, who do not allow any connections to their DS8870. Service to these clients is dependent on getting support personnel onsite to perform diagnostics.

For more information about planning the connections that are needed for HMC installations, see Chapter 9, "DS8870 HMC planning and setup" on page 251. For more information about the AOS solution for remote support, see 17.7, "Assist On-site" on page 483 or see the IBM publication Introduction to Assist on Site for DS8000, REDP-4889. For more information about remote support operations, see Chapter 17, "Remote support" on page 465.

4.9.3 Earthquake resistance
The Earthquake Resistance Kit is an optional seismic kit for stabilizing the storage unit rack so that the rack complies with IBM earthquake resistance standards. It helps to prevent personal injury and increases the probability that the system will be available following an earthquake by limiting potential damage to critical system components, such as hard drives. A storage unit frame with this optional seismic kit includes cross-braces on the front and rear of the frame that prevent the frame from twisting. Hardware at the bottom of the frame secures it to the floor. Depending on the flooring in your environment (specifically, non-raised floors), installation of the required floor mounting hardware might be disruptive. This kit must be special-ordered for the DS8870. For more information, contact your IBM sales representative.


Chapter 5. Virtualization concepts

This chapter describes virtualization concepts as they apply to the IBM System Storage DS8000.

This chapter covers the following topics:
- Virtualization definition
- The abstraction layers for disk virtualization:
  – Array sites
  – Arrays
  – Ranks
  – Extent Pools
  – Dynamic Extent Pool merge
  – Track Space Efficient volumes
  – Logical subsystems (LSSs)
  – Volume access
  – Virtualization hierarchy summary
- Benefits of virtualization
- zDAC - z/OS FICON discovery and Auto-Configuration
- EAV V2 - Extended Address Volumes

5.1 Virtualization definition
In a fast-changing world, to react quickly to changing business conditions, IT infrastructure must allow for on-demand changes. Virtualization is key to an on-demand infrastructure. However, when talking about virtualization, many vendors are talking about different things. For this chapter, the definition of virtualization is the abstraction process from the physical disk drives to a logical volume that is presented to hosts and servers in a way that they see it as though it were a physical disk.

5.2 The abstraction layers for disk virtualization
When talking about virtualization, we mean the process of preparing physical disk drives (DDMs) to become an entity that can be used by an operating system, which means we are talking about the creation of logical unit numbers (LUNs).

The DS8870 disks have a small form factor and are mounted in 24-DDM enclosures, except the 3-TB nearline drives, which are in large form factor and installed in 12-DDM enclosures. Disk drives can be ordered in groups of 8 or 16 drives of the same capacity and rpm. The option for 8-drive sets applies only for the 400 GB Solid-State Drives (SSDs) and the 3-TB nearline drives.

The DDMs are mounted in disk enclosures and connected in a switched FC topology that uses an FC-AL protocol. The disk drives can be accessed by a pair of device adapters. Each device adapter has four paths to the disk drives. Each device adapter has four ports; two of the ports provide access to one storage enclosure. One device interface from each device adapter is connected to a set of FC-AL devices so that either device adapter has access to any disk drive through two independent switched fabrics (the device adapters and switches are redundant). All four paths can operate concurrently and can access all disk drives on the attached storage enclosures. In normal operation, however, disk drives are typically accessed by one device adapter. Which device adapter owns the disk is defined during the logical configuration process. This definition avoids any contention between the two device adapters for access to the disks.

Two storage enclosures make a storage enclosure pair. Figure 5-1 shows the physical layout on which virtualization is based. Because of the switching design, each drive has a direct connection to a device adapter. This design is not really a loop but a switched FC-AL loop with the FC-AL addressing schema, that is, Arbitrated Loop Physical Addressing (AL-PA). All DDMs of one pair are accessed through the eight ports of a device adapter pair. Other storage enclosure pairs can be attached to existing pairs in a daisy-chain fashion. DDMs in enclosures that are attached to existing enclosures feature an additional hop through the Fibre Channel switch card in the enclosure to which they are attached.

Figure 5-1 Physical layer as the base for virtualization (diagram: two I/O enclosures with device adapters, connected through switched loops 1 and 2 to storage enclosure pairs of 24 DDMs each, shared by server 0 and server 1)

5.2.1 Array sites
An array site is a group of eight identical DDMs (same capacity, speed, and disk class). Which DDMs form an array site is predetermined automatically by the DS8000. The DDMs that are selected for an array site are chosen from the two disk enclosures that make one storage enclosure pair, as shown in Figure 5-2. This configuration ensures that half of the DDMs are on different loops. This design is called arrays across loops. There is no predetermined server affinity for array sites. Array sites are the building blocks that are used to define arrays.

Figure 5-2 Array site (diagram: an eight-DDM array site with four DDMs on each of the two switched loops)

5.2.2 Arrays
An array is created from one array site. Forming an array means defining its RAID type. The following RAID types are supported:
- RAID 5
- RAID 6
- RAID 10
For more information, see 4.6.1, "RAID configurations" on page 84, "RAID 5 implementation in DS8870" on page 88, "RAID 6 implementation in the DS8870" on page 90, and "RAID 10 implementation in DS8870" on page 90. For each array site, you can select a RAID type. The process of selecting the RAID type for an array is also called defining an array.

Important: In a DS8000 series implementation, one array is defined as using one array site.

Important: RAID configuration information does change occasionally. Consult with your IBM Service Representative for the latest information about supported RAID configurations.
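As a hedged DSCLI sketch (the storage image ID is illustrative), the predetermined array sites can be listed before arrays are defined from them:

dscli> lsarraysite -dev IBM.2107-75XXXXX

Each array site that the command reports can then be turned into an array with the mkarray command, as sketched earlier in the RAID configurations discussion.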

According to the sparing algorithm of the DS8000 series, zero to two spares can be taken from the array site. For more information, see 4.6.9, "Spare creation" on page 91. Figure 5-3 shows the creation of a RAID 5 array with one spare, also called a 6+P+S array (it has the capacity of 6 DDMs for data, the capacity of one DDM for parity, and a spare drive). According to the RAID 5 rules, parity is distributed across all seven drives in this example. On the right side of Figure 5-3, the terms D1, D2, D3, and so on, stand for the set of data that is contained on one disk within a stripe on the array. For example, if 1 GB of data is written, it is distributed across all of the disks of the array.

Figure 5-3 Creation of an array (diagram: an eight-DDM array site formed into a RAID array of data, parity, and spare drives)

Depending on the selected RAID level and the sparing requirements, there are six different types of arrays possible, as shown in Figure 5-4.

Figure 5-4 DS8000 array types (diagram: 6+P+S or 7+P for RAID 5; 5+P+Q+S or 6+P+Q for RAID 6; 3x2+2S or 4x2 for RAID 10)

5.2.3 Ranks
In the DS8000 virtualization hierarchy, there is another logical construct that is called a rank.

Important: In the DS8000 implementation, a rank is built by using one array.

You must add an array to a rank. When a new rank is defined, its name is chosen by the DS Storage Manager, for example, R1, R2, or R3. The process of forming a rank accomplishes the following objectives:
- The array is formatted for fixed block (FB) data for open systems or count key data (CKD) for System z data. This formatting determines the size of the set of data that is contained on one disk within a stripe on the array.
- The capacity of the array is subdivided into equal-sized partitions, called extents. The extent size depends on the extent type, FB or CKD.

An extent is striped across all disks of an array, as shown in Figure 5-5, and indicated by the small squares in Figure 5-6 on page 112. The extents are the building blocks of the logical volumes.

An FB rank features an extent size of 1 GB (more precisely, GiB, gibibyte, or binary gigabyte, being equal to 2^30 bytes).

The extent size of a CKD rank is one 3390 Model 1, or 1113 cylinders. A Model 1 features 1113 cylinders, which are about 0.94 GB. IBM System z users or administrators typically do not deal with gigabytes or gibibytes; instead, they think of storage in terms of the original 3390 volume sizes. A 3390 Model 3 is three times the size of a Model 1.

Figure 5-5 shows an example of an array that is formatted for FB data with 1-GB extents (the squares in the rank indicate that the extent is composed of several blocks from separate DDMs).

Figure 5-5 Forming an FB rank with 1-GB extents

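A hedged DSCLI sketch of rank creation (the array ID is illustrative); the -stgtype parameter applies the FB or CKD formatting described above:

dscli> mkrank -array A0 -stgtype fb

The extent granularity drives simple capacity arithmetic: a 3390 Model 3 (3339 cylinders) fits exactly into 3 CKD extents of 1113 cylinders each, whereas any capacity that is not a multiple of the extent size leaves part of the last extent unusable.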
It is still possible to define a CKD volume with a capacity that is an integral multiple of one cylinder or a fixed-block LUN with a capacity that is an integral multiple of 128 logical blocks (64 KB). However, if the defined capacity is not an integral multiple of the capacity of one extent, the unused capacity in the last extent is wasted. For example, you can define a one-cylinder CKD volume, but 1113 cylinders (1 extent) are allocated and 1112 cylinders would be wasted.

5.2.4 Extent Pools
An Extent Pool is a logical construct to aggregate the extents from a set of ranks, which forms a domain for extent allocation to a logical volume. One or more ranks with the same extent type (FB or CKD) can be assigned to an Extent Pool. One rank can be assigned to only one Extent Pool. There can be as many Extent Pools as there are ranks. There is no predefined affinity of ranks or arrays to a storage server. The affinity of the rank (and its associated array) to a server is determined at the point it is assigned to an Extent Pool.

There are considerations regarding how many ranks should be added to an Extent Pool. To benefit from Storage Pool Striping (see "Storage Pool Striping: Extent rotation" on page 121), more than one rank in an Extent Pool is required. Storage Pool Striping allows you to create logical volumes striped across multiple ranks, which typically enhances performance. However, when you lose one rank (in the unlikely event that a whole RAID array fails), not only is the data of this rank lost, but all data in this Extent Pool is lost because data is striped across all ranks. To avoid data loss, mirror your data to a remote DS8000.

Important: Do not mix ranks with separate RAID types or disk rotation per minute (rpm) in an Extent Pool. Do not mix ranks of different classes (or tiers) of storage in the same Extent Pool, unless you want to enable the Easy Tier Automatic Mode facility.

Originally, Extent Pools were used to separate disks with different rpm and capacity into different pools that have homogeneous characteristics. However, with the capabilities of Easy Tiering moving data around different disk tiering levels to optimize I/O throughput, you can create hybrid (non-homogeneous) pools with a mix of SSD disks, SAS disks, and Nearline disks. If you want Easy Tiering to automatically optimize rank utilization, you should have more than one rank in an Extent Pool. You can also allow Easy Tiering to optimize the placement of the data within the Extent Pool. Storage Pool Striping can enhance performance significantly.

Encryption group
All drives that are offered in the DS8870 are full disk encryption capable to secure critical data. However, the DS8870 also can disable the encryption functionality. If you plan to use encryption, you must define an encryption group before a rank is created. Currently, the DS8000 series supports only one encryption group, and all ranks must be in this encryption group. The encryption group is an attribute of a rank. So, your choice is to encrypt everything or nothing. You can switch on encryption later (create an encryption group), but then all ranks must be deleted and re-created, which means your data is also deleted. For more information, see IBM System Storage DS8700 Disk Encryption Implementation and Usage Guidelines, REDP-4500.

When an Extent Pool is defined, it must be assigned with the following attributes:
- Server affinity
- Extent type (FB or CKD)
- Encryption group

As with ranks, Extent Pools belong to an encryption group. When an Extent Pool is defined, you must specify an encryption group. Encryption group 0 means no encryption. Encryption group 1 means encryption. Currently, the DS8000 series supports only one encryption group, and encryption is on for all Extent Pools or off for all Extent Pools.

Ranks are organized in two rank groups: Rank group 0 is controlled by server 0 and rank group 1 is controlled by server 1.

Important: For best performance, balance capacity between the two servers and create at least two Extent Pools, with one per server.

As a minimum number of Extent Pools, you should have two, with one assigned to server 0 and the other to server 1 so that both servers are active. In an environment where FB and CKD are to go onto the DS8000 series storage system, four Extent Pools provide one FB pool for each server and one CKD pool for each server to balance the capacity between the two servers. Additional Extent Pools might also be desirable to segregate ranks with different DDM types. Extent Pools are expanded by adding more ranks to the pool. Figure 5-6 shows an example of a mixed environment that features CKD and FB Extent Pools.

Figure 5-6 Extent Pools (CKD pools CKD0 and CKD1, each built from 1113-cylinder CKD extents, and FB pools FBprod and FBtest, built from 1-GB FB extents, distributed across server 0 and server 1)
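As a hedged DS CLI sketch of this minimum configuration (the pool names, pool IDs, and rank IDs are hypothetical; the -rankgrp parameter ties a pool to server 0 or server 1):

dscli> mkextpool -rankgrp 0 -stgtype fb FB_prod_0
dscli> mkextpool -rankgrp 1 -stgtype fb FB_prod_1
dscli> chrank -extpool P0 R0
dscli> chrank -extpool P1 R1

The chrank command assigns a rank to a pool, which is also the point at which the rank (and its array) acquires its server affinity.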

Dynamic Extent Pool merge

Dynamic Extent Pool Merge is a capability that is provided by the Easy Tier manual mode facility. Dynamic Extent Pool Merge allows one Extent Pool to be merged into another Extent Pool while the logical volumes in both Extent Pools remain accessible to the host servers.

Dynamic Extent Pool Merge can be used for the following reasons:
- For the consolidation of two smaller Extent Pools with an equivalent storage type (FB or CKD) into a larger Extent Pool. Creating a larger Extent Pool allows logical volumes to be distributed over a greater number of ranks, which improves overall performance in the presence of skewed workloads. Newly created volumes in the merged Extent Pool allocate capacity as specified by the selected extent allocation algorithm. Logical volumes that existed in either the source or the target Extent Pool can be redistributed over the set of ranks in the merged Extent Pool by using the Migrate Volume function.
- For consolidating Extent Pools with different storage tiers to create a merged Extent Pool with a mix of storage technologies (with Easy Tier IV, any combination of SSD, enterprise, and nearline disk is possible). Such an Extent Pool is called a hybrid pool and is a prerequisite for using the Easy Tier automatic mode feature. Easy Tier Automatic Mode automatically rebalances the volume's extents onto the ranks within the hybrid extent pool, based on the activity of the ranks.

The Easy Tier manual mode volume migration is shown in Figure 5-7.

Figure 5-7 Easy Tier: migration types (Easy Tier managed pools: SSD, Enterprise, and Nearline pools; pool merge; manual volume migration to change disk class, RAID type, rpm, or striping; volume-based and cross-tier data relocation; automated intra-tier rebalance)

Important: Volume migration (or Dynamic Volume Relocation) within the same extent pool is not supported in hybrid (or multi-tiered) pools.

An Extent Pool merge operation is not allowed under any of the following conditions:
- The source and target Extent Pools are not on the same storage server (server 0 or server 1).
- The source and target Extent Pools both contain virtual capacity or both contain a space efficient repository.
- One Extent Pool is composed of SSD ranks and includes virtual capacity or a space efficient repository, and the other Extent Pool contains at least one non-SSD rank.

For more information, see IBM System Storage DS8700 Easy Tier, REDP-4667.

5.2.5 Logical volumes

A logical volume is composed of a set of extents from one Extent Pool. On a DS8000, up to 65280 volumes can be created (either 64-K CKD, or 64-K FB volumes, or a mixture of both types with a maximum of 64-K volumes in total). We use the abbreviation 64 K in this discussion, even though it is actually 65536 - 256, which is not quite 64 K in binary.

Fixed block LUNs

A logical volume that is composed of fixed block extents is called a LUN. A fixed-block LUN is composed of one or more 1 GiB (2^30 bytes) extents from one FB Extent Pool. A LUN cannot span multiple Extent Pools, but a LUN can have extents from separate ranks within the same Extent Pool. You can construct LUNs up to a size of 16 TiB (16 x 2^40 bytes, or 2^44 bytes).

Important: There is no Copy Services support for logical volumes larger than 2 TiB (2 x 2^40 bytes). Do not create LUNs larger than 2 TiB if you want to use Copy Services for those LUNs, unless you want to integrate them as Managed Disks in an IBM SAN Volume Controller (SVC) with at least release 6.2 installed. In that case, use SVC Copy Services instead.

LUNs can be allocated in binary GiB (2^30 bytes), decimal GB (10^9 bytes), or 512 or 520-byte blocks. However, the physical capacity that is allocated for a LUN is always a multiple of 1 GiB, so it is a good idea to have LUN sizes that are a multiple of a gibibyte. If you define a LUN with a LUN size that is not a multiple of 1 GiB (for example, 25.5 GiB), the LUN size is 25.5 GiB, but 26 GiB are physically allocated, of which 0.5 GiB of the physical storage remain unusable.
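To illustrate this sizing guidance, the following hedged DS CLI sketch creates four LUNs whose capacity is an exact multiple of 1 GiB so that no allocated extent capacity is wasted (the pool ID, volume IDs, and volume name are hypothetical):

dscli> mkfbvol -extpool P1 -cap 100 -name prod_#h 1000-1003

With the default capacity type, the -cap value is interpreted in binary GiB; an odd capacity such as 25.5 GiB would still consume 26 full extents.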

The allocation process for FB volumes is illustrated in Figure 5-8.

Figure 5-8 Creation of an FB LUN (a 3-GB LUN takes three 1-GB extents from ranks a and b of Extent Pool FBprod; a 2.9-GB LUN still allocates three extents, leaving 100 MB unused)

T10 Data Integrity Field support

The ANSI T10 standard provides a way to check the integrity of data that is read and written from the application or the host bus adapter to the disk and back through the SAN fabric. This check is implemented through the data integrity field (DIF) that is defined in the T10 standard. This support adds protection information that consists of Cyclic Redundancy Checking (CRC), Logical Block Address (LBA), and host application tags to each sector of FB data on a logical volume.

To the standard 512-byte data field, 8 bytes are added. The 8-byte DIF consists of 2-byte CRC (Cyclic Redundancy Check) data, a 4-byte Reference Tag (to protect against misdirected writes), and a 2-byte Application Tag for applications that might use it. A T10 DIF-capable LUN uses 520-byte sectors instead of the common 512-byte sector size.

On a write, the DIF is generated by the host bus adapter (HBA), based on the block data and logical block address. The DIF field is added to the end of the data block, and the data is sent through the fabric to the storage target. The storage system validates the CRC and Reference tag and, if correct, stores the data block and DIF on the physical media. If the CRC does not match the data, then the data was corrupted during the write. The write operation is returned to the host with a write error code. The host records the error and retransmits the data to the target. In this way, data corruption is detected immediately on a write and is never committed to the physical media.

On a read, the DIF is returned with the data block to the host, which validates the CRC and Reference tags. This validation adds a small amount of latency per I/O but could impact overall response time on smaller block transactions (less than 4 KB I/Os).

The DS8870 supports the T10 DIF standard for FB volumes that are accessed by the FCP channel of Linux on System z. You can define LUNs with an option to instruct the DS8870 to use the CRC-16 T10 DIF algorithm to store the data.

You can also create T10 DIF-capable LUNs for operating systems that do not yet support this feature (except for System i®), but active protection is available only for Linux on System z. When an FB LUN is created with the mkfbvol DS CLI command, add the option -t10dif. If you query a LUN with the showfbvol command, the datatype is shown as FB 512T instead of the standard FB 512 type. A T10 DIF-capable volume must be defined by using the DS CLI because the GUI in the current release does not yet support this function.

Important: Because the DS8000 internally always uses 520-byte sectors (to be able to support System i volumes), there are no capacity considerations when standard or T10 DIF-capable volumes are used.

Target LUN: When FlashCopy for a T10 DIF LUN is used, the target LUN must also be a T10 DIF type LUN. This restriction does not apply to mirroring.

CKD volumes

A System z CKD volume is composed of one or more extents from one CKD Extent Pool. CKD extents are of the size of a 3390 Model 1, which features 1113 cylinders. However, when you define a System z CKD volume, you do not specify the number of 3390 Model 1 extents but the number of cylinders you want for the volume.

On a DS8870 and previous models that start with the DS8000 microcode Release 6.1, you can define CKD volumes with up to 1,182,006 cylinders, which is about 1 TB. This volume capacity is called Extended Address Volume (EAV) and is supported by the 3390 Model A. For Copy Services operations, however, the size is still limited to 262,668 cylinders.

Before a CKD volume can be created, a Logical Control Unit (LCU) must be defined that provides up to 256 possible addresses that can be used for CKD volumes. Up to 255 LCUs, which also are called Logical Subsystems (LSS), can be defined. For more information about LCUs, see 5.2.8, "Logical subsystem" on page 126.
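A hedged DS CLI sketch of this dependency might look like the following (the LCU ID, subsystem ID, pool ID, volume name, and volume ID are hypothetical, and -cap is assumed to be given in cylinders, as with the chckdvol command in Example 5-1 on page 136):

dscli> mklcu -qty 1 -id 12 -ss 0012
dscli> mkckdvol -extpool P2 -cap 3339 -name zprod_#h 1200

The LCU is created first; the CKD base volume ID 1200 then places the volume in LCU X'12' at device address X'00', and 3339 cylinders corresponds to a 3390 Model 3.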

A CKD volume cannot span multiple Extent Pools, but a volume can have extents from different ranks in the same Extent Pool. You also can stripe a volume across the ranks (see "Storage Pool Striping: Extent rotation" on page 121). Figure 5-9 shows an example of how a logical volume is allocated with a CKD volume: a volume with 3226 cylinders takes two full 1113-cylinder extents plus 1000 cylinders of a third extent, so 113 cylinders of the last extent remain unused.

Figure 5-9 Allocation of a CKD logical volume (a 3390 Model 3 volume with 3226 cylinders allocated from ranks x and y of Extent Pool CKD0)

CKD Alias volumes

There is another type of CKD volume: the PAV Alias volume. Alias volumes do not occupy storage capacity. Although they have no size, each Alias volume needs an address. They are used by z/OS to send parallel I/Os to the same base CKD volume. Within an LCU, you can define Alias volumes and normal base volumes. For more information, see IBM System Storage DS8000: Host Attachment and Interoperability, SG24-8887.

IBM i LUNs

IBM i LUNs are also composed of fixed block 1 GiB extents. However, there are special aspects with System i LUNs. LUNs that are created on a DS8000 are always RAID-protected. LUNs are based on RAID 5, RAID 6, or RAID 10 arrays. However, you might want to deceive i5/OS and tell it that the LUN is not RAID-protected. This deception causes the i5/OS to conduct its own mirroring. System i LUNs can have the attribute unprotected, in which case the DS8000 reports that the LUN is not RAID-protected.

IBM i LUNs expose a 520-byte block to the host. The operating system uses eight of these bytes, so the usable space is still 512 bytes like other SCSI LUNs. The capacities that are quoted for the IBM i LUNs are in terms of the 512-byte block capacity and are expressed in GB (10^9 bytes). These capacities should be converted to GiB (2^30 bytes) when effective utilization of extents that are 1 GiB (2^30 bytes) is considered.

The i5/OS supports only certain fixed volume sizes, for example, model sizes of 8.5 GB, 17.5 GB, and other sizes up to 282.25 GB, depending on the model that is chosen. These sizes are not multiples of 1 GB, so some space is wasted.

5.2.6 Space Efficient volumes

When a standard FB LUN or CKD volume is created on the physical drive, it occupies as many extents as necessary for the defined capacity. For the DS8870, the following types of Space Efficient volumes can be defined:
- Extent Space Efficient (ESE) volumes
- Track Space Efficient (TSE) volumes

These two concepts are described in detail in DS8000 Thin Provisioning, REDP-4554. Thin Provisioning is a feature that requires a payable license.

A Space Efficient volume does not occupy physical capacity when it is created. Space is allocated when data is written to the volume. The amount of space that is physically allocated is a function of the amount of data that is written to or changed on the volume. The general idea behind Space Efficient volumes is to use or allocate physical storage only when it is potentially or temporarily needed. The sum of the capacities of all defined Space Efficient volumes can be larger than the available physical capacity, so the logical capacity can be larger than the physical capacity. This function is also called over-provisioning or thin provisioning.

Virtual space in an Extent Pool is used for TSE and ESE volumes. ESE volumes use available extents in the Extent Pool in a similar fashion as standard, fully provisioned volumes; extents are allocated only as needed to write data to the ESE volume. However, any data that is written to a TSE volume must have enough physical storage to contain this write activity. This physical storage is provided by the repository, which is used only for TSE volumes for FlashCopy SE.

Repository for Track Space Efficient volumes

The definition of TSE volumes begins at the Extent Pool level. TSE volumes are defined from virtual space in that the size of the TSE volume does not initially use physical storage. Each Extent Pool can have a TSE volume repository, but this physical space cannot be shared between Extent Pools. The repository is an object within an Extent Pool; in a certain sense, it is similar to a volume within the Extent Pool. There can be only one repository per Extent Pool.

The repository has a physical size and a logical size. The physical size of the repository is the amount of space that is allocated in the Extent Pool. It is the physical space that is available for all Space Efficient volumes in total in this Extent Pool. The repository is striped across all ranks within the Extent Pool.

Important: The TSE repository cannot be created on SATA drives.

The logical size of the repository is limited by the available virtual capacity for Space Efficient volumes. For example, there could be a repository of 100 GB of reserved physical storage with a defined virtual capacity of 200 GB. In this case, you could define 10 TSE LUNs with 20 GB each. So the logical capacity can be larger than the physical capacity. You cannot fill all the volumes with data because the total physical capacity is limited by the repository size, which is 100 GB in this example.

Important: The size of the repository and the virtual space it uses are part of the Extent Pool definition.
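A hedged DS CLI sketch of the 100 GB/200 GB example above might look like the following (the pool ID, volume name, and volume IDs are hypothetical; mksestg creates the space-efficient storage in a pool, with -repcap setting the physical repository capacity and -vircap the virtual capacity):

dscli> mksestg -extpool P1 -repcap 100 -vircap 200
dscli> mkfbvol -extpool P1 -cap 20 -sam tse -name tsetgt_#h 1100-1109

The second command sketches the 10 TSE LUNs of 20 GB each; the -sam tse parameter requests the track space efficient storage allocation method.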

Important: In the current implementation of Track Space Efficient volumes, it is not possible to expand the physical size of the repository. Therefore, careful planning for the size of the repository is required before it is used. If a repository must be expanded, all Track Space Efficient volumes within this Extent Pool must be deleted, and the repository then must be deleted and re-created with the required size.

The concept of Track Space Efficient volumes is shown in Figure 5-10.

Figure 5-10 Concept of Track Space Efficient volumes for FlashCopy SE (the repository for Track Space Efficient volumes is striped across the ranks of the Extent Pool; the virtual repository capacity maps used and allocated tracks alongside normal volumes)

Space allocation

Space for a Space Efficient volume is allocated when a write occurs. More precisely, it is allocated when a destage from the cache occurs and there is not enough free space left on the currently allocated extent or track. The TSE allocation unit is a track (64 KB for open systems LUNs or 57 KB for CKD volumes).

Because space is allocated in extents or tracks, the system must maintain tables that indicate the mapping of extents and tracks to the logical volumes. The smaller the allocation unit, the larger the tables and the larger the impact, so the performance of Space Efficient volumes is affected.

Virtual space is created as part of the Extent Pool definition. This virtual space is mapped onto ESE volumes in the Extent Pool (physical space) and TSE volumes in the repository (physical space) as needed. Virtual space equals the total space of the required ESE volumes and the TSE volumes for FlashCopy SE. No actual storage is allocated until write activity occurs to the ESE or TSE volumes.

The lifetime of data on Track Space Efficient volumes is expected to be short because they are used only as FlashCopy targets. Physical storage is allocated when data is written to Track Space Efficient volumes, so we need a mechanism to free up physical space in the repository when the data is no longer needed.

For more information about ESE and TSE volumes concepts. ESE volumes can be mapped to hosts. Use of Track Space Efficient volumes Track Space Efficient volumes are supported only as FlashCopy target volumes. Standard volumes Allocated extents virtual capacity per extent pool Extent Pool Used extents Ranks Extent Space efficient volume Free extents in extent pool Figure 5-11 Concept of ESE logical volumes Use of Extent Space Efficient volumes Like standard volumes (which are fully provisioned).The FlashCopy commands include options to release the space of Track Space Efficient volumes when the FlashCopy relationship is established or removed. They are also supported in combination with Copy Services functions. see DS8000 Thin Provisioning. The concept of ESE logical volumes is shown in Figure 5-11. The CLI commands initfbvol and initckdvol also can release the space for Space Efficient volumes (ESE and TSE). Copy Services between Space Efficient and regular volumes are also supported. 120 IBM System Storage DS8870 Architecture and Implementation . Important: Space Efficient volumes (ESE) are also supported by the IBM System Storage Easy Tier function. REDP-4554.

5.2.7 Allocation, deletion, and modification of LUNs/CKD volumes

All extents of the ranks that are assigned to an Extent Pool are independently available for allocation to logical volumes. The extents for a LUN or volume are logically ordered, but they do not have to come from one rank, and the extents do not have to be contiguous on a rank. This construction method of using fixed extents to form a logical volume in the DS8000 series allows flexibility in the management of the logical volumes. We can delete LUNs or CKD volumes, resize LUNs or volumes, and reuse the extents of those LUNs to create other LUNs or volumes, maybe of different sizes. One logical volume can be removed without affecting the other logical volumes that are defined on the same Extent Pool.

Because the extents are cleaned after you delete a LUN or CKD volume, it can take some time until these extents are available for reallocation. The reformatting of the extents is a background process.

There are two extent allocation methods (EAM) for the DS8000: Rotate volumes and Storage Pool Striping (Rotate extents).

Important: The default extent allocation method is Storage Pool Striping (Rotate extents) for the DS8870 and older models at Licensed Machine Code 6.6.0.xx or later. In prior releases of Licensed Machine Code, the default allocation method was Rotate volumes.

Storage Pool Striping: Extent rotation

The preferred storage allocation method is Storage Pool Striping. Storage Pool Striping is an option when a LUN or volume is created. The extents of a volume can be striped across several ranks, as shown in Figure 5-12. An Extent Pool with more than one rank is needed to use this storage allocation method.

The DS8000 maintains a sequence of ranks. The first rank in the list is randomly picked at each power-on of the storage subsystem. The DS8000 tracks the rank in which the last allocation started. The allocation of the first extent for the next volume starts from the next rank in that sequence. The next extent for that volume is taken from the next rank in sequence, and so on. Thus, the system rotates the extents across the ranks.

Figure 5-12 Rotate Extents (in one extent pool, the extents of volumes 1 through 4 are distributed across ranks A, B, C, and D)
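As a hedged sketch with hypothetical IDs, the allocation method can be requested explicitly when a volume is created:

dscli> mkfbvol -extpool P1 -cap 100 -eam rotateexts -name stripe_#h 1010
dscli> mkfbvol -extpool P1 -cap 100 -eam rotatevols -name manual_#h 1011

The -eam parameter selects between the two methods: rotateexts corresponds to Storage Pool Striping, and rotatevols to the Rotate volumes method that is described next.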

Rotate volumes allocation method

Extents can be allocated sequentially. In this case, all extents are taken from the same rank until we have enough extents for the requested volume size or the rank is full, in which case the allocation continues with the next rank in the Extent Pool. If more than one volume is created in one operation, the allocation for each volume starts in another rank. When several volumes are allocated, we rotate through the ranks, as shown in Figure 5-13.

Figure 5-13 Rotate Volumes (each of volumes 1 through 4 is placed on a single rank of the same extent pool)

You might want to consider this allocation method when you prefer to manage performance manually. The workload of one volume is going to one rank. This configuration makes the identification of performance bottlenecks easier; however, by putting all of the volume's data onto one rank, you might introduce a bottleneck, depending on your actual workload.

In a mixed disk characteristics (or hybrid) Extent Pool that contains different classes (or tiers) of ranks, the Storage Pool Striping EAM is used independently of the requested EAM, and the EAM is set to managed. For extent pools that contain a mix of Enterprise and Nearline ranks, initial extent allocation is done on Enterprise ranks first. For extent pools that contain SSD disks, extent allocation is done initially on HDD ranks (Enterprise or Nearline) while space remains available. Easy Tier algorithms migrate the extents as needed to SSD ranks, according to performance needs. In the case of a hybrid managed extent pool, extents are automatically relocated over time. For more information, see IBM System Storage DS8000: Easy Tier, REDP-4667.

Important: If you must add capacity to an Extent Pool because it is nearly full, it is better to add several ranks at the same time, not just one. This method allows new volumes to be striped across the newly added ranks.

When you create striped volumes and non-striped volumes in an Extent Pool, a rank could be filled before the others. A full rank is skipped when you create new striped volumes.

Important: Rotate extents and rotate volume EAMs provide distribution of volumes over ranks. Rotate extents performs this distribution at a granular (1-GB extent) level, which is the preferred method to minimize hot spots and improve overall performance.

With the Easy Tier manual mode facility, the user can request an Extent Pool merge followed by a volume relocation with striping to perform the same function, if the extent pool is a non-hybrid pool.

Rotate volume EAM: The rotate volume EAM is not allowed if one Extent Pool is composed of SSD disks and has a Space Efficient repository or virtual capacity configured.

By using striped volumes, you distribute the I/O load of a LUN or CKD volume to more than one set of eight disk drives. The ability to distribute a workload to many physical drives can greatly enhance performance for a logical volume. In particular, operating systems that do not include a volume manager that can do striping benefit most from this allocation method.

Double striping issue

It is possible to use striping methods on the host, for example, AIX LVM or VDisk striping on the SAN Volume Controller. In such configurations, the striping methods could compensate for each other and eliminate any performance advantage or even lead to performance bottlenecks.

Figure 5-14 shows an example of double striping. The DS8000 provides three volumes to a SAN Volume Controller. The volumes are striped across three ranks. The SAN Volume Controller uses the volumes as MDisks. When a striped VDisk is created, extents are taken from each MDisk. The extents are now taken from each of the DS8000 volumes, but in a worst case scenario, all of these extents are on the same rank, which could make this rank a hotspot.

Figure 5-14 Example for double striping issue (DS8000: one extent pool with three ranks and three volumes of three extents each, EAM rotate extents; SAN Volume Controller: one mdisk group with three mdisks and one striped vdisk)

However, throughput also could benefit from double striping. If you plan to use double striping, the stripe size at the host level should be much smaller than the DS8000 extent size or identical to the DS8000 extent size. For example, you could use wide Physical Partition striping in AIX with a stripe size in the MB range. Another example could be a SAN Volume Controller with a stripe size of 1 GB, which equals the DS8000 extent size. The latter might be useful if you want to use Easy Tiering within the DS8000 and the SAN Volume Controller.

Important: If you have Extent Pools with many ranks and all volumes are striped across the ranks, and one rank becomes inaccessible, you lose access to most of the data in that Extent Pool.

For more information about how to configure Extent Pools and volumes for optimal performance, see 7.5, "Performance considerations for logical configuration" on page 186.

Logical volume configuration states

Each logical volume features a configuration state attribute. The configuration state reflects the condition of the logical volume relative to user-requested configuration operations, as shown in Figure 5-15.

When a logical volume creation request is received, a logical volume object is created and the logical volume's configuration state attribute is placed in the configuring configuration state. After the logical volume is created and available for host access, it is placed in the normal configuration state. If a volume deletion request is received, the logical volume is placed in the deconfiguring configuration state until all capacity associated with the logical volume is deallocated and the logical volume object is deleted.

The reconfiguring configuration state is associated with a volume expansion request. For more information, see "Dynamic Volume Expansion" on page 125. The transposing configuration state is associated with an Extent Pool merge, as described in "Dynamic Extent Pool merge" on page 113. The migrating, migration paused, migration error, and migration canceled configuration states are associated with a volume relocation request, as described in "Dynamic volume migration" on page 125.

Figure 5-15 Logical volume configuration states

As shown in Figure 5-15 on page 124, the configuration state serializes user requests, with the exception that a volume deletion request can be initiated from any configuration state.

Dynamic Volume Expansion

The size of a LUN or CKD volume can be expanded without destroying the data. On the DS8000, you add extents to the volume. The operating system must support this resizing.

If the volume was created as striped across the ranks of the Extent Pool, the extents that are used to increase the size of the volume are striped. If a volume was created without striping, the system tries to allocate the additional extents within the same rank that the volume was created from originally.

Because most operating systems have no means of moving data from the end of the physical disk off to unused space at the beginning of the disk, and because of the risk of data corruption, IBM does not support shrinking a volume. The DS8000 configuration interfaces (DS CLI and DS GUI) do not allow you to change a volume to a smaller size.

Important: Before you can expand a volume, you must delete any Copy Services relationship that involves that volume.

Dynamic volume migration

Dynamic volume migration, or Dynamic Volume Relocation (DVR), is a capability that is provided as part of the Easy Tier manual mode facility. DVR allows data that is stored on a logical volume to be migrated from its currently allocated storage to newly allocated storage while the logical volume remains accessible to attached hosts. The user can request DVR by using the Migrate Volume function that is available through the DS8000 Storage Manager GUI or the DS CLI.

DVR allows the user to specify a target Extent Pool and an extent allocation method (EAM). The target Extent Pool can be a separate Extent Pool than the Extent Pool where the volume is located, or the same extent pool, with one restriction: the target Extent Pool must be managed by the same DS8000 internal server. Migration within the same extent pool is possible only if it is a non-hybrid (or single-tier) pool.

Important: DVR in the same extent pool is not allowed in the case of a managed hybrid pool. In managed hybrid extent pools, Easy Tier Automatic Mode automatically relocates extents within the ranks to allow performance rebalancing.

Dynamic volume migration provides the following capabilities:
- The ability to change the Extent Pool in which a logical volume is provisioned. This ability provides a mechanism to change the underlying storage characteristics of the logical volume to include the disk class (SSD, enterprise disk, or SATA disk), disk RPM, and RAID array type. Volume migration also can be used to migrate a logical volume into or out of an Extent Pool.
- The ability to specify the extent allocation method for a volume migration, which allows the extent allocation method to be changed between the available extent allocation methods any time after volume creation. A logical volume includes the attribute of being striped across the ranks or not. Volume migration that specifies the rotate extents EAM can also be used (in non-hybrid extent pools) to redistribute a logical volume's extent allocations across the currently existing ranks in the Extent Pool if more ranks are added to an Extent Pool.
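As a hedged sketch with hypothetical volume and pool IDs, a Dynamic Volume Expansion and a Dynamic Volume Relocation for an FB volume might look like the following (chfbvol -cap is the FB counterpart of the chckdvol expansion in Example 5-1 on page 136; the managefbvol command and its migration actions are assumptions based on the Easy Tier manual mode functions described here):

dscli> chfbvol -cap 200 1000
dscli> managefbvol -action migstart -extpool P2 1000

The first command expands volume 1000 to 200 GiB, with the new extents allocated according to how the volume was originally created; the second starts relocating the volume to Extent Pool P2. The manageckdvol command would provide the CKD equivalent.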

Each logical volume has a configuration state. To begin a volume migration, the logical volume initially must be in the normal configuration state. The volume migration follows the states that were shown in Figure 5-15 on page 124, as described in "Logical volume configuration states" on page 124. There are more functions that are associated with volume migration that allow the user to pause, resume, or cancel a volume migration. Any or all logical volumes can be requested to be migrated at any time, if there is sufficient capacity available to support the reallocation of the migrating logical volumes in their specified target Extent Pool. For more information, see IBM System Storage DS8000 Easy Tier, REDP-4667.

5.2.8 Logical subsystem

A logical subsystem (LSS) is another logical construct. It groups logical volumes and LUNs in groups of up to 256 logical volumes.

On the DS8000 series, there is no fixed binding between any rank and any logical subsystem. The capacity of one or more ranks can be aggregated into an Extent Pool. Logical volumes that are configured in that Extent Pool are not bound to any specific rank. Different logical volumes on the same logical subsystem can be configured in separate Extent Pools. As such, the available capacity of the storage facility can be flexibly allocated across the set of defined logical subsystems and logical volumes. We saw that volumes are formed from a number of extents from an Extent Pool. For each LUN or CKD volume, you can choose an LSS. You can have up to 256 volumes in one LSS.

However, Extent Pools belong to one server (CEC), server 0 or server 1, and LSSs also have an affinity to the servers. All even-numbered LSSs (X'00', X'02', X'04', up to X'FE') are handled by server 0 and all odd-numbered LSSs (X'01', X'03', X'05', up to X'FD') are handled by server 1. LSS X'FF' is reserved.

You can define up to 255 LSSs for the DS8000 series. Logical volumes feature a logical volume number X'abcd' where X'ab' identifies the LSS and X'cd' is one of the 256 logical volumes on the LSS. This logical volume number is assigned to a logical volume when the logical volume is created and determines the LSS with which it is associated.

System z users are familiar with a logical control unit (LCU). System z operating systems configure LCUs to create device addresses. There is a one-to-one relationship between an LCU and a CKD LSS (LSS X'ab' maps to LCU X'ab'). The 256 possible logical volumes that are associated with an LSS are mapped to the 256 possible device addresses on an LCU (logical volume X'abcd' maps to device address X'cd' on LCU X'ab'). When CKD logical volumes are created and their logical volume numbers are assigned, consider whether Parallel Access Volumes (PAVs) are required on the LCU and reserve addresses on the LCU for alias addresses.

For open systems, LSSs do not play an important role except in determining which server manages the LUN (and in which Extent Pool it must be allocated). However, LSSs are important in certain aspects that are related to Metro Mirror, Global Mirror, or any of the other remote copy implementations.

Certain management actions in Metro Mirror, Global Mirror, or Global Copy operate at the LSS level. For example, the freezing of pairs to preserve data consistency across all pairs, in case you have a problem with one of the pairs, is done at the LSS level. The option to put all or most of the volumes of a certain application in one LSS makes the management of remote copy operations easier, as shown in Figure 5-16.

Figure 5-16 Grouping of volumes in LSSs (logical volumes drawn from several array sites of physical drives are grouped into LSS X'17' DB2 and LSS X'18' DB2-test)

Fixed block LSSs are created automatically when the first fixed block logical volume on the LSS is created. Fixed block LSSs are deleted automatically when the last fixed block logical volume on the LSS is deleted. CKD LSSs require user parameters to be specified and must be created before the first CKD logical volume can be created on the LSS. They must be deleted manually after the last CKD logical volume on the LSS is deleted.

Address groups

Address groups are created automatically when the first LSS that is associated with the address group is created. The groups are deleted automatically when the last LSS in the address group is deleted.

All devices in an LSS must be CKD or FB. This restriction goes even further: LSSs are grouped into address groups of 16 LSSs, and all LSSs within one address group must be of the same type, CKD or FB. LSSs are numbered X'ab', where a is the address group and b denotes an LSS within the address group. For example, X'10' to X'1F' are LSSs in address group 1. The first LSS that is defined in an address group sets the type of that address group.

Important: System z users who still want to use IBM ESCON® to attach hosts to the DS8000 series should be aware that ESCON supports only the 16 LSSs of address group 0 (LSS X'00' to X'0F'). Therefore, this address group should be reserved for ESCON-attached CKD devices in this case and not used as FB LSSs. The DS8870 does not support ESCON channels; ESCON devices can be attached only by using FICON/ESCON converters.

Figure 5-17 shows the concept of LSSs and address groups.

Figure 5-17 Logical storage subsystems (address group X'1x' holds CKD LSSs and address group X'2x' holds FB LSSs; even-numbered LSSs draw volumes from server 0 Extent Pools and odd-numbered LSSs from server 1 Extent Pools)

The LUN identifications X'gabb' are composed of the address group X'g', the LSS number within the address group X'a', and the position of the LUN within the LSS X'bb'. For example, FB LUN X'2101' denotes the second (X'01') LUN in LSS X'21' of address group 2.

5.2.9 Volume access

A DS8000 provides mechanisms to control host access to LUNs. In most cases, a server features two or more host bus adapters (HBAs) and the server needs access to a group of LUNs. For easy management of server access to logical volumes, the DS8000 introduced the concept of host attachments and volume groups.

Host attachment

HBAs are identified to the DS8000 in a host attachment construct that specifies the worldwide port names (WWPNs) of the HBAs. A set of host ports can be associated through a port group attribute that allows a set of HBAs to be managed collectively. This port group is referred to as a host attachment within the GUI.

Each host attachment can be associated with a volume group to define which LUNs that HBA is allowed to access. Multiple host attachments can share the volume group. The host attachment can also specify a port mask that controls which DS8000 I/O ports the HBA is allowed to log in to. Whichever ports the HBA logs in on, it sees the same volume group that is defined on the host attachment that is associated with this HBA. The maximum number of host attachments on a DS8000 is 8192.

Volume group

A volume group is a named construct that defines a set of logical volumes. When used with CKD hosts, there is a default volume group that contains all CKD volumes. Any CKD host that logs in to a FICON I/O port has access to the volumes in this volume group. CKD logical volumes are automatically added to this volume group when they are created and are automatically removed from this volume group when they are deleted.

When used with open systems hosts, a host attachment object that identifies the HBA is linked to a specific volume group. You must define the volume group by indicating which fixed block logical volumes are to be placed in the volume group. FB logical volumes can be defined in one or more volume groups. This definition allows a LUN to be shared by host HBAs that are configured to separate volume groups. An FB logical volume is automatically removed from all volume groups when it is deleted. Logical volumes can be added to or removed from any volume group dynamically. The maximum number of volume groups is 8320 for the DS8000.

There are two types of volume groups that are used with open systems hosts, and the type determines how the logical volume number is converted to a host addressable LUN_ID on the Fibre Channel SCSI interface:
- A map volume group type is used with FC SCSI host types that poll for LUNs by walking the address range on the SCSI interface. This type of volume group can map any FB logical volume numbers to 256 LUN IDs that have zeros in the last 6 bytes and the first 2 bytes in the range of X'0000' to X'00FF'.
- A mask volume group type is used with FC SCSI host types that use the Report LUNs command to determine the LUN IDs that are accessible. This type of volume group can allow any FB logical volume numbers to be accessed by the host, where the mask is a bitmap that specifies which LUNs are accessible. For this volume group type, the logical volume number X'abcd' is mapped to LUN_ID X'40ab40cd00000000'.

The volume group type also controls whether 512-byte block LUNs or 520-byte block LUNs can be configured in the volume group.

When a host attachment is associated with a volume group, the host attachment contains attributes that define the logical block size and the Address Discovery Method (LUN Polling or Report LUNs) that are used by the host HBA. These attributes must be consistent with the volume group type of the volume group that is assigned to the host attachment. This consistency ensures that HBAs that share a volume group have a consistent interpretation of the volume group definition and have access to a consistent set of logical volume types. The GUI typically sets these values appropriately for the HBA based on your specification of a host type. You must consider what volume group type to create when a volume group is set up for a particular HBA.
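A hedged DS CLI sketch of creating a mask-type volume group for a set of FB volumes (the group name and volume IDs are hypothetical):

dscli> mkvolgrp -type scsimask -volume 1000-1003 DB2_1

The -type scsimask parameter corresponds to the mask volume group type described above; scsimap256 would request the map type.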

Figure 5-18 shows the relationships between host attachments and volume groups. Host AIXprod1 has two HBAs, which are grouped in one host attachment, and both are granted access to volume group DB2-1. Most of the volumes in volume group DB2-1 are also in volume group DB2-2, which is accessed by server AIXprod2. In our example, however, there is one volume in each group that is not shared. The server in the lower left part of the figure features four HBAs and they are divided into two distinct host attachments. One HBA can access volumes that are shared with AIXprod1 and AIXprod2. The other HBAs have access to a volume group called docs.

Figure 5-18 Host attachments and volume groups (host attachments AIXprod1 and AIXprod2 link WWPN-1 through WWPN-4 to volume groups DB2-1 and DB2-2; host attachments Test and Prog link WWPN-5 through WWPN-8 to volume groups DB2-test and docs)
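A hedged sketch of defining one of the AIXprod1 HBAs and linking it to its volume group (the WWPN, volume group ID, and connection name are hypothetical):

dscli> mkhostconnect -wwname 10000000C9A1B2C3 -hosttype pSeries -volgrp V11 AIXprod1_hba1

The -volgrp parameter points at an existing volume group ID, such as one created with mkvolgrp.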

5.2.10 Virtualization hierarchy summary

Going through the virtualization hierarchy, we start with a number of disks that are grouped in array sites. The array sites are created automatically when the disks are installed. The following steps are completed by a user:
1. An array site is transformed into an array, with spare disks.
2. The array is further transformed into a rank with extents formatted for FB data or CKD.
3. The extents from selected ranks are added to an Extent Pool. The combined extents from those ranks in the Extent Pool are used for subsequent allocation to one or more logical volumes. Within the Extent Pool, we can reserve space for TSE volumes by creating a repository. ESE and TSE volumes require virtual capacity to be available in the Extent Pool.
4. We create logical volumes within the Extent Pools (by default, striping the volumes), and assign them a logical volume number that determines which logical subsystem they would be associated with and which server would manage them. This configuration is the same for standard volumes (fully allocated) and ESE volumes. TSE volumes for use with FlashCopy SE can be created only within the repository of the Extent Pool.
5. The LUNs are assigned to one or more volume groups.
6. The host HBAs are configured into a host attachment that is associated with a volume group.

This virtualization concept provides much more flexibility than in previous products. Logical volumes can be dynamically created, deleted, and resized. They can be grouped logically to simplify storage management. Large LUNs and CKD volumes reduce the total number of volumes, which contributes to the reduction of management effort.
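The whole hierarchy can be walked through in the DS CLI. The following hedged sketch mirrors the six steps above for an open systems configuration; all IDs and names are hypothetical, and the exact parameters depend on the installed hardware:

dscli> mkarray -raidtype 5 -arsite S1
dscli> mkrank -array A0 -stgtype fb
dscli> mkextpool -rankgrp 0 -stgtype fb FBpool
dscli> chrank -extpool P0 R0
dscli> mkfbvol -extpool P0 -cap 100 -name prod_#h 1000-1003
dscli> mkvolgrp -type scsimask -volume 1000-1003 prodVG
dscli> mkhostconnect -wwname 10000000C9A1B2C3 -hosttype pSeries -volgrp V0 prod_hba0

In order, these commands correspond to steps 1 through 6: forming the array, forming the rank, creating the Extent Pool and assigning the rank to it, creating the volumes, grouping them in a volume group, and attaching the host HBA.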

The virtualization hierarchy is shown in Figure 5-19.

Figure 5-19 Virtualization hierarchy (from an array site to a RAID array with data, parity, and spare disks; to a rank of type FB; to an Extent Pool of 1 GB FB extents on server 0; to a logical volume; to an LSS such as X'27' in FB address group X'2x' with 4096 addresses; and finally to a volume group and host attachment)

5.3 Benefits of virtualization

The DS8000 physical and logical architecture defines new standards for enterprise storage virtualization. Virtualization layers include the following benefits:

- Flexible LSS definition allows maximization and optimization of the number of devices per LSS.
- No strict relationship between RAID ranks and LSSs.
- No connection of LSS performance to underlying storage.
- The number of LSSs can be defined based upon the following device number requirements:
  – With larger devices, fewer LSSs might be used.
  – Volumes for a particular application can be kept in a single LSS.
  – Smaller LSSs can be defined, if required (for applications that require less storage).
  – Test systems can have their own LSSs with fewer volumes than production systems.
- Increased number of logical volumes:
  – Up to 65280 (CKD)
  – Up to 65280 (FB)
  – 65280 total for CKD + FB
- Any mixture of CKD or FB addresses in 4096 address groups.
- Increased logical volume size:
  – CKD: about 1 TB (1,182,006 cylinders), designed for 219 TB
  – FB: 16 TB, designed for 1 PB
- Flexible logical volume configuration:
  – Multiple RAID types (RAID 5, RAID 6, and RAID 10)
  – Storage types (CKD and FB) aggregated into Extent Pools
  – Volumes that are allocated from extents of an Extent Pool
  – Storage Pool Striping
  – Dynamically adding and removing volumes
  – Logical volume configuration states
  – Dynamic Volume Expansion
  – ESE volumes for Thin Provisioning
  – TSE volumes for FlashCopy SE
  – Extended Address Volumes (CKD)
  – Dynamic Extent Pool merging for Easy Tier
  – Dynamic Volume Relocation for Easy Tier
- Virtualization reduces storage management requirements.

5.4 zDAC - z/OS FICON discovery and Auto-Configuration

The DS8870 supports the z/OS FICON Discovery and Auto-Configuration feature (zDAC), which is deployed by the new IBM zEnterprise zEC12 and z196 servers. This function was developed to reduce the complexity and skills that are needed in a complex FICON production environment for changing the I/O configuration.

As its name implies, zDAC provides the following capabilities:

Discovery:
– Provides the capability to discover attached disk that is connected to FICON fabrics
– Detects new and older storage subsystems
– Detects new control units on existing storage subsystems
– Proposes control units and device numbering
– Proposes paths for all discovery systems to newly discovered control units, including the Sysplex scope

Auto-Configuration:
– For high availability reasons, when zDAC proposes channel paths, it looks at single points of failure only. It does not consider any channel or port speed, or any current performance information.
– After a storage subsystem is explored, the discovered information is compared against the target IODF. Depending on the policy you defined, the target work IODF is updated.

zDAC proposes new configurations that incorporate the current contents of your I/O Definition File (IODF) with additions for new and changed subsystems and their devices that are based on the policy you defined in the Hardware Configuration Definition (HCD). By using zDAC, you can add storage subsystems to an existing I/O configuration in less time. With that scope of discovery and autoconfiguration, the following items are displayed to the user:
- The numbering of the new devices
- The number of paths to new control units that should be configured
- The z/OS image that should be allowed to use the new devices

When zDAC is used, keep in mind the following considerations:
- Physical planning is still required.
- Logical configurations of the storage subsystem are still required.

The following requirements must be met for using zDAC:
- Your System z must be a zEnterprise zEC12 or z196 running z/OS V1R12 or above.
- The LPAR must be authorized to make dynamic I/O configuration (zDCM) changes on each processor that is hosting a discovery system.
- HCD and Hardware Configuration Management (HCM) users must have authority for making dynamic I/O configuration changes.

A schematic overview of the zDAC concept is shown in Figure 5-20.

Figure 5-20 zDAC concept (z/OS sysplex members discover storage through common FICON fabrics and name servers; HCD and zDAC use RNID for topology discovery, a new FICON ELS for rapid discovery of CU images, and new FICON channel commands for issuing ELS commands to the name server and storage, updating the IODF)

Important: The DS8870 supports zDAC. Support is also available for older models at LIC level R5.1 and later. For more information, see z/OS V1R12 HCD User's Guide, SC33-7988.

5.5 EAV V2 - Extended Address Volumes

Today's large storage facilities tend to expand to larger CKD volume capacities, and some installations are running out of disk storage because of the z/OS addressable UCB 64-K limitation. Because of the four-digit device addressing limitation, it is necessary to define larger CKD volumes by increasing the number of cylinders per volume.

With the introduction of EAV volumes, the addressing changed from track to cylinder addressing. EAV V1 supported volumes with up to 262,668 cylinders. EAV V2 now supports volumes with up to 1,182,006 cylinders, which is about 1 TB.

The partial change from track to cylinder addressing creates the following address areas on EAV volumes:
- Track Managed Space: The area on an EAV that is located within the first 65,520 cylinders. It uses 16-bit cylinder numbers with the existing track address format CCCCHHHH:
  – HHHH: 16-bit track number
  – CCCC: 16-bit track cylinder
- Cylinder Managed Space: The area on an EAV that is located above the first 65,520 cylinders. This space is allocated in so-called Multicylinder Units (MCU), which currently have a size of 21 cylinders. The use of 16-bit cylinder addressing allows a theoretical maximum address of 65,535 cylinders. To allocate more cylinders, we must have a new format to address the area above 65,520 cylinders. A new cylinder-track address format addresses the extended capacity on an EAV with 28-bit cylinder numbers, CCCCcccH, where:
  – CCCC: The low order 16 bits of a 28-bit cylinder number
  – ccc: The high order 12 bits of a 28-bit cylinder number
  – H: A 4-bit track number (0-14)

z/OS components and products now support 1,182,006 cylinders. The DS8870 and z/OS V1R12 or above support the following CKD EAV volume sizes:
- 3390 Model A: 1 to 1,182,006 cylinders (about 1.004 TB of addressable storage)
- 3390 Model A: Up to 1062 x 3390 Model 1 (four times the size available with EAV R1)

Configuration granularity:
- 1-cylinder boundary sizes: 1 to 56,520 cylinders
- 113-cylinder boundary sizes: 56,763 (51 x 1113) to 1,182,006 (1062 x 1113) cylinders

The size of an existing Mod 3/9/A volume can be increased to its maximum supported size by using Dynamic Volume Expansion (DVE). The expansion can be done with a DS CLI command, as shown in Example 5-1.

Example 5-1 Dynamically expand CKD volume
dscli> chckdvol -cap 262268 -captype cyl 9ab0
Date/Time: 10. Mai 2010 07:52:55 CEST IBM DSCLI Version: 6.5.1.193 DS: IBM.2107-75KAB25
CMUC00022I chckdvol: CKD Volume 9AB0 successfully modified.

DVE can be done while the volume remains online to the host system. When the relevant volume is in a Copy Services relationship, that Copy Services relationship must be terminated until the source and target volumes are at their new capacity, and then the Copy Services pair must be re-established.

Important: Copy Services are supported only on EAV volumes no larger than 262,668 cylinders.

In the current releases, you can expand all Mod 3/9/A volumes to a large EAV 2 by using DVE. A VTOC refresh through ICKDSF is a best practice because it shows the newly added free space. The VTOC reformat is performed automatically if REFVTOC=ENABLE is set in the DEVSUPxx parmlib member.

The VTOC allocation method for an EAV volume was changed compared to the VTOC that is used for LVS volumes. The size of an EAV VTOC index was increased four-fold and now has 8,192 blocks instead of 2,048 blocks. Because there is no space left inside the Format 1 DSCB, new DSCB formats (Format 8 and Format 9) were created to protect existing programs from seeing unexpected track addresses. Format 8 and 9 DSCBs are new for EAV and are called extended attribute DSCBs. The existing Format 4 DSCB also was changed to point to the new Format 8 DSCB.

Data set type dependencies on an EAV R2

EAV R2 includes the following data set type dependencies:
- All VSAM, sequential (extended and large format, in the case of a sequential data set), BDAM, PDS, PDSE, VVDS, and BCS data sets can be placed on the extended address space (EAS), that is, the cylinder-managed space, of an EAV R2 volume that is running on z/OS V1.12 and above:
  – This support includes all VSAM data types, such as KSDS, RRDS, ESDS, linear data sets, and zFS data sets, including VSAM data sets from IBM CICS®, IBM IMS™, and DB2.
  – The VSAM data sets that are placed on an EAV volume can be SMS or non-SMS managed.
- For an EAV Release 2 volume, the following data sets might exist, but are not eligible to have extents in the extended address space (cylinder-managed space) in z/OS V1.12:
  – VSAM data sets with incompatible CA sizes
  – VTOC (it is still restricted to the first 64K-1 tracks)
  – VTOC index
  – Page data sets
  – VSAM data sets with embedded or keyrange attributes (currently not supported)
  – HFS file system
  – SYS1.NUCLEUS

All other data sets can be placed on an EAV R2 EAS.

The data set placement on EAV as supported on z/OS V1R12 is shown in Figure 5-21.

Figure 5-21 Data set placement on EAV supported on z/OS V1R12 (EAV R2 layout: the chunk region above cylinder address 65,520 holds VSAM, sequential, PDS, BDAM, PDSE, and VVDS data, and the track region holds cylinder addresses at or below 65,520; exploitation continues for non-VSAM extended format data sets, sequential data sets, PDS, PDSE, BDAM, and BCS/VVDS, for XRC secondaries in an alternate subchannel set, for larger 1 TB volume sizes, and for Dynamic Volume Expansion with Copy Services inactive and automatic VTOC and index rebuild)

z/OS prerequisites for EAV volumes

EAV volumes include the following prerequisites:
- EAV volumes are supported only on z/OS V1.10 and above. If you try to bring an EAV volume online for a system with a pre-z/OS V1.10 release, the EAV volume does not come online.
- A non-VSAM data set that is allocated with EADSCB on z/OS V1.12 cannot be opened on earlier versions of z/OS.
- If you want to use the improvements of EAV R2, they are supported only on z/OS V1.12 and above.
- In parmlib member IGDSMSxx, the parameter USEEAV(YES) must be set to allow data set allocations on EAV volumes. The default value is NO and prevents allocating data sets to an EAV volume. Example 5-2 on page 139 shows a sample message that you receive when you are trying to allocate a data set on an EAV volume and USEEAV(NO) is set.
- After a Large Volume is upgraded to a Mod 1062 (EAV2 with 1,182,006 cylinders) and the system is granted permission, an automatic VTOC refresh and index rebuild is performed. The permission is granted by REFVTOC=ENABLE in parmlib member DEVSUPxx. The trigger to the system is a state change interrupt that occurs after the volume expansion, which is presented by the storage subsystem to the z/OS.
- There are no other HCD considerations for the 3390 Model A definitions.

Example 5-2 Message IEF021I with USEEAV set to NO
IEF021I TEAM142 STEP1 DD1 EXTENDED ADDRESS VOLUME USE PREVENTED DUE TO SMS USEEAV (NO) SPECIFICATION.

There is a new parameter called the Break Point Value (BPV). This parameter determines which size a data set must feature to be allocated on a cylinder-managed area. The BPV value can be 0-65520: 0 means that the cylinder-managed area is always preferred, and 65520 means that a track-managed area is always preferred. The default for the parameter is 10 cylinders. The BPV can be set in parmlib member IGDSMSxx and in the Storage Group definition (the Storage Group BPV overrides the system-level BPV).

Important: Before EAV volumes are implemented, the latest maintenance and z/OS V1.10 and V1.11 coexisting maintenance levels must be applied. For EAV 2, the latest maintenance for z/OS V1.12 must be installed. For more information, see the latest Preventive Service Planning (PSP) information at this website:
http://www.ibm.com/webapp/set2/psp/srchBroker

How to identify an EAV 2

Any EAV has more than 65,520 cylinders. To address this volume, for EAV 2 the Format 4 DSCB was updated to x'FFFE' and DSCB 8+9 is used for the cylinder-managed address space. Most of the EAV eligible data sets were modified by software with EADSCB=YES. An easy way to identify any EAV that is used is to list the VTOC Summary in TSO/ISPF option 3.4. Example 5-3 shows the VTOC summary of a 3390 Model A with a size of 1 TB CKD usage.

Example 5-3 TSO/ISPF 3.4 panel for a 1 TB EAV volume: VTOC Summary
Menu Reflist Refmode Utilities Help
When the data set list is displayed, enter either "/" on the data set list command field for the command prompt pop-up, an ISPF line command, the name of a TSO command, CLIST, or REXX exec, or "=" to execute the previous command.

EAV R2 migration considerations
Now that there is a new addressing mode, programs must be updated. When you are reviewing EAV R2 migration, consider the following items:

Assistance: Migration assistance is provided by the Application Migration Assistance Tracker. For more information about the Assistance Tracker, see APAR II13752, which is available at this website:
http://www.ibm.com/support/docview.wss?uid=isg1II13752

Recommended actions:
- Review your programs and look for calls to the macros OBTAIN, REALLOC, CVAFDIR, CVAFSEQ, CVAFDSM, and CVAFFILT. The macros were changed and you must update your programs to reflect those changes.
- Look for programs that calculate volume or data set size by any means, including reading a VTOC or VTOC index directly with a BSAM or EXCP DCB. This task is important because new values are now returned for the volume size.
- Review your programs and look for EXCP and STARTIO macros for DASD channel programs and for other programs that examine DASD channel programs or track addresses.
- Look for programs that examine any of the many operator messages that contain a DASD track address, block address, data set size, or volume size. The messages now show new values.

Migrating data:
- Define new EAVs by creating them on the DS8870 or by expanding existing volumes with Dynamic Volume Expansion (a DS CLI sketch follows at the end of this section).
- Add new EAV volumes to storage groups and storage pools, and update the ACS routines.
- Copy data at the volume level by using IBM TDMF, DFSMSdss, PPRC (Metro Mirror, Global Copy, and Global Mirror), and FlashCopy.
- Copy data at the data set level by using SMS attrition, LDMF, DFSMSdss, and DFSMShsm.

With z/OS V1.12, all data set types are currently good volume candidates for EAV R2, except work volumes, TSO batch and load libraries, and system volumes.
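As a hedged sketch of the Dynamic Volume Expansion step, the DS CLI chckdvol command can grow an existing 3390 volume to EAV size. The storage image ID and volume ID below are hypothetical placeholders, the capacity is given in cylinders, and the -captype option support should be verified for your microcode level. The volume must not be in a Copy Services relationship while it is expanded:

   dscli> chckdvol -dev IBM.2107-75FA120 -cap 1182006 -captype cyl 0A00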

Chapter 6. IBM System Storage DS8000 Copy Services overview

This chapter provides an overview of the Copy Services functions that are available with the DS8000 series models, including Remote Mirror and Copy and Point-in-Time Copy functions. These functions make the DS8000 series a key component for disaster recovery solutions, data migration activities, and data duplication and backup solutions.

This chapter covers the following topics:
- Introduction to Copy Services
- FlashCopy and FlashCopy SE
- Remote Pair FlashCopy (Preserve Mirror)
- Remote Mirror and Copy:
  – Metro Mirror
  – Global Copy
  – Global Mirror
  – Metro/Global Mirror
  – z/OS Global Mirror
  – z/OS Metro/Global Mirror

The information that is provided in this chapter is only an overview. Copy Services are covered in more detail in the following IBM Redbooks and IBM Redpapers publications:
- IBM System Storage DS8000: Copy Services for Open Systems, SG24-6788
- IBM System Storage DS8000: Copy Services for IBM System z, SG24-6787
- IBM System Storage DS8000 Series: IBM FlashCopy SE, REDP-4368

6.1 Copy Services

Copy Services is a collection of functions that provide disaster recovery, data migration, and data duplication functions. With the Copy Services functions, you can create backup data with little or no disruption to your application, and you can back up your application data to a remote site for disaster recovery.

The Copy Services functions run on the DS8870 storage unit and support open systems and System z environments. They are also supported on the other DS8000 family models.

DS8000 Copy Services functions
Copy Services in the DS8000 include the following optional licensed functions:
- IBM System Storage FlashCopy and IBM FlashCopy SE, which are point-in-time copy functions
- Remote mirror and copy functions, which include:
  – IBM System Storage Metro Mirror, previously known as synchronous Peer-to-Peer Remote Copy (PPRC)
  – IBM System Storage Global Copy, previously known as PPRC eXtended Distance
  – IBM System Storage Global Mirror, previously known as asynchronous PPRC
  – IBM System Storage Metro/Global Mirror, a three-site solution to meet the most rigorous business resiliency needs
  – For migration purposes on an RPQ base, consider IBM System Storage Metro/Global Copy. Understand that this combination of Metro Mirror and Global Copy is not suited for disaster recovery solutions; it is intended only for migration purposes.

For IBM System z users, the following options are available:
- z/OS Global Mirror, previously known as eXtended Remote Copy (XRC)
- z/OS Metro/Global Mirror, a three-site solution that combines z/OS Global Mirror and Metro Mirror

Many design characteristics of the DS8000, its data copy and mirror capabilities, and its features contribute to the full-time protection of your data.

Copy Services management interfaces
You control and manage the DS8000 Copy Services functions by using the following interfaces:
- DS Storage Manager, the GUI of the DS8000 (DS GUI)
- DS Command-Line Interface (DS CLI), which provides a set of commands that cover all Copy Services functions and options
- Tivoli Storage Productivity Center for Replication, with which you can manage large Copy Services implementations easily and which provides data consistency across multiple systems. Tivoli Storage Productivity Center for Replication is now part of Tivoli Storage Productivity Center 5.1 and IBM SmartCloud Virtual Storage Center.
- DS Open Application Programming Interface (DS Open API)

System z users can also use the following interfaces:
- TSO commands
- ICKDSF utility commands
- The ANTRQST application programming interface (API)
- The DFSMSdss utility

6.2 FlashCopy and FlashCopy SE

FlashCopy and FlashCopy SE provide the capability to create copies of logical volumes, with the ability to access the source and target copies immediately. These types of copies are called point-in-time copies.

FlashCopy is an optional, licensed feature of the DS8000. The following variations of FlashCopy are available:
- Standard FlashCopy, also referred to as the Point-in-Time Copy (PTC) licensed function
- The FlashCopy SE licensed function

FlashCopy and FlashCopy SE are different licenses. If you want to be able to create space-efficient FlashCopies and full volume copies, you need both licenses. To use FlashCopy, you must have the corresponding licensed function indicator feature in the DS8870. You also must acquire the corresponding DS8000 function authorization with the adequate feature number license in terms of physical capacity. For more information about feature and function requirements, see 10.1, "IBM System Storage DS8000 licensed functions" on page 272.

6.2.1 Basic concepts

In this section, we describe the basic characteristics and options of FlashCopy and FlashCopy SE.

FlashCopy creates a point-in-time copy of the data. When a FlashCopy operation is started, it takes only a few seconds to establish the FlashCopy relationship, which consists of the source and target volume pairing and the necessary control bitmaps. Thereafter, a copy of the source volume is available as though all the data was copied. When the pair is established, you can read and write to both the source and target volumes.

For Open Systems, FlashCopy creates a copy of the logical unit number (LUN). The target LUN must exist before you can use FlashCopy to copy the data from the source LUN to the target LUN.

The following variations of FlashCopy are available:
- Standard FlashCopy uses a normal volume as the target volume. This target volume must be at least the same size as the source volume, and its space is fully allocated in the storage system.
- FlashCopy Space Efficient (SE) uses Space Efficient volumes (see "Space Efficient volumes" on page 118) as FlashCopy targets. An SE target volume features a virtual size that is at least that of the source volume. However, space is not allocated for this volume when the volume is created and the FlashCopy initiated. Space is allocated only for updated tracks when the source or target volume is written.

FlashCopy and FlashCopy SE can coexist on a DS8000.
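As a hedged DS CLI sketch, establishing and checking a standard FlashCopy pair might look as follows. The storage image ID and the source:target volume pair are hypothetical placeholders:

   dscli> mkflash -dev IBM.2107-75FA120 0100:0200
   dscli> lsflash -dev IBM.2107-75FA120 0100:0200

The relationship exists from the moment mkflash returns; the background copy then proceeds independently of host access to either volume.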

Important: In this chapter, track refers to a piece of data in the DS8000. The DS8000 uses the concept of logical tracks to manage Copy Services functions.

The basic concepts of a standard FlashCopy are explained in the following section and shown in Figure 6-1.

Figure 6-1 FlashCopy concepts. FlashCopy provides a point-in-time copy: the FlashCopy command is issued at time T0 and the copy is immediately available; reads and writes to both the source and the copy are possible; when the copy is complete, the relationship between source and target ends.

If you access the source or the target volumes while the FlashCopy relation exists, I/O requests are handled in the following manner:

Read from the source volume. When a read request goes to the source, data is read directly from there.

Read from the target volume. When a read request goes to the target volume, FlashCopy checks the bitmap and takes one of the following actions:
– If the requested data was already copied to the target, it is read from there.
– If the requested data was not yet copied, it is read from the source.

Write to the source volume. When a write request goes to the source, the data is first written to the cache and persistent memory (write cache). Later, when the data is destaged to the physical extents of the source volume, FlashCopy checks the bitmap for the location that is to be overwritten and takes one of the following actions:
– If the point-in-time data was already copied to the target, the update is written to the source directly.
– If the point-in-time data was not yet copied to the target, it is copied immediately, and only then is the update written to the source.

Write to the target volume. Whenever data is written to the target volume while the FlashCopy relationship exists, the storage system checks the bitmap and updates it, if necessary. FlashCopy does not overwrite data that was written to the target with point-in-time data.

The FlashCopy background copy
By default, standard FlashCopy (also called FULL copy) starts a background copy process that copies all point-in-time data to the target volume. After the completion of this process, the FlashCopy relation ends and the target volume is independent of the source.

The background copy can slightly impact application performance because the physical copy needs storage resources. The impact is minimal because host I/O always has higher priority than the background copy.

No background copy option
A standard FlashCopy relationship can also be established by using the NOCOPY option. With this option, FlashCopy does not initiate a background copy. Point-in-time data is copied only when required because of an update to the source or target. This configuration eliminates the impact of the background copy. This option is useful in the following situations:
- When the target is not needed as an independent volume
- When repeated FlashCopy operations to the same target are expected

FlashCopy SE is automatically invoked with the NOCOPY option because the target space is not allocated and the available physical space is smaller than the size of the volume. A full background copy would contradict the concept of space efficiency.

6.2.2 Benefits and use
The point-in-time copy that is created by FlashCopy often is used when you need a copy of the production data with little or no application downtime. Use cases for the point-in-time copy that is created by FlashCopy include online backup, testing new applications, and creating a copy of transactional data for data mining purposes. To the host or application, the target looks exactly like the original source. It is an instantly available, binary copy.

IBM FlashCopy SE is designed for temporary copies. FlashCopy SE is optimized for use cases in which only about 5% of the source volume data is updated during the life of the relationship. If more than 20% of the source data is expected to change, standard FlashCopy likely is the better choice. Standard FlashCopy often has superior performance to FlashCopy SE. If performance on the source or target volumes is important, the use of standard FlashCopy is a better choice.

The following scenarios are examples of when the use of IBM FlashCopy SE is a good choice:
- Creating a temporary copy and backing it up to tape
- Creating temporary point-in-time copies for application development or DR testing
- Performing regular online backups for different points in time
- FlashCopy target volumes in a Global Mirror (GM) environment. For more information about Global Mirror, see 6.3.3, "Global Mirror" on page 152.

In all of these scenarios, the write activity to source and target is the crucial factor that decides whether FlashCopy SE can be used. In many cases, only a small percentage of the entire data is changed in a day. In this situation, you can use this function for daily backups and save the time for the physical copy of FlashCopy.

6.2.3 FlashCopy options
FlashCopy provides many more options and functions. We explain the following options and capabilities in this section:
- Data Set FlashCopy
- Incremental FlashCopy (refresh target volume)
- Multiple Relationship FlashCopy
- Consistency Group FlashCopy
- FlashCopy on existing Metro Mirror or Global Copy primary
- Inband commands over remote mirror link
- Persistent FlashCopy

Persistent FlashCopy
Persistent FlashCopy allows the FlashCopy relationship to remain even after the (FULL) copy operation completes. You must explicitly delete the relationship to terminate it.

Data Set FlashCopy
By using Data Set FlashCopy, you can create a point-in-time copy of individual data sets instead of complete volumes in an IBM System z environment.

Incremental FlashCopy (refresh target volume)
Refresh target volume refreshes a FlashCopy relation without copying all data from source to target again. Incremental FlashCopy requires the background copy and the Persistent FlashCopy options to be enabled, and the first full copy must be completed. When a subsequent FlashCopy operation is initiated, only the tracks that changed on the source and target must be copied from the source to the target (a DS CLI sketch follows at the end of this section). The direction of the refresh also can be reversed, from (former) target to source. A usage case for this feature is creating regular point-in-time copies as online backups or time stamps.

Multiple Relationship FlashCopy
FlashCopy allows a source to have relationships with up to 12 targets simultaneously. Only one of the multiple relations can be incremental.

Consistency Group FlashCopy
By using Consistency Group FlashCopy, you can freeze and temporarily queue I/O activity to a volume. Consistency Group FlashCopy helps you to create a consistent point-in-time copy without quiescing the application across multiple volumes, and even across multiple storage units. Consistency Group FlashCopy ensures that the order of dependent writes is always maintained and thus creates host-consistent copies, not application-consistent copies. The copies have power-fail or crash-level consistency. To recover an application from Consistency Group FlashCopy target volumes, you must perform the same kind of recovery as after a system crash or power outage.
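Returning to the NOCOPY and Incremental options described above, the following hedged DS CLI sketch shows how they are selected at establish time and how a later refresh is run. All IDs are hypothetical placeholders, and the option spelling should be verified for your code level:

   dscli> mkflash -dev IBM.2107-75FA120 -nocp 0100:0200                    no background copy
   dscli> mkflash -dev IBM.2107-75FA120 -record -persist 0100:0200        incremental-capable pair
   dscli> resyncflash -dev IBM.2107-75FA120 -record -persist 0100:0200    copies changed tracks only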

FlashCopy on existing Metro Mirror or Global Copy primary
By using this option, you can establish a FlashCopy relationship where the target also is a Metro Mirror or Global Copy primary volume. Through this relationship, you create full or incremental point-in-time copies at a local site and then use remote mirroring to copy the data to the remote site.

Important: You cannot FlashCopy from a source to a target if the target also is a Global Mirror primary volume.

For more information about Metro Mirror and Global Copy, see 6.3.1, "Metro Mirror" on page 151 and 6.3.2, "Global Copy" on page 152.

Inband commands over remote mirror link
In a remote mirror environment, commands to manage FlashCopy at the remote site can be issued from the local or intermediate site and transmitted over the remote mirror Fibre Channel links. This ability eliminates the need for a network connection to the remote site solely for the management of FlashCopy.

Important: This function is available by using the DS CLI, TSO, and ICKDSF commands, but not by using the DS Storage Manager GUI.

6.2.4 FlashCopy SE-specific options
Most options for standard FlashCopy (see 6.2.3, "FlashCopy options" on page 146) work identically for FlashCopy SE. The options that differ are described in this section.

Data Set FlashCopy
FlashCopy SE relationships are limited to full volume relationships. Therefore, data set level FlashCopy is not supported within FlashCopy SE.

Incremental FlashCopy
Because Incremental FlashCopy implies an initial full volume copy and a full volume copy is not possible in an IBM FlashCopy SE relationship, Incremental FlashCopy is not possible with IBM FlashCopy SE.

Multiple Relationship FlashCopy SE
Standard FlashCopy supports up to 12 relationships per source volume, and one of these relationships can be incremental. A FlashCopy onto a Space Efficient volume has a certain overhead because more tables and pointers must be maintained. Therefore, it might be advisable to avoid using all 12 possible relations.
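For illustration, a FlashCopy SE relationship is requested by declaring the target as Space Efficient. The following is a hedged DS CLI sketch with hypothetical IDs; the NOCOPY behavior is implied for SE targets, and the flag support should be verified for your code level:

   dscli> mkflash -dev IBM.2107-75FA120 -nocp -tgtse 0100:0210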

6.2.5 Remote Pair FlashCopy
Remote Pair FlashCopy, or Preserve Mirror, transmits the FlashCopy command to the remote site if the target volume is mirrored with Metro Mirror.

If Preserve Mirror is not used, the mirroring behavior is shown in Figure 6-2.

Figure 6-2 The FlashCopy target is also a Metro Mirror source volume

The following steps apply when Preserve Mirror is not used, as shown in Figure 6-2:
1. FlashCopy is issued at the Local A volume, which starts a FlashCopy relationship between the Local A and the Local B volumes.
2. When the FlashCopy operation starts and replicates the data from the Local A to the Local B volume, the Metro Mirror volume pair status changes from FULL DUPLEX to DUPLEX PENDING. During the DUPLEX PENDING window, the Remote Volume B does not provide a defined state regarding its data status and is unusable from a recovery viewpoint.
3. After FlashCopy finishes replicating the data from the Local A volume to the Local B volume, the Metro Mirror volume pair changes its status from DUPLEX PENDING back to FULL DUPLEX. The remote Volume B again provides a recoverable state and can be used if there is a planned or unplanned outage at the local site.

As the name implies, Preserve Mirror preserves the existing Metro Mirror status of FULL DUPLEX, which guarantees that there is no discontinuity of disaster recovery readiness. The FlashCopy command is issued by an application or by you to the Local A volume, with the Local B volume as the FlashCopy target. The DS8000 firmware propagates the FlashCopy command through the PPRC links from the Local Storage Server to the Remote Storage Server. This inband propagation of a Copy Services command is possible only for FlashCopy commands. Figure 6-3 shows this approach.

Figure 6-3 Remote Pair FlashCopy preserves the Metro Mirror FULL DUPLEX state

Complete the following steps, as shown in Figure 6-3:
1. The FlashCopy command is issued to the Local A volume with the Local B volume as the target. The DS8000 firmware propagates the command over the PPRC links to the Remote Storage Server.
2. The Local Storage Server and the Remote Storage Server then run the FlashCopy operation independently of each other: Local A to Local B, and Remote A to Remote B. The Local Storage Server coordinates the activities at the end of the process and takes action if the FlashCopies do not succeed at both Storage Servers. The Metro Mirror volume pair status keeps FULL DUPLEX throughout.

Important: On a DS8870, a Remote Pair FlashCopy is allowed while the PPRC pair is suspended. Remote Pair FlashCopy is now allowed even if one or both pairs are suspended or duplex-pending. If the FlashCopy between the PPRC primaries (A to A') and the FlashCopy between the PPRC secondaries (B to B') are on the same storage system (Storage Facility Image), the FlashCopy B to B' is done. This feature is supported on z/OS V1.11, V1.12, and V1.13.

Figure 6-4 on page 150 shows an example in which Remote Pair FlashCopy might have the most relevance: a data set level FlashCopy in a Metro Mirror CKD volumes environment where all participating volumes are replicated. Usually, the user has no influence on where the newly allocated FlashCopy target data set is placed. The key item of this configuration is that disaster recovery protection is not exposed at any time, and FlashCopy operations can be freely taken within the disk storage configuration; the disaster recovery viewpoint and the IBM GDPS recovery standpoint are fully assured.

Figure 6-4 FlashCopy allowed when PPRC is suspended (FlashCopy A to A' between the PPRC primaries at the local site, mirrored as B to B' between the PPRC secondaries at the remote site)
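On the DS CLI, Preserve Mirror behavior is requested with the -pmir option of mkflash, as sketched below with hypothetical IDs. As an assumption to verify for your code level, the value required fails the FlashCopy if the mirror state cannot be preserved, whereas preferred falls back to the behavior that is shown in Figure 6-2:

   dscli> mkflash -dev IBM.2107-75FA120 -pmir required 0100:0200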

For a more information. 6. 150 IBM System Storage DS8870 Architecture and Implementation . see the IBM Redbooks publications that are listed in “Related publications” on page 533. REDP-4504. In addition. These functions are used to implement remote data backup and disaster recovery solutions. see IBM System Storage DS8000: Remote Pair FlashCopy (Preserve Mirror).A B A’ B’ Figure 6-4 FlashCopy allowed when PPRC is suspended For a more information about Remote Pair FlashCopy. The following Remote Mirror and Copy functions are optional licensed functions of the DS8000: Metro Mirror Global Copy Global Mirror Metro/Global Mirror Remote Mirror functions can be used in Open Systems and System z environments.3 Remote Mirror and Copy The Remote Mirror and Copy functions of the DS8000 are a set of flexible data mirroring solutions that allow replication between volumes on two or more disk storage systems. we describe these Remote Mirror and Copy functions. System z users can use the DS8000 for the following functions: z/OS Global Mirror z/OS Metro/Global Mirror GDPS In the following sections.

Licensing requirements
To use any of these Remote Mirror and Copy optional licensed functions, you must have the corresponding licensed function indicator feature in the DS8000. You also must acquire the corresponding DS8870 function authorization with the adequate feature number license in terms of physical capacity. For more information about feature and function requirements, see 10.1, "IBM System Storage DS8000 licensed functions" on page 272.

Also, consider that some of the remote mirror solutions, such as Global Mirror, Metro/Global Mirror, or z/OS Metro/Global Mirror, integrate more than one licensed function. In this case, you must have all of the required licensed functions.

6.3.1 Metro Mirror
Metro Mirror, previously known as synchronous PPRC, provides real-time mirroring of logical volumes between two DS8870s, or any other combination of DS8870, DS8800, DS8700, DS8300, DS8100, DS6800, and ESS800, that can be located up to 300 km from each other. It is a synchronous copy solution in which a write operation must be carried out on both copies, at the local and remote sites, before it is considered complete.

The basic operational characteristics of Metro Mirror are shown in Figure 6-5.

Figure 6-5 Metro Mirror basic operation. The server write (1) goes to the primary (local) volume and is written to the secondary (remote) volume (2); after the write hit on the secondary (3), the write is acknowledged to the server (4).
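As a hedged DS CLI sketch, a Metro Mirror pair is set up by defining PPRC paths between the local and remote LSSs and then creating the synchronous pair. Every storage image ID, WWNN, and port pair below is a hypothetical placeholder; using -type gcp instead of -type mmir would create an asynchronous Global Copy pair, which is described next:

   dscli> mkpprcpath -dev IBM.2107-75FA120 -remotedev IBM.2107-75FA150
            -remotewwnn 5005076303FFD18E -srclss 01 -tgtlss 01 I0001:I0101
   dscli> mkpprc -dev IBM.2107-75FA120 -remotedev IBM.2107-75FA150 -type mmir 0100:0100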

6.3.2 Global Copy
Global Copy, previously known as Peer-to-Peer Remote Copy eXtended Distance (PPRC-XD), is a non-synchronous remote copy technology that copies data over longer distances than is possible with Metro Mirror. Global Copy is included in the Metro Mirror or Global Mirror license.

When you are operating in Global Copy mode, the source does not wait for copy completion on the target before a host write operation is acknowledged. Therefore, the host is not affected by the Global Copy operation. Write data is sent to the target as the connecting network allows, independent of the order of the host writes. This configuration makes the target data lag behind and inconsistent during normal operation.

You must take extra steps to make Global Copy target data usable at specific points in time. Which steps are used depends on the purpose of the copy:

Data migration
You can use Global Copy to migrate data over long distances. When you want to switch from the old to the new data, you must stop the applications on the old site, tell Global Copy to synchronize the data, and wait until it is finished.

Asynchronous mirroring
Global Copy also is used to create a full copy of data from an existing machine to a new machine without affecting customer performance. When the Global Copy completes, you can stop it and then start with the Copy relationship (Metro Mirror or Global Mirror), starting with a full resynchronization so that the data is consistent. If the Global Copy is incomplete, the data at the remote machine is not consistent.

6.3.3 Global Mirror
Global Mirror, previously known as Asynchronous PPRC, is a two-site, long-distance, asynchronous, remote copy technology. With Global Mirror, the data that the host writes at the local site is asynchronously mirrored to the storage unit at the remote site. This solution integrates the Global Copy and FlashCopy technologies. With special management steps (under control of the local master storage unit), a consistent copy of the data is automatically maintained and periodically updated by using FlashCopy on the storage unit at the remote site. You need extra storage at the remote site for these FlashCopies.

Global Mirror benefits
Global Mirror features the following benefits:
- Support for almost unlimited distances between the local and remote sites, with the distance typically limited only by the capabilities of the network and the channel extension technology. This unlimited distance enables you to choose your remote site location based on business needs and enables site separation to add protection from localized disasters.
- A consistent and restartable copy of the data at the remote site, created with minimal impact to applications at the local site.
- Data currency where, for many environments, the remote site lags behind the local site typically 3 - 5 seconds, which minimizes the amount of data exposure in the event of an unplanned outage. The actual lag in data currency that you experience depends upon a number of factors, including specific workload characteristics and the bandwidth between the local and remote sites.
- Dynamic selection of the wanted recovery point objective (RPO), based upon business requirements and optimization of available bandwidth.
- Session support: data consistency at the remote site is internally managed across up to eight storage units that are located at the local site and the remote site.
- Efficient synchronization of the local and remote sites with support for failover and failback operations, which helps to reduce the time that is required to switch back to the local site after a planned or unplanned outage.

How Global Mirror works
The basic operational characteristics of Global Mirror are shown in Figure 6-6.

Figure 6-6 Global Mirror basic operation. The server write (1) is acknowledged immediately (2); the data is written to the secondary non-synchronously (A to B), and a FlashCopy (B to C) is taken automatically in a cycle that is controlled by the active session.

The A volumes at the local site are the production volumes and are used as Global Copy primaries. The data from the A volumes is replicated to the B volumes by using Global Copy. At a certain point, a Consistency Group is created from all of the A volumes, even if they are in separate storage units. This creation has little impact on applications because the creation of the Consistency Group is quick (often a few milliseconds).

After the Consistency Group is created, the application writes can continue updating the A volumes. The missing increment of the consistent data is sent to the B volumes by using the existing Global Copy relations. After all data reaches the B volumes, Global Copy is halted for a brief period while Global Mirror creates a FlashCopy from the B to the C volumes. These volumes now contain a consistent set of data at the secondary site.

The data at the remote site is current within 3 - 5 seconds, but this recovery point depends on the workload and bandwidth that is available to the remote site.

With its efficient and autonomic implementation, Global Mirror is a solution for disaster recovery implementations where a consistent copy of the data always must be available at a remote location that can be separated by a long distance from the production site.

6.3.4 Metro/Global Mirror
Metro/Global Mirror is a three-site, multi-purpose replication solution. Local site (site A) to intermediate site (site B) provides high availability replication by using Metro Mirror, and intermediate site (site B) to remote site (site C) supports long-distance disaster recovery replication with Global Mirror (see Figure 6-7). This cascaded approach for a three-site solution does not burden the primary storage system with sending out the data twice.

Figure 6-7 Metro/Global Mirror elements. Normal application I/O goes to site A; Metro Mirror replicates synchronously over a short distance from A to B, and Global Copy replicates asynchronously over a long distance from B to C, with an incremental NOCOPY FlashCopy from C to D forming the Global Mirror consistency point. Failover application I/O can run at the intermediate or remote site.

Metro Mirror and Global Mirror are well-established replication solutions. Metro/Global Mirror combines Metro Mirror and Global Mirror to incorporate the following best features of the two solutions:

Metro Mirror:
– Synchronous operation supports zero data loss.
– The opportunity to locate the intermediate site disk systems close to the local site allows use of intermediate site disk systems in a high-availability configuration.

Global Mirror:
– Asynchronous operation supports long-distance replication for disaster recovery.
– The Global Mirror methodology has no effect on applications at the local site.
– This solution provides a recoverable, restartable, and consistent image at the remote site with an RPO that is typically within 3 - 5 seconds.

Long distances: Metro Mirror can be used for distances of up to 300 km. However, when used in a Metro/Global Mirror implementation, a shorter distance for the Metro Mirror connection is more appropriate to effectively guarantee high availability of the configuration.

6.3.5 Multiple Global Mirror sessions
The DS8870 supports several Global Mirror sessions within a storage facility image (SFI). Up to 32 Global Mirror hardware sessions can be supported within the same DS8870, as shown in Figure 6-8.

Figure 6-8 Single GM hardware session support. Two DS8100/DS8300 systems at the local site (Site 1), one acting as GM master and one as subordinate, replicate the Global Copy primary volumes of session 20 over the SAN to the secondary volumes at the remote site (Site 2).

The session that is shown in Figure 6-8 on page 155 is a GM master session that controls a GM session. A GM session is identified by a GM session ID (in this example, number 20); the session ID applies to any LSS at Site 1. The two-storage-system configuration consists of a GM master in the DS8000 at the bottom, which contains Global Copy primary volumes that belong to session 20, and a subordinate DS8000 that also contains Global Copy primary volumes that belong to session 20. The GM master controls the subordinate through PPRC FCP-based paths between both DS8000 storage systems. Consistency is provided across all primary subsystems.

Potential impacts with such a single GM session are shown in Figure 6-9.

Figure 6-9 Multiple applications - single GM session. Applications 1, 2, and 3 on separate servers use volumes that are spread across the primary DS8300s at Site 1, all within the single GM session 20 that replicates to the remote Site 2.

Assume that a consolidated disk storage environment is used, which is commonly shared by various application servers. To provide good performance, all volumes are spread across the primary DS8300s. For disaster recovery purposes, a remote site exists with corresponding DS8300s, and the data volumes are replicated through a Global Mirror session with the Global Mirror master function in a DS8100 or a DS8300.

When server 2 with Application 2 fails and the participating volumes that are connected to Application 2 are not accessible from the servers in the remote site (Site 2), the entire GM session 20 must fail over to the remote site. With the DS8100 and DS8300, it is not possible to create more than one GM session per GM master.

Figure 6-10 shows the impact on the other two applications, Application 1 and Application 3.

Figure 6-10 Multiple applications - single GM session - failover requirements. When session 20 fails over from Site 1 to Site 2 because of the failure of the Application 2 server, the servers for Application 1 and Application 3 need to fail over as well.

Because only one GM session is possible with a DS8100 or DS8300 on one SFI, the entire session must be failed over to the remote site to restart Application 2 on the backup server at the remote site. This configuration implies a service interruption for the failed server with Application 2 and service impacts to Application 1 and Application 3. The other two servers with Application 1 and Application 3 are affected and must also be swapped over to the remote site: Site 1 must be shut down and restarted in Site 2 after the GM session failover process is completed.

Figure 6-11 shows the same server configuration, but the DS8100 or DS8300 storage systems are exchanged for DS8870 or DS8700/DS8800 systems with release 6.1 (LMC 7.6.1.xx.xx) or later. With this configuration, you can use up to 32 dedicated GM master sessions.

Figure 6-11 The DS8000 provides multiple GM master session support within R6.1. Applications 1, 2, and 3 each have their own GM master session (10, 20, and 30) between Site 1 and Site 2.

Each set of volumes of an application server is in its own GM session. In this example, Application 1 is connected to volumes in LSS number 00 to LSS number 3F, Application 2 connects to volumes in LSS 40-7F, and the server with Application 3 connects to volumes in LSS 80-BF. Only one GM session can be in a certain LSS, which you must consider when you are planning how to divide volumes into separate GM sessions.

Now when the Application 2 server fails, only GM session 20 is failed over to the remote site, which is controlled by the concerned GM master session within the same DS8000, and the concerned server in Site 2 restarts with Application 2 after the failover process completes. This finer granularity and dedicated recovery action is not uncommon because different applications might have different RPO requirements.

The ability to fail over only the configuration of a failing server or application improves the availability of the other applications when compared to the situation on older DS8000 models. Notice that the basic management of a GM session does not change. The GM session builds on the existing Global Mirror technology and microcode of the DS8000. In addition, an installation can now have one or more test sessions in parallel with one or more productive GM sessions within the same SFI to test and gain experience on possible management tools and improvements.
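As a hedged DS CLI sketch of the session plumbing that is discussed above, a session number is defined on an LSS, volumes are added to it, and the Global Mirror master is started. All IDs are hypothetical, the Global Copy A-to-B pairs and the B-to-C FlashCopies (typically established with -record, -persist, and -nocp) must already exist, and the exact options should be verified in the DS CLI command reference:

   dscli> mksession -dev IBM.2107-75FA120 -lss 01 -volume 0100-010F 20
   dscli> mkgmir -dev IBM.2107-75FA120 -lss 01 -session 20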

6.3.6 Thin provisioning enhancements on open environments
The DS8870 storage system provides full support for thin provisioned volumes on fixed block volumes only. All types of Copy Services, such as Metro Mirror, Global Copy, Global Mirror, and Metro/Global Mirror, are supported, but with the following limitations:
- All volumes must be ESE or standard (full-sized) volumes. No intermixing of PPRC volumes is allowed, which means that the source and target volumes must be of the same type.
- The FlashCopy portion of Global Mirror can be ESE or TSE.
- The ESE FlashCopy target portion of Global Mirror releases space only at the initial establish, unless the no copy option is selected.

During the initial establish, all space on the secondary volume is released. The same number of extents that is allocated for the primary volume is then allocated for the secondary. Now with thin provisioning, the copy is done only on the effective amount of customer data and not on all of the volume capacity, as shown in Figure 6-12. With this enhancement, customers can save disk capacity on PPRC devices.

Figure 6-12 Thin Provisioning and Full Provisioning comparison example. With full provisioning, the real capacity on the storage system equals the volume's virtual capacity regardless of use; with thin provisioning, capacity is allocated from a capacity pool on write, so the real capacity tracks the used capacity.
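For illustration, an ESE (thin provisioned) fixed block volume is created by selecting the extent space efficient storage allocation method on mkfbvol. The storage image ID, extent pool, capacity, and volume ID below are hypothetical placeholders, and the -sam option support should be verified for your microcode level:

   dscli> mkfbvol -dev IBM.2107-75FA120 -extpool P1 -cap 100 -sam ese 1100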

6.3.7 GM and MGM improvement because of collision avoidance
Global Copy and Global Mirror are asynchronous functions that are suited for long distances between a primary and a secondary DS8000 storage system. At long distances, it is important to allow hosts to complete an I/O operation even if the transaction on the remote site is incomplete. The previous implementation did not always meet this objective.

Global Mirror locks tracks in the consistency group (CG) on the primary DS8000 at the end of the CG formation window. This configuration implies that host writes to CG tracks are held in abeyance while the track is locked. During high activity (for example, long-running batch jobs), multiple writes might use the same track or block, which results in a collision that increases the response time. The host might experience an increased response time if a collision occurs. If the host write collides with a locked CG track, consistency cannot be obtained as requested, and the DS8000 microcode cancels the current CG formation. If such cancellations occur five times consecutively, the microcode allows collisions for one CG, which increases the RPO and causes increased run times for jobs.

The DS8870 and previous models at LMC release 6.3 (LMC 7.6.30.xx) provide a significant improvement on GM collision avoidance. When a host write collides with a locked CG track, the following steps are performed: the host adapter copies the CG track data to a side file, which allows the host write to complete without having to wait for the previous write to complete. The side file can grow up to 5% of cache; if the side file exceeds 5% of cache, collisions are handled as in the previous implementation.

6.3.8 z/OS Global Mirror
z/OS Global Mirror, previously known as eXtended Remote Copy (XRC), is a copy function that is available for the z/OS operating system. It involves a System Data Mover (SDM) that is found only in z/OS. z/OS Global Mirror maintains a consistent copy of the data asynchronously at a remote location and can be implemented over unlimited distances. It is a combined hardware and software solution that offers data integrity and data availability and can be used as part of business continuance solutions, for workload movement, and for data migration.

The z/OS Global Mirror function is an optional licensed function (called Remote Mirroring for System z, RMZ) of the DS8000 that enables the SDM to communicate with the primary DS8000. No z/OS Global Mirror license is required for the auxiliary storage system (it can be any storage system that is supported by z/OS). However, you should consider that you might want to reverse the mirror, in which case your secondary DS8000 would need a z/OS Global Mirror license.

The basic operational characteristics of z/OS Global Mirror are shown in Figure 6-13.

Figure 6-13 z/OS Global Mirror basic operations. The server write at the primary site is acknowledged immediately (1 and 2), while the System Data Mover at the secondary site reads the updates asynchronously and manages data consistency.

z/OS Global Mirror on zIIP
The IBM z9 Integrated Information Processor (zIIP) is a special engine that has been available for System z since the z9 generation. Given the appropriate hardware and software, z/OS can use these processors to handle eligible workloads from the System Data Mover (SDM) in a z/OS Global Mirror (zGM) environment. The z/OS software must be at V1.8 or later with APAR OA23174, specifying the zGM PARMLIB parameter zIIPEnable(YES). With this setup, a range of zGM workload can be offloaded to zIIP processors.
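A minimal sketch of that setting follows. The PARMLIB member name ANTXIN00 for SDM parameters is an assumption based on common XRC setups; only the zIIPEnable keyword itself comes from the text above:

   SYS1.PARMLIB(ANTXIN00):
   zIIPEnable(YES)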

6.3.9 z/OS Metro/Global Mirror
This mirroring capability implements z/OS Global Mirror to mirror primary site data to a location that is a long distance away and also uses Metro Mirror to mirror primary site data to a location within the metropolitan area. This configuration enables a z/OS three-site high-availability and disaster recovery solution for even greater protection against unplanned outages.

The basic operational characteristics of a z/OS Metro/Global Mirror implementation are shown in Figure 6-14.

Figure 6-14 z/OS Metro/Global Mirror. The DS8000 at the local site is the Metro Mirror and z/OS Global Mirror primary: Metro Mirror replicates over metropolitan distance to the Metro Mirror secondary at the intermediate site, and z/OS Global Mirror replicates over unlimited distance to the z/OS Global Mirror secondary at the remote site, where a FlashCopy is taken when required.

6.3.10 Summary of Remote Mirror and Copy function characteristics
In this section, we summarize the use of and considerations for the set of Remote Mirror and Copy functions that are available with the DS8000 series.

Metro Mirror
Metro Mirror is a function for synchronous data copy at a limited distance and includes the following considerations:
- There is no data loss, and it allows for rapid recovery for distances up to 300 km.
- There is a slight performance impact for write operations.

Global Copy
Global Copy is a function for non-synchronous data copy at long distances, which is limited only by the network implementation, and includes the following considerations:
- It can copy your data at nearly an unlimited distance, making it suitable for data migration and daily backup to a remote distant site.
- The copy is normally fuzzy but can be made consistent through a synchronization procedure.
- Global Copy is typically used for data migration to new DS8000s by using the existing PPRC FC infrastructure.

Global Mirror
Global Mirror is an asynchronous copy technique and includes the following considerations:
- It can copy to nearly an unlimited distance.
- It is scalable across multiple storage units.
- It can realize a low RPO if there is enough link bandwidth; when the link bandwidth capability is exceeded with a heavy workload, the RPO might grow.
- Global Mirror causes only a slight impact to your application system.

z/OS Global Mirror
z/OS Global Mirror is an asynchronous copy technique that is controlled by z/OS host software called System Data Mover. The following considerations apply:
- It can copy to nearly unlimited distances.
- It is highly scalable.
- It has a low RPO; the RPO might grow if the bandwidth capability is exceeded, or host performance might be impacted. RPO specifies how much data you can afford to re-create if the system must be recovered.
- Additional host server hardware and software are required.

6.3.11 Consistency group considerations
In disaster recovery environments that are running Metro/Global Mirror (MGM), the use of consistency groups is recommended to ensure data consistency across multiple volumes. Consistency groups suspend all copies simultaneously if a suspension occurs on one of the copies, and you can create a consistent copy in the secondary site with an adaptable RPO.

Consistency groups should be managed by GDPS or Tivoli Storage Productivity Center for Replication to automate the control and actions in real time and to be able to freeze all Copy Services I/O to the secondaries to keep all data aligned.

6.3.12 GDPS on zOS environments
GDPS is the solution that is offered by IBM to manage large and complex environments and to always keep the customer data safe and consistent. GDPS easily monitors and manages your MGM pairs. It provides an easy interface to manage multiple sites with MGM pairs, and it also allows customers to perform disaster recovery tests without affecting production. These features lead to faster recovery from real disaster events. GDPS is the ideal solution if you target 99.9999% availability.

GDPS functionality includes the following examples:
- The option to hot swap between primary and secondary: Metro Mirror is managed concurrently with customer operations. With its HyperSwap capability, operations can continue if there is a disaster or planned outage.
- GDPS freezes the Metro Mirror pairs if there is a problem with mirroring, which maintains data consistency on all pairs. It restarts the copy process to the secondaries after the problem is evaluated and solved.
- Disaster recovery management: in case of a disaster at the primary site, operations can restart at the remote site quickly and safely while data consistency is always monitored.

For more information about GDPS, see GDPS Family: An introduction to concepts and capabilities, SG24-6374.

6.3.13 Tivoli Storage Productivity Center for Replication functionality
By using IBM Tivoli Storage Productivity Center for Replication, which is now part of Tivoli Storage Productivity Center 5.1 or IBM SmartCloud Virtual Storage Center, you can manage synchronous and asynchronous mirroring in several environments. Instead of managing individual volume pairs (as you do with the DS CLI), Tivoli Storage Productivity Center for Replication manages groups of volumes (sessions). You can manage your mirroring environment with a few mouse clicks. Tivoli Storage Productivity Center for Replication makes it easy to start, activate, and monitor a full MGM environment.

For more information, see IBM Tivoli Storage Productivity Center V5.1 Release Guide, SG24-7894, and IBM System Storage DS8000: Copy Services for Open Systems, SG24-6788.

6.4 Resource Groups for copy services
Resource Groups are implemented in such a way that each Copy Services volume is separated and protected from other volumes in a Copy Services relationship. This configuration gives you the ability of multi-tenancy by assigning specific resources to specific tenants, which limits Copy Services relationships so that they exist only between resources within each tenant's scope of resources. Therefore, in a multi-customer environment, the customer data is logically protected from each other, depending on how the resources are configured or managed.

During Resource Groups definition, we define an aggregation of resources and define certain policies.

Resource Groups provide additional policy-based limitations to DS8000 users to secure the partitioning of Copy Services resources between user-defined partitions. This process of specifying the appropriate rules is performed by an administrator by using resource group functions. The RG environments also can be managed by Tivoli Storage Productivity Center for Replication.

The DS8870 supports the Resource Groups concept, which was implemented in the IBM System Storage DS8700 and DS8800 with microcode Release 6.1 and later. The use of resource groups on the DS8000 introduces the following concepts:
- Resource Group Label (RGL): The RGL is a text string of 1 - 32 characters.
- Resource Group (RG): An RG consists of new configuration objects. It has a unique RGL within a storage facility image (SFI). An RG contains specific policies; volumes, LSSs, and LCUs are associated with a single RG.
- Resource Scope (RS): The RS is a text string of 1 - 32 characters that selects one or more resource group labels by matching the RS to the RGL string. A resource scope specifies a selection criterion for a set of resource groups.
- User Resource Scope (URS): Each user ID is assigned a URS that contains an RS. The URS cannot equal zero.

Figure 6-15 shows an example of how multi-tenancy is used in a mixed DS8000 environment and how the OS environment is separated.

Figure 6-15 Example of a multi-tenancy configuration in a mixed environment
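As a hedged DS CLI sketch of these concepts, an administrator might define a resource group, associate volumes with it, and limit a user to the matching scope. All labels, IDs, and the user name below are hypothetical, and the exact command names and options (introduced with the resource group microcode support) should be verified in the DS CLI reference for your code level:

   dscli> mkresgrp -dev IBM.2107-75FA120 -label tenant_a RG1
   dscli> chfbvol -resgrp RG1 0100-010F
   dscli> chuser -scope tenant_a tenant_a_admin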

For more information about the implementation of, planning for, and use of Resource Groups, see IBM System Storage DS8000 Series: Resource Groups, REDP-4758.

Important: Resource Groups are implemented in the code by default and are available at no extra cost.

Chapter 7. Architectured for Performance

In this chapter, we describe the performance characteristics of the IBM System Storage DS8870 with regard to physical and logical configuration. The considerations that are presented in this chapter can help you plan the physical and logical setup of the DS8000.

This chapter covers the following topics:
- DS8870 hardware: Performance characteristics
- Software performance: Synergy items
- Performance considerations for disk drives
- DS8000 superior caching algorithms
- Performance considerations for logical configuration
- I/O Priority Manager
- IBM Easy Tier
- Performance and sizing considerations for open systems
- Performance and sizing considerations for System z

For more information about performance monitoring and tuning, see the IBM Redbooks publication DS8800 Performance Monitoring and Tuning, SG24-8013, and this website:
http://www.ibm.com/systems/z/resources/faq/index.html

7.1 DS8870 hardware: Performance characteristics
The IBM System Storage DS8870 is designed to support the most demanding business applications with its exceptional all-around performance and data throughput. The DS8870 features IBM POWER7 processor-based server technology and uses a PCI Express I/O infrastructure to help support high performance. These features are combined with world-class business resiliency and encryption capabilities to deliver a unique combination of high availability, performance, and security.

In this section, we review the architectural layers of the DS8870 and describe the performance characteristics that differentiate the DS8870 from other disk systems.

7.1.1 Vertical growth and scalability
Scalability details for the DS8870 are shown in Figure 7-1.

Figure 7-1 DS8870 Scalability details

The DS8870 provides a nondisruptive upgrade from the smallest to the largest configuration, including adding cache and processors for increased performance and adding host ports for increased connectivity. Besides the 2-core and 4-core processor options, the DS8870 includes options for 8-core and 16-core processors per controller. Up to 1 TB of system memory is available in the DS8870 for increased performance. You also can add hard disk drives (HDDs), solid-state drives (SSDs), and model 96E expansion frames to the base 961 frame for increased capacity. Other advanced-function software features, such as Easy Tier, I/O Priority Manager, and storage pool striping, which also existed in the DS8700 and DS8800 models, contribute to the performance potential.

How the eight device adapter pairs in the I/O enclosure pairs (shown vertically in the lower left corner of the first two frames) correlate with the disk enclosure pairs (spread over four frames) in the DS8870 Enterprise Class is shown by using different colors in Figure 7-2.

Figure 7-2 Enterprise class scalability

For more information about hardware and architectural scalability, see Chapter 3, "DS8870 hardware components and architecture" on page 33.

Figure 7-3 shows an example of how the DS8870's performance scales as the configuration changes from 2-core to 16-core in an open systems database environment.

Figure 7-3 Linear performance scalability

7.1.2 DS8870 Fibre Channel switched interconnection at the back-end
Fibre Channel (FC) technology is commonly used to connect a group of disks in a daisy-chained fashion in a Fibre Channel Arbitrated Loop (FC-AL). To overcome the arbitration issue within FC-AL, the DS8870 architecture uses a switch-based approach when FC-AL switched loops are created, as shown in Figure 4-10 on page 85. This system is called a Fibre Channel switched disk system.

These switches use the FC-AL protocol and attach to the SAS drives (bridging to the SAS protocol) through a point-to-point connection. The arbitration message of a drive is captured in the switch, processed, and propagated back to the drive, without routing it through all of the other drives in the loop.

Performance is enhanced because both device adapters (DAs) connect to the switched Fibre Channel subsystem back-end, as shown in Figure 7-4. Each DA port can concurrently send and receive data.

Figure 7-4 High availability and increased bandwidth connect both DAs to two logical loops

These two switched point-to-point connections to each drive, which also connect both DAs to each switch, result in the following benefits:
- There is no arbitration competition and interference between one drive and all the other drives because there is no hardware in common for all the drives in the FC-AL loop. This architecture doubles the bandwidth over conventional FC-AL implementations because two simultaneous operations from each DA allow for two concurrent read operations and two concurrent write operations.
- In addition to superior performance, reliability, availability, and serviceability (RAS) are improved in this setup when compared to conventional FC-AL. The failure of a drive is detected and reported by the switch. If one of the switches fails, a disk enclosure service processor detects the failing switch and reports the failure by using the other loop. All drives can still connect through the remaining switch.
- The switch ports distinguish between intermittent failures and permanent failures. The ports understand intermittent failures, which are recoverable, and collect data for predictive failure statistics.

The full SAS 2.0 speed of 6 Gbps also is used for each individual drive; the back-end works at the full 8-Gbps FC speed up to the place where the FC-to-SAS conversion is made.

Thus far, we described the physical structure. A virtualization approach that is built on top of the high-performance architectural design contributes even further to enhanced performance, as described in Chapter 5, "Virtualization concepts" on page 105.

7.1.3 Fibre Channel device adapter
The DS8000 relies on eight disk drive modules (DDMs) to form a RAID 5, RAID 6, or RAID 10 array. These DDMs are spread over two Fibre Channel fabrics. With the use of the virtualization approach and the concept of extents, the device adapters (DAs) map the virtualization scheme over the disk system back-end. For more information about disk system virtualization, see Chapter 5, "Virtualization concepts" on page 105.

Figure 7-5 Fibre Channel device adapter

The RAID device adapter is built on PowerPC technology with four Fibre Channel ports and high-function, high-performance ASICs. The adapter also is PCIe Gen2-based and runs at 8 Gbps. Each DA performs the RAID logic and frees up the processors from this task. The actual throughput and performance of a DA is determined by the port speed and hardware that is used, and by the firmware efficiency.

Comparing the DS8870 to the DS8800
When compared to the DS8800's POWER6+ processor technology with two different processor complex options, the DS8870 uses the new IBM POWER7 processor technology with four different processor complex options, which affect its performance. The DS8870 provides from 16 GB to 1 TB of processor memory, which continues to be managed in 4-KB segments for optimal cache efficiency. The total memory that is supported was increased by 166% when compared to the DS8800.

The DS8870 also features some internal communication improvements when compared to the DS8800. Figure 7-6 shows the changes in the communication and attachment of the Central Electronics Complex (CEC) and some changes in the PCI Express adapters. For more information about the CEC hardware architecture, see 3.2, "DS8870 architecture overview" on page 38.

Figure 7-6 Change in CECs and XC communication

For communications between the CECs and the I/O enclosures, the DS8870 uses the Device Adapters in Split Affinity mode. Split Affinity means that each CEC uses one device adapter in every I/O enclosure. This configuration balances the bandwidth from each bay and allows for considerable performance improvements when compared to the approach that is used in previous DS8000 models. For cross-cluster communication, the DS8870 uses the PCIe fabric instead of a separate path (RIO), as in the DS8800.

These enhancements in CPU, memory, and internal communication allow the DS8870 to deliver up to three times the I/O operations per second in transaction processing workload environments and better response times, when compared to the DS8800. The DS8870 also provides significant performance improvements in sequential read and sequential write throughputs. For Peer-to-Peer Remote Copy (PPRC) establish, the POWER7 processor in the DS8870 provides better bandwidth scaling when compared to DS8800 as a function of PPRC paths.

Latest benchmark values: Vendor-neutral independent organizations develop generic benchmarks to allow an easier comparison of intrinsic storage product values in the marketplace. See the following sections of the Storage Performance Council (SPC) website for the latest benchmark values of the DS8870 to compare with other IBM and non-IBM storage products:

- For an SPC-1 benchmark result, where random I/O workload is used for testing, see this website: http://www.storageperformance.org/results/benchmark_results_spc1
- For an SPC-2 benchmark result, where large block-sized sequential workload is used for testing, see this website: http://www.storageperformance.org/results/benchmark_results_spc2

7.1.4 Eight-port and four-port host adapters

Before we examine the heart of the DS8870, we briefly review the host adapters and their design characteristics to address performance. Figure 7-7 shows the host adapters. These adapters are designed to hold eight or four Fibre Channel (FC) ports, which can be configured to support Fibre Channel Protocol (FCP) or Fibre Connection (FICON).

Figure 7-7 Host adapter with four Fibre Channel ports

The Host Adapter architecture of DS8870 includes the following characteristics (which are identical to the details of DS8800):

- The architecture is fully at 8 Gbps
- Uses the Gen2 PCIe interface
- Features a dual-core 1.5-GHz PowerPC processor
- The adapter memory is increased fourfold from the previous DS8000 model
- The 8-Gbps adapter ports can negotiate to 8, 4, or 2 Gbps (1 Gbps is not possible); for attachments to 1-Gbps hosts, use a SAN switch

Each port provides industry-leading throughput and I/O rates for FICON and FCP. The front end with the 8-Gbps ports scales up to 128 ports for a DS8870 by using the eight-port host bus adapters (HBAs). This configuration results in a theoretical aggregated host I/O bandwidth of 128 times 8 Gbps.

With FC adapters that are configured for FICON, the DS8000 series provides the following configuration capabilities:

- Fabric or point-to-point topologies
- A maximum of 128 host adapter ports, depending on the DS8870 processor feature
- A maximum of 509 logins per FC port
- A maximum of 8192 logins per storage unit
- A maximum of 1280 logical paths on each FC port
- Access to all control-unit images over each FICON port
- A maximum of 512 logical paths per control unit image

FICON host channels limit the number of devices per channel to 16,384. To fully access 65,280 devices on a storage unit, it is necessary to connect a minimum of four FICON host channels to the storage unit. By using a switched configuration, you can expose 64 control-unit images (16,384 devices) to each host channel.

7.2 Software performance: Synergy items

There are a number of performance features in the DS8000 that work together with the software on the host and are collectively referred to as synergy items. These items allow the DS8000 to cooperate with the host systems in manners beneficial to the overall performance of the systems.

7.2.1 Synergy on System p

The IBM System Storage DS8000 can work in cooperation with System p to provide the following performance enhancement functions.

Full Easy Tier support
IBM Easy Tier is an intelligent data placement algorithm of DS8000 that works on extent level for Open Systems and System z environments, regardless of the operating system of the server to which DS8000 is attached. For more information, see 7.7, "IBM Easy Tier" on page 197. This feature brings the following values:

- Server and storage resources remain optimized for performance and cost objectives
- Significant performance increase
- Reduction in administrative costs

Statement of direction: IBM released a Statement of direction announcement letter on June 4, 2012 to provide information about future enhancements to some IBM System Storage products. According to this announcement, IBM intends to expand its IBM Easy Tier functions on the DS8000 to a broader level by leveraging direct-attached solid-state storage on selected AIX and Linux servers. IBM Easy Tier will manage the direct-attached SSD on the host as a large and low-latency cache for the hottest data, while advanced disk system functions, such as RAID protection and remote mirroring, are preserved. For more information about this letter, see this website:
http://www.ibm.com/common/ssi/cgi-bin/ssialias?subtype=ca&infotype=an&appname=iSource&supplier=877&letternum=ENUSZG12-0163

Cooperative caching: Synergy with AIX and DB2 on System p
Another software-related performance item is cooperative caching, a feature that provides a way for the host to send cache management hints to the storage facility. Currently, the host can indicate that the information recently accessed is unlikely to be accessed again soon. This status decreases the retention period of the cached data, which allows the subsystem to conserve its cache for data that is more likely to be reaccessed, thus improving the cache hit ratio.

With the implementation of cooperative caching, the AIX operating system allows trusted applications, such as DB2, to provide cache hints to the DS8000. This ability improves the performance of the subsystem by keeping more of the repeatedly accessed data within the cache. Cooperative caching is supported in System p AIX with the Multipath I/O (MPIO) Path Control Module (PCM) that is provided with the Subsystem Device Driver (SDD). It is only applicable to raw volumes (no file system) and with the 64-bit kernel.

End-to-end I/O priority: Synergy with AIX and DB2 on System p
End-to-end I/O priority is a new addition (requested by IBM) to the SCSI T10 standard. This feature allows trusted applications to override the priority that is given to each I/O by the operating system. Currently, AIX supports this feature with DB2. This feature is only applicable to raw volumes (no file system) and with the 64-bit kernel.

The priority of an AIX process can be 0 (no assigned priority) or any integer value from 1 (highest priority) to 15 (lowest priority). All I/O requests that are associated with a process inherit its priority value. However, with end-to-end I/O priority, DB2 can change this value for critical data transfers, such as requests that might be prerequisites for others (for example, DB2 logs). The priority is delivered to the storage subsystem in the FCP Transport Header. At the DS8000, the host adapter gives preferential treatment to higher priority I/O, which improves performance for specific requests that are deemed important by the application.

Long busy wait host tolerance: Synergy with AIX on System p
Another addition to the SCSI T10 standard is SCSI long busy wait, which provides a way for the target system to specify that it is busy and how long the initiator should wait before an I/O is tried again. This information, provided in the FCP status response, prevents the initiator from trying again too soon. This delay, in turn, reduces unnecessary requests and potential I/O failures that are caused by exceeding a set threshold for the number of retries. IBM System p AIX supports SCSI long busy wait with MPIO, and it is also supported by the DS8000.

PowerHA Extended distance extensions: Synergy with AIX on System p
The IBM PowerHA® SystemMirror® Enterprise Edition (former HACMP/XD) provides server and LPAR failover capability over extended distances. It can also take advantage of the Metro Mirror or Global Mirror functions of the DS8000 as a data replication mechanism between the primary and remote site. PowerHA SystemMirror with Metro Mirror supports distances of up to 300 km. The DS8000 requires no changes to be used in this fashion.

7.2.2 Synergy on System z

The IBM System Storage DS8000 can work in cooperation with System z to provide the following performance enhancement functions.

Parallel Access Volume (PAV) and HyperPAV
Parallel Access Volume (PAV) is an optional licensed function of the DS8000 for the z/OS and z/VM operating systems. It provides the ability to perform multiple I/O requests to the same volume at the same time. z/OS's Workload Manager (WLM) manages the assignment of so-called alias addresses to base addresses. The number of alias addresses defines the parallelism of I/Os to a volume. With dynamic PAV, the reaction time of WLM is too slow to cope with rapidly changing workloads. HyperPAV is an extension to PAV in which WLM is no longer involved and any alias address from a pool of addresses can be used to drive the I/O. For more information about PAV, see 7.9, "Parallel Access Volume" on page 206.

Full IBM Easy Tier support
IBM Easy Tier is an intelligent data placement algorithm of DS8000 that works on extent level for Open Systems and System z environments, regardless of the operating system of the server to which DS8000 is attached. For more information, see 7.7, "IBM Easy Tier" on page 197.

DS8000 I/O Priority Manager (System z)
I/O Priority Manager, now tightly integrated with zWLM, is intended to improve disk I/O performance for important workloads. It drives I/O prioritization to the disk system by allowing WLM to give priority to the system's resources (disk arrays) automatically when higher priority workloads are not meeting their performance goals. I/O of less prioritized workload to the same Extent Pool is slowed down to give the higher prioritized workload a higher share of the resources, mainly the disk drives. This function, together with z/OS Workload Manager (zWLM), enables more effective storage consolidation and performance management when different workloads share a common disk pool (Extent Pool). Integration with zWLM is exclusive to DS8000 and System z systems.

Important: I/O Priority Manager is an optional feature. It is not supported for the DS8870 business class configuration with 16-GB cache.

Extended Address Volumes
This capability can help relieve address constraints to support large storage capacity needs by extending the capability of System z environments to support volumes that can scale up to approximately 1 TB (1,182,006 cylinders).

Quick initialization (System z)
IBM System Storage DS8000 supports quick volume initialization for System z environments, which can help customers who frequently delete volumes. Quick initialization initializes the data logical tracks or blocks within a specified extent range on a logical volume with the appropriate initialization pattern for the host. Normal read and write access to the logical volume is allowed during the initialization process. Depending on the operation, the quick initialization can be started for the entire logical volume or for an extent range on the logical volume. Quick initialization improves device initialization speeds and allows a copy services relationship to be established after a device is created, which allows capacity to be reconfigured without waiting for initialization. The extent metadata must be allocated and initialized before the quick initialization function is started.

High Performance FICON
High Performance FICON (zHPF) is a protocol extension of FICON that communicates in a single packet a set of commands that must be executed, which improves the channel throughput on small block transfers. zHPF is an optional feature of the DS8870.

Recent enhancements to zHPF include Extended Distance capability, zHPF List Pre-fetch, Format Write, and zHPF support for sequential access methods. zHPF allows the control unit to stream the data for multiple commands back in a single data transfer section for I/Os that are initiated by various access methods. DS8870 with zHPF and z/OS V1.13 provides significant I/O performance improvements for certain I/O transfers for workloads that use the QSAM, BPAM, and BSAM access methods.

zHPF is enhanced to support DB2 list prefetch. These enhancements include a new cache optimization algorithm that can greatly improve performance and hardware efficiency. All DB2 I/Os, including format writes and list prefetches, are eligible for zHPF. In addition, DB2 can benefit from the new caching algorithm at the DS8000 level called List Pre-fetch Optimizer (LPO). When combined with the latest releases of z/OS and DB2, it can demonstrate up to a 14x to 60x increase in sequential or batch processing performance. For more information about List Prefetch, see DB2 for z/OS and List Prefetch Optimizer, REDP-4862. For more information about zHPF, see 7.9, "High Performance FICON for z" on page 218.

7.3 Performance considerations for disk drives

When you are planning your system, you should consider the capacity and the number of disk drives that are needed to satisfy the performance requirements. For this purpose, various disk vendors provide the disk specifications on their websites.

Important: When a storage system is sized, the I/O performance requirements of the servers and applications should be defined up front because they play a large part in dictating the physical and logical configuration of the disk system. Before the disk subsystem is designed, the disk space requirements of the application should be well-understood.

You can approach this task from the disk side and look at basic disk figures. Current SAS 15-K RPM disks, for example, provide an average seek time of approximately 3 ms and an average latency of 2 ms. For transferring only a small block, the transfer time can be neglected. This time is an average of 5 ms per random disk I/O operation, or 200 IOPS. A combined number of eight disks (as is the case for a DS8000 array) thus potentially sustains 1600 IOPS when spinning at 15-K RPM. Reduce the number by 12.5% (1400) when you assume a spare drive in the eight pack.

Back on the host side, consider an example with 1000 IOPS from the host, a read-to-write ratio of 70%/30%, and 50% read cache hits. This configuration leads to the following IOPS numbers:

- 700 read IOPS.
- 350 read I/Os must be read from disk (based on the 50% read cache hit ratio).
- 300 writes with RAID 5 result in 1200 disk operations because of the RAID 5 write penalty (read old data and parity, write new data and parity).
- This totals to 1550 disk I/Os.

With 15K RPM DDMs performing 1000 random IOPS from the server, we actually complete 1550 I/O operations on disk, compared to a maximum of 1600 operations for 7+P configurations or 1400 operations for 6+P+S configurations. Thus, 1000 random I/Os from a server with a standard read-to-write ratio and a standard cache hit ratio saturate the disk drives. We made the assumption that server I/O is purely random. When there are sequential I/Os, track-to-track seek times are much lower and higher I/O rates are possible. We also assumed that reads have a hit ratio of only 50%. With higher hit ratios, higher workloads are possible. These considerations show the importance of intelligent caching algorithms as used in the DS8000. Although this discussion is theoretical in approach, it provides a first estimate.

Because the access times for the disks are the same for the same RPM speeds, but the capacities differ, the I/O density is different. A 146 GB 15K RPM disk drive can be used for access densities up to, and slightly over, 1 I/O per second per GB. For 600-GB drives, it is approximately 0.25 I/O per second per GB. Thus, you can determine the number and type of ranks that are required based on the needed capacity and on the workload characteristics in terms of access density, read-to-write ratio, and hit rates.

After the speed of the disk is decided, the capacity can be calculated based on your storage capacity needs and the effective capacity of the RAID configuration you use. For more information about calculating these needs, see Table 8-8 on page 247.
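The sizing arithmetic above is mechanical enough to script. The following Python sketch is our illustration, not an IBM sizing tool; the function name and the default RAID 5 write penalty of four back-end operations per host write are assumptions made for this example:

```python
# Back-end disk I/O estimate for a random workload, as in the example above.
# Illustrative sketch only. The RAID 5 write penalty of 4 back-end
# operations per host write is the classic read-old-data, read-old-parity,
# write-new-data, write-new-parity cycle.

def backend_ios(host_iops, read_ratio, read_hit_ratio, write_penalty=4):
    """Return the number of back-end disk operations per second."""
    reads = host_iops * read_ratio
    writes = host_iops * (1 - read_ratio)
    disk_reads = reads * (1 - read_hit_ratio)   # only read misses hit disk
    disk_writes = writes * write_penalty        # RAID 5 write penalty
    return disk_reads + disk_writes

# The example from the text: 1000 host IOPS, 70% reads, 50% read cache hits.
total = backend_ios(1000, read_ratio=0.70, read_hit_ratio=0.50)
print(total)        # 1550.0 back-end disk I/Os

# Compare against what an eight-drive 15K RPM array sustains (200 IOPS/drive).
print(8 * 200)      # 1600 IOPS for 7+P; reduce to 1400 for 6+P+S with a spare
```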

Solid-State Drives
From a performance point of view, the best choice for your DS8870 disks would be the SSDs. SSDs have no moving parts (no spinning platters and no actuator arm). The performance advantages are the fast seek time and average access time. They are targeted at applications with heavy IOPS, bad cache hit rates, and random access workloads, which necessitate fast response times. Database applications with their random and intensive I/O workloads are prime candidates for deployment on SSDs. For more information about concepts and functions of IBM Easy Tier and its practical usage, see IBM System Storage DS8000 Easy Tier, REDP-4667.

Important: SSD drives should be configured as RAID 5 arrays and have the RAID 10 option via RPQ.

Enterprise SAS drives
Enterprise SAS drives provide high performance, reliability, availability, and serviceability. Enterprise drives rotate at 15,000 or 10,000 RPM. If an application requires high-performance data throughput and continuous, intensive I/O operations, enterprise drives are the best price performance option.

Nearline-SAS
When disk alternatives are analyzed, keep in mind that the 3 TB nearline drives are the largest and slowest of the drives that are available for the DS8870. Nearline-SAS drives are a cost-efficient storage option for lower intensity storage workloads and are available with the DS8870. These drives are meant to complement, not compete with, existing Enterprise SAS drives. These drives are a poor choice for high performance or I/O intensive applications.

Cost-effective option: The nearline-SAS drives offer a cost-effective option for lower priority data, such as various fixed content, data archival, reference data, and near-line applications that require large amounts of storage capacity for lighter workloads.

RAID level
The DS8000 series offers RAID 5, RAID 6, and RAID 10.

RAID 5
Normally, a write in RAID 5 causes the following four disk operations, the so-called write penalty:

- The old data and the old parity information must be read.
- New parity is calculated in the device adapter.
- Data and parity are written to disk.

Most of this activity is hidden to the server or host because the I/O is complete when data enters cache and NVS. A random write causes a cache hit, but the I/O is not complete until a copy of the write data is put in NVS. When data is destaged to disk, the DS8000 series can detect sequential workload. When a complete stripe is in cache for destage, the DS8000 series switches to a RAID 3-like algorithm. Because a complete stripe must be destaged, the old data and parity do not need to be read. Instead, the new parity is calculated across the stripe, and the data and parity are destaged to disk. This configuration provides good sequential performance. RAID 5 is used because it provides good performance for random and sequential workloads and it does not need much more storage for redundancy (one parity drive).

RAID 6
RAID 6 is an option that increases data fault tolerance. It allows for additional drive failure tolerance when compared to RAID 5 by using a second independent distributed parity scheme (dual parity). RAID 6 was designed for protection during longer rebuild times on larger capacity drives to cope with the risk of having a second drive failure within a rank while the failed drive is being rebuilt. RAID 6 is an option for the enterprise SAS drives. RAID 6 should be considered in situations where you would consider RAID 5, but need increased reliability.

RAID 6 provides a read performance that is similar to RAID 5, but has more write penalty than RAID 5 because it must write a second parity stripe. It has the following characteristics:

- Sequential read: About 99% of the RAID 5 rate
- Sequential write: About 65% of the RAID 5 rate
- Random 4K 70%R/30%W IOPS: About 55% of the RAID 5 rate

The performance is degraded with two failing disks.

Important: The 3-TB nearline drives should be configured as RAID 6 arrays and have the RAID 10 option via RPQ.

RAID 10
A workload that is dominated by random writes benefits from RAID 10. Here, data is striped across several disks and mirrored to another set of disks. A write causes only two disk operations when compared to the four operations of RAID 5. Thus, for twice the number of drives (and probably cost), we can achieve four times more random writes, so it is worth considering the use of RAID 10 for high-performance random write workloads. However, you need nearly twice as many disk drives for the same capacity when compared to RAID 5.

The decision to configure capacity as RAID 5, RAID 6, or RAID 10, and the amount of capacity to configure for each type, can be made at any time. RAID 5, RAID 6, and RAID 10 arrays can be intermixed within a single system, and the physical capacity can be logically reconfigured later (for example, RAID 6 arrays can be reconfigured into RAID 5 arrays). However, the arrays must first be emptied because reformatting does not preserve the data. For more information about important restrictions on DS8870 RAID configurations, see 4.6.1, "RAID configurations" on page 84.

Important: RAID configuration information does change occasionally. Consult with your IBM Service Representative for the latest information about supported RAID configurations.
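To make the trade-off concrete, the following sketch contrasts the back-end operations per random host write for the three RAID levels. The RAID 6 value of six operations (three reads, three writes) is the commonly used small-random-write model and is our assumption; the text itself only states that RAID 6 has a higher penalty than RAID 5:

```python
# Back-end disk operations generated per random host write for each RAID
# level, using the classic small-write cost model: RAID 5 pays 4 operations,
# RAID 6 pays 6 (second parity stripe, assumed value), RAID 10 pays 2
# (one write to each mirror).
WRITE_PENALTY = {"RAID 5": 4, "RAID 6": 6, "RAID 10": 2}

def backend_write_ops(host_writes_per_sec, raid_level):
    """Back-end operations per second caused by random host writes."""
    return host_writes_per_sec * WRITE_PENALTY[raid_level]

for level in WRITE_PENALTY:
    print(level, backend_write_ops(300, level))
# 300 host writes cost 1200 (RAID 5), 1800 (RAID 6), or 600 (RAID 10)
# back-end operations, which is why RAID 10 suits write-heavy workloads.
```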

7.4 DS8000 superior caching algorithms

Most, if not all, high-end disk systems have an internal cache that is integrated into the system design. With its powerful POWER7 processors, the server architecture of the DS8870 makes it possible to manage such large caches with small cache segments of 4 KB (and hence large segment tables) without the need to partition the cache. The POWER7 processors have enough power to implement sophisticated caching algorithms. These algorithms and the small cache segment size optimize cache hits. Therefore, the DS8870 provides excellent I/O response times. Cache hits are also optimized when different workloads, such as sequential workload and transaction-oriented random workload, are active at the same time.

The DS8870 can be equipped with up to 1024 GB of memory, of which the major part is used as cache. This configuration means that more than double the cache of the DS8800 model is available for the same maximum disk capacity. Write data is always protected by maintaining a copy of the write data in the Non-volatile Storage (NVS) of the other DS8000 internal server until the data is destaged to disks.

7.4.1 Sequential Adaptive Replacement Cache

The DS8000 series uses the Sequential Adaptive Replacement Cache (SARC) algorithm, which was developed by IBM Storage Development in partnership with IBM Research. It is a self-tuning, self-optimizing solution for a wide range of workloads with a varying mix of sequential and random I/O streams. SARC is inspired by the Adaptive Replacement Cache (ARC) algorithm and inherits many features of it. For more information about ARC, see "Outperforming LRU with an adaptive replacement cache algorithm" by N. Megiddo, et al., in IEEE Computer, volume 37, number 4, pages 58-65, 2004. For more information about SARC, see "SARC: Sequential Prefetching in Adaptive Replacement Cache" by Binny Gill, et al., in Proceedings of the USENIX 2005 Annual Technical Conference, pages 293-308.

SARC attempts to determine the following cache characteristics:

- When data is copied into the cache
- Which data is copied into the cache
- Which data is evicted when the cache becomes full
- How the algorithm dynamically adapts to different workloads

The DS8000 series cache is organized in 4-KB pages that are called cache pages or slots. This unit of allocation (which is smaller than the values that are used in other storage systems) ensures that small I/Os do not waste cache memory.

The decision to copy data into the DS8000 cache can be triggered from the following policies:

Demand paging
Eight disk blocks (a 4K cache page) are brought in only on a cache miss. Demand paging is always active for all volumes and ensures that I/O patterns with some locality discover at least recently used data in the cache.

Prefetching
Data is copied into the cache speculatively even before it is requested. To prefetch, a prediction of likely data accesses is needed. Because effective, sophisticated prediction schemes need an extensive history of page accesses (which is not feasible in real systems), SARC uses prefetching for sequential workloads. Sequential access patterns naturally arise in video-on-demand, database scans, copy, backup, and recovery. The goal of sequential prefetching is to detect sequential access and effectively preload the cache with data to minimize cache misses. Today, prefetching is ubiquitously applied in web servers and clients, databases, file servers, on-disk caches, and multimedia servers.

For prefetching, the cache management uses tracks. A track is a set of 128 disk blocks (16 cache pages). To detect a sequential access pattern, counters are maintained with every track to record whether a track was accessed together with its predecessor. Sequential prefetching becomes active only when these counters suggest a sequential access pattern. In this manner, the DS8000 monitors application read-I/O patterns and dynamically determines whether it is optimal to stage into cache the following I/O elements:

- Only the page requested
- The page that is requested plus the remaining data on the disk track
- An entire disk track (or a set of disk tracks) that was not requested

The decision of when and what to prefetch is made in accordance with the Adaptive Multi-stream Prefetching (AMP) algorithm, which dynamically adapts the amount and timing of prefetches optimally on a per-application basis (rather than a system-wide basis). For more information about AMP, see 7.4.2, "Adaptive Multi-stream Prefetching" on page 183.

To decide which pages are evicted when the cache is full, sequential and random (non-sequential) data is separated into separate lists. The SARC algorithm for random and sequential data is shown in Figure 7-8, which depicts the RANDOM and SEQ lists, each with an MRU head and an LRU bottom, and the wanted size of the SEQ list.

Figure 7-8 Sequential Adaptive Replacement Cache
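To make the two-list structure of Figure 7-8 concrete, here is a deliberately simplified Python sketch of a SARC-like cache. It is not IBM's implementation; the class name, the initial wanted size, and the adaptation hook are our illustrative assumptions. The rule that adapts the wanted size is described next:

```python
from collections import OrderedDict

class SarcLikeCache:
    """Toy two-list cache: RANDOM and SEQ lists with a wanted size for SEQ."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.random = OrderedDict()       # non-sequential pages, MRU at the end
        self.seq = OrderedDict()          # sequential / prefetched pages
        self.seq_wanted = capacity // 2   # "wanted size" of SEQ, adapted at runtime

    def _evict_one(self):
        # Evict from SEQ's LRU bottom if SEQ exceeds its wanted size,
        # else from RANDOM; fall back to whichever list still has pages.
        if len(self.seq) > self.seq_wanted and self.seq:
            self.seq.popitem(last=False)
        elif self.random:
            self.random.popitem(last=False)
        elif self.seq:
            self.seq.popitem(last=False)

    def insert(self, page, sequential):
        """Add a page at the MRU end of the matching list, evicting if full."""
        while len(self.random) + len(self.seq) >= self.capacity:
            self._evict_one()
        (self.seq if sequential else self.random)[page] = True

    def adapt(self, delta):
        # Grow SEQ when its bottom proves more valuable than RANDOM's bottom;
        # shrink it otherwise (the adaptation rule is described in the text).
        self.seq_wanted = max(0, min(self.capacity, self.seq_wanted + delta))

cache = SarcLikeCache(capacity=4)
for page in ("a", "b"):
    cache.insert(page, sequential=False)
for page in ("s1", "s2", "s3"):
    cache.insert(page, sequential=True)   # forces one eviction from RANDOM
print(list(cache.random), list(cache.seq))  # ['b'] ['s1', 's2', 's3']
```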

A page that was brought into the cache by simple demand paging is added to the head of the Most Recently Used (MRU) end of the RANDOM list. Without further I/O access, it goes down to the bottom of the Least Recently Used (LRU) end. A page that was brought into the cache by a sequential access or by sequential prefetching is added to the head of the MRU end of the SEQ list and then goes down in that list. Other rules control the migration of pages between the lists so that the same pages are not kept in memory twice.

To follow workload changes, SARC trades cache space between the RANDOM and SEQ lists dynamically and adaptively. This function makes SARC scan-resistant so that one-time sequential requests do not pollute the whole cache. SARC maintains a wanted size parameter for the sequential list. The wanted size is continually adapted in response to the workload. Specifically, if the bottom portion of the SEQ list is found to be more valuable than the bottom portion of the RANDOM list, the wanted size is increased; otherwise, the wanted size is decreased. The constant adaptation strives to make optimal use of limited cache space and delivers greater throughput and faster response times for a specific cache size.

Additionally, the algorithm dynamically modifies not only the sizes of the two lists, but also the rate at which the sizes are adapted. In a steady state, pages are evicted from the cache at the rate of cache misses. A larger (or smaller) rate of misses results in a faster (or slower) rate of adaptation.

Other implementation details take into account the relationship of read and write (NVS) cache, efficient destaging, and the cooperation with Copy Services. In this manner, the DS8000 cache management goes far beyond the usual variants of the Least Recently Used/Least Frequently Used (LRU/LFU) approaches.

7.4.2 Adaptive Multi-stream Prefetching

As described previously, SARC dynamically divides the cache between the RANDOM and SEQ lists, where the SEQ list maintains pages that are brought into the cache by sequential access or sequential prefetching. In the DS8870, Adaptive Multi-stream Prefetching (AMP), an algorithm that was developed by IBM Research, manages the SEQ list. AMP is an autonomic, workload-responsive, self-optimizing prefetching technology that adapts the amount of prefetch and the timing of prefetch on a per-application basis to maximize the performance of the system. The AMP algorithm solves the following problems that plague most other prefetching algorithms:

- Prefetch wastage occurs when prefetched data is evicted from the cache before it can be used.
- Cache pollution occurs when less useful data is prefetched instead of more useful data.

By wisely choosing the prefetching parameters, AMP provides optimal sequential read performance and maximizes the aggregate sequential read throughput of the system. The amount that is prefetched for each stream is dynamically adapted according to the application's needs and the space that is available in the SEQ list. The timing of the prefetches is also continuously adapted for each stream to avoid misses and any cache pollution.

While SARC is carefully dividing the cache between the RANDOM and the SEQ lists to maximize the overall hit ratio, AMP is managing the contents of the SEQ list to maximize the throughput obtained for the sequential workloads. Whereas SARC impacts cases that involve both random and sequential workloads, AMP helps any workload that has a sequential read component, including pure sequential read workloads. In this manner, SARC and AMP play complementary roles.

AMP dramatically improves performance for common sequential and batch processing workloads. It also provides excellent performance synergy with DB2 by preventing table scans from being I/O bound and improves performance of index scans and DB2 utilities. Furthermore, AMP reduces the potential for array hot spots that result from extreme sequential workload demands.

For more information about AMP and the theoretical analysis for its optimal usage, see "AMP: Adaptive Multi-stream Prefetching in a Shared Cache" by Binny Gill, et al., in USENIX File and Storage Technologies (FAST), February 13-16, 2007, San Jose, CA. For more information, see "Optimal Multistream Sequential Prefetching in a Shared Cache" by Binny Gill, et al., in the ACM Journal of Transactions on Storage, October 2007.

7.4.3 Intelligent Write Caching

Another cache algorithm, referred to as Intelligent Write Caching (IWC), was implemented in the DS8000 series. IWC improves performance through better write cache management and a better destaging order of writes. This algorithm is a combination of CLOCK, a predominantly read cache algorithm, and CSCAN, an efficient write cache algorithm. Out of this combination, IBM produced a powerful and widely applicable write cache algorithm.

The CLOCK algorithm uses temporal ordering. It keeps a circular list of pages in memory, with a hand that points to the oldest page in the list. When a page must be inserted in the cache, then an R (recency) bit is inspected at the hand's location. If R is zero, the new page is put in place of the page the hand points to and R is set to 1; otherwise, the R bit is cleared and set to zero. Then, the clock hand moves one step clockwise forward and the process is repeated until a page is replaced.

The CSCAN algorithm uses spatial ordering. It is the circular variation of the SCAN algorithm. The SCAN algorithm tries to minimize the disk head movement when servicing read and write requests. It maintains a sorted list of pending requests, with the position on the drive of each request. Requests are processed in the current direction of the disk head until it reaches the edge of the disk. At that point, the direction changes. In the CSCAN algorithm, the requests are always served in the same direction. After the head arrives at the outer edge of the disk, it returns to the beginning of the disk and services the new requests in this one direction only. This process results in more equal performance for all head positions.

The basic idea of IWC is to maintain a sorted list of write groups, as in the CSCAN algorithm. The smallest and the highest write groups are joined, forming a circular queue. The new idea is to maintain a recency bit for each write group, as in the CLOCK algorithm. A write group is always inserted in its correct sorted position and the recency bit is set to zero at the beginning. When a write hit occurs, the recency bit is set to one. The destage operation proceeds with a destage pointer that scans the circular list and looks for destage victims. This algorithm allows destaging of only write groups whose recency bit is zero. The write groups with a recency bit of one are skipped and the recency bit is then turned off and reset to zero, which gives an extra life to those write groups that were hit since the last time the destage pointer visited them. The concept of this mechanism is illustrated in Figure 7-9 on page 185.
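The following Python sketch illustrates the IWC destage scan just described: a sorted circular list of write groups with CLOCK-style recency bits. It is a simplified illustration, not the DS8000 code; the class and method names are invented for this example:

```python
import bisect

class IwcList:
    """Toy IWC list: CSCAN spatial ordering plus CLOCK recency bits."""

    def __init__(self):
        self.positions = []   # sorted disk positions of cached write groups
        self.recency = {}     # position -> recency bit (0 or 1)
        self.hand = 0         # destage pointer index into self.positions

    def insert(self, position):
        """Insert a new write group in sorted order with recency bit zero."""
        if position in self.recency:
            self.write_hit(position)        # rewrite of a cached group
            return
        bisect.insort(self.positions, position)
        self.recency[position] = 0

    def write_hit(self, position):
        """A write hit gives the group an extra life before destage."""
        if position in self.recency:
            self.recency[position] = 1

    def destage_next(self):
        """Scan circularly; skip (and reset) groups with recency bit one."""
        while self.positions:
            if self.hand >= len(self.positions):
                self.hand = 0               # wrap: smallest joins highest
            pos = self.positions[self.hand]
            if self.recency[pos] == 0:
                self.positions.pop(self.hand)
                del self.recency[pos]
                return pos                  # victim destaged to disk
            self.recency[pos] = 0           # skip once, clear extra life
            self.hand += 1
        return None

iwc = IwcList()
for pos in (40, 10, 30):
    iwc.insert(pos)
iwc.write_hit(30)
print(iwc.destage_next())   # 10: lowest position with recency bit zero
print(iwc.destage_next())   # 40: group 30 is skipped once because it was hit
```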

Chapter 7. In summary. The rate of destage is proportional to the portion of NVS that is occupied by an IWC list (the NVS is shared across all ranks in a cluster). This improvement focuses on maximizing throughput with good average response time. Another enhancement to IWC is an update to the cache algorithm that increases residency time of data in NVS.Figure 7-9 Intelligent Write Caching In the DS8000 implementation. Architectured for Performance 185 . destages are smoothed out so that write bursts are not translated into destage bursts. an IWC list is maintained for each rank. In addition. Furthermore. The dynamically adapted size of each IWC list is based on workload intensity on each rank. IWC has better or comparable peak throughput to the best of CSCAN and CLOCK across a wide gamut of write cache sizes and workload configurations. even at lower throughputs. IWC has lower average response times than CSCAN and CLOCK.

7.5 Performance considerations for logical configuration

To determine the optimal DS8000 layout, the I/O performance requirements of the servers and applications should be defined up front because they play a large part in dictating the physical and logical configuration of the disk system. Before the disk subsystem is designed, the disk space requirements of the application should be well-understood.

7.5.1 Workload characteristics

The answers to questions such as "How many host connections do I need?" and "How much cache do I need?" always depend on the workload requirements, such as how many I/Os per second per server and how many I/Os per second per gigabyte of storage.

The following information must be gathered for a detailed modeling:

- Number of I/Os per second
- I/O density
- Megabytes per second
- Relative percentage of reads and writes
- Random or sequential access characteristics
- Cache hit ratio
- Response time

7.5.2 Data placement in the DS8000

After you determine the disk subsystem throughput, the disk space, and the number of disks that are required by your hosts and applications, you must make a decision regarding data placement. As is common for data placement, and to optimize DS8000 resource utilization, use the following guidelines:

- Equally spread the LUNs and volumes across the DS8000 CECs. Spreading the volumes equally on rank group 0 and 1 balances the load across the DS8000 units.
- Use as many disks as possible. Avoid idle disks, even if all storage capacity is not to be initially used.
- Distribute capacity and workload across DA pairs.
- Use multirank Extent Pools.
- Stripe your logical volume across several ranks (the default for large Extent Pools).
- Consider placing specific database objects (such as logs) on separate ranks.
- For an application, use volumes from even and odd-numbered Extent Pools (even-numbered pools are managed by server 0, and odd numbers are managed by server 1).
- For large, performance-sensitive applications, consider the use of two dedicated Extent Pools (one managed by server 0, the other managed by server 1).
- Consider mixed Extent Pools with several tiers and SSDs as the highest tier, managed by IBM Easy Tier.

7.5.3 Data placement

There are several options for creating logical volumes. You can select an Extent Pool that is owned by one server. There could be only one Extent Pool per server, or you could have several. The ranks of Extent Pools can come from arrays on different device adapter pairs, as shown in Figure 7-10.

Important: Balance your ranks and Extent Pools between the two DS8000 servers. Half of the ranks should be managed by each server.

Figure 7-10 Ranks in a multirank Extent Pool configuration that is balanced across DS8000 servers

All disks in the storage disk system should have roughly equivalent utilization. Any disk that is used more than the other disks becomes a bottleneck to performance. For optimal performance, your data should be spread across as many hardware resources as possible. RAID 5, RAID 6, or RAID 10 already spreads the data across the drives of an array, but this configuration is not always enough. The following approaches can be used to spread your data across even more disk drives:

- Storage Pool Striping (usually combined with automated intra-tier auto-rebalancing)
- Striping at the host level, which makes extensive use of volume-level striping across disk drives

Intra-tier auto-rebalancing, or Auto-rebalance, is a capability of IBM Easy Tier that automatically rebalances the workload across all ranks of a storage tier within a managed extent pool. Auto-rebalance migrates extents across ranks within a storage tier to achieve a balanced workload distribution across the ranks and avoid hotspots. By doing so, auto-rebalance reduces performance skew within a storage tier and provides the best available I/O performance from each tier. Furthermore, auto-rebalance also automatically populates new ranks that are added to the pool when the workload is rebalanced within a tier. Auto-rebalance can be enabled for hybrid and homogeneous extent pools. A practical method is to use IBM Easy Tier auto-rebalancing.

Important: It is suggested to use IBM Easy Tier to balance the workload across all ranks, even when SSD drives are not installed. Use the option that is shown in Figure 7-11 to balance all of the storage pools.

Figure 7-11 Select the All Pools option to balance not only mixed storage pools

Storage Pool Striping
Storage Pool Striping is a technique for spreading the data across several disk arrays. The I/O capability of many disk drives can be used in parallel to access data on the logical volume. The easiest way to stripe is to use Extent Pools with more than one rank and use Storage Pool Striping when a new volume is allocated (see Figure 7-12). This striping method is independent of the operating system.

Figure 7-12 Storage Pool Striping (an 8 GB LUN spread in 1 GB extents across the four ranks of an Extent Pool)

In 7.3, "Performance considerations for disk drives" on page 178, we describe how many random I/Os can be performed for a standard workload on a rank. If a volume is on just one rank, the I/O capability of this rank also applies to the volume. However, if this volume is striped across several ranks, the I/O rate to this volume can be much higher. The total number of I/Os that can be performed on a set of ranks does not change with Storage Pool Striping.

Important: Use Storage Pool Striping and Extent Pools with a minimum of four to eight ranks to avoid hot spots on the disk drives. In addition to this configuration, consider combining it with auto-rebalancing.
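The rotation itself is simple round-robin. The following sketch is our illustration (the function name is hypothetical, not a DS8000 interface) of how the 1 GB extents of an 8 GB volume land on the four ranks of the Extent Pool in Figure 7-12:

```python
# Illustrative sketch of extent rotation in Storage Pool Striping. A new
# volume's 1 GiB extents are assigned to the ranks of an Extent Pool in
# round-robin order, so an 8 GiB volume lands on all four ranks.

def allocate_striped_volume(volume_gib, ranks, start_rank=0):
    """Return a list of (extent_index, rank) assignments, one per GiB."""
    placement = []
    for extent in range(volume_gib):
        rank = ranks[(start_rank + extent) % len(ranks)]
        placement.append((extent, rank))
    return placement

for extent, rank in allocate_striped_volume(8, ["R1", "R2", "R3", "R4"]):
    print(f"extent {extent} -> rank {rank}")
# Extents 0..7 rotate R1, R2, R3, R4, R1, R2, R3, R4, so random I/O to the
# volume is served by all four ranks in parallel.
```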

A good configuration is shown in Figure 7-13. The ranks are attached to DS8000 server 0 and server 1 in a half-and-half configuration, and ranks on separate device adapters are used in a multi-rank Extent Pool.

Figure 7-13 Balanced Extent Pool configuration

Striping at the host level
Many operating systems include the option to stripe data across several (logical) volumes. An example is AIX's Logical Volume Manager (LVM). LVM striping is a technique for spreading the data in a logical volume across several disk drives in such a way that the I/O capacity of the disk drives can be used in parallel to access data on the logical volume. The primary objective of striping is high-performance reading and writing of large sequential files, but there are also benefits for random access. Other examples of applications that stripe data across the volumes include the SAN Volume Controller and IBM System Storage N series Gateways. Do not expect that double striping (at the storage subsystem level and at the host level) will enhance performance any further.

If you use a logical volume manager (such as LVM on AIX) on your host, you can create a host logical volume from several DS8000 logical volumes (LUNs). You can select LUNs from different DS8000 servers and device adapter pairs, as shown in Figure 7-14. By striping your host logical volume across the LUNs, the best performance for this LVM volume is realized.

Figure 7-14 Optimal placement of data

Figure 7-14 shows an optimal distribution of eight logical volumes within a DS8000. You could have more Extent Pools and ranks, but when you want to distribute your data for optimal performance, you should make sure that you spread it across the two servers, across different device adapter pairs, and across several ranks. To be able to create large logical volumes or to be able to use Extent Pool striping, you must consider having Extent Pools with more than one rank.

If you use multirank Extent Pools and you do not use Storage Pool Striping, you must be careful where to put your data, or you can easily unbalance your system (as shown on the right side of Figure 7-15). Combining Extent Pools that are made up of one rank and then LVM striping over LUNs that were created on each Extent Pool offers a balanced method to evenly spread data across the DS8000 without the use of Extent Pool striping, as shown on the left side of Figure 7-15.

Figure 7-15 Spreading data across ranks (left: balanced implementation with LVM striping over one-rank Extent Pools; right: non-balanced implementation with LUNs across the ranks of a multirank Extent Pool)

Combining Extent Pool striping and logical volume manager striping
Striping by a logical volume manager is done on a stripe size in the MB range (about 64 MB). Extent Pool striping is done at a 1 GiB stripe size. Both methods could be combined. LVM striping can stripe across Extent Pools and use volumes from Extent Pools that are attached to server 0 and server 1 of the DS8000 series. Double striping likely does not increase performance. If you already use LVM Physical Partition (PP) wide striping, you might want to continue to use that striping.

The stripe size
Each striped logical volume that is created by the host's logical volume manager has a stripe size that specifies the fixed amount of data that is stored on each DS8000 logical volume (LUN) at one time. The stripe size must be large enough to keep sequential data relatively close together, but not too large, to keep the data on a single array. We recommend that you define stripe sizes by using your host's logical volume manager in the range of 4 MB - 64 MB. You should choose a stripe size close to 4 MB if you have many applications that share the arrays and a larger size when you have few servers or applications that share the arrays.
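The following sketch shows the arithmetic behind host-level striping: which LUN, and which offset within it, a given logical offset maps to. This is a generic LVM-style model with an assumed 4 MiB stripe size across four LUNs, not AIX LVM code:

```python
# Illustrative mapping of a logical offset to a LUN under host-level
# striping. Consecutive stripe-size chunks rotate round-robin across LUNs.
MIB = 1024 * 1024

def locate(offset_bytes, stripe_size=4 * MIB, lun_count=4):
    """Return (lun_index, offset_within_lun) for a logical offset."""
    stripe = offset_bytes // stripe_size          # which stripe-size chunk
    lun = stripe % lun_count                      # round-robin LUN choice
    row = stripe // lun_count                     # full rows already laid out
    return lun, row * stripe_size + offset_bytes % stripe_size

print(locate(0))            # (0, 0): the first chunk sits on LUN 0
print(locate(5 * MIB))      # (1, 1048576): 1 MiB into LUN 1's first chunk
print(locate(17 * MIB))     # (0, 5242880): wrapped around back to LUN 0
```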

7.6 I/O Priority Manager

It is common practice to have large Extent Pools and stripe data across all disks. However, when production workload and, for example, test systems share the physical disk drives, they are competing for the same shared and possibly constrained storage resources, and the test system could potentially affect the production performance negatively. DS8000 I/O Priority Manager is a licensed function feature that is available for the DS8870. It enables more effective storage consolidation and performance management and the ability to align quality of service (QoS) levels to separate workloads in the system.

Important: If you separated production and non-production data by using different Extent Pools and different device adapters, you do not need the I/O Priority Manager. However, if you are using tiered Extent Pools with SSD drives, IBM Easy Tier can work best if there are hot extents that can be moved to SSDs.

I/O Priority Manager is designed to understand the load on the system and modify it by using dynamic workload control, without operator intervention. DS8000 I/O Priority Manager constantly monitors system resources to help applications meet their performance targets automatically. The DS8000 storage hardware resources that are monitored by the I/O Priority Manager for possible contention are the RAID ranks and device adapters. The I/O of less important workload is slowed down to give the higher priority workload a higher share of the resources. I/O Priority Manager uses QoS to assign priorities for different volumes and applies network QoS principles to storage by using a particular algorithm that is called Token Bucket Throttling for traffic control.
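The token bucket idea can be sketched generically as follows. This is a textbook token bucket, not the I/O Priority Manager implementation; the rates and burst sizes are invented example values:

```python
import time

class TokenBucket:
    """Generic token bucket: an I/O proceeds only if a token is available.

    Lower-priority volumes get a lower refill rate, which throttles their
    I/O when a shared resource is saturated.
    """

    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec      # tokens added per second
        self.capacity = burst         # maximum saved-up tokens
        self.tokens = burst
        self.last = time.monotonic()

    def try_io(self):
        """Consume one token if available; otherwise the I/O must wait."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

high = TokenBucket(rate_per_sec=1000, burst=100)   # high-priority volume
low = TokenBucket(rate_per_sec=100, burst=10)      # throttled volume
print(high.try_io(), low.try_io())                 # True True while idle;
# under load, the low bucket runs dry first and its I/Os are delayed.
```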

Figure 7-16 shows a three-step example of how I/O Priority Manager uses dynamic workload control.

Figure 7-16 Automatic control of disruptive workload

In step 1, critical application A works normally. In step 2, a non-critical application B begins to work, causing performance degradation for application A. In step 3, I/O Priority Manager automatically detects the QoS impact on critical application A and dynamically restores the performance for application A.

7.6.1 Performance policies for open systems

When I/O Priority Manager is enabled, each volume is assigned to a performance group when the volume is created. A performance group associates the I/O operations of a logical volume with a performance policy that sets the priority of a volume relative to other volumes. All volumes fall into one of the performance policies. For open systems, the DS8000 includes four defined performance policies: default (unmanaged), high priority, medium priority, and low priority. The DS8000 also includes 16 performance groups: five performance groups each for the high, medium, and low performance policies, and one performance group for the default performance policy.

Each performance group has a QoS target. This QoS target is used to determine whether a volume is experiencing appropriate response times.

The following performance policies are available:

Default performance policy
The default performance policy does not have a QoS target associated with it. I/Os to volumes that are assigned to the default performance policy are never delayed by I/O Priority Manager.

High priority performance policy
The high priority performance policy has a QoS target of 70. I/Os from volumes that are associated with the high performance policy attempt to stay under approximately 1.5 times the optimal response time of the rank. I/Os in the high performance policy are never delayed.

Medium priority performance policy
The medium priority performance policy has a QoS target of 40. I/Os from volumes with the medium performance policy attempt to stay under 2.5 times the optimal response time of the rank.

Low performance policy
Volumes with a low performance policy have no QoS target and have no goal for response times. If there is no bottleneck for a shared resource, low priority workload is not pruned. However, if a higher priority workload does not achieve its goal, the I/O of low priority workload is slowed down first by delaying the response to the host. This delay is increased until the higher-priority I/O meets its goal. The maximum delay added is 200 ms.

7.6.2 Performance policies for System z

With System z, there are 14 performance groups: three performance groups for high-performance policies, four performance groups for medium-performance policies, six performance groups for low-performance policies, and one performance group for the default performance policy. Two operation modes are available for I/O Priority Manager with System z: without software support or with software support.

Important: Only z/OS operating systems use the I/O Priority Manager with software support.

I/O Priority Manager CKD support
In a System z environment, I/O Priority Manager includes the following characteristics:

- The user assigns a performance policy to each CKD volume that applies in the absence of more software support.
- Without z/OS software support, the volume's I/O is managed according to the volume's performance group's performance policy.
- With z/OS software support (supported on z/OS V1.11, V1.12, V1.13, and above):
  - The user assigns application priorities via eWLM.
  - z/OS assigns an importance value to each I/O based on eWLM inputs.
  - z/OS assigns an achievement value to each I/O based on the prior history of I/O response times for I/O with the same importance and based on eWLM expectations for response time.
  - The importance and achievement values on the I/O associate this I/O with a performance policy (independently of the volume's performance group and performance policy).
  - On ranks in saturation, I/O is managed according to the I/O's performance policy.
- z/OS can optionally specify parameters that determine the priority of each I/O operation and allow multiple workloads on a single CKD volume to have different priorities.

If there is no bottleneck for a shared resource, low priority workload is not pruned. However, if a higher priority workload does not achieve its goal, the I/O of low priority workload is slowed down first by delaying the response to the host. This delay is increased until the higher-priority I/O meets its goal. The maximum delay added is 200 ms.

For more information: For more information about I/O Priority Manager, see DS8000 I/O Priority Manager, REDP-4760.

7.7 IBM Easy Tier

IBM Easy Tier is an optional and no-charge feature on the DS8000 that can enhance performance and balance workloads through the following capabilities:

- Automated hot spot management and data relocation
- Auto-rebalancing
- Manual volume rebalancing and volume migration
- Rank depopulation
- Extent pool merging

It also supports thin provisioned volumes and encrypted drives. IBM Easy Tier determines the appropriate tier of storage based on data access requirements and then automatically and nondisruptively moves data (at the extent level) to the appropriate tier on the DS8000. IBM Easy Tier works on an extent level: 1 GiB for Open Systems and a 1113-cylinder capacity (3390 Model 1) for System z environments.

IBM Easy Tier is for optimal long-term data placement. You can switch it on or off, but no tuning is required nor possible for IBM Easy Tier Automatic Mode. IBM Easy Tier Automatic Mode monitors I/O access at the extent level and keeps a history of access density, sequential or random access, read/write ratio, and cache hit ratio. Because IBM Easy Tier is used to optimize drive usage, it looks only at I/Os to the drives and ignores cache hits. The history of the access pattern is exponentially weighted, which gives a higher weight to the last 24 hours than to the older observations.

Do not worry that IBM Easy Tier might shift around all data when the nightly batch processing starts or data backup jobs run. These processes are sequential in nature and, despite high I/O activity, IBM Easy Tier does not move this data to SSDs right away. IBM Easy Tier assumes that SSDs do not benefit much from sequential workloads and that Nearline disks are good candidates for data that is primarily accessed sequentially. IBM Easy Tier also considers that data movement puts some load on the disk back-end. Therefore, there must be a real benefit in moving extents around; if there is only a small difference in I/O activity between extents, Easy Tier does not move an extent.

Important: To move extents, IBM Easy Tier needs at least a few unused extents in each Extent Pool. As a guideline, consider at least one to three extents for each rank in an extent pool.

For more information about IBM Easy Tier Automatic Mode, see "IBM Easy Tier Automatic Mode" on page 201. For more information about IBM Easy Tier, see IBM System Storage DS8000 Easy Tier, REDP-4667.
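The exponentially weighted history mentioned above can be illustrated in a few lines of Python. The weighting constant is an assumption made for this example; the text only states that the last 24 hours get a higher weight than older observations:

```python
# Illustrative exponentially weighted access history for one extent -- our
# model, not IBM's algorithm or its actual decay constant.

def update_heat(previous_heat, ios_this_period, weight=0.5):
    """Blend the new period's back-end I/O count into the running heat.

    weight is the share given to the newest observation; 0.5 (an assumed
    value) halves the influence of each period as it ages.
    """
    return weight * ios_this_period + (1 - weight) * previous_heat

heat = 0.0
for daily_ios in [100, 100, 100, 5000]:   # a one-day burst on day 4
    heat = update_heat(heat, daily_ios)
print(round(heat, 1))   # 2543.8: the recent burst dominates, but a single
# day cannot completely erase the quieter history of the extent.
```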

The basic IBM Easy Tier migration cycle is shown in Figure 7-17.

Figure 7-17 IBM Easy Tier migration cycle

The following tasks are part of the basic cycle:

1. IBM Easy Tier monitors the performance of each extent to determine the data temperature (I/O activity).
2. An extent migration plan is created for optimal data placement every 24 hours, based on the performance statistics.
3. Extents are migrated within an extent pool according to the plan over a 24-hour period. A limited number of extents are chosen for migration every 5 minutes.

On the DS8870, tier 0 features the fastest disks (SSD), tier 1 features the next performance level (SAS), and tier 2 features the lowest disks (Nearline SAS).

By adding IBM Easy Tier on top of existing workloads, the following benefits are realized:

- Automatically rebalances to accommodate growth
- Incrementally grows your environment to accomplish goals
- Replaces Tier 2 footprint growth

Important: IBM Easy Tier is a DS8000 licensed function, which can be ordered and installed at no extra fee. IBM Easy Tier can be used with Encryption.

The first generation of IBM Easy Tier introduced automated storage performance management by efficiently boosting Enterprise-class performance with SSDs. It also automated storage tiering from Enterprise-class drives to SSDs, thus optimizing SSD deployments with minimal costs. It also introduced dynamic volume relocation and dynamic extent pool merge.

The second generation of IBM Easy Tier added automated storage economics management by combining Enterprise-class drives with Nearline drives to maintain Enterprise-tier performance while shrinking the footprint and reducing costs with large capacity Nearline drives. The second generation also introduced intra-tier performance management (auto-rebalance) for hybrid pools, manual volume rebalance, and rank depopulation.

The third generation of IBM Easy Tier introduced further enhancements, which provided automated storage performance and storage economics management across all three drive tiers. It also introduced support for auto-rebalance in homogeneous pools and support for thin provisioned (extent space-efficient (ESE)) volumes.

The fourth generation enhanced the support of FDE drives. IBM Easy Tier can perform volume migration, auto performance rebalancing in homogeneous and hybrid pools, hot spot management, rank depopulation, and thin provisioning (ESE volumes only) on encrypted drives and non-encrypted drives. With these enhancements, you can consolidate and efficiently manage more workloads on a single DS8000 system.

Full Disk Encryption support: All drive types in DS8870 support Full Disk Encryption. Encryption usage is optional. Whether you use encryption or not, there is no difference in performance.

7.7.1 IBM Easy Tier operating modes

IBM Easy Tier has two different operating modes to optimize the data placement on a DS8000: automatic and manual. In this section, we describe these modes of operation.

IBM Easy Tier Manual mode
IBM Easy Tier Manual mode provides the following extended capabilities for logical configuration management:

- Dynamic volume relocation
- Dynamic extent pool merge
- Rank depopulation

Volume-Based Data Relocation (Dynamic Volume Relocation)

As shown in Figure 7-18, IBM Easy Tier is a DS8000 built-in dynamic data relocation feature that allows host-transparent movement of data among the storage system resources. It allows a user to initiate a volume migration from its current extent pool (source extent pool) to another extent pool (target extent pool). During this process, the volume remains accessible to hosts. This feature significantly improves configuration flexibility and performance tuning and planning.

Figure 7-18 Volume-Based Data Relocation (Dynamic volume relocation)

Limitations: Dynamic volume relocation is allowed only among extent pools with the same server affinity or rank group. Additionally, the dynamic volume relocation is not allowed in the following circumstances:
- If source and target pools feature different storage types (FB and CKD)
- If the volume to be migrated is a track space-efficient (TSE) volume

Dynamic extent pool merge

Dynamic extent pool merge is an IBM Easy Tier Manual Mode capability that allows the initiation of a merging process of one extent pool (source extent pool) into another extent pool (target extent pool). During the merge, all of the volumes in the source and target extent pools remain accessible to the hosts.

Limitations: Dynamic extent pool merge is allowed only among extent pools with the same server affinity or rank group. Additionally, the dynamic extent pool merge is not allowed in the following circumstances:
- If source and target pools feature different storage types (FB and CKD)
- If both extent pools contain track space-efficient (TSE) volumes
- If there are TSE volumes on the SSD ranks
- If you selected an extent pool that contains volumes that are being migrated
- If the combined extent pools include 2 PB or more of ESE logical capacity
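The merge restrictions in the preceding list can be expressed compactly in code. The following Python sketch is hypothetical (the ExtentPool structure and its field names are invented for illustration); only the rules themselves come from the text above.

from dataclasses import dataclass

@dataclass
class ExtentPool:
    rank_group: int          # server affinity (0 or 1)
    storage_type: str        # "FB" or "CKD"
    has_tse: bool            # pool contains track space-efficient volumes
    tse_on_ssd: bool         # TSE volumes on SSD ranks
    migrating: bool          # a contained volume is being migrated
    ese_capacity_pb: float   # ESE logical capacity in PB

def can_merge(src: ExtentPool, tgt: ExtentPool) -> bool:
    return not (
        src.rank_group != tgt.rank_group           # different server affinity
        or src.storage_type != tgt.storage_type    # FB cannot merge with CKD
        or (src.has_tse and tgt.has_tse)           # both pools contain TSE volumes
        or src.tse_on_ssd or tgt.tse_on_ssd        # TSE volumes on SSD ranks
        or src.migrating or tgt.migrating          # volume migration in progress
        or src.ese_capacity_pb + tgt.ese_capacity_pb >= 2.0  # 2 PB or more of ESE capacity
    )

p0 = ExtentPool(0, "FB", False, False, False, 0.5)
p1 = ExtentPool(0, "FB", True, False, False, 0.8)
print(can_merge(p0, p1))   # True: only one of the two pools contains TSE volumes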

Important: No actual data movement is performed during a dynamic extent pool merge; only logical definition updates occur.

Rank depopulation

Rank depopulation is an IBM Easy Tier Manual Mode capability that allows a user to unassign a rank from an extent pool, even if the rank includes extents that are allocated by volumes in the pool. For the rank to be unassigned, IBM Easy Tier automatically attempts to migrate all of the allocated extents to other ranks within the same extent pool. This task is done without any user intervention and is fully transparent to the application host. During this process, the affected volumes remain accessible to hosts.

IBM Easy Tier Automatic Mode

In Automatic Mode, IBM Easy Tier dynamically manages the capacity in single-tier (homogeneous) extent pools (auto-rebalance) and multitier (hybrid) extent pools that contain up to three different disk tiers. IBM Easy Tier Automatic Mode can be enabled for all extent pools (including single-tier pools), for only multi-tier pools, or for no extent pools, which means disabled. Extent pools that are handled by Easy Tier are referred to as managed pools. Extent pools that are not handled by IBM Easy Tier Automatic Mode are referred to as non-managed pools.

IBM Easy Tier Automatic Mode manages the data relocation across different tiers (inter-tier or cross-tier management) and within the same tier (intra-tier management). The cross-tier or inter-tier capabilities deal with the Automatic Data Relocation (ADR) feature, which aims to relocate the extents of each logical volume to the most appropriate storage tier within the extent pool to improve the overall storage cost-to-performance ratio. Logical volume extents with high latency in the rank are migrated to storage media with higher performance characteristics. Extents with low latency in the rank are kept in storage media with lower performance characteristics.

After a migration of extents is finished, the degree of hotness of the extents does not stay the same over time. Eventually, certain extents on a higher performance tier become cold and other extents on a lower-cost tier become hotter compared to cold extents on the higher performance tier. When this event occurs, cold extents on a higher performance tier are eventually demoted or swapped to a lower-cost tier and replaced by new hot extents from the lower-cost tier. IBM Easy Tier always evaluates first if the cost of moving an extent to a higher performance tier is worth the expected performance gain. This migration scenario is shown in Figure 7-19.

Figure 7-19 Automatic Mode

7.7.2 IBM Easy Tier Statement of Direction

In the IBM Statement of Direction announcement letter that was released on June 4, 2012, the following future enhancements were announced:
- IBM intends to expand its IBM Easy Tier functions on DS8000 to a broader level by leveraging direct-attached solid-state storage on AIX and Linux operating systems on System p. IBM Easy Tier will manage the direct-attached SSD on the host as a large and low latency cache for the hottest data, while preserving advanced disk system functions, such as RAID protection and remote mirroring. This feature brings the following values: server and storage resources remain optimized for performance and cost objectives, significant performance increase, and reduction in administrative costs.
- IBM intends to use an application-aware storage application programming interface (API) to help deploy storage more efficiently. The use of the API enables applications and middleware to direct more optimal placement of data by communicating important information about current workload activity and application performance requirements. The area of application control will be in what, when, and how the data is placed, and it will allow the application to proactively influence data placement in the underlying storage system that uses IBM Easy Tier.
- IBM intends to use a new high-density flash module that consists of SSDs within the DS8870, which will be used to accelerate performance with a special design for SSD usage.

For more information about this letter, see this website:
http://www.ibm.com/common/ssi/cgi-bin/ssialias?subtype=ca&infotype=an&appname=iSource&supplier=877&letternum=ENUSZG12-0163

For more information about IBM Easy Tier, see IBM System Storage DS8000 Easy Tier, REDP-4667.

7.8 Performance and sizing considerations for open systems

In the following sections, we describe topics that are relevant to open systems.

7.8.1 Determining the number of paths to a LUN

When configuring a DS8000 for an open systems host, a decision must be made regarding the number of paths to a particular LUN, because the multipathing software allows (and manages) multiple paths to a LUN. The following opposing factors must be considered when you are deciding on the number of paths to a LUN:
- Increasing the number of paths increases availability of the data, which protects against outages.
- Increasing the number of paths increases the amount of CPU that is used because the multipathing software must choose among all available paths each time an I/O is issued.

A good compromise is between two and four paths per LUN.

7.8.2 Dynamic I/O load-balancing: SDD

The SDD is an IBM-provided pseudo-device driver that is designed to support the multipath configuration environments in the DS8000. It is in a host system with the native disk device driver. IBM SDD is available for most operating environments. For more information about the SDD, see DS8000: Host Attachment and Interoperability, SG24-8887.

The dynamic I/O load-balancing option (default) of SDD is recommended to ensure better performance for the following reasons:
- SDD automatically adjusts data routing for optimum performance. Multipath load balancing of data flow prevents a single path from becoming overloaded, causing I/O congestion that occurs when many I/O operations are directed to common devices along the same I/O path.
- The path to use for an I/O operation is chosen by estimating the load on each adapter to which each path is attached. The load is a function of the number of I/O operations currently in process. If multiple paths include the same load, a path is chosen at random from those paths.

If you prefer to use the native multipathing drivers that many operating systems offer now, check whether there is a plug-in for the multipathing driver. For some multipathing drivers, IBM provides a plug-in that provides the same functionality as SDD. For example, there is the SDDPCM plug-in for AIX if you use AIX's MPIO. (If you use MPIO in AIX instead of SDD, you need SDDPCM.) In a Windows environment, you should use SDDDSM.
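The dynamic load-balancing policy can be sketched in a few lines. The following Python fragment is illustrative only; the path names and structures are assumptions, but the selection rule (least-loaded adapter, random choice among ties) is the one described above.

import random

def choose_path(paths):
    """paths: list of (path_name, inflight_io_count) tuples."""
    lowest = min(load for _, load in paths)
    candidates = [name for name, load in paths if load == lowest]
    return random.choice(candidates)   # random pick among equally loaded paths

print(choose_path([("fscsi0", 4), ("fscsi1", 2), ("fscsi2", 2)]))
# prints fscsi1 or fscsi2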

7.8.3 Automatic port queues

When there is I/O between a server and a DS8870 Fibre Channel port, the server host adapter and the DS8870 host bus adapter support queuing I/Os. How long this queue can be is called the queue depth. Because several servers can and usually do communicate with few DS8870 ports, the queue depth of a storage host bus adapter should be larger than the one on the server side. This parameter is also true for the DS8870, which supports 2048 FC commands queued on a port. However, sometimes the port queue in the DS8870 HBA can be flooded.

When the number of commands that are sent to the DS8000 port exceeds the maximum number of commands that the port can queue, the port discards these additional commands. This operation is a normal error recovery operation in the Fibre Channel protocol to prevent more damage. The normal recovery is a 30-second timeout for the server. After that time, the command is resent. The server includes a command retry count before it fails the command. Command Timeout entries are seen in the server logs.

Automatic Port Queues is a mechanism the DS8870 uses to self-adjust the queue based on the workload. This mechanism allows higher port queue oversubscription while maintaining a fair share for the servers and the accessed LUNs. The port whose queue is filling up goes into SCSI Queue Full mode, where it accepts no additional commands to slow down the I/Os. By avoiding error recovery and the 30-second blocking SCSI Queue Full recovery interval, the overall performance is better with Automatic Port Queues.
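The queue-full behavior can be modeled with a few lines of code. The following Python sketch is illustrative; only the 2048-command port queue depth and the 30-second recovery timeout come from the text above.

MAX_PORT_QUEUE_DEPTH = 2048   # FC commands queued on one DS8870 port
RETRY_TIMEOUT_SECONDS = 30    # normal recovery timeout on the server side

class FibreChannelPort:
    def __init__(self):
        self.queue = []

    def submit(self, command):
        if len(self.queue) >= MAX_PORT_QUEUE_DEPTH:
            # Port queue is full: the command is discarded and the server
            # resends it after the 30-second recovery timeout.
            return "discarded_retry_in_%ds" % RETRY_TIMEOUT_SECONDS
        self.queue.append(command)
        return "queued"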

7.8.4 Determining where to attach the host

When you are determining where to attach multiple paths from a single host system to I/O ports on a host adapter to the storage system, the following considerations apply:
- Choose the attached I/O ports on separate host adapters.
- Spread the attached I/O ports evenly between the I/O enclosures.

The DS8000 host adapters have no server affinity, but the device adapters and the rank have server affinity. Figure 7-20 shows a host that is connected through two FC adapters to two DS8000 host adapters in separate I/O enclosures.

Figure 7-20 Dual-port host attachment

The host has access to LUN0, which is created in Extent Pool 0 that is controlled by DS8000 server 0. The host system sends read commands to the storage server. When a read command is executed, one or more logical blocks are transferred from the selected logical drive through a host adapter over an I/O interface to a host. In this case, the logical device is managed by server 0, and the data is handled by server 0.

Options for four-port and eight-port host adapters are available in the DS8870. These eight-port cards provide only more connectivity, not more total throughput. If you want the maximum throughput of the DS8870, you should consider doubling the number of host adapters and use only two ports of a four-port host adapter.
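A simple way to follow both recommendations is to pick one port per host adapter while alternating I/O enclosures. The following Python sketch assumes the usual DS8000 port ID layout I0xyz (x identifies the I/O enclosure and y the adapter slot), which you should verify against the lsioport output of your own system.

def spread_paths(port_ids, paths_needed):
    """Pick ports on separate host adapters, alternating I/O enclosures."""
    by_enclosure = {}
    for port in sorted(port_ids):
        by_enclosure.setdefault(port[2], []).append(port)  # port[2]: enclosure digit (assumed)
    candidates = []
    for enc, ports in sorted(by_enclosure.items()):
        seen, rank = set(), 0
        for port in ports:
            adapter = port[:4]            # enclosure + adapter slot digits
            if adapter not in seen:       # keep only one port per host adapter
                seen.add(adapter)
                candidates.append((rank, enc, port))
                rank += 1
    candidates.sort()                     # round-robin across enclosures
    return [port for _, _, port in candidates[:paths_needed]]

ports = ["I0000", "I0001", "I0030", "I0100", "I0101", "I0130"]
print(spread_paths(ports, 2))   # ['I0000', 'I0100']: separate adapters and enclosures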

7.9 Performance and sizing considerations for System z

Here we describe several System z specific topics regarding the performance potential of the DS8000 series. We also describe the considerations that you must have when you configure and size a DS8000 that replaces older storage hardware in System z environments.

7.9.1 Host connections to System z servers

Each I/O enclosure can hold up to two host adapters. DS8870 host adapters are available as four-port and eight-port cards. These eight-port cards provide more connectivity, not more total throughput. If you want the maximum throughput of the DS8870, you should consider doubling the number of host adapters and use only two ports of a four-port host adapter. Distribute the FICON ports across the host adapters and across the I/O enclosures.

You can configure each port of a host adapter to operate as a Fibre Channel port (for example, for mirroring) or as a FICON port. You can mix ports on an adapter (some to operate as FICON ports, some as Fibre Channel ports), but you might also want to use dedicated adapters for FICON. The DS8870 FICON ports support zHPF I/O from z/OS if the zHPF feature is present. FICON ports can be directly attached to a System z host or through a FICON capable SAN switch or director. It is recommended that the switch/director supports the Control Unit Port (CUP) feature to enable switch management through z/OS.

7.9.2 Parallel Access Volume

Parallel Access Volume (PAV) is an optional licensed function of the DS8000 for the z/OS and z/VM operating systems, which helps the System z servers that are running applications to concurrently share logical volumes. The ability to handle multiple I/O requests to the same volume nearly eliminates I/O supervisor queue delay (IOSQ) time, one of the major components in z/OS response time. Traditionally, access to highly active volumes involved manual tuning, splitting data across multiple volumes, and more. With PAV and the Workload Manager (WLM), you can almost forget about manual performance tuning. WLM manages PAVs across all the members of a Sysplex.

Traditional z/OS behavior without PAV

Traditional storage disk subsystems allowed for only one channel program to be active to a volume at a time to ensure that data that is accessed by one channel program cannot be altered by the activities of another channel program.

The traditional z/OS behavior without PAV, where subsequent simultaneous I/Os to volume 100 are queued while volume 100 is still busy with a preceding I/O, is shown in Figure 7-21.

Figure 7-21 Traditional z/OS behavior (one I/O to one volume at one time)

From a performance standpoint, it did not make sense to send more than one I/O at a time to the storage system, because the hardware could process only one I/O at a time. Knowing this fact, the z/OS systems did not try to issue another I/O to a volume, which, in z/OS, is represented by a Unit Control Block (UCB), while an I/O was already active for that volume, as indicated by a UCB busy flag (see Figure 7-21). Not only were the z/OS systems limited to processing only one I/O at a time, but the storage subsystems accepted only one I/O at a time from different system images to a shared volume, for the same reasons that were previously mentioned (see Figure 7-21).

208 IBM System Storage DS8870 Architecture and Implementation . C UCB 1FE alias to UCB 100 z/OS Single image System z DS8000 with PAV 100 Logical volume Physical layer Figure 7-22 z/OS behavior with PAV PAV allows parallel I/Os to a volume from one host. Optional licensed function: PAV is an optional licensed function on the DS8000 series. I/O operations to an alias run against the associated base address storage space. and you can add new aliases nondisruptively. base address 100 might include alias addresses 1FF and 1FE. a z/OS host can use several UCBs for the same logical volume instead of one UCB per logical volume. SG24-8887. A UCB 100 Appl. By using the alias address and the conventional base address.no one is queued Appl. The following basic concepts are featured in PAV functionality: Base address The base device address is the conventional unit address of a logical volume. as shown in Figure 7-22. There is no physical space that is associated with an alias address. the association between base and alias is not fixed. Alias addresses must be defined to the DS8000 and to the I/O definition file (IODF). B UCB 1FF alias to UCB 100 Appl. This association is predefined. Alias address An alias device address is mapped to a base address. For example. Still. PAV also requires the purchase of the FICON Attachment feature. the alias address can be assigned to another base address by the z/OS Workload Manager. concurrent I/Os to volume 100 using different UCBs --. For more information about PAV definition and support. You can define more than one alias per base. which allows for three parallel I/O operations to the same volume. see DS8000: Host Attachment and Interoperability.Parallel I/O capability z/OS behavior with PAV The DS8000 performs more than one I/O to a CKD volume. There is only one base address that is associated with any volume.

7.9.3 z/OS Workload Manager: Dynamic PAV tuning

It is not always easy to predict which volumes should have an alias address assigned, and how many. Your software can automatically manage the aliases according to your goals. z/OS can use automatic PAV tuning if you are using the z/OS Workload Manager (WLM) in Goal mode. The WLM can dynamically tune the assignment of alias addresses. The Workload Manager monitors the device performance and is able to dynamically reassign alias addresses from one base to another if predefined goals for a workload are not met.

z/OS recognizes the aliases that are initially assigned to a base during the Nucleus Initialization Program (NIP) phase. If dynamic PAVs are enabled, the WLM can reassign an alias to another base by instructing the IOS to do so when necessary, as shown in Figure 7-23.

Figure 7-23 WLM assignment of alias addresses

z/OS Workload Manager in Goal mode tracks system workloads and checks whether workloads are meeting their goals as established by the installation.

WLM also tracks the devices that are used by the workloads, accumulates this information over time, and broadcasts it to the other systems in the same sysplex. If WLM determines that any workload is not meeting its goal because of IOS queue (IOSQ) time, WLM attempts to find an alias device that can be reallocated to help this workload achieve its goal, as shown in Figure 7-24.

Figure 7-24 Dynamic PAVs in a sysplex (the WLMs exchange performance information and decide which base can donate an alias)

7.9.4 HyperPAV

Dynamic PAV requires the WLM to monitor the workload and goals. It takes time until the WLM detects an I/O bottleneck. Then, the WLM must coordinate the reassignment of alias addresses within the sysplex and the DS8000. This process takes time, and if the workload is fluctuating or is characterized by bursts, the job that caused the overload of one volume could end before the WLM reacts. In these cases, the IOSQ time was not eliminated completely.

With HyperPAV, an on demand proactive assignment of aliases is possible, as shown in Figure 7-25. The WLM is no longer involved in managing alias addresses. For each I/O, an alias address can be picked from a pool of alias addresses within the same LCU. This capability also allows multiple HyperPAV hosts to use one alias to access different bases, which reduces the number of alias addresses that are required to support a set of bases in an IBM System z environment, while the same or fewer operating system resources are used.

Figure 7-25 HyperPAV: Basic operational characteristics

This functionality is also designed to enable applications to achieve better performance than is possible with the original PAV feature alone, with no latency in assigning an alias to a base.
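The alias pool behavior that Figure 7-25 shows can be modeled in a few lines. The following Python sketch is a simplified illustration (the addresses and structures are examples only): an alias is bound to a base only for the duration of one I/O and is returned to the LCU pool afterward.

class LogicalControlUnit:
    def __init__(self, alias_pool):
        self.alias_pool = list(alias_pool)   # e.g., ["08F0", "08F1", ...]

    def start_io(self, base_address):
        if not self.alias_pool:
            return None                      # no alias free: the I/O is queued (IOSQ)
        alias = self.alias_pool.pop()        # bind any free alias to this base
        return (base_address, alias)

    def end_io(self, binding):
        _, alias = binding
        self.alias_pool.append(alias)        # the alias becomes available again

lcu = LogicalControlUnit(["08F0", "08F1", "08F2", "08F3"])
io1 = lcu.start_io("0801")                   # two concurrent I/Os to base 0801
io2 = lcu.start_io("0801")                   # use two different aliases
lcu.end_io(io1)                              # first alias returns to the pool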

Benefits of HyperPAV

HyperPAV includes the following benefits:
- Provide an even more efficient PAV function.
- Help clients who implement larger volumes to scale I/O rates without the need for more PAV alias definitions.
- Use the FICON architecture to reduce impact, improve addressing efficiencies, and provide the following storage capacity and performance improvements:
  – More dynamic assignment of PAV aliases improves efficiency.
  – The number of PAV aliases that are needed might be reduced, taking fewer from the 64-K device limitation and leaving more storage for capacity use.
- Enable a more dynamic response to changing workloads.
- Simplify alias management.
- Make it easier for users to decide to migrate to larger volume sizes.

Optional licensed function

HyperPAV is an optional licensed function of the DS8000 series. It is required in addition to the normal PAV license (which is capacity-dependent) and the FICON Attachment feature. The HyperPAV license is independent of the capacity.

HyperPAV alias consideration on EAV

HyperPAV provides a far more agile alias management algorithm, as aliases are dynamically bound to a base during the I/O for the z/OS image that issued the I/O. When the I/O completes, the alias is returned to the pool in the LCU. It then becomes available to subsequent I/Os.

Our rule of thumb is that the number of aliases that are required can be approximated by the peak of the following multiplication: the I/O rate multiplied by the average response time. For example, if the peak occurs when the I/O rate is 2000 I/Os per second and the average response time is 4 ms (which is 0.004 sec), the result of our calculation is:

2000 IO/sec x 0.004 sec/IO = 8

This result means that the average number of I/O operations that are executing at one time for that LCU during the peak period is eight. Therefore, eight aliases should be able to handle the peak I/O rate for that LCU. However, because this calculation is based on the average during the IBM RMF™ period, you should multiply the result by two to accommodate higher peaks within that RMF interval. So, in this case, the recommended number of aliases would be 16 (2 x 8 = 16). Depending on the workload, there is a huge reduction in PAV-alias UCBs with HyperPAV.

The combination of HyperPAV and EAV allows you to significantly reduce the constraint on the 64-K device address limit and, in turn, increase the amount of addressable storage that is available on z/OS. With Multiple Subchannel Sets (MSS) on IBM System zEnterprise zEC12, z196, z114, z10, and z9, you have even more flexibility in device configuration. The EAV volumes are supported only on IBM z/OS V1.10 and later. For more information about EAV specifications and considerations, see IBM System Storage DS8000: Host Attachment and Interoperability, SG24-8887.

For more information about MSS, see Multiple Subchannel Sets: An Implementation View, REDP-4387, which is found at this website:
http://www.redbooks.ibm.com/abstracts/redp4387.html?Open
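Expressed as code, the rule of thumb looks as follows. This small Python fragment only restates the calculation above; the doubling factor accommodates peaks within the RMF interval, as explained in the text.

def recommended_aliases(peak_io_rate, avg_response_time_sec):
    concurrent = peak_io_rate * avg_response_time_sec   # average concurrent I/Os
    return int(round(2 * concurrent))                   # x2 for peaks in the RMF interval

# Example from the text: 2000 I/Os per second at 4 ms average response time.
print(recommended_aliases(2000, 0.004))   # 16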

HyperPAV implementation and system requirements

For more information about support and implementation guidance, see DS8000: Host Attachment and Interoperability, SG24-8887.

RMF reporting on PAV

RMF reports the number of exposures for each device in its Monitor/DASD Activity report and in its Monitor II and Monitor III Device reports. If the device is a HyperPAV base device, the number is followed by an H (for example, 5.4H). This value is the average number of HyperPAV volumes (base and alias) in that interval. RMF reports all I/O activity against the base address, not by the individual base and associated aliases. The performance information for the base includes all base and alias I/O activity.

HyperPAV helps minimize the Input/Output Supervisor Queue (IOSQ) Time. You still see IOSQ Time for one of the following reasons:
- There are more aliases required to handle the I/O load than the number of aliases that are defined in the LCU.
- There is a Device Reserve issued against the volume. A Device Reserve makes the volume unavailable to the next I/O, which causes the next I/O to be queued. This delay is recorded as IOSQ Time.

7.9.5 PAV in z/VM environments

z/VM provides PAV support in the following ways:
- As traditionally supported, for VM guests as dedicated guests through the CP ATTACH command or DEDICATE user directory statement.
- Starting with z/VM 5.2.0, with APAR VM63952, VM supports PAV minidisks.

PAV in a z/VM environment is shown in Figure 7-26 and Figure 7-27 on page 214.

Figure 7-26 z/VM support of PAV volumes that are dedicated to a single guest virtual machine (DASD E100-E102 accessed at the same time through base 9800 and aliases 9801 and 9802)

Figure 7-27 Linkable minidisks for guests that use PAV

In this way, PAV provides to the z/VM environments the benefits of greater I/O performance (throughput) by reducing I/O queuing. With the small programming enhancement (SPE) that was introduced with z/VM 5.2.0 and APAR VM63952, other enhancements are available when PAV with z/VM is used. For more information, see 10.2, "z/VM considerations" in DS8000: Host Attachment and Interoperability, SG24-8887.

7.9.6 Multiple Allegiance

If any System z host image (server or LPAR) performs an I/O request to a device address for which the storage disk subsystem is already processing an I/O that came from another System z host image, the storage disk subsystem sends back a device busy indication, as shown in Figure 7-21 on page 207. This result delays the new request and adds to the overall response time of the I/O. This delay is shown in the Device Busy Delay (AVG DB DLY) column in the RMF DASD Activity Report. Device Busy Delay is part of the Pend time.

In older storage disk systems, a device had an implicit allegiance, that is, a relationship that was created in the control unit between the device and a channel path group when an I/O operation is accepted by the device. The allegiance causes the control unit to guarantee access (no busy status presented) to the device for the remainder of the channel program over the set of paths that are associated with the allegiance.

The DS8000 series accepts multiple I/O requests from different hosts to the same device address, which increases parallelism and reduces channel impact.

With Multiple Allegiance, the requests are accepted by the DS8000 and all requests are processed in parallel, unless there is a conflict when writing to the same data portion of the CKD logical volume, as shown in Figure 7-28.

Figure 7-28 Parallel I/O capability with Multiple Allegiance

In systems without Multiple Allegiance, all requests to a shared volume, except the first I/O request, are rejected, and the I/Os are queued in the System z channel subsystem. The requests show up in Device Busy Delay and PEND time in the RMF DASD Activity reports. Multiple Allegiance allows multiple I/Os to a single volume to be serviced concurrently and provides significant benefits for environments that are running a sysplex, or System z systems that are sharing access to data volumes. Multiple Allegiance and PAV can operate together to handle multiple requests from multiple hosts.

Nevertheless, a device busy condition can still happen. This condition occurs when an active I/O is writing a certain data portion on the volume and another I/O request comes in and tries to read or write to that same data. To ensure data integrity, those subsequent I/Os get a busy condition until that previous I/O is finished with the write operation. However, good application software access patterns can improve global parallelism by avoiding reserves, limiting the extent scope to a minimum, and setting an appropriate file mask, for example, if no write is intended.

7.9.7 I/O priority queuing

The concurrent I/O capability of the DS8000 allows it to execute multiple channel programs concurrently, while the data that is accessed by one channel program is not altered by another channel program.

Queuing of channel programs

When the channel programs conflict with each other and must be serialized to ensure data consistency, the DS8000 internally queues channel programs. This subsystem I/O queuing capability provides the following significant benefits:
- Compared to the traditional approach of responding with a device busy status to an attempt to start a second I/O operation to a device, I/O queuing in the storage disk subsystem eliminates the effect that is associated with posting status indicators and redriving the queued channel programs.
- Contention in a shared environment is eliminated. Channel programs that cannot run in parallel are processed in the order that they are queued. A fast system cannot monopolize access to a volume that also is accessed from a slower system. Each system receives a fair share.

Priority queuing

I/Os from different z/OS system images can be queued in a priority order, as shown in Figure 7-29. It is the z/OS Workload Manager that uses this priority to privilege I/Os from one system against the others. You can activate I/O priority queuing in WLM Service Definition settings. WLM must run in Goal mode.

When a channel program with a higher priority comes in and is put in front of the queue of channel programs with lower priority, the priority of the low-priority programs is increased. This configuration prevents high-priority channel programs from dominating lower priority programs and gives each system a fair share.

Figure 7-29 I/O priority queuing
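The aging behavior of priority queuing can be illustrated with a small model. The following Python sketch is hypothetical (the DS8000 internals are not public); it only demonstrates the principle that queued low-priority channel programs gain priority when higher-priority work jumps ahead of them.

import heapq

class PriorityIOQueue:
    """Min-heap on negated priority: the highest-priority channel program runs first."""
    def __init__(self, aging_step=1):
        self.heap, self.seq, self.aging_step = [], 0, aging_step

    def enqueue(self, priority, program):
        if self.heap and -self.heap[0][0] < priority:
            # A higher-priority program goes in front: age the queued
            # lower-priority programs so that they are not starved.
            self.heap = [(negp - self.aging_step, s, prog)
                         for negp, s, prog in self.heap]
            heapq.heapify(self.heap)
        heapq.heappush(self.heap, (-priority, self.seq, program))
        self.seq += 1

    def dispatch(self):
        return heapq.heappop(self.heap)[2] if self.heap else None

q = PriorityIOQueue()
q.enqueue(0x21, "I/O from B")   # low-priority program waits in the queue
q.enqueue(0xFF, "I/O from A")   # high-priority program; B's priority is raised
print(q.dispatch())             # I/O from A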

Important: Do not confuse I/O priority queuing with I/O Priority Manager. I/O priority queuing works on a host adapter level and is free of charge. I/O Priority Manager works on the device adapter and array levels and is a licensed function.

7.9.8 Performance considerations on Extended Distance FICON

The function that is known as Extended Distance FICON produces performance results similar to z/OS Global Mirror (zGM) Emulation/XRC Emulation at long distances. Extended Distance FICON does not actually extend the distance that is supported by FICON. However, it can provide the same benefits as XRC Emulation, especially at longer distances. With Extended Distance FICON, there is no need to have XRC Emulation on channel extenders, which saves costs. For more information about support and implementation, see Chapter 10, "Extended Distance FICON" in DS8000: Host Attachment and Interoperability, SG24-8887.

Figure 7-30 shows Extended Distance FICON (EDF) performance comparisons for a sequential write workload. The workload consists of 64 jobs that are performing 4-KB sequential writes to 64 data sets with 1113 cylinders each, which all are on one large disk volume. There is one SDM configured with a single, non-enhanced reader to handle the updates. When the XRC Emulation (Brocade emulation in the diagram) is turned off, the performance drops significantly. However, after the Extended Distance FICON (Persistent IU Pacing) function is installed, the performance returns to where it was with XRC Emulation on.

Figure 7-30 Extended Distance FICON with small data blocks sequential writes on one SDM reader

Figure 7-31 shows EDF performance, this time with Multiple Reader support. There is one SDM configured with four enhanced readers.

Figure 7-31 Extended Distance FICON with small data blocks sequential writes on four SDM readers

These results again show that when the XRC Emulation is turned off, performance drops significantly at long distances. When the Extended Distance FICON function is installed, the performance again improves significantly.

7.9.9 High Performance FICON for z

The FICON protocol involved several exchanges between the channel and the control unit, which led to unnecessary overhead. With High Performance FICON, the protocol is streamlined and the number of exchanges is reduced, as shown in Figure 7-32.

Figure 7-32 zHPF protocol (CCWs compared with Transport Control Words (TCWs))

High Performance FICON for z (zHPF) is an enhanced FICON protocol and system I/O architecture that results in improvements in response time and throughput. Instead of Channel Command Words (CCWs), Transport Control Words (TCWs) are used. High Performance FICON for z (zHPF) is an optional licensed feature.

In situations where zHPF is the exclusive access in use, it can improve FICON I/O throughput on a single DS8000 port by 100%. Realistic workloads with a mix of data set transfer sizes can see a 30% - 70% increase in FICON I/Os that use zHPF, which results in a 10% to 30% channel usage savings. Although clients can see I/Os complete faster as a result of implementing zHPF, the real benefit is expected to be obtained by using fewer channels to support existing disk volumes, or increasing the number of disk volumes that are supported by existing channels. Additionally, the changes in architecture offer end-to-end system enhancements to improve reliability, availability, and serviceability (RAS).

Systems zEC12, z196, z114, z10, or z9 processors support zHPF. FICON Express8S cards on the host provide the most benefit, but older cards are also supported; the old FICON Express adapters are not supported. The required software is z/OS V1.8 or z/OS V1.10 with PTFs, or z/OS V1.7 with IBM Lifecycle Extension for z/OS V1.7 (5637-A01). For the DS8000, installation of the Licensed Feature Key for the zHPF Feature is required.

zHPF is transparent to applications. However, z/OS configuration changes are required. Hardware Configuration Definition (HCD) must have Channel path ID (CHPID) type FC defined for all the CHPIDs that are defined to the 2107 control unit, which also supports zHPF. For z/OS, after the PTFs are installed in the LPAR, you must set ZHPF=YES in IECIOSxx in SYS1.PARMLIB or issue the SETIOS ZHPF=YES command. ZHPF=NO is the default setting. IBM suggests that clients use the ZHPF=YES setting after the required configuration changes and prerequisites are met. These changes are nondisruptive. After these items are addressed, existing FICON port definitions in the DS8000 function in FICON or zHPF protocols in response to the type of request that is being performed.

I/O that uses the Media Manager, such as DB2, PDSE, VSAM, zFS, VTOC Index (CVAF), Catalog BCS/VVDS, or Extended Format SAM, benefits from zHPF. Over time, more access methods were changed in z/OS to support zHPF. Although the original zHPF implementation supported the new Transport Control Words only for I/O that did not span more than a track, the DS8870 also supports TCW for I/O operations on multiple tracks. zHPF is also supported for DB2 List-prefetch, Format Writes, and sequential access methods. To use zHPF for QSAM/BSAM/BPAM, you might need to enable it. It can be dynamically enabled by SETSMS or by the entry SAM_USE_HPF(YES | NO) in the IGDSMSxx parmlib member. The default for z/OS 1.11 and 1.12 is NO, and the default for z/OS 1.13 is YES.

For more information about zHPF, see this website:
http://www.ibm.com/systems/z/resources/faq/index.html

IBM Laboratory testing and measurements are available at the following website:
http://www.ibm.com/systems/z/hardware/connectivity/ficon_performance.html


Part 2

Planning and installation

In this part of the book, we discuss matters related to the installation planning process for the IBM System Storage DS8870 series.

The following topics are included:
- DS8870 Physical planning and installation
- DS8870 HMC planning and setup
- IBM System Storage DS8000 features and license keys

© Copyright IBM Corp. 2013. All rights reserved.


Chapter 8. DS8870 Physical planning and installation

This chapter describes the various steps that are involved in the planning and installation of the IBM System Storage DS8870. It includes a reference listing of the information that is required for the setup and where to find detailed technical reference material.

This chapter covers the following topics:
- Considerations before installation
- Planning for the physical installation
- Network connectivity planning
- Secondary HMC, Tivoli Key Lifecycle Manager, Tivoli Storage Productivity Center, LDAP, and AOS planning
- Remote mirror and copy connectivity
- Disk capacity considerations
- Planning for growth

For more information about the configuration and installation process, see IBM System Storage DS8870 Introduction and Planning Guide, GC27-4209.

© Copyright IBM Corp. 2013. All rights reserved.

8.1 Considerations before installation

Start by developing and following a project plan to address the many topics that are needed for a successful implementation. In general, the following items should be considered for your installation planning checklist:
- Plan for growth to minimize disruption to operations. Expansion frames can be placed only to the right (from the front) of the DS8870 base frame.
- Consider location suitability, floor loading, access constraints, elevators, doorways, and so on.
- Examine environmental requirements, such as adequate cooling capacity.
- Analyze power requirements, such as redundancy and the use of Uninterruptible Power Supply (UPS).
- Plan for the type of disks, such as solid-state drives (SSDs), Enterprise, and Nearline.
- Consider implementing Easy Tier to increase machine performance.
- Consider the use of the I/O Priority Manager feature to prioritize specific applications.
- Plan for staff education and availability to implement the storage plan. Alternatively, you can use IBM or IBM Business Partner services.
- Create a plan that details the wanted logical configuration of the storage.
- Oversee the available services from IBM to check for microcode compatibility and configuration checks.
- Consider available Copy Services and backup technologies.
- Consider the new Resource Groups function feature for the IBM System Storage DS8000.
- Determine a place and connection for the secondary Hardware Management Console (HMC).
- If you want to use encryption, consider a place and connection needs for the Tivoli Key Lifecycle Manager servers. Self-encrypting drives are a standard feature for the DS8870.
- Consider integration of Lightweight Directory Access Protocol (LDAP) to allow a single user ID and password management.
- Consider Assist On-site (AOS) installation to provide a continued secure connection to the IBM support center.
- Consider IBM Tivoli Storage Productivity Center for monitoring and for DS8000 Storage Manager management in your environment.

Client responsibilities for the installation

The DS8870 is specified as an IBM or IBM Business Partner installation and setup system. However, the following activities are some of the required planning and installation activities for which the client is responsible at a high level:
- Physical configuration planning. Your Storage Marketing Specialist can help you plan and select the DS8870 model physical configuration and features.
- Installation planning.
- Integration of LDAP. IBM can provide services to set up and integrate these components.
- Installation of AOS, which is a fee-based service. IBM can assist in planning and implementation upon client request.
- Integration of Tivoli Storage Productivity Center and SNMP into the client environment for monitoring of performance and configuration. IBM can provide services to set up and integrate these components.
- Configuration and integration of Tivoli Key Lifecycle Manager servers and DS8000 encryption for extended data security. IBM provides services to set up and integrate these components. For z/OS environments, IBM Security Key Lifecycle Manager is also available to manage encryption keys. If you run IBM Security Key Lifecycle Manager on DS8000 systems that use encryption, you can run into a deadlock situation when you power on the system and the DS8000 must talk to a key server but IBM Security Key Lifecycle Manager cannot start. Therefore, a stand-alone Tivoli Key Lifecycle Manager server is required.
- Logical configuration planning and application. Logical configuration refers to the creation of RAID ranks, volumes, and logical unit numbers (LUNs), and to the assignment of the configured capacity to servers. Application of the initial logical configuration and all subsequent modifications to the logical configuration also are client responsibilities. The logical configuration can be created, applied, and modified by using the DS Storage Manager, DS Command Line Interface (CLI), or DS Open API. IBM Global Services also can apply or modify your logical configuration, which is a fee-based service.

In this chapter, information is presented that can assist you with planning and installation tasks. For more information, see IBM System Storage DS8870 Introduction and Planning Guide, GC27-4209.

8.1.1 Who should be involved

Have a project manager to coordinate the many tasks that are necessary for a successful installation. Installation requires close cooperation with the user community, the IT support staff, and the technical resources that are responsible for floor space, power, and cooling. A storage administrator should also coordinate requirements from the user applications and systems to build a storage plan for the installation. This plan is needed to configure the storage after the initial hardware installation is complete.

The following people should be briefed and engaged in the planning process for the physical installation:
- Systems and storage administrators
- Installation planning engineer
- Building engineer for floor loading, air conditioning, and electrical considerations
- Security engineers for virtual private network (VPN), Lightweight Directory Access Protocol (LDAP), Tivoli Key Lifecycle Manager, and encryption
- Administrator and operator for monitoring and handling considerations
- IBM Service Representative or IBM Business Partner installation engineer

8.1.2 What information is required

A validation list to help the installation process should include the following items:
- Drawings that detail the DS8000 placement as specified and agreed upon with a building engineer, which ensures that the weight is within limits for the route to the final installation position.
- Approval to use elevators if the weight and size are acceptable.
- Connectivity information, servers, storage area network (SAN), and mandatory local area network (LAN) connections.
- Agreement on the security structure of the installed DS8000 with all security engineers.
- License keys for the Operating Environment License (OEL), which are mandatory, and any optional license keys.
- A detailed storage plan that is agreed upon. Ensure that the configuration specialist has all the information to configure all of the arrays and set up the environment as required.

8.2 Planning for the physical installation

This section describes the physical installation planning process and gives important tips and considerations.

8.2.1 Delivery and staging area

The shipping carrier is responsible for delivering and unloading the DS8870 as close to its final destination as possible. Inform your carrier of the weight and size of the packages to be delivered and inspect the site and the areas where the packages will be moved (for example, hallways, floor protection, elevator size, and loading).

Table 8-1 lists the final packaged dimensions and maximum packaged weight of the DS8870 storage unit shipgroup.

Table 8-1 Packaged dimensions and weight for DS8870 models
Shipping container | Packaged dimensions (in centimeters and inches) | Maximum packaged weight (in kilograms and pounds)
Model 961 pallet or crate | Height 207.5 cm (81.7 in.), Width 101.5 cm (40 in.), Depth 137.5 cm (54.2 in.) | 1325 kg (2920 lb)
Model 96E expansion unit pallet or crate | Height 207.5 cm (81.7 in.), Width 101.5 cm (40 in.), Depth 137.5 cm (54.2 in.) | 1311 kg (2810 lb)
Shipgroup (height might be lower and weight might be less if ordered as MES) | Height 105.0 cm (41.3 in.), Width 65.0 cm (25.6 in.), Depth 105.0 cm (41.3 in.) | Up to 90 kg (199 lb)
External HMC container | Height 69.0 cm (27.2 in.), Width 80.0 cm (31.5 in.), Depth 120.0 cm (47.2 in.) | 75 kg (165 lb)

Important: A fully configured model in the packaging can weigh over 1406 kg (3100 lb). The use of fewer than three persons to move it can result in injury.

By using the shipping weight reduction option, you receive delivery of a DS8870 model in multiple shipments that do not exceed 909 kg (2000 lb) each. For more information about the Shipping Weight Reduction option, see Chapter 10, "IBM System Storage DS8000 features and license keys" on page 271.

The total weight and space requirements of the storage unit depend on the configuration features that you ordered. You might consider calculating the weight of the unit and the expansion box (if ordered) in their maximum capacity to allow for the addition of new features. Table 8-2 lists the weights of the various DS8870 models.

Table 8-2 DS8870 weights
Model | Maximum weight
Model 961 (two-core) | 1172 kg (2585 lb)
Model 961 (four-core) | 1324 kg (2920 lb)
Model 96E expansion model | 1268 kg (2790 lb)
Model 961 and one 96E expansion model | 2601 kg (5735 lb)
Model 961 and two 96E expansion models | 3923 kg (8650 lb)

Important: You must check with the building engineer or other appropriate personnel to make sure that the floor loading was properly considered.

8.2.2 Floor type and loading

The DS8870 can be installed on a raised or nonraised floor. Installing the unit on a raised floor is preferable because with this option, you can operate the storage unit with better cooling efficiency and cabling layout protection. Raised floors can also better accommodate the cabling layout.
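For a quick first estimate during floor-loading planning, the frame weights from Table 8-2 can be summed in a few lines. The following Python sketch is an approximation only; Table 8-2 publishes slightly different figures for the combined configurations (for example, 2601 kg for a Model 961 with one 96E), so the published values take precedence.

FRAME_WEIGHT_KG = {
    "961": 1324,          # Model 961 (four-core) maximum weight, Table 8-2
    "96E": 1268,          # Model 96E expansion model maximum weight, Table 8-2
}

def estimated_weight_kg(expansion_frames):
    """Base frame plus expansion frames, summed from Table 8-2."""
    return FRAME_WEIGHT_KG["961"] + expansion_frames * FRAME_WEIGHT_KG["96E"]

# Summing frames gives 2592 kg for one expansion frame; Table 8-2 lists
# 2601 kg for that combination, so confirm with the published figure.
print(estimated_weight_kg(1))   # 2592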

0 in.7 cm (18.) Depth: 16 cm (6.) Figure 8-1 Floor tile cable cutout for DS8870 Chapter 8.3 in. You can use the following measurements when you cut the floor tile: Width: 45.Figure 8-1 for DS8870 show the location of the cable cutouts. DS8870 Physical planning and installation 229 .

8.2.3 Overhead cabling features

An overhead cabling (top exit) feature is available for the DS8870 as an alternative to the standard rear cable exit, as shown in Figure 8-2. This feature requires the following items:
- Feature Code (FC) 1400: Top exit bracket for overhead cabling
- FC 1101: Safety-approved fiberglass ladder
- Multiple FCs for power cords, depending on the AC power characteristics of your geography

Verify whether you ordered the top exit feature before the tiles for a raised floor are cut. For more information, see IBM System Storage DS8870 Introduction and Planning Guide, GC27-4209.

Figure 8-2 Overhead cabling for DS8870

8.2.4 Room space and service clearance

The total amount of space that is needed by the storage units can be calculated by using the dimensions that are shown in Table 8-3.

Table 8-3 DS8870 dimensions
Dimension with covers | Model 961
Height | 193.4 cm
Width | 84.8 cm
Depth | 122.8 cm

The storage unit location area also should cover the service clearance that is needed by IBM service representatives when the front and rear of the storage unit is accessed. An example of the dimensions for a DS8870 with two expansion frames is shown in Figure 8-3; keep in mind that the DS8870 has a maximum of three expansion frames (for a total of four frames). Verify your configuration and the maximum configuration for your needs. You can use the following minimum service clearances:
- For the front of the unit, allow a minimum of 121.9 cm (48 in.) for the service clearance.
- For the rear of the unit, allow a minimum of 76.2 cm (30 in.) for the service clearance.
- For the sides of the unit, allow a minimum of 5.1 cm (2 in.) for the service clearance.

Figure 8-3 DS8870 three frames service clearance requirements

8.2.5 Power requirements and operating environment

Consider the following basic items when the DS8870 power requirements are planned for:
- Power connectors
- Input voltage
- Power consumption and environment
- Power control features
- Power Line Disturbance (ePLD) feature

Power connectors

Each DS8870 base and expansion unit features redundant power supply systems. The two power cords to each frame should be supplied by separate AC power distribution systems. For more information about power connectors and power cords, see IBM System Storage DS8870 Introduction and Planning Guide, GC27-4209.

Input voltage

The DS8870 supports a three-phase input voltage source. The DC Supply Unit (DSU) is designed to operate with three-phase delta, three-phase wye, or one-phase input power. Use a 60-A rating for the low voltage feature and a 32-A rating for the high voltage feature. Table 8-4 lists the power specifications for each feature code.

Table 8-4 DS8870 input voltages and frequencies
Characteristic | Low voltage | High voltage
Nominal input voltage | 200, 208, 220, or 240 RMS Vac | 380, 400, 415, or 480 RMS Vac
Minimum input voltage | 180 RMS Vac | 333 RMS Vac
Maximum input voltage | 264 RMS Vac | 456 RMS Vac
Customer wall breaker rating (1-ph, 3-ph) | 50-60 Amps | 30-35 Amps
Steady-state input frequency | 50 ± 3 or 60 ± 3.0 Hz | 50 ± 3 or 60 ± 3.0 Hz
PLD input frequencies (<10 seconds) | 50 ± 3 or 60 ± 3.0 Hz | 50 ± 3 or 60 ± 3.0 Hz

Power consumption

Table 8-5 lists the power consumption specifications of the DS8870. The power estimates presented here are conservative and assume a high transaction rate workload.

Table 8-5 DS8870 power consumption
Measurement | Model 961 (four-way) | Model 96E with I/O enclosure
Peak electric power | 6 kVA | 5.8 kVA
Thermal load (BTU/hr) | 20,612 | 19,605

The values represent data that was obtained from the following configured systems:
- Model 961 base models that contain 15 disk drive sets (240 disk drives) and Fibre Channel adapters
- Model 96E first expansion models that contain 21 disk drive sets (336 disk drives) and Fibre Channel adapters

especially with environments that have no UPS. All of the fans in the DS8870 direct air flow from the front of the frame to the rear of the frame. DS8870 Physical planning and installation 233 .50%.90o F) at a relative humidity range of 40% . Another power control feature is available for the System z environment. we describe the cooling system. Chapter 8. Power control features The DS8870 has remote power control features that you use to control the power of the storage complex through the HMC.DS8000: Cooling the storage complex In this section. Although the DS8870 does not vent though the top of the rack. install this feature. There is no additional physical connection planning that is needed for the client with or without the ePLD. For more information about power control features. Power Line Disturbance feature The extended Power Line Disturbance (ePLD) feature stretches the available uptime of the DS8870 from 4 seconds to 50 seconds during a PLD event. GC27-4209. Generally. see IBM System Storage DS8870 Introduction and Planning Guide. No air exhausts through the top of the frame.32o C (60o . as shown in Figure 8-4. DS8870 cooling Air circulation for the DS8870 is provided by the various fans that are installed throughout the frame. do not store anything on top of the DS8870 for safety reasons. Important: The following factors must be considered when the DS8870 is installed: Make sure that air circulation for the DS8870 base unit and expansion units is maintained free from obstruction to keep the unit operating in the specified temperature range. The use of a directional air flow in this manner allows for cool aisles to the front and hot aisles to the rear of the systems. F ro nt-to -b ac k a irflow for h o t-ais le – c old-a is le d a ta c e n tre s • More da ta c ent res a re m oving to hot -aisl e / c old-ai sle des igns to opt imis e ene rgy eff icie nc y • DS887 0 is now des igne d w it h c omple te f ront -t o-ba ck a irf low Figure 8-4 DS8870 air flow The operating temperature for the DS8870 is 16o .

8.2.6 Host interface and cables

The DS8870 can support the number of host adapters that is shown in Table 8-6.

Table 8-6 Maximum host adapters
Base model | Attached expansion model | Maximum host adapters
961 (two-way) business class | None | 2-4
961 (four-way) enterprise class | None (single rack) | 2-8
961 (8- and 16-way) enterprise class | 96E models (1-3) | 2-16

The DS8870 Model 961/96E supports four-port and eight-port cards per host adapter.

Fibre Channel and FICON

The DS8870 Fibre Channel and FICON adapter has four or eight ports per card. Each port supports FCP or FICON, but not simultaneously. All ports are 8 Gb capable; therefore, the DS8870 has a maximum of 128 ports at 8 Gb.

The Fibre Channel and FICON shortwave host adapter, feature 3153, when used with 50 micron multi-mode fibre cable, supports point-to-point distances of up to 500 meters on 8-Gbps link speed with four ports. The Fibre Channel and FICON shortwave host adapter, feature 3157, when used with 50 micron multi-mode fibre cable, supports point-to-point distances of up to 500 meters on 8-Gbps link speed with eight ports.

The Fibre Channel and FICON longwave host adapter, when used with 9 micron single-mode fibre cable, extends the point-to-point distance to 10 km for feature 3253 (8 Gb 10-km LW host adapter with four ports). Feature 3257 (8 Gb LW host adapter with eight ports) also supports point-to-point distances of up to 10 km.

A 31-meter fiber optic cable or a 2-meter jumper cable can be ordered for each Fibre Channel adapter port. Table 8-7 lists the fiber optic cable features for the FCP/FICON adapters.

Table 8-7 FCP/FICON cable features
Feature | Length | Connector | Characteristic
1410 | 40 m (131 ft) | LC/LC | 50 micron, multimode
1411 | 31 m (102 ft) | LC/SC | 50 micron, multimode
1412 | 2 m (6.5 ft) | SC to LC adapter | 50 micron, multimode
1420 | 31 m (102 ft) | LC/LC | 9 micron, single mode
1421 | 31 m (102 ft) | LC/SC | 9 micron, single mode
1422 | 2 m (6.5 ft) | SC to LC adapter | 9 micron, single mode

Important: The Remote Mirror and Copy functions use FCP as the communication link between IBM System Storage DS8000 series, DS6000s, and ESS Models 800 and 750, also in z/OS environments.

Fabric components from various vendors, including IBM, Brocade, QLogic, and Cisco, are supported by both environments.

8.2.7 Host adapter Fibre Channel specifics for open environments

Each storage unit host adapter has four or eight ports, and each port has a unique worldwide port name (WWPN). You can configure a port to operate with the SCSI-FCP upper-layer protocol by using the DS Storage Manager or the DS CLI. You can add Fibre Channel shortwave and longwave adapters to I/O enclosures of an installed DS8870.

With host adapters that are configured as FC, the DS8870 provides the following configuration capabilities:
- Fabric or point-to-point topologies
- A maximum of 128 host adapter ports, depending on the DS8870 processor feature
- A maximum of 509 logins per Fibre Channel port, which includes host ports and PPRC target and initiator ports
- A maximum of 8192 logins per storage unit
- A maximum of 1280 logical paths on each Fibre Channel port
- Access to 63750 LUNs per target (one target per host adapter), depending on host type
- Either arbitrated loop, switched-fabric, or point-to-point topologies

For more information about IBM-supported attachments, see IBM System Storage DS8000 Host Systems Attachment Guide, SC26-7917 and IBM System Storage DS8000: Host Attachment and Interoperability, SG24-8887. For the latest information about host types, models, adapters, and operating systems that are supported by the DS8870, see the DS8000 System Storage Interoperability Center at this website:
http://www.ibm.com/systems/support/storage/ssic/interoperability.wss

8.2.8 FICON specifics on z/OS environment

With host adapters that are configured for FICON, the DS8870 provides the following configuration capabilities:
- Fabric or point-to-point topologies
- A maximum of 128 Fibre Channel ports
- A maximum of 509 logins per Fibre Channel port
- Access to all control-unit images over each FICON port
- A maximum of 512 logical paths per control unit image

FICON host channels limit the number of devices per channel to 16,384. To fully access 65,280 devices on a storage unit, it is necessary to connect a minimum of four FICON host channels to the storage unit. By using a switched configuration, you can expose 64 control-unit images (16,384 devices) to each host channel.

8.2.9 Best practice for host adapters

For optimum availability and performance, the following best practices are recommended:
- To obtain the best ratio of availability to performance, install one HA card in each available I/O enclosure before a second HA card is installed in the same I/O enclosure.
- If there is no real need for a maximum port configuration, use four-port, 8-Gbps cards.
- The best Copy Services performance is obtained by using dedicated HA cards for the Copy Services links.

8.2.10 WWNN and WWPN determination

The incoming and outgoing data of the DS8870 is tracked via the worldwide node name (WWNN) and the worldwide port names (WWPNs). These names are registered in the Fibre Channel fabric and identify endpoints much as MAC addresses do in the Ethernet protocol. The addresses can be determined by using the DS CLI or the GUI. To determine these addresses, we first analyze how they are composed.

The storage unit itself has its own unique WWNN, and each Storage Facility Image (SFI) has its own WWNN. The machine WWNN is not used as a reference because hosts can see only the SFI; the SFI WWNN is used for all configuration because the SFI is the machine that the host knows.

Determining a WWNN by using the DS CLI
The DS8870 WWNN has an address similar to the following strings:

50:05:07:63:0z:FF:Cx:xx or 50:05:07:63:0z:FF:Dx:xx

The z and x:xx values are unique combinations for each system and each Storage Facility Image (SFI) that are based on the machine serial number.

After you connect to the storage system through the DS CLI, use the lssi command to determine the SFI WWNN, as shown in Example 8-1.

Example 8-1 SFI WWNN determination
dscli> lssi
Name   ID               Storage Unit     Model WWNN             State  ESSNet
==============================================================================
ATS_02 IBM.2107-75ZA571 IBM.2107-75ZA570 961   5005076303FFD5AA Online Enabled

Do not use the lssu command because it determines the machine WWNN, as shown in Example 8-2.

Example 8-2 Machine WWNN
dscli> lssu
Name         ID               Model WWNN             pw state
=============================================================
DS8870_ATS02 IBM.2107-75ZA570 961   5005076303FFEDAA On

Determining a WWPN by using the DS CLI
The DS8870 WWPN is a child of the SFI WWNN: the WWPN inherits the z and x:xx values from the SFI WWNN. It also includes the YY:Y value, which is derived from where the HA card is physically installed. From the logical port naming, we have a WWPN in the DS8870 that looks like the following address:

50:05:07:63:0z:YY:Yx:xx

After you are connected to the machine through the DS CLI, use the lsioport command to determine the WWPN of each I/O port, as shown in Example 8-3.

Example 8-3 WWPN determination
dscli> lsioport IBM.2107-75ZA571
ID    WWPN             State  Type             topo     portgrp
===============================================================
I0000 50050763030015AA Online Fibre Channel-SW SCSI-FCP 0
I0001 50050763030055AA Online Fibre Channel-SW SCSI-FCP 0
I0002 50050763030095AA Online Fibre Channel-SW SCSI-FCP 0
I0003 500507630300D5AA Online Fibre Channel-SW SCSI-FCP 0
I0030 50050763030315AA Online Fibre Channel-SW SCSI-FCP 0
I0031 50050763030355AA Online Fibre Channel-SW SCSI-FCP 0
I0032 50050763030395AA Online Fibre Channel-SW FICON    0
I0033 500507630303D5AA Online Fibre Channel-SW SCSI-FCP 0
I0034 50050763034315AA Online Fibre Channel-SW SCSI-FCP 0
I0035 50050763034355AA Online Fibre Channel-SW SCSI-FCP 0
I0036 50050763034395AA Online Fibre Channel-SW SCSI-FCP 0
I0037 500507630343D5AA Online Fibre Channel-SW SCSI-FCP 0
I0100 50050763030815AA Online Fibre Channel-SW SCSI-FCP 0
I0101 50050763030855AA Online Fibre Channel-SW SCSI-FCP 0
I0102 50050763030895AA Online Fibre Channel-SW FICON    0
I0103 500507630308D5AA Online Fibre Channel-SW SCSI-FCP 0
I0104 50050763034815AA Online Fibre Channel-SW SCSI-FCP 0
I0105 50050763034855AA Online Fibre Channel-SW SCSI-FCP 0
I0106 50050763034895AA Online Fibre Channel-SW SCSI-FCP 0
I0107 500507630348D5AA Online Fibre Channel-SW SCSI-FCP 0
I0130 50050763030B15AA Online Fibre Channel-SW SCSI-FCP 0
I0131 50050763030B55AA Online Fibre Channel-SW SCSI-FCP 0
I0132 50050763030B95AA Online Fibre Channel-SW SCSI-FCP 0
I0133 50050763030BD5AA Online Fibre Channel-SW SCSI-FCP 0
I0200 50050763031015AA Online Fibre Channel-SW SCSI-FCP 0
I0201 50050763031055AA Online Fibre Channel-SW SCSI-FCP 0
I0202 50050763031095AA Online Fibre Channel-SW FICON    0
I0203 500507630310D5AA Online Fibre Channel-SW SCSI-FCP 0
I0204 50050763035015AA Online Fibre Channel-SW SCSI-FCP 0
I0205 50050763035055AA Online Fibre Channel-SW SCSI-FCP 0
I0206 50050763035095AA Online Fibre Channel-SW SCSI-FCP 0
I0207 500507630350D5AA Online Fibre Channel-SW SCSI-FCP 0
I0230 50050763031315AA Online Fibre Channel-LW FICON    0
I0231 50050763031355AA Online Fibre Channel-LW FICON    0
I0232 50050763031395AA Online Fibre Channel-LW FICON    0
I0233 500507630313D5AA Online Fibre Channel-LW FICON    0
I0300 50050763031815AA Online Fibre Channel-LW FICON    0
I0301 50050763031855AA Online Fibre Channel-LW FICON    0
I0302 50050763031895AA Online Fibre Channel-LW FICON    0
I0303 500507630318D5AA Online Fibre Channel-LW FICON    0
I0330 50050763031B15AA Online Fibre Channel-SW SCSI-FCP 0
I0331 50050763031B55AA Online Fibre Channel-SW SCSI-FCP 0
I0332 50050763031B95AA Online Fibre Channel-SW FICON    0
I0333 50050763031BD5AA Online Fibre Channel-SW SCSI-FCP 0
I0334 50050763035B15AA Online Fibre Channel-SW SCSI-FCP 0
I0335 50050763035B55AA Online Fibre Channel-SW SCSI-FCP 0
I0336 50050763035B95AA Online Fibre Channel-SW SCSI-FCP 0
I0337 50050763035BD5AA Online Fibre Channel-SW SCSI-FCP 0

Determining a WWNN by using the web GUI
Use the following guidelines to determine the WWNN by using the DS8000 GUI from the HMC:
1. Connect via a web browser to this website, and select System Status:
   http://<hmc ip address>:8451/DS8000/Login
2. Right-click the SFI in the status column and select Storage Image.
3. Select Properties.
4. Select Advanced, where you can find the WWNN value, as shown in Figure 8-5.

Figure 8-5 SFI WWNN value
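The same port information can be collected non-interactively, which is convenient when WWPNs are needed to prepare SAN zoning. The following one-line sketch assumes the DS CLI single-command mode and a UNIX-like shell; the HMC address and password file are placeholder values:

dscli -hmc1 10.10.10.1 -user admin -pwfile security.dat lsioport IBM.2107-75ZA571 | grep FICON
(prints only the ports that are configured for FICON, together with their WWPNs)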

Determining a WWPN by using the web GUI
Just as we used the GUI options from the HMC to determine the WWNN, we can use it to find the WWPN:
1. Connect via a web browser to this website, and select System Status:
   http://<hmc ip address>:8451/DS8000/Login
2. Right-click the SFI in the status column and select Storage Image.
3. Select Configure I/O ports. You receive the full list of each installed I/O port with its WWPN and its physical location, as shown in Figure 8-6.

Figure 8-6 I/O ports WWPN determination

8.3 Network connectivity planning

Implementing the DS8870 requires that you consider the physical network connectivity of the storage adapters and the HMC within your local area network. Check your local environment for the following DS8870 unit connections:
- HMC and network access
- Tivoli Storage Productivity Center Basic Edition (if used) and network access
- DS CLI
- Remote support connection
- Remote power control
- SAN connection
- Tivoli Key Lifecycle Manager connection
- LDAP connection

For more information about physical network connectivity, see IBM System Storage DS8000 User’s Guide, SC26-7915, and IBM System Storage DS8870 Introduction and Planning Guide, GC27-4209.

8.3.1 Hardware Management Console and network access

HMCs are the focal point for configuration, Copy Services management, and maintenance for a DS8870 unit. The HMC consists of a notebook (Lenovo ThinkPad T520) with adapters for modem and 10/100/1000 Mb Ethernet. The internal HMC that is included with every base frame is mounted in a pull-out tray for convenience and security. Ethernet cables connect the HMC to the storage unit in a redundant configuration.

A second, redundant external HMC is orderable. The second HMC is external to the DS8870 rack and consists of a mobile workstation similar to the primary HMC. Having a second HMC is a good idea for environments that use Tivoli Key Lifecycle Manager encryption management and Advanced Copy Services functions. A secondary HMC also is recommended for environments that perform frequent logical reconfiguration tasks. Valuable time can be saved if there are problems with the primary HMC.

The HMC can be connected to your network (eth2: customer network) for the following tasks:
- Remote management of your system by using the DS CLI
- Remote DS Storage Manager GUI management of your system, which connects directly from a customer notebook by opening a browser and navigating to the following website:
  http://<HMC IP address>:8451/DS8000/Login

To connect the HMCs (internal and external, if present) to your network, you must provide the following settings to your IBM service representative so that the management consoles can be configured for attachment to your LAN:
- Management console network IDs, host names, and domain name
- Gateway routing information
- Domain Name Server (DNS) settings

An Ethernet connection to the customer network for ETH2 also should be provided. If you plan to use DNS to resolve network names, verify that the DNS servers are reachable from the HMC to avoid slowdowns on the HMC internal network while external name lookups time out.

Important: The DS8870 uses 172.16.y.z and 172.17.y.z private network addresses. If the customer network uses the same addresses, IBM must be informed as early as possible to avoid conflicts.

For more information about HMC planning, see Chapter 9, “DS8870 HMC planning and setup” on page 251.

8.3.2 IBM Tivoli Storage Productivity Center

The IBM Tivoli Storage Productivity Center is an integrated software solution that can help you improve and centralize the management of your storage environment through the integration of products. With the Tivoli Storage Productivity Center, it is possible to manage and configure multiple DS8000 storage systems from a single point of control.

IBM Tivoli Storage Productivity Center provides a DS8000 management interface, and the IBM System Storage DS8000 Storage Manager is also accessible by using the IBM Tivoli Storage Productivity Center. You can use this interface to add and manage multiple DS8000 series storage units from one console.

8.3.3 DS command-line interface

The IBM System Storage DS CLI can be used to create, delete, modify, and view Copy Services functions and to perform the logical configuration of a storage unit. The DS CLI also can be used to manage other functions for a storage unit, including managing security settings, querying point-in-time performance information or the status of physical resources, and exporting audit logs.

These tasks can be performed interactively, in batch processes (operating system shell scripts), or in DS CLI script files. A DS CLI script file is a text file that contains one or more DS CLI commands and can be issued as a single command (a sample script file is shown after 8.3.4).

The DS CLI can be installed on and used from a LAN-connected system, such as the storage administrator’s workstation or any separate server that is connected to the LAN of the storage unit. For more information about the hardware and software requirements for the DS CLI, see IBM System Storage DS Command-Line Interface User’s Guide for DS8000 series, SC53-1127.

8.3.4 Remote support connection

A remote support connection is available from the HMC by using a modem (dial-up) and a VPN over the Internet through the client LAN. You can take advantage of the DS8000 remote support feature for outbound calls (Call Home function) or inbound calls (remote service access by an IBM technical support representative). The AOS software also is available and allows the IBM Support Center to establish a connection tunnel to the DS8000 through a secure VPN. A typical remote support connection is shown in Figure 8-7.

Figure 8-7 DS8000 HMC remote support connection

For more information, see Chapter 17, “Remote support” on page 465.
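The following minimal sketch shows what a DS CLI script file might look like and how it can be started in script mode. All names are hypothetical: the file name, HMC address, and password file must be replaced with values from your own installation, and the commands should be checked against the DS CLI documentation for your code level:

# listports.cli - a hypothetical DS CLI script file
# Lines that start with # are comments.
lssi
lsioport -dev IBM.2107-75ZA571

dscli -script listports.cli -hmc1 10.10.10.1 -user admin -pwfile security.dat

Because the script is issued as a single dscli invocation, this form is convenient for batch jobs that collect configuration or status information on a schedule.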

Complete the following steps to prepare for attaching the DS8870 to the customer’s LAN:
1. Assign a TCP/IP address and host name to the HMC in the DS8870.
2. If email notification for service alerts is allowed, enable the support on the mail server for the TCP/IP addresses that are assigned to the DS8870.
3. Use the information that was entered on the installation worksheets during your planning.

You must provide the network parameters for your HMC before the console is configured. Your IBM System Service Representative (SSR) needs the configuration worksheet during the configuration of your HMC. A worksheet is available in IBM System Storage DS8870 Introduction and Planning Guide, GC27-4209.

For more information about the remote support connection, see Chapter 17, “Remote support” on page 465. Generally, use a service connection through the high-speed VPN network by using a secure Internet connection.

8.3.5 Remote power control

By using the System z remote power control setting, you can power on and off the storage unit from a System z interface. If you plan to use this capability, be sure that you order the System z power control feature, which comes with four power control cables. The power control cable comes with a standard length of 31 meters, so be sure to consider the physical distance between the host and the DS8870.

When you use this feature, you must specify the System z power control setting in the Power Control Pane menu, and then select the option zSeries Power Mode in the HMC GUI. In a System z environment, the host must have the Power Sequence Controller (PSC) feature installed to turn on and off specific control units, such as the DS8870. The control unit is controlled by the host through the power control cable.

8.3.6 Storage area network connection

The DS8870 can be attached to a SAN environment through its Fibre Channel ports. SANs provide the capability to interconnect open systems hosts, System z hosts, and other storage systems.

A SAN allows your single Fibre Channel host ports to have physical access to multiple Fibre Channel ports on the storage unit. You might need to implement zoning to limit the access (and provide access security) of host ports to your storage ports. Shared access to a storage unit Fibre Channel port might come from hosts that support a combination of bus adapter types and operating systems. SAN bandwidth also should be evaluated to handle the new workload.

Important: A SAN administrator should verify periodically that the SAN is working correctly before any new devices are installed.
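On the DS8870 side, access control complements SAN zoning: a host connection ties a host WWPN to a volume group. The following sketch is illustrative only; the WWPN, host type, volume group, and nickname are hypothetical values to be replaced with your own:

dscli> mkhostconnect -wwname 10000000C9A1B2C3 -hosttype pSeries -volgrp V11 AIX_prod01
dscli> lshostconnect
(the new connection is listed with its assigned volume group)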

8.3.7 Tivoli Key Lifecycle Manager server for encryption

The DS8870 is configured with FDE drives and can be enabled for encryption. If encryption is wanted, then Feature Code #1750 should be included in the order; this feature enables the customer to download the function authorization from the DSFA website and to elect to turn on encryption. Feature Code #1754 is used to disable encryption. When encryption is enabled, an isolated Tivoli Key Lifecycle Manager key server also is required.

IBM System Storage DS8000 series offers an IBM Tivoli Key Lifecycle Manager server with hardware feature code #1760. Isolated key servers that are ordered with feature code #1760 include a Linux operating system and Tivoli Key Lifecycle Manager software that is preinstalled. An isolated server must use an internal disk for all files that are necessary to boot and have the Tivoli Key Lifecycle Manager key server become operational.

Important: No other hardware or software is allowed on the isolated key server.

Customers must acquire a Tivoli Key Lifecycle Manager license for use of the Tivoli Key Lifecycle Manager software, which is ordered separately from the stand-alone server hardware.

The following major planning components are part of the implementation of an encryption environment. Review all planning requirements and include them in your installation considerations:

Key server planning: Introductory information, including required and optional features, can be found in IBM System Storage DS8870 Introduction and Planning Guide, GC27-4209.

Tivoli Key Lifecycle Manager planning: The DS8000 series supports IBM Tivoli Key Lifecycle Manager V1.0 and V2.0, and IBM Security Key Lifecycle Manager. For more information about supported products and platforms, see this website:
http://www.ibm.com/support/docview.wss?uid=swg21386446
For more information about the Tivoli Key Lifecycle Manager product and features, see this website:
http://publib.boulder.ibm.com/infocenter/tivihelp/v2r1/index.jsp?topic=/com.ibm.tklm.doc_2.0/welcome.htm

Encryption planning: Encryption planning is a customer responsibility. For more information about the required Tivoli Key Lifecycle Manager server and other requirements and guidelines, see IBM System Storage DS8000 Disk Encryption Implementation and Usage Guidelines, REDP-4500.

Full Disk Encryption activation review planning: IBM Full Disk Encryption offerings must be activated before use. This activation is part of the installation and configuration steps that are required to use the technology. The installation and activation review is performed by the IBM Systems and Technology Lab Services group.

Tivoli Key Lifecycle Manager connectivity and routing information: To connect the Tivoli Key Lifecycle Manager server to your network, you must provide the following settings to your IBM service representative:
- Key server network IDs, host names, and domain name
- Domain Name Server (DNS) settings (if you plan to use DNS to resolve network names)

There are two network ports that must be opened on a firewall to allow the DS8870 connection and to provide an administration management interface to the Tivoli Key Lifecycle Manager server. These ports are defined by the Tivoli Key Lifecycle Manager administrator.

8.3.8 Lightweight Directory Access Protocol server for single sign-on

A Lightweight Directory Access Protocol (LDAP) server can be used to provide directory services to the DS8000 through the Tivoli Storage Productivity Center. This configuration can enable a single sign-on interface to all DS8000 systems in the client environment. Typically, there is one LDAP server installed in the client environment to provide directory services. For more information, see IBM System Storage DS8000: LDAP Authentication, REDP-4505.

LDAP connectivity and routing information: To connect the LDAP server to the Tivoli Storage Productivity Center, you must provide the following settings to your IBM service representative:
- LDAP network IDs, host names, domain name, and port
- User ID and password of the LDAP server

If the LDAP server is separated from the Tivoli Storage Productivity Center by a firewall, the LDAP port (which is verified during the installation) must be opened in that firewall. There also might be a Secure Socket Layer (SSL) connection between the Tivoli Storage Productivity Center and the DS8000 that must be opened to allow LDAP traffic between them.

8.4 Remote mirror and copy connectivity

The DS8000 uses the high-speed Fibre Channel Protocol (FCP) for Remote Mirror and Copy connectivity. Make sure that you have enough FCP paths assigned for your remote mirroring between your source and target sites to address performance and redundancy issues. When you plan to use both Metro Mirror and Global Copy modes between a pair of storage units, use separate logical and physical paths for the Metro Mirror, and use another set of logical and physical paths for the Global Copy.

Plan the distance between the primary and secondary storage units to properly acquire the necessary length of fiber optic cables that you need. If necessary, your Copy Services solution can use hardware such as channel extenders or dense wavelength division multiplexing (DWDM).

For more information, see IBM System Storage DS8000: Copy Services for Open Systems, SG24-6788, and IBM System Storage DS8000: Copy Services for IBM System z, SG24-6787.
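When the physical links are in place, the logical Remote Mirror and Copy paths are defined over the FCP ports. The following sketch shows the general DS CLI flow; the remote device ID, WWNN, LSS pair, and port pair are hypothetical values, and the exact syntax should be verified in the DS CLI documentation for your code level:

dscli> lsavailpprcport -remotewwnn 5005076303FFD6AA 00:00
(lists the local/remote port pairs that can reach the remote system for this LSS pair)
dscli> mkpprcpath -remotedev IBM.2107-75ABCD1 -remotewwnn 5005076303FFD6AA -srclss 00 -tgtlss 00 I0000:I0100
(creates a path for LSS 00 over the chosen port pair)

Defining two or more port pairs per path, on separate host adapters, addresses the redundancy and performance considerations that are described above.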

8.5 Disk capacity considerations

The effective capacity of the DS8000 is determined by the following factors, which apply equally to standard and encrypted storage drives:
- The spares configuration
- The size of the installed disk drives
- The selected RAID configuration: RAID 5, RAID 6, or RAID 10, in two sparing combinations
- The storage type: Fixed Block (FB) or Count Key Data (CKD)

8.5.1 Disk sparing

On internal storage, RAID arrays automatically attempt to recover from a DDM failure by rebuilding the data of the failed DDM on a spare DDM. For sparing to occur, a DDM with a disk capacity equal to or greater than the failed disk capacity must be available on the same device adapter pair. After sparing is initiated, the spare and the failing DDM are swapped between their respective array sites, such that the spare DDM becomes part of the array site that is associated with the array of the failed DDM. The failing DDM becomes a failed spare DDM in the array site from which the spare came.

The DS8000 assigns spare disks automatically. The first four array sites (a set of eight disk drives) on a Device Adapter (DA) pair normally each contribute one spare to the DA pair. A minimum of one spare is created for each array site that is defined until the following conditions are met:
- A minimum of four spares per DA pair
- A minimum of four spares of the largest capacity array site on the DA pair
- A minimum of two spares of capacity and RPM greater than or equal to the fastest array site of any capacity on the DA pair

The DDM sparing policies support the over-configuration of spares. This feature might be useful for certain installations because it allows the repair of some DDM failures to be deferred until a later repair action is required. For more information about the DS8000 sparing concepts, see “Spare creation” on page 91.

Important: Starting from release 6.2, an improved sparing management named Smart Rebuild was added to the microcode to better manage errors on disks and arrays that are sparing.
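Spare allocation can be inspected from the DS CLI. As a hypothetical illustration (the storage image ID is a placeholder, and the exact column names depend on the DS CLI level), the DDM usage of a unit might be listed as follows:

dscli> lsddm IBM.2107-75ZA571
(each DDM is listed with its capacity and speed, and a usage field that
distinguishes array members from DDMs that are kept as spares)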

8.5.2 Disk capacity

The DS8870 operates in a RAID 5, RAID 6, or RAID 10 configuration. The following RAID configurations are possible:
- 6+P RAID 5 configuration: The array consists of six data drives and one parity drive. The remaining drive on the array site is used as a spare.
- 7+P RAID 5 configuration: The array consists of seven data drives and one parity drive.
- 5+P+Q RAID 6 configuration: The array consists of five data drives and two parity drives. The remaining drive on the array site is used as a spare.
- 6+P+Q RAID 6 configuration: The array consists of six data drives and two parity drives.
- 3+3 RAID 10 configuration: The array consists of three data drives that are mirrored to three copy drives. Two drives on the array site are used as spares.
- 4+4 RAID 10 configuration: The array consists of four data drives that are mirrored to four copy drives.

Hard disk drive (HDD) capacity is added in increments of one disk drive set. A disk drive set contains 16 drives, which form two array sites. SSD capacity can be added in increments of a half disk drive set (eight drives).

Table 8-8 on page 247 shows the effective capacity of one rank in the various possible configurations. The capacities in the table are expressed in decimal gigabytes and as the number of extents.
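The decimal GB figure for an FB rank is always slightly larger than its extent count because one FB extent is 2^30 bytes (1 GiB), while the table uses decimal gigabytes (1 GB is 1,000,000,000 bytes). A short worked conversion, using an illustrative extent count:

  596 extents x 1.073741824 GB per extent = 639.95 GB (about 640 GB)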

38 (812) 1621.000.6 (2777) N/A N/A Important: When reviewing Table 8-8.39 (724) 777.13 (2858) 4125.75 (3075) 3298.13 (5297) 2277.63 (17386) Rank of RAID 5 arrays 6+P 794.77 (5188) N/A N/A 16662.80 (1740) 2520.000.43 (4630) 4966.35 (1017) 1090.32 (3842) 4122.77 (2376) N/A N/A 7+P 933.07 (2347) 2516.Important: Because of the larger metadata area.55 (482) 516.62 (3141) 3367.87 (4107) 5925 (5518) 5917.35 (1510) 1618.36 (1727) 3372.33 (3666) 3931.53 (12890) 13831.1 (869) 931.6 (740) 793.60 (2083) 2232.73 (4729) 5071.4 (2121) 2274. DS8870 Physical planning and installation 247 .43 (1139) 2236.89 (2689) N/A N/A N/A N/A 4+4 517.7 (1542) 1653. Although disk drive sets contain 16 drives.10 (1249) 1340. Chapter 8. keep in mind the following points: Effective capacities are in decimal gigabytes (GB). and the numbers for the required extents for each available type of RAID.04 (2550) 2736.01 (844) 1667.60 (754) 808.38 (4306) N/A N/A 13840.97 (540) 1092.30 (1400) 2738.09 (3445) 4971.90 (14448) 6+P+Q 777.52 (668) 1341.99 (3518) 5077.7.16 (395) 809.89 (1691) 3301.43 (6181) 2660.000 bytes.7 (2478) 2658.52 (1553) 1665. arrays use only eight drives.77 (3143) 3368 (3518) N/A N/A N/A N/A Rank of RAID 6 arrays 5+P+Q 640 (596) 639. the net capacity of the ranks is lower than in previous DS8000 models.33 (15518) 16644.5 (973) 1936 (1803) 1934 (2020) 3936. The effective capacity assumes that you have two arrays for each disk drive set. An updated version of Capacity Magic (see “Capacity Magic” on page 518) aids you in determining the raw and net storage capacities. Table 8-8 DS8870 Disk drive set capacity for open systems and System z environments Disk size/ Rank type Effective capacity of one rank in decimal GB (Number of extents) Rank of RAID 10 arrays 3+3 146 GB / FB 146 GB / CKD 300 GB / FB 300 GB / CKD 600 GB / FB 600 GB / CKD 900 GB / FB 900 GB / CKD 400 GB (SSD)/FB 400 GB (SSD)/ CKD 3 TB (NL)/FB 3 TB (NL)/CKD 379. 1 GB is 1.03 (353) 378.56 (2332) 3374.66 (829) 1655.

8.5.3 DS8000 solid-state drive considerations

SSDs are a higher-performance option when compared to HDDs. However, SSDs have limitations and considerations that differ from HDDs. The DS8870 is the preferable platform for SSDs because of its better overall system performance.

SSDs are available in 400 GB capacity with Full Disk Encryption (FDE) capability. SSDs can be ordered and installed in eight-drive installation groups (half drive sets) or 16-drive installation groups (full drive sets). A half drive set (8) is always upgraded to a full drive set (16) when SSD capacity is added.

Important: An eight-drive installation increment means that the SSD rank that is added is assigned to only one DS8000 server (CEC). This configuration is not preferred for performance reasons.

SSD placement
The following rules apply to SSD placement:
- SSD sets have a default location when a new machine is ordered and configured. SSDs are installed in default locations from manufacturing, which is the first storage enclosure pair on each device adapter pair.
- To achieve the optimal price-to-performance ratio in the DS8000, the default locations for SSDs in a DS8870 are split among eight DA pairs (if installed) in the first two frames: four in the first frame and four in the second frame. This installation is done to spread the SSDs over as many DA pairs as possible.
- A frame can contain one SSD half drive set at most. This requirement ensures that the limitation of 48 SSDs per DA pair for the DS8870 is not exceeded. Adding SSDs to an existing configuration (to the third and fourth frame for the DS8870) requires an RPQ.

Limitations
The following limitations apply to SSDs:
- Drives of different capacities and speeds cannot be intermixed in a storage enclosure pair. All disks that are installed in a storage enclosure pair must be of the same capacity and speed.
- A DS8870 system is limited to 48 SSDs per DA pair. The maximum number of SSDs in a DS8870 system is therefore 384 drives that are spread over eight DA pairs (8 x 48 = 384).
- RAID 5 is the main supported implementation for SSDs (RAID 6 is not supported). RAID 10 is supported only with a customer-requested RPQ. The array configuration is 6+P+S or 7+P.
- SSDs follow normal sparing rules.

Performance considerations
Using a base SSD configuration (16 DDMs) and implementing the Easy Tier functions provides the optimal price-to-performance ratio.

Important: When possible, use another Device Adapter pair for each new SSD drive set. For larger configurations, it is also possible that one SSD rank might be mixed with HDD ranks behind one particular Device Adapter. For smaller configurations, you can also try to locate the HDDs fully on Device Adapter pairs other than the ones with SSDs. Easy Tier often takes care of this task.

8.6 Planning for growth

The DS8870 system is a highly scalable storage solution. Features such as total storage capacity, cache size, processor cores, and host adapters can be easily increased by adding the necessary hardware and by changing the needed licensed feature keys (as ordered) without any downtime (concurrent upgrade).

Planning for future growth normally suggests an increase in physical requirements in your installation area, including floor loading, electrical power, and environmental cooling; a careful sizing is needed for all installations.

A key feature that you can order for your dynamic storage requirements is Standby Capacity on Demand (CoD). This offering provides you with the ability to tap into more storage and is attractive if you have rapid or unpredictable growth, or if you want to know that the extra storage is there when you need it. By using Standby CoD, you access the extra storage capacity when you need it, non-disruptively. For more information about CoD, see 18.2, “Using Capacity on Demand” on page 509.


Chapter 9. DS8870 HMC planning and setup

This chapter describes the planning tasks that are involved in the setup of the required DS8870 Hardware Management Console. This chapter covers the following topics:
- Hardware Management Console overview
- Hardware Management Console software
- HMC activities
- HMC and IPv6
- HMC user management
- External HMC
- Configuration flow

9.1 Hardware Management Console overview

The Hardware Management Console (HMC) is a multi-purpose piece of equipment that provides the services that the client needs to configure and manage the storage and to manage some of the operational aspects of the storage system. It also provides the interface where service personnel perform diagnostic and repair tasks. The HMC is a configuration and management station for the DS8870; it does not process any of the data from hosts, and it is not even in the path that the data takes from a host to the storage.

The HMC is the focal point for DS8870 management, which includes the following functions:
- DS8870 power control
- Storage provisioning
- Advanced Copy Services management
- Interface for on-site service personnel
- Collection of diagnostic data and Call Home
- Problem management
- Remote support by secure virtual private network (VPN) tunnel
- Remote support by modem
- Connection to Tivoli Key Lifecycle Manager for encryption functions
- Interface for microcode and other firmware updates

Every DS8870 installation includes an HMC that is in the base frame. A second HMC, external to the DS8870, is available as an option to provide redundancy.

9.1.1 Storage HMC hardware

The HMC consists of a mobile workstation (notebook), a Lenovo T520. The use of a mobile workstation makes the HMC efficient in many ways, including power consumption. The DS8870 HMC location is shown in Figure 9-1 on page 253.

The HMC is mounted on a slide-out tray that pulls forward when the door is fully open. Because of width constraints, the HMC is seated sideways on the tray, on a side-slippable platter. When the tray is extended forward, there is a latch on the platter in the front of the notebook HMC. Lift this latch to allow the workstation to slide forward, and then lift and release it to the service rail. The service rail is on top of the uninterruptible power supply (UPS).

Figure 9-1 HMC location in DS8870

The mobile workstation is equipped with adapters for a modem and 10/100/1000 Mb Ethernet. These adapters are routed to special connectors in the rear of the DS8870 frame, as shown in Figure 9-2. These connectors are only in the base frame of a DS8870 and not in any of the expansion frames.

Figure 9-2 DS8870 HMC modem and Ethernet connections (rear)

A second, redundant mobile HMC workstation is orderable and should be used in environments that use Tivoli Key Lifecycle Manager encryption management and Advanced Copy Services functions. The second HMC is external to the DS8870 frame. For more information about adding an external HMC, see 9.6, “External HMC” on page 265.

9.1.2 Private Ethernet networks

The HMC communicates with the storage facility through a pair of redundant Ethernet networks, which are designated as the Black Network and the Gray Network. There are two switches that are included in the rear of the DS8870 base frame. Each HMC and each Central Electronics Complex (CEC) is connected to both switches. Figure 9-3 shows how each port is used on the pair of DS8870 Ethernet switches. In most DS8870 configurations, two or three ports might be unused on each switch.

Figure 9-3 DS8870 internal Ethernet switches

Important: The internal Ethernet switches that are shown in Figure 9-3 are for the DS8870 private network only. Do not connect the client network (or any other equipment) to these switches; they are for DS8870 internal use only. No client network connection should ever be made directly to these internal switches.

9.2 Hardware Management Console software

The Linux-based HMC includes the following application servers that run within a Tomcat environment:
- DS Storage Management server: The logical server that communicates with the outside world to perform DS8870-specific tasks.
- Enterprise Storage Server Network Interface server (ESSNI): The logical server that communicates with the DS Storage Management server and interacts with the two CECs of the DS8870.

The DS8870 HMC provides the following management interfaces:
- DS Storage Manager graphical user interface (GUI)
- DS Command-Line Interface (DS CLI)
- DS Open application programming interface (DS Open API)
- Web-based user interface (WebUI)

The GUI and the DS CLI are comprehensive, easy-to-use interfaces for a storage administrator to perform DS8870 management tasks, such as provisioning the storage arrays, managing application users, and changing HMC options. The interfaces can be used interchangeably, depending on the particular task.

The DS Open API provides an interface for external storage management programs, such as Tivoli Storage Productivity Center, to communicate with the DS8870. It channels traffic through the IBM System Storage Common Information Model (CIM) agent, a middleware application that provides a CIM-compliant interface.

The DS8870 also uses a slim, lightweight, and fast interface that is called WebUI, which can be used remotely over a VPN by support personnel to check the health status and perform service tasks.

9.2.1 DS Storage Manager GUI

DS Storage Manager can be accessed by using IBM Tivoli Storage Productivity Center Limited (Limited is the minimum software requirement and can be installed on a customer-provided server) or by using the Tivoli Storage Productivity Center Element Manager from any network-connected workstation with a supported browser. With the DS8870, it is also possible to access the DS Storage Manager GUI directly; Tivoli Storage Productivity Center is still an available option, but it is no longer the only way to use the DS Storage Manager GUI. For more information about using the GUI without Tivoli Storage Productivity Center, see 13.1, “Accessing the DS GUI” on page 324. Login procedures are described in the following sections.

IBM Tivoli Storage Productivity Center login to the DS Storage Manager GUI
The DS Storage Manager GUI can be started by using the Tivoli Storage Productivity Center Element Manager that is installed on a customer server, from any supported, network-connected workstation. IBM Tivoli Storage Productivity Center simplifies storage management by providing the following benefits:
- Centralizing the management of heterogeneous storage network resources with IBM storage management software
- Providing greater synergy between storage management software and IBM storage devices
- Reducing the number of servers that are required to manage your software infrastructure
- Migrating from basic device management to storage management applications that provide higher-level functions

To access the DS Storage Manager GUI through the network, open a new browser window or tab and enter the following address:
http://<TPC EM ipaddress>:9550/ITSRM/app/welcome.html

9.2.2 Command-line interface

The DS CLI, which must be executed in the command environment of an external workstation, is a second option to communicate with the HMC. The DS CLI might be a good choice for configuration tasks when there are many updates to be done because this option avoids the web page load time for each window in the DS Storage Manager GUI. The DS CLI and the DS Open API communicate directly with the ESSNI server software that is running on the HMC.

For a complete list of DS CLI commands, see IBM Systems Storage DS8000 Series: Command-Line Interface User’s Guide, GC53-1127. For more information about DS CLI use and configuration, see Chapter 14, “Configuration with the DS Command-Line Interface” on page 385.

9.2.3 DS Open application programming interface

Calling DS Open application programming interfaces (DS Open APIs) from within a program is a third option to implement communication with the HMC. For the DS8000, the CIM agent is preloaded with the HMC code and is started when the HMC boots. An active CIM agent allows access only to the DS8000s that are managed by the HMC on which it is running. Configuration of the CIM agent must be performed by an IBM Service representative by using the DS CIM Command-Line Interface (DSCIMCLI). For more information about the CIM agent, see this website:
http://www.snia.org/forums/smi/tech_programs/lab_program/

9.2.4 Web-based user interface

The Web User Interface (WebUI) is a browser-based interface that is used for remote access to system utilities, and it provides a secure and full interface to the utilities that are running at the HMC. The connection uses port 443 over secure sockets layer (SSL). If a VPN connection is set up, WebUI can be used by support personnel for DS8870 diagnostic tasks, data offloading, and many service actions.

Important: Use of a secure VPN or Assist On-site (AOS) VPN allows service personnel to quickly respond to client needs by using the WebUI.
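A typical way to start a DS CLI session is shown below. The HMC address and credential values are placeholders for your own installation:

dscli -hmc1 10.10.10.1 -user admin -pwfile security.dat
(opens an interactive session; commands such as lssi and lsioport can then be entered at the dscli> prompt)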

DS8870 HMC planning and setup 257 . you see the HMC window. The default user ID is customer and the default password is cust0mer.Complete the following steps to log in to the HMC by using the WebUI: 1. as shown in Figure 9-4. H MC M ana gem ent Se rvice M ana gem ent H e lp Log off Status Ov ervi ew Figure 9-5 WebUI main window E xtra Information Chapter 9. Click Log on and launch the Hardware Management Console web application to open the login window and log in. Log on at the HMC. Figure 9-4 Hardware Management Console 2. Other areas of interest are shown in Figure 9-5. in which you can select Status Overview to see the status of the DS8870. If you are successfully logged in.

Because the HMC WebUI is mainly a services interface, it is not covered here. More information can be found in the Help menu.

9.3 HMC activities

This section covers planning and maintenance tasks for the DS8870 HMC. For more information about overall planning, see Chapter 8, “DS8870 Physical planning and installation” on page 223.

9.3.1 HMC planning tasks

The following tasks are needed to plan the installation or configuration:
- A connection to the client network is needed at the base frame for the internal HMC. Another connection also is needed at the location of the second, external HMC, if ordered. The connections should be standard CAT5/6 Ethernet cabling with RJ45 connectors. If a second, external HMC was not ordered, the related information can be safely ignored.
- The DS8870 can work with IPv4 and IPv6 networks. For more information about procedures to configure the DS8870 HMC for IPv6, see 9.4, “HMC and IPv6” on page 261.
- IP addresses for the internal and external HMCs are needed.
- If modem access is allowed, a phone line is needed at the base frame for the internal HMC. If ordered, another line also is needed at the location of the second, external HMC. The connections should be standard phone cabling with RJ11 connectors.
- A decision should be made as to which web browser should be used. The web browser to be used on any administration workstation must be supported, as described in IBM System Storage DS8870 Introduction and Planning Guide, GC27-4209. The web browser is the only software that is needed on workstations that perform configuration tasks online by using the DS Storage Manager GUI (directly, or through Tivoli Storage Productivity Center Limited or Tivoli Storage Productivity Center Element Manager). Most users use the DS Storage Manager GUI directly through a browser; you can also use Tivoli Storage Productivity Center in your environment to access the DS GUI.
- The IP addresses of SNMP recipients must be identified if the client wants the DS8870 HMC to send SNMP traps to a monitoring station.
- Email accounts must be identified if the client wants the DS8870 HMC to send email messages for problem conditions.
- The IP addresses of NTP servers must be identified if the client wants the DS8870 HMC to use Network Time Protocol for time synchronization.
- When a DS8870 is ordered, the license and certain optional features must be activated as part of the customization of the DS8870. For more information, see Chapter 10, “IBM System Storage DS8000 features and license keys” on page 271.
- The installation tasks for the optional external HMC must be identified as part of the overall project plan and agreed upon with the responsible IBM personnel.

Important: Applying increased feature activation codes is a concurrent action.

9.3.2 Planning for microcode upgrades

The following tasks must be considered regarding microcode upgrades on the DS8870:

Microcode changes: IBM might release changes to the DS8870 series Licensed Machine Code. For more information about microcode upgrades, see Chapter 15, “Licensed machine code” on page 435.

Microcode installation: An IBM service representative can install the changes. Check whether the new microcode requires new levels of DS Storage Manager, DS CLI, and DS Open API, and plan on upgrading them on the relevant workstations, if necessary.

Host prerequisites: When you are planning for initial installation or for microcode updates, make sure that all prerequisites for the hosts are identified correctly. Sometimes a new level also is required for the SDD. DS8870 interoperability information can be found at the IBM System Storage Interoperability Center (SSIC) at this website:
http://www.ibm.com/systems/support/storage/config/ssic
To prepare for downloading the drivers, see the HBA Support Matrix that is referenced in the Interoperability Matrix and make sure that the drivers are downloaded from the IBM Internet site. This requirement is necessary to make sure that the drivers are used with the settings that correspond to the DS8870 and not to some other IBM storage subsystem.

Important: The Interoperability Center includes information about the latest supported code levels. This availability does not necessarily mean that former levels of HBA firmware or drivers are no longer supported. If in doubt about any supported levels, contact your IBM representative.

Maintenance windows: Normally, the microcode update of the DS8870 is a nondisruptive action. However, any prerequisites that are identified for the hosts (for example, patches, new maintenance levels, or new drivers) could make it necessary to schedule a maintenance window. The host environments can then be upgraded to the needed level in parallel to the microcode update of the DS8870 taking place.

9.3.3 Time synchronization

For proper error analysis, it is important to have the date and time information synchronized as much as possible on all components in the DS8870 environment. The components include the DS8870 HMC, the DS Storage Manager, and the DS CLI workstations.

With the DS8870, the HMC can use the Network Time Protocol (NTP) service. Customers can specify NTP servers on their internal network to provide the time to the HMC. An IBM service representative enables the HMC to use NTP servers (ideally at the time of the initial DS8870 installation). It is a client responsibility to ensure that the NTP servers are working, stable, and accurate.

9.3.4 Monitoring DS8870 with the HMC

A client can receive notifications from the HMC through SNMP traps and email messages. Notifications contain information about your storage complex, such as open serviceable events. You can choose one or both of the following notification methods:

Simple Network Management Protocol (SNMP) traps: For monitoring purposes, the DS8870 uses SNMP traps. An SNMP trap can be sent to a server in the client’s environment, perhaps with System Management Software, which handles the trap that is based on the MIB that was delivered with the DS8870 software. A MIB that contains all of the traps can be used for integration purposes into System Management Software. The supported traps are described in the documentation that comes with the microcode on the CDs that are provided by the IBM service representative. The IP address to which the traps should be sent must be configured during the initial installation of the DS8870. For more information about the DS8870 and SNMP, see Chapter 16, “Monitoring with Simple Network Management Protocol” on page 445.

Email: When you choose to enable email notifications, email messages are sent to all the addresses that are defined on the HMC whenever the storage complex encounters a serviceable event or must alert you to other information.

During the planning process, create a list of who must be notified. SNMP and email are the only notification options for the DS8870. Service Information Message (SIM) notification is only applicable to System z servers; you receive a notification on the system console if there is a serviceable event.

9.3.5 Call Home and remote support

The HMC uses outbound (Call Home) and inbound (remote service) support. Call Home is the capability of the HMC to contact the IBM support center to report a serviceable event. Remote support is the capability of IBM support representatives to connect to the HMC to perform service tasks remotely. If allowed to do so by the setup of the client’s environment, an IBM service support representative can connect to the HMC to perform detailed problem analysis. The IBM service support representative can view error logs and problem logs, and initiate trace or dump retrievals.

Remote support can be configured for dial-up connection through a modem or a high-speed VPN Internet connection. Setup of the remote support environment is done by the IBM service representative during the initial installation. For more information, see Chapter 17, “Remote support” on page 465.

9.4 HMC and IPv6

The DS8870 HMC can be configured for an IPv6 client network. IPv4 also is still supported.

Configuring the HMC in an IPv6 environment
Usually, the configuration is done by the IBM service representative during the DS8870 initial installation. Complete the following steps to configure the DS8870 HMC client network port for IPv6:
1. Launch and log in to WebUI. For more information, see 9.2.4, “Web-based user interface” on page 256. The HMC Welcome window opens, as shown in Figure 9-6.

Figure 9-6 WebUI Welcome window

2. In the HMC Management window, select Change Network Settings, as shown in Figure 9-7.

Figure 9-7 WebUI HMC Management window

3. Click the LAN Adapters tab.
4. Click Details. Only eth2 is shown; the private network ports are not editable.
5. Click the IPv6 Settings tab.
6. Click Add to add a static IP address to this adapter. Figure 9-8 shows the LAN Adapter Details window where you can configure the IPv6 values.

Figure 9-8 WebUI IPv6 settings window

9.5 HMC user management

User management can be performed by using the DS CLI or the DS GUI. An administrator user ID is preconfigured during the installation of the DS8870 and uses the following defaults:
- User ID: admin
- Password: admin

The password of the admin user ID must be changed before it can be used. The GUI forces you to change the password when you first log in. By using the DS CLI, you log in but you cannot issue any other commands until you change the password. For example, to change the admin user’s password to passw0rd, use the following DS CLI command:

chuser -pw passw0rd admin

After you issue that command, you can issue other commands.

User roles
During the planning phase of the project, a worksheet or a script file was established with a list of all users who need access to the DS GUI or DS CLI. A user can be assigned to more than one group, and at least one user (user_id) should be assigned to each of the following roles:
- The Administrator (admin) has access to all HMC service methods and all storage image resources. This user authorizes the actions of the Security Administrator during the encryption deadlock prevention and resolution process.
- The Security Administrator (secadmin) has access to all encryption functions. This role requires an Administrator user to confirm the actions that are taken during the encryption deadlock prevention and resolution process.
- The Physical operator (op_storage) has access to physical configuration service methods and resources, such as managing storage complex, storage image, rank, array, and Extent Pool objects. This group also has the privileges of the Monitor group, excluding security methods.
- The Logical operator (op_volume) has access to all service methods and resources that relate to logical volumes, hosts, host ports, logical subsystems, and Volume Groups, plus the privileges of the Monitor group, excluding security methods.
- The Copy Services operator has access to all Copy Services methods and resources, plus the privileges of the Monitor group, excluding security methods.
- The Service group has access to all HMC service methods and resources, such as performing code loads and retrieving problem logs, plus the privileges of the Monitor group, excluding security methods.
- The Monitor group has access to all read-only, nonsecurity HMC service methods, such as list and show commands.

Important: The DS8870 supports the capability to use a Single Point of Authentication function for the GUI and CLI through a proxy to contact an external repository (for example, an LDAP server). The proxy that is used is a Tivoli component (Embedded Security Services, also known as Authentication Service). This capability requires a minimum Tivoli Storage Productivity Center Version 5.1 server. For more information about LDAP-based authentication, see IBM System Storage DS8000: LDAP Authentication, REDP-4505.
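As a hypothetical example of putting the planning worksheet into practice, the following DS CLI commands create an operator account and verify it. The user name, group, and initial password are placeholder values; consult the DS CLI user’s guide for the full mkuser syntax at your code level:

dscli> mkuser -group op_storage -pw tempw0rd stgoper1
dscli> lsuser
(lists the defined user IDs and the groups to which they belong)

The new user must change the initial password at the first logon, as described in the password policies that follow.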

The No access group prevents access to any service method or storage image resources. By default, this user group is assigned to any user account in the security repository that is not associated with any other user group. This group also is used by an administrator to temporarily deactivate a user ID.

Password policies
Whenever a user is added, a password is entered by the administrator. During the first login, this password must be changed. Password settings include the time period (in days) after which passwords expire and a number that identifies how many failed logins are allowed. The user ID is deactivated if an invalid password is entered more times than the limit. Only a user with administrator rights can then reset the user ID with a new initial password.

If access is denied for the Administrator because of the number of invalid login attempts, a procedure can be obtained from your IBM support representative to reset the Administrator’s password.

Important: Upgrading an existing storage system to the latest code release does not change old default user-acquired rules. Existing default values are retained to prevent disruption. The user might opt to use the new defaults with the chpass -reset command, which resets all default values to the new defaults immediately.

General rule: Do not set the values of chpass to 0, because this setting indicates that passwords never expire and unlimited login attempts are allowed.

Important: The available Resource Groups function offers an enhanced security capability that supports the hosting of multiple customers with Copy Services requirements. It also supports a single customer with requirements to isolate the data of multiple operating system environments. For more information, see IBM Systems Storage DS8000: Copy Services Scope Management and Resource Groups, REDP-4758.
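For illustration, a password policy along the lines that are described above might be set as follows. The values are examples only, and the chpass options should be verified in the DS CLI user’s guide for your code level:

dscli> chpass -expire 90 -fail 5
(passwords now expire after 90 days, and a user ID is deactivated after five failed logins)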

The password for each user account is forced to adhere to the following rules:
- Passwords must contain one character from at least two groups and must be 8 - 16 characters long. For the DS8870, the following changes were made:
  – Groups now include alphabetic, numeric, and punctuation.
  – Old rules required at least five alphabetic and one numeric character.
  – Old rules required the first and last characters to be alphabetic.
- Passwords cannot contain the user’s ID.
- Initial passwords on new user accounts are expired.
- Passwords that are reset by an administrator are expired.
- Users must change expired passwords at the next logon.

The following password security implementations also are included:
- Password rules are checked when passwords are changed. Valid character set, embedded user ID, age, length, and history also are checked.
- The following password rules are checked when a user logs on: password expiration, lockout, and failed attempts.
- Users with passwords that expire, or who are locked out by the administrator while logged on, are not automatically disconnected from the DS8870.
- Users with invalidated passwords are not automatically disconnected from the DS8870; a session with a password that was invalidated by a change remains usable until the next password change.

Important: User names and passwords are case-sensitive. For example, if you create a user name that is called Anthony, you cannot log in by using the user name anthony.

9.6 External HMC

An external, secondary HMC (for redundancy) can be ordered for the DS8870. The external HMC is an optional purchase, but it is highly useful. Any organization with high availability requirements should consider deploying an external HMC. The two HMCs run in a dual-active configuration, so either HMC can be used at any time. For this book, the distinction between the internal and the external HMC is only for the purposes of clarity and explanation because they are identical in functionality.

Important: To help preserve Data Storage functionality, the internal and external HMCs are not available to be used as general purpose computing resources. The DS8870 can perform all storage duties while the HMC is down or offline, but configuration, error reporting, and maintenance capabilities become severely restricted.

9.6.1 External HMC benefits
Having an external HMC provides the following advantages:
– Enhanced maintenance capability: Because the HMC is the only interface that is available for service personnel, an external HMC provides maintenance operational capabilities if the internal HMC fails.
– Greater availability for power management: The use of the HMC is the only way to safely power on or power off the DS8870. An external HMC is necessary to shut down the DS8870 in the event of a failure with the internal HMC.
– Greater availability for remote support over modem: A second HMC with a phone line on the modem provides IBM with a way to perform remote support if an error occurs that prevents access to the first HMC. If network offload (FTP) is not allowed, one HMC can be used to offload data over the modem line while the other HMC is used for troubleshooting. For more information about the HMC modem, see Chapter 17, “Remote support” on page 465.
– Greater availability of encryption deadlock recovery: If the DS8870 is configured for full disk encryption and an encryption deadlock situation occurs, the use of the HMC is the only way to input a Recovery Key to allow the DS8870 to become operational. For more information about encryption deadlock, see 4.8.1, “Deadlock recovery” on page 100.
– Greater availability for Advanced Copy Services: Because all Copy Services functions are driven by the HMC, any environment that uses Advanced Copy Services should include dual HMCs for operations continuance.
– Greater availability for configuration operations: All configuration commands must go through the HMC. This requirement is true regardless of whether access is through the Tivoli Storage Productivity Center BE, Tivoli Storage Productivity Center Enterprise Manager, DS CLI, the DS Storage Manager, or DS Open API with another management program. An external HMC allows these operations to continue in the event of a failure with the internal HMC.

When a configuration or Copy Services command is issued, the DS CLI or DS Storage Manager sends the command to the first HMC. If the first HMC is not available, it automatically sends the command to the second HMC instead. Typically, you do not have to reissue the command. Any changes that are made by using one HMC are instantly reflected in the other HMC. There is no caching of host data that is done within the HMC, so there are no cache coherency issues.
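Because commands are automatically rerouted to the second HMC, it is a good idea to define both HMCs for the DS CLI. A minimal sketch of the relevant entries in the dscli profile follows; the IP addresses and storage image ID are illustrative placeholders:

hmc1: 10.10.10.1
hmc2: 10.10.10.2
devid: IBM.2107-75ZA571

With both entries in place, DS CLI configuration and Copy Services commands continue to work if the internal HMC fails.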


9.7 Configuration worksheets
During the installation of the DS8870, your IBM service representative customizes the setup of your storage complex based on information that you provide in a set of customization worksheets. Each time that you install a new storage unit or management console, you must complete the customization worksheets before the IBM service representatives can perform the installation. It is important that this information is entered into the machine so that preventive maintenance and high availability of the machine are maintained. You can find the customization worksheets in IBM System Storage DS8870 Introduction and Planning Guide, GC27-4209.

By using the customization worksheets, you specify the initial setup for the following items:
– Company information: This information allows IBM service representatives to contact you quickly when they must access your storage complex.
– Management console network settings: You specify the IP address and LAN settings for your management console (MC).
– Remote support (includes Call Home and Remote Support settings): You specify whether you want outbound or inbound remote support.
– Notifications (include SNMP trap and email notification settings): You specify the types of notifications that you and others might want to receive.
– Power control: You select and control the various power modes for the storage complex.
– Control Switch settings: You specify certain DS8870 settings that affect host connectivity. You must enter these choices on the control switch settings worksheet so that the service representative can set them during the installation of the DS8870.

Important: IBM service representatives cannot install a DS8870 system or management console until you provide them with the completed customization worksheets.


9.8 Configuration flow
Complete the following tasks to configure storage in the DS8870 (a DS CLI sketch of the open systems steps follows this list). The tasks do not have to be completed exactly in the order that is shown here:

Important: The configuration flow changes when you activate the Full Disk Encryption Feature for the DS8870.

1. Install license keys: Activate the license keys for the DS8870.
2. Create arrays: Configure the installed disk drives as RAID 5, RAID 6, or RAID 10 arrays.
3. Create ranks: Assign each array to a fixed block (FB) rank or a count key data (CKD) rank.
4. Create Extent Pools: Define Extent Pools, associate each one with Server 0 or Server 1, and assign at least one rank to each Extent Pool. If you want to take advantage of Storage Pool Striping, you must assign multiple ranks to an Extent Pool. With current versions of the DS GUI, you can start directly with the creation of Extent Pools (arrays and ranks are automatically and implicitly defined).
5. Create a repository for Space Efficient volumes.
6. Configure I/O ports: Define the type of the Fibre Channel/FICON ports. The port type can be Switched Fabric, Arbitrated Loop, or FICON.
7. Create host connections for open systems: Define open systems hosts and their Fibre Channel (FC) host bus adapter (HBA) worldwide port names.
8. Create volume groups for open systems: Create volume groups where FB volumes are assigned and select the host attachments for the volume groups.
9. Create open systems volumes: Create striped open systems FB volumes and assign them to one or more volume groups.
10. Create System z logical control units (LCUs): Define the LCU type and other attributes, such as subsystem identifiers (SSIDs).
11. Create striped System z volumes: Create System z CKD base volumes and Parallel Access Volume (PAV) aliases for them.

The actual configuration can be done by using the DS Storage Manager GUI, DS Command-Line Interface, or a combination of both. A novice user might prefer to use the GUI, whereas a more experienced user might use the CLI, particularly for the more repetitive tasks, such as creating large numbers of volumes. For more information about how to perform the specific tasks, see the following chapters:
– Chapter 10, “IBM System Storage DS8000 features and license keys” on page 271
– Chapter 13, “Configuration by using the DS Storage Manager GUI” on page 323
– Chapter 14, “Configuration with the DS Command-Line Interface” on page 385
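As a companion to this task list, the following DS CLI sketch shows the open systems (FB) part of the flow. All resource IDs, names, and capacities are illustrative placeholders; for the exact syntax of each command, see Chapter 14, “Configuration with the DS Command-Line Interface” on page 385:

dscli> mkarray -raidtype 5 -arsite S1                 (create a RAID 5 array on array site S1)
dscli> mkrank -array A0 -stgtype fb                   (assign the array to a fixed block rank)
dscli> mkextpool -rankgrp 0 -stgtype fb FB_pool_0     (define an Extent Pool on server 0)
dscli> chrank -extpool P0 R0                          (assign the rank to the Extent Pool)
dscli> setioport -topology scsi-fcp I0000             (configure an I/O port for FCP attachment)
dscli> mkfbvol -extpool P0 -cap 100 -name vol_#h 1000-1003   (create four 100 GB FB volumes)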


General guidelines for configuring storage
Remember the following general guidelines when you are configuring storage on the DS8870:
– To achieve a well-balanced load distribution, use at least two Extent Pools, each assigned to one DS8870 internal server (Extent Pool 0 and Extent Pool 1). If CKD and FB volumes are required, use at least four Extent Pools.
– Address groups, of 16 LCUs or logical subsystems (LSSs) each, are all for CKD or all for FB.
– Volumes of one LCU or LSS can be allocated on multiple Extent Pools.
– An Extent Pool cannot contain all three RAID 5, RAID 6, and RAID 10 ranks.
– Each Extent Pool pair should have the same characteristics in terms of RAID type, RPM, and DDM size.
– Ranks in one Extent Pool should belong to separate Device Adapters.
– Assign multiple ranks to Extent Pools to take advantage of Storage Pool Striping.
– CKD 3380 and 3390 type volumes can be intermixed in an LCU and an Extent Pool.
– FB guidelines:
  • Create a volume group for each server, unless LUN sharing is required.
  • Place all ports for a single server in one volume group.
  • If LUN sharing is required, the following options are available:
    - Use separate volumes for servers and place LUNs in multiple volume groups.
    - Place servers (clusters) and volumes to be shared in a single volume group.

I/O ports guidelines:
– Distribute host connections of each type (FICON and FCP) evenly across the I/O enclosures.
– Typically, the access any parameter is used for I/O ports, with access to ports that are controlled by SAN zoning.



Chapter 10. IBM System Storage DS8000 features and license keys
The activation of licensed functions is described in this chapter. This chapter covers the following topics:
– IBM System Storage DS8870 licensed functions
– Activation of licensed functions
– Licensed scope considerations


10.1 IBM System Storage DS8000 licensed functions
Many of the functions of the DS8000 that we described so far are optional licensed functions that must be enabled for use. The licensed functions are enabled through a 242x licensed function indicator feature, plus a 239x licensed function authorization feature number, in the following manner: The licensed functions for DS8870 are enabled through a pair of 242x-961 licensed function indicator feature numbers (FC 07xx and FC 7xxx), plus a Licensed Function Authorization (239x-LFA) feature number (FC 7xxx). These functions and numbers are listed in Table 10-1.
Table 10-1 DS8000 licensed functions
Licensed function for DS8000          IBM 242x indicator   IBM 239x function authorization
with Enterprise Choice warranty       feature numbers      model and feature numbers
Operating Environment License         0700 and 70xx        239x Model LFA, 703x/705x
FICON Attachment                      0703 and 7091        239x Model LFA, 7091
Thin Provisioning                     0707 and 7071        239x Model LFA, 7071
Database Protection                   0708 and 7080        239x Model LFA, 7080
High Performance FICON                0709 and 7092        239x Model LFA, 7092
Easy Tier                             0713 and 7083        239x Model LFA, 7083
z/OS Distributed Data Backup          0714 and 7094        239x Model LFA, 7094
FlashCopy                             0720 and 72xx        239x Model LFA, 725x–726x
Space Efficient FlashCopy             0730 and 73xx        239x Model LFA, 735x–736x
Metro/Global Mirror                   0742 and 74xx        239x Model LFA, 748x–749x
Metro Mirror                          0744 and 75xx        239x Model LFA, 750x–751x
Global Mirror                         0746 and 75xx        239x Model LFA, 752x–753x
z/OS Global Mirror                    0760 and 76xx        239x Model LFA, 765x–766x
z/OS Global Mirror Incremental Resync 0763 and 76xx        239x Model LFA, 768x–769x
Parallel Access Volumes               0780 and 78xx        239x Model LFA, 782x–783x
HyperPAV                              0782 and 7899        239x Model LFA, 7899
I/O Priority Manager                  0784 and 784x        239x Model LFA, 784x–785x

The DS8000 provides Enterprise Choice warranty options that are associated with a specific machine type. The x in 242x designates the machine type according to its warranty period, where x can be 1, 2, 3, or 4. For example, a 2424-961 machine type designates a DS8870 storage system with a four-year warranty period. The x in 239x can be 6, 7, 8, or 9, according to the associated 242x base unit model: the 2396 function authorizations apply to 2421 base units, the 2397 function authorizations apply to 2422 base units, and so on. For example, a 2399-LFA machine type designates a DS8000 Licensed Function Authorization for a 2424 machine with a four-year warranty period.


The 242x licensed function indicator feature numbers enable the technical activation of the function, subject to a feature activation code that is made available by IBM and applied by the client. The 239x licensed function authorization feature numbers establish the extent of authorization for that function on the 242x machine for which it was acquired.

10.1.1 Licensing
Some of the orderable feature codes must be activated through the installation of a corresponding license key. These codes are listed in Table 10-1 on page 272. Some features can be configured directly for the client by the IBM marketing representative during the ordering process.

Feature codes that work with a license key
The following features also are available:
– Metro Mirror (MM) is our synchronous way to perform remote replication.
– Global Mirror (GM) enables asynchronous replication, which is useful for larger distances and lower bandwidth. For more information about Copy Services, see Chapter 6, “IBM System Storage DS8000 Copy Services overview” on page 141.
– Metro/Global Mirror (MGM) enables cascaded three-site replications, which combine synchronous mirroring to an intermediate site with asynchronous mirroring from that intermediate site to a third site at a large distance. Combinations with other Copy Services features are possible, and sometimes even needed. Usually, the three-site MGM installation also requires an MM license on the A site with the MGM license (and even a GM license, if after a B breakdown you want to resynchronize between A and C). At the B site, on top of the MGM license, you also need the MM and GM licenses. At the C site, you then need licenses for MGM, GM, and FlashCopy.
– There are two possibilities for FlashCopy:
  • The standard FlashCopy Point-in-Time Copy (PTC) license FC72xx, which works with thick (standard) volumes or thin provisioned Extent Space Efficient (ESE) volumes, if you also have the Thin Provisioning license FC7071.
  • The FlashCopy SE license FC73xx, with which you make FlashCopies with Track Space Efficient (TSE) target volumes. TSE volumes are thin volumes with a fine granularity, which saves space. However, they are supported only as FlashCopy targets and are not meant for direct server attachments. Because writes are slower on the small granularity of TSE volumes, the sizing of FlashCopy SE target repositories must be done with sufficient care under performance and capacity aspects. TSE volumes are not handled by Easy Tier rebalancing algorithms.
– The more modern way to perform Thin Provisioning is to use ESE volumes, which require the Thin Provisioning license FC7071 that can be combined with the classic PTC FlashCopy license, if needed. ESE thin volumes also can go into remote mirroring relations. Because of their larger granularity, ESE volumes are handled with the same good performance as standard (thick) volumes, and they are managed by Easy Tier algorithms. However, ESE thin volumes are not available for System z CKD clients or for IBM i.
– The z/OS Global Mirror (zGM) license, which is also known as eXtended Remote Copy (XRC), enables z/OS clients to copy data by using System Data Mover (SDM). This copy is an asynchronous copy. For more information, see 6.3.8, “z/OS Global Mirror” on page 160.


– For System z clients, Parallel Access Volumes (PAV) allows multiple concurrent I/O streams to the same CKD volume. HyperPAV also reassigns the alias addresses dynamically to the base addresses of the volumes, based on the needs of a dynamically changing workload. Both features result in such large performance gains that they have been configured as a de facto standard for mainframe clients for many years, much like FICON, which is required for z/OS.
– High Performance FICON (zHPF, FC#7092) is a feature that uses a protocol extension for FICON that allows the data for multiple commands to be grouped in a single data transfer. This grouping increases the channel throughput for many workload profiles because of the decreased overhead. It works on newer zEnterprise systems, such as zEC12, z196, z114, or z10, and is recommended for these systems because of the performance gains it offers.
– z/OS Distributed Data Backup (zDDB) is a feature for clients with a mix of mainframe and distributed workloads to use their powerful System z host facilities to back up and restore open systems data. For more information, see IBM System Storage DS8000: z/OS Distributed Data Backup, REDP-4701.
– Easy Tier is available in the following modes:
  • Automatic mode, which works on sub-volume level (extent level) and allows for auto-tiering in hybrid extent pools. The most-accessed volume parts go to the upper tiers. In single-tier pools, it allows auto-rebalancing if switched on.
  • Manual dynamic volume relocation mode, which works on the level of full volumes and allows volumes to be relocated or restriped to other places in the DS8000 online. It also allows ranks to be moved out of pools.
  Because this feature is free of charge, it is usually configured on all DS8000s. For more information, see IBM System Storage DS8000 Easy Tier, REDP-4667.
– I/O Priority Manager is the Quality-of-Service feature for the System Storage DS8000 series. When larger extent pools are used that include many servers that are competing for the same rank and device adapter resources, clients can define Performance Groups of higher-priority and lower-priority servers and volumes. In overload conditions, the I/O Priority Manager throttles the lower-priority Performance Groups to maintain service on the higher-priority groups. For more information, see DS8000 I/O Priority Manager, REDP-4760.
– IBM Database Protection, FC#7080: With this feature, you receive the highest level of protection for Oracle databases through the use of more end-to-end checks for detecting data corruption on the way through the different SAN and storage hardware layers. This feature complies with the Oracle Hardware-Assisted Resilient Data (HARD) initiative. For more information about this feature, see IBM Database Protection User’s Guide, GC27-2133-02, which is available at this website: http://www.ibm.com/support/docview.wss?uid=ssg1S7003786


Feature code ordering options without the need to install a license key
The following ordering options of the DS8870 do not require the client to install a license key:
– Earthquake Resistance Kit, FC1906: The Earthquake Resistance Kit is an optional seismic kit for stabilizing the storage unit racks so that the racks comply with IBM earthquake resistance standards. It includes cross-braces on the front and rear of the racks, and the racks are secured to the floor. These stabilizing features limit potential damage to critical DS8000 machine components and help to prevent human injury.
– Overhead cabling: For more information about FC1400 (top-exit bracket) and FC1101 (ladder), see 8.2.3, “Overhead cabling features” on page 230. One ladder per site is sufficient.
– Shipping Weight Reduction, FC0200: If your site features delivery weight constraints, IBM offers this option that limits the maximum shipping weight of the individually packed components to 909 kg (2000 lb). Because this feature increases installation time, it should be ordered only when required.
– Extended Power Line Disturbance Feature, FC1055: This feature extends the available uptime in case both power cords lose external power, as described in “Power Line Disturbance feature” on page 233.
– Tivoli Key Lifecycle Manager server, FC1760: This feature is used for Full Disk Encryption (FDE). It consists of System x server hardware with SUSE Linux, which runs the Tivoli Key Lifecycle Manager software to manage the encryption keys.
– Epic (FC0964) and VMware VAAI (FC0965): For clients who want to use the Epic healthcare software or VMware VAAI, these features should be selected by the IBM marketing representative. For the VAAI XCOPY/Clone primitive, the PTC (FlashCopy) license also is needed.

For more information about these features, see IBM System Storage DS8870 Introduction and Planning Guide, GC27-4209-00.

Encryption
If encryption is wanted, Feature Code FC1750 should be included in the order. This feature enables the customer to download the function authorization from the DSFA website (see 10.2, “Activating licensed functions” on page 277) and to elect to turn on encryption. Feature Code #1754 is used to disable encryption. If encryption is wanted, it should be enabled at first use. For more information about disk encryption, see IBM System Storage DS8000 Disk Encryption, REDP-4500.


10.1.2 Licensing: cost structure
IBM offers value-based licensing for the Operating Environment License. It is priced based on the disk drive performance, capacity, speed, and other characteristics, which provides more flexible and optimal price and performance configurations. As shown in Table 10-2, each feature indicates a number of value units.
Table 10-2 Operating Environment License (OEL): value unit indicators
Feature number  Description
7050            OEL – inactive indicator
7051            OEL – 1 value unit indicator
7052            OEL – 5 value unit indicator
7053            OEL – 10 value unit indicator
7054            OEL – 25 value unit indicator
7055            OEL – 50 value unit indicator
7060            OEL – 100 value unit indicator
7065            OEL – 200 value unit indicator

These features are required in addition to the per-TB OEL features (#703x–704x). For each disk drive set, the corresponding number of value units must be configured, as shown in Table 10-3.
Table 10-3 DS8870 value unit requirements based on drive size, type, and speed
Drive set       Drive size  Drive type       Drive speed  Encryption  Value units
feature number                                            capable     required
5108            146 GB      SAS              15K RPM      Yes         4.8
5308            300 GB      SAS              15K RPM      Yes         6.8
5708            600 GB      SAS              10K RPM      Yes         11.5
5758            900 GB      SAS              10K RPM      Yes         16.0
6158            400 GB      SSD              N/A          Yes         36.4
6156            400 GB      SSD half set     N/A          Yes         18.2
5858            3 TB        NL SAS half set  7.2K RPM     Yes         13.5
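As a worked example of the arithmetic (the exact ordering and rounding rules are defined by IBM; this only illustrates how the table is used): an upgrade with two drive sets of 300 GB 15K RPM disks (feature 5308) requires 2 × 6.8 = 13.6 value units, whereas a single drive set of 400 GB SSDs (feature 6158) alone requires 36.4 value units. The required total is then covered by a combination of the value unit indicator features that are listed in Table 10-2.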

Important: Check with an IBM representative or consult the IBM website for an up-to-date list of available drive types.

The HyperPAV license is a flat-fee, add-on license that requires the PAV license to be installed. High Performance FICON also is a flat-fee license. Easy Tier is a free license feature; therefore, it is usually configured by default. The Database Protection and the IBM z/OS Distributed Data Backup features also are available at no charge.


The license for Space Efficient FlashCopy does not require the FlashCopy (PTC) license. FlashCopy SE performs FlashCopies with Track Space Efficient (TSE) target volumes. As with the ordinary FlashCopy, the FlashCopy SE is licensed in tiers by gross amount of TB that is installed. FlashCopy (PTC) and FlashCopy SE can be complementary licenses. When FlashCopies to standard target volumes are done, also use the PTC license. If you want to work with ESE thin volumes, the Thin Provisioning license is needed with PTC. The MM and GM licenses also can be complementary features.

Tip: For more information about the features and the considerations that apply when DS8000 licensed functions are ordered, see the following announcement letters:
– IBM System Storage DS8870 (IBM 242x)
– IBM System Storage DS8870 (M/T 239x) high performance flagship – Function Authorizations

IBM announcement letters are available at this website: http://www.ibm.com/common/ssi/index.wss
Use the DS8870 keyword as a search criterion in the Contents field.

10.2 Activating licensed functions

Activating the license keys of the DS8000 can be done after the IBM service representative completes the storage complex installation. Based on your 239x licensed function order, you must obtain the necessary keys from the IBM Disk Storage Feature Activation (DSFA) website at: http://www.ibm.com/storage/dsfa

You can activate all license keys at the same time (for example, on initial activation of the storage unit) or they can be activated individually (for example, other ordered keys).

Before you connect to the IBM DSFA website to obtain your feature activation codes, ensure that you have the following items:
– The IBM License Function Authorization documents. If you are activating codes for a new storage unit, these documents are included in the shipment of the storage unit. If you are activating codes for an existing storage unit, IBM sends the documents to you in an envelope.
– A USB memory device, which can be used for downloading your activation codes if you cannot access the DS Storage Manager from the system that you are using to access the DSFA website. Instead of downloading the activation codes in softcopy format, you can print the activation codes and manually enter them by using the DS Storage Manager GUI or via the DS Command-Line Interface (DS CLI). However, this process is slow and error prone because the activation keys are 32-character long strings.

10.2.1 Obtaining DS8000 machine information

To obtain license activation keys from the DSFA website, you need to know the serial number and machine signature of your DS8000 unit. You can obtain the required information by using the DS Storage Manager GUI or DS CLI. These options are described next.

DS Storage Manager GUI
Complete the following steps to obtain the required information by using the DS Storage Manager GUI:
1. Start the DS Storage Manager application. Log in by using a user ID with administrator access. If you are accessing the machine for the first time, contact your IBM service representative for the user ID and password. After a successful login, the DS8000 Storage Manager Overview window opens, as shown in Figure 10-1.

Figure 10-1 DS8000 Storage Manager GUI: Overview window

2. Move your cursor to the left top icon so that a pop-up window opens. Select System Status.

3. Click the Serial Number under the Storage Image header, then click Action. Move your cursor to the Storage Image and select Add Activation Key, as shown in Figure 10-2.

Figure 10-2 DS8000 Storage Manager: Add Activation Key

The Add Activation Key window shows the Serial Number and the Machine Signature information about your DS8000 Storage Image, as shown in Figure 10-3.

Figure 10-3 DS8000 Machine Signature & Serial Number

Gather the following information about your storage unit:
– The Machine Type – Model Number – Serial Number (MTMS), which is a string that contains the machine type, model number, and serial number. The machine type is 242x and the machine model is 961. The last seven characters of the string are the machine's serial number (XYABCDE). The serial number always ends with 0 (zero).
– The machine signature, which is found in the Machine signature field and uses the following format: ABCD-EF12-3456-7890.

DS CLI
To obtain the required information by using the DS CLI, log on to the DS CLI and issue the lssi and showsi commands, as shown in Example 10-1.

Example 10-1 Obtain DS8000 information by using DS CLI
dscli> lssi
Date/Time: 17 September 2012 15:55:22 CEST IBM DSCLI Version: 7.7.0.566 DS: -
Name         ID               Storage Unit     Model WWNN             State
=============================================================================
DS8870_ATS02 IBM.2107-75ZA571 IBM.2107-75ZA570 961   5005076303FFD5AA Online

dscli> showsi ibm.2107-75za571
Date/Time: 17 September 2012 15:56:38 CEST IBM DSCLI Version: 7.7.0.566 DS: ibm.2107-75za571
Name             DS8870_ATS02
desc             Mako
ID               IBM.2107-75ZA571
Storage Unit     IBM.2107-75ZA570
Model            961
WWNN             5005076303FFD4D4
Signature        3a19-bf2f-7a16-41f7
State            Online
ESSNet           Enabled
Volume Group     V0
os400Serial      5AA
NVS Memory       8.0 GB
Cache Memory     233.7 GB
Processor Memory 253.7 GB
MTS              IBM.2421-75ZA570
numegsupported   1
ETAutoMode       tiered
ETMonitor        all
IOPMmode         Disabled

Note: The showsi command requires the SFI serial number, which is identical to the storage unit serial number, except that it ends with 1 instead of 0 (zero).

Gather the following information about your storage unit:
– The Machine Type – Serial Number (MTS), which is a string that contains the machine type and the serial number. The machine type is 242x and the last seven characters of the string are the machine's serial number (XYABCDE), which always ends with 0 (zero).
– The model, which is always 961.
– The machine signature, which is found in the Machine signature field and uses the following format: ABCD-EF12-3456-7890. It is entered in the IBM DSFA website to retrieve the activation codes.

Use Table 10-4 to document this information.

Table 10-4 DS8000 machine information
Property                  Your storage unit’s information
Machine type
Machine’s serial number
Machine signature

10.2.2 Obtaining activation codes

Complete the following steps to obtain the activation codes:

Important: A DS8800 is shown in the following figures; however, the steps are identical for all models of the DS8000 family.

1. Connect to the DSFA website at the following address: http://www.ibm.com/storage/dsfa

Figure 10-4 IBM DSFA website

2. Click IBM System Storage DS8000 series, as shown in Figure 10-4. The Select DS8000 series machine window opens. Select the appropriate 242x Machine Type, as shown in Figure 10-5.

Figure 10-5 DS8000 DSFA machine information entry window

3. Enter the machine information that was collected in Table 10-4 on page 281 and click Submit. The View machine summary window opens, as shown in Figure 10-6.

Figure 10-6 DSFA View machine summary window

The View machine summary window shows the total purchased licenses and how many of them are currently assigned. When licenses are assigned for the first time, the Assigned field shows 0.0 TB. The example in Figure 10-6 shows a storage unit where all licenses are assigned.

4. Click Manage activations. The Manage activations window opens, as shown in Figure 10-7. For each license type and storage image, enter the following information that is assigned to the storage image:
– License scope: fixed block data (FB), count key data (CKD), or All
– Capacity value (in TB) to assign to the storage image

The capacity values are expressed in decimal terabytes with 0.1-TB increments. The sum of the storage image capacity values for a license cannot exceed the total license value.

Figure 10-7 DSFA Manage activations window

5. After the values are entered, click Submit.
6. Select Retrieve activation codes. The Retrieve activation codes window opens, which shows the license activation codes for the storage image, as shown in Figure 10-8. Print the activation codes or click Download to save the activation codes in an XML file that you can import into the DS8000.

Figure 10-8 DSFA Retrieve activation codes window

Important: In most situations, the DSFA application can locate your 239x licensed function authorization record when you enter the DS8000 (242x) serial number and signature. However, if the 239x licensed function authorization record is not attached to the 242x record, you must assign it to the 242x record by using the Assign function authorization link on the DSFA application. In this case, you need the 239x serial number (which you can find on the License Function Authorization document).

10.2.3 Applying activation codes by using the GUI

Use this process to apply the activation codes on your DS8000 storage images by using the DS Storage Manager GUI. After the codes are applied, you can begin to configure storage on a storage image.

Important: The initial enablement of any optional DS8000 licensed function is a concurrent activity (assuming the appropriate level of microcode is installed on the machine for the function). The following activation activities are disruptive and require an initial machine load (IML) or reboot of the affected image:
– Removal of a DS8000 licensed function to deactivate the function.
– A lateral change or reduction in the license scope. A lateral change is defined as changing the license scope from FB to CKD or from CKD to FB. A reduction is defined as changing the license scope from all physical capacity (ALL) to only FB or only CKD capacity.

Attention: Before you begin this task, you must resolve any current DS8000 problems. Contact IBM support for assistance in resolving these problems.

The easiest way to apply the feature activation codes is to download the activation codes from the IBM DSFA website to your local computer and import the file into the DS Storage Manager. If you can access the DS Storage Manager from the same computer that you use to access the DSFA website, you can copy the activation codes from the DSFA window and paste them into the DS Storage Manager window. The third option is to manually enter the activation codes in the DS Storage Manager from a printed copy of the codes.

Complete the following steps to apply the activation codes (this method applies the activation codes by using your local computer or a USB drive):
1. Click Action under Activation Keys Information and select Import Key File, as shown in Figure 10-9.

Figure 10-9 DS8000 Storage Manager GUI: select Import Key File

2. Click Browse and locate the downloaded key file on your computer, as shown in Figure 10-10. After the file is selected, click Next.

Figure 10-10 Apply Activation Codes by importing the key from the file

3. The key name is shown in the Confirmation window, as shown in Figure 10-11. Click Finish to complete the new key activation procedure.

Figure 10-11 Apply Activation Codes: Confirmation window

4. Your license is now listed in the table, as shown in Figure 10-12. In our example, there is one OEL license active. Click OK to exit the Apply Activation Codes wizard.

Figure 10-12 Apply Activation Codes window

Another way to enter the activation keys is to copy the activation keys from the DSFA window and paste them in the Storage Manager window, as shown in Figure 10-13. Use Enter or Spacebar to separate the keys. Click Finish to complete the new key activation procedure.

Figure 10-13 Enter License Keys manually

A third way to enter the activation keys is to enter the keys manually from a printed copy of the codes.

5. The activation codes are displayed, as shown in Figure 10-14.

Figure 10-14 Activation codes that are applied

10.2.4 Applying activation codes by using the DS CLI

The license keys also can be activated by using the DS CLI. This option is available only if the machine Operating Environment License (OEL) was activated and you have a console with a compatible DS CLI program installed.

Complete the following steps to apply activation codes by using the DS CLI:
1. Obtain your license activation codes from the IBM DSFA website, as described in 10.2.2, “Obtaining activation codes” on page 282.
2. Use the showsi command to display the DS8000 machine signature, as shown in Example 10-2.

Example 10-2 DS CLI showsi command
dscli> showsi ibm.2107-75za571
Date/Time: 17 September 2012 15:56:38 CEST IBM DSCLI Version: 7.7.0.566 DS: ibm.2107-75za571
Name             DS8870_ATS02
desc             Mako
ID               IBM.2107-75ZA571
Storage Unit     IBM.2107-75ZA570
Model            961
WWNN             5005076303FFD4D4
Signature        3a19-bf2f-7a16-41f7
State            Online
ESSNet           Enabled
Volume Group     V0
os400Serial      5AA
NVS Memory       8.0 GB
Cache Memory     233.7 GB
Processor Memory 253.7 GB
MTS              IBM.2421-75ZA570
numegsupported   1
ETAutoMode       tiered
ETMonitor        all
IOPMmode         Disabled

3. Enter an applykey command at the following dscli command prompt. The -file parameter specifies the key file. The second parameter specifies the storage image:
dscli> applykey -file c:\2421_75ZA570.xml IBM.2107-75ZA571

4. Verify that the keys were activated for your storage unit by issuing the DS CLI lskey command, as shown in Example 10-3.

Example 10-3 Using lskey to list installed licenses
dscli> lskey ibm.2107-75za571
Date/Time: 04 October 2012 15:47:38 CEST IBM DSCLI Version: 7.7.0.580 DS: ibm.2107-75za571
Activation Key                              Authorization Level (TB) Scope
==========================================================================
Encryption Authorization                    on                       All
Global mirror (GM)                          170                      All
High Performance FICON for System z (zHPF)  on                       CKD
I/O Priority Manager                        170                      All
IBM FlashCopy SE                            170                      All
IBM HyperPAV                                on                       CKD
IBM System Storage DS8000 Thin Provisioning on                       All
IBM System Storage Easy Tier                on                       All
IBM database protection                     on                       FB
IBM z/OS Distributed Data Backup            on                       FB
Metro/Global mirror (MGM)                   170                      All
Metro mirror (MM)                           170                      All
Operating environment (OEL)                 170                      All
Parallel access volumes (PAV)               170                      CKD
Point in time copy (PTC)                    170                      All
RMZ Resync                                  170                      CKD
Remote mirror for z/OS (RMZ)                170                      CKD

For more information about the DS CLI, see IBM System Storage DS: Command-Line Interface User’s Guide for DS8000 series, GC53-1127.

License scope: Changing the license scope of the OEL license is a disruptive action that requires a power cycle of the machine.

10.3 Licensed scope considerations

For the PTC function and the Remote Mirror and Copy functions, you can set the scope of these functions to be FB, CKD, or All. You must decide what scope to set, as shown in Figure 10-7 on page 284. In that example, the Storage Facility Image includes 65 TB of PTC (FlashCopy), and the user decided to set the scope to All. If the scope was set to FB, you cannot use FlashCopy with any CKD volumes that are configured later. However, it is possible to return to the DSFA website at any time and change the scope from CKD or FB to All, or from All to CKD or FB. In every case, a new activation code is generated, which you can download and apply.

10.3.1 Why you have a choice

Imagine a simple scenario in which a storage system has 20 TB of capacity. Of this capacity, 15 TB are configured as FB and 5 TB are configured as CKD. If we want to use PTC only for the CKD volumes, we can purchase only 5 TB of PTC and set the scope of the PTC activation code to CKD. There is no need to buy a new PTC license if you do not need PTC for CKD but would like to use it for FB.

When you decide which scope to set, there are several scenarios to consider. Use Table 10-5 to guide you in your choice. This table applies to PTC and Remote Mirror and Copy functions.

Table 10-5 Deciding which scope to use
Scenario  PTC or Remote Mirror and Copy function usage consideration       Suggested scope setting
1         This function is only used by open systems hosts.                Select FB.
2         This function is only used by System z hosts.                    Select CKD.
3         This function is used by open systems and System z hosts.       Select All.
4         This function is only needed by open systems hosts, but we       Select FB and change to scope All if and when
          might use it for System z at some point.                         the System z requirement occurs.
5         This function is only needed by System z hosts, but we might     Select CKD and change to scope All if and when
          use it for open systems hosts.                                   the open systems requirement occurs.
6         This function is set to All.                                     Leave the scope set to All. Changing the scope
                                                                           to CKD or FB requires a disruptive outage.

Any scenario that changes from FB or CKD to All does not require an outage. If you choose to change from All to CKD or FB, you must have a disruptive outage. If you are certain that your machine will be used only for one storage type (for example, only CKD or only FB), you also can safely use the All scope.

10.3.2 Using a feature for which you are not licensed

In Example 10-4, we have a storage system where the scope of the PTC license is set to FB. This setting means we cannot use PTC to create CKD FlashCopies. However, we can create CKD volumes because the OEL key scope is All. When we try to create a CKD FlashCopy, the command fails.

Example 10-4 Trying to use a feature for which you are not licensed
dscli> lskey IBM.2107-7520391
Date/Time: 05 October 2012 14:19:17 CEST IBM DSCLI Version: 7.7.0.220 DS: IBM.2107-7520391
Activation Key              Authorization Level (TB) Scope
============================================================
Metro mirror (MM)           5                        All
Operating environment (OEL) 5                        All
Point in time copy (PTC)    5                        FB
The FlashCopy scope is currently set to FB.

dscli> lsckdvol
Date/Time: 05 October 2012 14:19:17 CEST IBM DSCLI Version: 7.7.0.220 DS: IBM.2107-7520391
Name ID   accstate datastate configstate deviceMTM voltype  orgbvols extpool cap (cyl)
=========================================================================================
     0000 Online   Normal    Normal      3390-3    CKD Base          P2      3339
     0001 Online   Normal    Normal      3390-3    CKD Base          P2      3339

dscli> mkflash 0000:0001
We are not able to create CKD FlashCopies.
Date/Time: 05 October 2012 14:20:17 CEST IBM DSCLI Version: 7.7.0.220 DS: IBM.2107-7520391
CMUN03035E mkflash: 0000:0001: Copy Services operation failure: feature not installed

10.3.3 Changing the scope to All

In Example 10-5, we logged on to DSFA and changed the scope for the PTC license to All. We then apply this new activation code. We can now perform a CKD FlashCopy.

Example 10-5 Changing the scope from FB to All
dscli> lskey IBM.2107-7520391
Date/Time: 05 October 2012 14:19:17 CEST IBM DSCLI Version: 7.7.0.220 DS: IBM.2107-7520391
Activation Key              Authorization Level (TB) Scope
============================================================
Metro mirror (MM)           5                        All
Operating environment (OEL) 5                        All
Point in time copy (PTC)    5                        FB
The FlashCopy scope is currently set to FB.

dscli> applykey -key 1234-5678-9FEF-C232-51A7-429C-1234-5678 IBM.2107-7520391
Date/Time: 05 October 2012 14:19:17 CEST IBM DSCLI Version: 7.7.0.220 DS: IBM.2107-7520391
CMUC00199I applykey: Licensed Machine Code successfully applied to storage image IBM.2107-7520391.

dscli> lskey IBM.2107-7520391
Date/Time: 05 October 2012 14:19:17 CEST IBM DSCLI Version: 7.7.0.220 DS: IBM.2107-7520391
Activation Key              Authorization Level (TB) Scope
============================================================
Metro mirror (MM)           5                        All
Operating environment (OEL) 5                        All
Point in time copy (PTC)    5                        All
The FlashCopy scope is now set to All.

dscli> lsckdvol
Date/Time: 05 October 2012 15:51:53 CEST IBM DSCLI Version: 7.7.0.220 DS: IBM.2107-7520391
Name ID   accstate datastate configstate deviceMTM voltype  orgbvols extpool cap (cyl)
=========================================================================================
     0000 Online   Normal    Normal      3390-3    CKD Base          P2      3339
     0001 Online   Normal    Normal      3390-3    CKD Base          P2      3339

dscli> mkflash 0000:0001
We are now able to create CKD FlashCopies.
Date/Time: 05 October 2012 16:09:17 CEST IBM DSCLI Version: 7.7.0.220 DS: IBM.2107-7520391
CMUC00137I mkflash: FlashCopy pair 0000:0001 successfully created.

10.3.4 Changing the scope from All to FB

In Example 10-6, we change the scope to FB, so we log on to the DSFA website and create an activation code. We apply the code but discover that because this change is effectively a downward change (decreasing the scope), it does not apply until we have a disruptive outage on the DS8000.

Example 10-6 Changing the scope from All to FB
dscli> lskey IBM.2107-7520391
Date/Time: 05 October 2012 14:19:17 CEST IBM DSCLI Version: 7.7.0.220 DS: IBM.2107-7520391
Activation Key              Authorization Level (TB) Scope
============================================================
Metro mirror (MM)           5                        All
Operating environment (OEL) 5                        All
Point in time copy (PTC)    5                        All
The FlashCopy scope is currently set to All.

dscli> applykey -key ABCD-EFAB-EF9E-6B30-51A7-429C-1234-5678 IBM.2107-7520391
Date/Time: 05 October 2012 14:19:17 CEST IBM DSCLI Version: 7.7.0.220 DS: IBM.2107-7520391
CMUC00199I applykey: Licensed Machine Code successfully applied to storage image IBM.2107-7520391.

dscli> lskey IBM.2107-7520391
Date/Time: 05 October 2012 14:19:17 CEST IBM DSCLI Version: 7.7.0.220 DS: IBM.2107-7520391
Activation Key              Authorization Level (TB) Scope
============================================================
Metro mirror (MM)           5                        All
Operating environment (OEL) 5                        All
Point in time copy (PTC)    5                        FB
The FlashCopy scope is now set to FB.

dscli> lsckdvol
Date/Time: 05 October 2012 14:19:17 CEST IBM DSCLI Version: 7.7.0.220 DS: IBM.2107-7520391
Name ID   accstate datastate configstate deviceMTM voltype  orgbvols extpool cap (cyl)
=========================================================================================
     0000 Online   Normal    Normal      3390-3    CKD Base          P2      3339
     0001 Online   Normal    Normal      3390-3    CKD Base          P2      3339

dscli> mkflash 0000:0001
Date/Time: 05 October 2012 14:19:17 CEST IBM DSCLI Version: 7.7.0.220 DS: IBM.2107-7520391
CMUC00137I mkflash: FlashCopy pair 0000:0001 successfully created.

In this scenario, we made a downward license feature key change. The change does not take effect until a disruptive outage, so we must schedule an outage of the storage image. We should make the downward license key change only immediately before this outage is taken.

Consideration: Making a downward license change and then not immediately performing a reboot of the storage image is not supported. Do not allow your DS8000 to be in a position where the applied key is different from the reported key.

10.3.5 Applying an insufficient license feature key

In Example 10-7, we have a scenario in which a DS8000 has a 5-TB OEL, FlashCopy (PTC), and Metro Mirror license. We increased the storage capacity and, as a result, increased the license key for OEL and MM. However, we forgot to increase the license key for FlashCopy (PTC). In Example 10-7, we can see the FlashCopy license is only 5 TB. However, we are still able to create FlashCopies. This configuration is still valid because the configured ranks on the machine total less than 5 TB of storage.

Example 10-7 Insufficient FlashCopy license
dscli> lskey IBM.2107-7520391
Date/Time: 05 October 2012 14:19:17 CEST IBM DSCLI Version: 7.7.0.220 DS: IBM.2107-7520391
Activation Key              Authorization Level (TB) Scope
============================================================
Metro mirror (MM)           10                       All
Operating environment (OEL) 10                       All
Point in time copy (PTC)    5                        All

dscli> mkflash 1800:1801
Date/Time: 05 October 2012 17:46:14 CEST IBM DSCLI Version: 7.7.0.220 DS: IBM.2107-7520391
CMUC00137I mkflash: FlashCopy pair 1800:1801 successfully created.

In Example 10-8, we try to create a rank that brings the total rank capacity above 5 TB. This command fails.

Example 10-8 Creating a rank when we are exceeding a license key
dscli> mkrank -array A1 -stgtype CKD
Date/Time: 05 October 2012 14:19:17 CEST IBM DSCLI Version: 7.7.0.220 DS: IBM.2107-7520391
CMUN02403E mkrank: Unable to create rank: licensed storage amount has been exceeded

Important: To configure the additional ranks, we must first increase the license key capacity of every installed license. In this example, these licenses include the FlashCopy license.

10.3.6 Calculating how much capacity is used for CKD or FB

To calculate how much disk space is used for CKD or FB storage, we must combine the output of two commands. The following simple rules apply:
– License key values are decimal numbers. So, 5 TB of license is 5000 GB.
– License calculations include the capacity of all DDMs in each array site. Each array site is eight DDMs.

To make the calculation, we use the lsrank command to determine which array the rank contains, and whether this rank is used for FB or CKD storage. Then, we use the lsarray command to obtain the disk size used by each array. License calculations use the disk size number that is shown by the lsarray command. We multiply the disk size (146, 300, 600, or 900) by eight (for the eight DDMs in each array site).

In Example 10-9, the lsrank command tells us that rank R0 uses array A0 for CKD storage. The lsarray command tells us that array A0 uses 300-GB DDMs, so we multiply 300 (the DDM size) by 8, giving us 300 × 8 = 2400 GB, which means that we are using 2400 GB for CKD storage. Rank R4 in Example 10-9 is based on array A6. Array A6 uses 146-GB DDMs, so we multiply 146 by 8, giving us 146 × 8 = 1168 GB, which means that we are using 1168 GB for FB storage.

Example 10-9 Displaying array site and rank usage
dscli> lsrank
Date/Time: 05 October 2012 14:19:17 CEST IBM DSCLI Version: 7.7.0.220 DS: IBM.2107-75ABTV1
ID Group State  datastate Array RAIDtype extpoolID stgtype
==========================================================
R0 0     Normal Normal    A0    5        P0        ckd
R4 0     Normal Normal    A6    5        P4        fb

dscli> lsarray
Date/Time: 05 October 2012 14:19:17 CEST IBM DSCLI Version: 7.7.0.220 DS: IBM.2107-75ABTV1
Array State      Data   RAIDtype  arsite Rank DA Pair DDMcap (10^9B)
====================================================================
A0    Assigned   Normal 5 (6+P+S) S1     R0   0       300.0
A1    Unassigned Normal 5 (6+P+S) S2          0       300.0
A2    Unassigned Normal 5 (6+P+S) S3          0       300.0
A3    Unassigned Normal 5 (6+P+S) S4          0       300.0
A4    Unassigned Normal 5 (7+P)   S5          0       146.0
A5    Unassigned Normal 5 (7+P)   S6          0       146.0
A6    Assigned   Normal 5 (7+P)   S7     R4   0       146.0
A7    Assigned   Normal 5 (7+P)   S8     R5   0       146.0

For CKD scope licenses, we use 2400 GB. For FB scope licenses, we use 1168 GB. For licenses with a scope of All, we use 3568 GB. By using the limits that are shown in Example 10-7 on page 296, we are within scope for all licenses.

If we combine Example 10-7 on page 296, Example 10-8 on page 296, and Example 10-9, we can also see why the mkrank command in Example 10-8 on page 296 failed. In Example 10-8 on page 296, we tried to create a rank by using array A1. If we use array A1, we use 300 × 8 = 2400 GB more license keys. In Example 10-7 on page 296, we had only 5 TB of FlashCopy license with a scope of All. This configuration means that for FB scope and All scope licenses, the total configured capacity cannot exceed 5000 GB. Because we already use 3568 GB (2400 GB CKD + 1168 GB FB), the attempt to add 2400 GB fails because the total exceeds the 5-TB license. If we increase the size of the FlashCopy license to 10 TB, we can have 10,000 GB of total configured capacity, so the rank creation succeeds.


Part 3

Storage configuration

In this part of the book, we describe the required storage configuration tasks on an IBM System Storage DS8000. The following topics are included:
– Configuration flow
– Configuring IBM Tivoli Storage Productivity Center 5.1 for DS8000
– Configuration by using the DS Storage Manager GUI
– Configuration with the DS Command-Line Interface


Chapter 11. Configuration flow

This chapter provides a brief overview of the tasks that are required to configure the storage in an IBM System Storage DS8870.

11.1 Configuration worksheets

During the installation of the DS8870, your IBM marketing representative customizes the setup of your storage complex based on information that you provide in a set of customization worksheets. Each time that you install a new storage unit or management console, you must complete the customization worksheets before the installation can be done by the IBM marketing representatives. It is important that the information from the customization worksheets is entered into the machine so that preventive maintenance and high availability of the machine are ensured. You can find the customization worksheets in IBM System Storage DS8870 Introduction and Planning Guide, GC27-4209.

By using the customization worksheets, you specify the initial setup for the following items:
– Company information: IBM marketing representatives use this information to contact you quickly when they need to access your storage complex.
– Management console network settings: You specify the IP address and LAN settings for your management console (MC).
– Remote support (which includes Call Home and remote service settings): You specify whether you want outbound (Call Home) or inbound (remote services) remote support.
– Notifications (including SNMP trap and email notification settings): You specify the types of notifications that you want and that others might want to receive.
– Power control: You select and control the various power modes for the storage complex.
– Control Switch settings: You specify certain DS8870 settings that affect host connectivity. You need to enter these choices on the control switch settings worksheet so that the marketing representative can set them during the DS8870 installation.

Important: IBM marketing representatives cannot install a storage unit or management console until you provide them with the completed customization worksheets.

11.2 Configuration flow

The following list shows the tasks that must be done when storage is configured in the DS8870 (a DS CLI sketch of the System z steps follows this list). Although the tasks are numbered, they do not need to be done in the order that is shown here:

Important: The configuration flow changes when you use the Full Disk Encryption Feature for the DS8870. For more information, see IBM System Storage DS8000 Disk Encryption, REDP-4500.

1. Install license keys: Activate the license keys for the storage unit.
2. Create arrays: Configure the installed disk drives as RAID 5, RAID 6, or RAID 10 arrays.
3. Create ranks: Assign each array to a fixed block (FB) rank or a count key data (CKD) rank.
4. Create Extent Pools: Define Extent Pools, associate each one with Server 0 or Server 1, and assign at least one rank to each Extent Pool. If you want to take advantage of Storage Pool Striping, you must assign multiple ranks to an Extent Pool. With current versions of the DS GUI, you can start directly with the creation of Extent Pools (arrays and ranks are automatically and implicitly defined).

Important: If you plan to use Easy Tier (in particular, in automatic mode), select the All ranks option to receive all of the benefits of Easy Tier data management.

5. Create a repository for Space Efficient volumes.
6. Configure I/O ports: Define the type of the Fibre Channel/FICON ports. The port type can be Switched Fabric, Arbitrated Loop, or FICON.
7. Create host connections for open systems: Define open systems hosts and their Fibre Channel (FC) host bus adapter (HBA) worldwide port names.
8. Create volume groups for open systems: Create volume groups where FB volumes are assigned. Assign volume groups to the host connections.
9. Create open systems volumes: Create striped open systems FB volumes and assign them to one or more volume groups.
10. Create System z logical control units (LCUs): Define their type and other attributes, such as subsystem identifiers (SSIDs).
11. Create striped System z volumes: Create System z CKD base volumes and Parallel Access Volume (PAV) aliases for them.

The actual configuration can be done by using the DS Storage Manager GUI or DS Command-Line Interface (DS CLI), or both. A novice user might prefer to use the GUI, whereas a more experienced user might use the DS CLI, particularly for the more repetitive tasks, such as creating large numbers of volumes. For more information about these tasks, see the following chapters:
– Chapter 10, “IBM System Storage DS8000 features and license keys” on page 271
– Chapter 13, “Configuration by using the DS Storage Manager GUI” on page 323
– Chapter 14, “Configuration with the DS Command-Line Interface” on page 385
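To complement the task list, the following minimal DS CLI sketch covers the System z (CKD) steps. The pool, LCU, SSID, and volume IDs are illustrative placeholders, and the exact parameters should be verified in Chapter 14, “Configuration with the DS Command-Line Interface” on page 385:

dscli> mkextpool -rankgrp 0 -stgtype ckd CKD_pool_0            (define a CKD Extent Pool on server 0)
dscli> mklcu -qty 1 -id 00 -ss 1000                            (create LCU 00 with SSID 1000)
dscli> mkckdvol -extpool P2 -cap 3339 -name ckd_#h 0000-0003   (create four 3390-3 size base volumes)
dscli> mkaliasvol -base 0000-0003 -order decrement -qty 1 00FF (create one PAV alias for each base volume)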

General storage configuration guidelines

Remember the following general guidelines when storage is configured in the DS8870:
– To achieve a well-balanced load distribution, use at least two Extent Pools, each assigned to one of the internal servers (Extent Pool 0 and Extent Pool 1). If CKD and FB volumes are required, use at least four Extent Pools.
– An address group contains 16 LCUs or logical subsystems (LSSs). The first volume in an address group determines the type of the address group (all CKD or all FB).
– Volumes of one LCU/LSS can be allocated on multiple Extent Pools.
– An Extent Pool should contain only ranks with similar characteristics (for example, RAID level, disk type). Exceptions apply to hybrid pools.
– Ranks in one Extent Pool should belong to separate Device Adapters.
– Assign multiple ranks to Extent Pools to take advantage of Storage Pool Striping.
– CKD: 3380 and 3390 type volumes can be intermixed in an LCU and an Extent Pool.
– FB:
  • Create a volume group for each server unless LUN sharing is required (see the sketch after this list).
  • Ensure that each host is connected to at least two different host adapters in two different I/O enclosures for redundancy.
  • If LUN sharing is required, the following options are available (see Figure 11-1):
    - Create one volume group for each server. Place the shared volumes in each volume group. Assign the individual volume groups to the corresponding server’s host connections. The advantage of this option is that you can assign private and shared volumes to a host.
    - Create one common volume group for all servers. Place the shared volumes in the volume group and assign it to the host connections.

Figure 11-1 LUN configuration for shared access

– I/O ports:
  • Distribute host connections of each type (FICON and FCP) evenly across the I/O enclosures.
  • A port can be configured to be FICON or FCP.
  • Typically, access any is used for I/O ports with access to ports that are controlled by SAN zoning.
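To make the volume group guideline concrete, the following minimal DS CLI sketch defines one volume group for a host with two HBAs. The WWPNs, volume IDs, and names are placeholders:

dscli> mkvolgrp -type scsimask -volume 1000-1003 server01_vg   (create a volume group with four volumes)
dscli> mkhostconnect -wwname 10000000C912345A -hosttype pSeries -volgrp V11 server01_hba0
dscli> mkhostconnect -wwname 10000000C912345B -hosttype pSeries -volgrp V11 server01_hba1

Both host connections reference the same volume group, so the host sees the same LUNs through two host adapters in different I/O enclosures.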

Intermixing: Avoid intermixing host I/O with Copy Services I/O on the same ports.

For more information: For more information about DS8000 configuration that is virtualized behind SAN Volume Controller, see SAN Volume Controller: Best Practices and Performance Guidelines, SG24-7521.


307 .1 for DS8000 This chapter introduces IBM Tivoli Storage Productivity Center 5.1 Exploring DS8870 with IBM Tivoli Storage Productivity Center 5. Configuring IBM Tivoli Storage Productivity Center 5. 2013. This chapter covers the following topics: Introducing IBM Tivoli Storage Productivity Center 5.1 IBM Tivoli Storage Productivity Center Architecture Adding a DS8000 storage system with IBM Tivoli Storage Productivity Center 5. All rights reserved.1 web-based GUI © Copyright IBM Corp.1 and describes how to set up and manage this product to work with the IBM System Storage DS8000 series.12 Chapter 12.

12.1 Introducing IBM Tivoli Storage Productivity Center 5.1

IBM Tivoli Storage Productivity Center is a storage infrastructure management software solution that is designed to help you improve time-to-value. It also helps reduce the complexity of managing your storage environment by simplifying, centralizing, automating, and optimizing storage tasks that are associated with storage systems, storage networks, replication services, and capacity management.

This integrated solution helps to improve the storage total cost of ownership (TCO) and return on investment (ROI). It does so by combining the management of storage assets, capacity, performance, and operations of storage systems and networks that are traditionally offered by separate storage resource management (SRM), device, or SAN management applications into a single console. It helps perform device configuration and manage multiple devices, and can tune and proactively manage the performance of storage devices on the SAN while managing, monitoring, and controlling your SAN fabric.

IBM Tivoli Storage Productivity Center features the following capabilities:
Provide comprehensive visibility and help centralize the management of your heterogeneous storage infrastructure from a next-generation, web-based user interface that uses role-based administration and single sign-on.
Deliver common services for simple configuration and consistent operations across host, fabric, and storage systems.
Manage performance and connectivity from the host file system to the physical disk, including in-depth performance monitoring and analysis of storage area network (SAN) fabric.
Monitor, manage, and control (zone) SAN fabric components.
Monitor and track the performance of SAN-attached SMI-S compliant storage devices.
Manage advanced replication services (Global Mirror, Metro Mirror, and IBM FlashCopy).
Easily set thresholds to monitor capacity throughput to detect bottlenecks on storage subsystems and SAN switches.
Easily create and integrate IBM Cognos®-based custom reports on capacity and performance.

More information about integration and interoperability, including device support, server hardware requirements, database support, and operating system platform support, can be found on the Install/use tab in the Product support section. Other technical resources, such as troubleshooting, downloads, and planning information, also can be found in the Product support section.

12.2 IBM Tivoli Storage Productivity Center Architecture

IBM Tivoli Storage Productivity Center consists of the following key components. This section identifies these components and shows how they are related:

Data server: This component is the control point for product scheduling functions, configuration, event information, reporting, and graphical user interface (GUI) support. It coordinates communication with and data collection from agents that scan file systems and databases to gather storage demographics and populate the database with results. Automated actions can be defined to perform file system extension, data deletion, and IBM Tivoli Storage Productivity Center backup or archiving, or event reporting when defined thresholds are encountered. The Data server is the primary contact point for GUI user interface functions. It also includes functions that schedule data collection and discovery for the Device server.

Device server: This component discovers, gathers information from, analyzes performance of, and controls storage subsystems and SAN fabrics. It coordinates communication with and data collection from agents that scan SAN fabrics and storage devices.

Replication server: This component coordinates communication and processes tasks that are related to replication and IBM Tivoli Storage Productivity Center for Replication.

Database: A single database instance serves as the repository for all IBM Tivoli Storage Productivity Center components.

Agents: Storage Resource agents, CIM agents, and Out of Band fabric agents gather host, application, storage system, and SAN fabric information and send that information to the Data server or Device server.

Important: Data agents and Fabric agents are still supported in V5.1. However, no new functions were added to those agents for the release. For optimal results when IBM Tivoli Storage Productivity Center is used, migrate the Data agents and Fabric agents to Storage Resource agents. For more information about migrating agents, see this website:
http://pic.dhe.ibm.com/infocenter/tivihelp/v59r1/index.jsp?topic=%2Fcom.ibm.tpc_V51.doc%2Ffqz0_c_migrating_agents.html

Graphical user interface: IBM Tivoli Storage Productivity Center provides two graphical user interfaces for managing the storage infrastructure in an enterprise environment: a stand-alone GUI and the web-based GUI. Each GUI provides different functions for working with monitored resources.

Command-Line Interface: Use the Command-Line Interface (CLI) to issue commands for key IBM Tivoli Storage Productivity Center functions.

Tivoli Integrated Portal:

The IBM Tivoli Storage Productivity Center installation program includes IBM Tivoli Integrated Portal. Tivoli Integrated Portal is a standards-based architecture for web administration that enables developers to build administrative interfaces for IBM and independent software products as individual plug-ins to a common console network. The installation of Tivoli Integrated Portal is required to enable single sign-on for IBM Tivoli Storage Productivity Center. Single sign-on is an authentication process that you can use to enter one user ID and password to access multiple applications. Single sign-on integrates with launch-in-context features so you can move smoothly from one application to another.

The IBM Tivoli Storage Productivity Center data is in the DB2 database, and the definition of the Cognos reports is stored in a separate Cognos database that is based on Apache Derby. Both databases are needed to get complete information.
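As a small taste of the CLI component, the tpctool command can query the Device server. The command that follows is a sketch only: the user name, password, server name, and port are placeholder assumptions, and the available options differ between releases, so consult the tpctool documentation for your level before using it:

tpctool lsdev -user tpcadmin -pwd passw0rd -url tpcserver:9550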

12.3 Adding a DS8000 storage system with IBM Tivoli Storage Productivity Center 5.1

This section describes how to add an IBM System Storage DS8000 storage system to IBM Tivoli Storage Productivity Center. You can add multiple devices of the same type by using a single session of the Add Devices wizard. For example, you can add a DS8000 server, IBM System Storage SAN Volume Controller, and storage systems that use Common Information Model (CIM) agents by using a single session of the wizard. You cannot configure multiple devices of different types at the same time. For example, you cannot add a storage system and a fabric by using a single session of the wizard. You cannot add a device from the IBM Tivoli Storage Productivity Center server web GUI.

Complete the following steps to add a DS8000 storage system by using IBM Tivoli Storage Productivity Center 5.1:
1. From the Welcome to the IBM Tivoli Storage Productivity Center stand-alone GUI on the local server, click Add Devices, as shown in Figure 12-1.

Figure 12-1 Select Add Devices

Important: The Configure Devices feature in the IBM Tivoli Storage Productivity Center stand-alone GUI might also be used to add devices, as shown in Figure 12-2.

Figure 12-2 Select Configure Devices

2. From the Select Device Type page, click Storage Subsystem and click Next, as shown in Figure 12-3.

Figure 12-3 Select Storage system

3. From the Select devices page, select Add and configure new storage subsystems and click Next, as shown in Figure 12-4.

Figure 12-4 Select Add and configure new storage subsystems

4. From the Configure storage subsystem connections page, locate the Device Type field. Click the arrow to view the list of devices, then select IBM DS8000. Enter the HMC Address, User Name, and Password information. Enter the following connection properties for the DS8000 storage system:
– HMC Address: Enter the IP address or host name for the Hardware Management Console (HMC) that manages the DS8000 system.
– HMC2 Address (Optional): Enter the IP address or host name of a second HMC that manages the DS8000 system.
– User Name: Enter the user name for logging on to the IBM System Storage DS8000 Storage Manager (also known as the DS8000 element manager or GUI). The default user name is admin.

User name: The user name is the same as the user name for the enterprise storage server network interface (ESSNI).

You need one user name and password to set up the DS8000 Storage Manager. When you log on to the DS8000 Storage Manager for the first time by using the administrator user ID, you are required to change the password. The user name and the password are stored in the IBM Tivoli Storage Productivity Center database. After the user name and password are stored, when you log on to the DS8000 Storage Manager in IBM Tivoli Storage Productivity Center, the stored user name and the privileges that are associated with it are retrieved.

5. Click Add, as shown in Figure 12-5. The Configure storage subsystem connections page displays a panel that shows the IP (HMC) address and device type connection properties that you entered.

Figure 12-5 Provide HMC Address, User Name, and Password

6. To enter connection properties for more DS8000 servers, repeat steps 4 and 5. When you finish entering connection properties for the DS8000 servers that you want to add, click Next and go to step 7. IBM Tivoli Storage Productivity Center discovers the DS8000 servers and collects initial configuration data from them, as shown in Figure 12-6.

Important: The discovery process gathers only raw information and is completed after a few minutes.

Figure 12-6 Discovering storage subsystems

7. From the Discover storage subsystems page, when the discovery is completed, the message Completed successfully is displayed in the Status column. Click Next.
8. From the Select Storage Subsystems page, click the DS8000 system that you want to add, then click Next. The devices that are displayed are known to IBM Tivoli Storage Productivity Center. Delete storage systems that you do not want to add.

Important: Any storage systems that were added in previous steps are automatically selected.

9. In the Specify data collection page, complete the following steps to indicate how you want IBM Tivoli Storage Productivity Center to collect data from the DS8000 system:
a. In the field Select monitoring group, click the arrow to select a monitoring group. When you include the DS8000 system in a monitoring group, IBM Tivoli Storage Productivity Center manages the system and a collection of other storage systems in the same manner. Each monitoring group is associated with an existing probe schedule and set of alerts. When you select a monitoring group, its data collection schedule and alerts are automatically applied to the DS8000 system and all storage systems that you are adding. After you complete the wizard, the storage systems remain in the group and you can use that group when you are working in other parts of the IBM Tivoli Storage Productivity Center user interface, such as in reports (see Figure 12-7 on page 315).
b. In the field Use a monitoring group or template, click the arrow to select Subsystem Standard Group. Create a probe for each storage system to collect statistics and detailed information about the monitored storage resources in your environment, such as pools, volumes, and disk controllers. The time that is needed to complete this process can last up to an hour (depending on the size of the DS8000 storage system).

Important: It is desirable to have the least number of devices in the same monitoring group. Otherwise, all of the devices in the same group start the probe schedule at the same time.

Figure 12-7 Specify data collection

10. Click Next. From the Review user selections page, review the following configuration choices that you made for the DS8000 by using the Configure Devices wizard:
– The list of devices that you are adding.
– The name of the monitoring group that you selected. This value is not displayed if you selected a template.
– The name of the probe schedule that is created based on the template you selected. This value is not displayed if you selected a monitoring group.
– The names of the alerts that are created based on the template you selected. This value is not displayed if you selected a monitoring group.
– Information about the probe schedule that was created for a template.
11. Click Next to commit your configuration choices and to complete the configuration. The View results page is displayed. It includes the following information:
– A list of the actions that are performed by the wizard. The page displays a row for the successful actions. If the configuration failed, a row is displayed for each failed action. For example, if the wizard expects to assign a specific alert to five devices, but the operation succeeds for only three of the devices, the page displays one row for the successful actions and two rows for the failed actions.
– Error messages for any failed actions. To resolve the error, search for the message identifier in the information center.
12. Click Finish to close the Configure Devices wizard.

13. After successfully adding the DS8000 system, the new device is shown from the IBM Tivoli Storage Productivity Center server web GUI, as shown in Figure 12-8.

Figure 12-8 Storage system view

14. Click Disk Manager → Storage Subsystems to view the configured devices, as shown in Figure 12-9. When you highlight the IBM DS8000 system, certain actions become available with which you view the device configuration or create volumes.

Figure 12-9 Storage Systems panel view

15. The associated probe jobs for the DS8000 storage system also can be verified, as shown in Figure 12-10.

Figure 12-10 IBM Tivoli Storage Productivity Center probe jobs for the DS8000 storage system

Important: If you schedule a probe to collect data about a storage system and plan to create many volumes on that storage system, the performance of the volume creation job and the general performance of the associated CIMOM might decrease. Consider scheduling a probe job at a different time than when you plan to create many volumes on the DS8000 storage system.

For more information about Tivoli Storage Productivity Center 5.1, see Tivoli Storage Productivity Center v5.1 Technical Guide, SG24-8053, and the IBM Tivoli Storage Productivity Center Information Center at this website:
http://pic.dhe.ibm.com/infocenter/tivihelp/v59r1/index.jsp?topic=%2Fcom.ibm.tpc_V51.doc%2FTPC_ic-homepage_v51.html

12.4 Exploring DS8870 with IBM Tivoli Storage Productivity Center 5.1 web-based GUI

This section describes some of the important functions of IBM Tivoli Storage Productivity Center 5.1 by using its web-based GUI to explore the DS8870.

Stand-alone GUI: A stand-alone GUI is available within IBM Tivoli Storage Productivity Center 5.1. Web-based and stand-alone GUIs feature different (and some identical) configuration, monitoring, administration, alerting, and reporting tasks.

The web-based GUI is a new, improved user interface that provides different functions for working with monitored resources. Compared to the stand-alone GUI, the web-based GUI offers you better and simplified navigation with the following major features:
At-a-glance assessment of storage environment
Monitor and troubleshoot capabilities
Rapid problem determination
Review, acknowledge, and delete alerts
Review and acknowledge health status
View Internal and External Resources Relationships
Access to Cognos reporting

At-a-glance assessment of storage environment
When you first log on to the web-based GUI, the displayed dashboard provides a concise, complete overview of your storage environment. It includes the following information:
The status of monitored resources on the resource diagram (storage systems, servers, hypervisors, fabrics, and switches)
The overall capacity of monitored resources
The status of IBM Tivoli Storage Productivity Center jobs
The status of alerts that were detected on monitored resources

You can use this information to quickly determine the health and state of your environment. Figure 12-11 shows the web-based GUI dashboard.

Figure 12-11 Web-based GUI dashboard

Monitor and troubleshoot
By using the web-based GUI, you can easily monitor and troubleshoot your environment. It applies to servers, fabric and switches, block, and file storage systems. By clicking Storage Resources  Storage Systems, summary and detailed information about the storage resources can be gathered easily, including properties, storage usage, storage capacity, and several key performance metrics. For example, when migrating large amounts of data, you can identify the target resources that have enough storage capacity to accommodate the data and require the least amount of reconfiguration.


An example of summary and detailed information about storage systems is shown in Figure 12-12.

Figure 12-12 Web-based GUI: Storage Systems information

Rapid problem determination
Each managed resource includes a status symbol that represents the most critical status that was detected on the internal resources for a resource type. For example, if Tivoli Storage Productivity Center monitors 20 storage systems, and an error was detected on a port for one of those storage systems, a red symbol is shown next to the storage systems icon in the diagram. Figure 12-13 shows an example of storage systems status error.

Figure 12-13 Web-based GUI: Storage Systems Status Errors

If no errors, warnings, or unreachable statuses were detected on the internal resources of monitored storage systems, a green symbol is shown. By knowing the status, you can quickly determine the condition of your storage systems and if any actions must be taken.


Review, acknowledge, and delete alerts
In the web-based GUI, you can review, acknowledge, and delete alerts that are generated when Tivoli Storage Productivity Center detects certain conditions or events on monitored resources (as shown in Figure 12-14). Many conditions can trigger alerts, so you can set up Tivoli Storage Productivity Center to generate alert notifications for the conditions that you specify.

Figure 12-14 Web-based GUI: Alerts

Review and acknowledge health status
You can use the web-based GUI to monitor the health status of resources and identify potential problem areas in a storage environment. Monitored resources include top-level resources and their internal and related resources. By using the status information (as shown in Figure 12-15), you can quickly determine the condition of your storage and if any actions must be taken. Tivoli Storage Productivity Center provides a number of different statuses, which are represented by an icon in the web-based GUI to help you determine the condition of the resources.

Figure 12-15 Web-based GUI: Health Status


View Internal and External Resources Relationships
You can use the web-based GUI to monitor storage systems, servers, hypervisors, fabrics, and switches. Information about these top-level resources includes information about their internal resources and related external resources (as shown in Figure 12-16). Internal resources are components that exist in a top-level resource. Related resources are external to a top-level resource, but are related to it through assigned storage, a network connection, or virtual hosting.

Figure 12-16 Web-based GUI: Internal and External Related Resources

Access to Cognos reporting
The web-based GUI provides Cognos reporting to use predefined reports or design custom reports that contain detailed information about the properties and performance of monitored resources. All the performance metrics from the stand-alone GUI are available in the reporting interface. For example, you can drag key metrics into a report to create and generate a performance chart for a specific volume of a storage system. The reporting interface also includes a set of predefined reports that provides quick access to pre-formatted data about resources. For more information about Cognos reporting, see Tivoli Storage Productivity Center v5.1 Technical Guide, SG24-8053.


Chapter 13. Configuration by using the DS Storage Manager GUI
The DS Storage Manager provides a graphical user interface (GUI) to configure the IBM System Storage DS8000 series. The DS Storage Manager GUI (DS GUI) is a browser-based tool that can be accessed directly by pointing the browser to the Hardware Management Console (HMC) IP address or started by launching an Element Manager in the IBM Tivoli Storage Productivity Center GUI. This chapter describes the ways to access the DS GUI and how to use it to configure the storage on the DS8000.

This chapter covers the following topics:
DS Storage Manager GUI overview
Logical configuration process
Examples of configuring DS8000 storage
Examples of exploring DS8000 storage status and hardware

For more information about Copy Services configuration in the DS8000 family by using the DS GUI, see the following IBM Redbooks publications:
IBM System Storage DS8000: Copy Services for Open Systems, SG24-6788
IBM System Storage DS8000: Copy Services for IBM System z, SG24-6787

For more information about DS GUI changes that are related to disk encryption, see IBM System Storage DS8700: Disk Encryption Implementation and Usage Guidelines, REDP-4500. For more information about DS GUI changes that are related to LDAP authentication, see IBM System Storage DS8000: LDAP Authentication, REDP-4505.

Code version: Some of the figures in this chapter might not reflect the latest version of the DS GUI code.


13.1 DS Storage Manager GUI overview
In this section, we describe the DS GUI access methods. The DS GUI code runs on the DS8000 Hardware Management Console (HMC), and the ways to reach it are described next. Important: The GUI has changed little between the DS8800 and the DS8870. Some figures that are used in this chapter were taken from models that preceded the DS8870.

13.1.1 Accessing the DS GUI
You can access the DS GUI in any of the following ways:
From a browser that is connected to the HMC
From Tivoli Storage Productivity Center on a workstation that is connected to the HMC
From a browser that is connected to Tivoli Storage Productivity Center on any server
By using Microsoft Windows Remote Desktop

The DS8000 HMC that contains the DS Storage Manager communicates with the DS Network Interface Server, which is responsible for communicating with the two controllers of the DS8000. Access to the DS8000 HMC is supported through the IPv4 and IPv6 Internet Protocol. These access capabilities, which use basic authentication, are shown in Figure 13-1. In our illustration, the Tivoli Storage Productivity Center Server connects to two HMCs that manage two DS8000 storage complexes.

Figure 13-1 Accessing the DS8000 GUI (without LDAP, user authentication is managed by the ESSNI Server and its user repository at each HMC, regardless of the type of connection)


The DS8000 supports the ability to use a Single Point of Authentication function for the GUI and DS Command-line Interface (DS CLI) through a centralized Lightweight Directory Access Protocol (LDAP) server. This capability is supported by Tivoli Storage Productivity Center Version 4.2.1 (or later), which is preloaded. If you have an earlier Tivoli Storage Productivity Center version, you must upgrade Tivoli Storage Productivity Center to V4.2.1 to take advantage of the Single Point of Authentication function for the GUI and CLI through a centralized LDAP server. The access capabilities through LDAP authentication are shown in Figure 13-2. As shown, Tivoli Storage Productivity Center connects to two HMCs that are managing two DS8000 storage complexes.

Figure 13-2 LDAP authentication to access the DS8000 GUI and CLI (the authentication is now managed through the Authentication Server, a TPC component, and an Authentication Client at the HMC; the Authentication Server provides the connection to the LDAP or other repositories)

More information: For more information about LDAP-based authentication, see IBM System Storage DS8000: LDAP Authentication, REDP-4505.


Accessing the DS GUI directly through a browser
The DS Storage Manager GUI can be launched directly from any workstation with network connectivity to the HMC. Supported browsers include Mozilla Firefox 10 ESR and Microsoft Internet Explorer 8 or 9. To connect to the DS Storage Manager GUI on one of the HMCs, enter one of the following URLs in a supported browser:
http://<HMC-IP>:8451/DS8000
https://<HMC-IP>:8452/DS8000
An example of the DS GUI is shown in Figure 13-3.

Figure 13-3 DS GUI start page


Accessing the DS GUI through the Element Manager
The DS GUI code at the DS8000 HMC also can be invoked from the Tivoli Storage Productivity Center GUI and accessed by starting an Element Manager. The sequence of windows that is shown when the DS GUI is accessed through the Element Manager is shown in Figure 13-4, Figure 13-5 on page 328, and Figure 13-6 on page 328. Complete the following steps to access the DS GUI through the Element Manager: 1. Log in to your Tivoli Storage Productivity Center installed server and start the IBM Tivoli Storage Productivity Center. 2. Enter your Tivoli Storage Productivity Center user ID and password. 3. In the Tivoli Storage Productivity Center window that is shown in Figure 13-4, click Element Management (above the Navigation Tree) to start the Element Manager.

Figure 13-4 Tivoli Storage Productivity Center GUI: Start Element Manager

Important: We assume that the DS8000 storage subsystem (Element Manager) is already configured in Tivoli Storage Productivity Center.


4. After the Element Manager is started, click the disk system that you want to access, as shown in Figure 13-5.

Figure 13-5 Tivoli Storage Productivity Center GUI: Select the DS8000

5. You are presented with the DS GUI Overview window for the selected disk system as shown in Figure 13-6.

Figure 13-6 Tivoli Storage Productivity Center GUI: DS GUI Overview window


Accessing the DS GUI with a browser that is connected to Tivoli Storage Productivity Center Server
To access the DS GUI, you can connect to Tivoli Storage Productivity Center Server by using a web browser. Use the procedure that is described in “Accessing the DS GUI through the Element Manager” on page 327.

Accessing the DS GUI with a browser that is connected to a Tivoli Storage Productivity Center workstation
To access the DS GUI, you can connect to a Tivoli Storage Productivity Center workstation by using a web browser. Use the procedure that is described in “Accessing the DS GUI through the Element Manager” on page 327. For more information about Tivoli Storage Productivity Center, see Chapter 12, “Configuring IBM Tivoli Storage Productivity Center 5.1 for DS8000” on page 307.

Accessing the DS GUI with a remote desktop connection to Tivoli Storage Productivity Center Server
You can use a remote desktop connection to connect to Tivoli Storage Productivity Center Server. After you are connected to Tivoli Storage Productivity Center Server, follow the procedure that is described in “Accessing the DS GUI through the Element Manager” on page 327 to access the DS GUI. For information about Tivoli Storage Productivity Center, see Chapter 12, “Configuring IBM Tivoli Storage Productivity Center 5.1 for DS8000” on page 307.

13.1.2 DS GUI Overview window
After you log on, the DS Storage Manager Overview window that is shown in Figure 13-6 on page 328 displays. In this Overview window, you can see pictures and descriptions of DS8000 configuration components. The left side of the window is the navigation pane. Pictures in the shaded area can be clicked to view their description in the lower half of the window.

DS GUI window options
Figure 13-7 shows an example of the Manage Volumes window. Several important options that are available on this page are also on many other windows of the DS Storage Manager. We explain several of these options next.

Figure 13-7 Manage Volumes window

The DS GUI displays the configuration of your DS8000 in tables. There are several options that you can use:
To download the information from the table, click Download. This function can be useful if you want to document your configuration. The file is in comma-separated value (.csv) format and you can open the file with a spreadsheet program. This function is also useful if the table on the DS8000 Manager consists of several pages; the .csv file includes all pages.
The Print report option opens a new window with the table in HTML format and starts the printer dialog box if you want to print the table.
The Action drop-down menu provides you with specific actions that you can perform. Select the object that you want to access and then the appropriate action (for example, Create or Delete).
Choose Column Value sets and clears filters so that only specific items are displayed in the table (for example, show only FB extent pools in the table).
To search the table, enter the criteria in the Filter field. The GUI displays entries in the table that match the criteria. This function can be useful if you have tables with many items.

DS GUI navigation pane
In the navigation pane, which is on the left side of the window, you navigate to the various functions of the DS8000 GUI. It has two views to choose from: Icon view and Original view. The two views are shown in Figure 13-8. The default view is set to Icon view, but you can change this default by clicking Navigation Choice in the bottom part of the navigation pane.

Figure 13-8 Navigation pane: Icon view on the left and legacy view on the right

When you point to one of the icons in the Icon view, the icon increases in size and a window opens that shows the panels to which you can navigate and the actions that are available under this icon, as shown in Figure 13-9.

Figure 13-9 Example icon view

The Icon view features the following menu structure:
Home:
– Getting Started
– System Status
Monitor:
– Tasks
Pools:
– Internal Storage
Volumes:
– FB Volumes
– Volume Groups
– CKD LCUs and Volumes
Hosts:
– Hosts
Copy Services:
– FlashCopy
– Metro Mirror/Global Copy
– Global Mirror
– Mirroring Connectivity
Access:
– Users
– Remote Authentication
– Resource Groups
Configuration:
– Encryption Key Servers
– Encryption Groups

13.2 User management by using the DS GUI
For GUI-based user administration, sign on to the DS GUI and complete the following steps:
1. From the categories in the left sidebar, select User Administration under the section Monitor System, as shown in Figure 13-10.

Figure 13-10 DS Storage Manager GUI main window

2. From the categories in the left sidebar, select Remote Authentication (under Configuration), as shown in Figure 13-11.

Figure 13-11 Remote Authentication

3. Select the complex that you want to modify. A list of the storage complexes and their active security policies is shown, as shown in Figure 13-12. (For more information, see “Defining a storage complex” on page 338.)

Figure 13-12 Selecting a storage complex

4. Select Create Storage Authentication Service Policy or Manage Authentication Policy from the Action menu. The next window displays all of the security policies on the HMC for the storage complex you chose. You can create many policies, but only one can be active at a time. You can choose to create a security policy or manage one of the existing policies. Select a policy by highlighting the row. Then, select Properties from the Select menu, as shown in Figure 13-13.

Figure 13-13 Selecting a security policy

5. You can choose to add a user (click Select action → Add user) or modify the properties of an existing user, as shown in Figure 13-14.

Figure 13-14 Selecting Modify User

The administrator can perform the following tasks from this window:
– Add User (the DS CLI equivalent is mkuser)
– Modify User (the DS CLI equivalent is chuser)
– Lock or Unlock User: The choice toggles (the DS CLI equivalent is chuser)
– Delete User (the DS CLI equivalent is rmuser)
– Password Settings (the DS CLI equivalent is chpass)
6. The Password Settings window is where you can modify the number of days before a password expires and the number of login retry attempts that a user receives before the account is locked, as shown in Figure 13-15.

Figure 13-15 Password Settings window

Important: If a user who is not in the Administrator group logs on to the DS GUI and goes to the User Administration window, the user sees only their own user ID in the list. The only action that they can perform is to change their password.

7. Selecting Add user displays a window in which a user can be added by entering the user ID, the temporary password, and the role, as shown in Figure 13-16. The role decides what type of activities can be performed by this user. In this window, the user ID can be temporarily deactivated by selecting only the No access option.

Figure 13-16 Adding a user to the HMC

Take special note of the new role of the Security Administrator (secadmin), which is new to the microcode for the DS8870. This role was created to separate the duties of managing the storage from managing the encryption for DS8870 units that are shipped with Full Disk Encryption storage drives. Notice how the Security Administrator option is disabled in the Add/Modify User window in Figure 13-16. If you are logged in to the GUI as a Storage Administrator, you cannot create, modify, or delete users of the Security Administrator role. Similarly, Security Administrators cannot create, modify, or delete Storage Administrators.
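The DS CLI equivalents that are listed in step 5 can be combined into a short script. The following sketch is illustrative only: the user name, group, and policy values are assumptions, and the exact chpass options can differ by DS CLI release, so verify them with the built-in help:

# Create a user with a temporary password in the monitor group, then promote it
mkuser -pw tempw0rd -group monitor peter
chuser -group op_storage peter
# Let passwords expire after 90 days and lock an account after 4 failed logins
chpass -expire 90 -fail 4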

13.3 Logical configuration process
When performing the initial logical configuration, the first step is to create the storage complex (processor complex) with the definition of the hardware of the storage unit. Then, use the following approach for the logical configuration:
1. Define the storage complex.
2. Create Extent Pools.
3. Create open system volumes.
4. Create count key data (CKD) logical control units (LCUs) and volumes.
5. Create host connections and volume groups.

More information: For more information, see 5.2.10, “Virtualization hierarchy summary” on page 131.

Tasks summary window
Some logical configuration tasks include dependencies on the successful completion of other tasks. For example, you cannot create ranks on arrays until the array creation is complete. The Tasks summary window assists you in this process by reporting the progress and status of these long-running tasks. The Task Summary window can be seen by pointing to Monitor and clicking Tasks in the navigation pane. Figure 13-17 shows the successful completion of the tasks. Click the specific task link to get more information about the task.

Figure 13-17 Task Summary window

13.4 Examples of configuring DS8000 storage
In the following sections, we show an example of a DS8000 configuration that is made through the DS GUI. For each configuration task (for example, creating an array), the process guides you through windows in which you enter the necessary information. During this process, you can go back to make modifications or cancel the process. At the end of each process, you receive a verification window in which you can verify the information that you entered before you submit the task.

13.4.1 Defining a storage complex
During the DS8000 installation, your IBM service representative customizes the setup of your storage complex that is based on information that you provide in the customization worksheets. After you log in to the DS GUI and before you start the logical configuration, check the status of your storage system. Complete the following steps to add a storage complex:
1. In the navigation pane of the DS GUI, mouse over Home and click System Status. The System Status window opens, as shown in Figure 13-18.

Figure 13-18 System Status window

You must have at least one storage complex that is listed in the table. If you have more than one DS8000 system in your environment that is connected to the same network, you can define it here. Select Storage Complex → Add from the Action drop-down menu to add a storage complex, as shown in Figure 13-19.

Figure 13-19 Add Storage Complex window

2. The Add Storage Complex window opens, as shown in Figure 13-20. Enter the IP address of the HMC that is connected to the new storage complex that you want to add. Click OK to continue.

Figure 13-20 Add Storage Complex window

A new storage complex is added to the table, as shown in Figure 13-21.

Figure 13-21 New storage complex is added

Having all the DS8000 storage complexes defined together provides flexible control and management. The status information indicates the healthiness of each storage complex. By clicking the status description link of any storage complex, you can obtain more detailed health check information for various vital DS8000 components, as shown in Figure 13-22.

Figure 13-22 Check the status details

In Figure 13-23, an example of various status states is shown. Status descriptions can be reported for your storage complexes. These descriptions depend on the availability of the vital storage complex components.

Figure 13-23 Different Storage Complex Status states

A Critical status indicates that vital storage complex resources are unavailable. An Attention status might be triggered by resources that are unavailable; because the DS8000 has redundant components, the storage complex is still operational. One example is when only one storage server inside a storage image is offline, as shown in Figure 13-24.

Figure 13-24 One storage server is offline

Check the status of your storage complex and proceed with logical configuration tasks (create arrays, ranks, Extent Pools, or volumes) only when your HMC consoles are connected to the storage complex. Both storage servers inside the storage image also must be online and operational.

13.4.2 Creating arrays
Important: You do not necessarily need to create arrays first and then ranks. You can proceed directly with the creation of Extent Pools, as described in 13.4.4, “Creating Extent Pools” on page 349.

Complete the following steps in the DS GUI to create an array:
1. In the GUI, from the navigation pane, mouse over Pools and click Internal Storage. The Internal Storage window displays, as shown in Figure 13-25. From the Storage image drop-down menu, select the storage image that you want to access. In our example, some of the DS8000 capacity is assigned to open systems, some is assigned to System z/OS, and some is unassigned.

Figure 13-25 Disk Configuration window

Important: If you defined more storage complexes or storage images, be sure to select the correct storage image before you start creating arrays.

2. Click the Array Sites tab to check the available storage that is required to create the array, as shown in Figure 13-26. Each array site has eight physical disk drives. In our example, some array sites are unassigned and therefore are eligible to be used for array creation.

Figure 13-26 Array sites

3. To see more details about each array site, select the array site and click Properties under the Action drop-down menu, as shown in Figure 13-27. The Single Array Site Properties window opens and provides general array site characteristics.

Figure 13-27 Select Array Site Properties

4. Click the Status tab to get more information about the Disk Drive Modules (DDMs) and the state of each DDM, as shown in Figure 13-28. All DDMs in this array site are in the Normal state.

Figure 13-28 Single Array Site Properties: Status

5. Click OK to close the Single Array Site Properties window and return to the Internal Storage main window.
6. After we identify the unassigned and available storage, we can create an array. Click the Array tab in the Manage Disk Configuration section and select Create Arrays in the Action drop-down menu, as shown in Figure 13-29.

Figure 13-29 Select Create Arrays

The Create New Arrays window opens, as shown in Figure 13-30. You must provide the following information:
– RAID Type:
• RAID 5 (default)
• RAID 6
• RAID 10
SSD disks support only RAID 5 and RAID 10 (with RPQ). The 3-TB nearline-SAS disks support only RAID 6.
– Type of configuration: The following options are available:
• Automatic is the default, and it allows the system to choose the best array site configuration that is based on your capacity and DDM type.
• The Manual option can be used if you want more control over the resources. When you select this option, a table of available array sites is displayed. You manually select array sites from the table.
– If you select the Automatic configuration type, you need to provide more information:
• From the Select Capacity to Configure list, select the total capacity. The bar graph displays the effect of your choice.
• From the Drive Class drop-down menu, select the DDM type that you want to use for the new array.
• From the DA Pair Usage drop-down menu, select the appropriate action. The Spread Among All Pairs option balances arrays evenly across all available Device Adapter (DA) pairs. The Spread Among Least Used Pairs option assigns the array to the least used DA pairs. The Sequentially Fill All Pairs option assigns arrays to the first DA pair, then to the second DA pair, and so on.
If you want to create many arrays with different characteristics (RAID and DDM type) in one task, select Add Another Array as many times as required. In our example that is shown in Figure 13-30, we created one RAID 5 array on SSDs. Click OK to continue.

Figure 13-30 Create New Arrays window

7. The Create array verification window is displayed, as shown in Figure 13-31. All array sites that were chosen for the new arrays we want to create are listed here. At this stage, you can still change your configuration by deleting the array sites from the lists and adding new array sites, if required. Click Create All after you decide to continue with the proposed configuration.

Figure 13-31 Create array verification window

Wait for the message in Figure 13-32 to appear and then click Close.

Figure 13-32 Creating arrays: Completed message

The graph in the Internal Storage summary section is changed to reflect the arrays that were configured.
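For reference, the DS CLI equivalent of this task is the mkarray command. The following sketch first lists the array sites and then builds one RAID 5 array; the array site ID S1 is an assumption, and the -state filter might not be available on every code level:

lsarraysite -state unassigned   # find array sites that are eligible for array creation
mkarray -raidtype 5 -arsite S1  # create a RAID 5 array on array site S1
lsarray                         # verify the new array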

13.4.3 Creating ranks
Important: You do not necessarily need to create arrays first and then ranks. You can proceed directly with the creation of Extent Pools, as described in 13.4.4, “Creating Extent Pools” on page 349.

Complete the following steps in the DS GUI to create a rank:
1. In the GUI, from the navigation pane, mouse over Pools and click Internal Storage. The Internal Storage window opens. From the Storage image drop-down menu, select the storage image that you want to access. Click the Ranks tab to start working with ranks.

Important: If you defined more storage complexes or storage images, be sure to select the correct storage image before you start creating ranks.

2. Select Create Rank from the Action drop-down menu, as shown in Figure 13-33.

Figure 13-33 Select Create Ranks

The Create New Ranks window opens, as shown in Figure 13-34.

Figure 13-34 Create New Ranks window

To create a rank, you must provide the following information:
– Storage Type: The type of extent for which the rank is to be configured. The storage type can be set to one of the following values:
• Fixed block (FB) extents = 1 GB. In fixed block architecture, the data (the logical volumes) is mapped over fixed-size blocks or sectors.
• Count key data (CKD) extents = 3390 Mod 1. In count-key-data architecture, the data field stores the user data.
– RAID Type: RAID 5 (default), RAID 6, or RAID 10 for HDDs. For SSDs, only RAID 5 is available (and RAID 10 with an RPQ request). The 3-TB nearline-SAS disks support only RAID 6 (and RAID 10 with RPQ request).
– Encryption Group: Indicates whether encryption is enabled or disabled for ranks. Select 1 from the Encryption Group drop-down menu if the encryption feature is enabled on this machine. Otherwise, select None.
– Type of configuration: The following options are available:
• Automatic is the default and it allows the system to choose the best configuration of the physical resources that is based on your capacity and DDM type.
• The Manual option can be used if you want more control over the resources. When you select this option, a table of available array sites is displayed. You then manually select resources from the table.
– If you select the Automatic configuration type, you need to provide the following information:
• From the Select capacity to configure list, select the wanted total capacity. The bar graph displays the effect of your choice.
• From the Drive Class drop-down menu, select the DDM type that you want to use for the new array.
• From the DA Pair Usage drop-down menu, select the appropriate action. The Spread Among All Pairs option balances arrays evenly across all available Device Adapter (DA) pairs. The Spread Among Least Used Pairs option assigns the array to the least-used DA pairs. The Sequentially Fill All Pairs option assigns arrays to the first DA pair, then to the second DA pair, and so on.
If you want to create many ranks with different characteristics (Storage, RAID, and DDM type) at one time, select Add Another Rank as many times as required. In our example, we create one FB rank on SSDs with RAID 5. Click OK to continue.

3. The Create rank verification window is displayed, as shown in Figure 13-35. Each array site that is listed in the table is assigned to the corresponding array that we created in 13.4.2, “Creating arrays” on page 341. At this stage, you can still change your configuration by deleting the ranks from the lists and adding new ranks, if required. Click Create All after you decide to continue with the proposed configuration.

Figure 13-35 Create rank verification window

4. The Creating Ranks window opens. Click View Details to check the overall progress. It displays the Task Properties window, as shown in Figure 13-36.

Figure 13-36 Creating ranks: Task Properties view

5. After the task is completed, return to Internal Storage and, under the Rank tab, check the list of newly created ranks. The bar graph in the Disk Configuration Summary section is changed. There are new ranks, but they are not assigned to Extent Pools.
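In the DS CLI, the same task maps to the mkrank command, with one invocation per array; the array IDs below are assumptions for the example:

mkrank -array A0 -stgtype fb    # fixed block rank on array A0
mkrank -array A1 -stgtype ckd   # count key data rank on array A1
lsrank                          # verify the new ranks and their state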

13.4.4 Creating Extent Pools
Complete the following steps in the DS GUI to create an Extent Pool:
1. In the GUI, from the navigation pane, point to Pools and click Internal Storage. The Internal Storage window opens. From the Storage image drop-down menu, select the storage image that you want to access. The bar graph in the summary section provides information about unassigned and assigned capacity. Click the Extent Pool tab.
2. Select Create Extent Pools from the Action drop-down menu, as shown in Figure 13-37.

Figure 13-37 Select Create Extent Pools

Important: If you defined more storage complexes or storage images, be sure to select the correct storage image before you create Extent Pools.

3. The Create New Extent Pools window opens, as shown in Figure 13-38. Scroll down to see the rest of the window and provide input for all the fields.

Figure 13-38 Create New Extent Pools window

To create an Extent Pool, you must provide the following information:
– Storage Type: The type of extent for which the rank is to be configured. The storage type can be set to one of the following values:
• Fixed block (FB) extents = 1 GB. In the fixed block architecture, the data (the logical volumes) is mapped over fixed-size blocks or sectors.
• Count key data (CKD) extents = 3390 Mod 1. In the count-key-data architecture, the data field stores the user data.
– RAID Type:
• RAID 5 (default)
• RAID 6
• RAID 10

SSD disks support only RAID 5 (and RAID 10 with RPQ request). The 3-TB nearline-SAS disks support only RAID 6 (and RAID 10 with RPQ request).
– Encryption Group: Indicates whether encryption is enabled or disabled for ranks. Select 1 from the Encryption Group drop-down menu if the encryption feature is enabled on the machine. Otherwise, select None.
– Type of configuration: The following options are available:
• Automatic is the default and it allows the system to choose the best configuration of physical resources that is based on your capacity and DDM type.
• The Manual option can be used if you want more control over the resources. When you select this option, a table of available array sites is displayed. You must manually select resources from this table.
– If you select the Automatic configuration type, you need to provide more information:
• From the Select capacity to configure list, select the wanted total capacity. The bar graph displays the effect of your choice.
• From the Drive Class drop-down menu, select the DDM type that you want to use for the new array.
• From the DA Pair Usage drop-down menu, select the appropriate action. The Spread Among All Pairs option balances ranks evenly across all available Device Adapter (DA) pairs. The Sequentially Fill All Pairs option assigns arrays to the first DA pair, then to the second DA pair, and so on.
– Number of Extent Pools: Here you choose the number of Extent Pools to create. There are three available options: Two Extent Pools (ease of management), Single Extent Pool, and Extent Pool for each rank (physical isolation). The default configuration creates two Extent Pools per storage type, dividing all ranks equally among each pool. If you have the FB and CKD storage types, or have different types of DDMs installed, you might want to create more Extent Pools accordingly.
– Nickname Prefix and Suffix: Provides a unique name for each Extent Pool. This setup is useful if you have multiple Extent Pools, each assigned to separate hosts and platforms.
– Server assignment: The Automatic option allows the system to determine the best server for each Extent Pool. It is the only choice when you select the Two Extent Pool option as the number of Extent Pools. For example, no more than half of the ranks that are attached to a DA pair are assigned to each server so that each server’s DA within the DA pair has the same number of ranks.
– Storage Threshold: Specifies the percentage at which the DS8000 generates a storage threshold alert. By using this option, you can make any adjustments before a full storage condition occurs.
– Storage reserved: Specifies the percentage of the total Extent Pool capacity that is reserved. This percentage is prevented from being allocated to volumes or space-efficient storage.
To create all of the required Extent Pools in one task, select Add Another Pool as many times as required. Click OK to continue.

4. The Create Extent Pool verification window opens, as shown in Figure 13-39. Here you can check the names of the Extent Pools that are going to be created, their capacity, RAID protection, server assignments, and other information. If you want to add capacity to the Extent Pools or add another Extent Pool, select the appropriate action from the Action drop-down list. After you are satisfied with the specified values, click Create all to create the Extent Pools.

Figure 13-39 Create Extent Pool verification window

5. The Creating Extent Pools window opens. Click View Details to check the overall progress. The Task Properties window opens, as shown in Figure 13-40.

Figure 13-40 Creating extent pools: Task Properties window

6. After the task is completed, return to the Internal Storage window (under the Extent Pools tab) and check the list of newly created ranks. The bar graph in the summary section is changed. There are ranks that are assigned to Extent Pools and you can create new volumes from each Extent Pool.

7. The options that are available from the Action drop-down menu are shown in Figure 13-41. To check the Extent Pool properties, select the Extent Pool and click Properties from the Action drop-down menu.

Figure 13-41 Extent pool action Properties

8. The Single Pool properties window opens, as shown in Figure 13-42. Basic Extent Pool information and volume relocation-related information is provided here. If necessary, you can change the Extent Pool Name, Storage Threshold, and Storage Reserved values. Select Apply to commit all of the changes.

Figure 13-42 Single Pool Properties: General tab

9. To see more information about the DDMs, select the Extent Pool from the Manage Internal Storage table and, from the Action drop-down menu, click DDM Properties. The DDM Property window opens, as shown in Figure 13-43. Use the DDM Properties window to view all of the DDMs that are associated with the selected Extent Pool and to determine the state of each DDM. For more information about drive types or ranks that are included in the Extent Pool, click the appropriate tab. You can print the table, download it in .csv file format, and modify the table view by selecting the appropriate icon at the top of the table.

Figure 13-43 Extent Pool: DDM Properties

10. Click OK to return to the Internal Storage window.
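With the DS CLI, an Extent Pool is created with mkextpool and ranks are assigned to it with chrank, as sketched below with assumed IDs; the system gives even-numbered pool IDs to server 0 (rank group 0) and odd-numbered pool IDs to server 1 (rank group 1):

mkextpool -rankgrp 0 -stgtype fb FB_SSD_0   # pool on server 0; the assigned ID (for example, P0) is returned
mkextpool -rankgrp 1 -stgtype fb FB_SSD_1   # companion pool on server 1
chrank -extpool P0 R0                       # assign rank R0 to pool P0
chrank -extpool P1 R1                       # assign rank R1 to pool P1
lsextpool                                   # verify capacity and server assignment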

9. To see more information about the DDMs, select the Extent Pool from the Manage Internal Storage table and, from the Action drop-down menu, click DDM Properties. The DDM Property window opens, as shown in Figure 13-43.

Figure 13-43 Extent Pool: DDM Properties

Use the DDM Properties window to view all of the DDMs that are associated with the selected Extent Pool and to determine the state of each DDM. You can print the table, download it in .csv file format, and modify the table view by selecting the appropriate icon at the top of the table.

10. Click OK to return to the Internal Storage window.
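Tip: Extent Pools can also be created from the DS CLI, which is described in Chapter 14, "Configuration with the DS Command-Line Interface" on page 385. The following minimal sketch is illustrative only; the pool names, pool ID, and rank ID are assumptions for this example:

dscli> mkextpool -rankgrp 0 -stgtype fb FB_Pool_0
dscli> mkextpool -rankgrp 1 -stgtype fb FB_Pool_1
dscli> chrank -extpool P0 R0

The first two commands create one FB pool for each server (rank group 0 and rank group 1); chrank then assigns a rank to a pool.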

13.4.5 Configuring I/O ports

Before you can assign host attachments to I/O ports, you must define the format of the I/O ports. There are four or eight FCP/FICON ports on each card (depending on the model). Complete the following steps to independently configure each port:

1. Point to the Home icon and select System Status. The System Status window opens.

2. Select the storage image for which you want to configure the ports and, from the Action drop-down menu, select Storage Image → Configure I/O Ports, as shown in Figure 13-44.

Figure 13-44 System Status window: Configure I/O ports

3. The Configure I/O Port window opens, as shown in Figure 13-45. Here, you select the ports that you want to format and then click the wanted port format (FcSf, FC-AL, or FICON) from the Action drop-down menu. Multiple port selection is supported. You receive a warning message that the ports might become unusable by the hosts that are currently connected to them. You can repeat this step to format all ports to their required function.

Figure 13-45 Select I/O port format

4. A default FICON host definition is automatically created after you define an I/O port to be a FICON port. If you want to modify the I/O port configuration that was previously defined, click Configure I/O ports.
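Tip: The port format can also be set with the DS CLI setioport command. A minimal sketch follows; the port IDs are illustrative, and the topology keywords should be verified against your code level:

dscli> setioport -topology ficon I0001
dscli> setioport -topology scsi-fcp I0002
dscli> lsioport

Here, scsi-fcp corresponds to the FcSf (switched fabric) format, fc-al to FC-AL, and ficon to FICON; lsioport verifies the result.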

13.4.6 Configuring logical host systems

In this section, we show you how to configure host systems. This process applies only for open systems hosts. Complete the following steps to create a host system:

1. Point to Hosts and click Hosts. The Host connections summary opens, as shown in Figure 13-46.

Figure 13-46 Host connections summary

Under the Tasks section, there are shortcut links for various actions. If you have more than one storage image, you must select the correct one and then select Create new host connection in the Tasks section to create a host.

Important: You can use the View host port login status link to query the hosts that are logged in to the system. You also can use this window to debug host access and switch configuration issues. In the View Host Port Login status window, the list of logged in host ports includes all of the host ports that the storage unit detects. It does not take into account changes that the storage unit could not detect. For example, the storage unit cannot detect that a cable was disconnected from the port of the host device or that a fabric zoning change occurred. In these cases, the storage device might not detect this state and still views the host as logged in. However, the host might not be able to communicate with the storage device.

2. The resulting windows guide you through the host configuration, beginning with the Define Host Ports window, as shown in Figure 13-47. In our example, we create a Linux host.

Figure 13-47 Define Host Ports window

In the General host information window, enter the following information:
a. Host Nickname: Name of the host.
b. Port Type: You must specify whether the host is attached over an FC Switch fabric (P-P) or direct FC arbitrated loop to the DS8000.
c. Host Type: The drop-down menu gives you a list of host types from which to select.
d. Host WWPN: Enter the WWPN numbers manually or select the WWPN from the drop-down menu and click Add.

After the host entry is added into the table, you can manually add a description of each host. After you enter the necessary information, click Next.

3. The Map Host Ports to a Volume Group window opens, as shown in Figure 13-48 on page 358. In this window, you can choose the following options:
– Select Map at a later time to create a host connection without mapping host ports to a volume group.
– Select Map to a new volume group to create a volume group to use in this host connection.
– Select Map to an existing volume group to map to a volume group that is already defined. Choose an existing volume group from the menu. Only volume groups that are compatible with the host type that you selected from the previous window are displayed.

Click Next after you select the appropriate option.

Figure 13-48 Map Host Ports to a Volume Group window

4. The Define I/O Ports window opens, as shown in Figure 13-49.

Figure 13-49 Define I/O Ports window

From the Define I/O ports window, you can choose to automatically assign your I/O ports or manually select them from the table. Defining I/O ports determines which I/O ports can be used by the host ports in this host connection. If specific I/O ports are chosen, the host ports access only the volume group on those specific I/O ports. After the I/O ports are defined, select Next.

5. The Verification window opens, as shown in Figure 13-50, in which you can approve your choices before you commit them. Check the information that you entered. If you want to make modifications, select Back, or cancel the process. After you verify the information, click Finish to create the host system.

Figure 13-50 Verification window

This action takes you to the Manage Host table where you can see the list of all host connections that were created. If you need to change a host system definition, select your host in the Manage Host table and choose the appropriate action from the drop-down menu, as shown in Figure 13-51.

Figure 13-51 Modify host connections
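Tip: A host connection can also be defined with the DS CLI mkhostconnect command. A minimal sketch follows; the WWPN, host type, volume group ID, and connection name are illustrative assumptions:

dscli> mkhostconnect -wwname 10000000C9123456 -hosttype LinuxRHEL -volgrp V0 Linux_Host_1
dscli> lshostconnect

Use lshostconnect to confirm the new connection and its volume group mapping.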

13.4.7 Creating fixed block volumes

Complete the following steps to create fixed block (FB) volumes:

1. Point to Volumes and select FB Volumes. The FB Volumes Summary window opens, as shown in Figure 13-52.

Figure 13-52 FB Volumes summary window

2. If you have more than one storage image, you must select the appropriate image. In the Tasks window at the bottom of the window, click Create new volumes. The Create Volumes window opens, as shown in Figure 13-53.

Figure 13-53 Create Volumes: Select Extent Pools

The table in the Create Volumes window contains all the Extent Pools that were previously created for the FB storage type. To ensure a balanced configuration, select Extent Pools in pairs (one from each server). If you select multiple pools, the new volumes are assigned to the pools based on the assignment option that you select on this window. Click Next to continue.

3. The Define Volume Characteristics window opens, as shown in Figure 13-54.

Figure 13-54 Add Volumes: Define Volume Characteristics

To create a fixed block volume, provide the following information:
– Volume type: Specifies the units for the size parameter.
– Size: The size of the volume in the units you specified.
– Volume quantity: The number of volumes to create.
– Storage allocation method: This setting gives you the option to create a standard volume or a space efficient volume. For more information about space efficient volumes, see “Space Efficient volumes” on page 118.
– Extent allocation method: Defines how volume extents are allocated on the ranks in the Extent Pool. This field is not applicable for TSE volumes. The following options are available:

• Rotate extents: The extents of a volume are allocated on all ranks in the Extent Pool in a round-robin fashion. This function is called Storage Pool Striping. This allocation method can improve performance because the volume is allocated on multiple ranks. It also helps to avoid hotspots by spreading the workload more evenly on the ranks. This method is the default allocation method.
• Rotate volumes: All extents of a volume are allocated on the rank that contains the most free extents. If the volume does not fit on any one rank, it can span multiple ranks in the Extent Pool.
– Performance group: You set the priority level of your volume’s I/O operations. For more information, see IBM System Storage DS8000 Performance I/O Priority Manager, REDP-4760.
– Resource Group: If you plan to use Resource Groups (which means that only certain operators manage copy services for these volumes), you can specify a Resource Group for the volumes you are going to create. A Resource Group other than PUBLIC (the default) must be defined first. To define a Resource Group, mouse over Access and select Resource Groups. Here you can define a Resource Group name.

Optionally, you can provide a Nickname prefix, a Nickname suffix, and one or more volume groups (if you want to add this new volume to a previously created volume group). When your selections are complete, click OK to continue.

4. The Create Volumes window opens, as shown in Figure 13-55. If you need to make any other modifications to the volumes in the table, select the volumes that you are about to modify and select the appropriate Action from the Action drop-down menu. Click Add Another if you want to create more volumes with different characteristics. Otherwise, click Next to continue.

Figure 13-55 Create Volumes window

5. The next window opens, as shown in Figure 13-56, where you select one or more LSSs to assign volume addresses. You can choose Automatic, Manual (Group), or Manual (Fill). If you choose Automatic, the system assigns the volume addresses for you. If you choose one of the manual assignment methods, you need to select an LSS for all created volumes. In our example, we select the Automatic assignment method. Scroll down to view the information for more servers. Click Finish to continue.

Figure 13-56 Select LSS

6. The Create Volumes Verification window that is shown in Figure 13-57 opens, which lists all of the volumes that are going to be created. If you want to add more volumes or modify the existing volumes, you select the appropriate action from the Action drop-down list. After you are satisfied with the specified values, click Create all to create the volumes.

Figure 13-57 Create Volumes Verification window

7. The Creating Volumes information window opens. Depending on the number of volumes, the process can take some time to complete. Optionally, click View Details to check the overall progress.

8. After the creation is complete, a final window opens. You can select View Details or Close. If you click Close, you return to the main FB Volumes window. The bar graph in the Open Systems - Storage Summary section is changed. From there, you can now select other actions, such as Manage existing volumes. The Manage Volumes window is shown in Figure 13-58. If you need to change a volume, select a volume and click the appropriate action from the Action drop-down menu.

Figure 13-58 FB Volumes: Manage Volumes
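Tip: FB volumes can also be created with the DS CLI mkfbvol command. A minimal sketch follows; the pool ID, capacity, nickname, and volume IDs are illustrative assumptions:

dscli> mkfbvol -extpool P0 -cap 16 -name ITSO_#h 1000-1003

This command creates four 16 GB volumes with IDs 1000 - 1003; the #h placeholder in the nickname is replaced by the volume ID of each volume.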

13.4.8 Creating volume groups

Complete the following steps to create a volume group:

1. Point to Volumes and select Volume Groups. The Volume Groups window opens.

2. To create a volume group, select Create from the Action drop-down menu, as shown in Figure 13-59.

Figure 13-59 Volume Groups window: Select Create

3. The Define Volume Group Properties window opens, as shown in Figure 13-60. Enter the nickname for the volume group and select the host type from which you want to access the volume group. If you select one host (for example, IBM pSeries®), all other host types with the same addressing method are automatically selected. This selection does not affect the functionality of the volume group; it supports the host type selected.

Figure 13-60 Define Volume Group Properties window

4. Select the volumes to include in the volume group. If you need to select many volumes, you can specify the LSS so that only these volumes display in the list, and then you can select all.

5. Click Next to open the Verification window, as shown in Figure 13-61. In the Verification window, check the information that you entered during the process. If you want to make modifications, select Back, or you can cancel the process. After you verify the information, click Finish to create the volume group.

Figure 13-61 Create New Volume Group Verification window

6. After the creation completes, a Create Volume Group completion window opens in which you can select View Details or Close.

7. After you select Close, you see the new volume group in the Volume Group window.
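Tip: Volume groups can also be created and changed with the DS CLI. A minimal sketch follows; the group name, type, volume IDs, and volume group ID are illustrative assumptions:

dscli> mkvolgrp -type scsimap256 -volume 1000-1003 ITSO_VG
dscli> chvolgrp -action add -volume 1004 V11

The chvolgrp command adds a further volume to an existing volume group (V11 is an assumed volume group ID).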

Creating Volume Group of scsimap256

In Linux 2.4 kernels, Small Computer System Interface (SCSI) devices are discovered by scanning the SCSI bus when the host adapter driver is loaded. A list of devices that were discovered and are recognized by the SCSI subsystem is kept in /proc/scsi/scsi. Use the cat command to display the output of /proc/scsi/scsi to verify that the correct number of LUNs was recognized by the kernel. If there is a gap in the LUN ID sequence, the LUNs after the gap are not discovered (see Figure 13-62).

Figure 13-62 Gaps in the LUN ID

You can edit the column under the LUN ID. You can change the LUN ID field when you create the volume group, or when you add a volume to an existing volume group, as shown in Figure 13-63. If you want to modify the LUN ID of an FB volume that is already in the Volume Group, use Remove Volumes, then use Add Volumes to add the volumes back to the Volume Group and modify the LUN ID to the new LUN ID.

Figure 13-63 Update the LUN ID field
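On the Linux host, the verification can look like the following sketch (the exact output format varies by kernel version and distribution):

# display all SCSI devices that the kernel discovered
cat /proc/scsi/scsi
# count the device entries to verify that the expected number of LUNs is present
grep -c "^Host:" /proc/scsi/scsi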

If the LUN ID column is not displayed in the window, you can enable it by right-clicking the menu bar and selecting the box to the right of LUN ID (see Figure 13-64). The LUN ID field is then shown, and you can edit this column.

Figure 13-64 Enable display of LUN ID column

If you enter a LUN ID that is used in this volume group, an error message is shown (see Figure 13-65).

Figure 13-65 Error message for duplicated LUN ID

There are only 256 LUNs in a scsimap256 volume group (0-255). If you enter a number that is larger than 255, you receive the error message that is shown in Figure 13-66.

Figure 13-66 Error message for number larger than 255

13.4.9 Creating LCUs and CKD volumes

In this section, we show how to create LCUs and CKD volumes. This process is necessary only for IBM System z.

Important: The LCUs you create must match the LCU definitions on the host I/O configuration. More precisely, each LCU ID number you select during this process must correspond to a CNTLUNIT definition in the HCD/IOCP with the same CUADD number. It is vital that the two configurations match each other.

Complete the following steps to create an LCU and CKD volume:

1. Point to Volumes and select CKD LCUs and Volumes. The CKD LCUs and Volumes window opens, as shown in Figure 13-67.

Figure 13-67 CKD LCUs and Volumes window

2. Select a storage image from the Select storage image drop-down menu if you have more than one image. The window is refreshed to show the LCUs in the storage image. To create new LCUs, select Create new LCUs with volumes from the tasks list.

3. The Create LCUs window opens, as shown in Figure 13-68. Select the LCUs you want to create. You can select them from the list that is displayed on the left by clicking the number, or you can use the map. When you use the map, click the available LCU square.

Figure 13-68 Create LCUs window

4. You must enter the following necessary parameters for the selected LCUs:
– Starting SSID: Enter a Subsystem ID (SSID) for the LCU. The SSID is a four-character hexadecimal number. If you create multiple LCUs at once, the SSID number is incremented by one for each LCU. The LCUs that are attached to the same SYSPLEX must have different SSIDs. Use unique SSID numbers across your whole environment.
– LCU type: Select the LCU type that you want to create. Select 3990 Mod 6, unless your operating system does not support Mod 6. The following options are available:
• 3990 Mod 3
• 3990 Mod 3 for TPF
• 3990 Mod 6

The following parameters affect the operation of certain Copy Services functions:
– Concurrent copy session timeout: The time in seconds that any logical device on this LCU in a concurrent copy session stays in a long busy state before a concurrent copy session is suspended.
– z/OS Global Mirror Session timeout: The time in seconds that any logical device in a z/OS Global Mirror session (XRC session) stays in long busy before the XRC session is suspended. The long busy occurs because the data mover has not offloaded data when the logical device (or XRC session) is no longer able to accept more data. With recent enhancements to z/OS Global Mirror, there is now an option to suspend the z/OS Global Mirror session instead of presenting the long busy status to the applications.
– Consistency group timeout: The time in seconds that remote mirror and copy consistency group volumes on this LCU stay extended long busy after an error that causes a consistency group volume to suspend. While in the extended long busy state, I/O is prevented from updating the volume.
– Consistency group timeout enabled: Check the box to enable the remote mirror and copy consistency group timeout option on the LCU.
– Critical mode enabled: Check the box to enable critical heavy mode. Critical heavy mode controls the behavior of the remote copy and mirror pairs that have a primary logical volume on this LCU.
– Resource Group: Optionally, specify a Resource Group other than PUBLIC if you want different groups of people to manage your copy services.

When all of the selections are made, click Next.

5. In the next window (as shown in Figure 13-69), you must configure your base volumes and, optionally, assign alias volumes. The Parallel Access Volume (PAV) license function must be activated to use alias volumes.

Figure 13-69 Create Volumes window

Define the base volume characteristics in the first third of this window with the following information:
– Base type:
• 3380 Mod 2
• 3380 Mod 3
• 3390 Standard Mod 3
• 3390 Standard Mod 9
• 3390 Mod A (used for Extended Address Volumes - EAV)
• 3390 Custom
– Volume size: This field must be changed if you use the volume type 3390 Custom or 3390 Mod A.
– Size format: This format must be changed only if you want to enter a special number of cylinders. It can be used only by the 3390 Custom or 3390 Mod A volume types.
– Volume quantity: Enter the number of volumes you want to create.
– Base start address: The starting address of the volumes you are about to create. Specify a decimal number in the range of 0 - 255. This number defaults to the value that is specified in the Address Allocation Policy definition.
– Order: Select the address allocation order for the base volumes. The volume addresses are allocated sequentially, starting from the base start address in the selected order. If an address is already allocated, the next free address is used.
– Storage allocation method: This field appears only on systems that have the FlashCopy SE function activated. The following options are available:
• Standard: Allocate standard volumes.
• Track Space Efficient (TSE): Allocate Space Efficient volumes to be used as FlashCopy SE target volumes.
– Extent allocation method: Defines how volume extents are allocated on the ranks in the Extent Pool. This field is not applicable for TSE volumes. The following options are available:
• Rotate extents: The extents of a volume are allocated on all ranks in the Extent Pool in a round-robin fashion. This function is called Storage Pool Striping. This allocation method can improve performance because the volume is allocated on multiple ranks. It also helps to avoid hotspots by spreading the workload more evenly on the ranks. This method is the default allocation method.
• Rotate volumes: All extents of a volume are allocated on the rank that contains the most free extents. If the volume does not fit on any one rank, it can span multiple ranks in the Extent Pool.

In the middle section of this window, select Assign the alias volumes to these base volumes if you use PAV or HyperPAV, and provide the following information:
– Alias start address: Enter the first alias address as a decimal number in the range of 0 - 255.
– Order: Select the address allocation order for the alias volumes. The volume addresses are allocated sequentially, starting from the alias start address in the selected order.
– Evenly assign alias volumes among bases: When you select this option, you must enter the number of aliases you want to assign to each base volume.
– Assign aliases by using a ratio of aliases to base volumes: You can assign alias volumes by using a ratio of alias volumes to base volumes. The first value specifies the number of alias volumes that you assign, and the second value selects the base volumes that receive an alias. If you select 1, each base volume receives an alias volume. If you select 2, every second base volume receives an alias volume. If you select 3, every third base volume receives an alias volume. The selection always starts with the first volume.
– Assign all aliases: You can assign all aliases in the LCU to just one base volume if you implemented HyperPAV or Dynamic alias management. With HyperPAV, the alias devices are not permanently assigned to any base volume, even though you initially assign each to a certain base volume; they are in a common pool and are assigned to base volumes as needed on a per-I/O basis. With Dynamic alias management, WLM eventually moves the aliases from the initial base volume to other volumes as needed.

Important: If your host system is using Static alias management, you must assign aliases to all base volumes on this window because the alias assignments that are made here are permanent. To change the assignments later, you must delete and re-create aliases.

In the last section of this window, you can optionally assign nicknames for your volumes:
– Nickname prefix: If you select a nickname suffix of None, you can leave this field blank. If you select a nickname suffix of Volume ID or Custom, you must enter a nickname prefix in this field. Blank fields are not allowed.
– Nickname suffix: You can select None, as described previously. If you select Volume ID, you must enter a four-character volume ID for the suffix. If you select Custom, you must enter a four-digit hexadecimal number or a five-digit decimal number for the suffix.
– Start: If you select Hexadecimal sequence, you must enter a number in this field.

Note: The nickname is not the System z VOLSER of the volume. The VOLSER is created later when the volume is initialized by the ICKDSF INIT command.

Click OK to proceed.

6. The Create Volumes window opens, as shown in Figure 13-70. In this window, you can select the created volumes to modify or delete them. You also can create more volumes if necessary. Select Next if you do not need to create more volumes.

Figure 13-70 Create Volumes window

7. In the next window (as shown in Figure 13-71), you can change the Extent Pool assignment for your LCU. Select Finish if you do not want to make any changes.

Figure 13-71 LCU to Extent Pool Assignment window

8. The Create LCUs Verification window opens, as shown in Figure 13-72. You can see a list of all the volumes that are going to be created. If you want to add more volumes or modify the existing volumes, you can do so by selecting the appropriate action from the Action drop-down list. After you are satisfied with the specified values, click Create all to create the volumes.

Figure 13-72 Create LCUs Verification window

9. The Creating Volumes information window opens. Depending on the number of volumes, the process can take some time to complete. Optionally, click View Details to check the overall progress. 10. After the creation is complete, a final window is shown. You can select View details or Close. If you click Close, you return to the main CKD LCUs and Volumes window, where you see that the bar graph is changed.
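Tip: LCUs, base volumes, and alias volumes can also be created with the DS CLI. A minimal sketch follows; all IDs, the SSID, capacity, and quantities are illustrative assumptions, and the mkaliasvol parameters in particular should be verified against your code level:

dscli> mklcu -qty 2 -id 00 -ss 2300
dscli> mkckdvol -extpool P1 -cap 10017 -name ckd_#h 0000-000F
dscli> mkaliasvol -base 0000-000F -order decrement -qty 1 00FF

The mklcu command creates LCUs 00 and 01 with SSIDs 2300 and 2301, mkckdvol creates sixteen base volumes of 10,017 cylinders (a 3390 Mod 9 size), and mkaliasvol assigns one alias to each base volume, allocating alias addresses downward from 00FF.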


13.4.10 Additional tasks on LCUs and CKD volumes
When you select Manage existing LCUs and Volumes (as shown in Figure 13-73), you can complete other tasks at the LCU or volume level. As shown in Figure 13-73, the following options are available:
– Create: For more information, see 13.4.9, “Creating LCUs and CKD volumes” on page 368.
– Clone LCU: All properties from the selected LCU are cloned. For more information, see 13.4.9, “Creating LCUs and CKD volumes” on page 368.
– Add Volumes: You can add base volumes to the selected LCU. For more information, see 13.4.9, “Creating LCUs and CKD volumes” on page 368.
– Add Aliases: You can add alias volumes without creating more base volumes.
– Properties: Shows additional properties. You also can change some of the properties, such as the timeout value.
– Delete: You can delete the selected LCU. This action must be confirmed because you also delete all of its volumes, which can contain data.
– Migrate: You can migrate volumes from one extent pool to another. For more information about migrating volumes, see IBM System Storage DS8000 Easy Tier, REDP-4667.

Figure 13-73 Manage LCUs and Volumes window


The next window (as shown in Figure 13-74) shows that you can take the following actions at the volume level after you select an LCU:
– Increase capacity: You can increase the size of a 3390-type volume. The capacity of a 3380 volume cannot be increased. After the operation completes, you can use ICKDSF to refresh the volume VTOC to reflect the additional cylinders.

Important: The capacity of a volume cannot be decreased.

– Add Aliases: You define more aliases without creating base volumes.
– Properties: Here you can view the volume properties. The only value that you can change is the nickname. You can also see whether the volume is online from the DS8000 side.
– Delete: Here you can delete the selected volume. This action must be confirmed because you also delete all alias volumes and data on this volume. The volume must be offline to any host, or you must select the Force option.
– Migrate: You migrate volumes from one extent pool to another. For more information about migrating volumes, see IBM System Storage DS8000 Easy Tier, REDP-4667.

Figure 13-74 Manage CKD Volumes

Important: After the volumes are initialized by using the ICKDSF INIT command, you also see the Volume Serial Numbers (VOLSERs) in this window. This initialization is not done in this example.

The Increase capacity action can be used to dynamically expand volume capacity without the need to bring the volume offline in z/OS. A good practice is to start with 3390 Mod A volumes; you can also expand the capacity and change the device type of your existing 3390 Mod 3, 3390 Mod 9, and 3390 Custom volumes. Keep in mind that 3390 Mod A volumes can be used only on z/OS V1.10 or later. After the capacity is increased on the DS8000, run ICKDSF to refresh the VTOC Index to be sure that the new volume size is fully recognized.
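Tip: The same dynamic expansion can be performed with the DS CLI chckdvol command. A minimal sketch; the volume ID and the new cylinder count are illustrative assumptions:

dscli> chckdvol -cap 30051 0B00

This command grows volume 0B00 to 30,051 cylinders (a 3390 Mod 27 size). Refresh the VTOC Index with ICKDSF afterward, as described previously.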


13.5 Other DS GUI functions
In this section, we describe other DS GUI functions.

13.5.1 Easy Tier
To enable Easy Tier, go to System Status, highlight the DS8000 storage image, then select Action → Storage Image → Properties. In the Advanced tab, it is possible to enable Easy Tier, as shown in Figure 13-75.

Figure 13-75 Storage Image Properties: Set up for EasyTier and IO Priority Manager

Easy Tier Auto Mode manages the Easy Tier Automatic Mode behavior. You can select the following options:
– All Pools: Automatically manage all single-tier and multitier pools.
– Tiered Pools: Automatically manage multitier pools only.
– No Pools: No volume is managed.
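Tip: On recent code levels, the same behavior can be set from the DS CLI through Easy Tier parameters of the chsi command. The following sketch is an assumption to verify against your DS CLI version; the storage image ID is illustrative:

dscli> chsi -etautomode tiered IBM.2107-75XY121

This setting would restrict automatic management to multitier pools, which matches the Tiered Pools option in the GUI.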


To retrieve information about the performance of Easy Tier, go to System Status, select the DS8000, then select Action → Storage Image → Export Easy Tier Summary Report, as shown in Figure 13-76.

Figure 13-76 Easy Tier Summary Report

The data that is collected must be analyzed by using the STAT tool that corresponds to the machine code level. For more information about the use of this tool, see IBM System Storage DS8000 Easy Tier, REDP-4667.


13.5.2 I/O Priority Manager
To enable I/O Priority Manager, go to System Status, select the DS8000, then select Action → Storage Image → Properties. In the Advanced tab, click Manage (as shown in Figure 13-75 on page 377). The I/O Priority Manager license must be activated for I/O Priority Management. By enabling SNMP Traps, hosts can be informed when a rank is going into saturation and, if management is allowed, the rank can be managed by the host directly.

The priority of all volumes must be manually selected for each volume: select Volumes, then select the volume typology (FB or CKD). Click Manage existing volumes. Select the volume, then click Properties. Select Performance Group by right-clicking and selecting the appropriate performance group for the selected volume (see Figure 13-77).

Figure 13-77 I/O Priority Manager Performance group selection
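Tip: Performance groups can also be assigned with the DS CLI. A minimal sketch, assuming the -perfgrp parameter of the chfbvol command on this code level; the volume ID and performance group are illustrative:

dscli> chfbvol -perfgrp PG16 1000

For CKD volumes, the chckdvol command provides a corresponding parameter on comparable code levels.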


To retrieve information about the performance of I/O Priority Manager, go to System Status. Select the DS8000, then click Action → Storage Image → I/O Priority Manager, as shown in Figure 13-78.

Figure 13-78 I/O Priority Manager reports

For more information: For more information about I/O Priority Manager and the different options that are available, see 7.6, “I/O Priority Manager” on page 193.

13.5.3 Checking the status of the DS8000
Complete the following steps to display and explore the overall status of your DS8000 system:
1. In the navigation pane in the DS GUI, mouse over Home and select System Status. The System Status window opens.
2. Select your storage complex and, from the Action drop-down menu, select Storage Unit → System Summary, as shown in Figure 13-79.

Figure 13-79 Select Storage Unit System Summary


3. The new Storage Complex window provides general DS8000 system information. As shown in Figure 13-80, the window is divided into the following sections: a. System Summary: You can quickly identify the percentage of capacity that is used, and the available and used capacity for open systems and System z. In addition, you can check the system state and obtain more information by clicking the state link. b. Management Console information. c. Performance: Provides performance graphs for host MBps, host KIOps, rank MBps, and rank KIOps. This information is updated every 60 seconds. d. Racks: Represents the physical configuration.

Figure 13-80 System Summary overview

4. In the Rack section, the number of racks that is shown matches the racks that are physically installed in the storage unit. If you position the mouse pointer over the rack, more rack information is displayed, such as the rack number, the number of DDMs, and the number of host adapters, as shown in Figure 13-81.

Figure 13-81 System Summary: rack information


13.5.4 Exploring the DS8000 hardware
The DS8000 GUI allows you to explore the hardware that is installed in your DS8000 system by locating specific physical and logical resources (arrays, ranks, extent pools, and others). The Hardware Explorer shows system hardware and a mapping between logical configuration objects and DDMs. You can explore the DS8000 hardware components and discover the correlation between logical and physical configuration by completing the following steps:
1. In the navigation pane in the DS GUI, mouse over Home and select System Status.
2. The Storage Complexes Summary window opens. Select your storage complex and, from the Action drop-down menu, select Storage Unit → System Summary.
3. Select the Hardware Explorer tab to switch to the Hardware Explorer window, as shown in Figure 13-82.

Figure 13-82 Hardware Explorer window

4. In this window, you can explore the specific hardware resources that are installed by selecting the appropriate component under the Search racks by resources drop-down menu. In the Rack section of the window, there is a front and rear view of the DS8000 rack. You can interact with the rack image to locate resources. To view a larger image of a specific location (which is displayed in the right pane of the window), use your mouse to move the yellow box to the wanted location across the DS8000 front and rear view.


5. To check where the physical disks of arrays are located, change the search criteria to Array and, from the Available Resources section, click one or more array IDs that you want to explore. After you click the array ID, the location of each DDM is highlighted in the rack image. Each disk includes an appropriate array ID label. Use your mouse to move the yellow box in the rack image on the left to the wanted location across the DS8000 front and rear view to view the magnified view of this section, as shown in Figure 13-83.

Figure 13-83 View arrays

6. After you identify the location of the array DDMs, you can position the mouse pointer over a specific DDM to display more information, as shown in Figure 13-84.

Figure 13-84 DDM information


7. Change the search criteria to Extent Pool to discover more about each extent pool location. Select as many extent pools as you need in the Available Resources section and find the physical location of each one, as shown in Figure 13-85.

Figure 13-85 View Extent Pools

8. Another useful function in the Hardware Explorer GUI section is identifying the physical location of each FCP or FICON port. Change the search criteria to I/O Ports and select one or more ports in the Available Resources section. Use your mouse to move the yellow box in the rack image to the rear DS8000 view (bottom pane), where the I/O ports are located, as shown in Figure 13-86.

Figure 13-86 View I/O ports

Click the highlighted port to discover its basic properties and status.


Chapter 14. Configuration with the DS Command-Line Interface

In this chapter, we describe how to configure storage on the IBM System Storage DS8000 storage subsystem by using the DS Command-Line Interface (DS CLI). This chapter covers the following topics:
– DS Command-Line Interface overview
– Configuring the I/O ports
– Configuring the DS8000 storage for FB volumes
– Configuring DS8000 storage for CKD volumes
– Metrics with DS CLI

For more information about Copy Services configuration in the DS8000 by using the DS CLI, see the following publications:
– IBM System Storage DS: Command-Line Interface User's Guide, GC53-1127-06
– IBM System Storage DS8000: Copy Services for Open Systems, SG24-6788
– IBM System Storage DS8000: Copy Services for IBM System z, SG24-6787

For more information about DS CLI commands that are related to disk encryption, see IBM System Storage DS8700 Disk Encryption Implementation and Usage Guidelines, REDP-4500.

For more information about DS CLI commands that are related to LDAP authentication, see IBM System Storage DS8000: LDAP Authentication, REDP-4505.

For more information about DS CLI commands that are related to Resource Groups, see IBM System Storage DS8000 Resource Groups, REDP-4758.

For more information about DS CLI commands that are related to Performance I/O Priority Manager, see IBM System Storage DS8000 Performance I/O Priority Manager, REDP-4760.

For more information about DS CLI commands that are related to Easy Tier, see IBM System Storage DS8000 Easy Tier, REDP-4667.

14.1 DS Command-Line Interface overview

The DS Command-Line Interface (DS CLI) provides a full-function command set with which you check your Storage Unit configuration and perform specific application functions. For more information about DS CLI use and setup, see IBM System Storage DS: Command-Line Interface User's Guide, GC53-1127.

The following list highlights a few of the functions that you can perform with the DS CLI:
– Create user IDs that can be used with the GUI and the DS CLI.
– Manage user ID passwords.
– Install activation keys for licensed features.
– Manage storage complexes and units.
– Configure and manage Storage Facility Images.
– Create and delete RAID arrays, ranks, and Extent Pools.
– Create and delete logical volumes.
– Manage host access to volumes.
– Check the current Copy Services configuration that is used by the Storage Unit.
– Create, modify, or delete Copy Services configuration settings.
– Integrate LDAP policy usage and configuration.
– Implement encryption functionality.

14.1.1 Supported operating systems for the DS CLI

The DS CLI can be installed on many operating system platforms, including AIX, HP-UX, Red Hat Linux, SUSE Linux, IBM i i5/OS, Novell NetWare, Oracle Solaris, HP OpenVMS, VMware ESX, and Microsoft Windows.

Important: For the most recent information about currently supported operating systems, specific pre-installation concerns, and installation file locations, see the IBM System Storage DS8000 Information Center at this website:
http://publib.boulder.ibm.com/infocenter/ds8000ic/index.jsp

Before you can install the DS CLI, make sure that you have at least Java version 1.4.2 or later installed. Many hosts might already have a suitable level of Java installed. The installation program checks for this requirement during the installation process and does not install the DS CLI if you do not have the suitable version of Java. The installation process can be performed through a shell, such as the bash or korn shell, or the Windows command prompt, or through a GUI. If installed by using a shell, it can be done silently by using a profile file. The installation process also installs software that allows the DS CLI to be uninstalled should it no longer be required.

You can have more than one version of DS CLI installed on your system, each in its own directory.

Single installation: In almost all cases, you can use a single installation of the latest version of the DS CLI for all of your system needs. However, it is not possible to test every version of DS CLI with every LMC level, so an occasional problem might occur despite every effort to maintain that level of compatibility. If you suspect a version incompatibility problem, install the DS CLI version that corresponds to the LMC level that is installed on your system.
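Before you start the installation, you can verify the installed Java level from the OS shell, as in this minimal sketch (the reported string varies by Java vendor and platform):

java -version

The reported version must be 1.4.2 or later.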

14.1.2 User accounts

DS CLI communicates with the DS8000 system through the Hardware Management Console (HMC). DS CLI access is authenticated by using HMC user accounts. The same user IDs can be used for DS CLI and DS GUI access. The primary or secondary HMC console can be used.

The pre-configured user ID is admin and the password is admin. The DS8000 forces you to change the password at the first login. For more information about user accounts, see 9.5, "HMC user management" on page 263.

14.1.3 User management by using the DS CLI

Apart from the administration user, you might want to define some other users, maybe with different rights. The following commands are used to manage user IDs by using the DS CLI:

mkuser: A user account that can be used with DS CLI and the DS GUI is created by using this command. In Example 14-1, we create a user called JohnDoe, which is in the op_storage group. The temporary password of the user is passw0rd. The user must use the chpass command when they log in for the first time.

Example 14-1 Using the mkuser command to create a user
dscli> mkuser -pw passw0rd -group op_storage JohnDoe
CMUC00133I mkuser: User JohnDoe successfully created.

rmuser: An existing user ID is removed by using this command. In Example 14-2, we remove a user called JaneSmith.

Example 14-2 Removing a user
dscli> rmuser JaneSmith
CMUC00135W rmuser: Are you sure you want to delete user JaneSmith? [y/n]:y
CMUC00136I rmuser: User JaneSmith successfully deleted.

chuser: By using this command, you can change the password or group (or both) of an existing user ID. It also can be used to unlock a user ID that was locked by exceeding the allowable login retry count. The administrator could also use this command to lock a user ID. In Example 14-3, we unlock the user, change the password, and change the group membership for a user called JohnDoe. The user must use the chpass command the next time they log in.

Example 14-3 Changing a user with chuser
dscli> chuser -unlock -pw time2change -group op_storage JohnDoe
CMUC00134I chuser: User JohnDoe successfully modified.

lsuser: By using this command, a list of all user IDs can be generated. In Example 14-4, we can see three users and the administrator account.

Example 14-4 Using the lsuser command to list users
dscli> lsuser
Name     Group      State
===========================
JohnDoe  op_storage active
secadmin secadmin   active
admin    admin      active

showuser: The account details of a user ID can be displayed by using this command. In Example 14-5, we list the details of the user JohnDoe.

Example 14-5 Using the showuser command to list user information
dscli> showuser JohnDoe
Name         JohnDoe
Group        op_storage
State        active
FailedLogin  0
DaysToExpire 365
Scope        PUBLIC

managepwfile: An encrypted password file that is placed onto the local machine is created or added to by using this command. This file can be referred to in a DS CLI profile. You can run scripts without specifying a DS CLI user password in clear text. If you are manually starting DS CLI, you also can refer to a password file with the -pwfile parameter. By default, the file is in the following directories:
Windows: C:\Documents and Settings\<User>\DSCLI\security.dat
Non-Windows: $HOME/dscli/security.dat

In Example 14-6, we manage our password file by adding the user ID JohnDoe. The password is now saved in an encrypted file that is called security.dat.

Example 14-6 Using the managepwfile command
dscli> managepwfile -action add -name JohnDoe -pw passw0rd
CMUC00206I managepwfile: Record 10.0.0.1/JohnDoe successfully added to password file C:\Documents and Settings\Administrator\dscli\security.dat

chpass: By using this command, you can change two password policies: password expiration (days) and failed logins allowed. In Example 14-7, we change the expiration to 365 days and five failed login attempts.

Example 14-7 Changing rules by using the chpass command
dscli> chpass -expire 365 -fail 5
CMUC00195I chpass: Security properties successfully set.

showpass: The properties for passwords (Password Expiration days and Failed Logins Allowed) are listed by using this command. In Example 14-8, we can see that passwords are set to expire in 365 days and that five login attempts are allowed before a user ID is locked.

Example 14-8 Using the showpass command
dscli> showpass
Password Expiration   365 days
Failed Logins Allowed 5
Password Age          0 days
Minimum Length        6
Password History      4

14.1.4 DS CLI profile

To access a DS8000 system with the DS CLI, the IP address or host name of the DS8000 HMC, a user name, and a password are required. You can also provide other information, such as the output format for list commands, the number of rows per page in the command-line output, and whether a banner is included with the command-line output. If you create one or more profiles to contain your preferred settings, you do not have to specify this information each time you use DS CLI. When you start DS CLI, you must specify only a profile name by using the dscli command. You can override the values of the profile by specifying a different parameter value by using the dscli command.

When you install the command-line interface software, a default profile is installed in the profile directory with the software. The file name is dscli.profile, for example, c:\Program Files\IBM\dscli\profile\dscli.profile for the Windows XP platform, c:\Program Files (x86)\IBM\dscli for Windows 7, and /opt/ibm/dscli/profile/dscli.profile for UNIX and Linux platforms.

You have the following options for using profile files:
– You can modify the system default profile: dscli.profile.
– You can create a personal default profile by copying the system default profile as <user_home>/dscli/profile/dscli.profile. The default home directory <user_home> is designated in the following directories:
• Windows system: %USERPROFILE%, usually C:\Documents and Settings\Administrator
• UNIX/Linux system: $HOME
– You can create specific profiles for different Storage Units and operations. Save the profile in the user profile directory. For example:
• %USERPROFILE%\IBM\DSCLI\profile\operation_name1
• %USERPROFILE%\IBM\DSCLI\profile\operation_name2

It is a good practice to open the default profile and then save it as a new file. You can then create multiple profiles and reference the relevant profile file by using the -cfg parameter.

Default profile file: The default profile file that you created when you installed the DS CLI might be replaced every time you install a new version of the DS CLI.

These profile files can be specified by using the DS CLI command parameter -cfg <profile_name>. If the -cfg file is not specified, the personal default profile of the user is used. If a personal profile does not exist, the system default profile is used.

Two default profiles: If there are two default profiles called dscli.profile, one in the default system's directory and one in your personal directory, your personal profile is loaded.

Profile change illustration

Complete the following steps to edit the profile:
1. From the Windows desktop, double-click the DS CLI icon.
2. In the command window that opens, enter the command cd profile.
3. In the profile directory, enter the command notepad dscli.profile, as shown in Example 14-9.

Example 14-9 Command prompt operation
C:\Program Files\ibm\dscli>cd profile
C:\Program Files\IBM\dscli\profile>notepad dscli.profile

4. The notepad opens and includes the DS CLI profile. There are four lines that you can consider adding. Examples of these lines are shown in bold in Example 14-10.

Default newline delimiter: The default newline delimiter is a UNIX delimiter, which can render text in notepad as one long line. Use a text editor that correctly interprets UNIX line endings.

Example 14-10 DS CLI profile example
# DS CLI Profile
#
# Management Console/Node IP Address(es)
# hmc1 and hmc2 are equivalent to -hmc1 and -hmc2 command options.
#hmc1:127.0.0.1
#hmc2:127.0.0.1

# Default target Storage Image ID
# "devid" and "remotedevid" are equivalent to
# "-dev storage_image_ID" and "-remotedev storage_image_ID" command options.
#devid: IBM.2107-AZ12341
#remotedevid:IBM.2107-AZ12341

devid: IBM.2107-75ABCD1
hmc1: 10.0.0.250
username: admin
password: passw0rd

Adding the serial number by using the devid parameter, and the HMC IP address by using the hmc1 parameter, is suggested. Not only does this addition help you to avoid mistakes when you are using more profiles, but you do not need to specify these parameters for certain dscli commands that require them. Also, if you specify a dscli profile for Copy Services usage, the use of the remotedevid parameter is suggested for the same reasons. To determine the ID of a storage system, use the lssi CLI command.

Although adding the username and password parameters simplifies the DS CLI startup, it is not suggested that you add them because they are an undocumented feature that might not be supported in the future. Additionally, the password is saved in clear text in the profile file. Instead, it is better to create an encrypted password file with the managepwfile CLI command. A password file that is generated by using the managepwfile command is in the user_home_directory/dscli/profile/security/security.dat directory.

Important: Use care if you are adding multiple devid and HMC entries. Only one entry should be uncommented (or more literally, unhashed) at any one time. If you have multiple hmc1 or devid entries, the DS CLI uses the entry that is closest to the bottom of the profile.

The following customization parameters also affect dscli output:
– banner: Date and time with the dscli version is printed for each command.
– header: Column names are printed.
– paging: For interactive mode, this parameter breaks output after a certain number of rows (24 by default). The number of rows can be specified in the profile file.
– format: The output format (specified as default, xml, delim, or stanza).

14.1.5 Configuring DS CLI to use a second HMC

The second HMC can be specified on the command line or in the profile file that is used by the DS CLI. To specify the second HMC in a command, use the -hmc2 parameter, as shown in Example 14-11.

Example 14-11 Using the -hmc2 parameter
C:\Program Files\IBM\dscli>dscli -hmc1 10.0.0.1 -hmc2 10.0.0.5
Enter your username: JohnDoe
Enter your password:
IBM.2107-75ZA571
dscli>

Alternatively, you can modify the following lines in the dscli.profile (or any profile) file:
# Management Console/Node IP Address(es)
# hmc1 and hmc2 are equivalent to -hmc1 and -hmc2 command options.
hmc1:10.0.0.1
hmc2:10.0.0.5

After these changes are made, save the profile. By using this change, the DS CLI automatically communicates through HMC2 if HMC1 becomes unreachable, and you can perform configuration and Copy Services commands with full redundancy.

Two HMCs: If you have two HMCs and you specify only one of them in a DS CLI command (or profile), any changes that you make to users are still replicated onto the other HMC.

14.1.6 Command structure

Here we describe the components and structure of a Command-Line Interface command. A Command-Line Interface command consists of one to four types of components that are arranged in the following order:
1. The command name: Specifies the task that the Command-Line Interface is to perform.
2. Flags: Modifies the command. They provide more information that directs the Command-Line Interface to perform the command task in a specific way.
3. Flags parameter: Provides information that is required to implement the command modification that is specified by a flag.
4. Command parameters: Provides basic information that is necessary to perform the command task. When a command parameter is required, it is always the last component of the command, and it is not preceded by a flag.

14.1.7 Using the DS CLI application

To issue commands to the DS8000, you must first log in to the DS8000 through the DS CLI with one of the following command modes of execution:
– Single-shot command mode
– Interactive command mode
– Script command mode

Single-shot command mode

Use the DS CLI single-shot command mode if you want to issue an occasional command from the OS shell prompt where you need special handling, such as redirecting the DS CLI output to a file. You also use this mode if you are embedding the command into an OS shell script.

Complete the following steps to use the single-shot mode:
1. At the OS shell prompt, enter the following command:
dscli -hmc1 <hostname or ip address> -user <adm user> -passwd <pwd> <command>
or
dscli -cfg <dscli profile> -pwfile <security file> <command>

Important: It is not recommended to embed the username and password into the profile. The -pwfile parameter should be used.

2. Wait for the command to process and display the results.

Example 14-12 shows the use of the single-shot command mode.

Example 14-12 Single-shot command mode
C:\Program Files\ibm\dscli>dscli -hmc1 10.10.10.1 -user admin -passwd pwd lsuser
Name  Group State
=====================
admin admin locked
admin admin active
exit status of dscli = 0

Important: When you are typing the command, you can use the host name or the IP address of the HMC. It is important to understand that when a command is executed in single-shot mode, the user must be authenticated. The authentication process can take a considerable amount of time.

Interactive command mode

Use the DS CLI interactive command mode when you want to issue a few infrequent commands without having to log on to the DS8000 for each command. The interactive command mode provides a history function that makes repeating or checking prior command usage easy to do.

Complete the following steps to use the interactive command mode:
1. Log on to the DS CLI application at the directory where it is installed.
2. Provide the information that is requested by the information prompts. The information prompts might not appear if you provided this information in your profile file. The command prompt switches to a dscli command prompt.
3. Use the DS CLI commands and parameters. You are not required to begin each command with dscli because this prefix is provided by the dscli command prompt.
4. Use the quit or exit command to end interactive mode.

Interactive mode: In interactive mode for long outputs, the message Press Enter To Continue... appears. The number of rows can be specified in the profile file. Optionally, you can turn off the paging feature in the profile file by using the paging:off parameter.

Example 14-13 shows the use of interactive command mode.

Example 14-13 Interactive command mode
# dscli -cfg ds8800.profile
dscli> lsarraysite
arsite DA Pair dkcap (10^9B) State    Array
===========================================
S1     0       450.0         Assigned A0
S2     0       450.0         Assigned A1
S3     0       450.0         Assigned A2
S4     0       450.0         Assigned A3
S5     0       450.0         Assigned A4
S6     0       450.0         Assigned A5
S7     1       146.0         Assigned A6
S8     1       146.0         Assigned A7
S9     1       146.0         Assigned A8
S10    1       146.0         Assigned A9
S11    1       146.0         Assigned A10
S12    1       146.0         Assigned A11
S13    2       600.0         Assigned A12
S14    2       600.0         Assigned A13
S15    2       600.0         Assigned A14
S16    2       600.0         Assigned A15
S17    2       600.0         Assigned A16
S18    2       600.0         Assigned A17
S19    3       146.0         Assigned A18
S20    3       146.0         Assigned A19
S21    3       146.0         Assigned A20
S22    3       146.0         Assigned A21
S23    3       146.0         Assigned A22
S24    3       146.0         Assigned A23
dscli> lssi
Name   ID               Storage Unit     Model WWNN             State  ESSNet
==============================================================================
ATS_04 IBM.2107-75TV181 IBM.2107-75TV180 951   500507630AFFC29F Online Enabled

Important: When you are typing the command, you can use the host name or the IP address of the HMC.

Script command mode

Use the DS CLI script command mode if you want to use a sequence of DS CLI commands. If you want to run a script that contains only DS CLI commands, you can start DS CLI in script mode. The script that DS CLI executes can contain only DS CLI commands. One advantage of using this method is that scripts that are written in this format can be used by the DS CLI on any operating system into which you can install DS CLI. In this case, only a single authentication must occur.

In Example 14-14, we show the contents of a DS CLI script file. The file contains only DS CLI commands, although comments can be placed in the file by using a hash symbol (#). Empty lines are also allowed.

Example 14-14 Example of a DS CLI script file
# Sample ds cli script file
# Comments can appear if hashed
lsarraysite
lsarray
lsrank

In Example 14-15, we start the DS CLI by using the -script parameter and specifying a profile and the name of the script that contains the commands from Example 14-14 on page 394.

Example 14-15 Executing DS CLI with a script file
C:\Program Files\ibm\dscli>dscli -cfg ds8800.profile -script c:\ds8800.script
arsite DA Pair dkcap (10^9B) State Array
===========================================
S1 0 450.0 Assigned A0
S2 0 450.0 Assigned A1
S3 0 450.0 Assigned A2
S4 0 450.0 Assigned A3
CMUC00234I lsarray: No Array found.
CMUC00234I lsrank: No Rank found.

Important: The DS CLI script can contain only DS CLI commands. The use of shell commands results in process failure. You can add comments in the scripts that are prefixed by the hash symbol (#). The hash symbol must be the first non-blank character on the line. Only one authentication process is needed to execute all of the script commands.

For script command mode, you can turn off the banner and header for easier output parsing. Also, you can specify an output format that might be easier to parse by your script.
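As a sketch of parser-friendly output, the following invocation uses the banner, header, and format options that DS CLI provides. The exact option names (-bnr, -hdr, -fmt, -delim) and their values should be verified against the help output of your DS CLI version; the output shown is only what such a run might resemble:

C:\Program Files\ibm\dscli>dscli -cfg ds8800.profile -script c:\ds8800.script -bnr off -hdr off -fmt delim -delim ,
S1,0,450.0,Assigned,A0
S2,0,450.0,Assigned,A1

Comma-delimited rows without the banner and header can be consumed directly by spreadsheet tools or parsing scripts.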

14.1.8 Return codes
When the DS CLI exits, the exit status code is provided. This result is effectively a return code. If DS CLI commands are issued as separate commands (rather than by using script mode), a return code is presented for every command. If a DS CLI command fails (for example, because of a syntax error or the use of an incorrect password), a failure reason and a return code are shown. Standard techniques to collect and analyze return codes can be used. The return codes that are used by the DS CLI are listed in Table 14-1.

Table 14-1 DS CLI exit codes
Return code  Category              Description
0            Success               The command was successfully processed.
2            Syntax error          There was a syntax error in the command.
3            Connection error      There was a connectivity or protocol error.
4            Server error          An error occurred during a function call to the application server.
5            Authentication error  The password or user ID details were incorrect.
6            Application error     An error occurred because of a MetaProvider client application-specific process.
63           Configuration error   The CLI.cfg file was not found or is inaccessible.
64           Configuration error   The javaInstall variable was not provided in the CLI.cfg file.
65           Configuration error   The javaClasspath variable was not provided in the CLI.cfg file.
66           Configuration error   The format of the configuration file was incorrect.

14.1.9 User assistance
The DS CLI is designed to include several forms of user assistance. The main form of user assistance is through the IBM System Storage DS8000 Information Center, which is available at this website:
http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/index.jsp
Look under the Command-line interface tab.

User assistance can also be found within the DS CLI program through the help command. The following examples of usage are included:
- help lists all the available DS CLI commands.
- help -s lists all the DS CLI commands with brief descriptions of each.
- help -l lists all the DS CLI commands with their syntax information.

To obtain information about a specific DS CLI command, enter the command name as a parameter of the help command. The following examples of usage are included:
- help <command name> gives a detailed description of the specified command.
- help -s <command name> gives a brief description of the specified command.
- help -l <command name> gives syntax information about the specified command.
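As a minimal sketch of using these return codes, the following OS shell fragment runs a documented command in single-shot mode and branches on the exit status. The profile name is an illustrative assumption:

# hypothetical UNIX shell fragment
dscli -cfg ds8870.profile lssi > /tmp/lssi.out
rc=$?
if [ $rc -ne 0 ]; then
    echo "DS CLI failed with return code $rc (see Table 14-1)"
fi

On Windows, the equivalent check examines %ERRORLEVEL% after the dscli invocation.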

Example 14-16 shows the output of the help command.

Example 14-16 Displaying a list of all commands in DS CLI by using the help command
# dscli -cfg ds8800.profile help
applydbcheck applykey chaccess chauthpol
chckdvol chextpool chfbvol chhostconnect
chkeymgr chlcu chlss chpass
chrank chresgrp chsession chsestg
chsi chsp chsu chuser
chvolgrp clearvol closeproblem commitflash
commitremoteflash cpauthpol diagsi dscli
echo exit failbackpprc failoverpprc
freezepprc help helpmsg initckdvol
initfbvol lsaccess lsaddressgrp lsarray
lsarraysite lsauthpol lsavailpprcport lsckdvol
lsda lsdbcheck lsddm lsextpool
lsfbvol lsflash lsframe lsgmir
lshba lshostconnect lshosttype lshostvol
lsioport lskey lskeygrp lskeymgr
lslcu lslss lsnetworkport lspe
lsperfgrp lsperfgrprpt lsperfrescrpt lsportprof
lspprc lspprcpath lsproblem lsrank
lsremoteflash lsresgrp lsserver lssession
lssestg lssi lsss lsstgencl
lssu lsuser lsvolgrp lsvolinit
lsvpn manageckdvol managedbcheck managefbvol
managehostconnect managekeygrp managepwfile managereckey
manageresgrp mkaliasvol mkarray mkauthpol
mkckdvol mkesconpprcpath mkextpool mkfbvol
mkflash mkgmir mkhostconnect mkkeygrp
mkkeymgr mklcu mkpe mkpprc
mkpprcpath mkrank mkreckey mkremoteflash
mkresgrp mksession mksestg mkuser
mkvolgrp offloadauditlog offloaddbcheck offloadfile
offloadss pausegmir pausepprc quit
resumegmir resumepprc resyncflash resyncremoteflash
reverseflash revertflash revertremoteflash rmarray
rmauthpol rmckdvol rmextpool rmfbvol
rmflash rmgmir rmhostconnect rmkeygrp
rmkeymgr rmlcu rmpprc rmpprcpath
rmrank rmreckey rmremoteflash rmresgrp
rmsession rmsestg rmuser rmvolgrp
sendpe sendss setauthpol setcontactinfo
setdbcheck setdialhome setenv setflashrevertible
setioport setnetworkport setoutput setplex
setremoteflashrevertible setrmpw setsim setsmtp
setsnmp setvpn showarray showarraysite
showauthpol showckdvol showcontactinfo showenv
showextpool showfbvol showgmir showgmircg
showgmiroos showhostconnect showioport showkeygrp
showlcu showlss shownetworkport showpass
showplex showrank showresgrp showsestg
showsi showsp showsu showuser
showvolgrp testauthpol testcallhome unfreezeflash
unfreezepprc ver who whoami

Man pages
A man page is available for every DS CLI command. Man pages are most commonly seen in UNIX-based operating systems and give information about command capabilities. This information can be displayed by issuing the relevant command followed by the -h, -help, or -? flags.

14.2 Configuring the I/O ports
Set the I/O ports to the wanted topology. In Example 14-17, we list the I/O ports by using the lsioport command. Note that I0000-I0003 are on one card, whereas I0100-I0103 are on another card.

Example 14-17 Listing the I/O ports
dscli> lsioport -dev IBM.2107-7503461
ID WWPN State Type topo portgrp
===============================================================
I0000 500507630300008F Online Fibre Channel-SW SCSI-FCP 0
I0001 500507630300408F Online Fibre Channel-SW SCSI-FCP 0
I0002 500507630300808F Online Fibre Channel-SW SCSI-FCP 0
I0003 500507630300C08F Online Fibre Channel-SW SCSI-FCP 0
I0100 500507630308008F Online Fibre Channel-LW FICON 0
I0101 500507630308408F Online Fibre Channel-LW SCSI-FCP 0
I0102 500507630308808F Online Fibre Channel-LW FICON 0
I0103 500507630308C08F Online Fibre Channel-LW FICON 0

The following possible topologies for each I/O port are available:
- SCSI-FCP: Fibre Channel-switched fabric (also called point-to-point). This port type is also used for mirroring.
- FC-AL: Fibre Channel-arbitrated loop (for direct attachment without a SAN switch).
- FICON: FICON (for System z hosts only).

In Example 14-18, we set two I/O ports to the FICON topology and then check the results.

Example 14-18 Changing topology by using setioport
dscli> setioport -topology ficon I0001
CMUC00011I setioport: I/O Port I0001 successfully configured.
dscli> setioport -topology ficon I0101
CMUC00011I setioport: I/O Port I0101 successfully configured.
dscli> lsioport
ID WWPN State Type topo portgrp
===============================================================
I0000 500507630300008F Online Fibre Channel-SW SCSI-FCP 0
I0001 500507630300408F Online Fibre Channel-SW FICON 0
I0002 500507630300808F Online Fibre Channel-SW SCSI-FCP 0
I0003 500507630300C08F Online Fibre Channel-SW SCSI-FCP 0
I0100 500507630308008F Online Fibre Channel-LW FICON 0
I0101 500507630308408F Online Fibre Channel-LW FICON 0
I0102 500507630308808F Online Fibre Channel-LW FICON 0
I0103 500507630308C08F Online Fibre Channel-LW FICON 0

To monitor the status for each I/O port, see 14.5, "Metrics with DS CLI" on page 426.

14.3 Configuring the DS8000 storage for FB volumes
In this section, we review examples of a typical DS8000 storage configuration when attached to open systems hosts. We perform the DS8000 storage configuration by completing the following steps:
1. Create arrays.
2. Create ranks.
3. Create Extent Pools.
4. Optionally, create repositories for Track Space Efficient volumes.
5. Create volumes.
6. Create volume groups.
7. Create host connections.

14.3.1 Creating arrays
In this step, we create the arrays. Before the arrays are created, it is a best practice to list the array sites. Array sites are groups of eight disks that are predefined in the DS8000, and a DS8000 array site contains eight disk drive modules (DDMs). Use the lsarraysite command to list the array sites, as shown in Example 14-19.

Important: An array for a DS8000 can contain only one array site.

Example 14-19 Listing array sites
dscli> lsarraysite
arsite DA Pair dkcap (10^9B) State Array
=============================================
S1 0 146.0 Unassigned
S2 0 146.0 Unassigned
S3 0 146.0 Unassigned
S4 0 146.0 Unassigned

In Example 14-19, we can see that there are four array sites and that we can therefore create four arrays. We can now issue the mkarray command to create arrays, as shown in Example 14-20. In this example, we used one array site (in the first array, S1) to create a single RAID 5 array. If we wanted to create a RAID 6 array, we must change the -raidtype parameter to 6 (instead of 5). If we wanted to create a RAID 10 array, we must change the -raidtype parameter to 10.

Example 14-20 Creating arrays with mkarray
dscli> mkarray -raidtype 5 -arsite S1
CMUC00004I mkarray: Array A0 successfully created.
dscli> mkarray -raidtype 5 -arsite S2
CMUC00004I mkarray: Array A1 successfully created.
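Following the pattern in Example 14-20, a RAID 6 or RAID 10 array on the remaining array sites would be created as sketched below; the array site assignments are illustrative:

dscli> mkarray -raidtype 6 -arsite S3
dscli> mkarray -raidtype 10 -arsite S4

The resulting array IDs (A2, A3, and so on) are assigned automatically by the system, as shown by the CMUC00004I messages in Example 14-20.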

We can now see which arrays were created by using the lsarray command, as shown in Example 14-21.

Example 14-21 Listing the arrays with lsarray
dscli> lsarray
Array State Data RAIDtype arsite Rank DA Pair DDMcap (10^9B)
=====================================================================
A0 Unassigned Normal 5 (6+P+S) S1 0 146.0
A1 Unassigned Normal 5 (6+P+S) S2 0 146.0

We can see in this example the type of RAID array and the number of disks that are allocated to the array (in this example 6+P+S, which means the usable space of the array is six times the DDM size), the capacity of the DDMs that are used, and which array sites were used to create the arrays.

14.3.2 Creating ranks
After we create all of the required arrays, we then create the ranks by using the mkrank command. The format of the command is mkrank -array Ax -stgtype xxx, where xxx is fixed block (FB) or count key data (CKD), depending on whether you are configuring for open systems or System z hosts.

After all of the ranks are created, the lsrank command is run. This command displays all of the ranks that were created, to which server the rank is attached (to none, in our example up to now), the RAID type, and the format of the rank, whether it is FB or CKD. Example 14-22 shows the mkrank commands and the result of a successful lsrank -l command.

Example 14-22 Creating and listing ranks with mkrank and lsrank
dscli> mkrank -array A0 -stgtype fb
CMUC00007I mkrank: Rank R0 successfully created.
dscli> mkrank -array A1 -stgtype fb
CMUC00007I mkrank: Rank R1 successfully created.
dscli> lsrank -l
ID Group State datastate Array RAIDtype extpoolID extpoolnam stgtype exts usedexts
=======================================================================================
R0 - Unassigned Normal A0 5 - - fb 773 -
R1 - Unassigned Normal A1 5 - - fb 773 -

14.3.3 Creating Extent Pools
The next step is to create Extent Pools. Remember the following points when you are creating the Extent Pools:
- Each Extent Pool includes an associated rank group that is specified by the -rankgrp parameter, which defines the Extent Pool's server affinity (0 or 1, for server0 or server1).
- The Extent Pool type is FB or CKD and is specified by the -stgtype parameter.
- The number of Extent Pools can range from one to as many as there are existing ranks. However, to associate ranks with both servers, you need at least two Extent Pools.

For easier management, create Extent Pools that are related to the type of storage that is in the pool. For example, create an Extent Pool for high capacity disk, create another for high performance, and, if needed, Extent Pools for the CKD environment.

Example 14-23 shows one example of Extent Pools that you could define on your machine. This setup requires a system with at least six ranks.

Example 14-23 An Extent Pool layout plan
FB Extent Pool high capacity 300gb disks assigned to server 0 (FB_LOW_0)
FB Extent Pool high capacity 300gb disks assigned to server 1 (FB_LOW_1)
FB Extent Pool high performance 146gb disks assigned to server 0 (FB_High_0)
FB Extent Pool high performance 146gb disks assigned to server 1 (FB_High_1)
CKD Extent Pool high performance 146gb disks assigned to server 0 (CKD_High_0)
CKD Extent Pool high performance 146gb disks assigned to server 1 (CKD_High_1)

The mkextpool command forces you to name the Extent Pools. In Example 14-24, we first create empty Extent Pools by using the mkextpool command. We then list the Extent Pools to get their IDs. Then, we attach a rank to an empty Extent Pool by using the chrank command. Finally, we list the Extent Pools again by using lsextpool and note the change in the capacity of the Extent Pool.

Example 14-24 Creating Extent Pool by using mkextpool, lsextpool, and chrank
dscli> mkextpool -rankgrp 0 -stgtype fb FB_high_0
CMUC00000I mkextpool: Extent Pool P0 successfully created.
dscli> mkextpool -rankgrp 1 -stgtype fb FB_high_1
CMUC00000I mkextpool: Extent Pool P1 successfully created.
dscli> lsextpool
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
===========================================================================================
FB_high_0 P0 fb 0 below 0 0 0 0 0
FB_high_1 P1 fb 1 below 0 0 0 0 0
dscli> chrank -extpool P0 R0
CMUC00008I chrank: Rank R0 successfully modified.
dscli> chrank -extpool P1 R1
CMUC00008I chrank: Rank R1 successfully modified.
dscli> lsextpool
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
===========================================================================================
FB_high_0 P0 fb 0 below 773 0 773 0 0
FB_high_1 P1 fb 1 below 773 0 773 0 0

When an Extent Pool is created, the system automatically assigns it an Extent Pool ID, which is a decimal number that starts from 0, preceded by the letter P. Extent Pools that are associated with rank group 0 receive an even ID number. Extent Pools that are associated with rank group 1 receive an odd ID number. The Extent Pool ID is used when referring to the Extent Pool in subsequent CLI commands; therefore, it is good practice to make note of the ID. The ID that was assigned to an Extent Pool is shown in the CMUC00000I message, which is displayed in response to a successful mkextpool command.

After a rank is assigned to an Extent Pool, we should be able to see this change when we display the ranks. In Example 14-25, we can see that rank R0 is assigned to extpool P0.

Example 14-25 Displaying the ranks after a rank is assigned to an Extent Pool
dscli> lsrank -l
ID Group State datastate Array RAIDtype extpoolID extpoolnam stgtype exts usedexts
===================================================================================
R0 0 Normal Normal A0 5 P0 FB_high_0 fb 773 0
R1 1 Normal Normal A1 5 P1 FB_high_1 fb 773 0

Creating a repository for Track Space Efficient volumes
If the DS8000 includes the IBM FlashCopy SE feature, you can create Track Space Efficient (TSE) volumes that can be used as FlashCopy targets. Before you can create TSE volumes, you must create a space efficient repository in the Extent Pool. The repository provides space to store the data that is associated with TSE logical volumes. Only one repository is allowed per Extent Pool. A repository has a physical capacity that is available for storage allocations by TSE volumes and a virtual capacity that is the sum of logical unit number (LUN) and volume sizes of all space efficient volumes. The physical repository capacity is allocated when the repository is created. If there are several ranks in the Extent Pool, the repository's extents are striped across the ranks (Storage Pool Striping).

Example 14-26 shows the creation of a repository. The unit type of the repository capacity (-repcap) and virtual capacity (-vircap) sizes can be specified with the -captype parameter. For FB Extent Pools, the unit type can be GB (default) or blocks.

Example 14-26 Creating a repository for Space Efficient volumes
dscli> mksestg -repcap 100 -vircap 200 -extpool p9
CMUC00342I mksestg: The space-efficient storage for the Extent Pool P9 has been created successfully.

You can obtain information about the repository with the showsestg command. Example 14-27 shows the output of the showsestg command. You might be interested in how much capacity is used within the repository by checking the repcapalloc value.

Example 14-27 Getting information about a Space Efficient repository
dscli> showsestg p9
extpool P9
stgtype fb
datastate Normal
configstate Normal
repcapstatus below
%repcapthreshold 0
repcap(GiB) 100.0
repcap(Mod1) -
repcap(blocks) 209715200
repcap(cyl) -
repcapalloc(GiB/Mod1) 0.0
%repcapalloc 0
vircap(GiB) 200.0
vircap(Mod1) -
vircap(blocks) 419430400
vircap(cyl) -
vircapalloc(GiB/Mod1) 0.0
%vircapalloc 0
overhead(GiB/Mod1) 3.0
reqrepcap(GiB/Mod1) 100.0
reqvircap(GiB/Mod1) 200.0

More storage is allocated for the repository in addition to the repcap size. In Example 14-27, the line that starts with overhead indicates that 3 GB is allocated in addition to the repcap size.

Use the lssestg command to list all of the defined repositories. A repository can be deleted with the rmsestg command.

Important: In the current implementation, it is not possible to expand a Space Efficient repository. The physical size or the virtual size of the repository cannot be changed. Therefore, careful planning is required. If you must expand a repository, you must delete all TSE logical volumes and the repository itself, and then re-create a new repository.

14.3.4 Creating FB volumes
We are now able to create volumes and volume groups. When we create the volumes or groups, we should try to distribute them evenly across the two rank groups in the storage unit.

Creating standard volumes
We use the following format of the command to create a volume:
mkfbvol -extpool pX -cap xx -name high_fb_0#h 1000-1003

In Example 14-28, we created eight volumes, each with a capacity of 10 GB. The first four volumes are assigned to rank group 0 and the second four are assigned to rank group 1.

Example 14-28 Creating fixed block volumes by using mkfbvol
dscli> lsextpool
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
===========================================================================================
FB_high_0 P0 fb 0 below 773 0 773 0 0
FB_high_1 P1 fb 1 below 773 0 773 0 0
dscli> mkfbvol -extpool p0 -cap 10 -name high_fb_0_#h 1000-1003
CMUC00025I mkfbvol: FB volume 1000 successfully created.
CMUC00025I mkfbvol: FB volume 1001 successfully created.
CMUC00025I mkfbvol: FB volume 1002 successfully created.
CMUC00025I mkfbvol: FB volume 1003 successfully created.
dscli> mkfbvol -extpool p1 -cap 10 -name high_fb_1_#h 1100-1103
CMUC00025I mkfbvol: FB volume 1100 successfully created.
CMUC00025I mkfbvol: FB volume 1101 successfully created.
CMUC00025I mkfbvol: FB volume 1102 successfully created.
CMUC00025I mkfbvol: FB volume 1103 successfully created.

Looking closely at the mkfbvol command that is used in Example 14-28, we see that volumes 1000-1003 are in extpool P0. That Extent Pool is attached to rank group 0, which means server 0. The first two digits of the volume serial number are the LSS number, so, in this case, volumes 1000-1003 are in LSS 10. Rank group 0 can contain only even-numbered LSSs, which means volumes in that Extent Pool must belong to an even-numbered LSS. For volumes 1100-1103, the first two digits of the volume serial number are 11 (an odd number), which signifies that they belong to rank group 1.

The -cap parameter determines size. In this case, because the -type parameter was not used, the default size is a binary size, so these volumes are 10 GB binary, which equates to 10,737,418,240 bytes. If we used the parameter -type ess, the volumes are decimally sized and are a minimum of 10,000,000,000 bytes in size.
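As a sketch of the -type ess sizing behavior described above, the following command would create a decimally sized volume; the pool, name, and volume ID are illustrative:

dscli> mkfbvol -extpool p0 -cap 10 -type ess -name ess_fb_0_#h 1004

With -type ess, the 10 GB capacity is interpreted decimally (a minimum of 10,000,000,000 bytes) rather than as the binary 10,737,418,240 bytes that the default sizing uses.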

In Example 14-28 on page 403, we named the volumes by using the naming scheme high_fb_0_#h, where #h means that you are using the hexadecimal volume number as part of the volume name. This naming convention can be seen in Example 14-29, where we list the volumes that we created by using the lsfbvol command. We then list the Extent Pools to see how much space is left after the volume creation.

Example 14-29 Checking the machine after volumes are created by using lsextpool and lsfbvol
dscli> lsfbvol
Name ID accstate datastate configstate deviceMTM datatype extpool cap (2^30B)
=========================================================================================
high_fb_0_1000 1000 Online Normal Normal 2107-922 FB 512 P0 10.0
high_fb_0_1001 1001 Online Normal Normal 2107-922 FB 512 P0 10.0
high_fb_0_1002 1002 Online Normal Normal 2107-922 FB 512 P0 10.0
high_fb_0_1003 1003 Online Normal Normal 2107-922 FB 512 P0 10.0
high_fb_1_1100 1100 Online Normal Normal 2107-922 FB 512 P1 10.0
high_fb_1_1101 1101 Online Normal Normal 2107-922 FB 512 P1 10.0
high_fb_1_1102 1102 Online Normal Normal 2107-922 FB 512 P1 10.0
high_fb_1_1103 1103 Online Normal Normal 2107-922 FB 512 P1 10.0
dscli> lsextpool
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
===========================================================================================
FB_high_0 P0 fb 0 below 733 5 733 0 4
FB_high_1 P1 fb 1 below 733 5 733 0 4

Important: For the DS8000, the LSSs can be ID 00 to ID FE. The LSSs are in address groups. Address group 0 is LSS 00 to 0F, address group 1 is LSS 10 to 1F, and so on. When you create an FB volume in an address group, that entire address group can be used only for FB volumes. Be aware of this fact when you are planning your volume layout in a mixed FB and CKD DS8000.

T10 DIF volumes
An emerging standard for end-to-end error checking from the application to the disk drives is SCSI T10 DIF (Data Integrity Field). T10 DIF requires volumes to be formatted in 520-byte sectors with Cyclic Redundancy Check (CRC) bytes added to the data. If you want to use this technique, you must create volumes that are formatted for T10 DIF usage. This configuration can be done by adding the -t10dif parameter to the mkfbvol command. Currently, T10 DIF is supported for Linux on System z. For more information, see "T10 Data Integrity Field support" on page 115.

Resource Group: Starting with DS8000 Licensed Machine Code Release 6.1, you can configure a volume to belong to a certain Resource Group by using the -resgrp <RG_ID> flag in the mkfbvol command. For more information, see IBM System Storage DS8000: Copy Services Resource Groups, REDP-4758.

Important: You can configure a volume to belong to a certain Performance I/O Priority Manager group by using the -perfgrp <perf_group_ID> flag in the mkfbvol command. For more information, see IBM System Storage DS8000 I/O Priority Manager, REDP-4760.
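Following the text above, a T10 DIF-capable volume would be created by adding the documented -t10dif parameter to an otherwise ordinary mkfbvol command; the pool, name, and volume ID are illustrative:

dscli> mkfbvol -extpool p1 -cap 10 -t10dif -name dif_fb_1_#h 1104

The resulting volume is formatted in 520-byte sectors so that CRC protection information travels with the data.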

Storage Pool Striping
When a volume is created, you have a choice of how the volume is allocated in an Extent Pool with several ranks. The extents of a volume can be kept together in one rank (if there is enough free space on that rank). The next rank is used when the next volume is created. This allocation method is called rotate volumes.

You can also specify that you want the extents of the volume that you are creating to be evenly distributed across all ranks within the Extent Pool. This allocation method is called rotate extents. The Storage Pool Striping spreads the I/O of a LUN to multiple ranks, which improves performance and greatly reduces hot spots.

Default allocation policy: For DS8870, the default allocation policy is rotate extents.

The extent allocation method is specified with the -eam rotateexts or -eam rotatevols option of the mkfbvol command, as shown in Example 14-30.

Example 14-30 Creating a volume with Storage Pool Striping
dscli> mkfbvol -extpool p53 -cap 15 -name ITSO-XPSTR -eam rotateexts 1720
CMUC00025I mkfbvol: FB volume 1720 successfully created.
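For contrast with Example 14-30, the following sketch requests the rotate volumes method so that the new volume's extents stay together on a single rank when space allows; the pool, name, and volume ID are illustrative:

dscli> mkfbvol -extpool p53 -cap 15 -name ITSO-ROTV -eam rotatevols 1730

Rotate extents is usually preferred on the DS8870 because striping across ranks reduces hot spots, but rotate volumes can be useful when you want to isolate a volume's I/O to one rank.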

The showfbvol command with the -rank option (see Example 14-31) shows that the volume we created is distributed across 12 ranks. It also shows how many extents on each rank were allocated for this volume.

Example 14-31 Getting information about a striped volume
dscli> showfbvol -rank 1720
Name ITSO-XPSTR
ID 1720
accstate Online
datastate Normal
configstate Normal
deviceMTM 2107-900
datatype FB 512
addrgrp 1
extpool P53
exts 15
captype DS
cap (2^30B) 15.0
cap (10^9B) -
cap (blocks) 31457280
volgrp -
ranks 12
dbexts 0
sam Standard
repcapalloc -
eam rotateexts
reqcap (blocks) 31457280
==============Rank extents==============
rank extents
============
R24 2
R25 1
R28 1
R29 1
R32 1
R33 1
R34 1
R36 1
R37 1
R38 1
R40 2
R41 2

Track Space Efficient volumes
When your DS8000 has the IBM FlashCopy SE feature, you can create Track Space Efficient (TSE) volumes to be used as FlashCopy target volumes. A repository must exist in the Extent Pool where you plan to allocate TSE volumes (see "Creating a repository for Track Space Efficient volumes" on page 402). A Track Space Efficient volume is created by specifying the -sam tse parameter with the mkfbvol command, as shown in Example 14-32.

Example 14-32 Creating a Space Efficient volume
dscli> mkfbvol -extpool p53 -cap 40 -name ITSO-1721-SE -sam tse 1721
CMUC00025I mkfbvol: FB volume 1721 successfully created.

When Space Efficient repositories are listed by using the lssestg command (see Example 14-33), we can see that in Extent Pool P53, we have a virtual allocation of 40 extents (GB), but that the allocated (used) capacity repcapalloc is still zero.

Example 14-33 Getting information about Space Efficient repositories
dscli> lssestg -l
extentpoolID stgtype datastate configstate repcapstatus %repcapthreshold repcap (2^30B) vircap repcapalloc vircapalloc
======================================================================================================================
P4 ckd Normal Normal below 0 64.0 1.0 0.0 0.0
P47 fb Normal Normal below 0 70.0 282.0 0.0 264.0
P53 fb Normal Normal below 0 100.0 200.0 0.0 40.0

This allocation comes from the volume that we created. To see the allocated space in the repository for just this volume, we can use the showfbvol command, as shown in Example 14-34.

Example 14-34 Checking the repository usage for a volume
dscli> showfbvol 1721
Name ITSO-1721-SE
ID 1721
accstate Online
datastate Normal
configstate Normal
deviceMTM 2107-900
datatype FB 512
addrgrp 1
extpool P53
exts 40
captype DS
cap (2^30B) 40.0
cap (10^9B) -
cap (blocks) 83886080
volgrp -
ranks 0
dbexts 0
sam TSE
repcapalloc 0
eam -
reqcap (blocks) 83886080

Dynamic Volume Expansion
A volume can be expanded without removing the data within the volume. You can specify a new capacity by using the chfbvol command, as shown in Example 14-35.

Example 14-35 Expanding a striped volume
dscli> chfbvol -cap 40 1720
CMUC00332W chfbvol: Some host operating systems do not support changing the volume size. Are you sure that you want to resize the volume? [y/n]: y
CMUC00026I chfbvol: FB volume 1720 successfully modified.

Because the original volume included the rotateexts attribute, the additional extents are also striped, as shown in Example 14-36.

Example 14-36 Checking the status of an expanded volume
dscli> showfbvol -rank 1720
Name ITSO-XPSTR
ID 1720
accstate Online
datastate Normal
configstate Normal
deviceMTM 2107-900
datatype FB 512
addrgrp 1
extpool P53
exts 40
captype DS
cap (2^30B) 40.0
cap (10^9B) -
cap (blocks) 83886080
volgrp -
ranks 2
dbexts 0
sam Standard
repcapalloc -
eam rotateexts
reqcap (blocks) 83886080
==============Rank extents==============
rank extents
============
R24 20
R25 20

Important: Before you can expand a volume, you must delete all Copy Services relationships for that volume.

New capacity: The new capacity must be larger than the previous capacity. You cannot shrink the volume. The largest LUN size is now 16 TB. Copy services are not supported for LUN sizes larger than 2 TB.

Deleting volumes
FB volumes can be deleted by using the rmfbvol command. On a DS8870, and on older models with Licensed Machine Code (LMC) level 6.5.1.xx or higher, the command includes options to prevent the accidental deletion of volumes that are in use. An FB volume is considered to be in use if it is participating in a Copy Services relationship or if the volume received any I/O operation in the previous five minutes.

Volume deletion is controlled by the -safe and -force parameters (they cannot be specified at the same time) in the following manner:
- If none of the parameters are specified, the system performs checks to see whether the specified volumes are in use. Volumes that are not in use are deleted and the volumes that are in use are not deleted.
- If the -safe parameter is specified and if any of the specified volumes are assigned to a user-defined volume group, the command fails without deleting any volumes.
- The -force parameter deletes the specified volumes without checking to see whether they are in use.

In Example 14-37, we create volumes 2100 and 2101. We then assign 2100 to a volume group. We then try to delete both volumes with the -safe option, but the attempt fails without deleting either of the volumes. We can delete volume 2101 with the -safe option because it is not assigned to a volume group. Volume 2100 is not in use, so we can delete it by not specifying either parameter.

Example 14-37 Deleting an FB volume
dscli> mkfbvol -extpool p1 -cap 12 -eam rotateexts 2100-2101
CMUC00025I mkfbvol: FB volume 2100 successfully created.
CMUC00025I mkfbvol: FB volume 2101 successfully created.
dscli> chvolgrp -action add -volume 2100 v0
CMUC00031I chvolgrp: Volume group V0 successfully modified.
dscli> rmfbvol -quiet -safe 2100-2101
CMUC00253E rmfbvol: Volume IBM.2107-75NA901/2100 is assigned to a user-defined volume group. No volumes were deleted.
dscli> rmfbvol -quiet -safe 2101
CMUC00028I rmfbvol: FB volume 2101 successfully deleted.
dscli> rmfbvol 2100
CMUC00027W rmfbvol: Are you sure you want to delete FB volume 2100? [y/n]: y
CMUC00028I rmfbvol: FB volume 2100 successfully deleted.

14.3.5 Creating volume groups
Fixed block volumes are assigned to open system hosts by using volume groups, which are not to be confused with the term volume groups, which is used in AIX. A fixed block volume can be a member of multiple volume groups. Volumes can be added or removed from volume groups as required. Each volume group must be SCSI MAP256 or SCSI MASK, depending on the SCSI LUN address discovery method that is used by the operating system to which the volume group is attached.

Determining whether an open systems host is SCSI MAP256 or SCSI MASK
First, we determine the type of SCSI host with which we are working. Then, we use the lshosttype command with the -type parameter of scsimask and then scsimap256.

In Example 14-38, we can see the results of each command.

Example 14-38 Listing host types with the lshosttype command
dscli> lshosttype -type scsimask
HostType Profile AddrDiscovery LBS
==================================================
Hp HP - HP/UX reportLUN 512
SVC San Volume Controller reportLUN 512
SanFsAIX IBM pSeries - AIX/SanFS reportLUN 512
pSeries IBM pSeries - AIX reportLUN 512
zLinux IBM zSeries - zLinux reportLUN 512
dscli> lshosttype -type scsimap256
HostType Profile AddrDiscovery LBS
=====================================================
AMDLinuxRHEL AMD - Linux RHEL LUNPolling 512
AMDLinuxSuse AMD - Linux Suse LUNPolling 512
AppleOSX Apple - OSX LUNPolling 512
Fujitsu Fujitsu - Solaris LUNPolling 512
HpTru64 HP - Tru64 LUNPolling 512
HpVms HP - Open VMS LUNPolling 512
LinuxDT Intel - Linux Desktop LUNPolling 512
LinuxRF Intel - Linux Red Flag LUNPolling 512
LinuxRHEL Intel - Linux RHEL LUNPolling 512
LinuxSuse Intel - Linux Suse LUNPolling 512
Novell Novell LUNPolling 512
SGI SGI - IRIX LUNPolling 512
SanFsLinux - Linux/SanFS LUNPolling 512
Sun SUN - Solaris LUNPolling 512
VMWare VMWare LUNPolling 512
Win2000 Intel - Windows 2000 LUNPolling 512
Win2003 Intel - Windows 2003 LUNPolling 512
Win2008 Intel - Windows 2008 LUNPolling 512
iLinux IBM iSeries - iLinux LUNPolling 512
nSeries IBM N series Gateway LUNPolling 512
pLinux IBM pSeries - pLinux LUNPolling 512

In Example 14-38, we can see that the address discovery method for AIX is scsimask.

Creating a volume group
After we determine the host type, we can create a volume group. In Example 14-39, the example host type we chose is AIX.

Example 14-39 Creating a volume group with mkvolgrp and displaying it
dscli> mkvolgrp -type scsimask -volume 1000-1002,1100-1102 AIX_VG_01
CMUC00030I mkvolgrp: Volume group V11 successfully created.
dscli> lsvolgrp
Name ID Type
=======================================
ALL CKD V10 FICON/ESCON All
AIX_VG_01 V11 SCSI Mask
ALL Fixed Block-512 V20 SCSI All
ALL Fixed Block-520 V30 OS400 All
dscli> showvolgrp V11
Name AIX_VG_01
ID V11
Type SCSI Mask
Vols 1000 1001 1002 1100 1101 1102
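As an aside, for a host type from the scsimap256 list (for example, a Windows host), the same pattern would produce a SCSI MAP256 volume group; the volume range and group name here are illustrative:

dscli> mkvolgrp -type scsimap256 -volume 1004-1007 WIN_VG_01

The -type value must match the address discovery method that lshosttype reports for the host type that you plan to attach.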

In Example 14-39, we added volumes 1000-1002 and 1100-1102 to the new volume group. We added these volumes to evenly spread the workload across the two rank groups. We then listed all available volume groups by using the lsvolgrp command. Finally, we listed the contents of volume group V11 because we created this volume group.

Adding or deleting volumes in a volume group
We might also want to add or remove volumes to this volume group later. To add or remove volumes, we use the chvolgrp command with the -action parameter. In Example 14-40, we add volume 1003 to volume group V11. We display the results and then remove the volume.

Example 14-40 Changing a volume group with chvolgrp
dscli> chvolgrp -action add -volume 1003 V11
CMUC00031I chvolgrp: Volume group V11 successfully modified.
dscli> showvolgrp V11
Name AIX_VG_01
ID V11
Type SCSI Mask
Vols 1000 1001 1002 1003 1100 1101 1102
dscli> chvolgrp -action remove -volume 1003 V11
CMUC00031I chvolgrp: Volume group V11 successfully modified.
dscli> showvolgrp V11
Name AIX_VG_01
ID V11
Type SCSI Mask
Vols 1000 1001 1002 1100 1101 1102

Important: Not all operating systems can manage the removal of a volume. See your operating system documentation to determine the safest way to remove a volume from a host.

All operations with volumes and volume groups that were previously described also can be used with Space Efficient volumes.

14.3.6 Creating host connections
The final step in the logical configuration process is to create host connections for your attached hosts. You must assign volume groups to those connections. Each host HBA can be defined only once. Each host connection (hostconnect) can include only one volume group that is assigned to it. A volume can be assigned to multiple volume groups.

In Example 14-41, we create a single host connection that represents one HBA in our example AIX host. We use the -hosttype parameter with the host type that we obtained in Example 14-38 on page 410. We allocated it to volume group V11. If the SAN zoning is correct, the host should be able to see the LUNs in volume group V11.

Example 14-41 Creating host connections by using mkhostconnect and lshostconnect
dscli> mkhostconnect -wwname 100000C912345678 -hosttype pSeries -volgrp V11 AIX_Server_01
CMUC00012I mkhostconnect: Host connection 0000 successfully created.
dscli> lshostconnect
Name ID WWPN HostType Profile portgrp volgrpID ESSIOport
=========================================================================================
AIX_Server_01 0000 100000C912345678 pSeries IBM pSeries - AIX 0 V11 all

You can also use -profile instead of -hosttype. However, this method is not a best practice. The use of the -hosttype parameter actually invokes both parameters (-profile and -hosttype). In contrast, the use of -profile leaves the -hosttype column unpopulated.

The option in the mkhostconnect command to restrict access only to certain I/O ports is also available. This method is done with the -ioport parameter. Restricting access in this way is usually unnecessary. If you want to restrict access for certain hosts to certain I/O ports on the DS8000, perform zoning on your SAN switch.

Managing hosts with multiple HBAs
If you have a host that features multiple HBAs, you must consider the following points:
- For the GUI to consider multiple host connects to be used by the same server, the host connects must have the same name. In Example 14-42, host connects 0010 and 0011 appear in the GUI as a single server with two HBAs. However, host connects 000E and 000F appear as two separate hosts even though they are used by the same server. The use of more verbose hostconnect naming might make management easier.
- If you want to use a single command to change the assigned volume group of several hostconnects at the same time, you must assign these hostconnects to a unique port group and then use the managehostconnect command. This command changes the assigned volume group for all hostconnects that are assigned to a particular port group.

When hosts are created, you can specify the -portgrp parameter. By using a unique port group number for each attached server, you can detect servers with multiple HBAs. Example 14-42 shows the use of the portgrp number to separate attached hosts. In this example, we have six host connections. By using the port group number, we see that there are three separate hosts, each with two HBAs. Port group 0 is used for all hosts that do not have a port group number set.

Example 14-42 Using the portgrp number to separate attached hosts
dscli> lshostconnect
Name ID WWPN HostType Profile portgrp volgrpID
===========================================================================================
bench_tic17_fc0 0008 210000E08B1234B1 LinuxSuse Intel - Linux Suse 8 V1 all
bench_tic17_fc1 0009 210000E08B12A3A2 LinuxSuse Intel - Linux Suse 8 V1 all
p630_fcs0 000E 10000000C9318C7A pSeries IBM pSeries - AIX 9 V2 all
p630_fcs1 000F 10000000C9359D36 pSeries IBM pSeries - AIX 9 V2 all
p615_7 0010 10000000C93E007C pSeries IBM pSeries - AIX 10 V3 all
p615_7 0011 10000000C93E0059 pSeries IBM pSeries - AIX 10 V3 all

Changing host connections
If we want to change a host connection, we can use the chhostconnect command. This command can be used to change nearly all parameters of the host connection, except for the worldwide port name (WWPN). If you must change the WWPN, you must create a host connection. To change the assigned volume group, use the chhostconnect command to change one hostconnect at a time, or use the managehostconnect command to simultaneously reassign all of the hostconnects in one port group.
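As a sketch of the port group technique described above, the two HBAs of a new server would be defined with a shared port group number, and the volume group for both could later be changed with one managehostconnect invocation. The WWPNs, port group number, connection names, and volume group IDs are illustrative, and the exact managehostconnect argument order should be verified with help managehostconnect:

dscli> mkhostconnect -wwname 10000000C9123456 -hosttype pSeries -portgrp 11 -volgrp V11 p550_fcs0
dscli> mkhostconnect -wwname 10000000C9123457 -hosttype pSeries -portgrp 11 -volgrp V11 p550_fcs1
dscli> managehostconnect -volgrp V12 11

Note that because these two connections have different names, the GUI would show them as two hosts; giving both the same name (as with p615_7 in Example 14-42) makes the GUI treat them as one server with two HBAs.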

14.3.7 Mapping open systems host disks to storage unit volumes
When you assign volumes to an open systems host and you install the DS CLI on this host, you can run the DS CLI command lshostvol on this host. This command maps assigned LUNs to open systems host volume names. In this section, we give examples for several operating systems. In each example, we assign several logical volumes to an open systems host. We install DS CLI on this host. We log on to this host and start DS CLI. We then issue the lshostvol command. It does not matter which HMC we connect to with the DS CLI.

Important: The lshostvol command communicates only with the operating system of the host on which the DS CLI is installed. You cannot run this command on one host to see the attached disks of another host.

AIX: Mapping disks when Multipath I/O is used
In Example 14-43, we have an AIX server that uses Multipath I/O (MPIO). We have two volumes assigned to this host, 1800 and 1801. Because MPIO is used, we do not see the number of paths. In fact, from this display, it is not possible to tell if MPIO is even installed. You must run the pcmpath query device command to confirm the path count.

Example 14-43 lshostvol on an AIX host by using MPIO
dscli> lshostvol
Disk Name Volume Id Vpath Name
==========================================
hdisk3 IBM.2107-1300819/1800 ---
hdisk4 IBM.2107-1300819/1801 ---

Open HyperSwap: If you use Open HyperSwap on a host, the lshostvol command might fail to show any devices.

AIX: Mapping disks when Subsystem Device Driver is used
In Example 14-44, we have an AIX server that uses Subsystem Device Driver (SDD). We have two volumes assigned to this host, 1000 and 1100. Each volume has four paths.

Example 14-44 lshostvol on an AIX host by using SDD
dscli> lshostvol
Disk Name Volume Id Vpath Name
============================================================
hdisk1,hdisk3,hdisk5,hdisk7 IBM.2107-1300247/1000 vpath0
hdisk2,hdisk4,hdisk6,hdisk8 IBM.2107-1300247/1100 vpath1

Hewlett-Packard UNIX (HP-UX): Mapping disks when SDD is not used
In Example 14-45, we have an HP-UX host that does not have SDD. We have two volumes assigned to this host, 1105 and 1106.

Example 14-45 lshostvol on an HP-UX host that does not use SDD
dscli> lshostvol
Disk Name Volume Id Vpath Name
==========================================
c38t0d5 IBM.2107-7503461/1105 ---
c38t0d6 IBM.2107-7503461/1106 ---

HP-UX or Solaris: Mapping disks when SDD is used
In Example 14-46, we have a Solaris host that has SDD installed. Two volumes are assigned to the host, 4205 and 4206, and are using two paths, with each vpath made up of disks with controller, target, and disk (c-t-d) numbers. The output of lshostvol on an HP-UX host looks the same. However, the addresses that are used in the example for the Solaris host do not work in an HP-UX system.

HP-UX: Current releases of HP-UX support only addresses up to 3FFF.

Example 14-46 lshostvol on a Solaris host that has SDD
dscli> lshostvol
Disk Name Volume Id Vpath Name
==================================================
c2t1d0s0,c3t1d0s0 IBM.2107-7520781/4205 vpath2
c2t1d1s0,c3t1d1s0 IBM.2107-7520781/4206 vpath1

Solaris: Mapping disks when SDD is not used
In Example 14-47, we have a Solaris host that does not have SDD installed. Instead, it uses an alternative multipathing product. We have two volumes assigned to this host, 4200 and 4201. Each volume has two paths. The Solaris command iostat -En also can produce similar information.

Example 14-47 lshostvol on a Solaris host that does not have SDD
dscli> lshostvol
Disk Name Volume Id Vpath Name
==========================================
c6t1d0 IBM-2107.7520781/4200 ---
c6t1d1 IBM-2107.7520781/4201 ---
c7t2d0 IBM-2107.7520781/4200 ---
c7t2d1 IBM-2107.7520781/4201 ---

Windows: Mapping disks when SDD is not used or SDDDSM is used
In Example 14-48, we run lshostvol on a Windows host that does not use SDD or uses SDDDSM. The disks are listed by Windows Disk number. If you want to know which disk is associated with which drive letter, you must look at the Windows Disk manager.

Example 14-48 lshostvol on a Windows host that does not use SDD or uses SDDDSM
dscli> lshostvol
Disk Name Volume Id Vpath Name
==========================================
Disk2 IBM.2107-7503461/4702 ---
Disk3 IBM.2107-7520781/4702 ---
Disk4 IBM.2107-75ABTV1/4702 ---
Disk5 IBM.2107-7520781/1710 ---
Disk6 IBM.2107-75ABTV1/1004 ---
Disk7 IBM.2107-75ABTV1/1009 ---
Disk8 IBM.2107-75ABTV1/100A ---

Windows: Mapping disks when SDD is used
In Example 14-49, we run lshostvol on a Windows host that uses SDD. The disks are listed by Windows Disk number. If you want to know which disk is associated with which drive letter, you must look at the Windows Disk manager.

Example 14-49 lshostvol on a Windows host that uses SDD
dscli> lshostvol
Disk Name Volume Id Vpath Name
============================================
Disk2,Disk2 IBM.2107-7503461/4703 Disk2
Disk3,Disk3 IBM.2107-7520781/4703 Disk3
Disk4,Disk4 IBM.2107-75ABTV1/4703 Disk4

14.4 Configuring DS8000 storage for CKD volumes
To configure the DS8000 storage for CKD volumes, you follow almost the same steps as for FB volumes. There is one other step, which is to create Logical Control Units (LCUs). The steps are shown in the following list:
1. Create arrays.
2. Create CKD ranks.
3. Create CKD Extent Pools.
4. Optionally, create repositories for Track Space Efficient volumes.
5. Create LCUs.
6. Create CKD volumes.

You do not need to create volume groups or host connects for CKD volumes. If there are I/O ports in Fibre Channel connection (FICON) mode, access to CKD volumes by FICON hosts is granted automatically.

14.4.1 Create arrays
Array creation for CKD is the same as for FB. For more information, see 14.3.1, "Creating arrays" on page 399.

14.4.2 Ranks and Extent Pool creation
When ranks and Extent Pools are created, you must specify -stgtype ckd, as shown in Example 14-50.

Example 14-50 Rank and Extent Pool creation for CKD
dscli> mkrank -array A0 -stgtype ckd
CMUC00007I mkrank: Rank R0 successfully created.
dscli> lsrank
ID Group State datastate Array RAIDtype extpoolID stgtype
==============================================================
R0 - Unassigned Normal A0 6 - ckd
dscli> mkextpool -rankgrp 0 -stgtype ckd CKD_High_0
CMUC00000I mkextpool: Extent Pool P0 successfully created.
dscli> chrank -extpool P2 R0
CMUC00008I chrank: Rank R0 successfully modified.
dscli> lsextpool
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvol
===========================================================================================
CKD_High_0 P2 ckd 0 below 252 0 287 0 0

Create a Space Efficient repository for CKD Extent Pools
If the DS8000 includes the IBM FlashCopy SE feature, you can create Track Space Efficient (TSE) volumes that can be used as FlashCopy targets. Before you can create TSE volumes, you must create a Space Efficient repository in the Extent Pool. The repository provides space to store the data that is associated with TSE logical volumes. A repository has a physical capacity that is available for storage allocations by TSE volumes. It also has a virtual capacity that is the sum of LUN and volume sizes of all Space Efficient volumes. Only one repository is allowed per Extent Pool. The physical repository capacity is allocated when the repository is created. If there are several ranks in the Extent Pool, the repository's extents are striped across the ranks (Storage Pool Striping).

Space Efficient repository creation for CKD Extent Pools is identical to that of FB Extent Pools. The only exception is that the size of the repository's real capacity and virtual capacity are expressed in cylinders or as multiples of 3390 model 1 disks (the default for CKD Extent Pools), instead of in GB or blocks, which apply to FB Extent Pools only. Example 14-51 shows the creation of a repository.

Example 14-51 Creating a Space Efficient repository for CKD volumes
dscli> mksestg -repcap 100 -vircap 200 -extpool p1
CMUC00342I mksestg: The space-efficient storage for the Extent Pool P1 has been created successfully.
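To balance CKD workload across both servers, the rank group 1 counterpart pool would be created the same way; the pool name follows the layout plan in Example 14-23 and is otherwise illustrative:

dscli> mkextpool -rankgrp 1 -stgtype ckd CKD_High_1

As with FB pools, the new pool receives an odd-numbered ID because it is associated with rank group 1.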

You can obtain information about the repository by using the showsestg command. Example 14-52 shows the output of the showsestg command. You might particularly be interested in how much capacity is used in the repository; to obtain this information, check the repcapalloc value.

Example 14-52 Getting information about a Space Efficient CKD repository
dscli> showsestg p1
extpool P1
stgtype ckd
datastate Normal
configstate Normal
repcapstatus below
%repcapthreshold 0
repcap(GiB) 88.1
repcap(Mod1) 100.0
repcap(blocks) -
repcap(cyl) 111300
repcapalloc(GiB/Mod1) 0.0
%repcapalloc 0
vircap(GiB) 176.2
vircap(Mod1) 200.0
vircap(blocks) -
vircap(cyl) 222600
vircapalloc(GiB/Mod1) 0.0
%vircapalloc 0
overhead(GiB/Mod1) 4.0
reqrepcap(GiB/Mod1) 100.0
reqvircap(GiB/Mod1) 200.0

More storage is allocated for the repository in addition to the repcap size. In Example 14-52, the line that starts with overhead indicates that 4 GB was allocated in addition to the repcap size.

A repository can be deleted by using the rmsestg command.

Important: In the current implementation, it is not possible to expand a repository. The physical size or the virtual size of the repository cannot be changed. Therefore, careful planning is required. If you must expand a repository, you must delete all TSE volumes and the repository itself and then create a repository.

14.4.3 Logical control unit creation
When volumes for a CKD environment are created, you must create LCUs before the volumes are created. In Example 14-53, you can see what happens if you try to create a CKD volume without creating an LCU first.

Example 14-53 Trying to create CKD volumes without an LCU
dscli> mkckdvol -extpool p2 -cap 262668 -name ITSO_EAV1_#h C200
CMUN02282E mkckdvol: C200: Unable to create CKD logical volume: CKD volumes require a CKD logical subsystem.

We must use the mklcu command first. The command uses the following format:
mklcu -qty XX -id XX -ss XX
To display the LCUs that we created, we use the lslcu command.

In Example 14-54, we create two LCUs by using the mklcu command, and then list the created LCUs by using the lslcu command.

Example 14-54 Creating a logical control unit with mklcu
dscli> mklcu -qty 2 -id BC -ss BC00
CMUC00017I mklcu: LCU BC successfully created.
CMUC00017I mklcu: LCU BD successfully created.
dscli> lslcu
ID Group addrgrp confgvols subsys conbasetype
=============================================
BC 0 C 0 0xBC00 3990-6
BD 1 C 0 0xBC01 3990-6

Because we created two LCUs (by using the parameter -qty 2), the first LCU, which is ID BC (an even number), is in group 0, which equates to rank group 0. The second LCU, which is ID BD (an odd number), is in group 1, which equates to rank group 1. By placing the LCUs into both groups, we maximize performance by spreading workload across both rank groups of the DS8000. By default, the LCUs that were created are 3990-6.

Important: For the DS8000, the CKD LCUs can be ID 00 to ID FE. The LCUs fit into one of 16 address groups. Address group 0 is LCUs 00 to 0F, address group 1 is LCUs 10 to 1F, and so on. If you create a CKD LCU in an address group, that address group cannot be used for FB volumes. Likewise, if there were, for example, FB volumes in LSS 40 to 4F (address group 4), that address group cannot be used for CKD. Be aware of this limitation when you are planning the volume layout in a mixed FB/CKD DS8000.

14.4.4 Creating CKD volumes
Now that an LCU was created, we can create CKD volumes by using the mkckdvol command. The mkckdvol command uses the following format:
mkckdvol -extpool P2 -cap 262668 -datatype 3390-A -eam rotatevols -name ITSO_EAV1_#h BC06

The biggest difference with Fixed Block (FB) volumes is that the capacity is expressed in cylinders or as mod1 (Model 1) extents (1,113 cylinders). To not waste space, use volume capacities that are a multiple of 1,113 cylinders.

The support for Extended Address Volumes was enhanced. The Extended Address Volumes (EAV) device type is called 3390 Model A. The DS8870 now supports EAV volumes up to 1,182,006 cylinders. You need z/OS V1.12 or above to use such volumes.

Important: For 3390-A volumes, the size can be specified from 1 to 65,520 in increments of 1, and from 65,667 (the next multiple of 1113) to 1,182,006 in increments of 1113.

In Example 14-55, we create a single 3390-A volume with a capacity of 262,668 cylinders.

Example 14-55 Creating CKD volumes by using mkckdvol
dscli> mkckdvol -extpool P2 -cap 262668 -datatype 3390-A -eam rotatevols -name ITSO_EAV1_#h BC06
CMUC00021I mkckdvol: CKD Volume BC06 successfully created.
dscli> lsckdvol
Name ID accstate datastate configstate deviceMTM voltype orgbvols extpool cap (cyl)
================================================================================================
ITSO_BC00 BC00 Online Normal Normal 3390-9 CKD Base - P2 10017
ITSO_BC01 BC01 Online Normal Normal 3390-9 CKD Base - P2 10017
ITSO_BC02 BC02 Online Normal Normal 3390-9 CKD Base - P2 10017
ITSO_BC03 BC03 Online Normal Normal 3390-9 CKD Base - P2 10017
ITSO_BC04 BC04 Online Normal Normal 3390-9 CKD Base - P2 10017
ITSO_BC05 BC05 Online Normal Normal 3390-9 CKD Base - P2 10017
ITSO_EAV1_BC06 BC06 Online Normal Normal 3390-A CKD Base - P2 262668
ITSO_BD00 BD00 Online Normal Normal 3390-9 CKD Base - P3 10017
ITSO_BD01 BD01 Online Normal Normal 3390-9 CKD Base - P3 10017
ITSO_BD02 BD02 Online Normal Normal 3390-9 CKD Base - P3 10017
ITSO_BD03 BD03 Online Normal Normal 3390-9 CKD Base - P3 10017
ITSO_BD04 BD04 Online Normal Normal 3390-9 CKD Base - P3 10017
ITSO_BD05 BD05 Online Normal Normal 3390-9 CKD Base - P3 10017

Remember, we can create only CKD volumes in LCUs that we already created. You also must be aware that volumes in even-numbered LCUs must be created from an Extent Pool that belongs to rank group 0. Volumes in odd-numbered LCUs must be created from an Extent Pool in rank group 1.

Important: With the DS8870 and older DS8000 models with Release 6.1 microcode, you can configure a volume to belong to a certain Resource Group by using the -resgrp <RG_ID> flag in the mkckdvol command. For more information, see IBM System Storage DS8000: Copy Services Resource Groups, REDP-4758.

Storage pool striping
When a volume is created, you have a choice about how the volume is allocated in an Extent Pool with several ranks. The extents of a volume can be kept together in one rank (if there is enough free space on that rank). The next rank is used when the next volume is created. This allocation method is called rotate volumes. You can also specify that you want the extents of the volume to be evenly distributed across all ranks within the Extent Pool. This allocation method is called rotate extents.

Rotate extents: For the DS8870, the default allocation policy is rotate extents.

The extent allocation method is specified with the -eam rotateexts or -eam rotatevols option of the mkckdvol command (see Example 14-56).

Example 14-56 Creating a CKD volume with Extent Pool striping
dscli> mkckdvol -extpool p4 -cap 10017 -name ITSO-CKD-STRP -eam rotateexts 0080
CMUC00021I mkckdvol: CKD Volume 0080 successfully created.
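Because capacity is best specified in multiples of 1,113 cylinders, additional 3390-9 style volumes for the odd LCU would be created along these lines; the pool, name, and volume IDs are illustrative:

dscli> mkckdvol -extpool P3 -cap 10017 -name ITSO_BD_#h BD06-BD07

Here 10,017 cylinders is exactly nine 1,113-cylinder mod1 extents, so no allocated space is wasted.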

The showckdvol command with the -rank option (see Example 14-57) shows that the volume we created is distributed across two ranks. It also displays how many extents on each rank were allocated for this volume.

Example 14-57 Getting information about a striped CKD volume
dscli> showckdvol -rank 0080
Name ITSO-CKD-STRP
ID 0080
accstate Online
datastate Normal
configstate Normal
deviceMTM 3390-9
volser -
datatype 3390
voltype CKD Base
orgbvols -
addrgrp 0
extpool P4
exts 9
cap (cyl) 10017
cap (10^9B) 8.5
cap (2^30B) 7.9
ranks 2
sam Standard
repcapalloc -
eam rotateexts
reqcap (cyl) 10017
==============Rank extents==============
rank extents
============
R4 4
R30 5

Track Space Efficient volumes
When your DS8000 includes the IBM FlashCopy SE feature, you can create Track Space Efficient (TSE) volumes to be used as FlashCopy target volumes. A repository must exist in the Extent Pool where you plan to allocate TSE volumes (for more information, see "Create a Space Efficient repository for CKD Extent Pools" on page 416). A Track Space Efficient volume is created by specifying the -sam tse parameter with the mkckdvol command, as shown in Example 14-58.

Example 14-58 Creating a Space Efficient CKD volume
dscli> mkckdvol -extpool p4 -cap 10017 -name ITSO-CKD-SE -sam tse 0081
CMUC00021I mkckdvol: CKD Volume 0081 successfully created.

When Space Efficient repositories are listed by using the lssestg command (see Example 14-59), we can see that in Extent Pool P4, we have a virtual allocation of 7.9 GB. However, the allocated (used) capacity repcapalloc is still zero.

Example 14-59 Obtaining information about Space Efficient CKD repositories
dscli> lssestg -l
extentpoolID stgtype datastate configstate repcapstatus %repcapthreshold repcap (2^30B) vircap repcapalloc vircapalloc
======================================================================================================================
P4 ckd Normal Normal below 0 100.0 200.0 0.0 7.9

The virtual allocation that is shown in Example 14-59 comes from the volume that was created. To see the allocated space in the repository for only this volume, we can use the showckdvol command (see Example 14-60).

Example 14-60 Checking the repository usage for a CKD volume
dscli> showckdvol 0081
Name ITSO-CKD-SE
ID 0081
accstate Online
datastate Normal
configstate Normal
deviceMTM 3390-9
volser
datatype 3390
voltype CKD Base
orgbvols
addrgrp 0
extpool P4
exts 9
cap (cyl) 10017
cap (10^9B) 8.5
cap (2^30B) 7.9
ranks 0
sam TSE
repcapalloc 0
eam
reqcap (cyl) 10017

Dynamic Volume Expansion
A volume can be expanded without removing the data within the volume. You can specify a new capacity by using the chckdvol command, as shown in Example 14-61. The new capacity must be larger than the previous one; you cannot shrink the volume.

Example 14-61 Expanding a striped CKD volume
dscli> chckdvol -cap 30051 0080
CMUC00332W chckdvol: Some host operating systems do not support changing the volume size. Are you sure that you want to resize the volume? [y/n]: y
CMUC00022I chckdvol: CKD Volume 0080 successfully modified.

Because the original volume had the rotateexts attribute, the additional extents also are striped, as shown in Example 14-62.

Example 14-62 Checking the status of an expanded CKD volume
dscli> showckdvol -rank 0080
Name ITSO-CKD-STRP
ID 0080
accstate Online
datastate Normal
configstate Normal
deviceMTM 3390-9
volser
datatype 3390
voltype CKD Base
orgbvols
addrgrp 0
extpool P4
exts 27
cap (cyl) 30051
cap (10^9B) 25.5
cap (2^30B) 23.8
ranks 2
sam Standard
repcapalloc
eam rotateexts
reqcap (cyl) 30051
==============Rank extents==============
rank extents
============
R4 13
R30 14

Important: Before you can expand a volume, you first must delete all Copy Services relationships for that volume. You also cannot specify -cap and -datatype together for the chckdvol command.

It is possible to expand a 3390 Model 9 volume to a 3390 Model A. You can make these expansions by specifying a new capacity for an existing Model 9 volume, as shown in Example 14-63. When you increase the size of a 3390-9 volume beyond 65,520 cylinders, its device type automatically changes to 3390-A. However, keep in mind that a 3390 Model A can be used only in z/OS V1.10 or V1.12 (depending on the size of the volume) and later.

Example 14-63 Expanding a 3390 to a 3390-A
*** Command to show CKD volume definition before expansion:
dscli> showckdvol BC07
Name ITSO_EAV2_BC07
ID BC07
accstate Online
datastate Normal
configstate Normal
deviceMTM 3390-9
volser
datatype 3390
voltype CKD Base
orgbvols
addrgrp B
extpool P2
exts 9
cap (cyl) 10017
cap (10^9B) 8.5
cap (2^30B) 7.9
ranks 1
sam Standard
repcapalloc
eam rotatevols
reqcap (cyl) 10017

*** Command to expand CKD volume from 3390-9 to 3390-A:
dscli> chckdvol -cap 262668 BC07
CMUC00332W chckdvol: Some host operating systems do not support changing the volume size. Are you sure that you want to resize the volume? [y/n]: y
CMUC00022I chckdvol: CKD Volume BC07 successfully modified.

*** Command to show CKD volume definition after expansion:
dscli> showckdvol BC07
Name ITSO_EAV2_BC07
ID BC07
accstate Online
datastate Normal
configstate Normal
deviceMTM 3390-A
volser
datatype 3390-A
voltype CKD Base
orgbvols
addrgrp B
extpool P2
exts 236
cap (cyl) 262668
cap (10^9B) 223.3
cap (2^30B) 207.9
ranks 1
sam Standard
repcapalloc
eam rotatevols
reqcap (cyl) 262668

the command includes a capability to prevent the accidental deletion of volumes that are in use. Deleting volumes CKD volumes can be deleted by using the rmckdvol command. A CKD volume is considered to be in use if it is participating in a Copy Services relationship. If the -force parameter is not specified with the command.xx or above. If multiple volumes are specified and some are in use and some are not. the ones not in use are deleted. If the -force parameter is specified on the command.1. Are you sure that you want to resize the volume? [y/n]: y CMUN02541E chckdvol: BC07: The expand logical volume task was not initiated because the logical volume capacity that you have requested is less than the current logical volume capacity.ranks sam repcapalloc eam reqcap (cyl) 1 Standard rotatevols 262668 You cannot reduce the size of a volume. or if the IBM System z path mask indicates that the volume is in a grouped state or online to any host system. an error message is displayed. as shown in Example 14-64. 424 IBM System Storage DS8870 Architecture and Implementation . volumes that are in use are not deleted. FB volumes can be deleted by using the rmfbvol command. For the DS8870 and older models with Licensed Machine Code (LMC) level 6. If you try to reduce the size.5. Example 14-64 Reducing a volume size dscli> chckdvol -cap 10017 BC07 CMUC00332W chckdvol: Some host operating systems do not support changing the volume size. the volumes are deleted without checking to see whether they are in use.

In Example 14-65, we try to delete two volumes, 0900 and 0901. Volume 0900 is online to a host, whereas 0901 is not online to any host and not in a Copy Services relationship. The rmckdvol 0900-0901 command deletes only volume 0901, which is offline. To delete volume 0900, we use the -force parameter.

Example 14-65 Deleting CKD volumes
dscli> lsckdvol 0900-0901
Name   ID   accstate datastate configstate deviceMTM voltype  orgbvols extpool cap (cyl)
========================================================================================
ITSO_J 0900 Online   Normal    Normal      3390-9    CKD Base          P1      10017
ITSO_J 0901 Online   Normal    Normal      3390-9    CKD Base          P1      10017
dscli> rmckdvol -quiet 0900-0901
CMUN02948E rmckdvol: 0900: The Delete logical volume task cannot be initiated because the Allow Host Pre-check Control Switch is set to true and the volume that you have specified is online to a host.
CMUC00024I rmckdvol: CKD volume 0901 successfully deleted.
dscli> lsckdvol 0900-0901
Name   ID   accstate datastate configstate deviceMTM voltype  orgbvols extpool cap (cyl)
========================================================================================
ITSO_J 0900 Online   Normal    Normal      3390-9    CKD Base          P1      10017
dscli> rmckdvol -force 0900
CMUC00023W rmckdvol: Are you sure you want to delete CKD volume 0900? [y/n]: y
CMUC00024I rmckdvol: CKD volume 0900 successfully deleted.
dscli> lsckdvol 0900-0901
CMUC00234I lsckdvol: No CKD Volume found.

14.4.5 Resource Groups
The Resource Group (RG) feature is designed for multi-tenancy environments. The resources are volumes, LCUs, and LSSs, and are used for access control for Copy Services functions only. For more information about RG, see IBM System Storage DS8000 Resource Groups, REDP-4758, which is available at this website:
http://www.redbooks.ibm.com/abstracts/redp4758.html?Open

14.4.6 Performance I/O Priority Manager
By using Performance I/O Priority Manager, you can control Quality of Service (QOS). There are 16 performance group policies for z/OS, PG16-PG31. For more information about I/O Priority Manager, see IBM System Storage DS8000 Performance I/O Priority Manager, REDP-4760, which is available at this website:
http://www.redbooks.ibm.com/abstracts/redp4760.html?Open
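To illustrate how these two features appear at volume-creation time, the following hedged sketch assigns a new CKD volume to a resource group and a performance group in one command. The -resgrp flag is described earlier in this chapter; the -perfgrp flag, the IDs RG1 and PG19, and the volume ID 0910 are assumptions for illustration only. Confirm the exact parameters with help mkckdvol on your code level.

dscli> mkckdvol -extpool P2 -cap 10017 -resgrp RG1 -perfgrp PG19 -name ITSO_TENANT1 0910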

14.4.7 Easy Tier
IBM System Storage DS8000 Easy Tier is designed to automate data placement throughout the storage system disks pool. It enables the system, automatically and without disruption to applications, to relocate data (at the extent level) across up to three drive tiers. The process is fully automated. Easy Tier also automatically rebalances extents among ranks within the same tier, removing workload skew between ranks, even within homogeneous and single-tier extent pools.

For more information about Easy Tier, see IBM System Storage DS8000 Easy Tier, REDP-4667, which is available at this website:
http://www.redbooks.ibm.com/abstracts/redp4667.html?Open

14.5 Metrics with DS CLI
This section describes some command examples from the DS CLI interface that analyze the performance metrics from different levels in a storage unit. The recommended IBM tool for performance monitoring is the IBM Tivoli Storage Productivity Center.

Performance metrics: All performance metrics are an accumulation since the most recent counter wrap or counter reset. The performance counters are reset on the following occurrences:
When the storage unit is turned on.
When a server fails and the failover and fallback sequence is performed.

Example 14-66 and Example 14-67 on page 427 show examples of the showfbvol and showckdvol commands. These commands display detailed properties for an individual volume and include a -metrics parameter that returns the performance counter values for a specific volume ID.

Important: The help command shows specific information about each of the metrics.

Example 14-66 Metrics for a specific fixed block volume
dscli> showfbvol -metrics f000
ID F000
normrdrqts 2814071
normrdhits 2629266
normwritereq 2698231
normwritehits 2698231
seqreadreqs 1231604
seqreadhits 1230113
seqwritereq 1611765
seqwritehits 1611765
cachfwrreqs 0
cachfwrhits 0
cachefwreqs 0
cachfwhits 0
inbcachload 0
bypasscach 0
DASDtrans 440816
seqDASDtrans 564977

cachetrans 2042523
NVSspadel 110897
normwriteops 0
seqwriteops 0
reccachemis 79186
qwriteprots 0
CKDirtrkac 0
CKDirtrkhits 0
cachspdelay 0
timelowifact 0
phread 1005781
phwrite 868125
phbyteread 470310
phbytewrite 729096
recmoreads 232661
sfiletrkreads 0
contamwrts 0
PPRCtrks 5480215
NVSspallo 4201098
timephread 1319861
timephwrite 1133527
byteread 478521
bytewrit 633745
timeread 158019
timewrite 851671
zHPFRead 0
zHPFWrite 0
zHPFPrefetchReq 0
zHPFPrefetchHit 0
GMCollisionsSidefileCount 0
GMCollisionsSendSyncCount 0

Example 14-67 Metrics for a specific CKD volume
dscli> showckdvol -metrics 7b3d
ID 7B3D
normrdrqts 9
normrdhits 9
normwritereq 0
normwritehits 0
seqreadreqs 0
seqreadhits 0
seqwritereq 0
seqwritehits 0
cachfwrreqs 0
cachfwrhits 0
cachefwreqs 0
cachfwhits 0
inbcachload 0
bypasscach 0
DASDtrans 201
seqDASDtrans 0
cachetrans 1
NVSspadel 0
normwriteops 0

Example 14-68 Metrics for a specific Rank ID byteread bytewrit Reads Writes timeread timewrite dataencrypted R14 87595588 50216632 208933399 126759118 204849532 408989116 no 428 IBM System Storage DS8870 Architecture and Implementation . This command generates two types of reports. One report displays the detailed properties of a specified rank and the other displays the performance metrics of a specified rank by using the -metrics parameter.seqwriteops reccachemis qwriteprots CKDirtrkac CKDirtrkhits cachspdelay timelowifact phread phwrite phbyteread phbytewrite recmoreads sfiletrkreads contamwrts PPRCtrks NVSspallo timephread timephwrite byteread bytewrit timeread timewrite zHPFRead zHPFWrite zHPFPrefetchReq zHPFPrefetchHit GMCollisionsSidefileCount GMCollisionsSendSyncCount 0 0 0 9 9 0 0 201 1 49 0 0 0 0 0 0 90 0 0 0 0 0 0 0 0 0 0 0 Example 14-68 shows an example of the showrank command.

Example 14-69 shows an example of the showioport command. This command displays the properties of a specified I/O port and the performance metrics by using the parameter -metrics. Monitoring the I/O ports is one of the most important tasks of the system administrator. Here is the point where the HBAs, SAN, and DS8000 exchange information. If one of these components has problems because of hardware or configuration issues, all of the other components also are affected.

Example 14-69 Metrics for a specific I/O port
Date/Time: September 21, 2012 1:45:35 PM CEST IBM DSCLI Version: 7.7.0.566 DS: IBM.2107-75Z
ID I0000
Date 09/21/2012 13:45:38 CEST
byteread (FICON/ESCON) 0
bytewrit (FICON/ESCON) 0
Reads (FICON/ESCON) 0
Writes (FICON/ESCON) 0
timeread (FICON/ESCON) 0
timewrite (FICON/ESCON) 0
bytewrit (PPRC) 0
byteread (PPRC) 0
Writes (PPRC) 0
Reads (PPRC) 0
timewrite (PPRC) 0
timeread (PPRC) 0
byteread (SCSI) 7374196
bytewrit (SCSI) 2551122
Reads (SCSI) 27276351
Writes (SCSI) 9224918
timeread (SCSI) 161324
timewrite (SCSI) 93048
LinkFailErr (FC) 1
LossSyncErr (FC) 5
LossSigErr (FC) 0
PrimSeqErr (FC) 0
InvTxWordErr (FC) 15
CRCErr (FC) 0
LRSent (FC) 0
LRRec (FC) 0
IllegalFrame (FC) 0
OutOrdData (FC) 0
OutOrdACK (FC) 0
DupFrame (FC) 0
InvRelOffset (FC) 0
SeqTimeout (FC) 0
BitErrRate (FC) 0
RcvBufZero (FC) 0
SndBufZero (FC) 9
RetQFullBusy (FC) 0
ExchOverrun (FC) 0
ExchCntHigh (FC) 0
ExchRemAbort (FC) 0
SFPRxPower (FC) 0
SFPTxPower (FC) 0
CurrentSpeed (FC) 8 Gb/s
%UtilizeCPU (FC) 46 Average

Example 14-69 on page 429 shows the many important metrics that are returned by the command. It provides the performance counters of the port and the FC link error counters. The FC link error counters are used to determine the health of the overall communication.

The following groups of errors point to specific problem areas:
Any non-zero figure in the counters LinkFailErr, LossSyncErr, LossSigErr, and PrimSeqErr indicates that the SAN probably has HBAs attached to it that are unstable. These HBAs log in and log out to the SAN and create name server congestion and performance degradation.
If the InvTxWordErr counter increases by more than 100 per day, the port is receiving light from a source that is not an SFP. The cable that is connected to the port is not covered at the end or the I/O port is not covered by a cap.
The CRCErr counter shows the errors that arise between the last sending SFP in the SAN and the receiving port of the DS8000. These errors do not appear in any other place in the data center. You must replace the cable that is connected to the port or the SFP in the SAN.
The link reset counters LRSent and LRRec also suggest that there are hardware defects in the SAN, and these errors must be investigated.
The counters IllegalFrame, OutOrdData, OutOrdACK, DupFrame, InvRelOffset, SeqTimeout, and BitErrRate point to congestions in the SAN and can be influenced only by configuration changes in the SAN.

For the DS8870, some metrics counters were added to the showioport command. The %UtilizeCPU metric for the CPU utilization of the HBA might be of interest, as might the CurrentSpeed that the port is actually using.

14.6 Private network security commands
There are commands available that are used to manage network security on the DS8000 by using the DS CLI. The following private network security commands are available:

chaccess: The chaccess command allows you to change the following settings of an HMC:
– Enable/Disable the command line shell access to the HMC via the Internet or a VPN connection.
– Enable/Disable the Web User Interface (WUI) access on the HMC via the Internet or a VPN connection.
– Enable/Disable the modem dial-in and VPN initiation to the HMC.

Important: Only users with administrator authority can access this command.

Important: This command affects service access only and does not change access to the machine via the DS CLI or DS Storage Manager.

lsaccess: The lsaccess command displays the access settings of an HMC.
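A prudent sequence before changing anything is to display the current settings and then review the built-in help for the change syntax, because the exact chaccess parameters can differ by code level. The following sketch shows only the commands; the output is omitted here.

dscli> lsaccess
dscli> help chaccess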

14.7 Copy Services commands
There are many more DS CLI commands available. Many of these commands deal with the management of Copy Services, such as the FlashCopy, Metro Mirror, and Global Mirror commands. These commands are not discussed in this chapter. For more information about these commands, see the following publications:
IBM System Storage DS8000: Copy Services for Open Systems, SG24-6788
IBM System Storage DS8000: Copy Services for IBM System z, SG24-6787
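As a brief illustration of the command style, the following hedged sketch establishes a FlashCopy relationship between a source and a target volume. The volume pair 0080:0081 reuses volume IDs from the earlier examples purely for illustration; see the publications above for the full mkflash syntax and its many options.

dscli> mkflash 0080:0081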


Part 4 Maintenance and upgrades

In this part of the book, we provide useful information about maintenance and upgrades. The following topics are included:
Licensed machine code
Monitoring with Simple Network Management Protocol
Remote support
DS8870 Capacity upgrades and CoD

© Copyright IBM Corp. 2013. All rights reserved.


Chapter 15. Licensed machine code

In this chapter, we describe considerations that are related to the planning and installation of new licensed machine code (LMC) bundles on the IBM System Storage DS8000. The overall process has not changed for the DS8870. However, there are several enhancements regarding power system firmware updates that are described.

This chapter covers the following topics:
How new microcode is released
Bundle installation
Concurrent and non-concurrent updates
Code updates
Host adapter firmware updates
Loading the code bundle
Post-installation activities
Summary

© Copyright IBM Corp. 2013. All rights reserved.

15.1 How new microcode is released
The various components of the DS8870 system use firmware that can be updated as new releases become available. These components include device adapters (DA), host adapters (HA), power subsystems that are Direct Current Uninterruptible Power Supply (DC-UPS) and Rack Power Control (RPC), and Fibre Channel Interface Cards (FCIC). In addition, the microcode and internal operating system that run on the HMCs and each Central Electronics Complex (CEC) can be updated. As IBM continues to develop the DS8870, new functional features also are released through new Licensed Machine Code (LMC) levels.

When IBM releases new microcode for the DS8870, it is released in the form of a bundle. The term bundle is used because a new code release can include updates for various DS8870 components. These updates are tested together, and then the various code packages are bundled together into one unified release. In general, when we refer to what code level is used on a DS8870, we use the term bundle. Components within the bundle each include their own revision levels.

For the DS8870, the following naming convention of bundles, PR.MM.FFF.E, is used:
P: Product (8 = DS8870)
R: Release Major (X)
MM: Release Minor (xx)
FFF: Fix Level (xxx)
E: EFIX level (0 is base, and 1.n is the interim fix build above base level)

For more information about a DS8870 cross-reference table of code bundles, see this website:
http://www.ibm.com/support/entry/portal/documentation/hardware/system_storage/disk_systems/enterprise_storage_servers/ds8870

At this website, click DS8870 Code Bundle Information under Product documentation. It should be updated as new bundles are released. The Cross-Reference Table shows the levels of code for Release 87.0.175.0, which is current at the time of this writing.

The naming convention is shown in Example 15-1.

Example 15-1 BUNDLE level information
For BUNDLE 87.0.175.0:
Product DS8870
Release Major 7
Release Minor 0
Fix Level 175
EFIX level 0

If the DSCLI is used, you can obtain the CLI and LMC code level information by using the ver command. The ver command uses the following parameters and displays the versions of the Command-Line Interface, Storage Manager, and licensed machine code:
-s (Optional): The -s parameter displays the version of the Command-Line Interface program. You cannot use the -s and -l parameters together.
-l (Optional): The -l parameter displays the versions of the Command-Line Interface, Storage Manager, and licensed machine code. You cannot use the -l and -s parameters together.
-cli (Optional): Displays the version of the Command-Line Interface program.
-stgmgr (Optional): Displays the version of the Storage Manager. This ID is not the GUI (Storage Manager GUI). This ID is related to Hardware Management Console (HMC) code bundle information.
-lmc (Optional): Displays the version of the licensed machine code (LMC).

Version numbers are in the format version.release.modification.fixlevel. See Example 15-2.

Example 15-2 DSCLI version command
dscli> ver -l
DSCLI 7.7.0.587
StorageManager 7.7.5.0.20120830.1
================Version=================
Storage Image LMC
==========================
IBM.2107-75ZA571 7.7.0.587

The LMC level also can be retrieved from the DS Storage Manager. Click System Status → Storage Image → Properties. See Figure 15-1.

Figure 15-1 LMC level under DS Storage Manager

15.2 Bundle installation
Important: LMC is always provided and installed by an IBM Service Representative. Installing a new LMC is not a client-serviceable task.

The Bundle package contains the following new levels of code that are updated:
HMC Code Levels
– HMC OS/Managed System Base
– DS Storage Manager
– CIM Agent Version
Managed System Code Levels
PTF Code Levels
Storage Facility Image Code Levels
Host Adapter Code Levels
Device Adapter Code Level
IO Enclosure Code Level
Power Code Levels
Fibre Channel Interface Card Code Levels
Storage Enclosure Power Supply Unit Code Levels
DDM Firmware Code Level

It is likely that a new Bundle includes updates for the following components:
Linux OS for the HMC
AIX OS for the CECs
Microcode for HMC and CECs
Microcode or Firmware for HAs

A new bundle might include updates for the following components:
Firmware for Power subsystem (DC-UPS and RPC)
Firmware for Storage DDMs
Firmware for Fibre Channel interface cards
Firmware for Device Adapters
Firmware for Hypervisor on CEC

Code Distribution and Activation (CDA) Preload is the current method that is used to perform Concurrent Code Load distribution. By using CDA Preload, the user performs every non-impacting Concurrent Code Load step for a code load by inserting the DVD into the primary HMC drive or running a network acquire that uses FTP or FTPS (Secure File Transfer Protocol Secure) of the wanted code level. Usually, there is a Prerequisites or Attention Must Read section in the Microcode Update Instructions. If there are any prerequisite or other considerations to take into account, your IBM service representative will inform you during the previous planning phase.

After the CDA preload is started, the following steps are performed automatically:
1. Download the release bundle.
2. Prepare the HMC with any code update-specific fixes.
3. Distribute the code updates to the LPAR and install them to an alternative Base Operating System (BOS) repository.
4. Perform scheduled precheck scans until the distributed code is ready to be activated by the user, for up to 11 days.

Important: LMC bundles are not available for users to download. Only IBM Support Representatives have the authority to use FTP or FTPS in the HMC to acquire a release bundle from the network. IBM Service Representatives also can download the bundle to their notebook and then load it on the HMC.

Any time after the preload is completed, when the user logs in to the primary HMC, they are guided automatically to correct any serviceable events that might be open.

The installation process involves the following stages:
1. Update code in the primary HMC (HMC1). If a dual HMC configuration is used, the code is acquired and applied in the secondary HMC (HMC2) that is being retrieved from the primary HMC (HMC1).
2. At times, new DC-UPS and RPC firmware is released. New firmware can be loaded into each RPC card and DC-UPS directly from the HMC. The DS8870 includes the following enhancements about power subsystem microcode update for DC-UPS and RPC cards (for more information, see Chapter 4, “RAS on the power subsystem” on page 93):
– During DC-UPS firmware update, current power state is maintained, so the DC-UPS remains operational during a microcode update.
– During RPC firmware update, the RPC card is available most of the time. It is not available only during a short period.
3. Perform updates to the CEC operating system (currently AIX V7.1) and updates to the internal LMC, which are performed individually. This process causes each CEC to fail over its logical subsystems to the alternative CEC. This process also updates the firmware that is running in each device adapter that is owned by that CEC. For more information, see Chapter 4, “CEC failover and failback” on page 69.
4. At times, new firmware for the Hypervisor, service processor, system planar, and I/O enclosure planars is released. This firmware can be loaded into each device directly from the HMC. Certain updates do not require this step. Activation of this firmware might require a shutdown and reboot of each CEC individually, in which case the updates cause each CEC to fail over its logical subsystems to the alternative CEC, or it might occur without processor reboots.
5. Perform updates to the host adapters. For DS8870 host adapters, the impact of these updates on each adapter is less than 2.5 seconds and should not affect connectivity. If an update takes longer, the multipathing software on the host or the Control Unit-Initiated Reconfiguration (CUIR) directs I/O to another host adapter. If a host is attached with only a single path, connectivity is lost. For more information about host attachments, see Chapter 4, “Host connections” on page 75.
6. DDM firmware update is a concurrent process in the DS8870 series family. It is important to check for the latest DDM firmware because there are more updates that come with new bundle releases.

The overall process is also known as Concurrent Code Load (CCL).

In previous bundles, it was necessary to start the power update from an option in the HMC when the other elements were already updated. From bundle 87.0.x.x, power subsystem firmware update activation (RPC cards and DC-UPSs) is included in the same general task that is started at CDA. This option remains available when only a power subsystem update is required.

Although this installation process might seem complex, it does not require a great deal of user intervention. The IBM Service Representative normally starts the CDA process and then monitors its progress by using the HMC. An example of a CDA progress window that shows the firmware that is being activated in the different system components is shown in Figure 15-2.

Figure 15-2 CDA progress window example

Important: An upgrade of the DS8870 microcode might require that you upgrade the DS CLI on workstations. However, this task should not be necessary. Check with your IBM Service Representative regarding the description and contents of the release bundle.

15.3 Concurrent and non-concurrent updates
The DS8870 allows for concurrent microcode updates. Code updates can be installed with all attached hosts running and with no interruption to your business applications. It is also possible to install microcode update bundles non-concurrently, with all attached hosts shut down. This method is usually only employed at DS8870 installation time.

15. Normally. Run the updated pre-verification test. Remember that if there is a large gap between the present and destination level of bundles. Always consult with your IBM Service Representative regarding proposed code load schedules. The actual time that is required for the concurrent code load varies based on the bundle that you are currently running and the bundle to which you are updating. 15. Chapter 15. and the next version). 2. The HMC can hold up to six versions of code.5 Host adapter firmware updates One of the final steps in the concurrent code load process is updating the host adapters. This procedure is shown in the following example: Thursday 1. Best practice: Many clients with multiple DS8000 systems follow the updating schedule that is detailed in this chapter. Also. They can keep operating during the host adapter update because the update is so fast. there is a notification and a confirmation is needed. Saturday Update the SFI.2 days before the rest of the bundle is applied. then the newest test can be run. Update the HMCs to the new code bundle. check multipathing drivers and SAN switch firmware levels for current levels at regular intervals. If problems are detected. the active version. the update process is concurrent to the attached hosts. Most organizations should plan for two code updates per year. 4. For DS8870 Fibre Channel cards. Your IBM Service Representative can assist you in this situation. some DSCLI commands (specially copy services related) might not be executed until SFI is updated to the same level of the HMC. Before you update the CEC operating system and microcode. Copy or download the new code bundle to the HMCs. there are one or two days before the scheduled code installation window date to correct them. and hosts that do not have multipathing software do not need to be shut down during the update. 3. a pre-verification test is run to ensure that no conditions exist that must be corrected. Resolve any issues that were raised by the pre-verification test. This fast update means that single path hosts. Additionally. The HMC code update installs the latest version of the pre-verification test. This technique allows the cards to switch to the new firmware in less than 2 seconds. Each CEC can hold three versions of code (the previous version. The Fibre Channel cards use a technique that is known as adapter fast-load. every code bundle contains new host adapter code. wherein the HMC is updated 1 .4 Code updates The microcode that runs on the HMC normally is updated as part of a new code bundle. regardless of whether they are used for open systems (FC) attachment or System z (FICON) attachment. Licensed machine code 441 . no SDD path management should be necessary. hosts that boot from SAN. Interactive HA also can be enabled. which means that before HA cards are updated.

Remote Mirror and Copy path considerations
For Remote Mirror and Copy paths that use Fibre Channel ports, there are no special considerations. The ability to perform a fast-load means that no interruption occurs to the Remote Mirror operations.

Control Unit-Initiated Reconfiguration
Control Unit-Initiated Reconfiguration (CUIR) prevents loss of access to volumes in System z environments because of incorrect or wrong path handling. This function automates channel path management in System z environments in support of selected DS8870 service actions. CUIR is available for the DS8870 when operated in the z/OS and z/VM environments. The CUIR function automates channel path vary on and vary off actions to minimize manual operator intervention during selected DS8870 service actions.

CUIR allows the DS8870 to request that all attached system images set all paths that are required for a particular service action to the offline state. System images with the appropriate level of software support respond to these requests by varying off the affected paths and notifying the DS8870 subsystem that the paths are offline, or that it cannot take the paths offline. CUIR reduces manual operator intervention and the possibility of human error during maintenance actions. CUIR also reduces the time that is required for the maintenance window. This feature is useful in environments in which there are many systems that are attached to a DS8870.

15.6 Loading the code bundle
The DS8870 code bundle installation is performed by the IBM Service Representative. Contact your IBM Service Representative to review and arrange the required services.

15.7 Post-installation activities
After a new code bundle is installed, you might need to perform the following tasks:
1. Upgrade the DS CLI of external workstations. For most new release code bundles, there is a corresponding new release of the DS CLI. Make sure that you upgrade to the new version of the DS CLI to take advantage of any improvements.
2. Verify the connectivity from each DS CLI workstation to the DS8870.
3. Verify the DS Storage Manager connectivity from the Tivoli Storage Productivity Center to the DS8870.
4. Verify the connectivity from any stand-alone Tivoli Storage Productivity Center Element Manager to the DS8870.
5. Verify the connectivity from the DS8870 to all Tivoli Key Lifecycle Manager Key Servers in use.

A current version of the DS CLI can be downloaded from this website:
http://www.ibm.com/support/entry/portal/documentation/hardware/system_storage/disk_systems/enterprise_storage_servers/ds8870

When needed, you can use the following FTP site to retrieve previous versions of the DS CLI:
ftp://ftp.software.ibm.com/storage/ds8000/updates/DS8K_Customer_Download_Files/
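One simple way to verify the DS CLI connectivity after such an upgrade is to open a session against the HMC and query the code levels, as in the following sketch. The connection values in angle brackets are placeholders.

dscli -hmc1 <hmc_ip_address> -user <user_id> -passwd <password>
dscli> ver -l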

15.8 Summary
IBM might release changes to the DS8870 Licensed Machine Code. These changes might include code fixes and feature updates relevant to the DS8870. These updates and the information about them are documented in the DS8870 Code Cross-Reference website. You can find this information for a specific bundle under the Bundle Release Note information section on the website.


Chapter 16. Monitoring with Simple Network Management Protocol

This chapter provides information about the Simple Network Management Protocol (SNMP) notifications and messages for the IBM System Storage DS8000 series.

This chapter covers the following topics:
Simple Network Management Protocol overview
SNMP notifications
SNMP configuration with the HMC
SNMP configuration with the DSCLI

© Copyright IBM Corp. 2013. All rights reserved.

16.1 Simple Network Management Protocol overview
SNMP is an application layer network protocol that allows communication between SNMP managers and SNMP agents by using TCP/IP for a transport layer. SNMP is an industry-standard set of functions for monitoring and managing TCP/IP-based networks. With SNMP, a system can be monitored, and based on SNMP traps, event management can be automated.

SNMP includes a protocol, a database specification, and a set of data objects. A set of data objects forms a Management Information Base (MIB). SNMP provides a standard MIB that includes information, such as IP addresses and the number of active TCP connections. The actual MIB definitions are encoded into the agents that are running on a system. MIB-2 is the Internet standard MIB that defines over 100 TCP/IP specific objects, including the following configuration and statistical information:
Information about interfaces
Address translation
IP, Internet-control message protocol (ICMP), TCP, and User Datagram Protocol (UDP)

SNMP can be extended by using the SNMP Multiplexing protocol (SMUX protocol) to include enterprise-specific MIBs that contain information that is related to a specific environment or application. Objects that are contained in the MIB are typically related to the management of the network-attached units. The objects can be product unique, and can be used to sense information about the product or to control operation of the product. A management agent (a SMUX peer daemon) retrieves and maintains information about the objects that are defined in its MIB and passes this information to a specialized network monitor or network management station (NMS).

The SNMP protocol defines two terms (agent and manager) instead of the client and server terms, which are used in many other TCP/IP protocols. In this application, the SNMP manager typically is an application program (for example, IBM NetView®) that runs on a server in the customer environment. Typically, the SNMP manager provides mechanisms to implement automation code that can react to information that is communicated through the SNMP interface to provide an appropriate response to certain situations that are described by such communication.

16.1.1 SNMP agent
An SNMP agent is a daemon process that provides access to the MIB objects on IP hosts on which the agent is running. The SNMP agents are on various network-attached units in the customer environment. The agent can receive SNMP get or SNMP set requests from SNMP managers and can send SNMP trap requests to SNMP managers. Agents send traps to the SNMP manager to indicate that a particular condition exists on the agent system, such as the occurrence of an error. In addition, the SNMP manager generates traps when it detects status changes or other unusual conditions while polling network objects.

16.1.2 SNMP manager
An SNMP manager can be implemented in two ways: as a simple command tool that can collect information from SNMP agents, or it can be composed of multiple daemon processes and database applications. This type of complex SNMP manager provides you with monitoring functions that use SNMP. It typically has a GUI for operators. The SNMP manager gathers information from SNMP agents and accepts trap requests that are sent by SNMP agents.

16.1.3 SNMP trap
A trap is a message that is sent from an SNMP agent to an SNMP manager without a specific request from the SNMP manager. SNMP defines six generic types of traps and allows definition of enterprise-specific traps. The trap structure conveys the following information to the SNMP manager:
Agent’s object that was affected
IP address of the agent that sent the trap
Event description (a generic trap or enterprise-specific trap, including trap number)
Time stamp
Optional enterprise-specific trap identification
List of variables that describe the trap

16.1.4 SNMP communication
The SNMP manager sends SNMP get, get-next, or set requests to SNMP agents, which listen on UDP port 161. The agents send back a reply to the manager. You can gather various information about the specific IP hosts by sending the SNMP get and get-next requests, and can update the configuration of IP hosts by sending the SNMP set request. The SNMP agent can be implemented on any IP host, such as UNIX workstations, routers, and network appliances. The SNMP agent can send SNMP trap requests to SNMP managers, which listen on UDP port 162. The SNMP trap requests sent from SNMP agents can be used to send warning, alert, or error notification messages to SNMP managers.

You can configure an SNMP agent to send SNMP trap requests to multiple SNMP managers. The characteristics of SNMP architecture and communication are shown in Figure 16-1.

Figure 16-1 SNMP architecture and communication

16.1.5 SNMP Requirements
All SNMP implementations should meet the following functional requirements that are defined by this section:
SNMP trap generation should be operative whenever events that the traps indicate can occur. This configuration should be true independently of the functionality of any other ESSNet GUI or API that is provided by the product.
Consistency group traps (200 and 201) must be prioritized above all other traps and must be surfaced in less than 2 seconds from the real-time incident.
Certain controls for SNMP traps might be provided. Their state is reflected in the MIB. If these controls cannot be altered through the SNMP interface, they should be alterable through the Integrated Configuration Assistant Tool GUI or service interface.
Any changes to the MIB that are associated with a trap should be released concurrently with the supported trap.

16.1.6 Generic SNMP security
The SNMP protocol uses the community name for authorization. In most cases, a community name is sent in a plain-text format between the SNMP agent and the manager. Certain SNMP implementations include more security features, such as restrictions on the accessible IP addresses. Most SNMP implementations use the default community name public for a read-only community and private for a read/write community.

Therefore, you should be careful about the SNMP security. At the very least, do not allow access to hosts that are running the SNMP agent from networks or IP hosts that do not necessarily require access.

Important: You might want to physically secure the network to which you send SNMP packets by using a firewall because community strings are included as plain text in SNMP packets.

16.1.7 Message Information Base
The objects, which you can get or set by sending SNMP get or set requests, are defined as a set of databases that are called the MIB. The structure of the MIB is defined as an Internet standard in RFC 1155; the MIB forms a tree structure. Most hardware and software vendors provide you with extended MIB objects to support their own requirements. The SNMP standards allow this extension by using the private subtree, called enterprise-specific MIB. Because each vendor has a unique MIB subtree under the private subtree, there is no conflict among vendors’ original MIB extensions.

16.1.8 SNMP trap request
An SNMP agent can send SNMP trap requests to SNMP managers to inform them about the change of values or status on the IP host where the agent is running. There are seven predefined types of SNMP trap requests, as shown in Table 16-1.

Table 16-1 SNMP trap request types
Trap type              Value  Description
coldStart              0      Restart after a crash.
warmStart              1      Planned restart.
linkDown               2      Communication link is down.
linkUp                 3      Communication link is up.
authenticationFailure  4      Invalid SNMP community string was used.
egpNeighborLoss        5      EGP neighbor is down.
enterpriseSpecific     6      Vendor-specific event happened.

A trap message contains pairs of an object identifier (OID) and a value, which notify the cause of the trap message. You can also use type 6, the enterpriseSpecific trap type, when you must send messages that do not fit other predefined trap types, for example, DISK I/O error and application down. You can also set an integer value field that is called Specific Trap on your trap message.

16.1.9 DS8000 SNMP configuration
SNMP for the DS8000 is designed so that the DS8000 sends traps only if there is a notification. The traps can be sent to a defined IP address. SNMP alert traps provide information about problems that the storage unit detects. You or the service provider must perform corrective action for the trap-related problems.

The DS8000 does not include an installed SNMP agent that can respond to SNMP polling. The default Community Name parameter is set to public.

The management server that is configured to receive the SNMP traps receives all of the generic trap 6 and specific trap 3 messages, which are sent in parallel with the Call Home to IBM.

Before SNMP is configured for the DS8000, you are required to get the destination address for the SNMP trap and the port information on which the Trap Daemon listens.

Standard port: The standard port for SNMP traps is port 162.

16.2 SNMP notifications
The HMC of the DS8000 sends an SNMPv1 trap in the following cases:
A serviceable event was reported to IBM by using Call Home.
An event occurred in the Copy Services configuration or processing.

A serviceable event is posted as a generic trap 6 specific trap 3 message. The trap holds the following information:
Serial number of the DS8000
Event number that is associated with the manageable events from the HMC
Reporting Storage Facility Image (SFI)
System reference code (SRC)
Location code of the part that is logging the event

For reporting Copy Services events, generic trap 6 and specific traps 100, 101, 102, 200, 202, 210, 211, 212, 213, 214, 215, 216, or 217 are sent.

16.2.1 Serviceable event that uses specific trap 3
In Example 16-1, we see the contents of generic trap 6 specific trap 3. The SNMP trap is sent in parallel with a Call Home for service to IBM. The specific trap 3 is the only event that is sent for serviceable events. For open events in the event log, a trap is sent every eight hours until the event is closed. For more information about the System Reference Codes (SRCs), see this website:
http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000sv/index.jsp
At this site, select Messages and codes → List of system reference codes and firmware codes.

Example 16-1 SNMP special trap 3 of a DS8000
Manufacturer=IBM
ReportingMTMS=2107-922*7503460
ProbNm=345
LparName=null
FailingEnclosureMTMS=2107-922*7503460
SRC=10001510
EventText=2107 (DS 8000) Problem
Fru1Loc=U1300.001.1300885
Fru2Loc=U1300.001.1300885-P1

16.2.2 Copy Services event traps
For state changes in a remote Copy Services environment, 13 traps are implemented. The traps 1xx are sent for a state change of a physical link connection. The 2xx traps are sent for state changes in the logical Copy Services setup. For all of these events, no Call Home is generated and IBM is not notified.

This chapter describes only the messages and the circumstances when traps are sent by the DS8000. For more information about these functions and terms, see IBM System Storage DS8000: Copy Services for IBM System z, SG24-6787, and IBM System Storage DS8000: Copy Services for Open Systems, SG24-6788.

Physical connection events
Within the trap 1xx range, a state change of the physical links is reported. The trap is sent if the physical remote copy link is interrupted. The Link trap is sent from the primary system. The PLink and SLink columns are only used by the 2105 ESS disk unit.

If one or several links (but not all links) are interrupted, a trap 100 (as shown in Example 16-2) is posted and indicates that the redundancy is degraded. The RC column in the trap represents the return code for the interruption of the link. The return codes are listed in Table 16-2 on page 452.

Example 16-2 Trap 100: Remote Mirror and Copy links degraded
PPRC Links Degraded
UNIT: Mnf Type-Mod SerialNm LS
PRI: IBM 2107-922 75-20781 12
SEC: IBM 2107-9A2 75-ABTV1 24
Path: Type PP PLink SP SLink RC
1: FIBRE 0143 XXXXXX 0010 XXXXXX 15
2: FIBRE 0213 XXXXXX 0140 XXXXXX OK

If all of the links are interrupted, a trap 101 (as shown in Example 16-3) is posted. This event indicates that no communication between the primary and the secondary system is possible.

Example 16-3 Trap 101: Remote Mirror and Copy links are inoperable
PPRC Links Down
UNIT: Mnf Type-Mod SerialNm LS
PRI: IBM 2107-922 75-20781 10
SEC: IBM 2107-9A2 75-ABTV1 20
Path: Type PP PLink SP SLink RC
1: FIBRE 0143 XXXXXX 0010 XXXXXX 17
2: FIBRE 0213 XXXXXX 0140 XXXXXX 17
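When such a trap arrives, the current state of the remote copy paths can also be checked from the DS CLI. The following one-line sketch is illustrative only; it assumes the lspprcpath command and reuses the source LSS ID 10 from the example above.

dscli> lspprcpath 10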

After the DS8000 can communicate again by using any of the links, trap 102 (as shown in Example 16-4) is sent after one or more of the interrupted links are available again.

Example 16-4 Trap 102: Remote Mirror and Copy links are operational
PPRC Links Up
UNIT: Mnf Type-Mod SerialNm LS
PRI: IBM 2107-9A2 75-ABTV1 21
SEC: IBM 2107-000 75-20781 11
Path: Type PP PLink SP SLink RC
1: FIBRE 0010 XXXXXX 0143 XXXXXX OK
2: FIBRE 0140 XXXXXX 0213 XXXXXX OK

The Remote Mirror and Copy return codes are listed in Table 16-2.

Table 16-2 Remote Mirror and Copy return codes
Return code  Description
02  Initialization failed. ESCON link reject threshold exceeded when attempting to send ELP or RID frames.
03  Timeout. No reason available.
04  There are no resources available in the primary storage unit for establishing logical paths because the maximum numbers of logical paths were established.
05  There are no resources available in the secondary storage unit for establishing logical paths because the maximum numbers of logical paths were established.
06  There is a secondary storage unit sequence number, or logical subsystem number, mismatch.
07  There is a secondary LSS subsystem identifier (SSID) mismatch, or failure of the I/O that collects the secondary information for validation.
08  The ESCON link is offline. This condition is caused by the lack of light detection that is coming from a host, peer, or switch.
09  The establish failed. It is tried again until the command succeeds or a remove paths command is run for the path. The attempt-to-establish state persists until the establish path operation succeeds or the remove remote mirror and copy paths command is run for the path.
0A  The primary storage unit port or link cannot be converted to channel mode if a logical path is already established on the port or link. The establish paths operation is not tried again within the storage unit.
10  Configuration error. The source of the error is caused by one of the following conditions:
    – The specification of the SA ID does not match the installed ESCON cards in the primary controller.
    – For ESCON paths, the secondary storage unit destination address is zero and an ESCON Director (switch) was found in the path.
    – For ESCON paths, the secondary storage unit destination address is not zero and an ESCON director does not exist in the path. The path is a direct connection.
14  The Fibre Channel path link is down.
15  The maximum number of Fibre Channel path retry operations was exceeded.

16  The Fibre Channel path secondary adapter is not Remote Mirror and Copy capable. This incapability could be caused by one of the following conditions:
    – The secondary adapter is not configured properly or does not have the current firmware installed.
    – The secondary adapter is already a target of 32 logical subsystems (LSSs).
17  The secondary adapter Fibre Channel path is not available.
18  The maximum number of Fibre Channel path primary login attempts was exceeded.
19  The maximum number of Fibre Channel path secondary login attempts was exceeded.
1A  The primary Fibre Channel adapter is not configured properly or does not have the correct firmware level installed.
1B  The Fibre Channel path was established but degraded because of a high failure rate.
1C  The Fibre Channel path was removed because of a high failure rate.

Remote Mirror and Copy events
If you configured Consistency Groups and a volume within this Consistency Group is suspended because of a write error to the secondary device, trap 200 is sent, as shown in Example 16-5. One trap per LSS, which is configured with the Consistency Group option, is sent. This trap can be handled by automation software, such as Tivoli Storage Productivity Center, to freeze this Consistency Group.

The trap contains the serial number (SerialNm) of the primary and secondary machine, the logical subsystem or LSS (LS), and the logical device (LD). The SR column in the trap represents the suspension reason code, which explains the cause of the error that suspended the remote mirror and copy group. Suspension reason codes are listed in Table 16-3 on page 457.

Example 16-5 Trap 200: LSS Pair Consistency Group Remote Mirror and Copy pair error
LSS-Pair Consistency Group PPRC-Pair Error
UNIT: Mnf Type-Mod SerialNm LS LD SR
PRI: IBM 2107-922 75-03461 56 84 08
SEC: IBM 2107-9A2 75-ABTV1 54 84

Trap 202, as shown in Example 16-6, is sent if a Remote Copy Pair goes into a suspend state. To avoid SNMP trap flooding, the number of SNMP traps for the LSS is throttled. The complete suspended pair information is represented in the summary. The last row of the trap represents the suspend state for all pairs in the reporting LSS. The suspended pair information contains a hexadecimal string of 64 characters. By converting this hex string into binary code, each bit represents a single device. If the bit is 1, then the device is suspended; otherwise, the device is still in full duplex mode.

Example 16-6 Trap 202: Primary Remote Mirror and Copy devices on the LSS were suspended because of an error
Primary PPRC Devices on LSS Suspended Due to Error
UNIT: Mnf Type-Mod SerialNm LS LD SR
PRI: IBM 2107-922 75-20781 11 00 03
SEC: IBM 2107-9A2 75-ABTV1 21 00
Start: 2005/11/14 09:48:05 CST
PRI Dev Flags (1 bit/Dev, 1=Suspended):
C000000000000000000000000000000000000000000000000000000000000000
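When the condition behind a suspension is resolved, the suspended pairs are typically resynchronized with the mkpprc command (see also Table 16-3 on page 457). The following one-line sketch is illustrative only: the remote storage image ID reuses a serial number from the examples, the volume pair 1100:2100 is a placeholder, and the full syntax is covered in the Copy Services publications.

dscli> mkpprc -remotedev IBM.2107-75ABTV1 -type mmir 1100:2100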

Trap 210, as shown in Example 16-7, is sent when a Consistency Group in a Global Mirror environment was successfully formed.

Example 16-7 Trap 210: Global Mirror initial Consistency Group successfully formed
Asynchronous PPRC Initial Consistency Group Successfully Formed
UNIT: Mnf Type-Mod SerialNm
IBM 2107-922 75-20781
Session ID: 4002

As shown in Example 16-8, trap 211 is sent if the Global Mirror setup is in a severe error state in which no attempts are made to form a Consistency Group.

Example 16-8 Trap 211: Global Mirror Session is in a fatal state
Asynchronous PPRC Session is in a Fatal State
UNIT: Mnf Type-Mod SerialNm
IBM 2107-922 75-20781
Session ID: 4002

Trap 212, as shown in Example 16-9, is sent when a Consistency Group cannot be created in a Global Mirror relationship for one of the following reasons:
Volumes were taken out of a copy session.
The Remote Copy link bandwidth might not be sufficient.
The FC link between the primary and secondary system is not available.

Example 16-9 Trap 212: Global Mirror Consistency Group failure - Retry will be attempted
Asynchronous PPRC Consistency Group Failure - Retry will be attempted
UNIT: Mnf Type-Mod SerialNm
IBM 2107-922 75-20781
Session ID: 4002

Trap 213, as shown in Example 16-10, is sent when a Consistency Group in a Global Mirror environment can be formed after a previous Consistency Group formation failure.

Example 16-10 Trap 213: Global Mirror Consistency Group successful recovery
Asynchronous PPRC Consistency Group Successful Recovery
UNIT: Mnf Type-Mod SerialNm
IBM 2107-9A2 75-ABTV1
Session ID: 4002

Trap 214, as shown in Example 16-11, is sent if a Global Mirror Session is terminated by using the DS CLI command rmgmir or the corresponding GUI function.

Example 16-11 Trap 214: Global Mirror Master terminated
Asynchronous PPRC Master Terminated
UNIT: Mnf Type-Mod SerialNm
IBM 2107-922 75-20781
Session ID: 4002

As shown in Example 16-12, trap 215 is sent if, in the Global Mirror environment, the master detects a failure to complete the FlashCopy commit. The trap is sent after a number of commit retries failed.

Example 16-12 Trap 215: Global Mirror FlashCopy at Remote Site unsuccessful
Asynchronous PPRC FlashCopy at Remote Site Unsuccessful
UNIT: Mnf Type-Mod SerialNm
IBM 2107-9A2 75-ABTV1
Session ID: 4002

Trap 216, as shown in Example 16-13, is sent if a Global Mirror master cannot terminate the Global Copy relationship at one of their subordinates. This error might occur if the master is terminated by using the rmgmir command but the master cannot terminate the copy relationship on the subordinate. You might need to run a rmgmir command against the subordinate to prevent any interference with other Global Mirror sessions.

Example 16-13 Trap 216: Global Mirror subordinate termination unsuccessful
Asynchronous PPRC Slave Termination Unsuccessful
UNIT: Mnf Type-Mod SerialNm
Master: IBM 2107-922 75-20781
Slave: IBM 2107-921 75-03641
Session ID: 4002

Trap 217, as shown in Example 16-14, is sent if a Global Mirror environment was suspended by the DS CLI command pausegmir or the corresponding GUI function.

Example 16-14 Trap 217: Global Mirror paused
Asynchronous PPRC Paused
UNIT: Mnf Type-Mod SerialNm
IBM 2107-9A2 75-ABTV1
Session ID: 4002

As shown in Example 16-15, trap 218 is sent if a Global Mirror exceeded the allowed threshold for failed consistency group formation attempts.

Example 16-15 Trap 218: Global Mirror number of consistency group failures exceed threshold
Global Mirror number of consistency group failures exceed threshold
UNIT: Mnf Type-Mod SerialNm
IBM 2107-9A2 75-ABTV1
Session ID: 4002

Example 16-17 Trap 220: Global Mirror number of FlashCopy commit failures exceed threshold Global Mirror number of FlashCopy commit failures exceed threshold UNIT: Mnf Type-Mod SerialNm IBM 2107-9A2 75-ABTV1 Session ID: 4002 456 IBM System Storage DS8870 Architecture and Implementation . is sent if a Global Mirror successfully formed a consistency group after one or more formation attempts previously failed. as shown in Example 16-16. Example 16-16 Trap 219: Global Mirror first successful consistency group after prior failures Global Mirror first successful consistency group after prior failures UNIT: Mnf Type-Mod SerialNm IBM 2107-9A2 75-ABTV1 Session ID: 4002 Trap 220.Trap 219. is sent if a Global Mirror exceeded the allowed threshold of failed FlashCopy commit attempts. as shown in Example 16-17.

Table 16-3 shows the Copy Services suspension reason codes.

Table 16-3 Copy Services suspension reason codes
Suspension reason code  Description
03  The host system sent a command to the primary volume of a Remote Mirror and Copy volume pair to suspend copy operations. The host system might specify an immediate suspension or a suspension after the copy completed and the volume pair reached a full duplex state.
04  The host system sent a command to suspend the copy operations on the secondary volume. During the suspension, the primary volume of the volume pair can still accept updates but updates are not copied to the secondary volume. The out-of-sync tracks that are created between the volume pair are recorded in the change recording feature of the primary volume.
05  Copy operations between the Remote Mirror and Copy volume pair were suspended by a primary storage unit secondary device status command. This system resource code can be returned only by the secondary volume.
06  Copy operations between the Remote Mirror and Copy volume pair were suspended because of internal conditions in the storage unit. This system resource code can be returned by the control unit of the primary volume or the secondary volume.
07  Copy operations between the remote mirror and copy volume pair were suspended when the secondary storage unit notified the primary storage unit of a state change transition to simplex state. The specified volume pair between the storage units is no longer in a copy relationship.
08  Copy operations were suspended because the secondary volume became suspended because of internal conditions or errors. This system resource code can be returned only by the primary storage unit.
09  The Remote Mirror and Copy volume pair was suspended when the primary or secondary storage unit was rebooted or when the power was restored. The paths to the secondary storage unit might not be disabled if the primary storage unit was turned off. If the secondary storage unit was turned off, the paths between the storage units are restored automatically, if possible. After the paths are restored, issue the mkpprc command to resynchronize the specified volume pairs. Depending on the state of the volume pairs, you might have to issue the rmpprc command to delete the volume pairs and reissue a mkpprc command to reestablish the volume pairs.
0A  The Remote Mirror and Copy pair was suspended because the host issued a command to freeze the Remote Mirror and Copy group. This system resource code can be returned only if a primary volume was queried.

16.2.3 I/O Priority Manager SNMP
When the I/O Priority Manager Control switch is set to Monitor or Managed, an SNMP trap alert message also can be enabled. The DS8000 microcode monitors for rank saturation. If a rank is being overdriven to the point of saturation (high usage), an SNMP trap alert message #224 is posted to the SNMP server, as shown in Example 16-18 on page 458.

16.2.3 I/O Priority Manager SNMP

When the I/O Priority Manager Control switch is set to Monitor or Managed, an SNMP trap alert message also can be enabled. The DS8000 microcode monitors for rank saturation. If a rank is being overdriven to the point of saturation (high usage), an SNMP trap alert message #224 is posted to the SNMP server, as shown in Example 16-18 on page 458. The message identifies the rank and the SFI. The following SNMP rules are followed:
- Up to 8 SNMP traps are sent per SFI server in a 24-hour period (maximum 16 per 24 hours per SFI).
- A rank enters the saturation state if it is in saturation for five consecutive 1-minute samples.
- A rank exits the saturation state if it is not in saturation for three of five consecutive 1-minute samples.
- SNMP message #224 is reported when a rank enters saturation, or every 8 hours if it remains in saturation.

Example 16-18 SNMP trap alert message #224
Rank Saturated
UNIT: Mnf Type-Mod SerialNm
IBM 2107-951 75-ACV21
Rank ID: R21
Saturation Status: 0

Important: To receive traps from I/O Priority Manager, IOPM should be set to manage SNMP by using the following command:
chsi -iopmmode managesnmp <Storage_Image>

16.2.4 Thin Provisioning SNMP

The DS8000 can trigger two specific SNMP trap alerts that are related to the thin provisioning feature. A trap is sent under the following conditions:
- Extent status is not zero (available space is already below threshold) when the first ESE volume is configured.
- Extent status changes state if ESE volumes are configured in the extent pool.

Example 16-19 shows an illustration of generated event trap 221.

Example 16-19 Trap 221: Space Efficient repository or over-provisioned volume reached a warning watermark
Space Efficient Repository or Over-provisioned Volume has reached a warning watermark
Unit: Mnf Type-Mod SerialNm
IBM 2107-922 75-03460
Session ID: 4002

Example 16-20 shows an illustration of generated event trap 223. The trap is sent out when certain extent pool capacity thresholds are reached, which causes a change in the extent status attribute.

Example 16-20 SNMP trap alert message #223
Extent Pool Capacity Threshold Reached
UNIT: Mnf Type-Mod SerialNm
IBM 2107-922 75-03460
Extent Pool ID: P1
Limit: 95%
Threshold: 95%
Status: 0

16.3 SNMP configuration

The SNMP for the DS8000 is designed to send traps as notifications. The DS8000 does not include an installed SNMP agent that can respond to SNMP polling. Also, the SNMP community name for Copy Services-related traps is fixed and set to public.

16.3.1 SNMP preparation

During the planning for the installation (see 9.4, "Monitoring DS8870 with the HMC" on page 260), the IP addresses of the management system are provided for the IBM service personnel. This information must be applied by IBM service personnel during the installation. Also, IBM service personnel can configure the HMC to send a notification for every serviceable event or for only those events that Call Home to IBM. The network management server that is configured on the HMC receives all the generic trap 6 specific trap 3 messages, which are sent in parallel with any events that Call Home to IBM.

SNMP alert traps provide information about problems that the storage unit detects. You or the service provider must perform corrective action for the related problems. SNMP alerts can contain a combination of a generic and a specific alert trap. The Traps list outlines the explanations for each of the possible combinations of generic and specific alert traps. The format of the SNMP traps, the list, and the errors that are reported by SNMP are available in Chapter 5 of the IBM System Storage DS8000: Troubleshooting document, which is available at this website:
http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/topic/com.ibm.storage.ssic.help.doc/f2c_ictroubleshooting_36mf4d.pdf
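Any SNMP management product that can receive these generic trap 6 messages can serve as the network management server. As one hedged illustration that is not an IBM-supplied procedure, a Linux management station that runs the open source net-snmp snmptrapd daemon could log the DS8000 traps that arrive with the fixed public community as follows; the handler script path is a placeholder:

# /etc/snmp/snmptrapd.conf -- illustrative trap receiver configuration
authCommunity log public
traphandle default /usr/local/bin/ds8000_trap_handler.sh

# start the daemon and write received traps to a log file
snmptrapd -Lf /var/log/ds8000-traps.log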

16.3.2 SNMP configuration from the HMC

Customers can configure the SNMP alerting by logging in to the DS8000 HMC Service Management (https://HMC_ip_address) remotely or locally through a web browser and launching the web application by using the following customer credentials:
User ID: customer
Password: cust0mer

Complete the following steps to configure SNMP at the HMC:
1. Log in to the Service Management section on the HMC, as shown in Figure 16-2.

Figure 16-2 HMC Service Management

2. Select Management Serviceable Event Notification (as shown in Figure 16-3) and enter the TCP/IP information of the SNMP server in the Trap Configuration folder.

Figure 16-3 HMC Management Serviceable Event Notification

3. To verify the successful setup of your environment, create a Test Event on your DS8000 HMC. Select Storage Facility Management → Services Utilities → Test Problem Notification (PMH, SNMP, Email), as shown in Figure 16-4.

Figure 16-4 HMC Test SNMP trap

The test generates the Service Reference Code BEB20010 and the SNMP server receives the SNMP trap notification, as shown in Figure 16-5.

Figure 16-5 HMC SNMP trap test

16.3.3 SNMP configuration with the DS CLI

Perform the configuration process for receiving the Copy Services-related traps by using the DS CLI. Example 16-21 shows how SNMP is enabled by using the chsp command.

Example 16-21 Configuring the SNMP by using dscli
dscli> chsp -snmp on -snmpaddr 10.10.10.1,10.10.10.2
CMUC00040I chsp: Storage complex IbmStoragePlex successfully modified.
dscli> showsp
Name            IbmStoragePlex
desc            -
acct            -
SNMP            Enabled
SNMPadd         10.10.10.1,10.10.10.2
emailnotify     Disabled
emailaddr       -
emailrelay      Disabled
emailrelayaddr  -
emailrelayhost  -
numkssupported  4
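The same command can also switch the notifications off again, for example, during maintenance of the management station. This is a minimal sketch that assumes the storage complex from Example 16-21; SNMP can be re-enabled afterward with the -snmp on and -snmpaddr parameters that are shown above:

dscli> chsp -snmp off
CMUC00040I chsp: Storage complex IbmStoragePlex successfully modified.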

SNMP preparation for the management software
To configure an SNMP Console, you need an MIB file. For the DS8000, you can use the ibm2100.mib file, which is delivered on the DS CLI CD. Alternatively, you can download the latest version of the DS CLI CD image from this website:
ftp://ftp.software.ibm.com/storage/ds8000/updates/DS8K_Customer_Download_Files/CLI

Configuration information for your SNMP manager and MIB can be found in the SNMP_readme.txt file that is on your DS CLI installation CD.
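How the MIB is imported depends on the SNMP console product. As one hedged illustration with the open source net-snmp tools (not an IBM requirement), the file could be copied into the default MIB search path and checked for parse errors as follows; the directory is the common net-snmp default and might differ on your system:

# make the DS8000 MIB visible to net-snmp tools and verify that it parses
cp ibm2100.mib /usr/share/snmp/mibs/
snmptranslate -m ALL -Tp > /dev/null && echo "MIB tree loaded OK"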


Chapter 17. Remote support

This chapter describes the outbound (Call Home and Support Data offload) and inbound (code download and remote support) communications for the IBM System Storage DS8000 family. The DS8870 maintains the same functions as in the previous generation. Special emphasis was placed on the Assist On-site section; it is an additional method for remote access to IBM products.

This chapter covers the following topics:
- Introduction to remote support
- IBM policies for remote support
- VPN advantages
- Remote connection types
- DS8870 support tasks
- Remote connection scenarios
- Assist On-site
- Audit logging

17.1 Introduction to remote support

Remote support is a complex topic that requires close scrutiny and education. IBM is committed to servicing the DS8870, whether it is warranty work, planned code upgrades, or management of a component failure, in a secure and professional manner. Dispatching service personnel to come to your site and perform maintenance on the system is still a part of that commitment. But as much as possible, IBM wants to minimize downtime and maximize efficiency by performing many support tasks remotely. This plan of providing support remotely must be balanced with the client's expectations for security. Maintaining the highest levels of security in a data connection is a primary goal for IBM. This goal can be achieved only by careful planning with a client and a thorough review of all available options.

Important: The customer has the flexibility to quickly enable or disable remote support connectivity by issuing the chaccess or lsaccess commands by using the DSCLI.

17.1.1 Suggested reading

The following resources can be used to better understand IBM remote support offerings:
- Chapter 8, "DS8870 Physical planning and installation" on page 223, contains more information about physical planning.
- VPN Implementation, S1002693, can be downloaded at this website:
  http://www.ibm.com/support/docview.wss?&rs=1114&uid=ssg1S1002693
- The Security Planning website is available at the following URL:
  http://publib16.boulder.ibm.com/doc_link/en_US/a_doc_lib/aixbman/security/ipsec_planning.htm
- VPNs Illustrated: Tunnels, VPNs, and IPSec, by Jon C. Snader
- A Comprehensive Guide to Virtual Private Networks, Volume I: IBM Firewall, Server and Client Solutions, SG24-5201, which is available at this website:
  http://www.redbooks.ibm.com/abstracts/sg245201.html?Open

17.1.2 Organization of this chapter

A list of the relevant terminology for remote support is presented first. The remainder of this chapter is organized as follows:
- Connections: We review the types of connections that can be made from the HMC to the world outside of the DS8870.
- Tasks: We review the various support tasks that must be run on those connections.
- Scenarios: We present a scenario that describes how each task is performed over the types of remote connections.

17.1.3 Terminology and definitions

Listed here are brief explanations of some of the terms that are used when remote support is described. Having an understanding of these terms contributes to your discussions on remote support and security concerns. For a full list of terms and acronyms that are used in this book, see "Abbreviations and acronyms" on page 529.

IP network
There are many protocols that are running on local area networks (LANs) around the world. Most companies use the Transmission Control Protocol/Internet Protocol (TCP/IP) standard for their connectivity between workstations and servers. IP is also the networking protocol of the global Internet. Web browsing and email are two of the most common applications that run on top of an IP network. There are two varieties of IP: IPv4 and IPv6. IP is the protocol that is used by the DS8870 Hardware Management Console (HMC) to communicate with external systems, such as the Tivoli Storage Productivity Center or DS CLI workstations. For more information about these networks, see Chapter 9, "DS8870 HMC planning and setup" on page 251.

Virtual Private Network
A Virtual Private Network (VPN) is a private tunnel through a public network. Most commonly, it refers to the use of specialized software and hardware to create a secure connection over the Internet. The two systems, although physically separate, behave as though they are on the same private network. A VPN allows a remote worker or an entire remote office to remain part of a company's internal network. VPNs provide security by encrypting traffic, authenticating sessions and users, and verifying data integrity. A generic definition is presented here; more specific information about how IBM implements the idea is presented later in this chapter.

Secure Sockets Layer
Secure Sockets Layer (SSL) refers to methods of securing otherwise unsecure protocols such as HTTP (websites), FTP (files), or SMTP (email). Carrying HTTP over SSL often is referred to as HTTPS. An SSL connection over the Internet is considered reasonably secure.

Secure Shell
Secure Shell (SSH) is a protocol that establishes a secure communications channel between two computer systems, which makes it possible to communicate only with the computer in a secure manner. The term SSH also is used to describe a secure ASCII terminal session between two computers. SSH can be enabled on a system when regular Telnet and File Transfer Protocol (FTP) are disabled. IBM support can use this methodology with VPN for data analysis.

File Transfer Protocol
FTP is a method of moving binary and text files from one computer system to another over an IP connection. FTP is not inherently secure as it has no provisions for encryption and features only simple user and password authentication. FTP is considered appropriate for data that is already public, or if the entirety of the connection is within the physical boundaries of a private network.

Assist On-site
Tivoli Assist On-site (AOS) is an IBM remote support option that allows SSL connectivity to a system that is at the customer site and used to troubleshoot storage devices. With AOS Version 3.3, AOS offers port forwarding as a solution that grants customers attended and unattended sessions. For more information, see the Introduction section of Introduction to Assist On-site for DS8000, REDP-4889.

IPSec
Internet Protocol Security (IPSec) is a suite of protocols that is used to provide a secure transaction between two systems that use the Transmission Control Protocol/Internet Protocol (TCP/IP) network protocol. IPSec focuses on authentication and encryption, two of the main ingredients of a secure connection. Most VPNs that are used on the Internet use IPSec mechanisms to establish the connection.

Firewall
A firewall is a device that controls whether data is allowed to travel onto a network segment. Firewalls are deployed at the boundaries of networks. They are managed by policies that declare what traffic can pass based on the sender's address, the destination address, and the type of traffic. Firewalls are an essential part of network security and their configuration must be considered when remote support activities are planned.

Bandwidth
Bandwidth refers to the characteristics of a connection and how they relate to moving data. Bandwidth is affected by the physical connection, the logical protocols that are used, physical distance, and the type of data that is being moved. In general, higher bandwidth means faster movement of larger data sets.

17.2 IBM policies for remote support

The following guidelines are at the core of IBM remote support strategies for the DS8870:
- When the DS8870 must transmit service data to IBM, only logs and process dumps are gathered for troubleshooting. The I/O from host adapters and the contents of NVS cache memory are never transmitted.
- When a VPN session with the DS8870 is needed, the HMC always initiates such connections and only to predefined IBM servers or ports. There is never any active process that is listening for incoming sessions on the HMC.
- Although the HMC is based on a Linux operating system, IBM disabled or removed all unnecessary services, processes, and IDs, including standard Internet services, such as telnet, FTP, r commands, and remote procedure call (RPC) programs.
- IBM maintains multiple-level internal authorizations for any privileged access to the DS8870 components. Only approved IBM service personnel can gain access to the tools that provide the security codes for HMC command-line access.

17.3 VPN rationale and advantages

Security is a critical issue for companies worldwide. Having a secure infrastructure requires systems to work together to mitigate the risk of malicious activity from external and internal sources. Any connection from your network to the public Internet raises the following security concerns:
- Infection by viruses
- Intrusion by hackers
- The accessibility of your data from the remote support site
- Authorization of the remote users to access your machine when a remote connection is opened

The IBM VPN connections, along with the built-in security features of the DS8870, allow IBM support to assist you in resolving the most complex problems without the risk inherent to non-secure connections. The use of IBM security access provides a number of advantages that are designed to help you save time and money and efficiently solve problems. The following benefits can be realized:
- Faster problem solving: You can contact technical experts in your support region to help resolve problems on your DS8870 without having to wait for data such as logs, dumps, and traces. As a result, problems can be solved faster.
- Connection with a worldwide network of experts: IBM Technical support engineers can call on other worldwide subject experts to assist with problem determination. These engineers can then simultaneously view the DS8870 Hardware Management Console, if required.
- Closer monitoring and enhanced collaboration: Remote-access support can help to greatly reduce service costs and shorten repair times, which in turn lessens the impact of any failures on your business.
- Save time and money: Many problems can be analyzed in advance. When an IBM Service Representative arrives at your site, they already have an Action Plan.

17.4 Remote connection types

The DS8870 HMC can be connected to the client's network by using a standard Ethernet (100/1000 Mb) cable. The HMC can also be connected to a phone line through a modem port, which could also be used as a call home. These two physical connections offer the following connection possibilities for sending and receiving data between the DS8870 and IBM:
- Asynchronous modem connection
- IP network connection
- IP network connection with VPN
- Assist On-site

Rather than leaving the modem and Ethernet disconnected, clients provide these connections and then apply policies on when they are to be used and what type of data they can carry. Those policies are enforced by the settings on the HMC and the configuration of client network devices, such as routers and firewalls. The next four sections describe the capabilities of each type of connection.

17.4.1 Asynchronous modem

A modem creates a low-speed asynchronous connection by using a telephone line that is plugged into the HMC modem port. This type of connection favors transferring small amounts of data. It is relatively secure because the data is not traveling across the Internet. However, this type of connection is not terribly useful because of bandwidth limitations. In some countries, the average connection speed is high (28 - 56 Kbps), but in others, it can be lower.

VoIP: Connectivity issues are seen on Voice over IP (VoIP) phone infrastructures that do not support the Modem over IP (MoIP) standard ITU V.150.

Authorized support personnel can call the HMC and get privileged access to the command line of the operating system. Code downloads over a modem line are not possible. Typical PE Package transmission is not normally performed over a modem line because it can take too long, depending on the quality of the connection.

The DS8870 HMC modem can be configured to call IBM and send small status messages. The client controls whether the modem answers an incoming call. These options are changed from the WebUI on the HMC by selecting Service Management → Manage Inbound Connectivity, as shown in Figure 17-1.

Figure 17-1 Service Management in WebUI

The HMC provides the following settings to govern the usage of the modem port:

Unattended Session
This setting allows the HMC to answer modem calls without operator intervention. If this setting is disabled, IBM Support must contact the client every time they must dial in to the HMC, and someone must go to the HMC and allow for the next expected call.
- Duration: Continuous. This option indicates that the HMC can always answer all calls.
- Duration: Automatic. This option indicates that the HMC answers all calls for a specified number of days after any Serviceable Event (problem) is created.
- Duration: Temporary. This option sets a starting and ending date, during which the HMC answers all calls.

These options are shown in Figure 17-2:
- Select this option to allow the HMC modem to receive unattended calls.
- Modem will always answer.
- Modem will answer for n days after a new problem is opened.
- Modem will answer during this time period only.

Figure 17-2 Modem settings

A modem connection is shown in Figure 17-3 on page 479.

17.4.2 IP network

Network connections are considered high speed when compared to a modem. Enough data can flow through a network connection to make it possible to run a graphical user interface (GUI). It typically takes less than one hour to move the information. HMCs that are connected to a client IP network, and eventually to the Internet, can send status updates and offloaded problem data to IBM by using SSL sessions. A basic network connection is shown in Figure 17-5 on page 481.

Though favorable for speed and bandwidth, network connections introduce security concerns. The following concerns must be considered:
- Verify the authenticity of data, that is, is it really from the sender it claims to be?
- Verify the integrity of data, that is, has it been altered during transmission?
- Verify the security of data, that is, can it be captured and decoded by unwanted systems?

The SSL protocol is one answer to these questions. It provides transport layer security with authenticity, integrity, and confidentiality. Some of the following features are provided by SSL:
- Client and server authentication to ensure that the appropriate machines are exchanging data
- Data signing to prevent unauthorized modification of data while in transit
- Data encryption to prevent the exposure of sensitive information while data is in transit

Traffic through an SSL proxy is supported by the user ID and password that is provided by the customer.

17.4.3 IP network with traditional VPN

Adding a VPN tunnel to an IP network that is IPSec-based and not proxy capable greatly increases the security of the connection between the two endpoints. Data can be verified for authenticity and integrity. Data can be encrypted so that even if it is captured en route, it cannot be replayed or deciphered. With the safety of running within a VPN, IBM can use its service interface (WebUI) to perform the following tasks:
- Check the status of components and services on the DS8870 in real time
- Queue up diagnostic data offloads
- Start, monitor, pause, and restart repair service actions

There is no VPN service that sits idle, waiting for a connection to be made by IBM. Only the HMC is allowed to initiate the VPN tunnel, and it can be made only to predefined IBM addresses. Network Address Translation is supported and can be configured on request. In addition to dialing via modem, a remote access VPN can be established via the WUI from the HMC, by using Prepare under Manage Inbound Connections, or by DSCLI command.

The following steps are used to create a VPN tunnel from the DS8870 HMC to IBM:
1. IBM support calls the HMC by using the modem.
2. After the first level of authentications, the HMC is asked to launch a VPN session.
3. The HMC hangs up the modem call and initiates a VPN connection back to a predefined address or port within IBM Support.
4. IBM Support verifies that they can see and use the VPN connection from an IBM internal IP address.
5. If required, IBM Support launches the WebUI or other high-bandwidth tools to work on the DS8870.

Performing these steps results in the HMC creating a VPN tunnel back to the IBM network, which service personnel can then use. An illustration of a traditional VPN connection is shown in Figure 17-6 on page 482.

17.4.4 AOS

Assist On-site (AOS) provides a method of remote access. It does not support Data Offload or Call Home. For more information, see 17.7, "Assist On-site" on page 483.

17.5 DS8870 support tasks

DS8870 support tasks require the HMC to contact the outside world. Some tasks can be performed by using the modem or the network connection. Some tasks can be done only over a network. The combination of tasks and connection types is described in 17.6, "Remote connection scenarios" on page 478. The following support tasks require the DS8870 to connect to outside resources:
- Call Home and heartbeat
- Data offload
- Code download
- Remote support

The offload can be done in the following ways: Modem offload Standard FTP offload SSL offload VPN offload Chapter 17. configurations. By sending heartbeats. So. This data can include text and binary log files.5. Call Home is a one-way communication. The Call Home also includes information about the nature of a problem so that an active investigation can be started. and features. Call Home Call Home is the capability of the HMC to contact IBM Service to report a service event. These logs are grouped into collections by the component that generated them or the software service that owns them. The data packages must be offloaded from the HMC and sent in to IBM for analysis. The Call Home facility can be configured to use the following data transfer methods: HMC modem The Internet through an SSL connection The Internet through a IPSec tunnel from the HMC to IBM Call Home information and heartbeat information is stored in the IBM internal data store so the support representatives can access the records. With ODD Dump. Remote support 473 . A heartbeat is a small message with basic product information so that IBM knows that the unit is operational.1 Call Home and heartbeat: outbound Here we describe the Call Home and heartbeat capabilities. with data that moves from the DS8870 HMC to the IBM data store. gathering and storing all the data packages. such as a hardware component failure. 17. IBM and the client ensure that the HMC is always able to initiate a full Call Home to IBM in the case of an error. It is referred to as Call Home for service. ODD cannot be generated via DSCLI. Heartbeats represent a one-way communication. IBM can collect data after an initial error occurs with no impact to host I/O. In certain cases. and timelines. a service call to the client is made to investigate the status of the DS8870. The HMC is a focal point. inventory lists. In certain cases. The MRPD information includes installed hardware. often exceeding 100 MB. If the heartbeat information does not reach IBM. The entire bundle is collected together in a PEPackage. a large amount of diagnostic data is generated. more than one PEPackage might be needed to properly diagnose a problem. memory dumps. Heartbeat The DS8870 also uses the Call Home facility to send proactive heartbeat information to IBM.17.5. the HMC must be accessible if a service action requires the information.2 Data offload: outbound For many DS8870 problem events. On Demand Data dump: The On-Demand Data (ODD) Dump provides a mechanism that allows the collection of debug data for error scenarios. with data that moves from the DS8870 HMC to the IBM data store. the IBM Support center might need an additional dump that is internally created by DS8870 or manually created through the intervention of an operator. firmware dumps. The HMC provides machine reported product data (MRPD) information to IBM by way of the Call Home facility. A DS8870 PEPackage can be large.

Modem offload
The HMC can be configured to support automatic data offload by using the internal modem and a regular phone line. Offloading a PE Package over a modem connection is slow, often taking 15 - 20 hours. Only files like dumps (LPAR Statesave), which are about 10 MB, can stand this bandwidth. It also ties up the modem during that time, and IBM support cannot dial in to the HMC to perform command-line tasks. If this connectivity option is the only option that is available, be aware that the overall process of remote support is delayed while data is in transit.

Standard FTP offload
The HMC can be configured to support automatic data offload by using File Transfer Protocol (FTP) over a network connection. The client is required to manage its firewalls so that FTP traffic from the HMC (or from an FTP proxy) can pass onto the Internet. This traffic can be examined at the client's firewall before it is moved across the Internet. FTP offload allows IBM Service personnel to dial in to the HMC by using the modem line while support data is transmitted to IBM over the network.

When a direct FTP session across the Internet is not available or wanted, a client can configure the FTP offload to use a client-provided FTP proxy server. The client then becomes responsible for configuring the proxy to forward the data to IBM.

Important: FTP offload of data is supported as an outbound service only. There is no active FTP server that is running on the HMC that can receive connection requests.

SSL offload
For environments that do not allow FTP traffic out to the Internet, the DS8870 also supports offload of data by using SSL security. In this configuration, the HMC uses the client-provided network connection to connect to the IBM data store, the same as in a standard FTP offload. But with SSL, all the data is encrypted so that it is rendered unusable if intercepted.

Client firewall settings between the HMC and the Internet for SSL setup require the following IP addresses to be open on port 443, based on geography:

North and South America:
- 129.42.160.48  IBM Authentication Primary
- 129.42.160.49  IBM Authentication Secondary
- 207.25.252.200 IBM Data Primary
- 207.25.252.204 IBM Data Secondary

All other regions:
- 129.42.160.48  IBM Authentication Primary
- 129.42.160.50  IBM Authentication Secondary
- 207.25.252.200 IBM Data Primary
- 207.25.252.205 IBM Data Secondary

IP values: The IP values that are provided here could change with new code releases. If the values fail, consult the Information Center documentation on your DS8870 HMC by searching for Isolating Call Home/remote services failure under the "Test the Internet (SSL) connection" section. For more information, contact the IBM DS8870 support center.
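How these addresses are opened depends on the firewall product. A minimal sketch for a Linux iptables firewall in the path between the HMC and the Internet might look like the following; the HMC address 10.0.0.50 is a placeholder, and the loop covers the North and South America servers only:

# allow the HMC to reach the IBM SSL offload servers on port 443 (illustrative)
for ibm_ip in 129.42.160.48 129.42.160.49 207.25.252.200 207.25.252.204; do
  iptables -A FORWARD -s 10.0.0.50 -d "$ibm_ip" -p tcp --dport 443 -j ACCEPT
done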

42. The IPSec tunneling technology that is used by the VPN software.160.196 host <IP addr for HMC> eq 4500 129. Example 17-1 Cisco Firewall configuration access-list access-list access-list access-list access-list access-list DMZ_to_Outside DMZ_to_Outside DMZ_to_Outside DMZ_to_Outside DMZ_to_Outside DMZ_to_Outside permit permit permit permit permit permit esp esp udp udp udp udp host host host host host host 207.16 host <IP addr for HMC> eq 4500 Only the HMC customer network must be defined for access to the IBM VPN Servers.25.VPN offload A remote service VPN session can be initiated at the HMC for data offload over a modem or an Internet VPN connection.16 host <IP addr for HMC> eq 500 207. When a firewall is in place to shield the customer network from the open Internet.42.160.196 IBM Rochester VPN Server: 129.25.252.196 host <IP addr for HMC> 129.25.16 host <IP addr for HMC> 207.252.252. the firewall must be configured to allow the HMC to connect to the IBM servers. Remote support 475 .196 host <IP addr for HMC> eq 500 129. provides the ability for IBM Service to access the DS8870 servers themselves through the secure tunnel.252.16 You must also enable the following ports and protocols: ESP UDP Port 500 UDP Port 4500 Example 17-1 shows the output of defined permissions that is based on a Cisco PIX model 525 firewall. not inbound.160.160. The VPN session is always initiated outbound from the HMC. with the TCP/IP port forwarder on the HMC. Chapter 17.42.42. At least one of these methods of connectivity must be configured through the Outbound Connectivity panel.25. The HMC establishes connection to the following TCP/IP addresses: IBM Boulder VPN Server: 207.

Comparison of DS8870 connectivity options
Service activities include problem reporting, debug data offload, and remote access. The terms remote access and remote service are used interchangeably throughout this document. Enabling multiple options is allowed and must be used for optimal availability. Table 17-1 shows the benefits and drawbacks of the types of connection.

Table 17-1 Remote support connectivity comparison

FTP
- Pros: Fast debug data transfer to IBM; allows proxying.
- Cons: Does not support problem reporting or remote access.
- Comments: To support all service activities, SSL, VPN Internet, or modem must also be enabled as an adjunct.

Internet (SSL)
- Pros: Fast debug data transfer to IBM; supports problem reporting; SSL security; easy installation.
- Cons: Does not support remote access.
- Comments: To support remote access, VPN Internet, AOS, or modem must also be enabled as an adjunct. SSL is easier to implement than VPN.

VPN Internet
- Pros: Fast debug data transfer; allows IBM service to remotely initiate an outbound VPN session; supports all service activities.
- Cons: Can be difficult to implement in some environments; does not allow you to inspect packets.
- Comments: Might be the only option enabled.

Modem
- Pros: Economical connectivity solution; supports all service activities.
- Cons: Extremely slow debug data transfer to IBM.
- Comments: Might be the only option enabled. For various reasons, use the modem as a backup and for initiating remote access sessions. However, AOS is a preferred solution because it can add more secure connectivity sessions, such as proxying.

AOS
- Pros: Secure attended and unattended sessions; fast debug data transfer to IBM.
- Cons: Installation and configuration with IBM support is required.
- Comments: Generally the best option.

17.5.3 Code download: inbound

DS8870 microcode updates are published as bundles that can be downloaded from IBM. As described in 15.2, "Bundle installation" on page 438, the following possibilities are used for acquiring code on the HMC:
- Load the new code bundle by using CDs or DVDs. Loading code bundles from CDs or DVDs is the only option for DS8870 installations that do not include any outside connectivity. IBM Service Representatives also can download the bundle to their notebook and then load it on the HMC.
- Download the new code bundle directly from IBM by using FTP. If allowed, the support representative opens an FTP session from the HMC to the IBM code repository and downloads the code bundle (or bundles, perhaps more than one) to the HMC. The client firewall must be configured to allow the FTP traffic to pass.
- Download the new code bundle directly from IBM by using FTPS. If FTP is not allowed, an FTPS session can be used instead. FTPS is a more secure file transfer protocol that runs within an SSL session. If this option is used, the client firewall must be configured to allow the SSL traffic to pass.

After the code bundle is acquired from IBM, the FTP or FTPS session is closed and the code load can take place without needing to communicate outside of the DS8870.

Important: Package bundles are not available for users to download. Only IBM Support Representatives have the authority to use FTP or FTPS in the HMC to acquire a release bundle from the network. IBM support downloads the bundles from IBM by using FTP or FTPS.

17.5.4 Remote support: inbound and two-way

The term remote support describes the most interactive level of assistance from IBM. After a problem comes to the attention of the IBM Support Center and it is determined that the issue is more complex than a straightforward parts replacement, the problem likely is escalated to higher levels of responsibility within IBM Support. This escalation could happen at the same time that a support representative is dispatched to the client site. IBM might need to trigger a data offload, and at the same time be able to interact with the DS8870 to dig deeper into the problem and develop an action plan to restore the system to normal operation. This type of interaction with the HMC requires the most bandwidth.

If the only available connectivity is by modem, IBM Support must wait until any data offload is complete and then attempt the diagnostic testing and repair from a command-line environment on the HMC. This process is slower and more limited in scope than if a network connection can be used. If the HMC is connected to the client network, IBM Support can use ASCII end-to-end connection tools to diagnose and repair the problem. Another benefit of VPN is that IBM Support can offload data and troubleshoot in parallel with VPN over Ethernet. However, this task cannot be done with VPN over a modem.

17.6 Remote connection scenarios

Now that the four connection options were reviewed (see 17.4, "Remote connection types" on page 469) and the tasks were reviewed (see 17.5, "DS8870 support tasks" on page 472), we can examine how each task is performed, considering the type of access available to the DS8870.

17.6.1 No connections

If the modem or the Ethernet are not physically connected and configured, the following tasks are performed:
- Call Home and heartbeat: The HMC does not send heartbeats to IBM. The HMC does not call home if a problem is detected. IBM Support must be notified at the time of installation to add an exception for this DS8870 in the heartbeats database, indicating that it is not expected to contact IBM.
- Data offload: If required and allowed by the client, diagnostic data can be copied onto SDHC re-writable media, transported to an IBM facility, and uploaded to the IBM data store.
- Code download: Code must be loaded onto the HMC by using CDs or DVDs that are carried in by the Service Representative.
- Remote support: IBM cannot provide any remote support for this DS8870. All diagnostic and repair tasks must take place with an operator who is physically at the console.

17.6.2 Modem only

If the modem is the only connectivity option, the following tasks are performed:
- Call Home and heartbeat: The HMC uses the modem to call IBM and send the Call Home data and the heartbeat data. These calls are short in duration.
- Data offload: After data offload is triggered, the HMC uses the modem to call IBM and send the data package. Depending on the package size and line quality, this call could take up to 20 hours to complete.
- Code download: Code must be loaded onto the HMC by using CDs that are carried in by the IBM Service Representative, who can also download the bundle to the HMC via the Ethernet network. There is no method of download if only a modem connection is available.
- Remote support: If the modem line is available (and is not being used to offload data or send Call Home data), IBM Support can dial in to the HMC and run commands in a command-line environment. IBM Support cannot use a GUI or any high-bandwidth tools.

Having modem and FTP is a great combination because data can be offloaded quickly while the modem calls home or remote support is engaged.

A modem-only connection is shown in Figure 17-3. In this configuration, support staff dial in to the HMC for command-line access only (no GUI), and data offloads and Call Home go to IBM over the modem line as one-way traffic.

Figure 17-3 Remote support with modem only

17.6.3 VPN only

If the VPN is the only connectivity option, the following tasks are performed:
- Call Home and heartbeat: The HMC uses the VPN network to call IBM and send the Call Home data and the heartbeat data.
- Data offload: After data offload is triggered, the HMC uses the VPN network to call IBM and send the data package. The package is sent to the IBM server quickly.
- Remote support: An IBM Support Center representative calls you and asks you to open a VPN connection before the remote connection is started. After the VPN is opened, the IBM Support center can connect to the HMC and run commands in a command-line environment. IBM Support can use the Service Web Interface.

A VPN-only connection is shown in Figure 17-4. The VPN connection from the HMC to IBM is encrypted and authenticated, data offloads and Call Home go to IBM over the Internet through the VPN tunnel as one-way traffic, and the firewalls can easily identify the traffic based on the ports that are used.

Figure 17-4 Remote support with VPN only

17.6.4 Modem and network with no VPN

If the modem and network access are provided without VPN, the following tasks are performed:
- Call Home and heartbeat: The HMC uses the network connection to send Call Home data and heartbeat data to IBM across the Internet. Standard FTP or SSL sockets can be used.
- Data offload: The HMC uses the network connection to send offloaded data to IBM across the Internet.
- Remote support: Although there is a network connection, it is not configured to allow VPN traffic, so remote support must be done by using the modem. If the modem line is not busy, IBM Support can dial in to the HMC and run commands in a command-line environment. IBM Support cannot use a GUI or any high-bandwidth tools.

A modem and network connection that does not use VPN tunnels is shown in Figure 17-5. The HMC has no open network ports to receive connections; data offloads and Call Home go to IBM over the Internet via FTP or SSL as one-way traffic, while support staff dial in to the HMC for command-line access (no GUI).

Figure 17-5 Remote support with modem and network (no VPN)

17.6.5 Modem and traditional VPN

If the modem and a VPN-enabled network connection are provided, the following tasks are performed:
- Call Home and heartbeat: The HMC uses the network connection to send Call Home data and heartbeat data to IBM across the Internet, outside of a VPN tunnel. Standard FTP or SSL sockets can be used.
- Data offload: The HMC uses the network connection to send offloaded data to IBM across the Internet, outside of a VPN tunnel.
- Remote support: Upon request, the HMC establishes a VPN tunnel across the Internet to IBM. IBM Support can use tools to interact with the HMC.

A modem and network connection plus traditional VPN is shown in Figure 17-6. The VPN connection from the HMC to IBM is encrypted and authenticated, and the firewalls can easily identify the traffic based on the ports that are used.

Figure 17-6 Remote support with modem and traditional VPN

17.7 Assist On-site

IBM Tivoli Assist On-site (AOS) is a free-of-charge software product that is provided by IBM and designed to help customers. It allows IBM support to access systems for diagnosis and troubleshooting, and it can be used with a wide range of IBM hardware systems, including the DS8870. AOS is a secured tunneling application server over SSL. AOS offers a new method of remote support assistance for IBM products. It gives the advantage of concentrating all remote support assistance in one point, regardless of the type of specific remote maintenance tool that the IBM system or device requires.

AOS can be used by the DS8870 as a remote support method. Some users are reluctant to implement VPN, even though it is a well-proven and consolidated secure option. AOS can be an alternative to remote support via modem, but AOS allows only inbound connectivity, so you still need to implement call home and data offload.

Important: AOS cannot be used for call home or data offload. For more information about options for outbound connectivity (call home and data offload), see 17.5.1, "Call Home and heartbeat: outbound" on page 473 and 17.5.2, "Data offload: outbound" on page 473.

The customer controls who (support individuals or support teams) can remotely support their equipment. The IBM remote support team can be granted full access to IBM systems or be allowed to see only the AOS window without the ability to interact. The customer can enable or disable different security options to decide what actions IBM remote support can do. Customers can decide whether IBM remote support sessions are attended or unattended.

To meet their security policies when they are using AOS, the customer can decide to place the AOS client workstation in the demilitarized zone (DMZ) or elsewhere rather than on the HMC. Customers can have a server (it could also be a workstation or a virtual server) as the unique focal point in all their IT network infrastructure to manage and monitor all remote support requests for all the different IBM products that support AOS. This simple concept allows for easy management and maintenance of the AOS equipment at the customer site. It is controlled by the customer at their facilities.

This section is not intended to be a comprehensive guide about AOS. We explain the fundamentals of AOS, specifically for DS8870 remote support. For more information about AOS, prerequisites, and installation, see Introduction to Assist On-site for DS8000, REDP-4889. The customer must contact IBM support and complete a form to download the AOS client and get help to set up the software. AOS installation requires Java Runtime 1.4 or higher to run a session.

17.7.1 AOS components

AOS consists of three main software package sets: AOS Client, AOS Console, and AOS Server. These packages allow establishing an AOS connection with IBM support.

AOS Client
The AOS Client is the software that is used by the customer. It must be installed in a server that is managed and maintained by the customer at their facilities. One important component of the AOS Client software is the AOS Support Service Program. It sends a heartbeat every 2 minutes to the AOS server and checks whether any connection request is present. If so, the AOS program is started.

The AOS Client can be configured to run in attended or unattended mode. These modes are described next.

Attended mode: lights-on mode
In attended session mode, the customer has absolute control of incoming connections. The customer is notified of the connection request and the connection does not start unless the request is accepted. If the connection request is not accepted within 180 seconds, the connection is refused. After the session is accepted, IBM Remote Support can use their Utility tools to troubleshoot the customer's system. In this AOS session type, customers can control access manually and work with separate network zones that are independent of the session.

Unattended mode: lights-out mode
The unattended session can be configured so that IBM remote support can establish a session automatically at the customer's facility when a product, such as the DS8870, calls home with a service event. The AOS Client must be configured with the Access Control List (ACL) that includes the AOS IBM support engineer team. This session type includes many variations, and the customer can configure settings, such as the availability of the session for specific days and time of the day.

Important: The AOS Configuration GUI is one of the elements that constitute the AOS Client. When you launch the AOS Configuration GUI, any established AOS support connection is terminated. This disconnect ensures that the configuration can be changed only on the local AOS client/server.

AOS Console
The AOS Console is the service program that is installed on the notebook or workstation that is used by the IBM support engineer. The AOS Console allows connection to the AOS client. There is no direct communication between the AOS Console and the AOS client; all Support Sessions are handled via the AOS environment. The AOS Console has many other functions to assist the support engineer in resolving customer issues within a support session. For more information about the features of the AOS Console, see the AOS utility user guide, which is available at this website:
https://aos.us.ihost.com/AssistOnSiteAdmin/docs/AOS_Utility_User_Guide.pdf

AOS Server
The AOS Server authorizes the IBM Support Users and validates the ACL membership. It also provides load balancing between the available AOS relays (depending on the customer geographical location). The AOS Server can record (at the IBM support engineer's request) a Desktop-Sharing Support Session and provide audit functionality on support connections so that each established connection is logged.

For more information, see the IBM AOS website, where a one-time trial AOS installation can also be requested, at this URL:
http://www.ibm.com/support/assistonsite/

17.7.2 AOS Security

The AOS software features the following security highlights:
- AOS uses SSL security to establish a connection to an AOS client at the customer site. The AOS connection is encrypted between the AOS client, the AOS Server, and the AOS Console.
- The AOS Servers are known, and the dedicated list of their IP addresses is available. You can use this list to configure your firewall so that the AOS client at your site can talk only to those IP addresses.
- The AOS traffic can be routed via a Proxy Server.
- The ACL is maintained in the AOS server. You can select on the AOS Client the support teams that can access your IBM equipment. There are several IBM-maintained support groups available; those lists define the IBM support region and the IBM product. You also can specify the authorized Support Teams or the AOS user ID or email address of individual support members.

The AOS topology when a connection is established is shown in Figure 17-7.

Figure 17-7 AOS connection architectural overview

17.7.3 Support Session modes

When an AOS session is active, the user or the IBM support representative can switch between the following session modes:
- Chat only mode: Chat only mode is the default AOS session mode between you and IBM support when the session is initiated. A chat window is available so that you can chat in real time with the support person. The IBM Support member can chat only with you and is not able to view your desktop or access your system.
- View only mode: In View only mode, you can share the window contents with the support person, but the support representative does not have any control over the keyboard or the mouse.
- Shared control: During a window-sharing session, the IBM Support engineer can take over the control of the mouse and keyboard and work with the system. You can return to the chat only mode at any time.
- Port forwarding: A port forwarding session allows IBM support to use product-specific, IP-based maintenance tools. This function allows IBM support to establish an IP connection to a previously defined IP address and port. This type of session, with the AOS Gateway, is important to implementing AOS as a remote support method for the DS8870. For more information about this type of session, see 17.7.4, "Port Forwarding and AOS Gateway for the DS8870" on page 487.

By using the AOS Configuration GUI, you can customize which session modes are enabled for the IBM Support representative, as shown in Figure 17-8.

Figure 17-8 AOS Config GUI (part of AOS client)

17.7.4 Port Forwarding and AOS Gateway for the DS8870

If you configure the server that is hosting your AOS client with port forwarding and unattended sessions, the AOS client becomes an AOS Gateway (Remote Support Gateway). This gateway is an entry point for IBM support to connect to the HMC of each DS8870 whose IP addresses were predefined in the port forwarding configuration panel. Only predefined IPs can be reached. An example of a port forwarding configuration is shown in Figure 17-9.

Figure 17-9 Port Forwarding configuration panel example

The session allows an IBM Support representative to connect from their AOS Console to the DS8870 HMC through the AOS Gateway. The gateway usage is the ideal solution for remote maintenance connections in which you do not need to interact with support. Only this port forwarding session mode guarantees that IBM support can connect to the DS8870 HMC following a call home event. Specific DS8870 support tools can then be run in a secure troubleshooting scenario over the AOS tunneling. How the port forwarding communications flow works in AOS is shown in Figure 17-10.

Figure 17-10 AOS port forwarding

An SSH connection to a DS8870 HMC from the AOS Console through the AOS Gateway (by using port forwarding) is shown in Figure 17-11.

Figure 17-11 Port forwarding Control Panel

Port forwarding is an alternative way for IBM support to connect to and service the DS8870 instead of using an analog phone line. However, the AOS gateway cannot be used for call home or data offload, and no data collection can be uploaded by AOS. You must use the other methods that are described in 17.5.1, "Call Home and heartbeat: outbound" on page 473 and 17.5.2, "Data offload: outbound" on page 473.

17.7.5 AOS Best Practices for DS8870

This section provides some best practices relevant to the AOS Gateway with the DS8870.

Verifying the AOS support service installation
Verifying the installation of the AOS support service as part of an AOS Gateway can be divided into the following steps:
1. Verify the local network connectivity.
2. Verify the connection to the AOS Server.

Verification of the local connectivity
You must verify whether you can reach the configured IBM devices in the network. This testing is a basic connection test. The test can be performed with an SSH client for all connections that include a configured port forwarding on port 22. (Logging in to the DS8870 service interface is not required for this test.) A port forwarding that is configured for 443 (HTTPS for the remote HMC WUI) can be checked with a web browser. If the test fails, the network administrator must check for the reason why a communication on this specific port is not possible.

Ping command: The well-known ping command cannot be used. The ping command is an Internet Control Message Protocol (ICMP) Type 8 echo request. A ping sends the request to the IP address without specifying a port.
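Assuming a port forwarding entry that points at an HMC with the placeholder address 10.0.0.50, the two tests that are described above could be run from the AOS Gateway server as follows; an SSH banner or an HTTP status line proves that the port answers, and no login is required:

# basic reachability tests from the AOS Gateway (illustrative)
ssh 10.0.0.50                 # port 22: any SSH banner or login prompt means the port is open
curl -kI https://10.0.0.50/   # port 443: an HTTP status line means the HMC WUI answers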

Verification of the AOS support service connectivity
To verify the connectivity to the AOS Server, you must enable the Enable Full Tracing option, as shown in Figure 17-12.

Figure 17-12 Assist On-site configuration panel with "Enable full tracing" enabled

When the full tracing and Start Service after Closing configurator options are enabled, close the AOS configuration panel. Wait for a couple of minutes and open the aos_service.txt file. This file is written to the working directory that is specified in the AOS configuration panel. In Linux, the default location for the working directory is:
/var/opt/ibm/aos/support_service
For Windows, the default location for the working directory is:
C:\Documents and Settings\All Users\Application Data\IBM\Tivoli\Assist On-site

Use the section "Understanding the AOS trace files" on page 490 to read the aos_service.txt file. There should be some entries that indicate that the AOS support service client was registered with the AOS server successfully. The AOS support service program connects to the AOS Server and sends a heartbeat. A successful connection attempt is shown in Example 17-2 on page 490. To ensure that connectivity between the AOS client and the AOS server can be established, contact IBM product support to test whether an AOS remote session can be established to your system.

AOS trace files: The AOS trace files grow fast and can use a large amount of disk space if you have the trace enabled for a long time. Therefore, we recommend that you switch off the full tracing for normal operation.
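Because the support service polls the AOS server every 2 minutes, a quick check while full tracing is enabled is to filter the trace file for the polling and connection messages that are shown in Example 17-2 on page 490. A minimal sketch for the Linux default working directory:

# confirm that heartbeats are reaching the AOS server
grep -E "AoS Polling Server|Connected!" /var/opt/ibm/aos/support_service/aos_service.txt | tail -n 5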

Understanding the AOS trace files
In this section, we describe the contents of the aos_service.txt and aos_session.txt files. These files are helpful in fixing connection issues.

Trace file for the AOS support service: aos_service.txt
The aos_service.txt file contains information about the heartbeat and the initiation of an AOS session. The file is written by the aos_svc program. When you start the AOS service, the program dumps the current configuration into the trace file. Example 17-2 shows a successful direct connection that is established between the AOS support service client and the AOS server, which validates the SSL certificate of the server. For better clarity in reviewing the output of the trace file, the timestamps and the SSL Certificate check are removed. An example for a connection that is established through a proxy is shown later.

Example 17-2 Example for a successful connection attempt
[..]-->Connecting to Server: 'aos.us.ihost.com' on port 443 {3664}
[..]-->Connection to Server: 'aos.us.ihost.com' (72.15.208.234) on port 443 {3664}
[..]-->Connected! {3664}
[..]-->Transaction is SSL {3664}
[..]-->ssl_load_functions() ... {3664}
[..]-->ssl_load_functions(OpenSSL) {3664}
[..]-->openssl_setup_client() ... Done {3664}
[..]-->ssl_setup_client() ... returning OK {3664}
[..]-->ssl_setup_client() = 0 {3664}
[..]-->Switching to SSL Starting {3664}
[..]-->Using regular SSL compatibility mode {3664}
[..]-->*** SSL Certificate START *** {3664}
[..]-->*** SSL Certificate END *** {3664}
[..]-->SSL connected with cypher AES256-SHA (256 bits) {3664}
[..]-->Switching to SSL Completed {3664}
[..]-->AoS Polling Server {3664}

The trace file shows that the system is contacting the AOS server every 2 minutes. After the connection to the AOS server is successfully established, the AOS client sends the Heartbeat data to the server, as shown in Example 17-3.

Example 17-3 AOS Support Service Heartbeat
Action=AOSPolling&TargetGUID=**********&ComputerName=IBM-*********&CustNumber=4889&CustName=ITSO%20Residency&Model=*****&Vendor=*****&Serial=*****&Uuid=******&OS=MS%20Windows%20XP%2C%20Version%205.1.2600%20%28SP%203%29&Platform=windows_i86&LocalTZ=-2&Language=en&TargetVersion=3.0.45&Status=1&ACL=**********&ChatOnly=no&ViewOnly=no&SharedControl=no&AllowCollaboration=no&AllowRecording=no&AllowTunnelling=yes&Schedule=7F-00FFFFFF-7FFFFFFF&

208. If this combination fails.com to connect to the relay 72.com':8200 {1112} [.234:8200 (00000000) {1112} [.]-->Connecting to RELAY 72. Example 17-6 shows a successful connection to a relay server.us.234 {1112} [.80.72.us.]-->SUCESS: Connected DIRECTLY to aos.234..com {1112} Chapter 17. Example 17-6 Connection to one of the AOS Relays [. Example 17-4 shows a response with no pending connection request.208.]-->Trying to connect DIRECTLY to 'aos..]-->Using hostname aos.. Example 17-4 Response with no pending connection request <?xml version="1..us. AOS tries to establish a connection to the submitted IP addresses on the submitted ports.15..15.443</port_list> </agent> </remotecontrol> </response> When the response of the heartbeat contains a valid session ID.208.The AOS server then replies with an XML Structure that does include the interval for the next heartbeat.60</addr_list> <port_list>8200...0" encoding="UTF-8" standalone="yes"?> <response> <remotecontrol> <agent id="aos_agent"> <timeout> 2 </timeout> <session_id>***************************************</session_id> <addr_list>72.ihost..15.15.0" encoding="UTF-8" standalone="yes"?> <response> <remotecontrol> <agent id="aos_agent"> <timeout> 2 </timeout> </agent> </remotecontrol> </response> Example 17-5 shows the servers response that includes an encrypted session ID and a list of AOS server relays that are available for a connection Example 17-5 Response with pending connection request <?xml version="1.ihost.223. the next combination is used.ihost. Remote support 491 .

waiting 4 secs on port 1176 {1112} [..]-->AOS PROCESS CONNECTED AND RUNNING {1112} [.ihost..]-->ssl_setup_client() ..com' on port 443 {832} [.]-->ME: 'SYSTEM@192..com{832} [...]-->Lets ask the proxy to connect to aos..us.returning OK {832} [.]-->HTTP_SEND: CONNECT aos...ihost..When an AOS session is successfully created.]-->AOS process running (2164).........txt file..168.122.]-->SUCESS: Connected to PROXY 192.213':3128 {832} [....ihost.213 {832} [.com:443 HTTP/1.]-->ssl_load_functions() ...168...168...]-->openssl_setup_client() ..]-->ssl_load_functions(OpenSSL) {832} [...... the trace files contain entries that show the connection initiation to the proxy..com {832} [..]-->HTTP_RECV: HTTP/1. Example 17-8 Connection to an AOS server through a proxy server [.]-->HTTP_SEND: User-Agent: Assist On-site Support Service {832} [.]-->HTTP_SEND: Host: aos.ihost..com:443 {832} [. [..]-->SSL connected with cypher AES256-SHA (256 bits) {832} [..168.168.com:443 {832} [..]-->HTTP_SEND: Connection: keep-alive {832} [..]-->HTTP_SEND: Proxy-Connection: keep-alive{832} [..]-->HE: 'Bjoern Wesselbaum@192.]-->Connected! {832} 492 IBM System Storage DS8870 Architecture and Implementation .ihost.us.]-->Switching to SSL Completed {832} [. {832} [....]-->*** SSL Certificate END *** {832} [...ihost..]-->ssl_setup_client() = 0 {832} [.us.89[XX:XX:XX:XX:XX:XX]' running 'MS Windows XP' {1112} [..122.]-->Connecting to Server: 'aos. as shown in Example 17-8.us..0{832} [..]-->Trying to connect to PROXY server '192...us..]-->Lets ask the proxy to connect to aos...25[XX:XX:XX:XX:XX:XX]' running 'Linux' {1112} When a proxy server is used. the name and the IP and MAC addresses of the IBM Support member are written into the aos_service.Done {832} [.0 200 Connection established {832} [....]-->PROXY ACCEPTED!! we are connected to aos.us.]-->HTTP_RECV: {832} [.... Example 17-7 Successful created AOS session [.]-->HTTP_SEND: {832} [.122. followed by the connection that is established with the AOS server...]-->Switching to SSL Starting {832} [.]-->*** SSL Certificate START *** {832} ..]-->PROCESS STARTED in session 0x0 pid=9060628 {1112} [.]-->EVERTHING READY {1112} [.. as shown in Example 17-7.

com:443 HTTP/1..ihost... you can see the entries of the proxy error page in the trace file (see Example 17-9)..]-->FAILED to connect to 'aos.]-->http_recv_response(): Ready to read HTTP content {3084} [..]-->Connection to proxy failed with 403...ihost.ihost.]-->HTTP_RECV: {3084} [.com' on port 443 {3084} [...</p> . </body></html> {3084} [....01//EN" "http://www..org/TR/html4/strict.ihost..com:443 {3084} [.. charset=utf-8"> <title>ERROR: The requested URL could not be retrieved</title> .com/*">https://aos. Please contact your service provider if you feel this is incorrect.]-->HTTP_SEND: Proxy-Connection: keep-alive{3084} [.213':3128 {3084} [.us..]-->AoS Polling returned -8 {3084} Chapter 17. Example 17-9 Example of a blocking proxy server [...]-->Trying to connect to PROXY server '192...]-->HTTP_SEND: Host: aos..dtd"> <html><head> <meta http-equiv="Content-Type" content="text/html..]-->Lets ask the proxy to connect to aos.us.168..w3.]-->HTTP_RECV: HTTP/1.com/*</a></p> <blockquote id="error"> <p><b>Access Denied.]-->HTTP_SEND: User-Agent: Assist On-site Support Service{3084} [. Proxy response was:0 Forbidden {3084} [.us..com' on port 443: -8 {3084} [..us..</p> <p>Your administrator is <a href="mailto:webmaster ....213 {3084} [.]-->HTTP_SEND:{3084} [....0 403 Forbidden {3084} ...us.168. [.ihost.. Remote support 493 .ihost..us.If the proxy server is blocking the connection.ihost..122....us.]-->HTTP_SEND: Connection: keep-alive{3084} [.]-->HTTP_SEND: CONNECT aos.. consult the proxy administrator to enable the AOS IP addresses and required ports..]-->SUCESS: Connected to PROXY 192. <body id=ERR_ACCESS_DENIED> <div id="titles"> <h1>ERROR</h1> <h2>The requested URL could not be retrieved</h2> </div> <hr> <div id="content"> <p>The following error was encountered while trying to retrieve the URL: <a href="https://aos....0 {3084} [..com {3084} [.</b></p> </blockquote> <p>Access control configuration prevents your request from being allowed at this time.</a>. When the proxy presents an error page...]-->Proxy response body was <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.122..]-->Connecting to Server: 'aos.

XXX.XXX:22. {456} .]-->TUNNEL 1_ssh (0x511ABC17-0) CONNECTING TO XXX. you must install two independent AOS gateways at your site. Example 17-10 AOS Session trace entries for port forwarding [.. it is not relevant if both AOS clients are running the same operating system. When you plan for a dual AOS gateway setup (as shown in Figure 17-13).]-->TUNNEL 1_ssh (0x511ABC17-0) CREATED FOR XXX. 494 IBM System Storage DS8870 Architecture and Implementation ....]-->SESSION CHANGED TO MODE 'Port Forwarding' (0x0100) {2228} [. {456} [.. Those entries hold the AOS clients configuration ID that is used on the AOS server to identify the client.]-->Start Processing CREATE TUNNEL 0x511ABC17-0 {456} [. you must copy the port forwarding configuration and paste it into the other system. while port forwarding is used on an AOS gateway.XXX.txt Example 17-10 shows the entries of a successful port forwarding session..]-->AFTER process_gui_events()=0 {2228} [... Important: Do not clone the port forwarding configuration from the Windows registry or configuration file.... You should use the AOS configuration GUI to clone the port forwarding configuration.The AOS session log: aos_session. It is only required that you keep the port forwarding configurations synchronized. [.XXX.... Figure 17-13 Example of a redundant AOS Gateway setup When the configuration is copied...XXX:22...]-->TUNNEL 0x511ABC17-0 CLOSED {456} Planning for redundancy To ensure that IBM support can access your DS8870 to perform remote service tasks at any time and under any circumstances..XXX.

Therefore. the content that is presented on the window must be remotely accessible. or a virtual Console application of the virtual machine host.Accessing the AOS Gateway from a remote session In some configurations. we compare it here with the following remote support connection types that are available for the DS8870: Modem: IBM Support and the DS8870 use a modem line to connect and offload data and engage with the product. a solution can be implemented by using software.7. 17. AOS support service dialogs are sent to the window device. which offers customers a better control of their environment. Data speeds are better by using this remote connectivity option because it uses the fast network infrastructure of the customer to connect to IBM. Remote support 495 .6 AOS with VPN versus a modem To further emphasize the benefits of AOS. A modem also needs customer intervention to reset the line or connect it to the DS8870. In this case. It is a slow connection type and depends on the quality of the line to not drop a session. a VNC-like application. the AOS Gateway must be accessed by administrators who cannot use the local physical monitor or keyboard. VPN: IBM support and the DS8870 use this connection to offload data and engage with the product. For example. Chapter 17. such as Tivoli Remote Takeover. The use of a Windows Terminal Server that creates a session for each connected RDP client does not work. AOS: This option adds SSL security. a window-sharing solution must be used and designed in such a way that the remote access uses a window sharing of the session that also sends the output to the physical monitor.

offload data. Figure 17-14 shows a typical speed comparison between AOS+VPN and a modem in a scenario in which a customer’s DS8870 must call home for a notice. Figure 17-14 Comparing data download speed of an AOS/VPN session with a modem session 496 IBM System Storage DS8870 Architecture and Implementation . and engage support center.A benefit of having a DS8870 storage device that is connected via AOS and high-speed VPN is the speed at which IBM support can be engaged and trigger data offload.

Enables or disables the modem dial-in and VPN initiation to the HMC. There are two commands that are available to use for this purpose: chaccess. Enables or disables the command-line shell access to the HMC via Internet or VPN connection. Remote support 497 . which enables and disables customized remote access. which enables and disables all remote connections. or modem must be specified. customers who disable HMC users are expected to change the default password for user customers and manage the changed password. -wui. The customer is provided with a back-up command to disable or enable HMC users. -wui enable | disable (optional) WUI Access via Internet/VPN n/a -modem enable | disable (optional) Modem Dial-in and VPN Initiation n/a Chapter 17. and lsaccess.17.8 Further remote support enhancements The customer can customize what can be done to the DS8870 by using DS CLI commands. or modem must be specified. Optional. the WUI. The ability of the user to affect storage facility operations and maintenance is limited. Enables or disables the Hardware Management Console's WUI access on the HMC via Internet or VPN connection. This control is for service access only and has no effect on access to the machine via the DS Command Line Interface. -wui. This control is for service access only and has no effect on access to the machine by using the DS Storage Manager. At least one of command-line. At least one of command-line. Optional. or modem must be specified. At least one of command-line. -wui. the customer can control the possibility of establishing remote access sessions from a GUI or command line by enabling or disabling specific users. The following newer capabilities are available: Customer Control of Remote Access By using the HMC. The chaccess command modifies access settings on an HMC for the command-line shell. and modem access. The DSCLI commands invoke functions to allow the customer to disable or enable remote access for any connection method and remote WUI access. Table 17-2 Chaccess parameters description Parameter -commandline enable | disable (optional) Description Command Line Access via Internet/VPN Default n/a Details Optional. With this addition of user control capability. The command uses the following syntax: chaccess [ -commandline enable | disable ] [-wui enable | disable] [-modem enable | disable] hmc1 | hmc2 The description of the command parameters is listed in Table 17-2. DSCLI Command that invokes the HMC command There are a series of DSCLI commands that serve as the main interface for the customer to control remote access.

Table 17-3 lsaccess parameters description Parameter hmc1 | hmc2 (optional) Description The primary or secondary HMC. Specifies the primary (hmc1) or secondary (hmc2) HMC for which settings should be displayed. although a user inadvertently specifies a primary HMC by using –hmc2 and the secondary backup HMC by using –hmc1 at DS CLI start.Parameter hmc1 | hmc2 (required) Description The primary or secondary HMC. settings for both are listed. The command uses the following syntax: lsaccess [ hmc1 | hmc2 ] The description of the command parameters is listed in Table 17-3. regardless of how -hmc1 and -hmc2 were specified during dscli startup. The lsaccess command displays access settings of primary and backup HMCs. A DS CLI connection might succeed. Example 17-11 lsaccess command output dscli> lsaccess hmc commandline ___ ___________ hmc1 enabled hmc2 enabled wui ___ enabled enabled modem _____ enabled enabled Use Cases The user can toggle the following independent controls: Enable/Disable WUI Access via Internet/VPN Enable/Disable Command Line Access via Internet/VPN Enable/Disable Modem Dial in and VPN Initiation 498 IBM System Storage DS8870 Architecture and Implementation . Important: The hmc1 specifies the primary and hmc2 specifies the secondary HMC. regardless of how -hmc1 and -hmc2 were specified during DS CLI start. Default n/a Details Required. Specifies the primary (hmc1) or secondary (hmc2) HMC for which access should be modified. If neither hmc1 or hmc2 is specified. Each command runs with the remote connection type description. See Example 17-11 for an illustration of the lsaccess command output. Important: The hmc1 specifies the primary and hmc2 specifies the secondary HMC. A DS CLI connection might succeed. although a user inadvertently specifies a primary HMC by using –hmc2 and the secondary backup HMC by using –hmc1 at DS CLI start. Default List access for all HMCs. Details Required.

Chapter 17. This on-demand audit log mechanism is sufficient for customer security requirements in regards to HMC remote access notification. and network.9 Audit logging The DS8870 offers an audit logging security function that is designed to track and log changes that are made by administrators that use e Storage Manager DS GUI or DS CLI. Customer notification of remote login The HMC code records all remote access. D/E/D: Only allow command-line access via network. in a log file.0. email notifications and SNMP traps also can be configured at the HMC to send notification in case of a Remote Support connection. VPN. E/D/E: Allow WUI access via network and model dial-in. Remote support 499 . 2012 15:30:40 CEST IBM DSCLI Version: 7. Are you sure you want to replace the file? [y/n]: y CMUC00243I offloadauditlog: Audit log was successfully offloaded from smc1 to c:\75ZA570_audit.txt. This function also documents remote support access activity to the DS8870. Therefore. 17. E/E/E: Allow all access methods. Example 17-12 DS CLI command to download audit logs dscli> offloadauditlog -logaddr smc1 c:\75ZA570_audit. E/E/D: Allow WUI and command-line access via network.The following use cases are available: Format Option 1/Option 2/Option 3 D = Disabled E = Enabled The customer can specify the following access options: D/D/D: No access is allowed. E/D/D: Only allow WUI access via network. D/D/E: Only allow modem dial-in. The DS CLI offloadauditlog command that provides clients with the ability to offload the audit logs to the DS CLI workstation in a directory of their choice is shown in Example 17-12. In addition to the Audit Log.7. D/E/E: Allow command-line access via network and modem dial-in.txt Date/Time: October 2. The audit logs can be downloaded by DS CLI or Storage Manager.580 DS: CMUC00244W offloadauditlog: The specified file currently exists. There is a DS CLI function that allows a customer to offload this file for audit purpose. it provides the customer with a complete audit trial of remote access to an HMC. The DS CLI function combines the log file that contains all service login information with an ESSNI audit log file that contains all customer configuration user login via DS CLI and DS GUI. including modem.

. In this case.Challenge Key = 'Fy31@C37'. including command parameters and user ITPC-R commands are not supported)...N.8002.Phone_connection_started.WUI_session_ended_logged off..2012/10/02 12:09:49:000 MST.N.IBM..Authority_to_root.2107-75ZA570.2012/10/02 09:11:16:000 MST. A single entry is used to log request and response information. It is possible.N. Authority_upgrade_to_root.1.WUI_session_started.customer.IBM. remove. The Response Key acts as a one-time authorization to the features of the HMC..The downloaded audit log is a text file that provides information about when a remote access session started and ended.N..2107-75ZA570. including command parameters and user ID.Phone_started.WUI_session_logoff.1. A portion of the downloaded file is shown in Example 17-13. or modify logical configuration.. U.2012/10/02 13:35:30:000 MST.customer. Audit logs feature the following characteristics: Logs should be maintained for a period of 30 days.2012/10/02 14:49:18:000 MST.2107-75ZA570. 500 IBM System Storage DS8870 Architecture and Implementation . It is a token that is shown to the IBM support representative who is dialing in to the DS8870. The representative must use the Challenge Key in an IBM internal tool to generate a Response Key that is given to the HMC. Log Copy Services commands..2107-75ZA570... U. Log commands that modify Storage Facility and Storage Facility settings. though unlikely. U.2107-75ZA570. Example 17-13 Audit log entries that are related to a remote support event by using a modem U. and what remote authority level was applied.IBM. that an operation does not complete because of an operation timeout. Log commands that create. U.8020..N. Entries are added to the audit file only after the operation completes.8000.Phone_connection_ended. There is no direct user login and no root login through the modem on a DS8870 HMC.1.Phone_ended.. The Challenge Key that is presented to the IBM support representative is not a password on the HMC..IBM. It is the user's responsibility to periodically extract the log and save it away.2012/10/02 09:10:57:000 MST. Logs are automatically trimmed (FIFO) by the subsystem so they do not use more than 50 megabytes of disk storage. Log user password and user access violations.1.1. The Challenge-Response process must be repeated if the representative needs to escalate privileges to access the HMC command-line environment. All information about the request and its completion status is known. The audit log entry includes the following roles: Log users that connect or disconnect to the storage manager. no entry is made in the log.IBM. including command parameters and user ID.8036. The Challenge and Response Keys change when a remote connection is made.8022.

For more information about how auditing is used to record who-did-what-and-when in the audited system and for a guide to log management.gov/publications/nistpubs/800-92/SP800-92.pdf Chapter 17. see this website: http://csrc.nist. Remote support 501 .

502 IBM System Storage DS8870 Architecture and Implementation .

503 . DS8870 Capacity upgrades and CoD This chapter describes aspects of implementing capacity upgrades and Capacity on Demand with the IBM System Storage DS8870. All rights reserved. This chapter covers the following topics: Installing capacity upgrades Using Capacity on Demand © Copyright IBM Corp. 2013.18 Chapter 18.

A storage enclosure interconnects the DDMs to the controller cards that connect to the device adapters. Most commonly. Each storage enclosure contains a redundant pair of controller cards. so they are referred to as DA pairs. meaning that each pair has 48 DDMs. each storage enclosure is shipped full with 24 DDMs. storage enclosure pairs. Storage enclosures always are installed in pairs. The DAs are the RAID adapters that connect the Central Electronics Complexes (CECs) to the DDMs. two. Table 18-1 DS8870 Disk drive types Standard drives SFF 146 GB 15-K rpm FDE SAS SFF 300 GB 15-K rpm FDE SAS SFF 600 GB 10-K rpm FDE SAS SFF 900 GB 10-K rpm FDE SAS LFF 3 TB 7. If a disk enclosure pair is populated with only 16 or 32 DDMs. 504 IBM System Storage DS8870 Architecture and Implementation . Baffles maintain the correct cooling airflow throughout the enclosure. A disk drive set includes 16 disk drive modules (DDM) of the same capacity and spindle speed (RPM).2K rpm FDE nearline SAS SFF 400 GB FDE SSD The disk drives are installed in Storage Enclosures (SEs). A storage enclosure pair can be populated with one. Each storage enclosure attaches to two device adapters (DAs).18. Physical installation and testing of the device adapters. Figure 3-13 on page 49 illustrates the available DS8870 Storage Enclosures. All drives that are offered in the DS8870 are Full Disk Encryption (FDE) capable. or 48 DDMs). disk drive filler modules that are called baffles are installed in the vacant DDM slots. 32. After the capacity is added successfully. The DS8870 DA cards are always installed as a redundant pair. and DDMs is done by your IBM service representative. Table 18-1 lists which DS8870 disk drive modules are available. or three disk drive sets (16. All DDMs in a disk enclosure pair must be of the same type (capacity and speed).1 Installing capacity upgrades Storage capacity can be ordered and added to the DS8870 through disk drive sets. the new capacity is shown as unconfigured array sites. Each of the controller cards also includes redundant trunking. with one enclosure in the upper part of the unit and one enclosure in the lower part.

The Storage Enclosures installation order and DA Pair relationship is shown in Figure 18-1. You cannot create ranks by using the new capacity if this action causes your machine to exceed its license key limits. “IBM System Storage DS8000 features and license keys” on page 271. Figure 18-1 SE installation order and DA Pair relationship You might need to obtain new license keys and apply them to the storage image before you start configuring the new capacity. Applying increased feature activation codes is a concurrent action. see Chapter 10. DS8870 Capacity upgrades and CoD 505 . SSDs: Special restrictions in terms of placement and intermixing apply when Solid State Drives (SSDs) are added. Chapter 18. but a license reduction or deactivation is often a disruptive action. For more information.

the target DS8870 has four device adapter pairs (a total of eight DAs) and five storage enclosure pairs (a total of 10 SEs).1 Installation order of upgrades Individual machine configurations vary. all array sites are in use. more information is shown. the next two storage enclosures to be populated are connected to a new DA pair. see “Frames: DS8870” on page 34. For these examples. and DDMs are installed in your DS8870: lsda lsstgencl lsddm lsarraysite When the -l parameter is added to these commands.1. eight DDMs are installed into the upper storage enclosure and eight into the lower storage enclosure. storage hardware is populated in the following order: 1. see Figure 18-2 on page 510.2 Checking how much total capacity is installed The following DS CLI commands can be used to check how many DAs. For more information.18. adding capacity to a DS8870 does not require any downtime. Each DA Pair can manage a maximum of four Storage Enclosure pairs (192 DDMs). If you add a complete 48 pack. An upgrade installation order also is difficult to provide because it is possible to order a machine with multiple under-populated storage enclosures (SEs) across the DA pairs. The DA cards are installed into the I/O enclosures that are at the bottom of the base frame and the first expansion frame. The second and third frames do not contain I/O enclosures. These five storage enclosure pairs include one SE pair that contains 24 LFF 3-TB nearline drives. For more information. consult with the IBM Support Center to verify the optimal configuration. all storage upgrades are concurrent. 3. The configuration is done in a way to allow future upgrades to be performed with the fewest physical changes. 2. However. we show examples of using these commands. Whenever you add 16 DDMs to a machine. This consultation is done to avoid losing capacity on the DS8870. so it is not possible to give an exact order in which every storage upgrade is installed. DDMs are added to under-populated enclosures. SEs. only storage disks. In the examples. meaning that an array was created on each array site. In the next section. Intermix DDM installation: If there is an intermix DDM installation.1. SE installation order always is done from bottom to the top of the frame. Generally. when capacity is added to a DS8870. 24 are installed in the upper storage enclosure and 24 are installed in the lower storage enclosure. After the first storage enclosure pair on a DA pair is fully populated with DDMs (48 DDMs total). 18. 506 IBM System Storage DS8870 Architecture and Implementation . There are 184 DDMs and 24 array sites because each array site consists of eight DDMs. Three SE pairs are fully populated and one SE pair features 16 SSDs (one drive set) and 32 baffles in which drives are not populated.

1400-1B4-38489/R0-P1-C3 Online U1400.2107-D02-0797F/R1-P1-D1 2 400.0x0360.0x0361 0x1 12 3000.2107-D02-0792T/R1-P1-D6 2 400.0x0262.0x0032.0x0163 0x0 24 900.0 1 0x0130.0x0360. DS8870 Capacity upgrades and CoD 507 .0x0033.0 65000 IBM.0 15000 IBM.0x0362.1400-1B2-38491/R0-P1-C6 Online U1400. 2012 11:19:15 CEST IBM DSCLI Version: 7.0 array member S16 Normal IBM.2107-D02-0792T/R1-S10 0x0232.0x0233.x.0x0131 0x0 24 300.1400-1B3-38477/R0-P1-C6 Online U1400.xxx DS: IBM.1B2.0x0233.RJ38490-P1-C6 . Example 18-2 List the storage enclosures dscli> lsstgencl IBM.0 15000 IBM.1400-1B2-38491/R0-P1-C3 Online U1400.0x0361 0x0 8 400.0x0231.2107-D02-077PN/R1-S07 0x0030.RJ38490-P1-C3 .2107-D02-0792T/R1-P1-D4 2 400.RJ38489-P1-C3 .Example 18-1 shows a listing of the device adapters.0x0333 0x0 24 300.0x0162.7. Example 18-3 List the DDMs (abbreviated) dscli> lsddm IBM.0x0363 Example 18-2 shows a listing of the storage enclosures.2107-75ZA571 Date/Time: October 8.0x0263. Example 18-1 List the device adapters dscli> lsda -l IBM.0x0132.x. Because there are 184 DDMs in the example machine. 2012 11:13:52 CEST IBM DSCLI Version: 7.0x0233 IBM.2107-D02-07E91/R1-S02 0x0232.1B1.2107-D02-077H8/R1-S05 0x0260.2107-75ZA571 ID DA Pair dkcap (10^9B) dkuse arsite State ================================================================================================ IBM.2107-D02-077B5/R1-S06 0x0262.0x0263 IBM.0 10000 IBM.0.0x0332.0x0063 IBM.RJ38489-P1-C6 .RJ38491-P1-C3 .0x0062.0 spare required S17 Normal IBM.2107-D02-0797F/R1-P1-D3 2 400.0 3 0x0330.2107-D02-07764/R1-S03 0x0060.0x0362.0x0162.1 2 0x0360.0 array member S16 Normal IBM.2107-D02-0797F/R1-P1-D2 2 400.0 15000 IBM.xxx DS: IBM.RJ38477-P1-C6 .0 array member S17 Normal Chapter 18.0 array member S17 Normal IBM.0 0 0x0030.2107-D02-0752R/R1-S04 0x0062.0x0063.0x0160.0 array member S16 Normal IBM.0 array member S16 Normal IBM.0 array member S17 Normal IBM.1 3 0x0260.2107-D02-07E8K/R1-S01 0x0230.0x0130.2107-D02-0792T/R1-P1-D5 2 400.2107-D02-0797F/R1-P1-D4 2 400.0x0161 0x0 24 900.0x0333 IBM.0x0061.0 65000 IBM.1B1.0x0033 IBM.2107-75ZA571 Date/Time: October 8. only a partial list is shown here.0x0363 0x0 8 400.RJ38477-P1-C3 .0 array member S17 Normal IBM.0x0361.0x0133 0x0 24 300.0x0261.0x0132.0x0133 IBM.1 1 0x0060.1B3.1 0 0x0160.7.0x0261.2107-75ZA571 ID State loc FC Server DA pair interfs ======================================================================================================== IBM.0 array member S16 Normal IBM.0x0330.1400-1B1-38490/R0-P1-C6 Online U1400.1B2.0x0362.0 array member S17 Normal IBM.7.xxx DS: IBM.2107-D02-0774H/R1-S08 0x0032.1400-1B4-38489/R0-P1-C6 Online U1400.0 7200 IBM.1B4.2107-D02-0792T/R1-P1-D3 2 400.0x0163 IBM. 2012 11:17:02 CEST IBM DSCLI Version: 7.1B3.2107-75ZA571 Date/Time: October 8.0x0061.0 15000 IBM.0x0031.0x0363 0x1 12 3000.1B4.2107-D02-0792T/R1-P1-D7 2 400.RJ38491-P1-C6 .0x0031.1400-1B3-38477/R0-P1-C3 Online U1400.0 array member S17 Normal IBM.0x0231.0x0331.1400-1B1-38490/R0-P1-C3 Online U1400.2107-D02-0792T/R1-P1-D8 2 400.2107-D02-0797F/R1-S09 0x0230.0x0131.0x0332.0 2 0x0230.2107-75ZA571 ID Interfaces interadd stordev cap (GB) RPM ===================================================================================== IBM.0 7200 Example 18-3 shows a listing of the storage drives.0x0161.0x0231.0x0331 0x0 24 300.0x0232.0 10000 IBM.2107-D02-0792T/R1-P1-D2 2 400.2107-D02-0792T/R1-P1-D1 2 400.

0 Assigned A22 S7 1 300.0 Assigned A12 S20 3 300.0 Assigned A13 S21 3 300. 2012 11:25:27 CEST IBM DSCLI Version: 7.0 Assigned A6 S9 1 300.0 Assigned A5 S8 1 300.0 Assigned A1 S18 3 300.0 Assigned A18 S3 0 900.2107-75ZA571 Date/Time: October 8.xxx DS: IBM.0 Assigned A16 508 IBM System Storage DS8870 Architecture and Implementation .0 Assigned A7 S10 1 300.0 Assigned A15 S23 3 300.0 Assigned A19 S4 0 900.0 Assigned A2 S14 2 3000.0 Assigned A0 S17 2 400.0 Assigned A4 S16 2 400.0 Assigned A20 S5 0 900.0 Assigned A3 S15 2 3000.0 Assigned A10 S13 2 3000.0 Assigned A8 S11 1 300.0 Assigned A21 S6 0 900. a listing of the array sites is shown.0 Assigned A14 S22 3 300.x.2107-75ZA571 arsite DA Pair dkcap (10^9B) State Array =========================================== S1 0 900.0 Assigned A17 S2 0 900.0 Assigned A9 S12 1 300.In Example 18-4. Example 18-4 List the array sites dscli> lsarraysite -dev IBM.0 Assigned A11 S19 3 300.7.

you logically configure them for use. Then. Important: SSDs are unavailable as Standby CoD drives.2 Using Capacity on Demand IBM offers Capacity on Demand (CoD) solutions that are designed to meet the changing storage needs of rapidly growing businesses. 18.2. DS8870 Capacity upgrades and CoD 509 . up to six Standby CoD disk drive sets (96 disk drives) can be factory-installed or field-installed into your system. With this offering.2. This task is not disruptive and does not require intervention from IBM. CoD on the DS8870 is described in this section. This feature is attractive if you have rapid or unpredictable growth. There are various rules about CoD. You can purchase licensed functions that are based on your system’s physical capacity.1 What is Capacity on Demand The Standby CoD offering is designed to provide you with the ability to tap into more storage. or if you want extra storage to be there when you need it. you also can order replacement CoD disk drive sets. This growth can create a problem if there is an unexpected and urgent need for disk space and no time to create a purchase order or wait for the disk to be delivered. In many database environments. it is not unusual to have rapid growth in the amount of disk space that is required for your business. This section describes aspects of implementing a DS8870 that include CoD disk packs. you must place an order with IBM to initiate billing for the activated set. which are described in the IBM System Storage DS8870 Introduction and Planning Guide. which excludes unconfigured Standby CoD capacity. To check for the CoD indicator on the DSFA website. you need to perform the following tasks: Chapter 18. You must check for the following important indicators: Is the CoD indicator present in the Disk Storage Feature Activation (DSFA) website? What is the Operating Environment License (OEL) limit that is displayed by the lskey DS CLI command? Verifying CoD on the DSFA website The data storage feature activation (DSFA) website provides feature activation codes and license keys to technically activate functions that were acquired for your IBM storage products. 18. This feature can help improve your cost of ownership because the extent of IBM authorization for licensed functions can grow at the same time you need your disk capacity to grow.18. GC27-4209. Upon activation of any portion of a Standby CoD disk drive set. To activate the disk drives. Contact your IBM representative to obtain more information about Standby CoD offering terms and conditions.2 Determining whether a DS8870 includes CoD disks A common question is how to determine whether a DS8870 has CoD disks installed.

510 IBM System Storage DS8870 Architecture and Implementation . 3. Figure 18-2 Machine signature and Activation codes The signature is a unique value that can be accessed only from the machine. In the Status column. You also must record the Machine Type that is displayed and the Machine Serial Number (which ends with 0).Using the GUI Complete the following steps to use the GUI: 1. right-click a status indicator and select Storage Image  Add Activation Key 4. The storage system signature is displayed. as shown in Figure 18-2. Connect to the following URL via a web browser: http://<hmc_ip_address>:8451/DS8000/Login 2. Select System Status under the Home icon.

xxx DS: IBM. Now log on to the DSFA website at this URL: http://www.com/storage/dsfa 3. as shown in Example 18-5.0 GB Cache Memory 233. Example 18-5 Machine Signature by using DS CLI dscli> showsi -fullid IBM.2107-75ZA571 Name DS8870_ATS02 desc Mako ID IBM. The next window requires you to choose the Machine Type and then enter the serial number and signature. Figure 18-3 DSFA machine specifics Chapter 18. DS8870 Capacity upgrades and CoD 511 .x.2421-75ZA570 <=======Machine Type (2421) and S/N (75ZA570) numegsupported 1 ETAutoMode all ETMonitor all IOPMmode Managed 2. 2012 13:47:21 CEST IBM DSCLI Version: 7. as shown in Figure 18-3.2107-75ZA571/V0 os400Serial 5AA NVS Memory 8.2107-75ZA570 Model 961 WWNN 5005076303FFD5AA Signature 1234-5678-9012-3456 <============ Machine Signature State Online ESSNet Enabled Volume Group IBM.ibm.8 GB Processor Memory 253.7.2107-75ZA571 Date/Time: October 8. Select IBM System Storage DS8000 Series from the DSFA start page.Using DS CLI Complete the following steps to use the DS CLI: 1.7 GB MTS IBM.2107-75ZA571 Storage Unit IBM. Connect with the DS CLI and run the showsi -fullid command.

DS8700 system in this example 512 IBM System Storage DS8870 Architecture and Implementation .In the View Authorization Details window. Figure 18-4 Verifying CoD by using DSFA. the CoD feature was not ordered for your storage system. If you see 0900 Non-Standby CoD. the feature code 0901 Standby CoD indicator is shown for DS8870 installations with Capacity on Demand. which is shown in Figure 18-4.

xxx DS: IBM.2107-75AZA571 Date/Time: October 8. but actually has 172 TB of disk installed because it has 2 TB of CoD disks. the command fails because it exceeds the OEL limit.7.7.2107-75ZA571 Activation Key Authorization Level (TB) Scope ========================================================================== Operating environment (OEL) 170. As a result.xxx DS: IBM. rank creation succeeds for the last 2 TB of storage.2107-75ZA571 Date/Time: October 8.xxx DS: IBM. In Example 18-6.4 All dscli> applykey -key xxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxx IBM. The machine in this example is licensed for 170 TB of OEL. An OEL key that activates CoD changes the feature limit from the limit that you purchased to the largest possible number.2107-75ZA571 CMUC00199I applykey: Licensed Machine Code successfully applied to storage image IBM.2107-75ZA571 Date/Time: October 8.Verifying CoD on the DS8870 Normally. However. you can see how the OEL key is changed. the CoD feature is installed as part of the Operating Environment License (OEL) key. CoD does not include a discrete key.x. 2012 14:32:03 CEST IBM DSCLI Version: 7.x.2107-75ZA571 dscli> lskey IBM. if you attempt to create ranks by using the final 2 TB of storage. However. the OEL limit increases to a large number (9. 2012 14:33:23 CEST IBM DSCLI Version: 7. Instead. DS8870 Capacity upgrades and CoD 513 .9 million TB). new features or feature limits are activated by using the DS CLI applykey command. After a new OEL key with CoD is installed.x. 2012 14:27:40 CEST IBM DSCLI Version: 7.2107-75ZA571 Activation Key Authorization Level (TB) Scope ========================================================================== Operating environment (OEL) 9999999 All Chapter 18. Example 18-6 Applying an OEL key that contains CoD dscli> lskey IBM.7.

In the Status column. 3. Figure 18-5 Add Activation Key selection 18. these CoD considerations must be clearly understood and documented. From the machine itself. a maximum of 12 array sites of CoD can exist in a machine. Because 16 drives make up a drive set. depending on what you downloaded. we review the tasks that are required to use CoD storage. During the machine order process. CoD array sites If CoD storage is installed. a better use of terminology is to say that a machine can include up to six drive sets of CoD disk. 514 IBM System Storage DS8870 Architecture and Implementation .2. it is a maximum of 96 CoD disk drives. there are 48 array sites. as shown in Figure 18-5. there is no way to tell how many of the array sites in a machine are CoD array sites as opposed to array sites that you can start using immediately. Because eight drives are used to create an array site. Select System Status under the Home icon. if a machine has 384 disk drives installed (of which 96 disk drives are CoD). of which 12 are CoD. For example. right-click a status indicator and select Storage Image  Add Activation Key or Storage Image  Import Key File.Complete the following steps to add the Activation Keys by using Web GUI: 1.3 Using the CoD storage In this section. Connect to the following URL via a web browser: http://<hmc_ip_address>:8451/DS8000/Login 2.

a new OEL key also is issued and should be applied immediately. After the CoD array sites are in use After you start to use the CoD array sites. starting with the mkarray command. All such activation is permanent. “Configuration by using the DS Storage Manager GUI” on page 323. “Configuration with the DS Command-Line Interface” on page 385. then the mkrank command. you must understand how many of each size was ordered and ensure that the correct number of array sites of each size are left unused until they are needed for growth. After the ranks are members of an Extent Pool. If you accidentally configure a CoD array site Given the sample DS8870 with 48 array sites. an OEL key is issued to reflect that CoD is no longer enabled on the storage system. you started to use the CoD arrays and should contact IBM to inform IBM that the CoD storage is in use. the client should configure only 40 of the 48 array sites.Which array sites are the CoD array sites Given a sample DS8870 with 48 array sites. This configuration assumes that all the disk drives are the same size. and so on. If volumes were created and those volumes are in use. Important: IBM requires that a Standby CoD disk drive set must be activated within 12 months from the date of installation. and Chapter 14. of which eight represent CoD disks. see Chapter 13. contact IBM so that the CoD indicator can be removed from the machine. It is possible to order CoD drive sets of different sizes. If other CoD disks are not needed. If new CoD disks are ordered and installed. the volumes can be created. Chapter 18. Using the CoD array sites Use the standard DS CLI (or DS GUI) commands to configure storage. You must place an order with IBM to initiate billing for the activated set. if you accidentally configure 41 array sites but did not intend to start using the CoD disks yet. For more information. You also can order replacement Standby CoD disk drive sets. or the DS8870 reached maximum capacity. DS8870 Capacity upgrades and CoD 515 . of which eight represent CoD disks. use the rmarray command immediately to return that array site to an unassigned state. In this case.

516 IBM System Storage DS8870 Architecture and Implementation .

and analyzing activities with your DS8870. 2013. we also reference the sites where you can find information about the service offerings that are available from IBM to help you in several of the activities that are related to the DS8870 implementation. In this appendix. 517 . © Copyright IBM Corp. All rights reserved. migrating. managing. Tools and service offerings This appendix provides information about the tools that are available to help you when planning.A Appendix A.

the number. main interface.Planning and administration tools This section describes some available tools to help plan for and administer DS8000 implementations. With this input. The following IBM System Storage Servers are supported: DS8000 series Storwize® V7000 Storwize V7000 Unified DS6000™ N series models Capacity Magic is designed as an easy-to-use tool with a single. and type of disk drive sets. and RAID 10. and the RAID type. as shown in Figure A-1 on page 519. Capacity Magic calculates the raw and net storage capacities. it becomes a challenge to calculate the raw and net storage capacity of disk systems. It offers a graphical user interface (GUI) with which you can enter the disk drive configuration of a DS8870 and other IBM disk systems. and you need an in-depth technical understanding of how spare and parity disks are assigned. RAID 6. You must invest considerable time. 518 IBM System Storage DS8870 Architecture and Implementation . such as the DS8870. You also must consider the simultaneous use of disks with different capacities and configurations that deploy RAID 5. Capacity Magic can do the physical (raw) to effective (net) capacity calculations automatically. considering all applicable rules and the provided hardware configuration (number and type of disk drive sets). Capacity Magic Because of the additional flexibility and configuration options that storage systems provide. The tool also includes functionality with which you can display the number of extents that are produced per rank.

Tools and service offerings 519 . Appendix A.Figure A-1 IBM Capacity Magic configuration window Figure A-1 shows the configuration window that Capacity Magic provides for you to specify the wanted number and type of disk drive sets.

This report is also helpful in planning and preparing the configuration of the storage in the DS8870 because it includes extent count information. Contact your IBM Representative or IBM Business Partner to discuss a Capacity Magic study. 520 IBM System Storage DS8870 Architecture and Implementation . The net extent count and capacity slightly differ between the various DS8000 models. which is licensed exclusively to IBM and IBM Business Partners.Figure A-2 shows the resulting output report that Capacity Magic produces. The product models disk storage system effective capacity as a function of physical disk capacity that is to be installed. Figure A-2 IBM Capacity Magic output report Important: IBM Capacity Magic for Windows is a product of IntelliMagic.

The first release was issued as an OS/2 application in 1994. such as the IBM 3880 and 3990. Change to larger-capacity disk modules. Storage consolidation. and offers a rich and meaningful modeling capability. to supporting modern. advanced-function disk systems. Disk Magic evolved from supporting Storage Control Units. but it is by no means complete: Move the current I/O load to a different disk system. Since that release. Activate asynchronous or synchronous Peer-to-Peer Remote Copy.Disk Magic Disk Magic is a Windows based disk system performance modeling tool. Use fewer or more Logical Unit Numbers (LUN). IBM i. Increase the disk system’s cache size. the following IBM disk controllers are supported: XIV DS8000 DS6000 DS5000 DS4000® Enterprise Storage Server (ESS) SAN Volume Controller (SVC) Storwize V7000 Storwize V7000U SAN-attached N series A critical design objective for Disk Magic is to minimize the amount of input that you must enter. Increase the current I/O load. Introducing storage virtualization into an existing disk configuration. Today. and Open environments. Tools and service offerings 521 . integrated. Appendix A. Merge the I/O load of multiple disk systems into a single load. The following list provides several examples of what Disk Magic can model. The tool models IBM disk controllers in System z. It supports disk systems from multiple vendors and offers the most detailed support for IBM subsystems.

Also. graphical output is offered by an integrated interface to Microsoft Excel. Figure A-3 IBM Disk Magic Overview 522 IBM System Storage DS8870 Architecture and Implementation .Modeling results are presented through tabular reports and Disk Magic dialogs. Figure A-3 shows how Disk Magic requires I/O workload data and disk system configuration details as input to build a calibrated model that can be used to explore possible changes.

Open Systems. which provides a graphical representation of performance data that is collected by Easy Tier over the recent days. The tool produces an Easy Tier Summary Report after statistics are gathered over at least a 24-hour period. The Storage Tier Advisor Tool (STAT) can help you determine which volumes are likely candidates for Easy Tier management by analyzing the performance of their current application workloads. The Storage Tier Advisor application tool can be downloaded from this website: ftp://ftp. The Storage Tier Advisor Tool displays a System Summary report for the total of the extent pools and more detailed reports that contain the distribution of heat data in each volume and how much heat data is included for all volumes. The TreeView displays the structure of a project with the entities that are part of a model.com/storage/ds8000/updates/DS8K_Customer_Download_Files/Sto rage_Tier_Advisor_Tool/ Appendix A. The summary report also contains a recommendation of SSD capacity and configuration values and the potential performance improvement if SSD is applied with Easy Tier in automated mode. one zSeries server. or IBM iSeries®) and disk subsystems.software.ibm. one iSeries server. Contact your IBM Representative or IBM Business Partner to discuss a Disk Magic study.Figure A-4 shows the IBM Disk Magic primary window. Figure A-4 IBM Disk Magic particular general project Important: IBM Disk Magic for Windows is a product of IntelliMagic. These entities can be host systems (IBM zSeries. Tools and service offerings 523 . In this case. which is licensed to IBM and IBM Business Partners to model disk storage system performance. TPF. and one IBM DS8800 storage system were selected in the general project wizard. two AIX servers. Storage Tier Advisor Tool In addition to the Easy Tier capabilities. the DS8870 offers the IBM System Storage DS8000 Storage Tier Advisor Tool.

Figure A-5 shows how Storage Tier Advisor Tool requires I/O workload data as input to build a performance summary report. Figure A-5 Storage Tier Advisor Tool Overview How to use the Storage Tier Advisor Tool Complete the following steps to use the STAT: 1. select System Status. Figure A-6 Selecting System Status 524 IBM System Storage DS8870 Architecture and Implementation . as shown in Figure A-6. To offload the Storage Tier Advisor summary report.

566 DS: IBM.276.7.data Volume in drive C is PRK_1160607 Volume Serial Number is 6806-ABBD Directory of C:\temp 21/09/2012 21/09/2012 16:49 2.744 bytes 0 Dir(s) 11.288 SF75ZA570ESS11_heat. CMUC00428I offloadfile: The etdata file has been offloaded to c:\temp\SF75ZA570ESS11_heat. Tools and service offerings 525 . as shown in Example A-1. There should be two files. as shown in Figure A-7. as shown in Example A-2. Example: A-1 Using the DS CLI to offload the Storage Tier Advisor summary report dscli> offloadfile -etdata c:\temp Date/Time: 21 September 2012 16:49:19 CEST IBM DSCLI Version: 7.data. Extract all of the files from the downloaded compressed file.433.297. 3. it is necessary to run STAT with that information as input. Figure A-7 Selecting Export Easy Tier Summary Report Alternatively.256 bytes free Appendix A.data.data 2 File(s) 3.456 SF75ZA570ESS01_heat.2.data 16:49 1.2107-75ZA571 CMUC00428I offloadfile: The etdata file has been offloaded to c:\temp\SF75ZA570ESS01_heat. Select Export Easy Tier summary report. Example: A-2 Extracting all the files from the downloaded compressed file.0.157.632. it is possible to get the same information by using the DS CLI. C:\temp>dir *. After you gather the information.

In the output directory. You also can see the detailed heat distribution (hot. in all the extent pools. Important: As designed. If you do not have write permission. If you open this file with a web browser. Tivoli Storage FlashCopy Manager provides support to create and manage volume-level snapshots for File Systems and Custom Applications. it fails with the following error: CMUA00007E. The tool attempts to write the output file to this directory. It uses Microsoft Volume Shadow Copy Services (VSS) and IBM storage hardware snapshot technology to protect your business-critical data. Figure A-8 Systemwide Recommendation IBM Tivoli Storage FlashCopy Manager IBM Tivoli Storage FlashCopy Manager provides the tools and information that are needed to create and manage volume-level snapshots on snapshot-oriented storage systems.4. These snapshots are created while these applications (with volume data) remain online. Example: A-3 Running STAT C:\Program Files\IBM\STAT>stat -o c:\ds8k\output c:\temp\SF75ZA570ESS01_heat. 5. cold) of all monitored volumes across the different tiers.html file is created. Run STAT.exe command has completed. an index. 526 IBM System Storage DS8870 Architecture and Implementation .data CMUA00019I The STAT. this STAT tool requires write permissions to the directory where it is installed. as shown in Figure A-8. it is possible to see the Systemwide Recommendation. as shown in Example A-3. backups can be sent to tape by using Tivoli Storage Manager server. Optionally.data c:\temp\SF75ZA570ESS11_heat. warm.

Protects applications on IBM System Storage DS3000. For more information about available services.ibm. For more information about IBM Tivoli Storage FlashCopy Manager.html http://www. IBM Tivoli Storage Productivity Center. and resources that are needed to achieve a system-managed environment. and DS5000 on Windows by using VSS. and IBM XIV Storage System on AIX. Supports the Windows. Solaris.ibm. contact your IBM Representative or see this website: http://www. we describe the various service offerings. Solaris.com/infocenter/tsminfo/v6r3/index. and IBM SAN Volume Controller solutions. with minimal performance impact for IBM DB2. SAP. IBM System Storage DS8000.jsp?topic=%2Fcom.ibm. and Exchange.html For more information about IBM Business Continuity and Recovery Services that are available. see the following websites: http://pic. and the time.com/services/ http://www.doc%2Fr_pdf_fcm. Integrates with IBM Storwize V7000. contact your IBM representative or IBM Business Partner. Tools and service offerings 527 .com/services/continuity For more information about educational offerings that are related to specific products.com/software/tivoli/products/storage-flashcopy-mgr/ IBM Service offerings Next. and then select the product as the category. AIX. Satisfies advanced data protection and data reduction needs with optional integration with IBM Tivoli Storage Manager. or visit the following websites: http://www. Oracle.ibm. IBM Global Technology Services: Service offerings IBM can assist you in deploying IBM System Storage DS8870 storage systems. and Linux operating systems. money. and Microsoft Windows.com/services/learning/index. Microsoft SQL Server. IBM Global Technology Services® features the right knowledge and expertise to reduce your system and data migration workload.itsm. Improves application availability and service levels through high-performance.html Select your country.ibm. IBM System Storage SAN Volume Controller. see this website: http://www. near-instant restore capabilities that reduce downtime.com/services/us/en/it-services/storage-and-data-services.This product includes the following key benefits: Performs near-instant application-aware snapshot backups. Linux.ibm. DS4000.dhe.fc m. Appendix A.ibm.

The following sample offerings are included: Storage Efficiency Analysis Storage Energy Efficiency Workshop Storage Efficiency study XIV Implementation and Replication Services XIV Migration Services ProtecTIER® Deduplication Services IBM Certified Secure Data Overwrite Service Technical Project Management DS8000 Data Migration Services by using Temporary Licenses for Copy Services For more information about these service offerings. client-tailored solutions and services that help in the daily work with IBM Hardware and Software components.IBM STG Lab Services: Service offerings In addition to the IBM Global Technology Services. see this website: http://www.com/systems/services/labservices/platforms/labservices_storage.ibm.html 528 IBM System Storage DS8870 Architecture and Implementation . the Storage Services team from the STG Lab are set up to assist customers with one-off.

Abbreviations and acronyms

AAL   Arrays Across Loops
AC   Alternating Current
AL-PA   Arbitrated Loop Physical Addressing
AMP   Adaptive Multistream Prefetching
AOS   Assist On Site
API   Application Programming Interface
ASCII   American Standard Code for Information Interchange
ASIC   Application Specific Integrated Circuit
B2B   Business to Business
BBU   Battery Backup Unit
CEC   Central Electronics Complex
CG   Consistency Group
CHFS   Call Home For Service
CHPID   Channel Path ID
CIM   Common Information Model
CKD   Count Key Data
CoD   Capacity on Demand
CPU   Central Processing Unit
CSDO   Certified Secure Data Overwrite
CSV   Comma Separated Value
CUIR   Control Unit Interface Reconfiguration
DA   Device Adapter
DASD   Direct Access Storage Device
DC   Direct Current
DDM   Disk Drive Module
DFS   Distributed File System
DFW   DASD Fast Write
DHCP   Dynamic Host Configuration Protocol
DMA   Direct Memory Access
DMZ   De-Militarized Zone
DNS   Domain Name System
DPR   Dynamic Path Reconnect
DPS   Dynamic Path Selection
DSCIMCLI   Data Storage Common Information Model Command-Line Interface
DSCLI   Data Storage Command-Line Interface
DSFA   Data Storage Feature Activation
DVE   Dynamic Volume Expansion
EAV   Extended Address Volume
EB   Exabyte
ECC   Error Checking and Correction
EDF   Extended Distance FICON
EEH   Enhanced Error Handling
EOC   End of Call
EPO   Emergency Power Off
EPOW   Emergency Power Off Warning
ESCON   Enterprise Systems Connection
ESS   Enterprise Storage Server
ESSNI   Enterprise Storage Server Network Interface
FATA   Fibre Channel Attached Technology Adapter
FB   Fixed Block
FC   Flash Copy
FCAL   Fibre Channel Arbitrated Loop
FCIC   Fibre Channel Interface Card
FCoE   Fibre Channel over Ethernet
FCoCEE   Fibre Channel over Convergence Enhanced Ethernet
FCP   Fibre Channel Protocol
FCSE   FlashCopy Space Efficient
FDE   Full Disk Encryption
FFDC   First Failure Data Capture
FICON   Fiber Connection
FIR   Fault Isolation Register
FRR   Failure Recovery Routines
FTP   File Transfer Protocol
GB   Gigabyte
GC   Global Copy
GM   Global Mirror
GSA   Global Storage Architecture
GTS   Global Technical Services
GUI   Graphical User Interface
HA   Host Adapter
HACMP™   High Availability Cluster Multi-Processing
HBA   Host Bus Adapter
HCD   Hardware Configuration Definition
HMC   Hardware Management Console
HSM   Hardware Security Module
HTTP   Hypertext Transfer Protocol
HTTPS   Hypertext Transfer Protocol over SSL
IBM   International Business Machines Corporation
IKE   Internet Key Exchange
IKS   Isolated Key Server
IOCDS   Input/Output Configuration Data Set
IOPS   Input/Output Operations per Second
IOSQ   Input/Output Supervisor Queue
IPL   Initial Program Load
IPSec   Internet Protocol Security
IPv4   Internet Protocol version 4
IPv6   Internet Protocol version 6
ITSO   International Technical Support Organization
IWC   Intelligent Write Caching
JBOD   Just a Bunch of Disks
JFS   Journaling File System
KB   Kilobyte
Kb   Kilobit
Kbps   Kilobits per second
KVM   Keyboard-Video-Mouse
L2TP   Layer 2 Tunneling Protocol
LBA   Logical Block Addressing
LCU   Logical Control Unit
LDAP   Lightweight Directory Access Protocol
LED   Light Emitting Diode
LFA   Licensed Function Authorization
LFF   Large Form Factor (3.5-inch)
LFU   Least Frequently Used
LIC   Licensed Internal Code
LIP   Loop Initialization Protocol
LMC   Licensed Machine Code
LPAR   Logical Partition
LRU   Least Recently Used
LSS   Logical SubSystem
LUN   Logical Unit Number
LVM   Logical Volume Manager
MB   Megabyte
Mb   Megabit
Mbps   Megabits per second
MFU   Most Frequently Used
MGM   Metro Global Mirror
MIB   Management Information Block
MM   Metro Mirror
MPIO   Multipath Input/Output
MRPD   Machine Reported Product Data
MRU   Most Recently Used
NAT   Network Address Translation
NFS   Network File System
NIMOL   Network Installation Management on Linux
NTP   Network Time Protocol
NVRAM   Non-Volatile Random Access Memory
NVS   Non-Volatile Storage
OEL   Operating Environment License
OLTP   Online Transaction Processing
PATA   Parallel Advanced Technology Attachment
PAV   Parallel Access Volumes
PB   Petabyte
PCI-X   Peripheral Component Interconnect Extended
PCIe   Peripheral Component Interconnect Express
PCM   Path Control Module
PDU   Power Distribution Unit
PFA   Predictive Failure Analysis
PHYP   POWER Systems Hypervisor
PLD   Power Line Disturbance
PM   Preserve Mirror
PMB   Physical Memory Block
PPRC   Peer-to-Peer Remote Copy
PPS   Primary Power Supply
PSP   Preventive Service Planning
PSTN   Public Switched Telephone Network
PTC   Point-in-Time Copy
PTF   Program Temporary Fix
RAM   Random Access Memory
RAS   Reliability, Availability, Serviceability
RC   Real-time Compression™
RIO   Remote Input/Output
RMC   Remote Mirror and Copy
RMZ   Remote Mirror for System z
RPC   Rack Power Control
RPM   Revolutions per Minute
RPO   Recovery Point Objective
SAN   Storage Area Network
SARC   Sequential Adaptive Replacement Cache
SAS   Serial Attached SCSI
SATA   Serial Advanced Technology Attachment
SCSI   Small Computer System Interface
SDD   Subsystem Device Driver
SDDPCM   Subsystem Device Driver Path Control Module
SDDDSM   Subsystem Device Driver Device Specific Module
SDM   System Data Mover
SDO   Secure Data Overwrite
SF   Space Efficient
SE   Storage Enclosure
SFF   Small Form Factor (2.5-inch)
SFI   Storage Facility Image
SFTP   SSH File Transfer Protocol
SIM   Service Information Message (System z & S/390®)
SMIS   Storage Management Initiative Specification
SMP   Symmetric Multiprocessor
SMS   Storage Management Subsystem
SMT   Simultaneous Multithreading
SMTP   Simple Mail Transfer Protocol
SNIA   Storage Networking Industry Association
SNMP   Simple Network Management Protocol
SOI   Silicon on Insulator
SP   Service Processor
SPCN   System Power Control Network
SPE   Small Programming Enhancement
SRM   Storage Resource Management
SSD   Solid State Drive
SSH   Secure Shell
SSIC   System Storage Interoperation Center
SSID   Subsystem Identifier
SSL   Secure Sockets Layer
SSPC   System Storage Productivity Center
STAT   Storage Tier Advisor Tool
SVC   SAN Volume Controller
TB   Terabyte
TCB   Task Control Block
TCE   Translation Control Entry
TCO   Total Cost of Ownership
TCP/IP   Transmission Control Protocol / Internet Protocol
TKLM   Tivoli Key Lifecycle Manager
TPC   Tivoli Storage Productivity Center
TPC-BE   Tivoli Storage Productivity Center Basic Edition
TPC-R   Tivoli Storage Productivity Center for Replication
TPC-SE   Tivoli Storage Productivity Center Standard Edition
TSSC   TotalStorage System Console
UCB   Unit Control Block
UDID   Unit Device Identifier
UPS   Uninterruptible Power Supply
VM   Virtual Machine
VPN   Virtual Private Network
VSS   Microsoft Volume Shadow Copy Services
VTOC   Volume Table of Contents
WLM   Workload Manager
WUI   Web User Interface
WWPN   Worldwide Port Name
XRC   Extended Remote Copy
YB   Yottabyte
ZB   Zettabyte
zHPF   High Performance FICON for z
zIIP   z9 Integrated Information Processor

Related publications

The publications that are listed in this section are considered particularly suitable for a more detailed discussion of the topics that are covered in this book.

IBM Redbooks publications

For more information about ordering the following publications, see “How to get IBM Redbooks publications” on page 534. Some of the documents that are referenced here might be available in softcopy only:

IBM System Storage DS8000 Host Attachment and Interoperability, SG24-8887
DS8800 Performance Monitoring and Tuning, SG24-8013
IBM System Storage DS8000 Copy Services for Open Systems, SG24-6788
IBM System Storage DS8000 Copy Services for System z, SG24-6787
IBM System Storage DS8000 Easy Tier, REDP-4667
IBM System Storage DS8000 Disk Encryption, REDP-4500
IBM System Storage DS8000: LDAP Authentication, REDP-4505
IBM System Storage DS8000: Remote Pair FlashCopy (Preserve Mirror), REDP-4504
IBM System Storage DS8000: z/OS Distributed Data Backup, REDP-4701
IBM System Storage DS8000 Series: IBM FlashCopy SE, REDP-4368
IBM System Storage DS8000 Copy Services Scope Management and Resource Groups, REDP-4758
DS8000 I/O Priority Manager, REDP-4760
DS8000: Introducing Solid State Drives, REDP-4522
DS8870 VMware VAAI support, REDP-4915
Multiple Subchannel Sets: An Implementation View, REDP-4387
Data Migration to IBM Disk Storage Systems, SG24-7432
IBM System Storage Productivity Center Deployment Guide, SG24-7560
IBM Tivoli Storage Productivity Center V4.1 Release Guide, SG24-7725
TPC 5.1 Technical Guide, SG24-8053
Managing Disk Subsystems using IBM TotalStorage Productivity Center, SG24-7097
SAN Volume Controller: Best Practices and Performance Guidelines, SG24-7521
A Comprehensive Guide to Virtual Private Networks, Volume I: IBM Firewall, Server and Client Solutions, SG24-5201

ibm. San Jose. in USENIX File and Storage Technologies (FAST). and Additional materials.Other publications The following publications also are relevant as further information sources. Snader.boulder. and order hardcopy IBM Redbooks publications or CD-ROMs at this website: http://www. Hints and Tips. S.htm VPN Implementation.com/systems/support/storage/config/ssic Security Planning website http://publib16. S.ibm. Megiddo and D. GC27-2298 IBM System Storage Multipath Subsystem Device Driver User’s Guide. in IEEE Computer. Some of the documents that are referenced here might be available in softcopy only: IBM System Storage DS8870 Introduction and Planning Guide. et al. Modha. pages 58–65. 2004 “SARC: Sequential Prefetching in Adaptive Replacement Cache” by Binny Gill. ISBN-10: 032124544X Online resources The following websites also are relevant as further information sources: IBM Disk Storage Feature Activation (DSFA) website http://www. Redpapers. draft publications.com/support/docview.ibm.wss?&rs=1114&uid=ssg1S1002693 How to get IBM Redbooks publications You can search for. S. 2005).ibm. VPNs. Gill and D. 2007. 2005. February 13–16. volume 37.com/redbooks 534 IBM System Storage DS8870 Architecture and Implementation . pages 129–142 VPNs Illustrated: Tunnels. Modha. and IPSec.com/doc_link/en_US/a_doc_lib/aixbman/security/ipsec _planning.ibm.” by N. number 4. or download IBM Redbooks publications. by Jon C. 4th USENIX Conference on File and Storage Technologies (FAST).jsp System Storage Interoperation Center (SSIC) http://www. Proceedings of the USENIX 2005 Annual Technical Conference. CA “WOW: Wise Ordering for Writes – Combining Spatial and Temporal Locality in Non-Volatile Caches” by B..ibm.com/storage/dsfa Documentation for the DS8000: The Information Center http://publib.com/infocenter/dsichelp/ds8000ic/index. S1002693: http://www. pages 293–308 “AMP: Adaptive Multi-stream Prefetching in a Shared Cache” by Binny Gill.boulder. GC27-4209 IBM System Storage DS Command-Line Interface User's Guide. view. et al. GC52-1309 “Outperforming LRU with an adaptive replacement cache algorithm. Addison-Wesley Professional (November 5. GC53-1127 IBM System Storage DS8000 Host Systems Attachment Guide.

Help from IBM

IBM Support and downloads
http://www.ibm.com/support

IBM Global Services
http://www.ibm.com/services



Back cover

IBM System Storage DS8870 Architecture and Implementation

Dual IBM POWER7 based controllers with up to 1 TB of cache
400 GB SSDs with Full Disk Encryption support
Improved Power Supplies and extended Power Line Disturbance

This IBM Redbooks publication describes the concepts, architecture, and implementation of the IBM System Storage DS8870 storage system. The book provides reference information to assist readers who need to plan for, install, and configure the DS8870.

The IBM System Storage DS8870 is the most advanced model in the IBM DS8000 lineup and is equipped with IBM POWER7 based controllers. Various configuration options are available that scale from dual 2-core systems up to dual 16-core systems with up to 1 TB of cache. The DS8870 also features enhanced 8 Gbps device adapters and host adapters. Connectivity options, with up to 128 Fibre Channel/FICON ports for host connections, make the DS8870 suitable for multiple server environments in open systems and IBM System z environments.

The DS8870 supports advanced disaster recovery solutions, business continuity solutions, and thin provisioning. All disk drives in the DS8870 storage system have the Full Disk Encryption (FDE) feature. The DS8870 also can be integrated in an LDAP infrastructure.

The DS8870 is equipped with high-density storage enclosures that are populated with 24 small-form-factor SAS-2 drives or storage enclosures for 12 large-form-factor nearline SAS drives. The DS8870 storage subsystems also can be equipped with Solid-State Drives (SSDs). The DS8870 can automatically optimize the use of each storage tier, particularly SSD drives, through the IBM Easy Tier feature, which is available for no extra fee.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information:
ibm.com/redbooks

SG24-8085-00   ISBN 0738437611