
Front cover


IBM System Storage DS8000 Series: Architecture and Implementation

Plan, install, and configure the DS8000 for high availability and efficient utilization

Turbo models, POWER5+ technology, FATA drives, and performance features

Learn about the DS8000 design characteristics and management

Gustavo Castets
Bertrand Dufrasne
Stephen Baird
Werner Bauer
Denise Brown
Jana Jamsek
Wenzel Kalabza
Peter Klee
Markus Oscheka
Ying Thia
Robert Tondini

ibm.com/redbooks

International Technical Support Organization

IBM System Storage DS8000 Series: Architecture and Implementation

October 2006

SG24-6786-02

Note: Before using this information and the product it supports, read the information in “Notices” on
page xxi.

Third Edition (October 2006)

This edition applies to the IBM System Storage DS8000 series Turbo Models 931, 932, and 9B2 as
announced in August 2006.
This document was created or updated on November 14, 2006.

© Copyright International Business Machines Corporation 2006. All rights reserved.


Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.

Contents

Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv

Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxii

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
The team that wrote this redbook. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Special thanks to: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvi
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvi

Summary of changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvii


October 2006, Third Edition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvii

Part 1. Concepts and Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

Chapter 1. Introduction to the DS8000 series. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3


1.1 The DS8000, a member of the System Storage DS family . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.1 Infrastructure Simplification. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.2 Business Continuity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.3 Information Lifecycle Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Introduction to the DS8000 series features and functions . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.1 IBM System Storage DS8000 series Turbo models . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.2 Hardware overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.3 Storage capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.4 Supported environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.5 Business continuance solutions — Copy Services functions . . . . . . . . . . . . . . . . . 8
1.2.6 Interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2.7 Service and setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.2.8 Configuration flexibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.3 Positioning the DS8000 series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.3.1 Common set of functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.3.2 Common management functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.3.3 DS8000 series compared to other storage disk subsystems . . . . . . . . . . . . . . . . 11
1.4 Performance features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4.1 Sequential Prefetching in Adaptive Replacement Cache (SARC) . . . . . . . . . . . . 12
1.4.2 Multipath Subsystem Device Driver (SDD) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4.3 Performance for System z. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.4.4 Performance enhancements for System p . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.4.5 Performance enhancements for z/OS Global Mirror . . . . . . . . . . . . . . . . . . . . . . . 13

Chapter 2. Model overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15


2.1 DS8000 series models overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.1.1 Model naming conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.1.2 Machine types 2107 and 242x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.1.3 DS8100 Turbo Model 931 overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.1.4 DS8300 Turbo Models 932 and 9B2 overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.1.5 DS8300 Turbo Model 932 with four expansion frames — 512 TB . . . . . . . . . . . . 21
2.2 DS8000 series model comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22


2.3 DS8000 design for scalability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23


2.3.1 Scalability for capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.3.2 Scalability for performance — linear scalable architecture . . . . . . . . . . . . . . . . . . 24
2.3.3 Model conversions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

Chapter 3. Storage system LPARs (logical partitions) . . . . . . . . . . . . . . . . . . . . . . . . . 27


3.1 Introduction to logical partitioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.1.1 Virtualization Engine technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.1.2 Logical partitioning concepts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.1.3 Why Logically Partition? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.2 DS8300 and LPARs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.2.1 LPARs and storage facility images (SFI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.2.2 DS8300 LPAR implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.2.3 Storage facility image hardware components . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.2.4 DS8300 Model 9B2 configuration options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.3 LPAR security through POWER Hypervisor (PHYP) . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.4 LPAR and Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.5 LPAR benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

Chapter 4. Hardware Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39


4.1 Frames . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.1.1 Base frame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.1.2 Expansion frame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.1.3 Rack operator panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.2 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.2.1 Server-based SMP design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.2.2 Cache management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.3 Processor complex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.3.1 RIO-G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.3.2 I/O enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.4 Disk subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.4.1 Device adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.4.2 Disk enclosures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.4.3 Disk drives — DDMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.4.4 Fibre Channel ATA (FATA) disk drives overview . . . . . . . . . . . . . . . . . . . . . . . 56
4.4.5 Positioning FATA versus Fibre Channel disks . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.4.6 FATA versus Fibre Channel drives on the DS8000 . . . . . . . . . . . . . . . . . . . . . 62
4.5 Host adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.5.1 ESCON Host Adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.5.2 Fibre Channel/FICON Host Adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.6 Power and cooling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.7 Management console network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.8 Ethernet adapter pair (for TPC RM support at R2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

Chapter 5. RAS - Reliability, Availability, Serviceability . . . . . . . . . . . . . . . . . . . . . . . . 69


5.1 Naming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
5.2 Processor complex RAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
5.3 Hypervisor: Storage image independence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.3.1 RIO-G - a self-healing interconnect. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.3.2 I/O enclosure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.4 Server RAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.4.1 Metadata checks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
5.4.2 Server failover and failback. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76


5.4.3 NVS recovery after complete power loss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78


5.5 Host connection availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5.5.1 Open systems host connection — Subsystem Device Driver (SDD) . . . . . . . . . . 81
5.5.2 System z host connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
5.6 Disk subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
5.6.1 Disk path redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
5.6.2 RAID-5 overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
5.6.3 RAID-10 overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
5.6.4 Spare creation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
5.6.5 Predictive Failure Analysis (PFA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
5.6.6 Disk scrubbing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
5.7 Power and cooling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
5.7.1 Building power loss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
5.7.2 Power fluctuation protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
5.7.3 Power control of the DS8000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
5.7.4 Emergency power off (EPO) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
5.8 Microcode updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
5.9 Management console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
5.10 Earthquake resistance kit (R2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

Chapter 6. Virtualization concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91


6.1 Virtualization definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
6.2 Storage system virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
6.3 The abstraction layers for disk virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
6.3.1 Array sites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
6.3.2 Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
6.3.3 Ranks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
6.3.4 Extent pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
6.3.5 Logical volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
6.3.6 Logical subsystems (LSS). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
6.3.7 Volume access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
6.3.8 Summary of the virtualization hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6.3.9 Placement of data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
6.4 Benefits of virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106

Chapter 7. Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107


7.1 Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
7.2 FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
7.2.1 Basic concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
7.2.2 Benefits and use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
7.2.3 Licensing requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
7.2.4 FlashCopy options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
7.3 Remote mirror and copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
7.3.1 Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
7.3.2 Global Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
7.3.3 Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
7.3.4 Metro/Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
7.3.5 z/OS Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
7.3.6 z/OS Metro/Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
7.3.7 Summary of the Copy Services functions characteristics . . . . . . . . . . . . . . . . . . 123
7.4 Interfaces for Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
7.4.1 Storage Hardware Management Console (S-HMC) . . . . . . . . . . . . . . . . . . . . . . 124
7.4.2 DS Storage Manager Web-based interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125


7.4.3 DS Command-Line Interface (DS CLI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126


7.4.4 TotalStorage Productivity Center for Replication (TPC for Replication) . . . . . . . 126
7.4.5 DS Open Application Programming Interface (DS Open API) . . . . . . . . . . . . . 126
7.4.6 System z based I/O interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
7.5 Interoperability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128

Part 2. Planning and Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

Chapter 8. Physical planning and installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131


8.1 Considerations prior to installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
8.1.1 Who should be involved . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
8.1.2 What information is required . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
8.2 Planning for the physical installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
8.2.1 Delivery and staging area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
8.2.2 Floor type and loading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
8.2.3 Room space and service clearance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
8.2.4 Power requirements and operating environment . . . . . . . . . . . . . . . . . . . . . . . . 136
8.2.5 Host interface and cables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
8.3 Network connectivity planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
8.3.1 Hardware management console network access . . . . . . . . . . . . . . . . . . . . . . . . 140
8.3.2 DS CLI console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
8.3.3 Remote support connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
8.3.4 Remote power control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
8.3.5 Storage area network connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
8.4 Remote mirror and copy connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
8.5 Disk capacity considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
8.5.1 Disk sparing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
8.5.2 Disk capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
8.5.3 Fibre Channel ATA (FATA) disk considerations . . . . . . . . . . . . . . . . . . . . . . . 144
8.6 Planning for growth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144

Chapter 9. DS HMC planning and setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145


9.1 DS HMC — Hardware Management Console overview . . . . . . . . . . . . . . . . . . . . . . . 146
9.2 DS HMC software components and communication. . . . . . . . . . . . . . . . . . . . . . . . . . 147
9.2.1 Components of the DS HMC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
9.2.2 Logical flow of communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
9.2.3 DS Storage Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
9.2.4 Command-Line Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
9.2.5 DS Open Application Programming Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
9.2.6 Using the DS GUI on the HMC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
9.3 Typical DS HMC environment setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
9.4 Planning and setup of the DS HMC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
9.4.1 Using the DS SM front end . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
9.4.2 Using the DS CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
9.4.3 Using the DS Open API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
9.4.4 Hardware and software setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
9.4.5 Activation of Advanced Function licenses. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
9.4.6 Microcode upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
9.4.7 Time synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
9.4.8 Monitoring with the DS HMC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
9.4.9 Call Home and Remote support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
9.5 User management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
9.5.1 User management using the DS CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
9.5.2 User management using the DS GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161


9.6 External DS HMC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162


9.6.1 External DS HMC advantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
9.6.2 Configuring the DS CLI to use a second HMC . . . . . . . . . . . . . . . . . . . . . . . . . . 163
9.6.3 Configuring the DS Storage Manager to use a second HMC . . . . . . . . . . . . . . . 164

Chapter 10. Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167


10.1 DS8000 hardware — performance characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . 168
10.1.1 Fibre Channel switched disk interconnection at the back end . . . . . . . . . . . . . 168
10.1.2 Fibre Channel device adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
10.1.3 Four-port host adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
10.1.4 POWER5+ — Heart of the DS8000 dual cluster design . . . . . . . . . . . . . . . . . . 171
10.1.5 Vertical growth and scalability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
10.2 Software performance enhancements — synergy items. . . . . . . . . . . . . . . . . . . . . . 174
10.2.1 End to End I/O Priority — synergy with System p AIX and DB2 . . . . . . . . . . . . 175
10.2.2 Cooperative Caching — synergy with System p AIX and DB2 . . . . . . . . . . . . . 175
10.2.3 Long Busy Wait Host Tolerance — synergy with System p AIX . . . . . . . . . . . . 175
10.2.4 HACMP-XD Extensions — synergy with System p AIX . . . . . . . . . . . . . . . . . . 175
10.3 Performance and sizing considerations for open systems . . . . . . . . . . . . . . . . . . . . 176
10.3.1 Workload characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
10.3.2 Cache size considerations for open systems . . . . . . . . . . . . . . . . . . . . . . . . . . 176
10.3.3 Data placement in the DS8000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
10.3.4 Disk drive characteristics — capacity, speed and type . . . . . . . . . . . . . . . . . . . 177
10.3.5 LVM striping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
10.3.6 Determining the number of connections between the host and DS8000 . . . . . 179
10.3.7 Determining the number of paths to a LUN. . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
10.3.8 Dynamic I/O load-balancing — Subsystem Device Driver (SDD) . . . . . . . . 179
10.3.9 Determining where to attach the host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
10.4 Performance and sizing considerations for System z . . . . . . . . . . . . . . . . . . . . . . . . 180
10.4.1 Host connections to System z servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
10.4.2 Sizing the DS8000 to replace older disk models. . . . . . . . . . . . . . . . . . . . . . . . 181
10.4.3 DS8000 processor memory size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
10.4.4 Channels consolidation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
10.4.5 Disk drive characteristics — capacity, speed and type . . . . . . . . . . . . . . . . . 183
10.4.6 Ranks and extent pools configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
10.4.7 Parallel Access Volume — PAV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
10.4.8 z/OS Workload Manager — Dynamic PAV tuning . . . . . . . . . . . . . . . . . . . . . . 190
10.4.9 PAV in z/VM environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
10.4.10 Multiple Allegiance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
10.4.11 HyperPAV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
10.4.12 I/O Priority queueing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195

Chapter 11. Features and license keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197


11.1 DS8000 licensed functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
11.2 Activation of licensed functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
11.2.1 Obtaining DS8000 machine information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
11.2.2 Obtaining activation codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
11.2.3 Applying activation codes using the GUI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
11.2.4 Applying activation codes using the DS CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
11.3 Licensed scope considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
11.3.1 Why you get a choice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
11.3.2 Using a feature for which you are not licensed . . . . . . . . . . . . . . . . . . . . . . . . . 211
11.3.3 Changing the scope to All . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
11.3.4 Changing the scope from All to FB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213


11.3.5 Applying insufficient license feature key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213


11.3.6 Calculating how much capacity is used for CKD or FB. . . . . . . . . . . . . . . . . . . 214

Part 3. Storage Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217

Chapter 12. Configuration flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219


12.1 Configuration work sheets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
12.2 Configuration flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220

Chapter 13. Configuration with DS Storage Manager GUI . . . . . . . . . . . . . . . . . . . . . 223


13.1 DS Storage Manager — Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
13.2 Logical configuration process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
13.3 Real-time manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
13.4 Simulated manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
13.4.1 Import the configuration from the Storage HMC . . . . . . . . . . . . . . . . . . . . . . . . 228
13.4.2 Import the hardware configuration from an eConfig file . . . . . . . . . . . . . . . . . . 236
13.4.3 Manually enter the configuration details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
13.4.4 Import the hardware configuration from an XML file . . . . . . . . . . . . . . . . . . . . 239
13.5 Examples of configuring DS8000 storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
13.5.1 Configure I/O ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
13.5.2 Configure logical host systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
13.5.3 Create arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
13.5.4 Create ranks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
13.5.5 Create extent pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
13.5.6 Create FB volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
13.5.7 Create volume groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
13.5.8 Create LCUs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
13.5.9 Creating CKD volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261

Chapter 14. Configuration with the Command-Line Interface . . . . . . . . . . . . . . . . 267


14.1 DS Command-Line Interface — overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
14.2 Configuring the I/O ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
14.3 Configuring the DS8000 storage for FB volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
14.3.1 Create array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
14.3.2 Create ranks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
14.3.3 Create extent pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
14.3.4 Creating FB volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
14.3.5 Creating volume groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
14.3.6 Creating host connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
14.3.7 Mapping open systems hosts disks to storage unit volumes . . . . . . . . . . . . . . 278
14.4 Configuring the DS8000 storage for CKD volumes. . . . . . . . . . . . . . . . . . . . . . . . . . 280
14.4.1 Create array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
14.4.2 Ranks and extent pool creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
14.4.3 Logical control unit creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
14.4.4 Create CKD volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
14.5 Scripting the DS CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
14.5.1 Single command mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
14.5.2 Script mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284

Part 4. Host considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285

Chapter 15. Open systems considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287


15.1 General considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
15.1.1 Getting up to date information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288


15.1.2 Boot support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290


15.1.3 Additional supported configurations (RPQ). . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
15.1.4 Multipathing support — Subsystem Device Driver (SDD). . . . . . . . . . . . . . . . . 290
15.2 Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
15.2.1 HBA and operating system settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
15.2.2 SDD for Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
15.2.3 Boot support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
15.2.4 Windows Server 2003 VDS support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
15.3 AIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
15.3.1 Finding the World Wide Port Names. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
15.3.2 AIX multipath support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
15.3.3 SDD for AIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
15.3.4 AIX Multipath I/O (MPIO) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
15.3.5 LVM configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
15.3.6 AIX access methods for I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
15.3.7 Boot device support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
15.4 Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
15.4.1 Support issues that distinguish Linux from other operating systems . . . . . . . . 307
15.4.2 Reference material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
15.4.3 Important Linux issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
15.4.4 Troubleshooting and monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
15.5 OpenVMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
15.5.1 FC port configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
15.5.2 Volume configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
15.5.3 Command Console LUN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
15.5.4 OpenVMS volume shadowing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
15.6 VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
15.6.1 What is new in VMware ESX Server 2.5. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
15.6.2 VMware disk architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
15.6.3 VMware setup and configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
15.7 Sun Solaris . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
15.7.1 Locating the WWPNs of your HBAs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
15.7.2 Solaris attachment to DS8000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
15.7.3 Multipathing in Solaris . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
15.8 HP-UX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
15.8.1 Available documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
15.8.2 DS8000-specific software depots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
15.8.3 Configuring the DS8000 on an HP-UX host . . . . . . . . . . . . . . . . . . . . . . . . . . 330
15.8.4 Multipathing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331

Chapter 16. System z considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333


16.1 Connectivity considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
16.2 Operating systems prerequisites and enhancements . . . . . . . . . . . . . . . . . . . . . . . . 334
16.3 z/OS considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
16.3.1 z/OS program enhancements (SPEs). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
16.3.2 Parallel Access Volumes (PAV) definition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
16.3.3 HyperPAV — z/OS support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
16.4 z/VM considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
16.4.1 Connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
16.4.2 Supported DASD types and LUNs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
16.4.3 PAV and HyperPAV — z/VM support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
16.4.4 MIH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
16.5 VSE/ESA and z/VSE considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341


Chapter 17. System i considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343


17.1 Supported environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
17.1.1 Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
17.1.2 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
17.2 Logical volume sizes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
17.3 Protected versus unprotected volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
17.3.1 Changing LUN protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
17.4 Adding volumes to System i configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
17.4.1 Using 5250 interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
17.4.2 Adding volumes to an Independent Auxiliary Storage Pool . . . . . . . . . . . . . . . 348
17.5 Multipath . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
17.5.1 Avoiding single points of failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
17.5.2 Configuring multipath . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
17.5.3 Adding multipath volumes to System i using 5250 interface. . . . . . . . . . . . . . . 358
17.5.4 Adding multipath volumes to System i using System i Navigator . . . . . . . . . . . 359
17.5.5 Managing multipath volumes using System i Navigator . . . . . . . . . . . . . . . . . . 360
17.5.6 Multipath rules for multiple System i hosts or partitions . . . . . . . . . . . . . . . . . . 363
17.5.7 Changing from single path to multipath. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
17.6 Sizing guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
17.6.1 Planning for arrays and DDMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
17.6.2 Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
17.6.3 Number of System i Fibre Channel adapters . . . . . . . . . . . . . . . . . . . . . . . . . . 365
17.6.4 Size and number of LUNs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
17.6.5 Recommended number of ranks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
17.6.6 Sharing ranks between System i and other servers . . . . . . . . . . . . . . . . . . . . . 366
17.6.7 Connecting via SAN switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
17.7 Migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
17.7.1 OS/400 mirroring. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
17.7.2 Metro Mirror and Global Copy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
17.7.3 OS/400 data migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
17.8 Boot from SAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
17.8.1 Boot from SAN and cloning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
17.8.2 Why consider cloning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
17.9 AIX on IBM System i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
17.10 Linux on IBM System i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371

Part 5. Maintenance and upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373

Chapter 18. Licensed machine code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375


18.1 How new microcode is released . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
18.2 Installation process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
18.3 Concurrent and non-concurrent updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
18.4 HMC code updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
18.5 Host adapter firmware updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
18.6 Loading the code bundle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
18.6.1 Code update schedule example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
18.7 Post-installation activities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
18.8 Planning and application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379

Chapter 19. Monitoring with SNMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381


19.1 SNMP overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
19.1.1 SNMP agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
19.1.2 SNMP manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
19.1.3 SNMP trap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382


19.1.4 SNMP communication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383


19.1.5 Generic SNMP security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
19.1.6 Message Information Base (MIB) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
19.1.7 SNMP trap request . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
19.1.8 DS8000 SNMP configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
19.2 SNMP notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
19.2.1 Serviceable event using specific trap 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
19.2.2 Copy Services event traps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
19.3 SNMP configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390

Chapter 20. Remote support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391


20.1 Call Home and remote support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
20.1.1 Call Home for service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
20.1.2 Remote support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
20.2 Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
20.2.1 Dial-up connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
20.2.2 Secure high-speed connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
20.2.3 Establish a remote support connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
20.2.4 Example of connection to IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
20.2.5 Support authorization via remote support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
20.3 Optional firewall setup guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
20.4 Additional remote support options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
20.4.1 VPN implementation using an external HMC . . . . . . . . . . . . . . . . . . . . . . . . . . 395
20.4.2 Data offload using File Transfer Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396

Chapter 21. Capacity upgrades and CoD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397


21.1 Installing capacity upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
21.1.1 Installation order of upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
21.1.2 Checking how much capacity is installed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
21.2 Using Capacity on Demand (CoD) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
21.2.1 What is Capacity on Demand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
21.2.2 How you can tell if a DS8000 has CoD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
21.2.3 Using the CoD storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403

Appendix A. Data migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405


Data migration in open systems environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
Migrating with basic copy commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
Migrating using volume management software. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
Migrating using backup and restore methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
Data migration in System z environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
Data migration based on physical volume migration. . . . . . . . . . . . . . . . . . . . . . . . . . . 415
Data migration based on logical data set migration . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
Combination of physical and logical data migration . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
Copy Services based migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
IBM Migration Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417

Appendix B. Tools and service offerings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419


Capacity Magic. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
Disk Magic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
IBM TotalStorage Productivity Center for Disk. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
Disk Storage Configuration Migrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
IBM Global Technology Services — service offerings . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424


Appendix C. Project plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425


Project plan skeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426

Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429


IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
Other publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
How to get IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433


Figures

1-1 DS8000 - Base frame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5


2-1 Naming convention for Turbo models 931, 932, 9B2 and older 921, 922 and 9A2 . . 16
2-2 DS8100 base frame with covers removed (left) and with Model 92E (right) . . . . . . . 18
2-3 Maximum DS8100 configuration — 931 base unit and 92E expansion. . . . . . . . . . . 19
2-4 DS8300 Turbo Model 932/9B2 base frame rear views — with and without covers . . 20
2-5 DS8300 maximum configuration — base and two expansion frames . . . . . . . . . . . . 21
2-6 DS8300 Turbo Model 932 — maximum configuration with 1024 disk drives. . . . . . . 22
2-7 2-way model components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2-8 4-way model components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3-1 Logical partition concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3-2 DS8300 Model 9B2 - LPAR and storage facility image (SFI) . . . . . . . . . . . . . . . . . . 30
3-3 DS8300 LPAR resource allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3-4 SFI resource allocation in the processor complexes of the DS8300 . . . . . . . . . . . . . 32
3-5 DS8300 example configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3-6 LPAR protection in IBM POWER Hypervisor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3-7 DS8300 storage facility images and Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . 35
3-8 Example of storage facility images in the DS8300. . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3-9 DS8300 with and without LPAR vs. ESS800— addressing capabilities . . . . . . . . . . 38
4-1 DS8000 frame possibilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4-2 Rack operator panel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4-3 DS8000 series architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4-4 Cache lists of the SARC algorithm for random and sequential data . . . . . . . . . . . . . 45
4-5 Processor complex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4-6 DS8000 RIO-G port layout. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4-7 I/O enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4-8 DS8000 device adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4-9 DS8000 disk enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4-10 Industry standard FC-AL disk enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4-11 DS8000 disk enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4-12 Disk enclosure switched connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4-13 DS8000 switched disk expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4-14 DS8000 switched loop layout. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4-15 Array across loop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4-19 DS8000 Fibre Channel/FICON host adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
5-1 Single image mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
5-2 DS8300 Turbo Model 9B2 - dual image mode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5-3 Normal data flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5-4 Server 0 failing over its function to server 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
5-5 Single pathed host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
5-6 Dual pathed host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
5-7 Switched disk connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
6-1 Storage Facility virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
6-2 Physical layer as the base for virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
6-3 Array site . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
6-4 Creation of an array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
6-5 Forming an FB rank with 1 GB extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
6-6 Extent pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
6-7 Allocation of a CKD logical volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98


6-8 Creation of an FB LUN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99


6-9 Grouping of volumes in LSSs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
6-10 Logical storage subsystems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
6-11 Host attachments and volume groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6-12 Virtualization hierarchy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
6-13 Optimal placement of data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
7-1 FlashCopy concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
7-2 Incremental FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
7-3 Data Set FlashCopy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
7-4 Multiple Relationship FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
7-5 Consistency Group FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
7-6 Establish FlashCopy on existing Metro Mirror or Global Copy primary . . . . . . . . . . 115
7-7 Metro Mirror basic operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
7-8 Global Copy basic operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
7-9 Global Mirror basic operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
7-10 How Global Mirror works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
7-11 Metro/Global Mirror elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
7-12 Metro/Global Mirror overview diagram. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
7-13 z/OS Global Mirror basic operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
7-14 z/OS Metro/Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
7-15 DS8000 Copy Services network components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
8-1 Floor tile cable cutout for DS8000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
8-2 Service clearance requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
8-3 DS8000 DS HMC Remote support connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
9-1 Rear view of the DS HMC and a pair of Ethernet switches . . . . . . . . . . . . . . . . . . . 146
9-2 DS HMC environment - Logical flow of communication. . . . . . . . . . . . . . . . . . . . . . 147
9-3 HMC Fluxbox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
9-4 Management console Welcome panel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
9-5 Typical DS HMC environment setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
9-6 Entry screen of DS SM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
9-7 User administration: Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
9-8 User administration: Adding a user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
9-9 Adding a storage complex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
9-10 Specifying an external HMC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
10-1 Switched FC-AL disk subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
10-2 High availability and increased bandwidth connecting both DA to two logical loops 169
10-3 Fibre Channel device adapter with 2 Gbps ports. . . . . . . . . . . . . . . . . . . . . . . . . . . 170
10-4 Host adapter with 4 Fibre Channel ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
10-5 Standard System p5 2-way SMP processor complexes for DS8100 Model 931 . . . 172
10-6 DS8100-931 with four I/O enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
10-7 Fibre Channel switched backend connect to processor complexes - partial view . . 173
10-8 DS8100 to DS8300 scale performance linearly - view without disk subsystems. . . 174
10-9 Spreading data across ranks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
10-10 Dual port host attachment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
10-11 DS8100 frontend connectivity example - partial view . . . . . . . . . . . . . . . . . . . . . . . 181
10-12 Extent pool affinity to processor complex with one extent pool for each rank . . . . . 185
10-13 Extent pool affinity to processor complex with pooled ranks in two extent pools. . . 186
10-14 Mix of extent pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
10-15 Traditional z/OS behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
10-16 z/OS behavior with PAV. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
10-17 WLM assignment of alias addresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
10-18 Dynamic PAVs in a sysplex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
10-19 z/VM support of PAV volumes dedicated to a single guest virtual machine . . . . . . 191


10-20 Linkable minidisks for guests that do exploit PAV . . . . . . . . . . . . . . . . . . . . . . . . . . 192


10-21 Parallel I/O capability with Multiple Allegiance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
10-22 Parallel Access Volumes — basic operation characteristics . . . . . . . . . . . . . . . . . . 193
10-23 HyperPAV — basic operation characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
10-24 I/O Priority Queueing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
11-1 DS8000 Storage Manager Sign On panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
11-2 DS8000 Storage units panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
11-3 DS8000 Storage Unit Properties panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
11-4 IBM DSFA Web page. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
11-5 DS8000 DSFA machine information entry page . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
11-6 DSFA View machine summary page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
11-7 DSFA Manage activations page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
11-8 DSFA View activation codes page. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
11-9 DS8000 Storage Images select panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
11-10 DS8000 activation code input panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
11-11 DS8000 Activation Code input panel — Turbo models . . . . . . . . . . . . . . . . . . . . . . 208
11-12 DS8000 import key file panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
11-13 Applied licenses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
13-1 Entering the URL using the TCP/IP address for the DS HMC . . . . . . . . . . . . . . . . . 224
13-2 DS Storage Manager Welcome panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
13-3 Example of the Ranks panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
13-4 Long-running task monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
13-5 Manage configuration files: Simulated panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
13-6 Multiple open configuration files warning message . . . . . . . . . . . . . . . . . . . . . . . . . 229
13-7 New configuration file created . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
13-8 New configuration file open and Default closed. . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
13-9 Storage unit creation choices panel with pull-down . . . . . . . . . . . . . . . . . . . . . . . . . 230
13-10 Import Storage Unit panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
13-11 Import Storage Unit panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
13-12 General storage unit info panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
13-13 Import Storage Unit verification panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
13-14 Import long running task panel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
13-15 Import Storage Unit finished panel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
13-16 Long running task summary: Simulated panel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
13-17 Storage unit in Storage units: Simulated panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
13-18 Manage hardware, Storage Images. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
13-19 Configure storage, arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
13-20 Arrays displayed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
13-21 Additional information available in the configuration file . . . . . . . . . . . . . . . . . . . . . 236
13-22 Select import from econfig file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
13-23 econfig path selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
13-24 Define storage complex properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
13-25 Configure I/O ports. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
13-26 Select port format. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
13-27 Query Host WWPNs panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
13-28 Host systems panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
13-29 General host information panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
13-30 Define host ports panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
13-31 Define host WWPN panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
13-32 Specify Connection panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
13-33 Verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
13-34 Arrays panel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
13-35 Definition method panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245


13-36 Array configuration (Custom) panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246


13-37 Add arrays to ranks panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
13-38 Verification panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
13-39 Ranks panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
13-40 Select array for rank panel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
13-41 Define rank properties panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
13-42 Select extent pool panel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
13-43 Verification panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
13-44 Extent pools panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
13-45 Definition method panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
13-46 Define properties panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
13-47 Create custom extent pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
13-48 Reserve storage panel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
13-49 Verification panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
13-50 Volumes - Open systems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
13-51 Select extent pool panel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
13-52 Define volume characteristics panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
13-53 Define volume properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
13-54 Create volume nicknames panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
13-55 Verification panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
13-56 Volume groups panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
13-57 Define volume group properties panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
13-58 Select host attachments panel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
13-59 Select volumes for group panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
13-60 Verification panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
13-61 LCUs panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
13-62 Select from available LCUs panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
13-63 Define LCU properties panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
13-64 Verification panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
13-65 Volumes - zSeries panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
13-66 Select extent pool panel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
13-67 Define base volume characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
13-68 Define base volume properties panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
13-69 Create volume nicknames panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
13-70 Define alias assignment panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
13-71 Verification panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
15-1 SDD devices on Windows Device manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
15-2 Disk manager view. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
15-3 Microsoft VDS architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
15-4 VSS installation infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
15-5 VSS VDS example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
15-6 The logical disk structure of ESX Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
15-7 Storage Management Failover Paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
15-8 Virtual Disk device types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
15-9 Device Management by the guest operating system . . . . . . . . . . . . . . . . . . . . . . . . 324
15-10 VERITAS DMP disk view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
17-1 System Service Tools menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
17-2 Work with Disk Units menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
17-3 Work with Disk Configuration menu. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
17-4 Specify ASPs to Add Units to. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
17-5 Confirm Add Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
17-6 System i Navigator initial panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
17-7 System i Navigator Signon to System i window. . . . . . . . . . . . . . . . . . . . . . . . . . . . 349


17-8 System i Navigator Disk Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350


17-9 SST Signon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
17-10 Defining a new disk pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
17-11 New disk pool - welcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
17-12 Add disks to disk pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
17-13 Choose the disks to add to the disk pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
17-14 Confirm disks to be added to disk pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
17-15 New disk pool summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
17-16 New disk pool status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
17-17 Disks added successfully to disk pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
17-18 New disk pool shown on System i Navigator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
17-19 New logical volumes shown on System i Navigator. . . . . . . . . . . . . . . . . . . . . . . . . 355
17-20 Single points of failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
17-21 Multipath removes single points of failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
17-22 Example of multipath with System i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
17-23 Adding multipath volumes to an ASP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
17-24 Confirm Add Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
17-25 Adding a multipath volume. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
17-26 Selecting properties for a multipath logical unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
17-27 Multipath logical unit properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
17-28 Multipath connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
17-29 Process for sizing external storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
17-30 Using Metro Mirror to migrate from ESS to DS8000 . . . . . . . . . . . . . . . . . . . . . . . . 368
17-31 Ending allocation for existing disk units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
17-32 Moving data from units marked *ENDALC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
19-1 SNMP architecture and communication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
20-1 DS HMC Call Home flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
20-2 VPN establish process flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
20-3 DS8000 DS HMC remote support connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
20-4 DS8000 HMC alternate remote support connection . . . . . . . . . . . . . . . . . . . . . . . . 396
21-1 DS8000 disk enclosure assembly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
21-2 CoD machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
A-1 Preparing to mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
A-2 Convert disk to dynamic. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
A-3 Accessing Add Mirror panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
A-4 Selecting disks in Add Mirror panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
A-5 Synchronization process running. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
A-6 Synchronization process finished . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
A-7 Break Mirrored Volume option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
A-8 Break Mirrored Volume finished. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
A-9 Remove Mirror panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
A-10 Remove mirror finished . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
B-1 Configuration screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
B-2 Capacity Magic output report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
B-3 Disk Magic Interfaces and Open Disk input screens . . . . . . . . . . . . . . . . . . . . . . . . 422
B-4 Disk storage configuration migrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
C-1 Page 1 of project plan skeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
C-2 Page 2 of project plan skeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427


Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.


Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
Redbooks (logo)™, eServer™, iSeries™, i5/OS®, pSeries®, xSeries®, z/OS®, z/VM®, z/VSE™, zSeries®,
AIX 5L™, AIX®, AS/400®, BladeCenter®, CICS®, DB2®, DFSMSdss™, DFSMShsm™, DFSORT™, DS4000™,
DS6000™, DS8000™, Enterprise Storage Server®, ESCON®, FlashCopy®, FICON®, Geographically Dispersed
Parallel Sysplex™, GDPS®, HACMP™, IBM®, IMS™, Lotus®, Multiprise®, MVS™, OS/2®, OS/390®, OS/400®,
Parallel Sysplex®, Power PC®, PowerPC®, Predictive Failure Analysis®, POWER™, Redbooks™,
Resource Link™, RMF™, RS/6000®, S/390®, System i™, System i5™, System p™, System x™, System z™,
System Storage™, System Storage DS™, Tivoli®, TotalStorage®, VM/ESA®, VSE/ESA™, WebSphere®, 1-2-3®

The following terms are trademarks of other companies:

Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other
countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.

Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel
SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its
subsidiaries in the United States and other countries.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.


Preface

This IBM® Redbook describes the concepts, architecture and implementation of the IBM
System Storage™ DS8000 series of disk storage subsystems.

This book provides reference information that will help you plan the installation and
configuration of the DS8000, and it also discusses the architecture and components. It will
help you plan, design, and implement a new installation, or migrate from an existing one, and
it includes hints and tips derived from user experience for efficient installation and use.

The DS8000 is a cabinet-mounted, self-contained disk storage subsystem that is a follow-on
development of the successful IBM TotalStorage Enterprise Storage Server (ESS), with new
functions related to storage virtualization and flexibility. The DS8000 is designed for the
higher demands of data storage and data availability that most organizations face today.

The DS8000 series benefits from IBM POWER5+™ processor technology, with a dual two-way
processor complex implementation in the DS8100 Turbo Model 931 and a dual four-way
processor complex implementation in the DS8300 Turbo Models 932 and 9B2. This increased
power and extended connectivity, with up to 128 Fibre Channel/FICON ports or 64 ESCON
ports for host connections, makes the DS8000 suitable for multiple server environments in
both open systems and System z™ environments. Its switched Fibre Channel architecture,
dual processor complex implementation, high-availability design, and advanced point-in-time
copy and remote mirror and copy functions make the DS8000 suitable for mission-critical
businesses.

To read about the DS8000 point-in-time copy function (FlashCopy) and the set of remote
mirror and copy products available with the DS8000 series (Metro Mirror, Global Copy, Global
Mirror, z/OS Global Mirror, and Metro/Global Mirror), refer to the redbooks IBM System
Storage DS8000 Series: Copy Services in Open Environments, SG24-6788, and IBM System
Storage DS8000 Series: Copy Services with System z servers, SG24-6787.

The team that wrote this redbook


This redbook was produced by a team of specialists from around the world working for the
International Technical Support Organization, San Jose Center, in IBM Mainz, Germany.

Gustavo Castets is a Certified IBM IT Specialist and Project Leader working for the IBM
International Technical Support Organization, San Jose Center. While in San Jose, from 2001
to 2004, Gustavo co-authored twelve redbooks and taught IBM classes worldwide in the area
of disk and tape storage systems. Before joining the ITSO, Gustavo was based in Buenos
Aires where he worked in different technical sales support positions for more than 22 years.
Today, in addition to his work with the ITSO, Gustavo works for IBM Global Delivery in
Argentina as a Storage Specialist, giving technical support to accounts from the US and Europe.

Bert Dufrasne is a Certified Consulting IT Specialist and Project Leader for IBM
TotalStorage® and System Storage products at the International Technical Support
Organization, San Jose Center. He has worked at IBM in various IT areas. Before joining the
ITSO, he worked for IBM Global Services as an Application Architect. He holds a degree in
Electrical Engineering.

Stephen Baird is an IT Specialist with IBM Global Services. He joined IBM in 1999, working
in open systems server performance management and capacity planning. Since 2002, he has
worked in Storage Area Network and disk storage subsystem support and has gained
experience with Brocade, Cisco, and McData Fibre Channel switches and directors, as well as
DS8000, DS4000, and ESS series hardware. He holds a degree in Mechanical Engineering
from the Massachusetts Institute of Technology, in Cambridge, MA.

Werner Bauer is a certified consulting IT specialist in Germany. He has 26 years of
experience in storage software and hardware, as well as with S/390® and z/OS. His areas of
expertise include disaster recovery solutions in enterprises utilizing the unique capabilities
and features of the IBM disk storage servers, ESS and DS6000/DS8000. He has written
extensively in various redbooks including, Transactional VSAM, DS6000™ / DS8000
Concepts and Architecture, and DS6000 / DS8000 Copy Services. He holds a degree in
Economics from the University of Heidelberg and in Mechanical Engineering from FH
Heilbronn.

Denise Brown is a Software Engineer at the IBM Open Systems Level Test Lab in Tucson,
Arizona. She has been working with IBM Storage for the past four years, with experience in
storage software and hardware in an open systems environment. Her current areas of focus
include Copy Services solutions in Metro/Global Mirror and Incremental Re-synchronization
for the DS8000. She holds a degree in Engineering Mathematics.

Jana Jamsek is an IT specialist in IBM Slovenia. She works in Storage Advanced Technical
Support for Europe as a specialist for IBM Storage Systems and i5/OS systems. Jana has
eight years of experience in the System i and AS/400 area, and six years of experience in
Storage. She holds a master's degree in computer science and a degree in mathematics from the
University of Ljubljana, Slovenia. She was among the authors of the IBM Redpaper, The LTO
Ultrium Primer for IBM eServer iSeries Customers, the IBM Redbook, iSeries in Storage Area
Networks, the IBM Redbook, iSeries and IBM TotalStorage: A Guide to Implementing
External Disk on IBM eServer i5, and the Redpaper, Multipath for IBM eServer iSeries.

Wenzel Kalabza is an IT specialist in IBM Germany. He started in 1998 as a field quality
engineer for IBM hard disk drives and was the technical lead in HDD robustness and rotational
vibration testing. He has three years of experience supporting IBM storage products. As a
member of the DASD EMEA back office, he initially supported the ESS, and is now a product
field engineer for the DS6000, specializing in Copy Services.

Peter Klee works as an IT Specialist with ATS EMEA in Mainz, Germany. He has 10 years of
experience in data centers. He worked for a large bank in Germany, where he was responsible
for the architecture and implementation of the disk storage environment using EMC
Symmetrix, HDS Lightning, and the ESS Model 800. He joined IBM in 2003, working for
Strategic Outsourcing, and since June 2004 he has been working in the ATS System Storage
team in Mainz. His main focus is Copy Services in the open systems environment with the
DS8000 and DS6000, especially Global Mirror and Metro/Global Mirror.

Markus Oscheka is an IT Specialist for Proof of Concepts and Benchmarks at the ATS
Customer Solutions team in Mainz, Germany. His areas of expertise include setup and
demonstration of IBM System Storage products and solutions in various environments
including AIX, Linux, Windows, HP-UX and Solaris. He has worked at IBM for five years. He
has performed many Proofs of Concept with Copy Services on the DS6000/DS8000, as well as
performance benchmarks with the DS4000/DS6000/DS8000.

Ying Thia is an Advisory IT Specialist based in IBM Singapore, providing storage technical
support. She has 14 years of experience in the S/390 and storage environment. Before
joining the IBM Singapore Storage team, she worked in IBM Global Services where her
responsibilities included technical support and services delivery. Her areas of expertise include
IBM high-end disk and tape storage subsystems and disaster recovery solutions using the
capabilities and features of IBM storage products. She co-authored a previous redbook and
workshop for zSeries copy services.

Robert Tondini is a Senior IT specialist based in IBM Australia, providing storage technical
support. He has 12 years of experience in the S/390 and storage environment. His areas of
expertise include IBM high-end disk and tape storage subsystems and disaster recovery
solutions using the capabilities and features of IBM storage products. He co-authored several
redbooks and workshops for zSeries copy services.

The team: Gustavo, Robert, Wenzel, Jana, Peter, Markus, Denise, Werner, Ying, Stephen, Bertrand

Special thanks to:


John Bynum and Bob Moon, Technical Support Marketing Leads
IBM US

We want to thank Michael Eggloff and Peter Klee for hosting us at the European Storage
Competency Center in Mainz, Germany. They were able to supply us with the needed
hardware, conference room, and all of the many things needed to run a successful residency.

Günter Schmitt, Uwe Schweikhard, Edgar Strubel (ATS - IBM Mainz) for their help in
reserving and preparing the equipment we used.

Monika Baier, Susanne Balzer, Marion Barlen, Marion Hartmann, Andrea Witkowski, Gertrud
Kramer - IBM Mainz, for their help with administrative tasks.

Many thanks to the authors of the previous edition of this redbook:


Peter Kimmel, Jukka Myyrlainen, Lu Nguyen, Gero Schmidt, Shin Takata, Anthony
Vandewerdt, Bjoern Wesselbaum

We also would like to thank:

Selwyn Dickey, Timothy Klubertanz, Vess Natchev, James McCord and Chuck Stupca
IBM Rochester - System i Client Technology Center

Guido Ihlein, Marcus Dupuis, Wilfried Kleemann


IBM Germany

Bob Bartfai, James Davison, Craig Gordon, Lee La Frese, Jennifer Mason, Alan McClure,
Rosemary McCutchen, Rachel Mikolajewski, Christopher o’Toole, Bill Raywinkle, Richard
Ripberger, Henry Sautter, Jim Sedgwick, Gail Spear, David V Valverde, Steve Van Gundy,
Leann Vaterlaus, Sonny Williams, Steve Wilkins.

Brian Sherman
IBM Canada

Nick Clayton
IBM UK

Many thanks to Emma Jacobs and the ITSO editor.

Become a published author


Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with
specific products or solutions, while getting hands-on experience with leading-edge
technologies. You'll team with IBM technical professionals, Business Partners and clients.

Your efforts will help increase product acceptance and client satisfaction. As a bonus, you'll
develop a network of contacts in IBM development labs, and increase your productivity and
marketability.

Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Comments welcome
Your comments are important to us!

We want our Redbooks to be as helpful as possible. Send us your comments about this or
other Redbooks in one of the following ways:
򐂰 Use the online Contact us review redbook form found at:
ibm.com/redbooks
򐂰 Send your comments in an email to:
redbook@us.ibm.com
򐂰 Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400


Summary of changes

This section describes the technical changes made in this edition of the book and in previous
editions. This edition may also include minor corrections and editorial changes that are not
identified.

Summary of Changes
for SG24-6786-02
for IBM System Storage DS8000 Series: Architecture and Implementation
as created or updated on November 14, 2006.

October 2006, Third Edition


This revision reflects the addition, deletion, or modification of new and changed information
described below.

New information
򐂰 Turbo models 931/932/9B2 with POWER5+™ processor
򐂰 500 GB - 7200 rpm Fibre Channel ATA (FATA) disk drives
򐂰 4 Gb Fibre Channel/FICON host adapter connectivity option
򐂰 IBM TotalStorage Productivity Center (TPC) for Replication V3.1 support
򐂰 DS8000 series enhancements: End-to-End I/O Priorities; Cooperative Caching; Long
Busy Wait Host Tolerance; Audit Logging; z/OS Global Mirror suspend instead of long
busy option
򐂰 Metro/Global Mirror reference information
򐂰 Earthquake resistance kit
򐂰 Added information on spare creation
򐂰 System i™ related information
򐂰 HyperPAV
򐂰 242x and 239x Enterprise Choice length of warranty machine types
򐂰 Added information on RPQ options for larger capacity in 932 Turbo Models

Changed information
The information, examples, and screen captures presented in this book were updated to
reflect the latest available microcode bundle.
򐂰 Expanded sections describing how PAV works
򐂰 Expanded z/VM support information and z/VM support of PAVs
򐂰 Merged contents from the publication IBM TotalStorage DS8000 series: Concepts and
Architecture, SG24-6452


Part 1. Concepts and Architecture
In this part we provide an overview of the IBM System Storage DS8000 series concepts and architecture.
The topics covered include:
򐂰 Introduction to the DS8000 series
򐂰 Model overview
򐂰 Storage system LPARs (logical partitions)
򐂰 Hardware Components
򐂰 RAS - Reliability, Availability, Serviceability
򐂰 Virtualization concepts
򐂰 Copy Services


Chapter 1. Introduction to the DS8000 series


This chapter introduces the features, functions, and benefits of the IBM System Storage
DS8000 series of disk storage subsystems. The topics covered in this chapter include:
򐂰 The DS8000, a member of the System Storage DS family
򐂰 Introduction to the DS8000 series features and functions
򐂰 Positioning the DS8000 series
򐂰 Performance features


1.1 The DS8000, a member of the System Storage DS family


IBM has a wide range of product offerings that are based on open standards and that share a
common set of tools, interfaces, and innovative features. The System Storage DS family is
designed to offer high availability, multiplatform support, and simplified management tools, all
to help you cost effectively adjust to an on demand world. The DS8000 series —DS8100
Turbo Model 931 and DS8300 Turbo Models 932 and 9B2— gives you the freedom to choose
the right combination of technologies for your current needs and a flexible infrastructure that
can evolve as your needs change.

1.1.1 Infrastructure Simplification


The DS8000 series is designed to set a new standard for on demand storage, offering an
extraordinary opportunity to consolidate existing heterogeneous storage environments,
helping to lower costs, improve management efficiency, and free valuable floor space. The
implementation of storage system Logical Partitions (LPARs) allows the DS8300 Turbo Model
9B2 to run two independent workloads on completely independent virtual storage systems.
This unique feature of the DS8000 series delivers opportunities for new levels of efficiency
and cost effectiveness.

1.1.2 Business Continuity


The DS8000 series is designed for the most demanding, mission-critical environments
requiring extremely high availability, performance, and scalability. The DS8000 series is
designed to avoid single points of failure and provide outstanding availability. With the
advanced Copy Services functions that the DS8000 series integrates, data availability can be
enhanced even further. FlashCopy allows production workloads to continue running
concurrently with data backups. Metro Mirror, Global Copy, Global Mirror, and Metro/Global
Mirror business continuity solutions are designed to provide the advanced functionality and
flexibility needed to tailor a business continuity environment for almost any recovery point or
recovery time objective.

1.1.3 Information Lifecycle Management


The DS8000 series can help improve the management of information according to its
business value—from the moment of its creation to the moment of its disposal.

The policy-based management capabilities built into the IBM System Storage Open Software
Family, IBM DB2 Content Manager and IBM System Storage Archive Manager, can be used
with the DS8000 and are designed to help you automatically preserve critical data, while
preventing deletion of that data before its scheduled expiration.

Furthermore, the availability of Fibre Channel ATA (FATA) drives for the DS8000 gives you the
option to retain frequently accessed or high-value data on Fibre Channel disk drives, and to
archive less valuable information on the less costly FATA disk drives. FATA drives are
discussed later in Section 4.4.2, “Disk enclosures” on page 50.

1.2 Introduction to the DS8000 series features and functions


The IBM System Storage DS8000 series is a high-performance, high-capacity series of disk
storage subsystems. It offers balanced performance and storage capacity that scales linearly
from 1.1 TB up to 192 TB with FC drives or 320 TB with FATA drives, and up to 512 TB with
the Turbo Model 932 when configured with FATA drives (an RPQ must be submitted).
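
As a rough check on where these capacity points come from (assuming a standard maximum of
640 disk drive modules for the series, 1024 disk drive modules with the RPQ, and 1 TB taken
as 1000 GB), the arithmetic works out as follows:

   640 x 300 GB = 192 TB (Fibre Channel drives)
   640 x 500 GB = 320 TB (FATA drives)
   1024 x 500 GB = 512 TB (FATA drives, Turbo Model 932 with RPQ)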


With the DS8300 Turbo Model 9B2, it is possible to create two storage system logical
partitions (LPARs) in a single DS8300. These LPARs divide the system resources equally and
can be used in completely separate storage environments.

The IBM System Storage DS8000 series highlights include:


򐂰 Robust, flexible, enterprise-class, and cost-effective disk storage
򐂰 Exceptionally high system availability for continuous operations
򐂰 Centralized and simplified management
򐂰 IBM POWER5+™ processor technology
򐂰 Capacities from 1.1 TB to 192/320 TB (FC/FATA)
򐂰 Multiple storage system LPARs, for completely separate storage environments
򐂰 Point-in-time copy function (FlashCopy), and remote mirror and copy functions (Metro
Mirror, Global Copy, Global Mirror, Metro/Global Mirror, and z/OS Global Mirror)
򐂰 Smaller footprint; occupies 20% less floor space than the ESS 800 base frame
򐂰 Support for a wide variety and intermix of operating systems: open systems, System i™,
and System z™

Figure 1-1 shows a photo of a DS8000 base frame.

Figure 1-1 DS8000 - Base frame

1.2.1 IBM System Storage DS8000 series Turbo models


The IBM System Storage DS8000 series offers three new models —the IBM System Storage
DS8100 Turbo Model 931, and IBM System Storage DS8300 Turbo Models 932 and 9B2. The
IBM System Storage DS8000 series Turbo Models are designed to deliver similar enterprise
class storage capabilities and functionality as the predecessor DS8000 series models
(DS8100 Model 921, and DS8300 Models 922 and 9A2) with the addition of new features,
larger capacity, and enhanced performance.


1.2.2 Hardware overview


In this section we give a short description of the main hardware components.

The hardware is optimized to provide enhancements in terms of performance, connectivity,
and reliability. From an architectural point of view, the DS8000 series has not changed much
with respect to the fundamental architecture of the predecessor IBM TotalStorage Enterprise
Storage Server (ESS) models. This ensures that the DS8000 can leverage a very stable and
well-proven operating environment, offering the optimum in availability.

The DS8000 series features several models in a new, higher-density footprint than the ESS
Model 800, providing configuration flexibility. The available models are discussed in
Chapter 2, “Model overview” on page 15.

IBM POWER5+™ processor technology


The DS8000 series Turbo models exploit the IBM POWER5+™ technology, which is also the
foundation of the storage system LPARs. The DS8100 Turbo Model 931 offers a dual two-way
processor complex implementation, and the DS8300 Turbo Models 932 and 9B2 offer a dual
four-way processor complex implementation.

Compared to the predecessor DS8000 series models (921/922/9A2) that featured the IBM
POWER5™ processor, the Turbo models with their POWER5+™ processor may enable up to
a 15% performance improvement in I/O operations per second in transaction processing
workload environments.

Internal fabric
DS8000 comes with a high bandwidth, fault tolerant internal interconnection, which is also
used in the IBM System p™ servers, called RIO-2 (Remote I/O). This technology can operate
at speeds up to 1 GHz and offers a 2 GB per second sustained bandwidth per link.

Switched Fibre Channel Arbitrated Loop (FC-AL)


Unlike the ESS, the DS8000 uses switched FC-AL for its disk interconnection. This offers a
point-to-point connection to each drive and adapter, so that there are 4 paths available from
the controllers to each disk drive.

Fibre Channel disk drives


The DS8000 offers a selection of industry standard Fibre Channel disk drives, including
73 GB (10k and 15k rpm), 146 GB (10k and 15k rpm) and 300 GB (10k rpm). The 300 GB
disk drive modules (DDMs) allow a single system to scale up to 192 TB of capacity.

FATA drives
With the introduction of 500 GB (7200 rpm) Fibre Channel ATA (FATA) drives, the DS8000
capacity now scales up to 320 TB. These drives offer a cost-effective option for lower priority
data. See 4.4.4, “Fiber Channel ATA (FATA) disk drives overview” on page 56 for more
information.

Host adapters
The DS8000 series offers enhanced connectivity with four-port Fibre Channel/FICON Host
Adapters. The DS8000 Turbo models offer 4Gb Fibre Channel/FICON host support designed
to offer up to 50% improvement in a single port MB/second throughput performance, helping
to enable cost savings with potential reduction in the number of host ports needed (compared
to 2Gb Fibre Channel and FICON support). The 2Gb Fibre Channel/FICON Host Adapters,
offered in long-wave and shortwave, auto-negotiate to either 2Gb, or 1Gb link speeds. The

6 IBM System Storage DS8000 Series: Architecture and Implementation


Draft Document for Review November 14, 2006 3:49 pm 6786_6452ch_Introduction.fm

4Gb Fibre Channel/FICON Host Adapters, also offered in long-wave and shortwave,
auto-negotiate to either 4Gb, 2Gb, or 1Gb link speeds. This flexibility enables the ability to
exploit the benefits offered by higher performance, 4Gb SAN-based solutions, while also
maintaining compatibility with existing 2Gb infrastructures. This feature continues to provide
individual ports on the adapter which can be configured with Fibre Channel Protocol (FCP) or
FICON. This can help protect your investment in Fibre Channel adapters, and increase your
ability to migrate to new servers. The DS8000 series also offers two-port ESCON adapters.

A DS8100 Turbo Model 931 can support up to a maximum of 16 host adapters. A DS8300
Turbo Model 932 or 9B2 can support up to a maximum of 32 host adapters, which provide up
to 128 Fibre Channel/FICON ports.

IBM System Storage Management Console for the DS8000


The Management Console is the focal point for configuration, copy services management,
and maintenance activities. The Management Console is a dedicated workstation that is
physically located (installed) inside the DS8100 and DS8300 and can pro-actively monitor the
state of your system, notifying you and IBM when service is required. It can also be
connected to your network to enable centralized management of your system using the IBM
System Storage DS Command Line Interface or storage management software utilizing the
IBM System Storage DS Open API.

An external Management Console is available as an optional feature and can be used as a
redundant management console for environments with high-availability requirements.

1.2.3 Storage capacity


The physical capacity for the DS8000 is purchased via disk drive sets. A disk drive set
contains sixteen identical disk drive modules (DDMs), which have the same capacity and the
same revolution per minute (RPM). For additional flexibility, feature conversions are available
to exchange existing disk drive sets.

In the first frame, there is space for a maximum of 128 disk drive modules (DDMs) and each
expansion frame can contain 256 DDMs. With a maximum of 384 DDMs, the DS8100 Turbo
Model 931 using 500GB drives provides up to 192 TB of storage capacity. With a maximum of
640 DDMs, the DS8300 Turbo Models 932 and 9B2 using 500 GB drives provide up to
320 TB of storage capacity. In addition RPQs can be submitted for the Turbo Model 932 that
allow this model to scale up to 512 TB of capacity —if fully configured with 500 GB drives.
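
To make these capacity figures concrete, here is a small Python sketch. It is purely
illustrative (not an IBM sizing tool) and computes raw decimal capacity only, ignoring RAID,
spare, and formatting overhead.

  # Illustrative only: raw (decimal) capacity from DDM counts and DDM sizes.
  def raw_capacity_tb(ddms, ddm_size_gb):
      return ddms * ddm_size_gb / 1000.0

  base_frame_ddms = 128
  expansion_frame_ddms = 256

  ds8100_max_ddms = base_frame_ddms + 1 * expansion_frame_ddms   # 384 DDMs
  ds8300_max_ddms = base_frame_ddms + 2 * expansion_frame_ddms   # 640 DDMs

  print(raw_capacity_tb(ds8100_max_ddms, 500))   # 192.0 TB - DS8100 full of 500 GB FATA DDMs
  print(raw_capacity_tb(ds8300_max_ddms, 500))   # 320.0 TB - DS8300 with two expansion frames
  print(raw_capacity_tb(ds8300_max_ddms, 300))   # 192.0 TB - DS8300 with 300 GB FC DDMs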

The DS8000 can be configured as RAID-5, or RAID-10, or a combination. As a
price/performance leader, RAID-5 offers excellent performance for many customer
applications, while RAID-10 can offer better performance for selected applications.

IBM Standby Capacity on Demand offering for the DS8000


Standby Capacity on Demand (Standby CoD) provides standby on-demand storage for the
DS8000 that allows you to access the extra storage capacity whenever the need arises. With
CoD, IBM installs up to four disk drive sets (64 disk drives) in your DS8000. At any time, you
can logically configure your CoD drives, concurrent with production, and you will automatically
be charged for the capacity.

1.2.4 Supported environments


The DS8000 series offers connectivity support across a broad range of server environments,
including IBM System z™, System p™, System p5™, System i™, System i5™, and System
x™ servers, as well as servers from Sun and Hewlett-Packard, and non-IBM Intel-based
servers.

The operating system support for the DS8000 series is almost the same as for the previous
ESS Model 800; there are over 90 supported platforms. You can refer to the DS8000
interoperability matrix for the most current list of supported platforms. The DS8000
interoperability matrix can be found at:
http://www-03.ibm.com/servers/storage/disk/ds8000/pdf/ds8000-matrix.pdf

This rich support of heterogeneous environments and attachments, along with the flexibility to
easily partition the DS8000 series storage capacity among the attached environments, can
help support storage consolidation requirements and dynamic, changing environments.

1.2.5 Business continuance solutions — Copy Services functions


For IT environments that cannot afford to stop their systems for backups, IBM provides a
fast replication technique that can provide a point-in-time copy of the data in a few seconds or
even less. This function is called FlashCopy and is available on the DS8000 series, as well as
the DS6000 and the ESS.

For data protection and availability needs, the DS8000 provides Metro Mirror, Global Mirror,
Global Copy, and Metro/Global Mirror which are remote mirror and copy functions. These
functions are also available and are fully interoperable with the ESS 800 and 750 models and
the DS6800 series. These functions provide storage mirroring and copying over large
distances for disaster recovery or availability purposes.

Copy Services is discussed in Chapter 7, “Copy Services” on page 107.

FlashCopy
The primary objective of FlashCopy is to very quickly create a point-in-time copy of a source
volume on a target volume. The benefits of FlashCopy are that the point-in-time target copy is
immediately available for use for backups or testing and the source volume is immediately
released so that applications can continue processing, with minimal application downtime.
The target volume can be either a logical or physical copy of the data, with the latter copying
the data as a background process. In a z/OS environment FlashCopy can also operate at a
data set level.
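
The reason the target is usable at once can be illustrated with a small conceptual Python
sketch of copy-on-write behavior. This is not the DS8000 implementation and not the DS CLI;
it only shows the idea that a source block is preserved for the target before it is
overwritten, while unchanged blocks are simply read from the source.

  # Conceptual sketch only: point-in-time copy with deferred (background) data movement.
  class PointInTimeCopy:
      def __init__(self, source):
          self.source = source        # the live (production) volume, a list of blocks
          self.frozen = {}            # blocks preserved for the point-in-time view
          self.copied = set()         # block indexes already preserved or copied

      def write_source(self, index, data):
          # Preserve the original block before the first overwrite after establish.
          if index not in self.copied:
              self.frozen[index] = self.source[index]
              self.copied.add(index)
          self.source[index] = data

      def read_target(self, index):
          # The target view is available immediately: blocks not yet copied are
          # read from the unchanged source.
          return self.frozen.get(index, self.source[index])

  volume = ["a", "b", "c"]
  flash = PointInTimeCopy(volume)
  flash.write_source(0, "A")                         # production keeps writing
  print(flash.read_target(0), flash.read_target(1))  # prints: a b  (the point-in-time view)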

Following is a summary of the options available with FlashCopy.

Multiple Relationship FlashCopy


Multiple Relationship FlashCopy allows a source to have FlashCopy relationships with up to
12 targets simultaneously.

Incremental FlashCopy
Incremental FlashCopy provides the capability to refresh a LUN or volume involved in a
FlashCopy relationship. When a subsequent FlashCopy is initiated, only the data required to
bring the target current to the source's newly established point-in-time is copied.

FlashCopy to a remote mirror primary


FlashCopy to a remote mirror primary lets you establish a FlashCopy relationship where the
target is a remote mirror (Metro Mirror or Global Copy) primary volume. This overcomes
previous limitations on the ESS that especially affected z/OS users using data set level
FlashCopy for copy operations within a mirrored pool of production volumes.

Consistency Groups
Consistency Groups can be used to maintain a consistent point-in-time copy across multiple
LUNs or volumes, or even multiple DS8000, DS6800, ESS 800, and ESS 750 systems.

Inband commands over remote mirror link


In a remote mirror environment Inband FlashCopy allows commands to be issued from the
local or intermediate site, and transmitted over the remote mirror Fibre Channel links for
execution on the remote DS8000. This eliminates the need for a network connection to the
remote site solely for the management of FlashCopy.

Remote mirror and copy functions


The remote mirror and copy functions include Metro Mirror, Global Copy, Global Mirror, and
Metro/Global Mirror. There is also z/OS Global Mirror for the System z environments. As with
FlashCopy, remote mirror and copy functions can also be established between DS8000
systems and DS6800 and ESS 800/750 systems.

Following is a summary of the remote mirror and copy options available with the DS8000.

Metro Mirror
Metro Mirror, previously called Peer-to-Peer Remote Copy (PPRC), provides a synchronous
mirror copy of LUNs or volumes at a remote site within 300 km.
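
A rough Python estimate shows why synchronous mirroring is practical only over limited
distances. The figure of about 5 microseconds of propagation delay per kilometre of fibre is
an assumption for illustration; it ignores switching, protocol, and channel extension
overheads, so real-world numbers will be higher.

  # Back-of-the-envelope sketch: extra latency a synchronous write pays for the
  # remote acknowledgement (round trip over the distance).
  def added_write_latency_ms(distance_km, us_per_km=5.0):
      return 2 * distance_km * us_per_km / 1000.0

  for km in (10, 100, 300):
      print(f"{km:4d} km -> about {added_write_latency_ms(km):.1f} ms added per write")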

Global Copy
Global Copy, previously called Extended Distance Peer-to-Peer Remote Copy (PPRC-XD), is
a non-synchronous long distance copy option for data migration and backup.

Global Mirror
Global Mirror provides an asynchronous mirror copy of LUNs or volumes over virtually
unlimited distances. The distance is typically limited only by the capabilities of the network
and channel extension technology being used.

Metro/Global Mirror
Metro/Global Mirror is a three-site data replication solution for both the open systems and the
System z environments. Local site (site A) to intermediate site (site B) provides high
availability replication using synchronous Metro Mirror, and intermediate site (site B) to
remote site (site C) provides long distance disaster recovery replication using asynchronous
Global Mirror.

z/OS Global Mirror


z/OS Global Mirror previously called Extended Remote Copy (XRC) provides an
asynchronous mirror copy of volumes over virtually unlimited distances for the System z.

z/OS Metro/Global Mirror


This is a combination of copy services for System z environments, that uses z/OS Global
Mirror to mirror primary site data to a remote location that is at a long distance and also uses
Metro Mirror to mirror the primary site data to a location within the metropolitan area. This
enables a z/OS three-site high availability and disaster recovery solution.

1.2.6 Interoperability
The DS8000 not only supports a broad range of server environments, but it can also perform
remote mirror and copy with the DS6000 and the ESS Models 750 and 800. This offers a
dramatically increased flexibility in developing mirroring and remote copy solutions, and also
the opportunity to deploy business continuity solutions at lower costs than have been
previously available.

1.2.7 Service and setup


The installation of the DS8000 is performed by IBM in accordance to the installation
procedure for this machine. The customer’s responsibility is the installation planning, the
retrieval and installation of feature activation codes, and the logical configuration planning and
execution. This has not changed with respect to the previous ESS model.

For maintenance and service operations, the Storage Hardware Management Console
(S-HMC) is the focal point. The management console is a dedicated workstation that is
physically located (installed) inside the DS8000 subsystem and can automatically monitor the
state of your system, notifying you and IBM when service is required.

The S-HMC is also the interface for remote services (call home and call back) which can be
configured to meet customer requirements. It is possible to allow one or more of the following:
call on error (machine detected), connection for a few days (customer initiated), and remote
error investigation (service initiated). The remote connection between the management
console and the IBM service organization will be done via a virtual private network (VPN)
point-to-point connection over the internet or modem.

The DS8000 comes with an outstanding four year warranty, an industry first, on both
hardware and software.

1.2.8 Configuration flexibility


The DS8000 series uses virtualization techniques to separate the logical view that hosts have
of LUNs from the underlying physical layer, thus providing high configuration flexibility.
Virtualization is discussed in Chapter 6, “Virtualization concepts” on page 91.

Dynamic LUN/volume creation and deletion


The DS8000 gives a high degree of flexibility in managing storage, allowing LUNs to be
created and deleted non-disruptively, even within an array. Also, when a LUN is deleted, the
freed capacity can be used with other free space to form a LUN of a different size.

Large LUN and large CKD volume support


You can configure LUNs and volumes to span arrays, allowing for larger LUN sizes up to
2 TB. The maximum CKD volume size is 65520 cylinders (about 55.6 GB), greatly reducing
the number of volumes to be managed.

Flexible LUN to LSS association


With no predefined association of arrays to LSSs on the DS8000 series, users are free to put
LUNs or CKD volumes into LSSs and make best use of the 256 address range, overcoming
previous ESS limitations, particularly for System z.

Simplified LUN masking


The implementation of volume group based LUN masking (as opposed to adapter based
masking as on the ESS) simplifies storage management by grouping all or some WWPNs of a
host into a Host Attachment. Associating the Host Attachment to a Volume Group allows all
adapters within it access to all of the storage in the Volume Group.
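
The following toy Python model (not the DS8000 microcode or the DS CLI object model; all
names and WWPNs are made up) illustrates the idea: once some or all WWPNs of a host are
grouped into a host attachment and the attachment is associated with a volume group, every
one of those ports sees all volumes in that volume group.

  # Toy model of volume-group-based LUN masking, for illustration only.
  class VolumeGroup:
      def __init__(self, name, volumes):
          self.name = name
          self.volumes = set(volumes)

  class HostAttachment:
      """Groups some or all WWPNs of a host and points at one volume group."""
      def __init__(self, name, wwpns, volume_group):
          self.name = name
          self.wwpns = set(wwpns)
          self.volume_group = volume_group

  def visible_volumes(wwpn, attachments):
      # Every port in a host attachment sees all volumes of its volume group.
      volumes = set()
      for attachment in attachments:
          if wwpn in attachment.wwpns:
              volumes |= attachment.volume_group.volumes
      return volumes

  group = VolumeGroup("prod_vg", ["1000", "1001", "1002"])
  host = HostAttachment("aix_host1",
                        ["10000000C9ABCD01", "10000000C9ABCD02"], group)
  print(visible_volumes("10000000C9ABCD02", [host]))   # all three volumes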

Logical definitions — maximum values


Following is a list of the current DS8000 maximum values for the major logical definitions:
򐂰 up to 255 logical subsystems (LSS)
򐂰 up to 65280 logical devices
򐂰 up to 1280 paths per FC port
򐂰 up to 2 TB LUNs
򐂰 up to 65520 cylinders CKD volumes
򐂰 up to 8192 process logins (509 per SCSI-FCP port)

1.3 Positioning the DS8000 series


The IBM System Storage DS8000 series is designed to provide exceptional performance,
scalability, and flexibility while supporting 24 x 7 operations to help provide the access and
protection demanded by today's business environments. It also delivers the flexibility and
centralized management needed to lower long-term costs. It is part of a complete set of disk
storage products that are all part of the IBM System Storage DS Family and is the IBM disk
product of choice for environments that require the utmost in reliability, scalability, and
performance for mission-critical workloads.

1.3.1 Common set of functions


The DS8000 series and DS6000 share many features including FlashCopy, Metro Mirror,
Global Copy, and Global Mirror. In addition to this, the DS8000 and DS6000 series mirroring
solutions are also compatible with the IBM TotalStorage Models 800 and 750. All this offers a
new era in flexibility and cost effectiveness in designing business continuity solutions.

1.3.2 Common management functions


Within the DS8000 series and the DS6000 series of disk storage systems, the provisioning
tools like the DS Storage Manager (DS SM) Graphical User Interface (GUI) or the DS SM
Command-Line Interface (CLI) are very similar. Scripts written for one series will also work for
the other. Given this, it is easy for a storage
administrator to work with either of the products. This reduces management costs since no
training on a new product is required when adding a product of another series.

1.3.3 DS8000 series compared to other storage disk subsystems


There are many things in common within the IBM System Storage DS family of products itself,
as well as with the IBM TotalStorage family of products, all of which simplify the management
of environments where both solutions co-exist and thus reduce the associated costs. Still,
there are unique characteristics that differentiate the DS8000, and we discuss them in this
section.

DS8000 compared to ESS


The DS8000 is a follow-on generation of the ESS, so most of the functions that are available
in the ESS are also available in the DS8000. From a consolidation point of view, it is now
possible to replace four ESS Model 800s with one DS8300. And with the LPAR
implementation you get an additional consolidation opportunity because you get two storage
system logical partitions in one physical machine.

Since the mirror solutions are compatible between the ESS and the DS8000 series, it is
possible to think about a setup for a disaster recovery solution with the high performance
DS8000 at the primary site and the ESS at the secondary site, where the same performance
is not required.


DS8000 compared to DS6000


The DS8000 and the DS6000 now offer an enterprise continuum of storage solutions. All
copy functions are available on both systems —with the exception of z/OS Global Mirror, and
Metro/Global Mirror which are only available on the DS8000. You can do Metro Mirror, Global
Mirror, and Global Copy between the two series. The DS CLI commands and the DS GUI look
the same for both systems.

Obviously the DS8000 can deliver a higher throughput and scales higher than the DS6000,
but not all customers need this high throughput and capacity. You can choose the system that
fits your needs. Both systems support the same SAN infrastructure and the same host
systems.

So it is very easy to have a mixed environment with DS8000 and DS6000 disk storage
subsystems with common skills and management functions thus optimizing the cost
effectiveness of your storage solution.

Logical partitioning available with the DS8300 Turbo Model 9B2 is not available on the
DS6000.

1.4 Performance features


The IBM System Storage DS8000 series offers optimally balanced performance, which is up
to six times the throughput of the ESS Model 800. This is possible because the DS8000
incorporates many performance enhancements, such as the dual two-way and dual four-way
POWER5+ processor complex implementation of the Turbo models, four-port 2 Gb and 4 Gb
Fibre Channel/FICON host adapters, Fibre Channel disk drives, and the high-bandwidth,
fault-tolerant internal interconnections.

With all these components, the DS8000 is positioned at the top of the high performance
category.

1.4.1 Sequential Prefetching in Adaptive Replacement Cache (SARC)


One of the performance enhancers of the DS8000 is its self-learning cache algorithm which
improves cache efficiency and enhances cache hit ratios. This algorithm used in the DS8000
series is called Sequential Prefetching in Adaptive Replacement Cache (SARC).

SARC provides the following:


򐂰 Sophisticated, patented algorithms to determine what data should be stored in cache
based upon the recent access and frequency needs of the hosts
򐂰 Pre-fetching, which anticipates data prior to a host request and loads it into cache
򐂰 Self-Learning algorithms to adaptively and dynamically learn what data should be stored
in cache based upon the frequency needs of the hosts
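
As a greatly simplified illustration of the prefetching idea only (the real SARC algorithm is
far more sophisticated and is not reproduced here), the following Python sketch detects a
sequential read stream and stages the next few tracks into cache before the host asks for
them.

  # Simplified sequential-detection prefetch, for illustration only.
  from collections import deque

  class PrefetchingCache:
      def __init__(self, prefetch_depth=4, history=3):
          self.cache = set()
          self.recent = deque(maxlen=history)    # recently requested track numbers
          self.prefetch_depth = prefetch_depth

      def read(self, track):
          hit = track in self.cache
          self.cache.add(track)
          self.recent.append(track)
          # If the last few requests are strictly ascending, assume a sequential
          # stream and stage the next tracks ahead of the host.
          recent = list(self.recent)
          if len(recent) == self.recent.maxlen and \
             all(b == a + 1 for a, b in zip(recent, recent[1:])):
              for t in range(track + 1, track + 1 + self.prefetch_depth):
                  self.cache.add(t)
          return hit

  cache = PrefetchingCache()
  print([cache.read(t) for t in range(100, 110)])   # later reads hit thanks to prefetch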

1.4.2 Multipath Subsystem Device Driver (SDD)


The Multipath Subsystem Device Driver (SDD) is a pseudo device driver on the host system
designed to support the multipath configuration environments in IBM products. It provides
load balancing and enhanced data availability capability. By distributing the I/O workload over
multiple active paths, SDD provides dynamic load balancing and eliminates data-flow
bottlenecks. SDD also helps eliminate a potential single point of failure by automatically
re-routing I/O operations when a path failure occurs.


SDD is provided with the DS8000 series at no additional charge. Fibre Channel (SCSI-FCP)
attachment configurations are supported in the AIX, HP-UX, Linux, Windows, Novell
NetWare, and Sun Solaris environments.

For more information on SDD see Section 15.1.4, “Multipathing support — Subsystem Device
Driver (SDD)” on page 290.

1.4.3 Performance for System z


The DS8000 series supports the following IBM performance enhancements for System z
environments:
򐂰 FICON extends the ability of the DS8000 series system to deliver high bandwidth potential
to the logical volumes needing it, when they need it. FICON, working together with other
DS8000 series functions, provides a high-speed pipe supporting a multiplexed operation.
Support for FICON attachment is an optional feature of the DS8000 series.
򐂰 Parallel Access Volumes (PAV) enable a single System z server to simultaneously
process multiple I/O operations to the same logical volume, which can help to significantly
reduce device queue delays. This is achieved by defining multiple addresses per volume.
With Dynamic PAV, the assignment of addresses to volumes can be automatically
managed to help the workload meet its performance objectives and reduce overall
queuing. PAV is an optional feature on the DS8000 series.
򐂰 HyperPAV which is designed to enable applications to achieve equal or better
performance than with PAV alone, while also using the same or fewer operating system
resources.
򐂰 Multiple Allegiance expands the simultaneous logical volume access capability across
multiple System z servers. This function, along with PAV, enables the DS8000 series to
process more I/Os in parallel, helping to improve performance and enabling greater use of
large volumes.
򐂰 I/O priority queuing allows the DS8000 series to use I/O priority information provided by
the z/OS Workload Manager to manage the processing sequence of I/O operations.

Chapter 10, “Performance” on page 167, gives you more information about the performance
aspects of the DS8000 family.

1.4.4 Performance enhancements for System p


Many System p users will benefit from the following DS8000 Turbo model enhancements:
򐂰 End-to-End I/O Priorities
򐂰 Cooperative Caching
򐂰 Long Busy Wait Host Tolerance

More on these performance enhancements can be found in Chapter 10, “Performance” on page 167.

1.4.5 Performance enhancements for z/OS Global Mirror


Many users of z/OS Global Mirror, the System z based asynchronous disk mirroring
capability, will benefit from the DS8000 Turbo models enhancement “z/OS Global Mirror
suspend instead of long busy option”. In the event of high workload peaks, which may
temporarily overload the z/OS Global Mirror configuration bandwidth, this DS8000
enhancement can initiate a z/OS Global Mirror SUSPEND preserving primary site application
performance, an improvement over the previous LONG BUSY status.


Refer to IBM System Storage DS8000 Series: Copy Services with System z servers,
SG24-6787 for a detailed discussion of z/OS Global Mirror and related enhancements.


Chapter 2. Model overview


This chapter provides an overview of the IBM System Storage DS8000 storage disk
subsystem, the different models and how well they scale regarding capacity and
performance. The topics covered in this chapter include:
򐂰 DS8000 series models overview.
򐂰 DS8000 series model comparison.
򐂰 DS8000 design for scalability.


2.1 DS8000 series models overview


The DS8000 series offers four models with base and expansion units and connects to a wide
variety of host server platforms. This section introduces the characteristics of the
available models.

2.1.1 Model naming conventions


Before we get into the details about each model, we will start with an explanation of the model
naming conventions in the DS8000.

The DS8000 series currently has three Turbo models available: the DS8100 Turbo Model 931,
and the DS8300 Turbo Models 932 and 9B2. The difference in models is in the processors
and in the capability of storage system LPARs. The predecessors of the DS8000 series Turbo
models were the DS8000 series Models 921, 922, and 9A2.

The last position of the three characters 9xx refers to either a dual two-way processor
complex model (9x1), or a dual four-way processor complex model (9x2), or an expansion
frame with no processors (9xE). The middle position of the three characters means LPAR or
non-LPAR model (93x are non-LPAR models, and 9Bx is an LPAR model). You can also
order expansion frames with the base frame: 92E for non-LPAR models and 9AE for LPAR
models.

Figure 2-1 summarizes the naming rules that apply to the Turbo models 931, 932, 9B2, as
well as to the predecessor models 921, 922, and 9A2 —for these older models the x takes the
values 2 and A.

Model Naming Conventions

  9xy     y=1                  y=2                  y=E
  x=2     Non-LPAR / 2-way     Non-LPAR / 4-way     Expansion for non-LPAR base unit
  x=3     Non-LPAR / 2-way     Non-LPAR / 4-way     N/A
  x=A     N/A                  LPAR / 4-way         Expansion for LPAR base unit
  x=B     N/A                  LPAR / 4-way         N/A

For example, 931 is a non-LPAR / 2-way base frame, and 9AE is an expansion frame for an
LPAR base unit.

Figure 2-1 Naming convention for Turbo models 931, 932, 9B2 and older 921, 922 and 9A2
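
The naming rules in Figure 2-1 can be restated in a few lines of Python. This is a hypothetical
helper written only to summarize the convention shown above; it is not part of any IBM tool.

  # Hypothetical helper: decode a 9xy DS8000 model name per the convention above.
  def describe_model(model: str) -> str:
      x, y = model[1], model[2]
      lpar = "LPAR" if x in ("A", "B") else "non-LPAR"
      if y == "E":
          return f"expansion frame ({lpar} base unit)"
      ways = {"1": "dual two-way", "2": "dual four-way"}[y]
      return f"{lpar} base frame, {ways} processor complex"

  for model in ("931", "932", "9B2", "92E", "9AE"):
      print(model, "->", describe_model(model))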

In the following sections, we describe these models further:


1. DS8100 Turbo Model 931
This model features a dual two-way processor complex implementation and it includes a
Model 931 base frame and an optional Model 92E expansion frame.
2. DS8300 Turbo Model 932
This model features a dual four-way processor complex implementation and includes a
Model 932 base frame and up to two optional Model 92E expansion frames.


3. DS8300 Turbo Model 9B2


This model features a dual four-way processor complex implementation and includes a
Model 9B2 base frame and up to two optional Model 9AE expansion frames.
In addition, the Model 9B2 supports the configuration of two storage system Logical
Partitions (LPARs) in one machine.

2.1.2 Machine types 2107 and 242x


You will see DS8000 series models associated with two machine types —2107 or 242x:
򐂰 2107 machine type —associated with DS8000 series Models 921, 922, and 9A2. These were
the first models made available in the DS8000 series. The 92E and 9AE expansion frames
that attached to the 2107-92x/9A2 base units also had a 2107 machine type.
򐂰 2107 machine type —associated with DS8000 series Models 931, 932, and 9B2. These were
the Turbo models that incorporated several new features over the predecessor 92x/9A2
models. The 92E and 9AE expansion frames that attached to the 2107-93x/9B2 base units
also had a 2107 machine type.
򐂰 242x machine type —associated with DS8000 series Models 931, 932, and 9B2. These
machine types correspond to the more recently available “Enterprise Choice” length of
warranty offer, which allows the Turbo models to be ordered with a one-year, two-year,
three-year, or four-year warranty period —x=1, 2, 3, or 4 respectively. The 92E and 9AE
expansion frames that attach to the 242x base units carry the same 242x machine type as
the base unit.
The DS8000 series with “Enterprise Choice” length of warranty is designed to deliver the
same enterprise class storage capabilities and functionality as the current DS8000 series
Turbo Models.

Table 2-1 summarizes the possible machine types and models combinations.

Table 2-1 2107 and 242x machine types and models

  Machine type    Base unit models    Expansion models    Comments
  2107            921, 922, 9A2       92E and 9AE
  2107            931, 932, 9B2       92E and 9AE         Turbo models
  242x            931, 932, 9B2       92E and 9AE         Turbo models, Enterprise Choice warranty

  The two Turbo rows refer to the same models; they share the same characteristics and
  functionality regardless of the machine type.

Note: The 242x machine type numbers used for ordering purposes have changed in order
to support the multiple warranty options, but the DS8000 Turbo Models are otherwise the
same systems announced in August 2006.

In the discussions throughout this book we refer to the models rather than the machine
types, because identical models share identical characteristics independently of the
machine type designation.

Chapter 2. Model overview 17


6786_6452ModelOvw.fm Draft Document for Review November 14, 2006 3:49 pm

2.1.3 DS8100 Turbo Model 931 overview

Figure 2-2 DS8100 base frame with covers removed (left) and with Model 92E (right)

The DS8100 Turbo Model 931 has the following features:


򐂰 Two processor complexes, each with a System p5 POWER5+ 2.2 GHz two-way CEC.
򐂰 Base frame with up to 128 DDMs for a maximum base frame disk storage capacity of 38.4
TB with FC DDMs and 64TB with 500 GB FATA DDMs.
򐂰 Up to 128 GB of processor memory (DDR2) —this was the cache in the ESS 800.
򐂰 Up to 16 four-port Fibre Channel/FICON host adapters (HAs) of 2 or 4 Gbps. Each port
can be independently configured as either
– FCP port to open system hosts attachment
– FICON port to connect to System z hosts
– FCP link for remote mirror and copy connectivity
– This totals up to 64 ports with any mix of FCP and FICON ports. ESCON host
connection is also supported, but with ESCON a host adapter contains only two
ESCON ports —and both must be ESCON ports. A DS8000 can have both ESCON
adapters and Fibre Channel/FICON adapters at the same time.
򐂰 The DS8100 Turbo Model 931 can connect to one expansion frame Model 92E. Figure 2-2
displays a front view of a DS8100 Model 931 —with the covers off, and a front view of a
Model 931 with an expansion Model 92E attached to it —with the covers on. The base and
expansion frame together allow for a maximum capacity of 384 DDMs —128 DDMs in the
base frame and 256 DDMs in the expansion frame. With all being 300 GB enterprise
DDMs, this results in a maximum disk storage subsystem capacity of 115.2 TB, with 500
GB FATA DDMs this results in a maximum DS8100 storage capacity of 192 TB.

Figure 2-3 on page 19 shows the maximum configuration of a DS8100 with the 931 base
frame plus a 92E expansion frame and provides the front view of the basic structure and
placement of the hardware components within both frames.

Note: A model 931 can be upgraded to a 932 or to a 9B2 model.

Figure 2-3 Maximum DS8100 configuration — 931 base unit (128 drives) and 92E expansion (256 drives)

2.1.4 DS8300 Turbo Models 932 and 9B2 overview


The DS8300 Turbo Models 932 and 9B2 offer higher storage capacity than the DS8100 and
they provide greater throughput by means of their dual four-way processor implementation.

The Model 9B2 also has the capability of providing two storage images through two storage
system LPARs within the same physical storage unit. Both storage images split the processor,
cache, and adapter resources in a 50:50 manner.

Both models provide the following features:


򐂰 Two processor complexes, each with a System p5 POWER5+ 2.2 GHz four-way CEC.
򐂰 Base frame with up to 128 DDMs for a maximum base frame disk storage capacity of 38.4
TB with FC DDMs and 64TB with 500 GB FATA DDMs. This is the same as for the DS8100
base frame.
򐂰 Up to 256 GB of processor memory (DDR2) — this was the cache in the ESS 800.
򐂰 Up to 16 four-port Fibre Channel/FICON host adapters (HAs) of 2 or 4 Gbps in the base
frame —when configured without an expansion frame. Each port can be independently
configured as either
– FCP port to open system hosts attachment
– FICON port to connect to System z hosts
– FCP link for remote mirror and copy connectivity
– This totals up to 64 ports for the base frame with any mix of FCP and FICON ports.
ESCON host connection is also supported, but with ESCON a host adapter contains
only two ESCON ports —and both must be ESCON ports. A DS8000 can have both
ESCON adapters and Fibre Channel/FICON adapters at the same time.

Figure 2-4 displays a DS8300 Turbo Model 932 from the rear. On the left is the rear view with
closed covers. The right shows the rear view with no covers while the middle view is another
rear view but only with one cover off. This view shows the standard 19 inch rack mounted
hardware including disk drives, processor complexes, and I/O enclosures.

Figure 2-4 DS8300 Turbo Model 932/9B2 base frame rear views — with and without covers

The DS8300 Turbo models can connect to one or two expansion frames. This provides the
following configuration alternatives:
򐂰 With one expansion frame the storage capacity and number of adapters of the DS8300
models can expand to:
– Up to 384 DDMs in total —as for the DS8100. This is a maximum disk storage capacity
of 115.2 TB with 300 GB FC DDMs and 192 TB with 500 GB FATA DDMs.
– Up to 32 host adapters (HAs). This can be an intermix of Fibre Channel/FICON
(four-port) and ESCON (two-port) adapters.
򐂰 With two expansion frames the disk capacity of the DS8300 models expands to:
– Up to 640 DDMs in total for a maximum disk storage capacity of 192 TB when utilizing
300 GB FC DDMs and 320 TB with 500 GB FATA DDMs.

Figure 2-5 on page 21 shows a maximum configuration for a DS8300 Turbo Model with two
expansion frames —base Model 932 connects to Model 92E expansions and base Model
9B2 connects to Model 9AE expansions. This figure also shows the basic hardware
components and how they are distributed across all three frames.

Figure 2-5 DS8300 maximum configuration — base frame (128 drives) and two expansion frames (256
drives each); Model 932 attaches to 92E expansions, Model 9B2 attaches to 9AE expansions

Note the additional I/O enclosures in the first expansion frame, which is the middle frame in
Figure 2-5. Each expansion frame has twice as many DDMs as the base frame, so with 128
DDMs in the base frame and 2 x 256 DDMs in the expansion frames, a total of 640 DDMs is
possible.

2.1.5 DS8300 Turbo Model 932 with four expansion frames — 512 TB
In addition to the standard configurations you can order, with the DS8300 Turbo Model 932
you have the possibility of submitting an RPQ (Request for Price Quotation) for a
configuration of base unit plus up to four expansion frames. With the fourth expansion frame
attached, the capacity of the DS8300 Turbo Model 932 scales up to 1024 DDMs in total. This
is a maximum disk storage capacity of 512TB —when using 500 GB FATA DDMs.

The following considerations apply:


򐂰 Supports up to 256 disk drives in the third expansion frame:
– Requires full base unit and full first expansion frame
– Second and third expansion frames can be configured in 16-drive increments up to 256
drives for each frame
򐂰 Supports up to 128 drives in the fourth expansion frame:
– Requires full base unit, and full first and second expansion frames
– Third expansion frame can be configured in 16-drive increments up to 256 drives
– Fourth expansion frame can be configured in 16-drive increments up to 128 drives
maximum.
򐂰 Offered on DS8300 Turbo Model 932 —new factory ships only
򐂰 Billable RPQs ordered for attachment of third and fourth expansion frames:
– First expansion frame must be full to attach RPQ 8S0852 —third expansion frame
– Second expansion frame must be full to attach RPQ 8S053 —fourth expansion frame

Figure 2-6 on page 22 shows a picture of a DS8300 Turbo Model 932 with the maximum
configuration of 1024 disk drives.


Figure 2-6 DS8300 Turbo Model 932 — maximum configuration with 1024 disk drives

2.2 DS8000 series model comparison


DS8000 models vary in processor type, maximum disk storage capacity, processor memory,
and maximum host adapters. Table 2-2 provides a comparison for the DS8000 series Turbo
models 931, 932, and 9B2, as well as the predecessor DS8000 series models 921, 922 and
9A2.

Table 2-2 DS8000 series model comparison — 921, 922, and 9A2, and Turbo 931, 932, and 9B2

  Base     Images   Expansion     Processor        DDMs      Processor           Max. Host
  Model             Model         type             up to     Memory              Adapters
  921      1        None          2-way 1.5 GHz    <= 128    up to 128 GB        <= 16
                    1 x 92E                        <= 384                        <= 16
  922      1        None          4-way 1.9 GHz    <= 128    up to 256 GB        <= 16
                    1 x 92E                        <= 384                        <= 32
                    2 x 92E                        <= 640                        <= 32
  9A2      1 or 2   None          4-way 1.9 GHz    <= 128    up to 256 GB        <= 16
                    1 x 9AE                        <= 384                        <= 32
                    2 x 9AE                        <= 640                        <= 32
  931      1        None          2-way 2.2 GHz    <= 128    up to 128 GB DDR2   <= 16
                    1 x 92E                        <= 384                        <= 16
  932      1        None          4-way 2.2 GHz    <= 128    up to 256 GB DDR2   <= 16
                    1 x 92E                        <= 384                        <= 32
                    2 x 92E                        <= 640                        <= 32
                    4 x 92E (1)                    <= 1024                       <= 32
  9B2      1 or 2   None          4-way 2.2 GHz    <= 128    up to 256 GB DDR2   <= 16
                    1 x 9AE                        <= 384                        <= 32
                    2 x 9AE                        <= 640                        <= 32

  Note 1: Must be requested via RPQ submission —see Section 2.1.5.

Depending on the DDM sizes, which can be different within a 9x1 or 9x2, and on the number of
DDMs, the total capacity is calculated accordingly.

Each Fibre Channel/FICON Host Adapter has four Fibre Channel ports, providing up to 128
Fibre Channel ports for a maximum configuration. Each ESCON Host Adapter has two ports,
therefore, the maximum number of ESCON ports possible is 64. There can be an intermix of
Fibre Channel/FICON and ESCON adapters, up to the maximum of 32 adapters.

2.3 DS8000 design for scalability


One of the advantages of the DS8000 series is its linear scalability for capacity and
performance. If your business (or your customer’s business) grows rapidly, you may need
much more storage capacity, faster storage performance, or both. The DS8000 series can
meet these demands within a single storage unit. We explain the scalability in this section.

2.3.1 Scalability for capacity


The DS8000 series has a linear capacity growth up to 192 TB for FC DDMs and 320 TB for
FATA DDMs —and up to 512 TB for the DS8300 Turbo Model 932 via RPQ request.

Large and scalable capacity


In a DS8000 you can have from 16 DDMs up to 384 DDMs (Model 931) or 640 DDMs (Models
932 and 9B2) —and up to 1024 DDMs in the model 932 via RPQ request (see 2.1.5, “DS8300
Turbo Model 932 with four expansion frames — 512 TB” on page 21).
򐂰 Each base frame can have up to 128 DDMs and an expansion frame can have up to
256 DDMs.
򐂰 The DS8100 model can have one expansion frame and the DS8300 models can have two
expansion frames
򐂰 The DS8000 series, both the DS8100 and the DS8300 models, can contain the following
types of Fibre Channel disk drives: 73 GB (15k rpm), 146 GB (10k and 15k rpm), and 300
GB (10k rpm); and they can also have 500 GB (7,200 rpm) Fibre Channel ATA (FATA) disk
drives.

Therefore, you can select a physical capacity from 1.1 TB (73 GB x 16 DDMs) up to 320 TB
(500 GB x 640 FATA DDMs). We summarize these characteristics in Table 2-3 on page 24.


Table 2-3 Capacity comparison — device adapters, DDMs, and storage capacity

                           2-way           2-way           4-way           4-way           4-way
                           (base frame     (with one       (base frame     (with one       (with two
                           only)           expansion)      only)           expansion)      expansions)
  Device adapters          2 to 8          2 to 8          2 to 8          2 to 16         2 to 16
  (1 - 4 Fibre Loops)
  DDMs                     16 to 128       16 to 384       16 to 128       up to 384       up to 640
  (increments of 16)
  Physical capacity        1.1 to 64 TB    1.1 to 192 TB   1.1 to 64 TB    1.1 to 192 TB   1.1 to 320 TB

Adding DDMs
A significant benefit of the DS8000 series is the ability to add DDMs without disruption for
maintenance. IBM offers capacity on demand solutions that are designed to meet the
changing storage needs of rapidly growing e-business. The Standby Capacity on Demand
(CoD) offering is designed to provide you with the ability to tap into additional storage and is
particularly attractive if you have rapid or unpredictable storage growth. Up to four Standby
CoD disk drive sets (64 disk drives) can be factory or field installed into your system. To
activate, you simply logically configure the disk drives for use —a non-disruptive activity that
does not require intervention from IBM. Upon activation of any portion of a Standby CoD disk
drive set, you must place an order with IBM to initiate billing for the activated set. At that time,
you can also order replacement Standby CoD disk drive sets. For more information on the
Standby CoD offering refer to the DS8000 series announcement letter —IBM announcement
letters can be found at http://www.ibm.com/products.

When the first expansion frame is attached to a 9x2 base frame, a disruptive maintenance
action is required —the first expansion frame for the Models 9x2 has I/O enclosures and
these I/O enclosures must be connected into the existing RIO-G loops. If you install the base
frame and the first expansion frame for the Model 9x2 at the beginning, you do not need a
disruptive upgrade to add DDMs. The expansion frame for the DS8100 model and the
second expansion frame for the DS8300 models have no I/O enclosures, so you can
attach them to the existing configuration without disruption.

2.3.2 Scalability for performance — linear scalable architecture


The DS8000 series also has linear scalability for performance. This capability is due to the
architecture of the DS8000 series. Figure 2-7 and Figure 2-8 on page 25 illustrate how you
can realize the linear scalability in the DS8000 series.


Figure 2-7 2-way model components

Figure 2-8 4-way model components

These two figures describe the main components of the I/O controller for the DS8000 series.
The main components include the I/O processors, data cache, internal I/O bus (RIO-G loop),
host adapters, and device adapters. Figure 2-7 is a 2-way model and Figure 2-8 is a 4-way
model. You can easily see that, if you upgrade from the 2-way model to the 4-way model, the
number of main components doubles within a storage unit.

More discussion on the DS8000 series components can be found in Chapter 10, “Performance” on
page 167.

Benefits of the DS8000 scalable architecture


Because the DS8000 series adopts this architecture for the scaling of models, the DS8000
series has the following benefits:
򐂰 The DS8000 series is easily scalable for performance and capacity.
򐂰 The DS8000 series architecture can be easily upgraded.
򐂰 The DS8000 series has a longer life cycle than other storage devices.

2.3.3 Model conversions


The following model conversions are possible:
– Turbo Model 931 to 932
– Turbo Model 931 to 9B2
– Turbo Model 932 to 9B2
– Turbo Model 9B2 to 932

DS8000 model conversions are disruptive. In addition, data may not be preserved during the
conversion. Data migration or backup/restore is your responsibility. Fee-based data migration
services are available from Global Services.

The IBM POWER5+ processor is an optional feature for the DS8100 Model 921 and DS8300
Models 922 and 9A2. The installation of this feature is disruptive. The following upgrade paths
to Turbo models 931, 932, and 9B2 equivalences, are available for the older models 921, 922,
and 9A2:
򐂰 Model 921 upgrade to “Turbo Model 931 equivalence” —add Processor Upgrade feature
and complete a Processor Memory feature conversion.
򐂰 Model 921 upgrade to “Turbo Model 932 equivalence” —first a Model 921 to 922 model
conversion, then add Processor Upgrade feature and complete Processor Memory feature
conversion.
򐂰 Model 921 upgrade to “Turbo Model 9B2 equivalence” —first a Model 921 to 9A2 model
conversion, then add Processor Memory feature.
򐂰 Model 922 upgrade to “Turbo Model 932 equivalence” —add Processor Upgrade feature
and complete a Processor Memory feature conversion.
򐂰 Model 922 upgrade to “Turbo Model 9B2 equivalence” —first a Model 922 to 9A2 model
conversion, then add Processor Upgrade feature.
򐂰 Model 9A2 upgrade to “Turbo Model 9B2 equivalence” —add Processor Upgrade feature
and complete Processor Memory feature conversion.
򐂰 Model 9A2 upgrade to “Turbo Model 932 equivalence” —first a Model 9A2 to 922 model
conversion, then add Processor Upgrade feature.

For more information on model conversions and upgrade paths refer to the DS8000 series
Turbo models announcement letter. IBM announcement letters can be found at:
http://www.ibm.com/products.


Chapter 3. Storage system LPARs (logical


partitions)
This chapter discusses the DS8000 series implementation of storage system logical
partitions (LPARs). The following topics are covered:
򐂰 Introduction to logical partitioning
򐂰 DS8300 and LPARs
– LPARs and storage facility images (SFI)
– DS8300 LPAR implementation
– Storage facility image hardware components
– DS8300 Model 9B2 configuration options
򐂰 LPAR security through POWER Hypervisor (PHYP)
򐂰 LPAR and Copy Services
򐂰 LPAR benefits
򐂰 Summary


3.1 Introduction to logical partitioning


Logical partitioning allows the division of a single storage disk subsystem into multiple
independent virtual disk subsystems or partitions. IBM began work on logical partitioning in
the late 1960s, using S/360 mainframe systems with the precursors of VM, specifically CP40.
Since then, logical partitioning on IBM mainframes (now called IBM System z) has evolved
from a predominantly physical partitioning scheme based on hardware boundaries to one that
allows for virtual and shared resources with dynamic load balancing. In 1999 IBM
implemented LPAR support on the AS/400 (now called IBM System i) platform and on System
p in 2001. In 2000 IBM announced the ability to run the Linux operating system in an LPAR or
on top of VM on a System z server, to create thousands of Linux instances on a single
system.

3.1.1 Virtualization Engine technology


IBM Virtualization Engine comprises a suite of system services and technologies that
form key elements of IBM’s on demand computing model. It treats resources of individual
servers, storage, and networking products as if in a single pool, allowing access and
management of resources across an organization more efficiently. Virtualization is a critical
component in the on demand operating environment. The system technologies implemented
in the POWER5+ processor provide a significant advancement in the enablement of functions
required for operating in this environment.

LPAR is one component of the POWER5+ system technology that is part of the IBM
Virtualization Engine. Using IBM Virtualization Engine technology, selected models of the
DS8000 series can be used as a single, large storage system, or can be used as multiple
storage systems with logical partitioning (LPAR) capabilities. IBM LPAR technology, which is
unique in the storage industry, allows the resources of the storage system to be allocated into
separate logical storage system partitions, each of which is totally independent and isolated.
Virtualization Engine (VE) delivers the capabilities to simplify the infrastructure by allowing the
management of heterogeneous partitions/servers on a single system.

3.1.2 Logical partitioning concepts


It is appropriate to clarify some basic concepts related to logical partitioning.

Partitions
When a multi-processor computer is subdivided into multiple, independent operating system
images, those independent operating environments are called partitions. The resources on
the system are allocated to specific partitions.

Resources
Resources are defined as a system’s processors, memory, and I/O slots. I/O slots can be
populated by different adapters, such as Ethernet, SCSI, Fibre Channel or other device
controllers. A disk is allocated to a partition by assigning it the I/O slot that contains the disk’s
controller.

Logical partitioning (LPAR)


A logical partition uses hardware and firmware to logically partition the resources on a
system. LPARs logically separate the operating system images, so there is not a dependency
on the hardware, and consist of processors, memory, and I/O slots that are a subset of the
pool of available resources within a system, as shown in Figure 3-1 on page 29. While there
are configuration rules, the granularity of the units of resources that can be allocated to
partitions is very flexible. It is possible to add just one resource, independently of the others.

LPAR differs from physical partitioning in the way resources are grouped to form a partition.
Logical partitions do not need to conform to the physical boundaries of the hardware used to
build the server. LPAR adds more flexibility to select components from the entire pool of
available system resources.

Figure 3-1 Logical partition concepts

Software and hardware fault isolation


Because a partition runs an independent operating system image, there is strong software
isolation. This means that a job or software crash in one partition will not affect the resources
in another partition.

3.1.3 Why Logically Partition?


There is a demand to provide greater flexibility for high-end systems, particularly the ability to
subdivide them into smaller partitions. Here are a few reasons to partition a storage system.

Production and test environments


Generally, production and test environments should be isolated from each other. Without
partitioning, the only practical way of performing application development and testing is to
purchase additional hardware and software. Partitioning allows the system resources to be
divided logically, eliminating the need for additional storage dedicated to testing, and
providing more confidence that the test versions will migrate smoothly into production
because they are tested on the production hardware system.

Chapter 3. Storage system LPARs (logical partitions) 29


6786_6452ch_LPAR.fm Draft Document for Review November 14, 2006 3:49 pm

Application or environment isolation


Partitioning can be used to isolate one application solution or server environment from
another, eliminating the possibility of interference between them. One set of servers is
prevented from consuming excess resources at the expense of the other.

Increased flexibility of resource allocation


A workload with resource requirements that change over time can be managed more easily
within a partition that can be altered to meet the varying demands of the workload.

3.2 DS8300 and LPARs


The DS8000 series is a server-based disk storage system. With the integration of System p5
servers with POWER5 processor technology into the DS8000 series, IBM started offering an
implementation of the server LPAR functionality in a disk storage system initially with the
DS8300 Model 9A2 and now with the Turbo Model 9B2, which provides two virtual storage
systems in one physical machine. The resource allocation for processors, memory, and I/O
slots in the two storage system LPARs on the DS8300 is split in a fixed 50/50 ratio.

3.2.1 LPARs and storage facility images (SFI)


Before we start to explain how the LPAR functionality is implemented in the DS8300, we want
to further clarify some terms and naming conventions.

Figure 3-2 DS8300 Model 9B2 - LPAR and storage facility image (SFI). In the LPARxy naming used in
the figure, x is the processor complex number and y is the storage facility image number.

The DS8300 series incorporates two four-way POWER5+ server processors; see Figure 3-2.
We call each of these a processor complex. Each processor complex on the DS8300 is
divided into two processor LPARs (a set of resources on a processor complex that supports
the execution of an operating system). The storage facility image (SFI) is built from a pair of
processor LPARs, one on each processor complex.

Figure 3-2 shows that LPAR01 from processor complex 0 and LPAR11 from processor
complex 1 form storage facility image 1 (SFI 1). LPAR02 from processor complex 0 and
LPAR12 from processor complex 1 form the second storage facility image (SFI 2).

Storage facility images (SFIs) are also referred to as storage system LPARs —and
sometimes more briefly as storage LPARs.

3.2.2 DS8300 LPAR implementation


Each SFI will use the machine type/model number/serial number of the DS8300 Model 9B2
base frame. The frame serial number will end with 0. The last character of the serial number
will be replaced by a number in the range one to eight that uniquely identifies the DS8000
image. Initially, this character will be a 1or a 2, because there are only two SFIs available in
the 9B2 models. The serial number is needed to distinguish between the SFIs in the GUI, CLI,
and for licensing and allocating the licenses between the SFIs.
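
The serial number rule can be expressed in a couple of lines of Python. The serial number
used here is made up for illustration; only the rule itself (the trailing 0 of the frame serial
number replaced by the image number) comes from the explanation above.

  # Illustrative sketch of the SFI serial number convention (hypothetical serial).
  def sfi_serial(base_serial: str, image_number: int) -> str:
      assert base_serial.endswith("0") and 1 <= image_number <= 8
      return base_serial[:-1] + str(image_number)

  base = "7512340"                  # hypothetical base frame serial number
  print(sfi_serial(base, 1))        # 7512341 -> storage facility image 1
  print(sfi_serial(base, 2))        # 7512342 -> storage facility image 2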

In the DS8300 Model 9B2 the split of the resources among the two possible system storage
LPARs has a 50/50 ratio as depicted in Figure 3-3.

Figure 3-3 DS8300 LPAR resource allocation

Each SFI in the DS8300 has access to:


򐂰 50 percent of the processors
򐂰 50 percent of the processor memory
򐂰 1 loop of the RIO-G interconnection
򐂰 Up to 16 host adapters (4 I/O drawers with up to 4 host adapters)
򐂰 Up to 320 disk drives (up to 96 TB with FC DDMs or 160TB with FATA DDMs)


3.2.3 Storage facility image hardware components


The management of the resource allocation between LPARs on a System p5 is done via the
Storage Hardware Management Console (S-HMC). Because the DS8300 Turbo Model 9B2
provides a fixed split between the two SFIs, there is no management or configuration
necessary via the S-HMC. The DS8300 comes pre-configured with all required LPAR
resources assigned to either SFI.

Figure 3-4 SFI resource allocation in the processor complexes of the DS8300

Figure 3-4 shows the split of all available resources between the two SFIs; each SFI has
50% of all available resources.

I/O resources
For one SFI, the following hardware resources are required:
򐂰 2 SCSI controllers with 2 disk drives each
򐂰 2 Ethernet ports (to communicate with the S-HMC)
򐂰 1 Thin Device Media Bay (for example, CD or DVD; can be shared between the LPARs)

Each SFI has two physical disk drives in each processor complex. Each disk drive contains
three logical volumes: the boot volume and two logical volumes for the memory save dump
function. These three logical volumes are mirrored across the two physical disk drives for
each LPAR; in Figure 3-4, for example, disks A and A' are mirrors. For the DS8300 Model 9B2,
there are four drives in total in each physical processor complex.

Processor and memory allocations


In the DS8300 Model 9B2, each processor complex has four processors and up to 128 GB of
memory. There is a fixed 50/50 split for processor and memory allocation; therefore, every
LPAR has two processors, and every SFI has four processors.


The memory limit depends on the total amount of memory installed in the whole system. The
following system memory configurations are currently available; each SFI receives half of
the total (see also the sketch after this list):
򐂰 32 GB (16 GB per processor complex, 16 GB per SFI)
򐂰 64 GB (32 GB per processor complex, 32 GB per SFI)
򐂰 128 GB (64 GB per processor complex, 64 GB per SFI)
򐂰 256 GB (128 GB per processor complex, 128 GB per SFI)
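The following Python sketch simply restates the fixed 50/50 allocation rules above; it is an
illustration only and is not derived from any DS8000 configuration interface.

def sfi_resources(system_memory_gb: int) -> dict:
    # Fixed 50/50 split on the DS8300 Model 9B2: each SFI owns one LPAR on
    # each of the two 4-way processor complexes and one dedicated RIO-G loop.
    supported = (32, 64, 128, 256)
    if system_memory_gb not in supported:
        raise ValueError(f"supported system memory sizes (GB): {supported}")
    return {
        "memory_gb": system_memory_gb // 2,  # half of the total system memory
        "processors": 4,                     # two processors on each complex
        "rio_g_loops": 1,                    # one dedicated RIO-G loop
    }

# sfi_resources(256) returns {'memory_gb': 128, 'processors': 4, 'rio_g_loops': 1}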

RIO-G interconnect separation


Figure 3-4 on page 32 depicts that the RIO-G interconnection is also split between the two
SFIs. The RIO-G interconnection is divided into 2 loops. Each RIO-G loop is dedicated to a
given SFI. All I/O enclosures on the RIO-G loop with the associated host adapters and drive
adapters are dedicated to the SFI that owns the RIO-G loop.

As a result of the strict separation of the two images, the following configuration options exist:
򐂰 Each SFI is assigned to one dedicated RIO-G loop; if an image is offline, its RIO-G loop is
not available.
򐂰 All I/O enclosures, host adapters, and device adapters on a given RIO-G loop are
dedicated to the image that owns the loop.
򐂰 Disk enclosures and storage devices behind a given device adapter pair are dedicated to
the image that owns the RIO-G loop.
򐂰 Capacity is configured to an image by placing disk enclosures on a specific DA pair
dedicated to that image.

3.2.4 DS8300 Model 9B2 configuration options


In this section we explain the configuration options available for the DS8300 Model 9B2.

The Turbo Model 9B2 (base frame) has, divided evenly between the SFIs:
– 32 to 128 DDMs, in increments of 16
– System memory of 32, 64, 128, 256 GB
– Four I/O bays. Each bay contains:
• Up to 4 host adapters (divided between SFIs at the bay level)
• Up to 2 device adapters (divided between SFIs at the bay level)
– S-HMC, keyboard/display, and 2 Ethernet switches (shared between SFIs)

The first expansion frame Model 9AE has, divided evenly between the SFIs:
– An additional four I/O bays. Each bay contains:
• Up to 4 host adapters (divided between SFIs at the bay level)
• Up to 2 device adapters (divided between SFIs at the bay level)
– An additional 256 DDMs

The second expansion frame Model 9AE has, divided evenly between the SFIs:
– An additional 256 DDMs

A fully configured DS8300 has one base frame and two expansion frames. The first
expansion frame has additional I/O drawers and disk drive modules (DDMs), while the
second expansion frame contains only additional DDMs.

Figure 3-5 on page 34 provides an example of how a fully populated DS8300 might be
configured. The disk enclosures are assigned to SFI 1 (yellow) or SFI 2 (green). When
ordering additional disk capacity, it can be allocated to either SFI, but the cabling is

pre-determined, so in this example, disks added to the empty pair of disk enclosures would be
allocated to SFI 2.

Figure 3-5 DS8300 example configuration

3.3 LPAR security through POWER Hypervisor (PHYP)


An important security feature implemented in the System p5 server, called the POWER
Hypervisor (PHYP), enforces partition integrity by providing a security layer between
logical partitions. The POWER Hypervisor is a component of system firmware that is always
installed and activated, regardless of the system configuration. It operates as a hidden
partition, with no processor resources assigned to it.

In a partitioned environment, the POWER Hypervisor is loaded into the first Physical Memory
Block (PMB) at physical address zero and reserves that PMB. From then on, it is not possible
for an LPAR to access physical memory directly. Every memory access is controlled by the
POWER Hypervisor, as shown in Figure 3-6 on page 35.

Each partition has its own exclusive page table, also controlled by the POWER Hypervisor,
which the processors use to transparently convert a program's virtual address into the
physical address where that page has been mapped into physical memory.

In a partitioned environment, the operating system uses hypervisor services to manage the
translation control entry (TCE) tables. The operating system communicates the desired
I/O-bus-address-to-logical-address mapping, and the hypervisor translates that into the
I/O-bus-address-to-physical-address mapping within the specific TCE table. The hypervisor
needs a dedicated memory region for the TCE tables to translate the I/O address to the
partition memory address; the hypervisor can then perform direct memory access (DMA)
transfers to the PCI adapters.


Figure 3-6 LPAR protection in IBM POWER Hypervisor

3.4 LPAR and Copy Services


In this section we discuss Copy Services considerations when working with DS8300 SFIs.

Figure 3-7 DS8300 storage facility images and Copy Services


Figure 3-7 summarizes the basic considerations for Copy Services when used with a
partitioned DS8300.

FlashCopy
The DS8000 series fully supports the FlashCopy V2 capabilities, including cross-LSS support.
However, a FlashCopy source volume located in one SFI cannot have a target volume in the
second SFI, as illustrated in Figure 3-7.

Remote mirror and copy


A remote mirror or copy relationship is supported across SFIs. The primary volume can be
located in one SFI and the secondary volume in another SFI within the same DS8300; see
Figure 3-7.

For more information about Copy Services refer to Chapter 7, “Copy Services” on page 107.
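The placement rules above can be summarized in a small Python sketch; this is an
illustrative check only, not a DS8000 command or API.

def copy_pair_allowed(service: str, source_sfi: int, target_sfi: int) -> bool:
    # FlashCopy source and target volumes must reside in the same SFI;
    # a remote mirror or copy (RMC) pair can span the two SFIs of one DS8300
    # (or, of course, two separate storage units).
    if service == "FlashCopy":
        return source_sfi == target_sfi
    if service == "RMC":
        return True
    raise ValueError(f"unknown Copy Services function: {service}")

# copy_pair_allowed("FlashCopy", 1, 2) returns False
# copy_pair_allowed("RMC", 1, 2) returns True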

3.5 LPAR benefits


The exploitation of LPAR technology in the DS8300 offers many potential benefits, including
a reduction in floor space, power, and cooling requirements. It also simplifies the IT
infrastructure by reducing the system management effort, storage infrastructure complexity,
and the number of physical assets to manage.

The hardware-based LPAR implementation ensures data integrity. The fact that you can
create dual, independent, completely segregated virtual storage systems helps you to
optimize the utilization of your investment, and helps to segregate workloads and protect
them from one another.

The following are examples of possible scenarios where SFIs would be useful:
򐂰 Separating workloads
An environment can be split by operating system, application, organizational boundaries
or production readiness; for example, z/OS hosts on one SFI and open hosts on the other
or production on one and test or development on the other.
򐂰 Dedicating resources
As a service provider you could provide dedicated resources to each customer, thereby
satisfying security and service level agreements, while having the environment all
contained on one physical DS8300.
򐂰 Production and data mining
For database purposes, imagine a scenario where the production database is running in
the first SFI and a copy of the production database is running in the second SFI. Analysis
and data mining can be performed on it without interfering with the production database.
򐂰 Business continuance (secondary) within the same physical array
You can use the two partitions to test Copy Services solutions or you can use them for
multiple copy scenarios in a production environment.
򐂰 Information Lifecycle Management (ILM) partition with fewer resources, slower DDMs
One SFI can use, for example, only fast disk drive modules to ensure high performance
for the production environment, while the other SFI uses fewer and slower DDMs to provide
Information Lifecycle Management at a lower cost.

Figure 3-8 on page 37 depicts one example of how SFIs can be used in the DS8300.


Figure 3-8 Example of storage facility images in the DS8300

This example shows a DS8300 with a total physical capacity of 30 TB. In this case, a
minimum Operating Environment License (OEL) is required to cover the 30 TB capacity. The
DS8300 is split into two images. SFI 1 is used for an open system environment and utilizes 20
TB of fixed block data (FB). SFI 2 is used for a System z environment and uses 10 TB of
count key data (CKD).

Utilizing FlashCopy on the entire capacity would require a 30 TB FlashCopy license. However,
as in this example, it is possible to have a FlashCopy license for SFI 1 covering 20 TB only.
Because no copy function is needed for the System z environment in this example, there is no
need to purchase a Copy Services license for SFI 2. For more information about the licensed
functions see Chapter 11, “Features and license keys” on page 197.
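The licensing arithmetic of this example can be sketched as follows; this is a simplified
illustration of the capacity sums discussed above, not a pricing or ordering tool.

def license_capacity(sfi_capacity_tb: dict, flashcopy_sfis: list) -> dict:
    # The Operating Environment License (OEL) must cover the total physical
    # capacity; a FlashCopy license only needs to cover the SFIs that use it.
    return {
        "OEL_TB": sum(sfi_capacity_tb.values()),
        "FlashCopy_TB": sum(sfi_capacity_tb[s] for s in flashcopy_sfis),
    }

# license_capacity({"SFI 1": 20, "SFI 2": 10}, ["SFI 1"])
# returns {'OEL_TB': 30, 'FlashCopy_TB': 20}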

Addressing capabilities with storage facility images


Figure 3-9 on page 38 highlights the significant enhancements in addressing capabilities
that you get with the DS8300 in LPAR mode compared to the previous ESS Model 800.


                             ESS 800    DS8300     DS8300 with LPAR
Max Logical Subsystems       32         255        510
Max Logical Devices          8K         63.75K     127.5K
Max Logical CKD Devices      4K         63.75K     127.5K
Max Logical FB Devices       4K         63.75K     127.5K
Max N-Port Logins/Port       128        509        509
Max N-Port Logins            512        8K         16K
Max Logical Paths/FC Port    256        1280       1280
Max Logical Paths/CU Image   256        512        512
Max Path Groups/CU Image     128        256        256

Figure 3-9 DS8300 with and without LPAR vs. ESS 800 - addressing capabilities

3.6 Summary
The DS8000 series delivered the first use of the POWER5 processor IBM Virtualization Engine
logical partitioning capability with the Model 9A2, and this capability is now enhanced with
the POWER5+ processor of the Turbo Model 9B2. This storage system LPAR technology is
designed to enable the creation of two completely separate storage systems.
The SFIs can be used for production, test, or other unique storage environments, and they
operate within a single physical enclosure. Each SFI can be established to support the
specific performance requirements of a different, heterogeneous workload. The DS8000
series robust partitioning implementation helps to isolate and protect the SFIs. These storage
system LPAR capabilities are designed to help simplify systems by maximizing management
efficiency, cost effectiveness, and flexibility.


Chapter 4. Hardware Components


This chapter describes the hardware components of the DS8000 series. It is intended for
readers who want a clear picture of what the individual components look like and of the
architecture that holds them together.

The following topics are covered in this chapter:


򐂰 Frames
򐂰 Architecture
򐂰 Processor complex
򐂰 Disk subsystem
򐂰 Host adapters
򐂰 Power and cooling
򐂰 Management console network
򐂰 Ethernet adapter pair (for TPC RM support at R2)


4.1 Frames
The DS8000 is designed for modular expansion. From a high-level view there appear to be
three types of frames available for the DS8000. However, on closer inspection, the frames
themselves are almost identical. The only variations are what combinations of processors, I/O
enclosures, batteries, and disks the frames contain.

Figure 4-1 shows some of the frame variations that are possible with the
DS8000 series. The left-hand frame is a base frame that contains the processors (System p5
servers). The center frame is an expansion frame that contains additional I/O enclosures but
no additional processors. The right-hand frame is an expansion frame that contains just disk
(and no processors, I/O enclosures, or batteries). Each frame contains a frame power area
with power supplies and other power-related hardware.

Figure 4-1 DS8000 frame possibilities

4.1.1 Base frame


The left-hand side of the base frame (viewed from the front of the machine) is the frame
power area. Only the base frame contains rack power control cards (RPC) to control power
sequencing for the storage unit. It also contains a fan sense card to monitor the fans in that
frame. The base frame contains two primary power supplies (PPSs) to convert input AC into
DC power. The power area also contains two or three battery backup units (BBUs) depending
on the model and configuration.

The base frame can contain up to eight disk enclosures, each of which can hold up to 16 disk
drives. In a maximum configuration, the base frame can hold 128 disk drives. Above the disk
enclosures are cooling fans located in a cooling plenum.


Between the disk enclosures and the processor complexes are two Ethernet switches, a
Storage Hardware Management Console (an S-HMC) and a keyboard/display module.

The base frame contains two processor complexes. These System p5 servers contain the
processor and memory that drive all functions within the DS8000. In the ESS we referred to
them as clusters, but this term is no longer relevant. We now have the ability to logically
partition each processor complex into two LPARs, each of which is the equivalent of an ESS
cluster.

Finally, the base frame contains four I/O enclosures. These I/O enclosures provide
connectivity between the adapters and the processors. The adapters contained in the I/O
enclosures can be either device or host adapters (DAs or HAs). The communication path
used for adapter to processor complex communication is the RIO-G loop. This loop not only
joins the I/O enclosures to the processor complexes, it also allows the processor complexes
to communicate with each other.

4.1.2 Expansion frame


The left-hand side of each expansion frame (viewed from the front of the machine) is the
frame power area. The expansion frames do not contain rack power control cards; these
cards are only present in the base frame. They do contain a fan sense card to monitor the
fans in that frame. Each expansion frame contains two primary power supplies (PPS) to
convert the AC input into DC power. Finally, the power area may contain three battery backup
units (BBUs) depending on the model and configuration.

Each expansion frame can hold up to 16 disk enclosures which contain the disk drives. They
are described as 16-packs because each enclosure can hold 16 disks. In a maximum
configuration, an expansion frame can hold 256 disk drives. Above the disk enclosures are
cooling fans located in a cooling plenum.

An expansion frame can contain I/O enclosures and adapters if it is the first expansion frame
that is attached to a 9x2 model. The second expansion frame in a 9x2 model configuration
cannot have I/O enclosures and adapters, nor can any expansion frame that is attached to a
model 9x1. If the expansion frame contains I/O enclosures, the enclosures provide
connectivity between the adapters and the processors. The adapters contained in the I/O
enclosures can be either device or host adapters. The available expansion frame models are:
򐂰 Model 92E expansion unit, which attaches to the DS8000 series Turbo models 931 and 932,
and to the older DS8000 series 921 and 922 models
򐂰 Model 9AE expansion unit, which attaches to the DS8000 series Turbo Model 9B2 and to the
older DS8000 series 9A2 model.

4.1.3 Rack operator panel


Each DS8000 frame features an operator panel. This panel has three indicators and an
emergency power off switch (an EPO switch). Figure 4-2 on page 42 depicts the operator
panel. Each panel has two line cord indicators (one for each line cord). For normal operation
both of these indicators should be on, to indicate that each line cord is supplying correct
power to the frame. There is also a fault indicator. If this indicator is illuminated you should
use the DS Storage Manager GUI or the Storage Hardware Management Console (S-HMC)
to determine why this indicator is on.

There is also an EPO switch on each operator panel. This switch is only for emergencies.
Tripping the EPO switch will bypass all power sequencing control and result in immediate
removal of system power. A small cover must be lifted to operate it. Do not trip this switch
unless the DS8000 is creating a safety hazard or is placing human life at risk.


Figure 4-2 Rack operator panel

You will note that there is not a power on/off switch on the operator panel. This is because
power sequencing is managed via the S-HMC. This is to ensure that all data in non-volatile
storage (known as modified data) is de-staged properly to disk prior to power down. It is thus
not possible to shut down or power off the DS8000 from the operator panel (except in an
emergency, with the EPO switch mentioned previously).

4.2 Architecture
Now that we have described the frames themselves, we use the rest of this chapter to explore
the technical details of each of the components. The architecture that connects these
components is pictured in Figure 4-3 on page 43.

In effect, the DS8000 consists of two processor complexes. Each processor complex has
access to multiple host adapters to connect to Fibre Channel, FICON and ESCON hosts.
Each DS8000 can have up to 32 host adapters. To access the disk subsystem, each complex
uses several four-port Fibre Channel arbitrated loop (FC-AL) device adapters. A DS8000 can
have up to sixteen of these adapters arranged into eight pairs. Each adapter connects the
complex to two separate switched Fibre Channel networks. Each switched network attaches
disk enclosures that each contain up to 16 disks. Each enclosure contains two 20-port Fibre
Channel switches. Of these 20 ports, 16 are used to attach to the 16 disks in the enclosure
and the remaining four are used to either interconnect with other enclosures or to the device
adapters. Each disk is attached to both switches. Whenever the device adapter connects to a
disk, it uses a switched connection to transfer data. This means that all data travels via the
shortest possible path.

The attached hosts interact with software which is running on the complexes to access data
on logical volumes. Each complex will host at least one instance of this software (which is
called a server), which runs in a logical partition (an LPAR). The servers manage all read and
write requests to the logical volumes on the disk arrays. During write requests, the servers

use fast-write, in which the data is written to volatile memory on one complex and persistent
memory on the other complex. The server then reports the write as complete before it has
been written to disk. This provides much faster write performance. Persistent memory is also
called NVS or non-volatile storage.
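The fast-write sequence can be sketched as follows; the function and parameter names are
invented for illustration and do not reflect the actual firmware design.

def fast_write(data, owning_volatile_cache: list, partner_nvs: list) -> str:
    # Stage the write in the volatile cache of the owning server and mirror it
    # into the persistent memory (NVS) of the partner server.
    owning_volatile_cache.append(data)
    partner_nvs.append(data)
    # Destage to the disk arrays happens later, asynchronously; the host sees
    # completion before any disk I/O has taken place.
    return "write complete"

# cache, nvs = [], []
# fast_write("track 0x10", cache, nvs) returns "write complete"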

Figure 4-3 DS8000 series architecture

When a host performs a read operation, the servers fetch the data from the disk arrays via the
high performance switched disk architecture. The data is then cached in volatile memory in
case it is required again. The servers attempt to anticipate future reads by an algorithm
known as SARC (Sequential prefetching in Adaptive Replacement Cache). Data is held in
cache as long as possible using this smart algorithm. If a cache hit occurs where requested
data is already in cache, then the host does not have to wait for it to be fetched from the disks.

Both the device and host adapters operate on a high bandwidth fault-tolerant interconnect
known as the RIO-G. The RIO-G design allows the sharing of host adapters between servers
and offers exceptional performance and reliability.


Figure 4-3 uses colors as indicators of how the DS8000 hardware is shared between the
servers (the cross hatched color is green and the lighter color is yellow). On the left side, the
green server is running on the left-hand processor complex. The green server uses the N-way
SMP of the complex to perform its operations. It records its write data and caches its read
data in the volatile memory of the left-hand complex. For fast-write data it has a persistent
memory area on the right-hand processor complex. To access the disk arrays under its
management (the disks also being pictured in green), it has its own device adapter (again in
green). The yellow server on the right operates in an identical fashion. The host adapters (in
dark red) are deliberately not colored green or yellow because they are shared between both
servers.

4.2.1 Server-based SMP design


The DS8000 benefits from a fully assembled, leading edge processor and memory system.
Using SMPs as the primary processing engine sets the DS8000 apart from other disk storage
systems on the market. Additionally, the POWER5 processors used in the DS8000 series
support the execution of two independent threads concurrently. This capability is referred to
as simultaneous multi-threading (SMT). The two threads running on the single processor
share a common L1 cache. The SMP/SMT design minimizes the likelihood of idle or
overworked processors, while a distributed processor design is more susceptible to an
unbalanced relationship of tasks to processors.

The design decision to use SMP memory as I/O cache is a key element of IBM’s storage
architecture. Although a separate I/O cache could provide fast access, it cannot match the
access speed of the SMP main memory. The decision to use the SMP main memory as the
cache proved itself in three generations of IBM TotalStorage Enterprise Storage Server
(ESS). The performance roughly doubled with each generation. This performance
improvement can be traced to the capabilities of the completely integrated SMP, the
processor speeds, the L1/L2 cache sizes and speeds, the memory bandwidth and response
time, and the PCI bus performance.

With the DS8000, the cache access has been accelerated further by making the Non-Volatile
Storage a part of the SMP memory.

All memory installed on any processor complex is accessible to all processors in that
complex. The addresses assigned to the memory are common across all processors in the
same complex. On the other hand, using the main memory of the SMP as the cache leads to
a partitioned cache. Each processor has access to the processor complex’s main memory but
not to that of the other complex. You should keep this in mind with respect to load balancing
between processor complexes.

4.2.2 Cache management


Most if not all high-end disk systems have internal cache integrated into the system design,
and some amount of system cache is required for operation. Over time, cache sizes have
dramatically increased, but the ratio of cache size to system disk capacity has remained
nearly the same.

The DS8000 (and the DS6000) use the Sequential Prefetching in Adaptive Replacement
Cache (SARC) algorithm, developed by IBM Storage Development in partnership with IBM
Research. It is a self-tuning, self-optimizing solution for a wide range of workloads with a
varying mix of sequential and random I/O streams. SARC is inspired by the Adaptive
Replacement Cache (ARC) algorithm and inherits many features from it. For a detailed
description of ARC see N. Megiddo and D. S. Modha, “Outperforming LRU with an adaptive
replacement cache algorithm,” IEEE Computer, vol. 37, no. 4, pp. 58–65, 2004.


SARC basically attempts to determine four things:


򐂰 When data is copied into the cache.
򐂰 Which data is copied into the cache.
򐂰 Which data is evicted when the cache becomes full.
򐂰 How the algorithm dynamically adapts to different workloads.

The DS8000 cache is organized in 4K byte pages called cache pages or slots. This unit of
allocation (which is smaller than the values used in other storage systems) ensures that small
I/Os do not waste cache memory.

The decision to copy some amount of data into the DS8000 cache can be triggered from two
policies: demand paging and prefetching. Demand paging means that eight disk blocks (a 4K
cache page) are brought in only on a cache miss. Demand paging is always active for all
volumes and ensures that I/O patterns with some locality find at least some recently used
data in the cache.

Prefetching means that data is copied into the cache speculatively even before it is
requested. To prefetch, a prediction of likely future data accesses is needed. Because
effective, sophisticated prediction schemes need extensive history of page accesses (which is
not feasible in real-life systems), SARC uses prefetching for sequential workloads. Sequential
access patterns naturally arise in video-on-demand, database scans, copy, backup and
recovery. The goal of sequential prefetching is to detect sequential access and effectively
pre-load the cache with data so as to minimize cache misses.

For prefetching, the cache management uses tracks. A track is a set of 128 disk blocks
(16 cache pages). To detect a sequential access pattern, counters are maintained with every
track to record if a track has been accessed together with its predecessor. Sequential
prefetching becomes active only when these counters suggest a sequential access pattern. In
this manner, the DS8000 monitors application read-I/O patterns and dynamically determines
whether it is optimal to stage into cache:
򐂰 Just the page requested
򐂰 That page requested plus remaining data on the disk track
򐂰 An entire disk track (or a set of disk tracks) which has (have) not yet been requested

The decision of when and what to prefetch is essentially made on a per-application basis
(rather than a system-wide basis) to be sensitive to the different data reference patterns of
different applications that can be running concurrently.
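A very simplified Python sketch of this kind of sequential detection follows; the counter
layout and the threshold value are assumptions made for illustration, not the actual DS8000
implementation.

class SequentialDetector:
    # Keeps a per-track counter recording whether a track was accessed together
    # with its predecessor; prefetch is suggested only once the counters
    # indicate a sequential run (threshold chosen arbitrarily here).
    def __init__(self, threshold: int = 3):
        self.run_length = {}   # track number -> length of the sequential run
        self.threshold = threshold

    def access(self, track: int) -> bool:
        self.run_length[track] = self.run_length.get(track - 1, 0) + 1
        return self.run_length[track] >= self.threshold  # True = start prefetching

# d = SequentialDetector()
# [d.access(t) for t in (10, 11, 12)] returns [False, False, True]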

To decide which pages are evicted when the cache is full, sequential and random
(non-sequential) data is separated into different lists; see Figure 4-4.

Figure 4-4 Cache lists of the SARC algorithm for random and sequential data


A page that has been brought into the cache by simple demand paging is added to the
MRU (Most Recently Used) head of the RANDOM list. Without further I/O access, it goes
down to the LRU (Least Recently Used) bottom. A page that has been brought into the
cache by a sequential access or by sequential prefetching is added to the MRU head of the
SEQ list and then moves down that list. Additional rules control the migration of pages
between the lists so that the same pages are not kept in memory twice.

To follow workload changes, the algorithm trades cache space between the RANDOM and
SEQ lists dynamically and adaptively. This makes SARC scan-resistant, so that one-time
sequential requests do not pollute the whole cache. SARC maintains a desired size
parameter for the sequential list. The desired size is continually adapted in response to the
workload. Specifically, if the bottom portion of the SEQ list is found to be more valuable than
the bottom portion of the RANDOM list, then the desired size is increased; otherwise, the
desired size is decreased. The constant adaptation strives to make optimal use of limited
cache space and delivers greater throughput and faster response times for a given cache
size.

Additionally, the algorithm modifies dynamically not only the sizes of the two lists, but also the
rate at which the sizes are adapted. In a steady state, pages are evicted from the cache at the
rate of cache misses. A larger (respectively, a smaller) rate of misses effects a faster
(respectively, a slower) rate of adaptation.

Other implementation details take into account the relation of read and write (NVS) cache,
efficient de-staging, and the cooperation with Copy Services. In this manner, the DS8000
cache management goes far beyond the usual variants of the LRU/LFU (Least Recently Used
/ Least Frequently Used) approaches.
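The following Python sketch captures the flavor of the two-list scheme described above; it
is a deliberately simplified illustration (a fixed eviction rule, no adaptation of the
adaptation rate) and not the actual SARC algorithm.

from collections import OrderedDict

class TwoListCache:
    # Random and sequential pages live on separate LRU lists; eviction takes a
    # page from whichever list is over its desired share of the cache. The real
    # SARC algorithm also adapts the desired SEQ size (and the adaptation rate)
    # based on hits near the LRU bottoms of the two lists.
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.random = OrderedDict()       # insertion order: first item = LRU
        self.seq = OrderedDict()
        self.seq_desired = capacity // 2  # starting target for the SEQ list

    def access(self, page, sequential: bool = False) -> bool:
        for lst in (self.random, self.seq):
            if page in lst:
                lst.move_to_end(page)     # cache hit: move the page to MRU
                return True
        # Cache miss: stage the page into the appropriate list.
        (self.seq if sequential else self.random)[page] = None
        if len(self.random) + len(self.seq) > self.capacity:
            victim = self.seq if len(self.seq) > self.seq_desired else self.random
            victim.popitem(last=False)    # evict that list's LRU page
        return False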

4.3 Processor complex


The DS8000 base frame contains two processor complexes. The 9x1 models have 2-way
processors while the 9x2 models have 4-way processors. (2-way means that each processor
complex has 2 CPUs, while 4-way means that each processor complex has 4 CPUs.)

New POWER5+ processor comments


򐂰 The POWER5+ processor is featured in the Turbo Models 931, 932, and 9B2
򐂰 The POWER5+ processor is an optional feature for the older Models 921, 922, and 9A2
򐂰 Compared to the POWER5 processor, the POWER5+ processor of the Turbo models may
enable up to a 15% performance improvement in I/O operations per second in transaction
processing workload environments

The DS8000 series features IBM POWER5 server technology. Depending on workload, the
maximum host I/O operations per second of the DS8100 9x1 models is up to three times the
maximum operations per second of the ESS Model 800. The maximum host I/O operations
per second of the DS8300 9x2 models is up to six times the maximum of the ESS Model 800.

For details on the server hardware used in the DS8000, refer to the redpaper IBM p5 570
Technical Overview and Introduction, REDP-9117, available at:
http://www.redbooks.ibm.com

The symmetric multiprocessor (SMP) system features 2-way or 4-way, copper-based, SOI-based
POWER5+ microprocessors running at 2.2 GHz with 36 MB of off-chip Level 3 cache. The system
is based on a concept of system building blocks: processor interconnect and system flex
cables enable as many as four 4-way System p processor complexes to be connected to achieve a
true 16-way SMP combined system. How these features are implemented in the DS8000
might vary.

Figure 4-5 shows a front view and a rear view of the DS8000 series processor complex.

Figure 4-5 Processor complex

One processor complex includes:


򐂰 Five hot-plug PCI-X slots with Enhanced Error Handling (EEH)
򐂰 An enhanced blind-swap mechanism that allows hot-swap replacement or installation of
PCI-X adapters without sliding the enclosure into the service position
򐂰 Two Ultra320 SCSI controllers
򐂰 One 10/100/1000 Mbps integrated dual-port Ethernet controller
򐂰 Two serial ports
򐂰 Two USB 2.0 ports
򐂰 Two HMC Ethernet ports
򐂰 Four remote RIO-G ports
򐂰 Two System Power Control Network (SPCN) ports

The System p5 servers in the DS8000 include two 3-pack front-accessible, hot-swap-capable
disk bays. The six disk bays of one System p5 processor complex can accommodate up to
880.8 GB of disk storage using the 146.8 GB Ultra320 SCSI disk drives. Two additional media
bays are used to accept optional slim-line media devices, such as DVD-ROM or DVD-RAM

drives. The System p5 processor complex also has I/O expansion capability using the RIO-G
interconnect. How these features are implemented in the DS8000 might vary.

Processor memory
The DS8100 models 9x1 offer up to 128 GB of processor memory and the DS8300 models
9x2 offer up to 256 GB of processor memory. Half of this will be located in each processor
complex. In addition, the Non-Volatile Storage (NVS) scales to the processor memory size
selected, which can also help optimize performance.

Service processor and SPCN


The service processor (SP) is an embedded controller that is based on a PowerPC processor.
The SPCN is the system power control network that is used to control the power of the
attached I/O subsystem. The SPCN control software and the service processor software run on
the same PowerPC processor.

The SP performs predictive failure analysis based on any recoverable processor errors. The
SP can monitor the operation of the firmware during the boot process, and it can monitor the
operating system for loss of control. This enables the service processor to take appropriate
action.

The SPCN monitors environmentals such as power, fans, and temperature. Environmental
critical and non-critical conditions can generate Early Power-Off Warning (EPOW) events.
Critical events trigger appropriate signals from the hardware to the affected components to
prevent any data loss without operating system or firmware involvement. Non-critical
environmental events are also logged and reported.

4.3.1 RIO-G
The RIO-G ports are used for I/O expansion to external I/O drawers. RIO stands for remote
I/O. RIO-G evolved from earlier versions of the RIO interconnect.

Each RIO-G port can operate at 1 GHz in bidirectional mode and is capable of passing data in
each direction on each cycle of the port. It is designed as a high performance self-healing
interconnect. Each System p5 server in the DS8000 provides two external RIO-G ports, and
an adapter card adds two more. Two ports on each processor complex form a loop.

Figure 4-6 on page 49 illustrates how the RIO-G cabling is laid out in a DS8000 that has eight
I/O drawers. This would only occur if an expansion frame were installed. The DS8000 RIO-G
cabling will vary based on the model. A two-way DS8000 model will have one RIO-G loop. A
four-way DS8000 model will have two RIO-G loops. Each loop supports four I/O enclosures.


Figure 4-6 DS8000 RIO-G port layout

4.3.2 I/O enclosures


All base models contain I/O enclosures and adapters; see Figure 4-7. The I/O enclosures
hold the adapters and provide connectivity between the adapters and the processors. Device
adapters and host adapters are installed in the I/O enclosure. Each I/O enclosure has 6 slots.
Each slot supports PCI-X adapters running at 64 bit, 133 Mhz. Slots 3 and 6 are used for the
device adapters. The remaining slots are available to install up to four host adapters per I/O
enclosure.

Figure 4-7 I/O enclosures


Each I/O enclosure has the following attributes:


򐂰 4U rack-mountable enclosure
򐂰 Six PCI-X slots: 3.3 V, keyed, 133 MHz blind-swap hot-plug
򐂰 Default redundant hot-plug power and cooling devices
򐂰 Two RIO-G and two SPCN ports

4.4 Disk subsystem


The disk subsystem consists of three components:
򐂰 First, located in the I/O enclosures are the device adapters. These are RAID controllers
that are used by the storage images to access the RAID arrays.
򐂰 Second, the device adapters connect to switched controller cards in the disk enclosures.
This creates a switched Fibre Channel disk network.
򐂰 Finally, we have the disks themselves. The disks are commonly referred to as disk drive
modules (DDMs).

4.4.1 Device adapters


Each DS8000 device adapter (DA) card offers four 2 Gbps FC-AL ports. These ports are used
to connect the processor complexes to the disk enclosures. The adapter is responsible for
managing, monitoring, and rebuilding the RAID arrays. The adapter provides remarkable
performance thanks to a new high function/high performance ASIC. To ensure maximum data
integrity it supports metadata creation and checking. The device adapter design is shown in
Figure 4-8.

Figure 4-8 DS8000 device adapter

The DAs are installed in pairs because each storage partition requires its own adapter to
connect to each disk enclosure for redundancy. This is why we refer to them as DA pairs.

4.4.2 Disk enclosures


Each DS8000 frame contains either 8 or 16 disk enclosures depending on whether it is a
base or expansion frame. Half of the disk enclosures are accessed from the front of the
frame, and half from the rear. Each DS8000 disk enclosure contains a total of 16 DDMs or

dummy carriers. A dummy carrier looks very similar to a DDM in appearance but contains no
electronics. The enclosure is pictured in Figure 4-9.

Note: If a DDM is not present, its slot must be occupied by a dummy carrier. This is
because without a drive or a dummy, cooling air does not circulate correctly.

Each DDM is an industry standard FC-AL disk. Each disk plugs into the disk enclosure
backplane. The backplane is the electronic and physical backbone of the disk enclosure.

Figure 4-9 DS8000 disk enclosure

Non-switched FC-AL drawbacks


In a standard FC-AL disk enclosure, all of the disks are arranged in a loop, as depicted in
Figure 4-10. This loop-based architecture means that data flows through all disks before
arriving at the device adapter at either end of the loop (shown here as the Storage Server).

Figure 4-10 Industry standard FC-AL disk enclosure

The main problems with standard FC-AL access to DDMs are:


򐂰 The full loop is required to participate in data transfer. Full discovery of the loop via LIP
(loop initialization protocol) is required before any data transfer. Loop stability can be
affected by DDM failures.
򐂰 In the event of a disk failure, it can be difficult to identify the cause of a loop breakage,
leading to complex problem determination.
򐂰 There is a performance dropoff when the number of devices in the loop increases.
򐂰 To expand the loop it is normally necessary to partially open it. If mistakes are made, a
complete loop outage can result.

These problems are solved with the switched FC-AL implementation on the DS8000.

Switched FC-AL advantages


The DS8000 uses switched FC-AL technology to link the device adapter (DA) pairs and the
DDMs. Switched FC-AL uses the standard FC-AL protocol, but the physical implementation is
different. The key features of switched FC-AL technology are:
򐂰 Standard FC-AL communication protocol from DA to DDMs.
򐂰 Direct point-to-point links are established between DA and DDM.
򐂰 Isolation capabilities in case of DDM failures, providing easy problem determination.
򐂰 Predictive failure statistics.
򐂰 Simplified expansion: no cable rerouting is required when adding another disk enclosure.

The DS8000 architecture employs dual redundant switched FC-AL access to each of the disk
enclosures. The key benefits of doing this are:
򐂰 Two independent networks to access the disk enclosures.
򐂰 Four access paths to each DDM.
򐂰 Each device adapter port operates independently.
򐂰 Double the bandwidth over traditional FC-AL loop implementations.

In Figure 4-11 each DDM is depicted as being attached to two separate Fibre Channel
switches. This means that with two device adapters, we have four effective data paths to each
disk, each path operating at 2 Gbps. Note that this diagram shows one switched disk network
attached to each DA. Each DA can actually support two switched networks.

Figure 4-11 DS8000 disk enclosure

When a connection is made between the device adapter and a disk, the connection is a
switched connection that uses arbitrated loop protocol. This means that a mini-loop is created
between the device adapter and the disk. Figure 4-12 on page 53 depicts four simultaneous
and independent connections, one from each device adapter port.


Figure 4-12 Disk enclosure switched connections

DS8000 switched FC-AL implementation


For a more detailed look at how the switched disk architecture expands in the DS8000, refer
to Figure 4-13, which depicts how each DS8000 device adapter connects to two disk networks
called loops. Expansion is achieved by adding enclosures to the expansion ports of each
switch. Each loop can potentially have up to six enclosures, but this will vary depending on
machine model and DA pair number.

Figure 4-13 DS8000 switched disk expansion


Expansion
Storage enclosures are added in pairs and disks are added in groups of 16. On the ESS
Model 800, the term 8-pack was used to describe an enclosure with eight disks in it. For the
DS8000, we use the term 16-pack, though this term really describes the 16 DDMs found in
one disk enclosure. It takes two orders of 16 DDMs to fully populate a disk enclosure pair
(front and rear).

To provide an example, if a machine had six disk enclosures total, it would have three at the
front and three at the rear. If all the enclosures were fully populated with disks, and an
additional order of 16 DDMs was purchased, then two new disk enclosures would be added,
one at the front and one at the rear. The switched networks do not need to be broken to add
these enclosures. They are simply added to the end of the loop, 8 DDMs would go in the front
enclosure and the remaining 8 ddms would go in the rear enclosure. If additional 16 DDMs
were ordered later, they would be used to fill up that pair of disk enclosures.

Arrays and spares


Array sites, containing eight DDMs, are created as DDMs are installed. During configuration
the user will have the choice of creating a RAID-5 or RAID-10 array by choosing one array
site. The first four array sites created on a DA pair each contribute one DDM to be a spare. So
at least four spares are created per DA pair, depending on the disk intermix.

The intention is to have only four spares per DA pair, but this number can increase depending
on the DDM intermix: four spares of the largest-capacity DDMs and at least two spares of the
fastest-RPM DDMs are needed. If all DDMs are the same size and RPM, four spares are sufficient.
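The sparing rule can be expressed in a short Python sketch; this is a simplified restatement
of the rule above and does not reproduce the exact spare-selection logic of the DS8000
microcode.

def spares_sufficient(spares: list, installed: list) -> bool:
    # Per DA pair: at least four spares of the largest installed capacity and
    # at least two spares of the fastest installed RPM. Mixing DDM types can
    # therefore push the spare count above the usual four.
    largest_gb = max(d["gb"] for d in installed)
    fastest_rpm = max(d["rpm"] for d in installed)
    largest_spares = sum(1 for d in spares if d["gb"] == largest_gb)
    fastest_spares = sum(1 for d in spares if d["rpm"] == fastest_rpm)
    return largest_spares >= 4 and fastest_spares >= 2

# With a single DDM type (for example 146 GB / 15k rpm), four spares satisfy
# both conditions; an intermix of 300 GB / 10k rpm and 73 GB / 15k rpm drives
# needs four 300 GB spares plus two 15k rpm spares.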

Arrays across loops


Each array site consists of eight DDMs. Four DDMs are taken from the front enclosure in an
enclosure pair, and four are taken from the rear enclosure in the pair. This means that when a
RAID array is created on the array site, half of the array is on each enclosure. Because the
front enclosures are on one switched loop, and the rear enclosures are on a second switched
loop, this splits the array across two loops. This is called array across loops (AAL).

Figure 4-14 DS8000 switched loop layout


To better understand AAL refer to Figure 4-14 and Figure 4-15. To make the diagrams clearer,
only 16 DDMs are shown, eight in each disk enclosure. When fully populated, there would be
16 DDMs in each enclosure.

Figure 4-14 is used to depict the device adapter pair layout. One DA pair creates two switched
loops. The front enclosures populate one loop while the rear enclosures populate the other
loop. Each enclosure places two switches onto each loop. Each enclosure can hold up to 16
DDMs. DDMs are purchased in groups of 16. Half of the new DDMs go into the front
enclosure and half go into the rear enclosure.

Having established the physical layout, the diagram is now changed to reflect the layout of the
array sites, as shown in Figure 4-15. Array site 0 in green (the darker disks) uses the four
left-hand DDMs in each enclosure. Array site 1 in yellow (the lighter disks) uses the four
right-hand DDMs in each enclosure. When an array is created on each array site, half of the
array is placed on each loop. A fully populated enclosure would have four array sites.

Figure 4-15 Array across loop

AAL benefits
AAL is used to increase performance. When the device adapter writes a stripe of data to a
RAID-5 array, it sends half of the write to each switched loop. By splitting the workload in this
manner, each loop is worked evenly, which improves performance. If RAID-10 is used, two
RAID-0 arrays are created. Each loop hosts one RAID-0 array. When servicing read I/O, half
of the reads can be sent to each loop, again improving performance by balancing workload
across loops.

4.4.3 Disk drives — DDMs


Each DDM (disk drive module) is hot pluggable and has two indicators. The green indicator
shows disk activity while the amber indicator is used with light path diagnostics to allow for
easy identification and replacement of a failed DDM.


The DS8000 allows the choice of:


򐂰 four different Fibre Channel DDM types:
– 73 GB, 15k rpm drive
– 146 GB, 10k rpm drive
– 146 GB, 15k rpm drive
– 300 GB, 10k rpm drive
򐂰 and one FATA DDM drive:
– 500 GB, 7,200 rpm drive

4.4.4 Fibre Channel ATA (FATA) disk drives overview


Fibre Channel ATA (FATA) is a technology that offers increased data rate performance over
Parallel Advanced Technology Attachment (PATA) and improved availability over Serial
Advanced Technology Attachment (SATA) drives.

Evolution of ATA technology


Parallel Advanced Technology Attachment (PATA), or Integrated Drive Electronics (IDE) as it
is also known, referencing the integrated controller and disk drive technology, has been the
standard storage interface technology on personal computers since its introduction nearly 25
years ago. Although there have been several developments to the specification that added
performance (the original speed was just 3 MBps when the protocol was introduced) and
reliability features, such as ATAPI, Enhanced Integrated Drive Electronics (EIDE) extensions
for faster disk drive access, and multiple data-transfer modes, including Programmed
Input/Output (PIO), direct memory access (DMA), and Ultra DMA (UDMA), it has changed little
over time. Its design limitations, coupled with faster applications and PC processor
performance, meant that it was often the cause of bottlenecks in data transfer, having
reached its maximum data transfer rate of 133 MBps.

Serial ATA (SATA) disk drives


In August 2001, the first version of the new ATA technology was introduced. Offering a
maximum data rate of 150 MBps, Serial ATA 1.0 specifications allow for thinner, more flexible
cables and lower pin counts, thus enabling easier, more flexible cable routing management
and the use of smaller connectors than was possible with the existing Parallel ATA technology.

SATA disks use a single serial port interface on an ATA drive and offer a low-cost disk
technology to address less intensive storage operations. SATA drives for instance are used in
the IBM DS4000 Storage Server (to overcome the single port drive limitation, IBM uses a
MUX or interposer card to provide dual port access to the SATA drives).

Meant for capacity intensive, secondary, or near-line storage applications, SATA drive
reliability is similar to Fibre Channel drives when used within their recommended duty-cycle in
less I/O intensive applications.

In February 2002, a second ATA specification, called Serial ATA II, was launched.
Second-generation SATA-2 disk drives have made significant improvements in speed and
functionality over the first-generation SATA-1 drives by offering up to 3 Gbps speed and
Native Command Queuing, like Fibre Channel disks.

Fibre Channel ATA (FATA) disk drives


A FATA disk is a combination of SATA-2 and Fibre Channel disk technologies, connecting a
dual-port FC interface directly to SATA-2 disk drive hardware. This provides true dual-port
drive connectivity.


Designed to meet the architectural standards of enterprise-class storage systems, FATA disk
drives provide a high-capacity, low-cost alternative to FC disks without much sacrifice of
performance, availability, or functionality. Also, when used within their recommended duty
cycle, their reliability is comparable to that of FC disks.

FATA disk drives are now available for both the IBM System Storage DS8000 series and the
DS6000 series, providing a lower-cost alternative for large-capacity, low-workload
environments.

Fibre Channel (FC) disk drives


Fibre Channel disk drives set the standard for enterprise-level performance, reliability,
and availability. Mission-critical applications with heavy workloads that need high I/O
performance and availability call for Fibre Channel drives.

Differences between FATA, SATA and FC disk drives


Fibre Channel disk drives provide higher performance, reliability, availability, and
functionality when compared to FATA and SATA disk drives. If an application requires high
performance data throughput and almost continuous, intensive I/O operations, FC disk drives
are the recommended option.

Important: FATA is not the appropriate answer to every storage requirement. For many
enterprise applications, and certainly mission-critical and production applications, Fibre
Channel disks remain the best choice.

SATA and FATA disk drives are a cost efficient storage option for lower intensity storage
workloads. By providing the same dual port FC interface as Fibre Channel disks, FATA drives
offer higher availability and ensure compatibility and investment protection for existing
enterprise-class storage systems.

Note: The FATA drives offer a cost-effective option for lower-priority data such as various
fixed content, data archival, reference data, and near-line applications that require large
amounts of storage capacity for lighter workloads. These new drives are meant to complement,
not compete with, existing Fibre Channel drives, as they are not intended for use in
applications that require drive utilization duty cycles greater than 20 percent.

Following is a summary of the key characteristics of the different disk drive types.

Fibre Channel
򐂰 Intended for heavy workloads in multi-user environments
򐂰 Highest performance, availability, reliability and functionality
򐂰 Good Capacity: 36–300 GB
򐂰 Very high activity
򐂰 Greater than 80% duty-cycle

FATA
򐂰 Intended for lower workloads in multi-user environments
򐂰 High performance, availability and functionality
򐂰 High reliability
򐂰 More robust technology
– Extensive Command Queuing
򐂰 High Capacity: 500 GB disk drives
򐂰 Moderate activity
򐂰 20-30% duty-cycle


SATA-1
򐂰 Intended for lower workloads in multi-user environments
򐂰 Good performance
򐂰 Less availability and functionality than FATA or Fibre Channel disk drives
– Single port interface, no command queuing
򐂰 High reliability
򐂰 High capacity: 250–500 GB disk drives
򐂰 Moderate activity
򐂰 20-30% duty-cycle

SATA-2
򐂰 Intended for lower workloads in multi-user environments
򐂰 High performance, availability and functionality
򐂰 High reliability
򐂰 More robust technology
– Extensive Command Queuing
򐂰 High Capacity: 500 GB disk drives
򐂰 Moderate activity
򐂰 20-30% duty-cycle
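
For readers who prefer to work with these characteristics programmatically, the summary above can
be captured in a simple data structure, as in the Python sketch below. The figures simply repeat
the ranges listed in this section; the sketch is illustrative only.

    # The drive-class summary above captured as a simple data structure (figures repeat the text).
    disk_drive_classes = {
        "Fibre Channel": {"capacity_gb": "36-300",  "duty_cycle": "greater than 80%", "activity": "very high"},
        "FATA":          {"capacity_gb": "500",     "duty_cycle": "20-30%",           "activity": "moderate"},
        "SATA-1":        {"capacity_gb": "250-500", "duty_cycle": "20-30%",           "activity": "moderate"},
        "SATA-2":        {"capacity_gb": "500",     "duty_cycle": "20-30%",           "activity": "moderate"},
    }

    for name, attrs in disk_drive_classes.items():
        print(f"{name}: {attrs['capacity_gb']} GB, duty cycle {attrs['duty_cycle']}, {attrs['activity']} activity")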

4.4.5 Positioning FATA versus Fibre Channel disks


It is essential to understand the differences between FATA and Fibre Channel (FC) drive
characteristics and mechanisms, and the advantages and disadvantages of each, in order to
target the appropriate applications to FATA drives.

Without any doubt, the technical characteristics and performance of FC disks remain superior
to those of FATA disks. However, not all storage applications require these superior features.

When used for the appropriate enterprise applications, FATA disks offer a tremendous cost
advantage over FC. First, FATA drives are cheaper to manufacture, and because of their larger
individual capacity, they are cheaper per gigabyte (GB) than FC disks. In large-capacity
systems, the drives themselves account for the vast majority of the cost of the system. Using
FATA disks can substantially reduce the total cost of ownership (TCO) of the storage system.

Classes of storage
To better visualize where the benefits of FATA can best be obtained in a networked storage
environment, it helps to position the types, or classes, of storage and the storage technology
appropriate at each level.

Figure 16 Storage data in a networked storage environment (diagram: servers attached through
the network to online disk storage, near-line disk storage, and offline tape storage)


Basically, storage data can reside at three different locations within the networked storage
hierarchy. See Figure 16.

Particular data types are suitable for storage at the various levels:
򐂰 Online (primary) storage
Best suited for applications that require constant instantaneous access to data, such as
databases and frequently accessed user data.
Primary storage stores business-critical information, data with the highest value and
importance. This data requires continuous availability and typically has high-performance
requirements. Business-critical data will be stored on Fibre Channel disk implemented in
enterprise-class storage solutions.
򐂰 Near-line (secondary) storage
Used for applications that require quicker access compared with offline storage (such as
tape), but do not require the continuous, instantaneous access provided by online storage.
Secondary storage stores business-important information that can often tolerate lower
performance and potentially slightly less than 24/7 availability. It can also be used to
cache online storage for quicker backups to tape. Secondary storage represents a large
percentage of a company’s data and is an ideal fit for FATA technology.
򐂰 Offline (archival) storage
Used for applications where infrequent serial access is required, such as backup for
long-term storage. For this type of storage, tape remains the most economical solution.

Data storage implementations best suited to use FATA technology reside at the “near-line” or
secondary location within the networked storage hierarchy and offer a cost-effective
alternative to FC disks at that location. Positioned between online storage and offline storage,
near-line storage or secondary storage is an optimal cost/performance solution for hosting
cached backups and fixed data storage.

Table 4-1 summarizes the general characteristics for primary, secondary, and archival storage
in traditional IT environments.

Table 4-1 Storage classes in traditional IT environments


Class of storage        Online          Near-line           Offline

Primary media           FC disk         FATA disk           Tape
Price                   Highest         Low cost-per-GB     Lowest
IOPS performance        Highest         Minimal             NA
MBps performance        Highest         High                Lowest
Time to data            Immediate       ~ Immediate         Mount time
Media reliability       Highest         Good                Good - Lower
Uptime                  24/7            < 24/7              < 24/7
Typical applications    ERP/Oracle      Fixed content       Archive retrieval

Storage application types


Now that we have defined the different storage classes, let us look at application characteristics
from a storage standpoint to determine which applications are a good fit for FATA.


IOPS and throughput


From a storage or information access perspective, applications can be classified as either
having random or sequential data access patterns. Another characteristic is the access
frequency. Random data access is measured in I/Os per second (IOPS) and is essential for
transaction-based applications, such as OLTP and databases, with random, small-block I/O.
Sequential data access, that is successive, large I/O blocks, is measured in megabytes per
second (MBps) and is crucial for bandwidth-intensive applications, such as rich media
streaming and seismic processing. These two very different application access patterns place
unique demands on the storage system. And while the controller and firmware are critical to
overall storage system performance, the disk drive plays a significant role as well.
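
To make the IOPS versus MBps distinction concrete, the following Python sketch estimates the
random IOPS of a single drive from its mechanical characteristics. The seek times, rotational
speeds, and transfer rates used here are assumed, typical-class values for illustration, not
DS8000 specifications.

    # Illustrative sketch: estimate per-drive random IOPS from mechanical characteristics.
    # The drive parameters below are assumed, typical-class values, not DS8000 specifications.

    def random_iops(avg_seek_ms, rpm, transfer_mbps, block_kb=4):
        """Approximate random IOPS: one seek + half a rotation + block transfer per I/O."""
        rotational_latency_ms = 0.5 * 60000.0 / rpm               # half a revolution, in ms
        transfer_ms = (block_kb / 1024.0) / transfer_mbps * 1000.0
        service_time_ms = avg_seek_ms + rotational_latency_ms + transfer_ms
        return 1000.0 / service_time_ms

    drives = {
        "FC 15K RPM (assumed)":    {"seek": 3.5, "rpm": 15000, "mbps": 90},
        "FATA 7.2K RPM (assumed)": {"seek": 8.5, "rpm": 7200,  "mbps": 60},
    }

    for name, d in drives.items():
        iops = random_iops(d["seek"], d["rpm"], d["mbps"])
        print(f"{name}: ~{iops:.0f} random IOPS, ~{d['mbps']} MBps sequential")

With these assumed numbers, the FC drive services roughly twice as many random I/Os per second
as the FATA drive, while sequential throughput differs far less, which matches the positioning
described in this section.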

Fibre Channel drives were designed for the highest levels of IOPS and MBps performance,
integrating advanced technologies to maximize rotational velocities and data transfer rates,
while lowering seek times and latency. In addition, the Fibre Channel interface provides
robust functionality to process multiple I/O operations concurrently of varying sizes in both
directions at once.

The slower FATA drive mechanism results in both lower IOPS and lower MBps performance
compared to Fibre Channel.

The FATA drive is not designed for fast access to data or for handling large amounts of random
I/O. However, FATA drives are a good fit for many bandwidth applications because they can
provide comparable throughput for short periods of time.

Access frequency
In addition to random and sequential access patterns, another consideration is access
frequency and its relationship with secondary storage. Several secondary storage
implementations identified as ideal for FATA technology generate random data access, which
on the surface does not fit the FATA performance profile. But these implementations, such as
fixed content and reference data, will have sporadic access activity on large quantities of data
and will therefore primarily be measured by cost per gigabyte and not performance. Many
non-traditional IT environments, such as high-performance computing, rich media, and
energy, will significantly benefit from enterprise-class FATA solutions. These businesses are
looking for high throughput performance at the lowest cost per gigabyte, which is exactly what
FATA can deliver.

The right FATA application


Based on our discussion of storage classes and storage application types, we can identify
specific applications that are prime targets for implementing FATA technology.

Backup application
The secondary storage implementation that fits the FATA performance profile exceptionally
well is backup, which generates sequential I/O as it streams data to the backup target, a
performance strength of FATA.

The backup of secondary data can be achieved more efficiently when the near-line storage
device acts as a caching device between Fibre Channel (FC) disks and tape, allowing the
primary disk to remain online longer. Advantages of this backup method are that it is faster and
consumes less server CPU than direct backup to tape.
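
As a simple illustration of the backup-window benefit, the sketch below compares the time needed
to move a given amount of data to a near-line disk cache and directly to tape. The data size and
throughput figures are assumptions chosen only to show the arithmetic; actual rates depend on the
configuration.

    # Illustrative backup-window arithmetic with assumed data size and throughput figures.

    def backup_hours(data_gb, throughput_mbps):
        """Time in hours to move data_gb at a sustained throughput (MBps)."""
        seconds = (data_gb * 1024.0) / throughput_mbps
        return seconds / 3600.0

    data_gb = 2000                                 # assumed amount of data to protect
    targets = {"near-line FATA disk": 300.0,       # assumed aggregate MBps
               "direct to tape":      80.0}        # assumed aggregate MBps

    for target, mbps in targets.items():
        print(f"Backup of {data_gb} GB to {target}: {backup_hours(data_gb, mbps):.1f} hours")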


Figure 17 Near-line backup scenario (diagram: servers attached through the network to online
disk storage, near-line disk storage acting as the backup cache, and offline tape storage)

Near-line storage allows disk-to-disk backups to help achieve the following benefits:
򐂰 Shorter backup time/higher application availability
Any IT department will tell you that its backup windows are either shrinking or already
nonexistent. As a result, IT personnel are always looking for ways to improve backup times
and minimize the amount of time a given application is affected by backup, either total
down time or time running in a degraded mode. By using disk as the backup target, the
backup runs and completes faster. After the data is safely stored on disk, the application is
free of the backup overhead. In addition, the data can then be moved to tape to provide
the long-term benefits of the traditional backup process.
򐂰 Faster recovery time
In the past, tape was the only means of restoring data. This is a prolonged process,
because the appropriate tape has to be located, loaded into the tape drive, and then
sequentially read to locate and retrieve the desired data. Information has become
increasingly vital to a company’s success, and the lengthy restoration time from tape can
now be avoided. Backing up data to disk, as a disk image, enables significantly faster
restoration times, because data is stored online and can be located and retrieved
immediately.
򐂰 Improved backup/restore reliability
Disk-to-disk backups create a new confidence in the ability to recover critical data by
eliminating the mechanical concerns associated with tape; one bad tape can cause large
restores to fail. Disk backups offer the same high level of RAID protection and redundancy
as the original data.
򐂰 Easier backup/restore management
Storage management software functionality can be used to create volume-level copies, or
clones, of data as a source for restoration. Disk-to-disk backup packages, however,
provide more intelligence and file-level information that enables simplified administration
and faster restores.

Reference data application


Another application is fixed-content, or reference data storage, in which data has to be online
and available, but is not necessarily being transacted every day. This is best suited for
archiving e-mail messages, image files, and other data that must be stored safely and be
readily available when needed.


Figure 18 Reference data storage scenario (diagram: servers attached through the network to
online disk storage, a separate reference data disk store, and offline tape storage)

Data retention
Recent government regulations have made it necessary to store, identify, and characterize
data. The majority of this data will be unchanging and accessed infrequently, if ever. As a
result, the highest possible performance is not a requirement. These implementations require
the largest amount of storage for the least cost in the least amount of space. The FATA
cost-per-gigabyte advantage over Fibre Channel, combined with its high-capacity drives,
makes it an attractive solution.

Temporary workspace
FATA is a great fit for project-based applications that need short-term, affordable capacity.

Conclusion
We have discussed when FATA is a good choice depending on the nature of an application or
the type of storage required.

Important: IBM recommends that FATA drives be employed strictly with applications such
as those discussed in “The right FATA application” on page 60. Other types of applications,
and in particular transaction processing, must be avoided.

4.4.6 FATA versus Fibre Channel drives on the DS8000


The DS8000 FATA drives are recommended for applications with lower usage (20% or lower
read/write duty cycles) than is typical for the enterprise drives (read/write duty cycles of up to
80%). Also, from a performance perspective, FATA drives have lower random performance than
the enterprise drives because of their lower rotational speed and correspondingly longer
access times.

An important factor to keep in mind is that the FATA drives used in the DS8000 protect
themselves by throttling I/O based on the temperature registered by their internal sensors.
When throttled, the performance of the drives can drop by up to 50%, resulting in much higher
disk access times until the disk is able to return to its nominal temperature.

Important: Due to the lower duty cycle and the potential for I/O throttling, FATA based
volumes should not be mirrored with FC based volumes.


Also, to keep the FATA drives from robbing the enterprise disk traffic of DS8000 cache
resources, the modified writes to Non-Volatile Storage (NVS) for the FATA arrays are limited.
The user or storage administrator is responsible for targeting appropriate workloads to the
FATA drives.
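
The workload-targeting guidance above can be reduced to a simple rule of thumb. The following
Python sketch is one possible way to encode it; the thresholds mirror the duty-cycle figures
quoted earlier in this section, and the sketch is not an IBM sizing tool.

    # Minimal sketch of the drive-selection rule of thumb from this section.
    # The 20% threshold follows the duty-cycle guidance quoted above; this is not an IBM sizing tool.

    def recommend_drive(duty_cycle_pct, random_intensive, business_critical):
        """Return 'Fibre Channel' or 'FATA' for a described workload."""
        if business_critical or random_intensive or duty_cycle_pct > 20:
            return "Fibre Channel"
        return "FATA"

    print(recommend_drive(duty_cycle_pct=15, random_intensive=False, business_critical=False))  # FATA
    print(recommend_drive(duty_cycle_pct=60, random_intensive=True,  business_critical=True))   # Fibre Channel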

Table 4-2 shows some typical applications and the most suitable DS8000 drive type.

Table 4-2 Recommended drive types for common applications


Usage                                   Storage characteristics                            Storage category     Disk type recommended

Archiving and data retention            Storage capacity, high density                     Near-line            FATA
Backup and recovery (disk to disk)      Storage capacity, high density                     Near-line            FATA
Database and data mining                Mix of performance and capacity                    On-line              Fibre Channel
Data warehouse                          Storage capacity, high density, good performance   On-line              Fibre Channel
Document imaging and retention          Capacity, sequential performance                   Near-line            FATA
Email                                   Good performance, availability, capacity           On-line              Fibre Channel
Enterprise eCommerce                    Performance, capacity, availability                On-line              Fibre Channel
File serving                            Performance, capacity, availability                On-line              Fibre Channel
Fixed content and reference data        Capacity                                           Near-line            FATA
Medical, Life Sciences imaging          Capacity, availability, variable performance       On-line              Fibre Channel
Multi-media (audio/video)               Capacity, availability, variable performance       On-line              Fibre Channel
Online transaction processing (OLTP)    High performance and availability                  On-line              Fibre Channel
Remote data protection                  Good performance, availability, capacity           Near-line, Off-line  FATA
Scientific and geophysics               Performance, capacity                              On-line              Fibre Channel
Surveillance data                       Capacity, availability                             Near-line            FATA
Temporary storage, spool, paging        High performance and good availability             On-line              Fibre Channel

4.5 Host adapters


The DS8000 supports two types of host adapters: ESCON and Fibre Channel/FICON. It does
not support SCSI adapters.


4.5.1 ESCON Host Adapters


The ESCON adapter in the DS8000 is a dual ported host adapter for connection to older
System z hosts that do not support FICON. The ports on the ESCON card use the MT-RJ
type connector.

Control units and logical paths


The ESCON architecture recognizes only 16 3990 logical control units (LCUs), even though the
DS8000 is capable of emulating far more (these extra control units can be used by FICON).
Half of the LCUs (even-numbered) are in server 0, and the other half (odd-numbered) are in
server 1. Because the ESCON host adapters can connect to both servers, each adapter can
address all 16 LCUs.

An ESCON link consists of two fibers, one for each direction, connected at each end by an
ESCON connector to an ESCON port. Each ESCON adapter card supports two ESCON
ports or links, and each link supports 64 logical paths.
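
A small sketch, assuming only the figures quoted above (16 ESCON-addressable LCUs, two ports or
links per adapter, and 64 logical paths per link), shows how LCU affinity and logical path
capacity can be worked out:

    # Sketch based on the ESCON figures quoted above: 16 addressable LCUs,
    # two ports (links) per adapter, and 64 logical paths per link.

    def escon_lcu_owner(lcu_number):
        """Even-numbered LCUs reside in server 0, odd-numbered LCUs in server 1."""
        return "server 0" if lcu_number % 2 == 0 else "server 1"

    ESCON_LCUS = 16
    LINKS_PER_ADAPTER = 2
    LOGICAL_PATHS_PER_LINK = 64

    print([escon_lcu_owner(n) for n in range(4)])      # ['server 0', 'server 1', 'server 0', 'server 1']
    print(LINKS_PER_ADAPTER * LOGICAL_PATHS_PER_LINK)  # 128 logical paths per ESCON adapter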

ESCON distances
For connections without repeaters, the ESCON distances are 2 km with 50 micron multimode
fiber, and 3 km with 62.5 micron multimode fiber. The DS8000 supports all models of the IBM
9032 ESCON directors that can be used to extend the cabling distances.

Remote mirror with ESCON


The initial implementation of the ESS 2105 remote mirror function (known as Peer-to-Peer
Remote Copy, PPRC) used ESCON adapters. This was known as PPRC Version 1. The
ESCON adapters in the DS8000 do not support any form of remote mirror and copy. If you
wish to create a remote mirror between a DS8000 and an ESS 800 or another DS8000 or
DS6000, you must use Fibre Channel adapters. You cannot have a remote mirror relationship
between a DS8000 and an ESS E20 or F20 because the E20/F20 only support remote mirror
over ESCON.

ESCON supported servers


ESCON is used for attaching the DS8000 to the IBM S/390 and System z servers. The most
current list of supported servers is at this Web site:
http://www.storage.ibm.com/hardsoft/products/DS8000/supserver.htm

This site should be consulted regularly because it has the most up-to-date information on
server attachment support.

4.5.2 Fibre Channel/FICON Host Adapters


Fibre Channel is a technology standard that allows data to be transferred from one node to
another at high speeds and great distances (up to 10 km and beyond). The DS8000 uses
Fibre Channel protocol to transmit SCSI traffic inside Fibre Channel frames. It also uses Fibre
Channel to transmit FICON traffic, which uses Fibre Channel frames to carry System z I/O.

Each DS8000 Fibre Channel card offers four Fibre Channel ports (port speed of 2 or 4 Gbps,
depending on the host adapter). The cable connector required to attach to this card is an LC
type. Each 2 Gbps port independently auto-negotiates to either 2 or 1 Gbps, and each 4 Gbps
port to either 4 or 2 Gbps link speed. Each of the four ports on a DS8000 adapter can
independently be configured as either Fibre Channel protocol (FCP) or FICON, though the ports
are initially defined as switched point-to-point FCP. Selected ports are configured to FICON
automatically based on the definition of a FICON host. The personality of a port is changeable
via the DS Storage


Manager GUI. A port cannot be both FICON and FCP simultaneously, but it can be changed
as required.

The card itself is a 64-bit, 133 MHz PCI-X card, driven by a new high-function, high-performance
ASIC. To ensure maximum data integrity, it supports metadata creation and checking. Each Fibre
Channel port supports a maximum of 509 host login IDs and 1280 paths, which allows for the
creation of very large storage area networks (SANs). The design of the card is depicted in
Figure 4-19.

Figure 4-19 DS8000 Fibre Channel/FICON host adapter (block diagram: a 1 GHz PPC 750GX
processor, a data protection and data mover ASIC with two Fibre Channel protocol engines, QDR
memory, buffer, and flash)
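
The per-port limits mentioned above (a maximum of 509 host login IDs and 1280 paths per port,
with each port configured as either FCP or FICON but never both) can be checked in a planning
sketch such as the following. The port names and planned counts are hypothetical examples.

    # Planning sketch using the per-port limits quoted above (509 host logins,
    # 1280 paths per port); the port definitions below are hypothetical examples.

    MAX_LOGINS_PER_PORT = 509
    MAX_PATHS_PER_PORT = 1280

    def check_port(name, protocol, planned_logins, planned_paths):
        assert protocol in ("FCP", "FICON"), "a port is either FCP or FICON, never both"
        ok = planned_logins <= MAX_LOGINS_PER_PORT and planned_paths <= MAX_PATHS_PER_PORT
        state = "OK" if ok else "exceeds port limits"
        print(f"{name} ({protocol}): {planned_logins} logins, {planned_paths} paths -> {state}")

    check_port("I0000", "FCP",   120, 480)
    check_port("I0001", "FICON",  64, 256)
    check_port("I0002", "FCP",   600, 900)   # too many host logins for one port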

Fibre Channel supported servers


The current list of servers supported by the Fibre Channel attachment is at this Web site:

http://www-03.ibm.com/servers/storage/disk/ds8000/interop.html

This document should be consulted regularly because it has the most up-to-date information
on server attachment support.

Fibre Channel distances


There are two types of host adapter cards you can select: long wave and short wave. With
long-wave laser, you can connect nodes at distances of up to 10 km (non-repeated). With
short wave you are limited to a distance of 300 to 500 metres (non-repeated). All ports on
each card must be either long wave or short wave (there can be no mixing of types within a
card).

4.6 Power and cooling


The DS8000 series power and cooling system is highly redundant.

Rack Power Control cards (RPC)


The DS8000 has a pair of redundant RPC cards that are used to control certain aspects of
power sequencing throughout the DS8000. These cards are attached to the Service
Processor (SP) card in each processor complex, which allows them to communicate both with the
Storage Hardware Management Console (S-HMC) and the storage facility image LPARs. The


RPCs also communicate with each primary power supply and indirectly with each rack’s fan
sense cards and the disk enclosures in each frame.

Primary power supplies


The DS8000 primary power supply (PPS) converts input AC voltage into DC voltage. There
are high and low voltage versions of the PPS because of the varying voltages used
throughout the world. Also, because the line cord connector requirements vary widely
throughout the world, the line cord may not come with a suitable connector for your nation’s
preferred outlet. This may need to be replaced by an electrician once the machine is
delivered.

There are two redundant PPSs in each frame of the DS8000. Each PPS is capable of
powering the frame by itself. The PPS creates 208V output power for the processor complex
and I/O enclosure power supplies. It also creates 5V and 12V DC power for the disk
enclosures. There may also be an optional booster module that will allow the PPSs to
temporarily run the disk enclosures off battery, if the extended power line disturbance feature
has been purchased (see Chapter 5, “RAS - Reliability, Availability, Serviceability” on
page 69, for a complete explanation as to why this feature may or may not be necessary for
your installation).

Each PPS has internal fans to supply cooling for that power supply.

Processor and I/O enclosure power supplies


Each processor and I/O enclosure has dual redundant power supplies to convert 208V DC
into the required voltages for that enclosure or complex. Each enclosure also has its own
cooling fans.

Disk enclosure power and cooling


The disk enclosures do not have separate power supplies since they draw power directly from
the PPSs. They do, however, have cooling fans located in a plenum above the enclosures.
They draw cooling air through the front of each enclosure and exhaust air out of the top of the
frame.

Battery backup assemblies


The backup battery assemblies help protect data in the event of a loss of external power. The
9x1 models contain two battery backup assemblies while the 9x2 models contain three of
them (to support the 4-way processors). In the event of a complete loss of input AC power, the
battery assemblies are used to allow the contents of NVS memory to be written to a number
of DDMs internal to the processor complex, prior to power off.

The FC-AL DDMs are not protected from power loss unless the extended power line
disturbance feature has been purchased.

4.7 Management console network


All base models ship with one Storage Hardware Management Console (S-HMC), a keyboard
and display, plus two Ethernet switches.

S-HMC
The S-HMC is the focal point for configuration, Copy Services management, and
maintenance activities. It is possible to order two management consoles to act as a redundant


pair. A typical configuration would be to have one internal and one external management
console. The internal S-HMC will contain a PCI modem for remote service.

Ethernet switches
In addition to the Fibre Channel switches installed in each disk enclosure, the DS8000 base
frame contains two 16-port Ethernet switches. Two switches are supplied to allow the creation
of a fully redundant management network. Each processor complex has multiple connections
to each switch, allowing each server to access both switches. These switches cannot be used
for any equipment not associated with the DS8000. The switches get power from the internal
power bus and thus do not require separate power outlets.

4.8 Ethernet adapter pair (for TPC RM support at R2)


The Ethernet adapter pair is an optional feature for the 9x1 and 9x2 models in support of the
TPC Replication Manager; refer to 7.4.4, “TotalStorage Productivity Center for
Replication (TPC for Replication)” on page 126 for more information. The Ethernet adapter
pairs are available as follows:
򐂰 RM Ethernet Adapter Pair feature #1801 (for models 931, 932, 921, and 922)
򐂰 RM Ethernet Adapter Pair A feature #1802 (for models 9B2, and 9A2)
򐂰 RM Ethernet Adapter Pair B feature #1803 (for models 9B2, and 9A2)



Chapter 5. RAS - Reliability, Availability, Serviceability

This chapter describes the RAS (reliability, availability and serviceability) characteristics of the
DS8000 series. In this chapter we cover:
򐂰 Naming
򐂰 Processor complex RAS
򐂰 Hypervisor: Storage image independence
򐂰 Server RAS
򐂰 Host connection availability
򐂰 Disk subsystem
򐂰 Power and cooling
򐂰 Microcode updates
򐂰 Management console
򐂰 Earthquake resistance kit (R2)


5.1 Naming
It is important to understand the naming conventions used to describe DS8000 components
and constructs in order to fully appreciate the discussion of RAS concepts.

Storage complex
This term describes a group of DS8000s managed by a single Management Console. A
storage complex may consist of just a single DS8000 storage unit.

Storage unit
A storage unit consists of a single DS8000 (including expansion frames). If your organization
has one DS8000, then you have a single storage complex that contains a single storage unit.

Storage facility image


In ESS 800 terms, a storage facility image (SFI) is the entire ESS 800. In a DS8000, an SFI
is a union of two logical partitions (LPARs), one from each processor complex. Each LPAR
hosts one server. The SFI would have control of one or more device adapter pairs and two or
more disk enclosures. Sometimes an SFI might also be referred to as a storage image.

Figure 5-1 Single image mode (diagram: an LPAR on processor complex 0 hosts server 0 and an
LPAR on processor complex 1 hosts server 1; together they form storage facility image 1)

In Figure 5-1 server 0 and server 1 create storage facility image 1.

Logical partitions and servers


In a DS8000, a server is effectively the software that uses a processor logical partition (a
processor LPAR), and that has access to a percentage of the memory and processor
resources available on a processor complex. This percentage is 50% for the model 9B2
(when configured with the two possible SFIs) and 100% for the models 931 and 932
(because these are single-SFI models). In ESS 800 terms, a server is a cluster. So in
an ESS 800 we had two servers and one storage facility image per storage unit. However,
with a DS8000 we can create logical partitions (LPARs). In the DS8300 model 9B2 this allows
the creation of four servers, two on each processor complex. One server from each processor


complex is used to form a storage image. If there are four servers, there are effectively two
separate storage subsystems existing inside one DS8300 model 9B2 storage unit.

Figure 5-2 DS8300 Turbo Model 9B2 - dual image mode (diagram: two LPARs on each processor
complex; the upper pair of servers forms storage facility image 1 and the lower pair forms
storage facility image 2)

In Figure 5-2 we have two storage facility images (SFIs). The upper server 0 and upper server
1 form SFI 1. The lower server 0 and lower server 1 form SFI 2. In each SFI, server 0 is the
darker color (green) and server 1 is the lighter color (yellow). SFI 1 and SFI 2 may share
common hardware (the processor complexes) but they are completely separate from an
operational point of view.

Note: You may think that the lower server 0 and lower server 1 should be called server 2
and server 3. While this may make sense from a numerical point of view (there are four
servers, so why not number them from 0 to 3?), each SFI is not aware of the other’s
existence. Each SFI must have a server 0 and a server 1, regardless of how many SFIs or
servers there are in a DS8000 storage unit.

For more information on DS8000 series storage system partitions see Chapter 3, “Storage
system LPARs (logical partitions)” on page 27.

Processor complex
A processor complex is one System p system unit. Two processor complexes form a
redundant pair such that if either processor complex fails, the servers on the remaining
processor complex can continue to run the storage image. In an ESS 800, we would have
referred to a processor complex as a cluster.


5.2 Processor complex RAS


The System p5 that constitutes the processor complex is an integral part of the DS8000
architecture. It is designed to provide an extensive set of reliability, availability, and
serviceability (RAS) features that include improved fault isolation, recovery from errors
without stopping the processor complex, avoidance of recurring failures, and predictive failure
analysis.

Reliability, availability, and serviceability


Excellent quality and reliability are inherent in all aspects of the IBM System p5 design and
manufacturing. The fundamental objective of the design approach is to minimize outages.
The RAS features help to ensure that the system performs reliably, and efficiently handles
any failures that may occur. This is achieved by using capabilities that are provided by the
hardware, by AIX 5L, and by RAS code written specifically for the DS8000. The following
sections describe the RAS leadership features of IBM System p5 systems in more detail.

Fault avoidance
POWER5 systems are built to keep errors from ever happening. This quality-based design
includes such features as reduced power consumption and cooler operating temperatures for
increased reliability, enabled by the use of copper chip circuitry, SOI (silicon on insulator), and
dynamic clock-gating. It also uses mainframe-inspired components and technologies.

First Failure Data Capture


If a problem should occur, the ability to diagnose it correctly is a fundamental requirement
upon which improved availability is based. The System p5 incorporates advanced capability
in start-up diagnostics and in run-time First Failure Data Capture (FFDC) based on strategic
error checkers built into the chips.

Any errors that are detected by the pervasive error checkers are captured into Fault Isolation
Registers (FIRs), which can be interrogated by the service processor (SP). The SP in the
System p5 has the capability to access system components using special-purpose service
processor ports or by access to the error registers.

The FIRs are important because they enable an error to be uniquely identified, thus enabling
the appropriate action to be taken. Appropriate actions might include such things as a bus
retry, ECC (error checking and correction), or system firmware recovery routines. Recovery
routines could include dynamic deallocation of potentially failing components.

Errors are logged into the system non-volatile random access memory (NVRAM) and the SP
event history log, along with a notification of the event to AIX for capture in the operating
system error log. Diagnostic Error Log Analysis (diagela) routines analyze the error log
entries and invoke a suitable action, such as issuing a warning message. If the error can be
recovered, or after suitable maintenance, the service processor resets the FIRs so that they
can accurately record any future errors.

The ability to correctly diagnose any pending or firm errors is a key requirement before any
dynamic or persistent component deallocation or any other reconfiguration can take place.

Permanent monitoring
The SP that is included in the System p5 provides a way to monitor the system even when the
main processor is inoperable. The next subsection offers a more detailed description of the
monitoring functions in the System p5.


Mutual surveillance
The SP can monitor the operation of the firmware during the boot process, and it can monitor
the operating system for loss of control. This enables the service processor to take
appropriate action when it detects that the firmware or the operating system has lost control.
Mutual surveillance also enables the operating system to monitor for service processor
activity and can request a service processor repair action if necessary.

Environmental monitoring
Environmental monitoring related to power, fans, and temperature is performed by the
System Power Control Network (SPCN). Environmental critical and non-critical conditions
generate Early Power-Off Warning (EPOW) events. Critical events (for example, a Class 5 AC
power loss) trigger appropriate signals from hardware to the affected components to prevent
any data loss without operating system or firmware involvement. Non-critical environmental
events are logged and reported using Event Scan. The operating system cannot program or
access the temperature threshold using the SP.

Temperature monitoring is also performed. If the ambient temperature goes above a preset
operating range, then the rotation speed of the cooling fans can be increased. Temperature
monitoring also warns the internal microcode of potential environment-related problems. An
orderly system shutdown will occur when the operating temperature exceeds a critical level.

Voltage monitoring provides warning and an orderly system shutdown when the voltage is out
of operational specification.

Self-healing
For a system to be self-healing, it must be able to recover from a failing component by first
detecting and isolating the failed component. It should then be able to take it offline, fix or
isolate it, and then reintroduce the fixed or replaced component into service without any
application disruption. Examples include:
򐂰 Bit steering to redundant memory in the event of a failed memory module to keep the
server operational
򐂰 Bit scattering, thus allowing for error correction and continued operation in the presence of
a complete chip failure (Chipkill recovery)
򐂰 Single-bit error correction using ECC without reaching error thresholds for main, L2, and
L3 cache memory
򐂰 L3 cache line deletes extended from 2 to 10 for additional self-healing
򐂰 ECC extended to inter-chip connections on fabric and processor bus
򐂰 Memory scrubbing to help prevent soft-error memory faults
򐂰 Dynamic processor deallocation

Memory reliability, fault tolerance, and integrity


The System p5 uses Error Checking and Correcting (ECC) circuitry for system memory to
correct single-bit memory failures and to detect double-bit failures. Detection of double-bit memory
failures helps maintain data integrity. Furthermore, the memory chips are organized such that
the failure of any specific memory module only affects a single bit within a four-bit ECC word
(bit-scattering), thus allowing for error correction and continued operation in the presence of a
complete chip failure (Chipkill recovery).

The memory DIMMs also utilize memory scrubbing and thresholding to determine when
memory modules within each bank of memory should be used to replace ones that have
exceeded their threshold of error count (dynamic bit-steering). Memory scrubbing is the
process of reading the contents of the memory during idle time and checking and correcting
any single-bit errors that have accumulated by passing the data through the ECC logic. This
function is a hardware function on the memory controller chip and does not influence normal
system memory performance.


N+1 redundancy
The use of redundant parts, specifically the following ones, allows the System p5 to remain
operational with full resources:
򐂰 Redundant spare memory bits in L1, L2, L3, and main memory
򐂰 Redundant fans
򐂰 Redundant power supplies

Fault masking
If corrections and retries succeed and do not exceed threshold limits, the system remains
operational with full resources and no client or IBM Service Representative intervention is
required.

Resource deallocation
If recoverable errors exceed threshold limits, resources can be deallocated with the system
remaining operational, allowing deferred maintenance at a convenient time.

Dynamic deallocation of potentially failing components is non-disruptive, allowing the system
to continue to run. Persistent deallocation occurs when a failed component is detected; it is
then deactivated at a subsequent reboot.

Dynamic deallocation functions include:


򐂰 Processor
򐂰 L3 cache lines
򐂰 Partial L2 cache deallocation
򐂰 PCI-X bus and slots

Persistent deallocation functions include:


򐂰 Processor
򐂰 Memory
򐂰 Deconfigure or bypass failing I/O adapters
򐂰 L3 cache

Following a hardware error that has been flagged by the service processor, the subsequent
reboot of the server invokes extended diagnostics. If a processor or L3 cache has been
marked for deconfiguration by persistent processor deallocation, the boot process will attempt
to proceed to completion with the faulty device automatically deconfigured. Failing I/O
adapters will be deconfigured or bypassed during the boot process.

Concurrent Maintenance
Concurrent Maintenance provides replacement of the following parts while the processor
complex remains running:
򐂰 Disk drives
򐂰 Cooling fans
򐂰 Power Subsystems
򐂰 PCI-X adapter cards

5.3 Hypervisor: Storage image independence


A logical partition (LPAR) is a set of resources on a processor complex that supply enough
hardware to support the ability to boot and run an operating system (which we call a server).
The LPARs created on a DS8000 processor complex are used to form storage images. These
LPARs share not only the common hardware on the processor complex, including CPUs,


memory, internal SCSI disks and other media bays (such as DVD-RAM), but also hardware
common between the two processor complexes. This hardware includes such things as the
I/O enclosures and the adapters installed within them.

A mechanism must exist to allow this sharing of resources in a seamless way. This
mechanism is called the hypervisor.

The hypervisor provides the following capabilities:


򐂰 Reserved memory partitions allow the setting aside of a certain portion of memory to use
as cache and a certain portion to use as NVS.
򐂰 Preserved memory support allows the contents of the NVS and cache memory areas to
be protected in the event of a server reboot.
򐂰 The sharing of I/O enclosures and I/O slots between LPARs within one storage image.
򐂰 I/O enclosure initialization control so that when one server is being initialized it doesn’t
initialize an I/O adapter that is in use by another server.
򐂰 Memory block transfer between LPARs to allow messaging.
򐂰 Shared memory space between I/O adapters and LPARs to allow messaging.
򐂰 The ability of an LPAR to power off an I/O adapter slot or enclosure or force the reboot of
another LPAR.
򐂰 Automatic reboot of a frozen LPAR or hypervisor.

5.3.1 RIO-G - a self-healing interconnect


The RIO-G interconnect is also commonly called RIO-2. Each RIO-G port can operate at 1
GHz in bidirectional mode and is capable of passing data in each direction on each cycle of
the port. This creates a redundant high-speed interconnect that allows servers on either
processor complex to access resources on any RIO-G loop. If the resource is not accessible
from one server, requests can be routed to the other server to be sent out on an alternate
RIO-G port.

5.3.2 I/O enclosure


The DS8000 I/O enclosures use hot-swap PCI-X adapters. These adapters are in blind-swap
hot-plug cassettes, which allow them to be replaced concurrently. Each slot can be
independently powered off for concurrent replacement of a failed adapter, installation of a new
adapter, or removal of an old one.

In addition, each I/O enclosure has N+1 power and cooling in the form of two power supplies
with integrated fans. The power supplies can be concurrently replaced and a single power
supply is capable of supplying DC power to an I/O drawer.

5.4 Server RAS


The DS8000 design is built upon IBM’s highly redundant storage architecture. It also has the
benefit of the many years of ESS 2105 development. The DS8000 thus employs similar
methodology to the ESS to provide data integrity when performing write operations and
server failover.


5.4.1 Metadata checks


When application data enters the DS8000, special codes or metadata, also known as
redundancy checks, are appended to that data. This metadata remains associated with the
application data as it is transferred throughout the DS8000. The metadata is checked by
various internal components to validate the integrity of the data as it moves throughout the
disk system. It is also checked by the DS8000 before the data is sent to the host in response
to a read I/O request. Further, the metadata also contains information used as an additional
level of verification to confirm that the data being returned to the host is coming from the
desired location on the disk.

5.4.2 Server failover and failback


To understand the process of server failover and failback, we have to understand the logical
construction of the DS8000. To better understand the contents of this section, you may want
to refer to Chapter 6, “Virtualization concepts” on page 91.

To create logical volumes on the DS8000, we work through the following constructs:
򐂰 DDMs that are installed into pre-defined array sites.
򐂰 These array sites are used to form RAID-5 or RAID-10 arrays.
򐂰 These RAID arrays then become members of a rank.
򐂰 Each rank then becomes a member of an extent pool. Each extent pool has an affinity to
either server 0 or server 1. Each extent pool is either open systems FB (fixed block) or
System z CKD (count key data).
򐂰 Within each extent pool we create logical volumes, which for open systems are called
LUNs and for System z, 3390 volumes. LUN stands for logical unit number, which is used
for SCSI addressing. Each logical volume belongs to a logical subsystem (LSS).

For open systems the LSS membership is not that important (unless you are using Copy
Services), but for System z, the LSS is the logical control unit (LCU) which equates to a 3990
(a System z disk controller, which the DS8000 emulates). What is important is that LSSs that
have an even identifying number have an affinity with server 0, while LSSs that have an odd
identifying number have an affinity with server 1. When a host operating system issues a write
to a logical volume, the DS8000 host adapter directs that write to the server that owns the
LSS of which that logical volume is a member.

If the DS8000 is being used to operate a single storage image then the following examples
refer to two servers, one running on each processor complex. If a processor complex were to
fail then one server would fail. Likewise, if a server itself were to fail, then it would have the
same effect as the loss of the processor complex it runs on.

If, however, the DS8000 is divided into two storage images, then each processor complex will
be hosting two servers. In this case, a processor complex failure would result in the loss of
two servers. The effect on each server would be identical. The failover processes performed
by each storage image would proceed independently.

Data flow
When a write is issued to a volume, this write normally gets directed to the server that owns
this volume. The data flow is that the write is placed into the cache memory of the owning
server. The write data is also placed into the NVS memory of the alternate server.


Figure 5-3 Normal data flow (diagram: server 0 holds cache memory for the even LSSs and NVS
for the odd LSSs; server 1 holds cache memory for the odd LSSs and NVS for the even LSSs)

Figure 5-3 illustrates how the cache memory of server 0 is used for all logical volumes that
are members of the even LSSs. Likewise, the cache memory of server 1 supports all logical
volumes that are members of odd LSSs. But for every write that gets placed into cache,
another copy gets placed into the NVS memory located in the alternate server. Thus the
normal flow of data for a write is:
1. Data is written to cache memory in the owning server.
2. Data is written to NVS memory of the alternate server.
3. The write is reported to the attached host as having been completed.
4. The write is destaged from the cache memory to disk.
5. The write is discarded from the NVS memory of the alternate server.
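
The numbered flow above can be summarized in a short Python sketch. It models only the routing
rule described in this section (even LSSs to server 0, odd LSSs to server 1, with the NVS copy
held by the alternate server); it is a conceptual illustration, not DS8000 microcode.

    # Conceptual sketch of the normal write flow described above: the cache copy
    # goes to the server that owns the LSS, the NVS copy to the alternate server.

    def owning_server(lss):
        return 0 if lss % 2 == 0 else 1

    def write(lss, data, cache, nvs):
        owner = owning_server(lss)
        alternate = 1 - owner
        cache[owner].append((lss, data))    # step 1: cache memory of the owning server
        nvs[alternate].append((lss, data))  # step 2: NVS memory of the alternate server
        return "write complete"             # step 3: completion reported to the host

    cache = {0: [], 1: []}
    nvs = {0: [], 1: []}
    write(0x12, "record A", cache, nvs)     # even LSS: cache in server 0, NVS copy in server 1
    write(0x13, "record B", cache, nvs)     # odd LSS: cache in server 1, NVS copy in server 0
    print(cache, nvs)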

Under normal operation, both DS8000 servers are actively processing I/O requests. This
section describes the failover and failback procedures that occur between the DS8000
servers when an abnormal condition has affected one of them.

Failover
In the example depicted in Figure 5-4 on page 78, server 0 has failed. The remaining server
has to take over all of its functions. The RAID arrays, because they are connected to both
servers, can be accessed from the device adapters used by server 1.

From a data integrity point of view, the real issue is the un-destaged or modified data that
belonged to server 1 (that was in the NVS of server 0). Since the DS8000 now has only one
copy of that data (which is currently residing in the cache memory of server 1), it will now take
the following steps:
1. It destages the contents of its NVS to the disk subsystem.
2. The NVS and cache of server 1 are divided in two, half for the odd LSSs and half for the
even LSSs.
3. Server 1 now begins processing the writes (and reads) for all the LSSs.


Figure 5-4 Server 0 failing over its function to server 1 (diagram: server 0 has failed; the NVS
and cache memory of server 1 are each split between the even and odd LSSs)

This entire process is known as a failover. After failover the DS8000 now operates as
depicted in Figure 5-4. Server 1 now owns all the LSSs, which means all reads and writes will
be serviced by server 1. The NVS inside server 1 is now used for both odd and even LSSs.
The entire failover process should be invisible to the attached hosts, apart from the possibility
of some temporary disk errors.
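
The failover steps listed above can also be expressed as a small sketch. This is a simplified
model of the behavior just described (destage, split of NVS and cache, and takeover of all LSSs),
assuming the same even/odd ownership rule; it is not the actual failover code.

    # Simplified model of the failover behavior described above; not actual DS8000 code.

    def failover(surviving, nvs, cache, lss_owner):
        destaged = list(nvs[surviving])           # 1. destage the surviving server's NVS to disk
        nvs[surviving] = {"even": [], "odd": []}  # 2. NVS and cache are split between even and odd LSSs
        cache[surviving] = {"even": [], "odd": []}
        for lss in lss_owner:                     # 3. the surviving server now processes all LSSs
            lss_owner[lss] = surviving
        return destaged

    lss_owner = {0: 0, 1: 1, 2: 0, 3: 1}          # even LSSs owned by server 0, odd by server 1
    nvs = {0: ["write-y"], 1: ["write-x"]}        # each server's NVS backs the other server's cache
    cache = {0: ["write-x"], 1: ["write-y"]}
    failover(surviving=1, nvs=nvs, cache=cache, lss_owner=lss_owner)
    print(lss_owner)                              # every LSS is now owned by server 1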

Failback
When the failed server has been repaired and restarted, the failback process is activated.
Server 1 starts using the NVS in server 0 again, and the ownership of the even LSSs is
transferred back to server 0. Normal operation, with both servers active, then resumes.
Just like the failover process, the failback process is invisible to the attached hosts.

In general, recovery actions on the DS8000 do not impact I/O operation latency by more than
15 seconds. With certain limitations on configurations and advanced functions, this impact to
latency can be limited to 8 seconds. On logical volumes that are not configured with RAID-10
storage, certain RAID-related recoveries may cause latency impacts in excess of 15 seconds.
If you have real-time response requirements in this area, contact IBM to determine the latest
information on how to manage your storage to meet your requirements.

5.4.3 NVS recovery after complete power loss


During normal operation, the DS8000 preserves fast writes using the NVS copy in the
alternate server. To ensure these fast writes are not lost, the DS8000 contains battery backup
units (BBUs). If all the batteries were to fail (which is extremely unlikely since the batteries are
in an N+1 redundant configuration), the DS8000 would lose this protection and consequently
that DS8000 would take all servers offline. If power is lost to a single primary power supply
this does not affect the ability of the other power supply to keep all batteries charged, so all
servers would remain online.


The single purpose of the batteries is to preserve the NVS area of server memory in the event
of a complete loss of input power to the DS8000. If both power supplies in the base frame
were to stop receiving input power, the servers would be informed that they were now running
on batteries and immediately begin a shutdown procedure. Unless the power line disturbance
feature has been purchased, the BBUs are not used to keep the disks spinning. Even if they
do keep spinning, the design is to not move the data from NVS to the FC-AL disk arrays.
Instead, each processor complex has a number of internal SCSI disks which are available to
store the contents of NVS. When a shutdown caused by an on-battery condition begins, the
following events occur:
1. All host adapter I/O is blocked.
2. Each server begins copying its NVS data to internal disk. For each server, two copies are
made of the NVS data in that server.
3. When the copy process is complete, each server shuts down.
4. When shutdown in each server is complete (or a timer expires), the DS8000 is powered
down.

When power is restored to the DS8000, the following process occurs:


1. The processor complexes power on and perform power on self tests.
2. Each server then begins boot up.
3. At a certain stage in the boot process, the server detects NVS data on its internal SCSI
disks and begins to destage it to the FC-AL disks.
4. When the battery units reach a certain level of charge, the servers come online.

An important point is that the servers will not come online until the batteries are sufficiently
charged to at least handle one outage (typically within a few minutes). In many cases,
sufficient charging will occur during the power on self test and storage image initialization.
However, if a complete discharge of the batteries has occurred, which may happen if multiple
power outages occur in a short period of time, then recharging may take up to two hours.

Because the contents of NVS are written to the internal SCSI disks of the DS8000 processor
complex and not held in battery protected NVS-RAM, the contents of NVS can be preserved
indefinitely. This means that unlike the DS6000 or the ESS 800, you are not held to a fixed
limit of time before power must be restored.

5.5 Host connection availability


Each DS8000 Fibre Channel host adapter card provides four ports for connection either
directly to a host, or to a Fibre Channel SAN switch.

Single or multiple path


Unlike the DS6000, the DS8000 does not use the concept of preferred path, since the host
adapters are shared between the servers. To show this concept, Figure 5-5 on page 80
depicts a potential machine configuration. In this example, a DS8100 Model 931 has two I/O
enclosures (which are enclosures 2 and 3). Each enclosure has four host adapters: two Fibre
Channel and two ESCON. I/O enclosure slots 3 and 6 are not depicted because they are
reserved for device adapter (DA) cards. If a host were to only have a single path to a DS8000
as shown in Figure 5-5, then it would still be able to access volumes belonging to all LSSs
because the host adapter will direct the I/O to the correct server. However, if an error were to
occur either on the host adapter (HA), host port (HP), or I/O enclosure, then all connectivity
would be lost. Clearly the host bus adapter (HBA) in the attached host is also a single point of
failure.


Figure 5-5 Single pathed host (diagram: a host with one HBA attached to a single host port on one
Fibre Channel host adapter in I/O enclosure 2; the servers owning the even and odd LSS logical
volumes connect to I/O enclosures 2 and 3 over RIO-G; each enclosure holds two Fibre Channel and
two ESCON host adapters in slots 1, 2, 4, and 5)

It is always preferable that hosts that access the DS8000 have at least two connections to
separate host ports in separate host adapters on separate I/O enclosures, as in Figure 5-6.

Figure 5-6 Dual pathed host (diagram: a host with two HBAs, each attached to a host port on a
different Fibre Channel host adapter in a different I/O enclosure)


In this example, the host is attached to different Fibre Channel host adapters in different I/O
enclosures. This is also important because during a microcode update, an I/O enclosure may
need to be taken offline. This configuration allows the host to survive a hardware failure on
any component on either path.

SAN/FICON/ESCON switches
Because a large number of hosts may be connected to the DS8000, each using multiple
paths, the number of host adapter ports that are available in the DS8000 may not be sufficient
to accommodate all the connections. The solution to this problem is the use of SAN switches
or directors to switch logical connections from multiple hosts. In a System z environment you
will need to select a SAN switch or director that also supports FICON. ESCON-attached hosts
may need an ESCON director.

A logic or power failure in a switch or director can interrupt communication between hosts and
the DS8000. We recommend that more than one switch or director be provided to ensure
continued availability. Ports from two different host adapters in two different I/O enclosures
should be configured to go through each of two directors. The complete failure of either
director leaves half the paths still operating.
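
The value of configuring two independent paths through two directors can be shown with simple
availability arithmetic. The per-path availability figure below is purely an assumed value for
illustration.

    # Availability arithmetic for independent paths; the per-path availability
    # value is an assumed figure for illustration only.

    def multi_path_availability(single_path_availability, paths):
        """Probability that at least one of several independent paths is available."""
        return 1.0 - (1.0 - single_path_availability) ** paths

    a = 0.995   # assumed availability of one complete path (HBA, director, host port)
    print(f"one path : {multi_path_availability(a, 1):.6f}")
    print(f"two paths: {multi_path_availability(a, 2):.6f}")

With the assumed figure, two independent paths reduce the expected connectivity outage by roughly
two orders of magnitude compared to a single path.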

Multi-pathing software
Each attached host operating system requires a mechanism to allow it to manage multiple
paths to the same device, and to preferably load balance these requests. Also, when a failure
occurs on one redundant path, then the attached host must have a mechanism to allow it to
detect that one path is gone and route all I/O requests for those logical devices to an
alternative path. Finally, it should be able to detect when the path has been restored so that
the I/O can again be load balanced. The mechanism that will be used varies by attached host
operating system and environment as detailed in the next two sections.

5.5.1 Open systems host connection — Subsystem Device Driver (SDD)


In the majority of open systems environments, IBM strongly recommends the use of the
Subsystem Device Driver (SDD) to manage both path failover and preferred path
determination. SDD is a software product that IBM supplies free of charge with the DS8000,
as well as with the SAN Volume Controller (SVC) and the DS6000.

SDD provides availability through automatic I/O path failover. If a failure occurs in the data
path between the host and the DS8000, SDD automatically switches the I/O to another path.
SDD will also automatically set the failed path back online after a repair is made. SDD also
improves performance by sharing I/O operations to a common disk over multiple active paths
to distribute and balance the I/O workload. For the DS6000 and SVC, SDD also supports the
concept of preferred path.
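
The following minimal Python sketch illustrates the general multipathing behavior described above: I/O is balanced across the available paths, rerouted when a path fails, and balanced again when the path is restored. It is purely illustrative and is not SDD itself; the class, method, and path names are invented for this example.

import itertools

class MultipathDevice:
    """Illustrative model of multipath behavior only (not SDD): round-robin
    load balancing, failover on path loss, failback on path restore."""

    def __init__(self, paths):
        self.state = {p: "online" for p in paths}     # path name -> state

    def fail_path(self, path):
        self.state[path] = "offline"                  # path loss detected

    def restore_path(self, path):
        self.state[path] = "online"                   # path repaired and back online

    def online_paths(self):
        return [p for p, s in self.state.items() if s == "online"]

    def submit(self, n_ios):
        """Distribute n_ios I/O requests round-robin over the surviving paths."""
        paths = self.online_paths()
        if not paths:
            raise IOError("no remaining paths to the logical device")
        rr = itertools.cycle(paths)
        return [next(rr) for _ in range(n_ios)]

# Two paths through separate host adapters (hypothetical path names).
dev = MultipathDevice(["hba0-port1", "hba1-port1"])
print(dev.submit(4))              # I/O spread across both paths
dev.fail_path("hba0-port1")
print(dev.submit(4))              # all I/O rerouted to the surviving path
dev.restore_path("hba0-port1")
print(dev.submit(4))              # load balancing resumes across both paths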

SDD is not available for every supported operating system. Refer to the IBM System Storage
DS8000 Host Systems Attachment Guide, SC26-7917, and the interoperability Web site for
direction as to which multi-pathing software may be required. Some devices, such as the IBM
SAN Volume Controller (SVC), do not require any multi-pathing software because the internal
software in the device already supports multi-pathing. The interoperability Web site is:

http://www.ibm.com/servers/storage/disk/ds8000/interop.html

For more information on the SDD see Section 15.1.4, “Multipathing support — Subsystem
Device Driver (SDD)” on page 290.


5.5.2 System z host connection


In the System z environment, the normal practice is to provide multiple paths from each host
to a disk subsystem. Typically, four paths are installed. The channels in each host that can
access each Logical Control Unit (LCU) in the DS8000 are defined in the HCD (hardware
configuration definition) or IOCDS (I/O configuration data set) for that host. Dynamic Path
Selection (DPS) allows the channel subsystem to select any available (non-busy) path to
initiate an operation to the disk subsystem. Dynamic Path Reconnect (DPR) allows the
DS8000 to select any available path to a host to reconnect and resume a disconnected
operation; for example, to transfer data after disconnection due to a cache miss.

These functions are part of the System z architecture and are managed by the channel
subsystem in the host and the DS8000.

A physical FICON/ESCON path is established when the DS8000 port sees light on the fiber
(for example, a cable is plugged in to a DS8000 host adapter, a processor or the DS8000 is
powered on, or a path is configured online by OS/390). At this time, logical paths are
established through the port between the host and some or all of the LCUs in the DS8000,
controlled by the HCD definition for that host. This happens for each physical path between a
System z CPU and the DS8000. There may be multiple system images in a CPU. Logical
paths are established for each system image. The DS8000 then knows which paths can be
used to communicate between each LCU and each host.

CUIR
Control Unit Initiated Reconfiguration (CUIR) prevents loss of access to volumes in System z
environments due to wrong path handling. This function automates channel path
management in System z environments, in support of selected DS8000 service actions.

Control Unit Initiated Reconfiguration is available for the DS8000 when operated in the z/OS
and z/VM environments. The CUIR function automates channel path vary on and vary off
actions to minimize manual operator intervention during selected DS8000 service actions.

CUIR allows the DS8000 to request that all attached system images set all paths required for
a particular service action to the offline state. System images with the appropriate level of
software support will respond to such requests by varying off the affected paths, and either
notifying the DS8000 subsystem that the paths are offline, or that it cannot take the paths
offline. CUIR reduces manual operator intervention and the possibility of human error during
maintenance actions, at the same time reducing the time required for the maintenance. This
is particularly useful in environments where there are many systems attached to a DS8000.

5.6 Disk subsystem


The DS8000 supports RAID-5 and RAID-10. It does not support the non-RAID configuration
of disks better known as JBOD (just a bunch of disks).

5.6.1 Disk path redundancy


Each DDM in the DS8000 is attached to two 20-port SAN switches. These switches are built
into the disk enclosure controller cards. Figure 5-7 on page 83 illustrates the redundancy
features of the DS8000 switched disk architecture. Each disk has two separate connections
to the backplane. This allows it to be simultaneously attached to both switches. If either disk
enclosure controller card is removed from the enclosure, the switch that is included in that
card is also removed. However, the switch in the remaining controller card retains the ability to
communicate with all the disks and both device adapters (DAs) in a pair. Equally, each DA


has a path to each switch, so it also can tolerate the loss of a single path. If both paths from
one DA fail, then it cannot access the switches; however, the other DA retains connection.

Figure 5-7 Switched disk connections

Figure 5-7 also shows the connection paths for expansion on the far left and far right. The
paths from the switches travel to the switches in the next disk enclosure. Because expansion
is done in this linear fashion, the addition of more enclosures is completely non-disruptive.

5.6.2 RAID-5 overview


RAID-5 is one of the most commonly used forms of RAID protection.

RAID-5 theory
The DS8000 series supports RAID-5 arrays. RAID-5 is a method of spreading volume data
plus parity data across multiple disk drives. RAID-5 provides faster performance by striping
data across a defined set of DDMs. Data protection is provided by the generation of parity
information for every stripe of data. If an array member fails, then its contents can be
regenerated by using the parity data.

RAID-5 implementation in the DS8000


In a DS8000, a RAID-5 array built on one array site contains either seven or eight disks,
depending on whether the array site is supplying a spare. A seven-disk array effectively uses
one disk for parity, so it is referred to as a 6+P array (where the P stands for parity). Only
seven disks are available to a 6+P array because the eighth disk in the array site used to
build the array was taken as a spare; we then refer to this as a 6+P+S array site (where the S
stands for spare). An eight-disk array also effectively uses one disk for parity, so it is referred
to as a 7+P array.


Drive failure
When a disk drive module fails in a RAID-5 array, the device adapter starts an operation to
reconstruct the data that was on the failed drive onto one of the spare drives. The spare that
is used will be chosen based on a smart algorithm that looks at the location of the spares and
the size and location of the failed DDM. The rebuild is performed by reading the
corresponding data and parity in each stripe from the remaining drives in the array,
performing an exclusive-OR operation to recreate the data, then writing this data to the spare
drive.
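
As a minimal illustration of the exclusive-OR reconstruction just described, the following Python sketch regenerates the strip of a failed drive in a single stripe from the surviving data strips and the parity strip. It models one stripe only and is not the device adapter's rebuild code.

from functools import reduce

def xor_strips(strips):
    """Byte-wise exclusive-OR of equal-length strips."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*strips))

# One stripe on a hypothetical small RAID-5 array: three data strips plus parity.
d1, d2, d3 = b"\x11\x22\x33\x44", b"\x55\x66\x77\x88", b"\x99\xaa\xbb\xcc"
parity = xor_strips([d1, d2, d3])

# The drive holding d2 fails; its contents are regenerated from the survivors.
rebuilt = xor_strips([d1, d3, parity])
assert rebuilt == d2              # the spare now holds an exact copy of the lost strip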

While this data reconstruction is going on, the device adapter can still service read and write
requests to the array from the hosts. There may be some degradation in performance while
the sparing operation is in progress because some DA and switched network resources are
being used to do the reconstruction. Due to the switch-based architecture, this effect will be
minimal. Additionally, any read request for data on the failed drive requires data to be read
from the other drives in the array, and the DA then performs an operation to reconstruct the
data.

Performance of the RAID-5 array returns to normal when the data reconstruction onto the
spare device completes. The time taken for sparing can vary, depending on the size of the
failed DDM and the workload on the array, the switched network, and the DA. The use of
arrays across loops (AAL) both speeds up rebuild time and decreases the impact of a rebuild.

5.6.3 RAID-10 overview


RAID-10 is not as commonly used as RAID-5, mainly because more raw disk capacity is
needed for every GB of effective capacity.

RAID-10 theory
RAID-10 provides high availability by combining features of RAID-0 and RAID-1. RAID-0
optimizes performance by striping volume data across multiple disk drives at a time. RAID-1
provides disk mirroring, which duplicates data between two disk drives. By combining the
features of RAID-0 and RAID-1, RAID-10 provides a second optimization for fault tolerance.
Data is striped across half of the disk drives in the RAID-1 array. The same data is also
striped across the other half of the array, creating a mirror. Access to data is preserved if one
disk in each mirrored pair remains available. RAID-10 offers faster data reads and writes than
RAID-5 because it does not need to manage parity. However, with half of the DDMs in the
group used for data and the other half to mirror that data, RAID-10 disk groups have less
capacity than RAID-5 disk groups.

RAID-10 implementation in the DS8000


In the DS8000 the RAID-10 implementation is achieved using either six or eight DDMs. If
spares exist on the array site, then six DDMs are used to make a three-disk RAID-0 array
which is then mirrored. If spares do not exist on the array site then eight DDMs are used to
make a four-disk RAID-0 array which is then mirrored.
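
For comparison, this short Python sketch computes the approximate usable capacity of the array formats discussed in this section from a nominal per-DDM capacity. The figures ignore formatting overhead and are only meant to show why RAID-10 consumes more raw capacity per usable GB than RAID-5; they are not official capacity numbers.

def usable_capacity(ddm_gb, layout):
    """Approximate usable capacity of one 8-DDM array site in the given format."""
    data_ddms = {
        "RAID-5 6+P+S":   6,   # one DDM for parity, one spare taken from the site
        "RAID-5 7+P":     7,   # one DDM for parity, no spare on the site
        "RAID-10 3x2+2S": 3,   # three mirrored pairs, two spares taken from the site
        "RAID-10 4x2":    4,   # four mirrored pairs, no spares on the site
    }
    return data_ddms[layout] * ddm_gb

for layout in ("RAID-5 6+P+S", "RAID-5 7+P", "RAID-10 3x2+2S", "RAID-10 4x2"):
    print(f"{layout:>15}: ~{usable_capacity(146, layout)} GB usable from 8 x 146 GB DDMs")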

Drive failure
When a disk drive module (DDM) fails in a RAID-10 array, the controller starts an operation to
reconstruct the data from the failed drive onto one of the hot spare drives. The spare that is
used will be chosen based on a smart algorithm that looks at the location of the spares and
the size and location of the failed DDM. Remember a RAID-10 array is effectively a RAID-0
array that is mirrored. Thus when a drive fails in one of the RAID-0 arrays, we can rebuild the
failed drive by reading the data from the equivalent drive in the other RAID-0 array.


While this data reconstruction is going on, the DA can still service read and write requests to
the array from the hosts. There may be some degradation in performance while the sparing
operation is in progress because some DA and switched network resources are being used to
do the reconstruction. Due to the switch-based architecture of the DS8000, this effect will be
minimal. Read requests for data on the failed drive should not be affected because they can
all be directed to the good RAID-1 array.

Write operations will not be affected. Performance of the RAID-10 array returns to normal
when the data reconstruction onto the spare device completes. The time taken for sparing
can vary, depending on the size of the failed DDM and the workload on the array and the DA.

Arrays across loops


The DS8000 implements the concept of arrays across loops (AAL). With AAL, an array site is
actually split into two halves. Half of the site is located on the first disk loop of a DA pair and
the other half is located on the second disk loop of that DA pair. It is implemented primarily to
maximize performance. However, in RAID-10 we are able to take advantage of AAL to provide
a higher level of redundancy. The DS8000 RAS code will deliberately ensure that one RAID-0
array is maintained on each of the two loops created by a DA pair. This means that in the
extremely unlikely event of a complete loop outage, the DS8000 would not lose access to the
RAID-10 array. This is because while one RAID-0 array is offline, the other remains available
to service disk I/O.

5.6.4 Spare creation


When the array sites are created on a DS8000, the DS8000 microcode determines which
sites will contain spares. The first four array sites will normally each contribute one spare to
the DA pair, with two spares being placed on each loop. In general, each device adapter pair
will thus have access to four spares.

On the ESS 800 the spare creation policy was to have four DDMs on each SSA loop for each
DDM type. This meant that on a specific SSA loop it was possible to have 12 spare DDMs if
you chose to populate a loop with three different DDM sizes. The DS8000 intentionally avoids
this. A minimum of one spare is created for each array site defined until the following
conditions are met (see the sketch after this list):
򐂰 A minimum of 4 spares per DA pair
򐂰 A minimum of 4 spares of the largest capacity array site on the DA pair
򐂰 A minimum of 2 spares of capacity and RPM greater than or equal to the fastest array site
of any given capacity on the DA pair
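
The following Python sketch is one literal reading of these three rules, assuming each array site and each spare is described by its DDM capacity and RPM. It is only an illustration of the policy as stated here, not the DS8000 microcode, and the example values are hypothetical.

def spares_sufficient(array_sites, spares):
    """array_sites and spares are lists of (capacity_gb, rpm) tuples for one DA pair."""
    if len(spares) < 4:                                    # rule 1
        return False
    largest = max(cap for cap, _ in array_sites)
    if sum(1 for cap, _ in spares if cap >= largest) < 4:  # rule 2
        return False
    for cap in {c for c, _ in array_sites}:                # rule 3, per capacity present
        fastest = max(rpm for c, rpm in array_sites if c == cap)
        matching = sum(1 for sc, srpm in spares if sc >= cap and srpm >= fastest)
        if matching < 2:
            return False
    return True

sites = [(146, 15000)] * 4 + [(300, 10000)] * 4            # hypothetical DA pair
print(spares_sufficient(sites, spares=[(300, 15000)] * 4)) # True
print(spares_sufficient(sites, spares=[(146, 15000)] * 4)) # False: rule 2 fails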

Floating spares
The DS8000 implements a smart floating technique for spare DDMs. On an ESS 800, the
spare floats. This means that when a DDM fails and the data it contained is rebuilt onto a
spare, then when the disk is replaced, the replacement disk becomes the spare. The data is
not migrated to another DDM, such as the DDM in the original position the failed DDM
occupied. So in other words, on an ESS 800 there is no post repair processing.

The DS8000 microcode may choose to allow the hot spare to remain where it has been
moved, but it may instead choose to migrate the spare to a more optimal position. This will
be done to better balance the spares across the DA pairs, the loops, and the enclosures. It
may be preferable that a DDM that is currently in use as an array member be converted to a
spare. In this case the data on that DDM will be migrated in the background onto an existing
spare. This process does not fail the disk that is being migrated, though it does reduce the
number of available spares in the DS8000 until the migration process is complete.


A smart process will be used to ensure that the larger or higher RPM DDMs always act as
spares. This is preferable because if we were to rebuild the contents of a 146 GB DDM onto a
300 GB DDM, then approximately half of the 300 GB DDM will be wasted since that space is
not needed. The problem here is that the failed 146 GB DDM will be replaced with a new
146 GB DDM. So the DS8000 microcode will most likely migrate the data back onto the
recently replaced 146 GB DDM. When this process completes, the 146 GB DDM will rejoin
the array and the 300 GB DDM will become the spare again. Another example would be a
failed 73 GB 15K RPM DDM whose data is rebuilt onto a 146 GB 10K RPM spare. The data
has now moved to a slower DDM, but the replacement DDM will be the same type as the failed
DDM, so the array would be left with a mix of RPMs, which is not desirable. Again, a smart
migration of the data will be performed once suitable spares become available.

Hot pluggable DDMs


Replacement of a failed drive does not affect the operation of the DS8000 because the drives
are fully hot pluggable. Because each disk plugs into a switch, there is no loop break
associated with the removal or replacement of a disk. In addition, there is no potentially
disruptive loop initialization process.

Over configuration of spares


The DDM sparing policies support the over configuration of spares. This possibility may be of
interest to some installations as it allows the repair of some DDM failures to be deferred until
a later repair action is required.

5.6.5 Predictive Failure Analysis (PFA)


The drives used in the DS8000 incorporate Predictive Failure Analysis (PFA) and can
anticipate certain forms of failures by keeping internal statistics of read and write errors. If the
error rates exceed predetermined threshold values, the drive will be nominated for
replacement. Because the drive has not yet failed, data can be copied directly to a spare
drive. This avoids using RAID recovery to reconstruct all of the data onto the spare drive.

5.6.6 Disk scrubbing


The DS8000 will periodically read all sectors on a disk. This is designed to occur without any
interference with application performance. If ECC-correctable bad bits are identified, the bits
are corrected immediately by the DS8000. This reduces the possibility of multiple bad bits
accumulating in a sector beyond the ability of ECC to correct them. If a sector contains data
that is beyond ECC's ability to correct, then RAID is used to regenerate the data and write a
new copy onto a spare sector of the disk. This scrubbing process applies to both array
members and spare DDMs.

5.7 Power and cooling


The DS8000 has completely redundant power and cooling. Every power supply and cooling
fan in the DS8000 operates in what is known as N+1 mode. This means that there is always
at least one more power supply, cooling fan, or battery than is required for normal operation.
In most cases this simply means duplication.

Primary power supplies


Each frame has two primary power supplies (PPS). Each PPS produces voltages for two
different areas of the machine:


򐂰 208V is produced to be supplied to each I/O enclosure and each processor complex. This
voltage is placed by each supply onto two redundant power buses.
򐂰 12V and 5V are produced to be supplied to the disk enclosures.

If either PPS fails, the other can continue to supply all required voltage to all power buses in
that frame. The PPS can be replaced concurrently.

Important: It should be noted that if you install the DS8000 such that both primary power
supplies are attached to the same circuit breaker or the same switch board, then the
DS8000 will not be well protected from external power failures. This is a very common
cause of unplanned outages.

Battery backup units


Each frame with I/O enclosures, or every frame if the power line disturbance feature is
installed, will have battery backup units (BBU). Each BBU can be replaced concurrently,
provided no more than one BBU is unavailable at any one time. The DS8000 BBUs have a
planned working life of at least four years.

Rack cooling fans


Each frame has a cooling fan plenum located above the disk enclosures. The fans in this
plenum draw air from the front of the DDMs and then move it out through the top of the frame.
There are multiple redundant fans in each enclosure. Each fan can be replaced concurrently.

Rack power control card (RPC)


The rack power control cards are part of the power management infrastructure of the
DS8000. There are two RPC cards for redundancy. Each card can independently control
power for the entire DS8000.

5.7.1 Building power loss


The DS8000 uses an area of server memory as non-volatile storage (NVS). This area of
memory is used to hold data that has not been written to the disk subsystem. If building power
were to fail, where both primary power supplies (PPSs) in the base frame were to report a
loss of AC input power, then the DS8000 must take action to protect that data.

5.7.2 Power fluctuation protection


The DS8000 base frame contains battery backup units that are intended to protect modified
data in the event of a complete power loss. If a power fluctuation occurs that causes a
momentary interruption to power (often called a brownout) then the DS8000 will tolerate this
for approximately 30ms. If the power line disturbance feature is not present on the DS8000,
then after that time, the DDMs will stop spinning and the servers will begin copying the
contents of NVS to the internal SCSI disks in the processor complexes. For many customers
who use UPS (uninterruptible power supply) technology, this is not an issue. UPS-regulated
power is in general very reliable, so additional redundancy in the attached devices is often
completely unnecessary.

If building power is not considered reliable then the addition of the extended power line
disturbance feature should be considered. This feature adds two separate pieces of hardware
to the DS8000:


1. For each primary power supply in each frame of the DS8000, a booster module is added
that converts 208V battery power into 12V and 5V. This is to supply the DDMs with power
directly from the batteries. The PPSs do not normally receive power from the BBUs.
2. Batteries will be added to expansion racks that did not already have them. Base racks and
expansion racks with I/O enclosures get batteries by default. Expansion racks that do not
have I/O enclosures normally do not get batteries.

With the addition of this hardware, the DS8000 will be able to run for up to 50 seconds on
battery power, before the servers begin to copy NVS to SCSI disk and then shutdown. This
would allow for a 50 second interruption to building power with no outage to the DS8000.

5.7.3 Power control of the DS8000


The DS8000 does not possess a white power switch to turn the DS8000 storage unit off and
on, as was the case with the ESS 800. All power sequencing is done via the Service
Processor Control Network (SPCN) and RPCs. If the user wishes to power the DS8000 off,
they must do so using the management tools provided by the Storage Hardware
Management Console (S-HMC). If the S-HMC is not functional, then it will not be possible to
control the power sequencing of the DS8000 until the S-HMC function is restored. This is one
of the benefits that is gained by purchasing a redundant S-HMC.

5.7.4 Emergency power off (EPO)


Each DS8000 frame has an emergency power off switch. This button is intended purely to
remove power from the DS8000 in the following extreme cases:
򐂰 The DS8000 has developed a fault which is placing the environment at risk, such as a fire.
򐂰 The DS8000 is placing human life at risk, such as the electrocution of a service
representative.

Apart from these two contingencies (which are highly unlikely), the EPO switch should never
be used. The reason for this is that the DS8000 NVS storage area is not directly protected by
batteries. If building power is lost, the DS8000 can use its internal batteries to destage the
data from NVS memory to a variably sized disk area to preserve that data until power is
restored. However, the EPO switch does not allow this destage process to happen and all
NVS data is lost. This will most likely result in data loss.

If you need to power the DS8000 off for building maintenance, or to relocate it, you should
always use the S-HMC to achieve this.

5.8 Microcode updates


The DS8000 contains many discrete redundant components. Most of these components have
firmware that can be updated. This includes the processor complexes, device adapters, and
host adapters. Each DS8000 server also has an operating system (AIX) and Licensed
Machine Code (LMC) that can be updated. As IBM continues to develop and improve the
DS8000, new releases of firmware and licensed machine code become available to offer
improvements in both function and reliability.

For detailed discussion on microcode updates refer to Chapter 18, “Licensed machine code”
on page 375.


Concurrent code updates


The architecture of the DS8000 allows for concurrent code updates. This is achieved by using
the redundant design of the DS8000. In general, redundancy is lost for a short period as each
component in a redundant pair is updated.

5.9 Management console


The DS8000 management network consists of redundant Ethernet switches and redundant
Storage Hardware Management (S-HMC) consoles.

S-HMC
The S-HMC is used to perform configuration, management, and maintenance activities on the
DS8000. It can be ordered to be located either physically inside the base frame or external for
mounting in a customer-supplied rack.

If the S-HMC is not operational then it is not possible to perform maintenance, power the
DS8000 up or down, or perform Copy Services tasks such as the establishment of
FlashCopies. It is thus recommended to order two management consoles to act as a
redundant pair. Alternatively, if TotalStorage Productivity Center - Replication Manager (TPC
RM) is used, copy services tasks can be managed by that tool if the S-HMC is unavailable.

Ethernet switches
Each DS8000 base frame contains two 16-port Ethernet switches. Two switches are supplied
to allow the creation of a fully redundant management network. Each server in the DS8000
has a connection to each switch. Each S-HMC also has a connection to each switch. This
means that should a single Ethernet switch fail, all traffic can successfully travel from either
S-HMC to any server in the storage unit using the alternate switch.

Remote support and call home


Call Home is the capability of the DS HMC to contact IBM support services to report a
problem. This is referred to as Call Home for service. The DS HMC will also provide
machine-reported product data (MRPD) information to IBM by way of the Call Home facility.

IBM Service personnel located outside of the client facility log in to the DS HMC to provide
service and support.

The remote support as well as the call home options are described in detail in Section 20.1,
“Call Home and remote support” on page 392.

5.10 Earthquake resistance kit (R2)


The Earthquake Resistance Kit is an optional seismic kit for stabilizing the storage unit rack,
so that the rack complies with IBM earthquake resistance standards. It helps to prevent
human injury and ensures that the system will be available following the earthquake, by
limiting potential damage to critical system components such as hard drives.

A storage unit rack with this optional seismic kit includes cross-braces on the front and rear of
the rack which prevent the rack from twisting. Hardware at the bottom of the rack secures it to
the floor. Depending on the flooring in your environment, specifically non-raised floors,
installation of required floor mounting hardware may be disruptive.

The Earthquake Resistance Kit is an optional feature for the DS8000 series.


Chapter 6. Virtualization concepts


This chapter describes virtualization concepts as they apply to the DS8000. In this chapter we
cover the following topics:
򐂰 Virtualization definition
򐂰 Storage system virtualization
򐂰 The abstraction layers for disk virtualization
– Array sites
– Arrays
– Ranks
– Extent pools
– Logical volumes
– Logical subsystems (LSS)
– Volume access
– Summary of the virtualization hierarchy
– Placement of data
򐂰 Benefits of virtualization


6.1 Virtualization definition


In a fast changing world, to react quickly to changing business conditions, IT infrastructure
must allow for on-demand changes. Virtualization is key to an on-demand infrastructure.
However, when talking about virtualization many vendors are talking about different things.

One important feature of the DS8000 is the virtualization of a whole storage subsystem. For
example, if a service provider runs workloads for different banks, it would be best to
completely separate the workloads. This could be done on the host side with IBM’s LPAR
technology; the same technology is now also available for a storage system, the IBM DS8000.

For this chapter, the definition of virtualization is the abstraction process from the physical
disk drives to a logical volume that the hosts and servers see as if it were a physical disk.

6.2 Storage system virtualization


The DS8300 Turbo Model 9B2 supports LPAR mode with up to two storage logical partitions
(storage images) on a physical storage system unit. In each partition runs a storage facility
image (SFI), a virtual storage subsystem with its own copy of Licensed Internal Code (LIC),
which consists of the AIX kernel and the functional code. Both SFIs share the physical
hardware and the LPAR hypervisor manages this sharing. See Chapter 3, “Storage system
LPARs (logical partitions)” on page 27 for more details about LPARs.

As in non-LPAR mode, where two SMPs run an AIX kernel and form a storage complex with
two servers, server 0 and server 1, an SFI is a storage complex of its own. Since it does not
own the physical hardware (the storage unit), you can think of it as a virtual storage system.
Each SFI has a server 0 and a server 1 and is kept totally separate from the other SFI by the
LPAR hypervisor. Disk drives and arrays are owned by one or the other SFI; they cannot be
shared. Figure 6-1 illustrates the storage LPAR concept. In the following sections, server 0 or
server 1 can also mean server 0 or server 1 of an SFI running in an LPAR.

Figure 6-1 Storage Facility virtualization


6.3 The abstraction layers for disk virtualization


In this chapter, when talking about virtualization, we are talking about the process of
preparing a bunch of physical disk drives (DDMs) to be something that can be used from an
operating system, which means we are talking about the creation of LUNs.

The DS8000 is populated with switched FC-AL disk drives that are mounted in disk
enclosures. You order disk drives in groups of 16 drives of the same capacity and RPM. The
disk drives can be accessed by a pair of device adapters. Each device adapter has four paths
to the disk drives. The four paths provide two FC-AL device interfaces, each with two paths,
such that either path can be used to communicate with any disk drive on that device interface
(in other words, the paths are redundant). One device interface from each device adapter is
connected to a set of FC-AL devices such that either device adapter has access to any disk
drive through two independent switched fabrics (in other words, the device adapters and
switches are redundant).

Each device adapter has four ports and since device adapters operate in pairs, there are
eight ports or paths to the disk drives. All eight paths can operate concurrently and could
access all disk drives on the attached fabric. In normal operation, however, disk drives are
typically accessed by one device adapter. Which device adapter owns the disk is defined
during the logical configuration process. This avoids any contention between the two device
adapters for access to the disks.

Figure 6-2 Physical layer as the base for virtualization

Figure 6-2 shows the physical layer on which virtualization is based.


Compare this with the ESS design, where there was a real loop and having an 8-pack close to
a device adapter was an advantage. This is no longer relevant for the DS8000. Because of
the switching design, each drive is in close reach of the device adapter, apart from a few more
hops through the Fibre Channel switches for some drives. So, it is not really a loop, but a
switched FC-AL loop with the FC-AL addressing schema: Arbitrated Loop Physical
Addressing (AL-PA).

6.3.1 Array sites


An array site is a group of eight DDMs. Which DDMs make up an array site is predetermined
by the DS8000, but note that there is no predetermined server affinity for array sites. The
DDMs selected for an array site are chosen from two disk enclosures on different loops; see
Figure 6-3. The DDMs in the array site are of the same DDM type, which means the same
capacity and the same speed (RPM).

Figure 6-3 Array site

As you can see from Figure 6-3, array sites span loops. Four DDMs are taken from loop 1 and
another four DDMs from loop 2. Array sites are the building blocks used to define arrays.

6.3.2 Arrays
An array is created from one array site. Forming an array means defining it as a specific
RAID type. The supported RAID types are RAID-5 and RAID-10 (see 5.6.2, “RAID-5
overview” on page 83 and 5.6.3, “RAID-10 overview” on page 84). For each array site you can
select a RAID type. The process of selecting the RAID type for an array is also called defining
an array.

Note: In the DS8000 implementation, one array is defined using one array site.

According to the DS8000 sparing algorithm, from zero to two spares may be taken from the
array site. This is discussed further in Section 5.6.4, “Spare creation” on page 85.


Figure 6-4 shows the creation of a RAID-5 array with one spare, also called a 6+P+S array
(capacity of 6 DDMs for data, capacity of one DDM for parity, and a spare drive). According to
the RAID-5 rules, parity is distributed across all seven drives in this example.

On the right-hand side in Figure 6-4 the terms D1, D2, D3, and so on, stand for the set of data
contained on one disk within a stripe on the array. If, for example, 1 GB of data is written, it is
distributed across all the disks of the array.

Figure 6-4 Creation of an array

So, an array is formed using one array site, and while the array could be accessed by each
adapter of the device adapter pair, it is managed by one device adapter. Which adapter and
which server manage this array is defined later in the configuration process.

6.3.3 Ranks
In the DS8000 virtualization hierarchy there is another logical construct, a rank. When
defining a new rank, its name is chosen by the DS Storage Manager, for example, R1, R2, or
R3, and so on. You have to add an array to a rank.

Note: In the DS8000 implementation, a rank is built using just one array.

The available space on each rank will be divided into extents. The extents are the building
blocks of the logical volumes. An extent is striped across all disks of an array as shown in
Figure 6-5 on page 96 and indicated by the small squares in Figure 6-6 on page 97.

The process of forming a rank does two things:


򐂰 The array is formatted for either FB (open systems) or CKD (System z) data. This
determines the size of the set of data contained on one disk within a stripe on the array.
򐂰 The capacity of the array is subdivided into equal sized partitions, called extents. The
extent size depends on the extent type, FB or CKD.


An FB rank has an extent size of 1 GB (where 1 GB equals 2^30 bytes).

People who work in the System z environment typically do not deal with gigabytes, instead
they think of storage in metrics of the original 3390 volume sizes. A 3390 Model 3 is three
times the size of a Model 1, and a Model 1 has 1113 cylinders which is about 0.94 GB. The
extent size of a CKD rank therefore was chosen to be one 3390 Model 1 or 1113 cylinders.

One extent is the minimum physical allocation unit when a LUN or CKD volume is created, as
we discuss later. It is still possible to define a CKD volume with a capacity that is an integral
multiple of one cylinder or a fixed block LUN with a capacity that is an integral multiple of 128
logical blocks (64K bytes). However, if the defined capacity is not an integral multiple of the
capacity of one extent, the unused capacity in the last extent is wasted. For instance, you
could define a 1 cylinder CKD volume, but 1113 cylinders (1 extent) is allocated and 1112
cylinders would be wasted.
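
The following Python sketch, using the extent sizes defined above (1113 cylinders for CKD, 1 GB for FB), shows how a requested volume size maps to allocated extents and how much capacity is wasted in the last extent. It is a back-of-the-envelope helper for illustration, not part of any DS8000 tool.

import math

CKD_EXTENT_CYLS = 1113                      # one 3390 Model 1

def ckd_allocation(requested_cylinders):
    extents = math.ceil(requested_cylinders / CKD_EXTENT_CYLS)
    wasted = extents * CKD_EXTENT_CYLS - requested_cylinders
    return extents, wasted

def fb_allocation(requested_gib):
    extents = math.ceil(requested_gib)      # one extent = 1 GB (2**30 bytes)
    return extents, extents - requested_gib

print(ckd_allocation(1))       # (1, 1112): a 1-cylinder volume wastes 1112 cylinders
print(ckd_allocation(3339))    # (3, 0):    a 3390 Model 3 size fits exactly
print(fb_allocation(25.5))     # (26, 0.5): a 25.5 GB LUN leaves 0.5 GB unusable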

Figure 6-5 shows an example of an array that is formatted for FB data with 1 GB extents (the
squares in the rank just indicate that the extent is composed of several blocks from different
DDMs).

Figure 6-5 Forming an FB rank with 1 GB extents

6.3.4 Extent pools


An extent pool is a logical construct to aggregate the extents from a set of ranks to form a
domain for extent allocation to a logical volume. Typically the set of ranks in the extent pool
would have the same RAID type and the same disk RPM characteristics so that the extents in
the extent pool have homogeneous characteristics. There is no predefined affinity of ranks or
arrays to a storage server. The affinity of the rank (and its associated array) to a given server
is determined at the point it is assigned to an extent pool.

One or more ranks with the same extent type can be assigned to an extent pool. One rank can
be assigned to only one extent pool. There can be as many extent pools as there are ranks.


The DS Storage Manager GUI guides the user to use the same RAID types in an extent pool.
As such, when an extent pool is defined, it must be assigned with the following attributes:
– Server affinity
– Extent type
– RAID type

The minimum number of extent pools is one; however, normally it should be at least two with
one assigned to server 0 and the other to server 1 so that both servers are active. In an
environment where FB and CKD are to go onto the DS8000 storage server, four extent pools
would provide one FB pool for each server, and one CKD pool for each server, to balance the
capacity between the two servers. Figure 6-6 is an example of a mixed environment with CKD
and FB extent pools. Additional extent pools may also be desirable to segregate ranks with
different DDM types. Extent pools are expanded by adding more ranks to the pool. Ranks are
organized in two rank groups; rank group 0 is controlled by server 0 and rank group 1 is
controlled by server 1.

Important: Capacity should be balanced between the two servers for best performance.

Figure 6-6 Extent pools

6.3.5 Logical volumes


A logical volume is composed of a set of extents from one extent pool.

On a DS8000 up to 65280 (we use the abbreviation 64K in this discussion, even though it is
actually 65536 - 256, which is not quite 64K in binary) volumes can be created (64K CKD, or
64K FB volumes, or a mix of both types, but the sum cannot exceed 64K).


Fixed Block LUNs


A logical volume composed of fixed block extents is called a LUN. A fixed block LUN is
composed of one or more 1 GB (2^30 bytes) extents from one FB extent pool. A LUN cannot span
multiple extent pools, but a LUN can have extents from different ranks within the same extent
pool. You can construct LUNs up to a size of 2 TB (2^40 bytes).

LUNs can be allocated in binary GB (2^30 bytes), decimal GB (10^9 bytes), or 512 or 520 byte
blocks. However, the physical capacity that is allocated for a LUN is always a multiple of 1 GB,
so it is a good idea to have LUN sizes that are a multiple of a gigabyte. If you define a LUN
with a LUN size that is not a multiple of 1 GB, for example, 25.5 GB, the LUN size is 25.5 GB,
but 26 GB are physically allocated and 0.5 GB of the physical storage is unusable.

CKD volumes
A System z CKD volume is composed of one or more extents from one CKD extent pool. CKD
extents are of the size of 3390 Model 1, which has 1113 cylinders. However, when you define
a System z CKD volume, you do not specify the number of 3390 Model 1 extents but the
number of cylinders you want for the volume.

You can define CKD volumes with up to 65520 cylinders, which is about 55.6 GB.

If the number of cylinders specified is not an integral multiple of 1113 cylinders, then some
space in the last allocated extent is wasted. For example, if you define 1114 or 3340
cylinders, 1112 cylinders are wasted. For maximum storage efficiency, you should consider
allocating volumes that are exact multiples of 1113 cylinders. In fact, integral multiples of 3339
cylinders should be considered for future compatibility.

If you want to use the maximum number of cylinders (65520), you should consider that this is
not a multiple of 1113. You could go with 65520 cylinders and waste 147 cylinders for each
volume (the difference to the next multiple of 1113) or you might be better off with a volume
size of 64554 cylinders which is a multiple of 1113 (factor of 58), or even better, with 63441
cylinders which is a multiple of 3339, a model 3 size.
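
This small Python sketch reproduces the arithmetic behind these choices: for a candidate volume size in cylinders it reports how many cylinders would be wasted in the last 1113-cylinder extent. It is purely illustrative.

def wasted_cylinders(cylinders, extent_cyls=1113):
    """Cylinders left unused in the last extent for a given volume size."""
    return (-cylinders) % extent_cyls

for cyls in (65520, 64554, 63441):
    print(f"{cyls} cylinders: waste {wasted_cylinders(cyls)} cylinders per volume")
# 65520 cylinders: waste 147 cylinders per volume
# 64554 cylinders: waste 0 cylinders per volume   (58 x 1113)
# 63441 cylinders: waste 0 cylinders per volume   (19 x 3339)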

Figure 6-7 Allocation of a CKD logical volume


A CKD volume cannot span multiple extent pools, but a volume can have extents from
different ranks in the same extent pool. Figure 6-7 shows how a logical volume is allocated
with a CKD volume as an example. The allocation process for FB volumes is very similar and
is shown in Figure 6-8.

Figure 6-8 Creation of an FB LUN

System i LUNs
System i LUNs are also composed of fixed block 1 GB extents. There are, however, some
special aspects with System i LUNs. LUNs created on a DS8000 are always RAID protected.
LUNs are based on RAID-5 or RAID-10 arrays. However, you might want to deceive OS/400
and tell it that the LUN is not RAID protected. This causes OS/400 to do its own mirroring.
System i LUNs can have the attribute unprotected, in which case the DS8000 will lie to a
System i host and tell it that the LUN is not RAID protected.

OS/400 only supports certain fixed volume sizes, for example model sizes of 8.5 GB,
17.5 GB, and 35.1 GB. These sizes are not multiples of 1 GB and hence, depending on the
model chosen, some space is wasted. System i LUNs expose a 520 byte block to the host.
The operating system uses 8 of these bytes, so the usable space is still 512 bytes like other
SCSI LUNs. The capacities quoted for System i LUNs are in terms of the 512 byte block
capacity and are expressed in decimal GB (10^9 bytes). These capacities should be converted
to binary GB (2^30 bytes) when considering effective utilization of extents that are 1 GB
(2^30 bytes). For more information on this topic see Appendix 17, “System i considerations” on page 343.
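
As a rough illustration of that conversion, the following Python sketch translates the quoted decimal capacities of the OS/400 volume models into binary GB and the resulting number of 1 GB extents. Treat the output as an approximation of the idea, not as authoritative capacity figures.

import math

def i5_lun_extents(quoted_decimal_gb):
    """Convert a quoted decimal-GB System i LUN size into 1 GB (2**30 byte) extents."""
    binary_gb = quoted_decimal_gb * 10**9 / 2**30
    extents = math.ceil(binary_gb)
    return binary_gb, extents, extents - binary_gb

for model_gb in (8.5, 17.5, 35.1):
    binary_gb, extents, unused = i5_lun_extents(model_gb)
    print(f"{model_gb} GB model -> {binary_gb:.2f} binary GB, "
          f"{extents} extents, about {unused:.2f} GB of the last extent unused")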

Allocation and deletion of LUNs/CKD volumes


All extents of the ranks assigned to an extent pool are independently available for allocation to
logical volumes. The extents for a LUN/volume are logically ordered, but they do not have to
come from one rank and the extents do not have to be contiguous on a rank. The current
extent allocation algorithm of the DS8000 will not distribute the extents across ranks. The
algorithm will use available extents within one rank unless that rank does not have enough
free extents and free extents are available in another rank of the same extent pool. Given this
behavior, the user may want to consider putting one rank per extent pool to control the


allocation of logical volumes across ranks to improve performance, except for the case in
which the logical volume needed is larger than the total capacity of the single rank.

This construction method of using fixed extents to form a logical volume in the DS8000 allows
flexibility in the management of the logical volumes. We can now delete LUNs and reuse the
extents of those LUNs to create other LUNs, maybe of different sizes. One logical volume can
be removed without affecting the other logical volumes defined on the same extent pool.
Compared to the ESS, where it was not possible to delete a LUN unless the whole array was
reformatted, this DS8000 implementation gives you much more flexibility and allows for on
demand changes according to your needs.

Since the extents are cleaned after you have deleted a LUN or CKD volume, it may take some
time until these extents are available for reallocation. The reformatting of the extents is a
background process.

6.3.6 Logical subsystems (LSS)


A logical subsystem (LSS) is another logical construct. It groups logical volumes (LUNs or
CKD volumes) into sets of up to 256 logical volumes.

On an ESS there was a fixed association between logical subsystems (and their associated
logical volumes) and device adapters (and their associated ranks). The association of an
8-pack to a device adapter determined what LSS numbers could be chosen for a volume. On
an ESS up to 16 LSSs could be defined depending on the physical configuration of device
adapters and arrays.

On the DS8000, there is no fixed binding between any rank and any logical subsystem. The
capacity of one or more ranks can be aggregated into an extent pool and logical volumes
configured in that extent pool are not bound to any specific rank. Different logical volumes on
the same logical subsystem can be configured in different extent pools. As such, the available
capacity of the storage facility can be flexibly allocated across the set of defined logical
subsystems and logical volumes.

This predetermined association between array and LSS is gone on the DS8000. Also the
number of LSSs has changed. You can now define up to 255 LSSs for the DS8000. You can
even have more LSSs than arrays.

For each LUN or CKD volume you can now choose an LSS. You can put up to 256 volumes
into one LSS. There is, however, one restriction. We already have seen that volumes are
formed from a bunch of extents from an extent pool. Extent pools, however, belong to one
server, server 0 or server 1, respectively. LSSs also have an affinity to the servers. All even
numbered LSSs (X’00’, X’02’, X’04’, up to X’FE’) belong to server 0 and all odd numbered
LSSs (X’01’, X’03’, X’05’, up to X’FD’) belong to server 1. LSS X’FF’ is reserved.

System z users are familiar with a logical control unit (LCU). System z operating systems
configure LCUs to create device addresses. There is a one to one relationship between an
LCU and a CKD LSS (LSS X'ab' maps to LCU X'ab'). Logical volumes have a logical volume
number X'abcd' where X'ab' identifies the LSS and X'cd' is one of the 256 logical volumes on
the LSS. This logical volume number is assigned to a logical volume when a logical volume is
created and determines the LSS that it is associated with. The 256 possible logical volumes
associated with an LSS are mapped to the 256 possible device addresses on an LCU (logical
volume X'abcd' maps to device address X'cd' on LCU X'ab'). When creating CKD logical
volumes and assigning their logical volume numbers, users should consider whether Parallel
Access Volumes (PAV) are required on the LCU and reserve some of the addresses on the
LCU for alias addresses. For more information on PAV see 16.3, “z/OS considerations” on
page 335.


For open systems, LSSs do not play an important role except in determining which server the
LUN is managed by (and which extent pools it must be allocated in) and in certain aspects
related to Metro Mirror, Global Mirror, or any of the other remote copy implementations.

Some management actions in Metro Mirror, Global Mirror, or Global Copy operate at the LSS
level. For example the freezing of pairs to preserve data consistency across all pairs, in case
you have a problem with one of the pairs, is done at the LSS level. Because you now have the
option to put all or most of the volumes of a certain application in just one LSS, the
management of remote copy operations becomes easier; see Figure 6-9.

Figure 6-9 Grouping of volumes in LSSs

Of course you could have put all volumes for one application in one LSS on an ESS, too, but
then all volumes of that application would also be in one or a few arrays, and from a
performance standpoint this was not desirable. Now on the DS8000 you can group your
volumes in one or a few LSSs but still have the volumes in many arrays or ranks.

Fixed block LSSs are created automatically when the first fixed block logical volume on the
LSS is created and deleted automatically when the last fixed block logical volume on the LSS
is deleted. CKD LSSs require user parameters to be specified and must be created before the
first CKD logical volume can be created on the LSS; they must be deleted manually after the
last CKD logical volume on the LSS is deleted.

Address groups
Address groups are created automatically when the first LSS associated with the address
group is created, and deleted automatically when the last LSS in the address group is
deleted.

LSSs are either CKD LSSs or FB LSSs. All devices in an LSS must be either CKD or FB. This
restriction goes even further. LSSs are grouped into address groups of 16 LSSs. LSSs are
numbered X'ab', where a is the address group and b denotes an LSS within the address
group. So, for example X'10' to X'1F' are LSSs in address group 1.


All LSSs within one address group have to be of the same type, CKD or FB. The first LSS
defined in an address group fixes the type of that address group.

System z users who still want to use ESCON to attach hosts to the DS8000 should be aware
that ESCON supports only the 16 LSSs of address group 0 (LSS X'00' to X'0F'). Therefore
this address group should be reserved for ESCON-attached CKD devices, in this case, and
not used as FB LSSs.

Figure 6-10 shows the concept of LSSs and address groups.

Figure 6-10 Logical storage subsystems

The LUN identifications X'gabb' are composed of the address group X'g', and the LSS
number within the address group X'a’, and the position of the LUN within the LSS X'bb'. For
example, LUN X'2101' denotes the second (X'01') LUN in LSS X'21' of address group 2.
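
The following Python sketch decodes a volume ID along the lines just described, splitting it into address group, LSS, and position within the LSS, and deriving the server affinity of even and odd LSSs mentioned earlier. The helper itself is invented for illustration.

def decode_volume_id(volume_id):
    """Split a hexadecimal volume ID X'abcd' into its components."""
    value = int(volume_id, 16)
    lss = value >> 8                        # X'ab': the logical subsystem
    address_group = lss >> 4                # X'a' : 16 LSSs per address group
    position = value & 0xFF                 # X'cd': one of 256 volumes in the LSS
    server = 0 if lss % 2 == 0 else 1       # even LSS -> server 0, odd LSS -> server 1
    return address_group, lss, position, server

ag, lss, pos, server = decode_volume_id("2101")
print(f"address group {ag}, LSS X'{lss:02X}', volume X'{pos:02X}', server {server}")
# -> address group 2, LSS X'21', volume X'01', server 1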

6.3.7 Volume access


A DS8000 provides mechanisms to control host access to LUNs. In most cases a server has
two or more HBAs and the server needs access to a group of LUNs. For easy management of
server access to logical volumes, the DS8000 introduced the concept of host attachments
and volume groups.

Host attachment
HBAs are identified to the DS8000 in a host attachment construct that specifies the HBAs’
World Wide Port Names (WWPNs). A set of host ports can be associated through a port
group attribute that allows a set of HBAs to be managed collectively. This port group is
referred to as host attachment within the GUI.


A given host attachment can be associated with only one volume group. Each host
attachment can be associated with a volume group to define which LUNs that HBA is allowed
to access. Multiple host attachments can share the same volume group. The host attachment
may also specify a port mask that controls which DS8000 I/O ports the HBA is allowed to log
in to. Whichever ports the HBA logs in on, it sees the same volume group that is defined in the
host attachment associated with this HBA.

The maximum number of host attachments on a DS8000 is 8192.

Volume group
A volume group is a named construct that defines a set of logical volumes. When used in
conjunction with CKD hosts, there is a default volume group that contains all CKD volumes
and any CKD host that logs into a FICON I/O port has access to the volumes in this volume
group. CKD logical volumes are automatically added to this volume group when they are
created and automatically removed from this volume group when they are deleted.

When used in conjunction with Open Systems hosts, a host attachment object that identifies
the HBA is linked to a specific volume group. The user must define the volume group by
indicating which fixed block logical volumes are to be placed in the volume group. Logical
volumes may be added to or removed from any volume group dynamically.

There are two types of volume groups used with Open Systems hosts and the type
determines how the logical volume number is converted to a host addressable LUN_ID on the
Fibre Channel SCSI interface. A map volume group type is used in conjunction with FC SCSI
host types that poll for LUNs by walking the address range on the SCSI interface. This type of
volume group can map any FB logical volume numbers to 256 LUN_IDs that have zeroes in
the last six bytes and the first two bytes in the range of X'0000' to X'00FF'.

A mask volume group type is used in conjunction with FC SCSI host types that use the Report
LUNs command to determine the LUN_IDs that are accessible. This type of volume group
can allow any and all FB logical volume numbers to be accessed by the host where the mask
is a bitmap that specifies which LUNs are accessible. For this volume group type, the logical
volume number X'abcd' is mapped to LUN_ID X'40ab40cd00000000'. The volume group type
also controls whether 512 byte block LUNs or 520 byte block LUNs can be configured in the
volume group.
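
To make the mapping concrete, here is a minimal Python sketch that converts a logical volume number X'abcd' into the LUN_ID reported through a mask-type volume group, following the X'40ab40cd00000000' pattern described above. It is illustrative only.

def mask_lun_id(volume_number):
    """Map logical volume number X'abcd' to LUN_ID X'40ab40cd00000000'."""
    value = int(volume_number, 16)
    ab, cd = value >> 8, value & 0xFF
    return f"40{ab:02X}40{cd:02X}00000000"

print(mask_lun_id("2101"))    # 4021400100000000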

When associating a host attachment with a volume group, the host attachment contains
attributes that define the logical block size and the Address Discovery Method (LUN Polling or
Report LUNs) that are used by the host HBA. These attributes must be consistent with the
volume group type of the volume group that is assigned to the host attachment so that HBAs
that share a volume group have a consistent interpretation of the volume group definition and
have access to a consistent set of logical volume types. The GUI typically sets these values
appropriately for the HBA based on the user specification of a host type. The user must
consider what volume group type to create when setting up a volume group for a particular
HBA.

FB logical volumes may be defined in one or more volume groups. This allows a LUN to be
shared by host HBAs configured to different volume groups. An FB logical volume is
automatically removed from all volume groups when it is deleted.

The maximum number of volume groups is 8320 for the DS8000.


Figure 6-11 Host attachments and volume groups (host attachments AIXprod1, AIXprod2, Test, and Prog, with their WWPNs, mapped to volume groups DB2-1, DB2-2, DB2-test, and docs)

Figure 6-11 shows the relationships between host attachments and volume groups. Host
AIXprod1 has two HBAs, which are grouped together in one host attachment, and both are
granted access to volume group DB2-1. Most of the volumes in volume group DB2-1 are also
in volume group DB2-2, accessed by server AIXprod2. In our example, there is, however, one
volume in each group that is not shared. The server in the lower left part has four HBAs and
they are divided into two distinct host attachments. One can access some volumes shared
with AIXprod1 and AIXprod2. The other HBAs have access to a volume group called “docs.”

6.3.8 Summary of the virtualization hierarchy


Going through the virtualization hierarchy, we started with just a bunch of disks that were
grouped in array sites. An array site was transformed into an array, eventually with spare
disks. The array was further transformed into a rank with extents formatted for FB or CKD
data. Next, the extents were added to an extent pool that determined which storage server
would serve the ranks and aggregated the extents of all ranks in the extent pool for
subsequent allocation to one or more logical volumes.

Next we created logical volumes within the extent pools, assigning them a logical volume
number that determined which logical subsystem they would be associated with and which
server would manage them. Then the LUNs could be assigned to one or more volume
groups. Finally the host HBAs were configured into a host attachment that is associated with
a given volume group.
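As a way to visualize the hierarchy just summarized, the following sketch models the objects as simple Python classes. This is purely illustrative; the class names and attributes are ours and do not correspond to actual DS8000 microcode structures.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Rank:                          # built from an array, which came from an array site
    storage_type: str                # "FB" or "CKD"
    extents: int                     # number of extents formatted for that storage type

@dataclass
class ExtentPool:
    server: int                      # 0 or 1, the server that serves the ranks in this pool
    ranks: List[Rank] = field(default_factory=list)

    def total_extents(self) -> int:
        return sum(r.extents for r in self.ranks)

@dataclass
class LogicalVolume:
    volume_number: int               # determines the LSS and therefore the managing server
    extents: int                     # allocated from the extent pool

@dataclass
class VolumeGroup:
    volumes: List[LogicalVolume] = field(default_factory=list)

@dataclass
class HostAttachment:                # one or more HBA WWPNs managed collectively
    wwpns: List[str]
    volume_group: VolumeGroup        # the set of LUNs these HBAs are allowed to access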

This new virtualization concept provides for much more flexibility. Logical volumes can
dynamically be created and deleted. They can be grouped logically to simplify storage
management. Large LUNs and CKD volumes reduce the total number of volumes and this
also contributes to a reduction of the management effort.

Figure 6-12 on page 105 summarizes the virtualization hierarchy.


Figure 6-12 Virtualization hierarchy (array site, RAID array, rank of FB extents, extent pool served by server 0, logical volume, LSS and address group, volume group, host attachment)

6.3.9 Placement of data

Figure 6-13 Optimal placement of data (a host LVM volume striped across LUNs taken from extent pools on both servers, on different DA pairs, and on both loops)


As explained in the previous chapters, there are several options on how to create logical
volumes. You can select an extent pool that is owned by one server. There could be just one
extent pool per server or you could have several. The ranks of extent pools could come from
arrays on different device adapter pairs and different loops or from the same loop. Figure 6-13
shows an optimal distribution of eight logical volumes within a DS8000. Of course you could
have more extent pools and ranks, but when you want to distribute your data for optimal
performance, you should make sure that you spread it across the two servers, across different
device adapter pairs, across the loops, and across several ranks.

If you use some kind of a logical volume manager (like LVM on AIX) on your host, you can
create a host logical volume from several DS8000 logical volumes (LUNs). You can select
LUNs from different DS8000 servers, device adapter pairs, and loops as shown in
Figure 6-13. By striping your host logical volume across the LUNs, you will get the best
performance for this LVM volume.
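As a simple illustration of that spreading rule, the following sketch assumes a hypothetical inventory of extent pools named as in Figure 6-13 and picks LUN locations round-robin across servers and DA pairs; it is not a DS8000 tool, just a way to reason about the distribution.

extent_pools = {                     # hypothetical inventory, keyed by (server, DA pair)
    ("server0", "DA1"): ["FB-0a", "FB-0b"],
    ("server0", "DA2"): ["FB-0c", "FB-0d"],
    ("server1", "DA1"): ["FB-1a", "FB-1b"],
    ("server1", "DA2"): ["FB-1c", "FB-1d"],
}

def pick_pools_for_lvm(count: int):
    # Round-robin over the (server, DA pair) groups so that the LUNs used for one
    # striped host LVM volume are spread across both servers and both DA pairs.
    groups = list(extent_pools.values())
    picks = []
    for i in range(count):
        group = groups[i % len(groups)]
        picks.append(group[(i // len(groups)) % len(group)])
    return picks

print(pick_pools_for_lvm(8))
# ['FB-0a', 'FB-0c', 'FB-1a', 'FB-1c', 'FB-0b', 'FB-0d', 'FB-1b', 'FB-1d']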

6.4 Benefits of virtualization


The DS8000 physical and logical architecture defines new standards for enterprise storage
virtualization. The main benefits of the virtualization layers are:
򐂰 Flexible LSS definition allows maximization/optimization of number of devices per LSS.
򐂰 No strict relationship between RAID ranks and LSSs.
򐂰 No connection of LSS performance to underlying storage.
򐂰 Number of LSSs can be defined based upon device number requirements:
– With larger devices significantly fewer LSSs might be used.
– Volumes for a particular application can be kept in a single LSS.
– Smaller LSSs can be defined if required (for applications requiring less storage).
– Test systems can have their own LSSs with fewer volumes than production systems.
򐂰 Increased number of logical volumes:
– Up to 65280 (CKD)
– Up to 65280 (FB)
– 65280 total for CKD + FB
򐂰 Any mix of CKD or FB addresses in 4096 address groups.
򐂰 Increased logical volume size:
– CKD: 55.6 GB (65520 cylinders), architected for 219 TB
– FB: 2 TB, architected for 1 PB
򐂰 Flexible logical volume configuration:
– Multiple RAID types (RAID-5, RAID-10)
– Storage types (CKD and FB) aggregated into extent pools
– Volumes allocated from extents of extent pool
– Dynamically add/remove volumes
򐂰 Virtualization reduces storage management requirements.


Chapter 7. Copy Services


This chapter discusses the Copy Services functions available with the DS8000 series models, which include several remote mirror and copy functions as well as the point-in-time copy function (FlashCopy).

These functions make the DS8000 series a key component for disaster recovery solutions,
data migration activities, as well as for data duplication and backup solutions.

In this chapter we cover the following topics:


򐂰 Copy Services
򐂰 FlashCopy
򐂰 Remote mirror and copy
– Metro Mirror
– Global Copy
– Global Mirror
– Metro/Global Mirror
– z/OS Global Mirror
– z/OS Metro/Global Mirror
򐂰 Interfaces for Copy Services
򐂰 Interoperability

The information discussed in this chapter is covered in greater detail in the following redbooks:
򐂰 IBM System Storage DS8000 Series: Copy Services in Open Environments, SG24-6788
򐂰 IBM System Storage DS8000 Series: Copy Services with System z servers, SG24-6787


7.1 Copy Services


Copy Services is a collection of functions that provide disaster recovery, data migration, and data duplication capabilities. With the Copy Services functions you can, for example, create backup data with little or no disruption to your applications, and you can back up your application data to a remote site for disaster recovery.

The Copy Services functions run on the DS8000 storage unit and support open systems and System z environments. These functions are also supported on the DS6000 series and on the previous generation of storage systems, the IBM TotalStorage Enterprise Storage Server (ESS) models.

DS8000 Copy Services functions


Copy Services in the DS8000 includes the following optional licensed functions:
򐂰 IBM System Storage FlashCopy, which is a point-in-time copy function
򐂰 Remote mirror and copy functions, which include:
– IBM System Storage Metro Mirror —previously known as synchronous PPRC
– IBM System Storage Global Copy —previously known as PPRC Extended Distance
– IBM System Storage Global Mirror —previously known as asynchronous PPRC
– IBM System Storage Metro/Global Mirror —a 3-site solution to meet the most rigorous
business resiliency needs
򐂰 Additionally, for the System z users the following are available:
– z/OS Global Mirror —previously known as Extended Remote Copy (XRC)
– z/OS Metro/Global Mirror —a 3-site solution that combines z/OS Global Mirror and
Metro Mirror

Many design characteristics of the DS8000 and its data copy and mirror capabilities contribute to the protection of your data, 24 hours a day and seven days a week.

We discuss these Copy Services functions in the following sections.

Copy Services management interfaces


You control and manage the DS8000 Copy Services functions by means of the following
interfaces:
򐂰 DS Storage Manager (DS SM) which is a graphical user interface (GUI) running in a Web
browser that communicates with the DS8000 Hardware Management Console (DS HMC).
򐂰 DS Command Line Interface (DS CLI) which provides various commands that are
executed on the DS HMC.
򐂰 TotalStorage Productivity Center for Replication (TPC for Replication). The TPC
Replication Manager server connects to the DS8000.
򐂰 DS Open Application Programming Interface (DS Open API).

System z users can also use the following interfaces:


򐂰 TSO commands
򐂰 ICKDSF utility commands
򐂰 ANTRQST application programming interface (API)
򐂰 DFSMSdss utility

We explain these interfaces in 7.4, “Interfaces for Copy Services” on page 124.


7.2 FlashCopy
FlashCopy is designed to provide a point-in-time copy capability for logical volumes with
minimal interruption to applications, and makes it possible to access both the source and
target copies immediately.

In this section we discuss FlashCopy basic characteristics and options. For a more detailed
and extensive discussion on these topics, you can refer to the redbooks: IBM System Storage
DS8000 Series: Copy Services in Open Environments, SG24-6788, and IBM System Storage
DS8000 Series: Copy Services with System z servers, SG24-6787.

7.2.1 Basic concepts


FlashCopy creates a point-in-time copy of the data. When a FlashCopy operation is invoked,
it takes only a few seconds to complete the process of establishing a FlashCopy source and
target volume pair, and creating the necessary control bitmaps. Thereafter, you have access
to a point-in-time copy of the source volume —as though all the data had been copied. As
soon as the pair has been established, you can read and write to both the source and target
volumes.

Once the FlashCopy relationship is established, a background process can optionally start to copy the tracks from the source to the target volume; with FlashCopy you also have the option of not doing a physical copy of the volumes. See Figure 7-1 for an illustration of these FlashCopy basic concepts.

Note: In this chapter, track means a piece of data in the DS8000; the DS8000 uses the
logical tracks to manage the Copy Services functions.

Figure 7-1 FlashCopy concepts (the FlashCopy command is issued at time T0 and the copy is immediately available; reads and writes to both the source and the target are possible; when the copy is complete, the relationship between source and target ends)

If you access the source or the target volumes during the background copy, FlashCopy
manages these I/O requests as follows:


򐂰 Read from the source volume


When a read request goes to the source volume, it is read from the source volume.
򐂰 Read from the target volume
When a read request goes to the target volume, FlashCopy checks the bitmap and:
– If the point-in-time data was already copied to the target volume, it is read from the
target volume.
– If the point-in-time data has not been copied yet, it is read from the source volume.
򐂰 Write to the source volume
When a write request goes to the source volume, first the data is written to the cache and
persistent memory (write cache). Then when the update is destaged to the source volume,
FlashCopy checks the bitmap and:
– If the point-in-time data was already copied to the target, then the update is written to
the source volume.
– If the point-in-time data has not been copied yet to the target, then first it is copied to
the target volume, and after that the update is written to the source volume.
򐂰 Write to the target volume
Whenever data is written to the target volume while the FlashCopy relationship exists, the
storage subsystem makes sure that the bitmap is updated. This way the point-in-time data
from the source volume never overwrites updates done directly to the target volume.

The background copy can have a slight impact on your application because the physical copy needs some storage resources, but the impact is minimal because host I/O always takes priority over the background copy. And if you want, you can issue FlashCopy with the no background copy option.
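The read and write handling just listed can be summarized with a conceptual sketch. The following Python fragment models the bitmap-driven copy-on-write behavior at track granularity; it is an illustration of the logic described above, not DS8000 microcode.

class FlashCopyRelationship:
    def __init__(self, source, target):
        self.source, self.target = source, target   # tracks modeled as simple dicts
        self.copied = set()                         # bitmap: tracks already copied to target

    def read_target(self, track):
        if track in self.copied:                    # point-in-time data already on the target
            return self.target[track]
        return self.source[track]                   # otherwise satisfied from the source

    def write_source(self, track, data):
        if track not in self.copied:                # preserve the point-in-time data first
            self.target[track] = self.source[track]
            self.copied.add(track)
        self.source[track] = data

    def write_target(self, track, data):
        self.target[track] = data
        self.copied.add(track)                      # a direct target update is never overwritten

fc = FlashCopyRelationship({0: "A", 1: "B"}, {})
fc.write_source(0, "A'")                            # old track 0 is copied to the target first
print(fc.read_target(0), fc.read_target(1))         # A B: the point-in-time image is preserved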

No background copy option


If you invoke FlashCopy with the no background copy option, the FlashCopy relationship is
established without initiating a background copy. Therefore, you can minimize the impact of
the background copy. When the DS8000 receives an update to a source track in a FlashCopy
relationship, a copy of the point-in-time data is copied to the target volume so that it is
available when the data from the target volume is accessed. This option is useful when you
don’t need to issue FlashCopy in the opposite direction.

7.2.2 Benefits and use


The point-in-time copy created by FlashCopy is typically used where you need a copy of the
production data to be produced with little or no application downtime (depending on the
application). It can be used for online backup, testing of new applications, or for creating a
database for data-mining purposes. The copy looks exactly like the original source volume
and is an instantly available, binary copy.

7.2.3 Licensing requirements


FlashCopy is an optional licensed function of the IBM System Storage DS8000 series. This feature is also referred to as the Point-in-Time Copy (PTC) licensed function.

To use FlashCopy you must have the corresponding licensed function indicator feature in the
DS8000, and you must acquire the corresponding DS8000 function authorization with the
adequate feature number —license in terms of physical capacity. For details on features and
functions requirements, see Section 11.1, “DS8000 licensed functions” on page 198.


Note: For a detailed explanation of the features involved and the considerations that apply when ordering FlashCopy, we recommend that you refer to the announcement letters:
򐂰 IBM System Storage DS8000 — Function Authorization (IBM 239x or 2244)
򐂰 IBM System Storage DS8000 Series (IBM 242x or 2107)

IBM announcement letters can be found at http://www.ibm.com/products

7.2.4 FlashCopy options


FlashCopy has many options and expanded functions for data copy. We explain some of the
options and capabilities in this section:
򐂰 Incremental FlashCopy (refresh target volume)
򐂰 Data Set FlashCopy
򐂰 Consistency Group FlashCopy
򐂰 Establish FlashCopy on existing Metro Mirror or Global Copy primary
򐂰 Persistent FlashCopy
򐂰 Inband Commands over remote mirror link

Incremental FlashCopy (refresh target volume)


Refresh target volume provides the ability to refresh a LUN or volume involved in a FlashCopy
relationship. When a subsequent FlashCopy operation is initiated, only the tracks changed on
both the source and target need to be copied from the source to the target. The direction of
the refresh can also be reversed.

In many cases, no more than 10 to 20 percent of the data changes in a day. In such a situation, if you use this function for daily backups, you can save much of the time required for the physical copy of FlashCopy.

Figure 7-2 explains the basic characteristics of Incremental FlashCopy.

Figure 7-2 Incremental FlashCopy (the initial FlashCopy relationship is established with the change recording and persistent copy options, creating a control bitmap for each volume; when an Incremental FlashCopy is started, tracks changed on the source are copied to the target, tracks changed on the target are overwritten by the corresponding source tracks, and a reverse operation, where the target updates the source, is also possible)


When using the Incremental FlashCopy option this is what happens:


1. At first, you issue full FlashCopy with the change recording option. This option is for
creating change recording bitmaps in the storage unit. The change recording bitmaps are
used for recording the tracks which are changed on the source and target volumes after
the last FlashCopy.
2. After creating the change recording bitmaps, Copy Services records the information for
the updated tracks to the bitmaps. The FlashCopy relationship persists even if all of the
tracks have been copied from the source to the target.
3. The next time you issue Incremental FlashCopy, Copy Services checks the change
recording bitmaps and copies only the changed tracks to the target volumes. If some
tracks on the target volumes are updated, these tracks are overwritten by the
corresponding tracks from the source volume.

You can also issue incremental FlashCopy from the target volume to the source volumes with
the reverse restore option. The reverse restore operation cannot be done unless the
background copy in the original direction has finished.
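The change recording mechanism can be sketched as follows. This is a simplified illustration with invented names, not product code; it only shows that a subsequent FlashCopy needs to refresh the tracks changed on either volume since the last copy.

source_changed, target_changed = set(), set()       # change recording bitmaps

def host_write(volume, changed_bitmap, track, data):
    volume[track] = data
    changed_bitmap.add(track)                        # record the update for the next increment

def incremental_flashcopy(source, target):
    to_refresh = source_changed | target_changed     # changed on source or target
    for track in to_refresh:
        target[track] = source[track]                # copy only the changed tracks
    source_changed.clear()
    target_changed.clear()
    return len(to_refresh)

src = {0: "A", 1: "B", 2: "C"}
tgt = dict(src)                                      # state after the initial full FlashCopy
host_write(src, source_changed, 1, "B'")             # production update on the source
host_write(tgt, target_changed, 2, "X")              # direct update on the target
print(incremental_flashcopy(src, tgt), tgt)          # 2 {0: 'A', 1: "B'", 2: 'C'}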

Data Set FlashCopy


Data Set FlashCopy allows a FlashCopy of a data set in a System z environment.

Figure 7-3 Data Set FlashCopy (volume level FlashCopy compared with data set level FlashCopy)

Multiple Relationship FlashCopy


Multiple Relationship FlashCopy allows a source to have FlashCopy relationships with
multiple targets simultaneously. A source volume or extent can be FlashCopied to up to 12
target volumes or target extents, as illustrated in Figure 7-4 on page 113.

Note: If a FlashCopy source volume has more than one target, that source volume can be
involved only in a single incremental FlashCopy relationship.


Figure 7-4 Multiple Relationship FlashCopy (a maximum of 12 target volumes from one source volume)

Consistency Group FlashCopy


Consistency Group FlashCopy allows you to freeze (temporarily queue) I/O activity to a LUN
or volume. Consistency Group FlashCopy helps you to create a consistent point-in-time copy
across multiple LUNs or volumes, and even across multiple storage units.

What is Consistency Group FlashCopy?


If a consistent point-in-time copy across many logical volumes is required, and the user does
not wish to quiesce host I/O or database operations, then the user may use Consistency
Group FlashCopy to create a consistent copy across multiple logical volumes in multiple
storage units.

In order to create this consistent copy, the user would issue a set of establish FlashCopy
commands with a freeze option, which will hold off host I/O to the source volumes. In other
words, Consistency Group FlashCopy provides the capability to temporarily queue (at the
host I/O level, not the application level) subsequent write operations to the source volumes
that are part of the Consistency Group. During the temporary queueing, Establish FlashCopy
is completed. The temporary queueing continues until this condition is reset by the
Consistency Group Created command or the time-out value expires (the default is two
minutes).

Once all of the Establish FlashCopy requests have completed, a set of Consistency Group
Created commands must be issued via the same set of DS network interface servers. The
Consistency Group Created commands are directed to each logical subsystem (LSS)
involved in the consistency group. The Consistency Group Created command allows the write
operations to resume to the source volumes.
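That two-phase sequence can be summarized with a short orchestration sketch. The functions establish_flashcopy_with_freeze and consistency_group_created below are hypothetical placeholders for the real interface operations (issued, for example, through the DS CLI or the API); only the ordering reflects the description above.

def create_consistent_flashcopy(pairs_by_lss, establish_flashcopy_with_freeze,
                                consistency_group_created):
    # Phase 1: establish every FlashCopy pair with the freeze option; writes to the
    # affected source volumes are temporarily queued by the storage units.
    for lss, pairs in pairs_by_lss.items():
        for source, target in pairs:
            establish_flashcopy_with_freeze(source, target)

    # Phase 2: once all establishes have completed, release the queued writes by
    # issuing Consistency Group Created to every LSS involved in the group.
    for lss in pairs_by_lss:
        consistency_group_created(lss)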

This operation is illustrated in Figure 7-5 on page 114.

For a more detailed discussion of the concept of data consistency and how to manage the
Consistency Group operation you can refer to the redbooks IBM System Storage DS8000
Series: Copy Services in Open Environments, SG24-6788 and IBM System Storage DS8000
Series: Copy Services with System z servers, SG24-6787.


Figure 7-5 Consistency Group FlashCopy (write requests from the servers to the source LSSs across DS8000 #1, #2, #3, and so on are held while the FlashCopy relationships are established, until the Consistency Group Created command is invoked)

Important: Consistency Group FlashCopy creates host-based consistent copies; they are not application-based consistent copies. The copies have power-fail or crash level consistency. This means that if you suddenly power off your server without stopping your applications and without destaging the data in the file cache, the data in the file cache can be lost and you might need recovery procedures to restart your applications. To start your system with Consistency Group FlashCopy target volumes, you might need the same operations as for a crash recovery.

For example, if the Consistency Group source volumes are used with a journaled file system (like AIX JFS) and the source LUNs are not unmounted before running FlashCopy, it is likely that fsck will have to be run on the target volumes.

Establish FlashCopy on existing Metro Mirror or Global Copy primary


This option allows you to establish a FlashCopy relationship where the target is also a Metro
Mirror or Global Copy primary volume; see Figure 7-6 on page 115. This enables you to
create full or incremental point-in-time copies at a local site and then use remote mirroring to
copy the data to the remote site.

Note: You cannot FlashCopy from a source to a target, where the target is also a Global
Mirror primary volume.

Metro Mirror and Global Copy are explained in 7.3, “Remote mirror and copy” on page 115.


Figure 7-6 Establish FlashCopy on existing Metro Mirror or Global Copy primary (the FlashCopy target volume is at the same time the primary volume of a Metro Mirror or Global Copy pair that replicates to a secondary volume)

Persistent FlashCopy
Persistent FlashCopy allows the FlashCopy relationship to remain even after the copy
operation completes. You must explicitly delete the relationship.

Inband Commands over remote mirror link


In a remote mirror environment, commands to manage FlashCopy at the remote site can be
issued from the local or intermediate site and transmitted over the remote mirror Fibre
Channel links. This eliminates the need for a network connection to the remote site solely for
the management of FlashCopy.

Note: This function is only available through the use of the DS CLI commands and not the
DS Storage Manager GUI.

7.3 Remote mirror and copy


The remote mirror and copy functions of the DS8000 are a set of flexible data mirroring solutions that allow replication between volumes on two or more disk storage systems. These functions are used to implement remote data backup and disaster recovery solutions.

The remote mirror and copy functions are optional licensed functions of the DS8000 that include:
򐂰 Metro Mirror
򐂰 Global Copy
򐂰 Global Mirror
򐂰 Metro/Global Mirror

In addition, System z users can use the DS8000 for:


򐂰 z/OS Global Mirror
򐂰 z/OS Metro/Global Mirror

In the following sections we discuss these remote mirror and copy functions.


For a more detailed and extensive discussion on these topics, you can refer to the redbooks:
IBM System Storage DS8000 Series: Copy Services in Open Environments, SG24-6788, and
IBM System Storage DS8000 Series: Copy Services with System z servers, SG24-6787.

Licensing requirements
To use any of these remote mirror and copy optional licensed functions, you must have the
corresponding licensed function indicator feature in the DS8000, and you must acquire the
corresponding DS8000 function authorization with the adequate feature number —license in
terms of physical capacity. For details on features and functions requirements, see
Section 11.1, “DS8000 licensed functions” on page 198.

Also consider that some of the remote mirror solutions, like Global Mirror, or Metro/Global
Mirror, or z/OS Metro/Global Mirror, integrate more than one licensed function. In this case
you will need to have all of the required licensed functions.

Note: For a detailed explanation of the features involved and the considerations that apply when ordering Copy Services licensed functions, refer to the announcement letters:
򐂰 IBM System Storage DS8000 — Function Authorization (IBM 239x or 2244)
򐂰 IBM System Storage DS8000 Series (IBM 242x or 2107)

IBM announcement letters can be found at http://www.ibm.com/products

7.3.1 Metro Mirror


Metro Mirror (previously known as Synchronous Peer-to-Peer Remote Copy, PPRC) provides
real-time mirroring of logical volumes between two DS8000s that can be located up to 300 km
from each other. It is a synchronous copy solution where write operations are completed on
both copies (local and remote site) before they are considered to be complete.

Figure 7-7 illustrates the basic operations characteristics of Metro Mirror.

Figure 7-7 Metro Mirror basic operation (1: server write to the primary, local, volume; 2: write to the secondary, remote, volume; 3: write hit acknowledged by the secondary; 4: write acknowledged to the server)


7.3.2 Global Copy


Global Copy (previously known as Peer-to-Peer Remote Copy Extended Distance, PPRC-XD)
copies data non-synchronously and over longer distances than is possible with Metro Mirror.
When operating in Global Copy mode, the source volume sends a periodic, incremental copy
of updated tracks to the target volume, instead of sending a constant stream of updates. This
causes less impact to application writes for source volumes and less demand for bandwidth
resources, while allowing a more flexible use of the available bandwidth.

Global Copy does not keep the sequence of write operations. Therefore, the copy is normally fuzzy, but you can make a consistent copy through synchronization (called a go-to-sync operation). After the synchronization, you can issue FlashCopy at the secondary site to make a backup copy with data consistency. After the FlashCopy has been established, you can change the mode back to non-synchronous.
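That periodic backup procedure can be outlined in a short sketch. The helper functions go_to_sync, flashcopy, and resume_global_copy are hypothetical placeholders for the real Copy Services operations; only the ordering follows the description above.

def consistent_backup_cycle(pairs, go_to_sync, flashcopy, resume_global_copy):
    go_to_sync(pairs)                   # temporarily synchronize primary and secondary
    for primary, secondary, backup in pairs:
        flashcopy(secondary, backup)    # capture a consistent copy at the secondary site
    resume_global_copy(pairs)           # return to non-synchronous (fuzzy) copying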

Figure 7-8 Global Copy basic operation (1: server write to the primary, local, volume; 2: write acknowledged immediately; updates are sent to the secondary, remote, volume non-synchronously)

7.3.3 Global Mirror


Global Mirror (previously known as Asynchronous PPRC) is a two-site long distance
asynchronous remote copy technology. This solution integrates the Global Copy and
FlashCopy technologies. With Global Mirror, the data that the host writes to the storage unit at
the local site is asynchronously mirrored to the storage unit at the remote site. This way, a
consistent copy of the data is automatically maintained on the storage unit at the remote site.

Global Mirror operations provide the following benefits:


򐂰 Support for virtually unlimited distances between the local and remote sites, with the
distance typically limited only by the capabilities of the network and the channel extension
technology. This unlimited distance enables you to choose your remote site location
based on business needs and enables site separation to add protection from localized
disasters.


򐂰 A consistent and restartable copy of the data at the remote site, created with minimal
impact to applications at the local site.
򐂰 Data currency where, for many environments, the remote site lags behind the local site
typically 3 to 5 seconds, minimizing the amount of data exposure in the event of an
unplanned outage. The actual lag in data currency that you experience will depend upon a
number of factors, including specific workload characteristics and bandwidth between the
local and remote sites.
򐂰 Dynamic selection of the desired recovery point objective, based upon business
requirements and optimization of available bandwidth.
򐂰 Session support whereby data consistency at the remote site is internally managed across
up to eight storage units that are located across the local and remote sites.
򐂰 Efficient synchronization of the local and remote sites with support for failover and failback
operations, helping to reduce the time that is required to switch back to the local site after
a planned or unplanned outage.

Figure 7-9 illustrates the basic operation characteristics of Global Mirror.

Figure 7-9 Global Mirror basic operation (1: server write to the A volume; 2: write acknowledged; data is sent non-synchronously from A to B and automatically FlashCopied at the remote site, in a cycle controlled by the active session)

How Global Mirror works


Figure 7-10 on page 119 illustrates the basics of how Global Mirror works —everything in an
automatic fashion under the control of the DS8000 microcode and the Global Mirror session.

You can see in Figure 7-10 that the A volumes at the local site are the production volumes and
are used as Global Copy primary volumes. The data from the A volumes is replicated to the B
volumes, which are the Global Copy secondary volumes. At a certain point in time, a
Consistency Group is created using all of the A volumes, even if they are located in different
storage units. This has no application impact because the creation of the Consistency Group
is very quick (on the order of milliseconds).


Figure 7-10 How Global Mirror works (the A volumes at the local site are Global Copy primaries; the B volumes at the remote site are Global Copy secondaries and FlashCopy sources; the C volumes are FlashCopy targets). The automatic cycle in an active Global Mirror session is:
1. Create a Consistency Group of the volumes at the local site.
2. Send the increment of consistent data to the remote site.
3. FlashCopy at the remote site.
4. Resume Global Copy (copy out-of-sync data only).
5. Repeat all the steps according to the defined time period.

Once the Consistency Group is created, the application writes can continue updating the A
volumes. The increment of the consistent data is sent to the B volumes using the existing
Global Copy relationships. Once the data reaches the B volumes, it is FlashCopied to the C
volumes.

The C volumes now contain a consistent copy of the data. Because the B volumes usually contain a fuzzy copy of the data (except at the moment the FlashCopy is taken), the C volumes are used to hold the last consistent point-in-time copy of the data while the B volumes are being updated by Global Copy.
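The automatic cycle can be condensed into a short sketch. The helper functions are hypothetical placeholders for the microcode-driven steps; the ordering follows the cycle listed with Figure 7-10.

def global_mirror_cycle(create_consistency_group, drain_increment_to_b,
                        flashcopy_b_to_c, resume_global_copy):
    create_consistency_group()    # step 1: coordinate the A volumes (takes milliseconds)
    drain_increment_to_b()        # step 2: send the consistent increment to the B volumes
    flashcopy_b_to_c()            # step 3: preserve the consistent image on the C volumes
    resume_global_copy()          # step 4: copy out-of-sync data until the next cycle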

The data at the remote site is typically current within 3 to 5 seconds, but this recovery point objective (RPO) depends on the workload and the bandwidth available to the remote site.

With its efficient and autonomic implementation, Global Mirror is a solution for disaster
recovery implementations where a consistent copy of the data needs to be kept at all times at
a remote location that can be separated by a very long distance from the production site.

7.3.4 Metro/Global Mirror


Metro/Global Mirror is a three-site, multi-purpose, replication solution for both System z and
open systems data. Local site (site A) to intermediate site (site B) provides high availability
replication using Metro Mirror, and intermediate site (site B) to remote site (site C) supports
long distance disaster recovery replication with Global Mirror. See Figure 7-11 on page 120.


Figure 7-11 Metro/Global Mirror elements (Metro Mirror runs synchronously over a short distance from the local site, site A, to the intermediate site, site B; Global Mirror runs from site B to the remote site, site C, using asynchronous Global Copy over a long distance plus an incremental NOCOPY FlashCopy to the D volumes)

Both Metro Mirror and Global Mirror are well established replication solutions. Metro/Global
Mirror combines Metro Mirror and Global Mirror to incorporate the best features of the two
solutions:
򐂰 Metro Mirror
– Synchronous operation supports zero data loss.
– The opportunity to locate the intermediate site disk subsystems close to the local site
allows use of intermediate site disk subsystems in a high availability configuration.

Note: Metro Mirror can be used for distances of up to 300 km but, when used in a
Metro/Global Mirror implementation, a shorter distance might be more appropriate in
support of the high availability functionality.

򐂰 Global Mirror
– Asynchronous operation supports long distance replication for disaster recovery.
– Global Mirror methodology allows for no impact to applications at the local site.
– Provides a recoverable, restartable, consistent image at the remote site with a
Recovery Point Objective (RPO) typically in the 3-5 second range.

Metro/Global Mirror processes


Figure 7-12 on page 121 gives an overview of the Metro/Global Mirror process. The Metro/Global Mirror process is easier to understand if its component processes are understood. One important consideration is that the intermediate site volumes (site B volumes) are special because they act as both Global Mirror (GM) source and Metro Mirror (MM) target volumes at the same time.


Figure 7-12 Metro/Global Mirror overview diagram. Metro Mirror write: 1. application write to VolA; 2. VolA to VolB; 3. write complete to A; 4. write complete to the application. Global Mirror consistency group (CG) formation: a. writes to the B volumes are paused (less than 3 ms) to create the CG; b. the CG updates to the B volumes are drained to the C volumes; c. after all updates are drained, the changed data is FlashCopied from the C volumes to the D volumes.

The local site (site A) to intermediate site (site B) component is identical to Metro Mirror.
Application writes are synchronously copied to the intermediate site before write complete is
signaled to the application. All writes to the local site volumes in the mirror are treated in
exactly the same way.

The intermediate site (site B) to remote site (site C) component is identical to Global Mirror,
except that:
򐂰 The writes to intermediate site volumes are Metro Mirror secondary writes and not
application primary writes.
򐂰 The intermediate site volumes are both GM source and MM target at the same time.

The intermediate site disk subsystems are collectively paused by the Global Mirror Master
disk subsystem to create the Consistency Group (CG) set of updates. This pause would
normally take 3 ms every 3 to 5 seconds. After the CG set is formed, the Metro Mirror writes
from local site (site A) volumes to intermediate site (site B) volumes, are allowed to continue.
Also, the CG updates continue to drain to the remote site (site C) volumes. The intermediate
site to remote site drain should take only a few seconds to complete.

Once all updates are drained to the remote site, all changes since the last FlashCopy are logically FlashCopied (NOCOPY) from the C volumes to the D volumes. After the logical FlashCopy is complete, the intermediate site to remote site Global Copy data transfer is resumed until the next formation of a Global Mirror CG. The process described


above is repeated every 3 to 5 seconds if the interval for consistency group formation is set to
zero. Otherwise it will be repeated at the specified interval plus 3 to 5 seconds.

The Global Mirror processes are discussed in greater detail in IBM System Storage DS8000
Series: Copy Services in Open Environments, SG24-6788 and IBM System Storage DS8000
Series: Copy Services with System z servers, SG24-6787.

7.3.5 z/OS Global Mirror


z/OS Global Mirror (previously known as Extended Remote Copy, XRC) is a copy function
available for the z/OS and OS/390 operating systems. It involves a System Data Mover (SDM)
that is found only in OS/390 and z/OS. z/OS Global Mirror maintains a consistent copy of the
data asynchronously at a remote location, and can be implemented over unlimited distances.
It is a combined hardware and software solution that offers data integrity and data availability
and can be used as part of business continuance solutions, for workload movement, and for
data migration. z/OS Global Mirror function is an optional licensed function of the DS8000.

Figure 7-13 illustrates the basic operations characteristics of z/OS Global Mirror.

Figure 7-13 z/OS Global Mirror basic operations (1: server write at the primary site is acknowledged immediately; 2: the System Data Mover at the secondary site asynchronously copies the data and manages data consistency)

The SDM can be located at the secondary site, at the primary site, or at an independent site.

7.3.6 z/OS Metro/Global Mirror


This mirroring capability implements z/OS Global Mirror to mirror primary site data to a
location that is a long distance away and also uses Metro Mirror to mirror primary site data to
a location within the metropolitan area. This enables a z/OS 3-site high availability and
disaster recovery solution for even greater protection from unplanned outages.

Figure 7-14 on page 123 illustrates the basic operation characteristics of a z/OS Metro/Global
Mirror implementation.


Figure 7-14 z/OS Metro/Global Mirror (Metro Mirror over metropolitan distance between site 1 and site 2; z/OS Global Mirror with the System Data Mover over unlimited distance to site 3, where a FlashCopy of the z/OS Global Mirror secondary is kept)

7.3.7 Summary of the Copy Services functions characteristics


In this section we summarize the use of and considerations for the set of remote mirror and
copy functions available with the DS8000 series.

Metro Mirror
Metro Mirror is a function for synchronous data copy at a distance. The following
considerations apply:
򐂰 There is no data loss and it allows for rapid recovery for distances up to 300 km.
򐂰 There might be a slight performance impact for write operations.

Global Copy
Global Copy is a function for non-synchronous data copy at very long distances —only limited
by the network implementation. The following considerations apply:
򐂰 It can copy your data at nearly an unlimited distance, making it suitable for data migration
and daily backup to a remote distant site.
򐂰 The copy is normally fuzzy but can be made consistent through a synchronization
procedure.

To create a consistent copy for Global Copy, you need a go-to-sync operation —that is,
synchronize the secondary volumes to the primary volumes. During the go-to-sync operation,
the mode of remote copy changes from a non-synchronous copy to a synchronous copy.
Therefore, the go-to-sync operation may cause performance impact to your application
system. If the data is heavily updated and the network bandwidth for remote copy is limited,
the time for the go-to-sync operation becomes longer.


Global Mirror
Global Mirror is an asynchronous copy technique; you can create a consistent copy at the secondary site with an adaptable Recovery Point Objective (RPO). The RPO specifies how much data you can afford to re-create should the system need to be recovered. The following considerations apply:
򐂰 Global Mirror can copy to nearly an unlimited distance.
򐂰 It is scalable across the storage units.
򐂰 It can realize a low RPO if there is enough link bandwidth —when the link bandwidth
capability is exceeded with a heavy workload, the RPO might grow.
򐂰 Global Mirror causes only a slight impact to your application system.

z/OS Global Mirror


z/OS Global Mirror is an asynchronous copy technique controlled by a z/OS host software
called System Data Mover. The following considerations apply:
򐂰 It can copy to nearly unlimited distances.
򐂰 It is highly scalable.
򐂰 It has very low RPO —the RPO might grow if the bandwidth capability is exceeded, or host
performance might be impacted.
򐂰 Additional host server hardware and software is required.

7.4 Interfaces for Copy Services


There are several interfaces for invoking and managing the Copy Services in the DS8000. We
introduce them in this section.

Copy Services functions can be initiated using the following interfaces:


򐂰 DS Storage Manager (DS SM) Web-based Graphical User Interface (GUI)
򐂰 DS Command-Line Interface (DS CLI)
򐂰 TotalStorage Productivity Center (TPC) for Replication
򐂰 DS open application programming interface (DS Open API)
򐂰 System z based I/O interfaces —TSO commands, ICKDSF commands, ANTRQST macro,
DFSMSdss utility.

Note: Not all of the Copy Services options are available in every management interface. Refer to the following redbooks for specific considerations:
IBM System Storage DS8000 Series: Copy Services in Open Environments, SG24-6788
IBM System Storage DS8000 Series: Copy Services with System z servers, SG24-6787

7.4.1 Storage Hardware Management Console (S-HMC)


DS Storage Manager, DS CLI, and DS Open API commands are issued via the Ethernet network, and these commands are invoked through the Storage Hardware Management Console (S-HMC). When the S-HMC receives command requests from these interfaces, including those for Copy Services, it communicates with each server in the storage unit via the Ethernet network. Therefore, the S-HMC is a key component for configuring and managing the DS8000 Copy Services functions.

The network components for Copy Services are illustrated in Figure 7-15 on page 125.


Figure 7-15 DS8000 Copy Services network components (DS Storage Manager, DS CLI, and DS API connect over the customer network to the internal S-HMC 1 and, optionally, an external S-HMC 2, which communicate with processor complexes 0 and 1 in the DS8000 through Ethernet switches 1 and 2)

Each DS8000 will have an internal S-HMC in the base frame, and you can have an external
S-HMC for redundancy.

For further information about the S-HMC, see Chapter 9, “DS HMC planning and setup” on
page 145.

7.4.2 DS Storage Manager Web-based interface


The DS Storage Manager (DS SM) is a Web-based management graphical user interface (GUI). It is used to manage the logical configurations as well as the Copy Services functions.
DS SM has an online mode and an offline mode; only the online mode is supported for Copy
Services.

DS Storage Manager is already installed in the S-HMC. You can also install it in other
computers that you may set up. When you manage the Copy Services functions with DS SM
in computers that you set up, DS SM issues its command to the S-HMC via the Ethernet
network.

The DS Storage Manager supports almost all the Copy Services options. The following
options are not supported:
򐂰 Consistency Group operation —for FlashCopy and for Metro Mirror
򐂰 Inband commands over remote mirror links

For more information on the DS SM Web-based graphical user interface you can refer to the
following sources:
򐂰 IBM System Storage DS8000 Information Center web site at:
http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/index.jsp
򐂰 IBM System Storage DS8000 Series: Copy Services in Open Environments, SG24-6788
򐂰 IBM System Storage DS8000 Series: Copy Services with System z servers, SG24-6787

For additional information about the DS SM uses see also Chapter 13, “Configuration with DS
Storage Manager GUI” on page 223.


7.4.3 DS Command-Line Interface (DS CLI)


The DS Command-Line Interface (DS CLI) provides a full-function command set that enables
open systems hosts to invoke and manage FlashCopy and the remote mirror and copy
functions through batch processes and scripts. While there is no support for System z for the
DS CLI, you can use the DS CLI from a supported server to control and manage Copy
Services functions on z/OS volumes.

For more information on the DS CLI, as well as the supported platforms you can refer to the
following sources:
򐂰 IBM System Storage DS8000 Information Center web site at:
http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/index.jsp
򐂰 DS 8000 Interoperability matrix, that you can find at:
http://www-03.ibm.com/servers/storage/disk/ds8000/pdf/ds8000-matrix.pdf
򐂰 IBM System Storage DS8000: Command-Line Interface User's Guide, SC26-7916.
򐂰 IBM System Storage DS8000 Series: Copy Services in Open Environments, SG24-6788
򐂰 IBM System Storage DS8000 Series: Copy Services with System z servers, SG24-6787

For additional information about the DS CLI uses, see also Chapter 14, “Configuration with
Command Line interface” on page 267.

7.4.4 TotalStorage Productivity Center for Replication (TPC for Replication)


IBM TotalStorage Productivity Center (TPC) for Replication provides management of DS8000
series business continuance solutions, including FlashCopy and remote mirror and copy
functions. IBM TPC for Replication V3.1 for FlashCopy, Metro Mirror, and Global Mirror
support focuses on automating administration and configuration of these services,
operational control (starting, suspending, resuming), copy services tasks, and monitoring and
managing the copy sessions. IBM TPC for Replication does not support Global Copy.

TPC for Replication is designed to help administrators manage Copy Services. This applies
not only to the copy services provided by DS8000 and DS6000 but also to copy services
provided by the ESS 800 and SAN Volume Controller (SVC).

In addition to these capabilities, TPC for Replication also provides two-site Business
Continuity manageability. This is intended to provide disaster recovery management through
planned and unplanned failover and failback automation, and monitoring progress of the copy
services so you can verify the amount of replication that has been done as well as the amount
of time required to complete the replication operation.

For more information about TPC for replication, you can refer to the following sources:
򐂰 IBM System Storage DS8000 Series: Copy Services in Open Environments, SG24-6788
򐂰 IBM System Storage DS8000 Series: Copy Services with System z servers, SG24-6787
򐂰 IBM TotalStorage Productivity Center for Replication User’s Guide, SC32-0103
򐂰 IBM TotalStorage Productivity Center for Replication Installation and Configuration Guide,
SC32-0102
򐂰 IBM TotalStorage Productivity Center for Replication Command-Line Interface User’s
Guide, SC32-0104.

7.4.5 DS Open application programming Interface (DS Open API)

The DS Open application programming interface (API) is a non-proprietary storage


management client application that supports routine LUN management activities, such as
LUN creation, mapping and masking, and the creation or deletion of RAID-5 and RAID-10


volume spaces. The DS Open API also enables Copy Services functions such as FlashCopy and remote mirror and copy functions. It supports these activities through the use of the Storage Management Initiative Specification (SMI-S), as defined by the Storage Networking Industry Association (SNIA).

The DS Open API helps integrate DS configuration management support into storage
resource management (SRM) applications, which allow customers to benefit from existing
SRM applications and infrastructures. The DS Open API also enables the automation of
configuration management through customer-written applications. Either way, the DS Open
API presents another option for managing storage units by complementing the use of the IBM
System Storage DS Storage Manager web-based interface and the DS Command-Line
Interface.

You must implement the DS Open API through the IBM System Storage Common Information
Model (CIM) agent, a middleware application that provides a CIM-compliant interface. The DS
Open API uses the CIM technology to manage proprietary devices such as open system
devices through storage management applications. The DS Open API allows these storage
management applications to communicate with a storage unit.

For information on the DS Open API, refer to the publication IBM System Storage DS Open
Application Programming Interface Reference, GC35-0516.

7.4.6 System z based I/O interfaces


In addition to using the DS GUI, or the DS CLI, or TPC for Replication, System z users have
also the following additional interfaces available for Copy Services management:
򐂰 TSO commands
򐂰 ICKDSF utility commands
򐂰 DFSMSdss utility
򐂰 ANTRQST application programming interface
򐂰 Native TPF commands (for z/TPF only)

These interfaces have the advantage of not having to issue their commands to the DS8000
HMC. They can instead directly send inband commands over a FICON channel connection
between the System z and the DS8000. Sending inband commands allows for a very quick
command transfer that does not depend on any additional software stacks.

Operating systems supported interfaces


The following is the list of interfaces supported by the various System z operating systems:
򐂰 z/OS:
– TSO commands
– ICKDSF utility
– DFSMSdss utility
– ANTRQST
򐂰 z/VM and z/VSE
– ICKDSF utility
򐂰 z/TPF
– ICKDSF utility
– z/TPF itself


7.5 Interoperability
Remote mirror and copy pairs can only be established between disk subsystems of the same
(or similar) type and features. For example, a DS8000 can have a remote mirror pair with
another DS8000, a DS6000, an ESS 800, or an ESS 750. It cannot have a remote mirror pair
with an RVA or an ESS F20. Note that all disk subsystems must have the appropriate features
installed. If your DS8000 is being mirrored to an ESS disk subsystem, the ESS must have
PPRC Version 2 (which supports Fibre Channel links) at the appropriate licensed internal code (LIC) level.

Refer to the DS8000 Interoperability Matrix for more information:


http://www-1.ibm.com/servers/storage/disk/ds8000/interop.html

Note: The DS8000 does not support ESCON links for remote mirror and copy operations.
If you want to establish a remote mirror relationship between a DS8000 and an ESS 800,
you have to use FCP links.


Part 2

Planning and Installation
In this part we discuss matters related to the installation planning process of the DS8000. The
subjects covered include:
򐂰 Physical planning and installation
򐂰 DS HMC planning and setup
򐂰 Performance
򐂰 Features and license keys


Chapter 8. Physical planning and installation
This chapter discusses the various steps involved in the planning and installation of the
DS8000, including a reference listing of the information required for the set up and where to
find detailed technical reference material. The topics covered in this chapter include:
򐂰 Considerations prior to installation
򐂰 Planning for the physical installation
򐂰 Network connectivity planning
򐂰 Remote mirror and copy connectivity
򐂰 Disk capacity considerations
򐂰 Planning for growth

The publication IBM System Storage DS8000 Introduction and Planning Guide, GC35-0515,
should be reviewed and available to use during the configuration and installation process.


8.1 Considerations prior to installation


Start by developing and following a project plan to address the many topics needed for a
successful implementation. Appendix C, “Project plan” on page 425 includes a sample Gantt
chart showing some suggested key activities and their timing.

In general the following should be considered for your installation planning checklist:
򐂰 Plan for growth to minimize disruption to operations. Expansion frames can only be placed
to the right (from the front) of the DS8000
򐂰 Location suitability, floor loading and access constraints, elevators, doorways, and so on
򐂰 Power requirements: redundancy, use of Uninterrupted Power Supply (UPS)
򐂰 Environmental requirements: adequate cooling capacity
򐂰 A plan detailing the desired logical configuration of the storage
򐂰 Staff education and availability to implement the storage plan. Alternatively, IBM or IBM
Business Partner services

Customer responsibilities for the installation


The DS8000 series is specified as an IBM or IBM Business Partner installation and setup system. Still, the customer is responsible for several of the required planning and installation activities, which at a high level are the following:
򐂰 Physical configuration planning is a customer responsibility. Your disk marketing specialist
can help you plan and select the DS8000 series physical configuration and features.
Introductory information, including required and optional features, can be found in this
book as well as in the publication IBM System Storage DS8000 Introduction and Planning
Guide, GC35-0515.
򐂰 Installation planning is a customer responsibility. Information about planning the
installation of your DS8000 series, including equipment, site, and power requirements, can
be found in this book as well as in the publication IBM System Storage DS8000
Introduction and Planning Guide, GC35-0515.
򐂰 Logical configuration planning and application is a customer responsibility. Logical
configuration refers to the creation of RAID ranks, volumes, and/or LUNs, and the
assignment of the configured capacity to servers. Application of the initial logical
configuration and all subsequent modifications to the logical configuration is a customer
responsibility. The logical configuration can be created, applied, and modified using the
DS Storage Manager, DS CLI, or DS Open API.
IBM Global Services will also apply and/or modify your logical configuration (fee-based
services).

In this chapter you will find information that will assist you with the planning and installation
activities.

8.1.1 Who should be involved


We suggest having a project manager to coordinate the many tasks necessary for a
successful installation. Installation will require close cooperation with the user community, the
IT support staff, and the technical resources responsible for floor space, power, and cooling.

A Storage Administrator should also coordinate requirements from the user applications and
systems in order to build a storage plan for the installation. This will be needed to configure
the storage after the initial hardware installation is complete.

The following people should be briefed and engaged in the planning process for the physical
installation:


򐂰 Systems and Storage Administrators


򐂰 Installation Planning Engineer
򐂰 Building Engineer for floor loading and air conditioning
򐂰 Location Electrical Engineer
򐂰 IBM or Business Partner Installation Engineer

8.1.2 What information is required


A validation list to assist in the installation process should include:
򐂰 Drawings detailing the positioning as specified and agreed upon with a building engineer,
ensuring the weight is within limits for the route to the final installation position.
򐂰 Approval to use elevators, if appropriate, confirming that the weight and size are acceptable.
򐂰 Connectivity information for servers and the SAN, as well as the mandatory LAN connections.
򐂰 A detailed, agreed-upon storage plan, with the client available to explain how the storage is to
be configured. Ensure that the configuration specialist has all the information needed to
configure the arrays and set up the environment as required.
򐂰 License keys for OEL (mandatory) and any optional Copy Services license keys.

The IBM System Storage DS8000 Introduction and Planning Guide, GC35-0515, contains
additional information about physical planning. You can download it at:
http://www.ibm.com/servers/storage/disk/ds8000

8.2 Planning for the physical installation


This section discusses the physical installation planning process and gives some important
tips and considerations.

8.2.1 Delivery and staging area


The shipping carrier is responsible for delivering and unloading the DS8000 as close to its
final destination as possible. Inform your carrier of the weight and size of the packages to be
delivered and inspect the site and the areas where the packages will be moved (for example,
hallways, floor protection, elevator size and loading, and so on).

Table 8-1 shows the final packaged dimensions and maximum packaged weight of the
DS8000 storage unit shipgroup.

Table 8-1 Packaged dimensions and weight for DS8000 models


Shipping container                        Packaged dimensions                Maximum packaged weight
                                          (in centimeters and inches)        (in kilograms and pounds)

Model 931                                 Height 207.5 cm (81.7 in.)         1309 kg (2886 lb)
pallet or crate                           Width 101.5 cm (40.0 in.)
                                          Depth 137.5 cm (54.2 in.)

Model 932 or Model 9B2                    Height 207.5 cm (81.7 in.)         1368 kg (3016 lb)
pallet or crate                           Width 101.5 cm (40.0 in.)
                                          Depth 137.5 cm (54.2 in.)

Model 92E or Model 9AE expansion unit     Height 207.5 cm (81.7 in.)         1209 kg (2665 lb)
pallet or crate                           Width 101.5 cm (40.0 in.)
                                          Depth 137.5 cm (54.2 in.)

External S-HMC container (if ordered)     Height 69.0 cm (27.2 in.)          75 kg (165 lb)
                                          Width 80.0 cm (31.5 in.)
                                          Depth 120.0 cm (47.3 in.)

Attention: A fully configured model in its packaging can weigh over 1406 kg (3100 lbs).
Using fewer than three persons to move it can result in injury.

8.2.2 Floor type and loading


The DS8000 can be installed on a raised or nonraised floor. We recommend that you install
the unit on a raised floor because it allows you to operate the storage unit with better cooling
efficiency and cabling layout protection.

The total weight and space requirements of the storage unit depend on the configuration
features that you ordered. Consider calculating the weight of the unit and the expansion frame
(if ordered) at their maximum capacity to allow for the addition of new features.

Table 8-2 provides the weights of the various DS8000 models.

Table 8-2 DS8000 weights


Model Maximum weight

Model 931 1189 kg (2620 lb)

Models 932/9B2 1248 kg (2750 lb)

Models 92E/9AE (first expansion unit) 1089 kg (2400 lb)

Models 92E/9AE (second expansion unit) 867 kg (1910 lb)
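For example, using the figures in Table 8-2, a Model 932 base frame with two expansion frames
represents a maximum installed weight of approximately 1248 + 1089 + 867 = 3204 kg
(2750 + 2400 + 1910 = 7060 lb); this is the figure the building engineer needs when validating
floor loading along the delivery route and at the final installation position.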

Important: You need to check with the building engineer or another appropriate person to
be sure that the floor loading is properly considered.

Raised floors can better accommodate cabling layout. The power and interface cables enter
the storage unit via the rear side.

Figure 8-1 shows the location of the cable cutouts. You may use the following measurements
when you cut the floor tile:
򐂰 Width: 45.7 cm (18.0 in.)
򐂰 Depth: 16 cm (6.3 in.)


Figure 8-1 Floor tile cable cutout for DS8000

8.2.3 Room space and service clearance


The total amount of space needed by the storage units can be calculated using the
dimensions in Table 8-3.

Table 8-3 DS8000 dimensions


Dimension with covers    Model 931/932/9B2        Model 931/932/9B2           Model 932/9B2
                         (Base Frame only)        (with 1 Expansion Frame)    (with 2 Expansion Frames)

Height                   76 in. (193 cm)          76 in. (193 cm)             76 in. (193 cm)
Width                    33.3 in. (84.7 cm)       69.7 in. (172.7 cm)         102.6 in. (260.9 cm)
Depth                    46.7 in. (118.3 cm)      46.7 in. (118.3 cm)         46.7 in. (118.3 cm)

The storage unit location area should also cover the service clearance needed by IBM service
representatives when accessing the front and rear of the storage unit. You may use the
following minimum service clearances; the dimensions are also shown in Figure 8-2 on
page 136:
1. For the front of the unit, allow a minimum of 121.9 cm (48 in.) for the service clearance.
2. For the rear of the unit, allow a minimum of 76.2 cm (30 in.) for the service clearance.
3. For the sides of the unit, allow a minimum of 5.1 cm (2 in.) for the service clearance.


Figure 8-2 Service clearance requirements

8.2.4 Power requirements and operating environment


Consider the following basic items when planning for the DS8000 power requirements:
򐂰 Power connectors
򐂰 Input voltage
򐂰 Power consumption and environment
򐂰 Power control features
򐂰 Power Line Disturbance (PLD) feature

Power connectors
Each DS8000 base and expansion unit has redundant power supply systems. The two line
cords to each frame should be supplied by separate AC power distribution systems. Table 8-4
lists the standard connector and receptacle for the DS8000.

Table 8-4 Connectors/receptacles


Location             In-line connector    Wall receptacle    Manufacturer

US-Chicago           7428-78              7324-78            Russell-Stoll
US/AP/LA/Canada      7428-78              7324-78            Russell-Stoll
EMEA                 not applicable       hard wire
Japan                460C9W               460R9W

Use a 60 Ampere rating for the low voltage feature and a 25 Ampere rating for the high
voltage feature.

For more details regarding power connectors and line cords, refer to the publication IBM
System Storage DS8000 Introduction and Planning Guide, GC35-0515.

Input voltage
The DS8000 supports a three-phase input voltage source. Table 8-5 shows the power
specifications for each feature code.

Table 8-5 DS8000 input voltages and frequencies


Characteristic                      Low voltage (Feature 9090)       High voltage (Feature 9091)

Nominal input voltage (3-phase)     200, 208, 220, or 240 RMS Vac    380, 400, 415, or 480 RMS Vac
Minimum input voltage (3-phase)     180 RMS Vac                      333 RMS Vac
Maximum input voltage (3-phase)     264 RMS Vac                      508 RMS Vac
Steady-state input frequency        50 ± 3 or 60 ± 3.0 Hz            50 ± 3 or 60 ± 3.0 Hz

Power consumption and environment


Table 8-6 provides the power consumption specifications of the DS8000.

Table 8-6 DS8000 power consumption


Measurement              Model 931    Model 932    Model 9B2    Expansion Models (92E, 9AE)

Peak electric power      5.5 kVA      7.0 kVA      7.0 kVA      6.0 kVA
Thermal load (BTU/hr)    18,772       23,891       23,891       20,478
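For example, using the figures in Table 8-6, a Model 932 with two expansion frames has a
combined peak electric power of 7.0 + 6.0 + 6.0 = 19.0 kVA and a combined thermal load of
approximately 23,891 + (2 × 20,478) = 64,847 BTU/hr, which the power and cooling plan for
the installation area must accommodate.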

Air circulation for the DS8000 is provided by the various fans installed throughout the frame.
The power complex and most of the lower part of the machine take air from the front and
exhaust air to the rear. The upper disk drive section takes air from the front and rear sides,
and exhausts air to the top of the machine.

The recommended operating temperature for the DS8000 is between 20 and 25°C (68 to 78°F)
at a relative humidity range of 40 to 50 percent.

Important: Make sure that air circulation for the DS8000 base unit and expansion units is
maintained free from obstruction to keep the unit operating in the specified temperature
range.

Power control features


The DS8000 has remote power control features that allow you to control the power of the
storage complex through the DS Storage Manager console. Another power control feature is
available for the System z environment.


For more details regarding power control features, refer to the publication IBM System
Storage DS8000 Introduction and Planning Guide, GC35-0515.

Power Line Disturbance feature


The Power Line Disturbance (PLD) feature stretches the available uptime of the DS8000 from
30 milliseconds to 30–50 seconds during a PLD event. We recommend that this feature be
installed, especially in environments that have no UPS. No additional physical connection
planning is needed by the client with or without the PLD feature.

8.2.5 Host interface and cables


The DS8000 Model 931 supports a maximum of 16 host adapters and four device adapter
pairs. The DS8000 Models 932 and 9B2 support a maximum of 32 host adapters and eight
device adapter pairs.

The DS8000 supports two types of fiber adapters:


򐂰 Enterprise Systems Connection Architecture (ESCON) adapters
򐂰 Fibre Channel/FICON adapters

ESCON
The DS8000 ESCON adapter supports two ESCON links per card. Each ESCON port is a
64-bit, LED-type interface, which features an enhanced microprocessor, and supports 62.5
micron multimode fiber optic cable terminated with the industry standard MT-RJ connector.

ESCON cables can be specified when ordering the ESCON host adapters.

Table 8-7 shows the various fiber optic cable features available for the ESCON ports.

Table 8-7 ESCON cable feature


Feature    Length             Connector                            Characteristic

1430       31 m (101.7 ft)    MT-RJ                                Standard 62.5 micron
1431       31 m (101.7 ft)    Duplex/MT-RJ                         Standard 62.5 micron
1432       2 m                Duplex receptacle/MT-RJ connector    Standard 62.5 micron
1440       31 m (101.7 ft)    MT-RJ                                Plenum-rated 62.5 micron
1441       31 m (101.7 ft)    Duplex/MT-RJ                         Plenum-rated 62.5 micron

The 31-meter cable is the standard length provided with the DS8000. You may order custom
length cables from IBM Global Services.

Note: Feature 1432 is a conversion cable for use in the DS8000 when connecting the unit
to an S/390 host using existing cables. The 9672 processor and the IBM ESCON Director
(9032) use duplex connectors.

Fibre Channel/FICON
The DS8000 Fibre Channel/FICON adapter has four ports per card. Each port supports FCP
or FICON, but not simultaneously. Fabric components from IBM, CNT, McDATA, and Brocade
are supported by both environments.


FCP is supported on point-to-point, fabric, and arbitrated loop topologies. FICON is supported
on point-to-point and fabric topologies.

The 31-meter fiber optic cable can be ordered with each Fibre Channel adapter. Using the
9-micron cable, a longwave adapter can extend the point-to-point distance to 10 km. A
shortwave adapter using 50 micron cable supports point-to-point distances of up to 500
meters at 1 Gbps and up to 300 meters at 2 Gbps. Additional distance can be achieved with
the use of appropriate SAN fabric components.

Table 8-8 lists the various fiber optic cables features for the FCP/FICON adapters.

Table 8-8 FCP/FICON cable features


Feature Length Connector Characteristic

1410 31 m LC/LC 50 micron, multimode

1411 31 m LC/SC 50 micron, multimode

1412 2m SC to LC adapter 50 micron, multimode

1420 31 m LC/LC 9 micron, single mode

1421 31 m LC/SC 9 micron, single mode

1422 2m SC to LC adapter 9 micron, single mode

Note: The remote mirror and copy functions use FCP as the communication link between
DS8000s, DS6000s, and ESS Models 800 and 750.

For more details on IBM-supported attachments, refer to the publication IBM System Storage
DS8000 Host Systems Attachment Guide, SC26-7917.

For the most up-to-date details about host types, models, adapters, and operating systems
supported by the DS8000 unit, see the Interoperability Matrix at:
http://www.ibm.com/servers/storage/disk/ds8000/interop.html

8.3 Network connectivity planning


Implementing the DS8000 requires that you consider the physical network connectivity of the
storage adapters and the DS HMC (Hardware Management Console) with your local area
network.

Check your local environment for the following DS8000 unit connections:
򐂰 Hardware management console network access
򐂰 DS CLI console
򐂰 Remote support connection
򐂰 Remote power control
򐂰 Storage area network connection

For more details on physical network connectivity, refer to the publications IBM System
Storage DS8000 User's Guide, SC26-7915 and IBM System Storage DS8000 Introduction
and Planning Guide, GC35-0515.


8.3.1 Hardware management console network access


Hardware management consoles (HMCs) are the focal point for configuration, copy services
management, and maintenance for a DS8000 unit. The internal management console (FC
1100) is a dedicated workstation that is physically located inside the DS8000 base frame. It
consists of a workstation processor, keyboard, monitor, modem, and Ethernet cables. The
Ethernet cables connect the management console to the storage unit. The management
console can pro-actively monitor the state of your system, notifying you and IBM when
service is required.

An external management console is available as an optional feature (FC 1110) for
environments with high-availability requirements. The external management console is a 2U
unit that is designed to be installed in a 19-inch, client-provided rack. The hardware that you
need to install the external management console into the rack is shipped with it.

Tip: To ensure that the IBM service representative can quickly and easily access an
external DS HMC, place the external DS HMC rack within 15.2 m (50 ft) of the storage
units that are connected to it.

The management console can be connected to your network for remote management of your
system using the DS Storage Manager Web-based graphical user interface (GUI), the DS
Command-Line Interface (CLI), or using storage management software through the DS Open
API. In order to use the CLI to manage your storage unit, you need to connect the
management console to your LAN because the CLI interface is not available on the HMC. The
DS8000 can be managed from the HMC only using the Web GUI interface. Connecting the
HMC to your LAN also allows you to access the DS Storage Manager from any location that
has network access using a Web browser.

In order to connect the management consoles (internal, and external if present) to your
network, you need to provide the following settings to your IBM service representative so that
they can configure the management consoles for attachment to your LAN (an illustrative
example follows the list):
򐂰 Management console network IDs, host names, and domain name
򐂰 Domain Name Server (DNS) settings (if you plan to use DNS to resolve network names)
򐂰 Routing information
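As an illustration, a completed set of values for one management console might look like the
following; all names and addresses shown here are hypothetical placeholders, not defaults:

   HMC host name:     ds8khmc1.example.com
   Domain name:       example.com
   HMC IP address:    192.168.100.10 (subnet mask 255.255.255.0)
   Default gateway:   192.168.100.1
   DNS servers:       192.168.100.53, 192.168.101.53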

8.3.2 DS CLI console


Another way of managing and configuring your storage unit is through the DS CLI. The CLI
can be installed on and used from a LAN-connected system such as the storage
administrator’s laptop. You may consider installing the DS CLI on a separate workstation
connected to the storage unit’s LAN (for example, a Pentium 4 PC running Windows XP).

For details on the hardware and software requirements for the DS CLI, refer to the IBM
System Storage DS8000: Command-Line Interface User's Guide, SC26-7916.

8.3.3 Remote support connection


Remote support connection is available from the DS HMC using the modem (dial-up) and the
Virtual Private Network (VPN) on the Internet through the client LAN.

You can take advantage of the DS8000 remote support feature for outbound calls (Call Home
function) or inbound calls (remote service access by an IBM support representative). You
need to provide an analog telephone line for the DS HMC modem.

Figure 8-3 on page 141 shows a typical remote support connection.


Figure 8-3 DS8000 DS HMC Remote support connection

Take note of the following guidelines to assist in the preparation for attaching the DS8000 to
the client’s LAN:
1. Assign a TCP/IP address and host name to the DS HMC in the DS8000.
2. If e-mail notification of service alerts is allowed, enable support on the mail server for
the TCP/IP addresses assigned to the DS8000.
3. Use the information that was entered on the installation worksheets during your planning.

IBM recommends service connection through the high-speed VPN network utilizing a secure
Internet connection. You need to provide the network parameters for your DS HMC through
the installation worksheet prior to actual configuration of the console. See Chapter 9, “DS
HMC planning and setup” on page 145.

Your IBM System Support Representative (SSR) will need the configuration worksheet during
the configuration of your DS HMC. A worksheet is available in the IBM System Storage
DS8000 Introduction and Planning Guide, GC35-0515.

See also Chapter 20, “Remote support” on page 391 for further discussion on remote support
connection.

8.3.4 Remote power control


The System z remote power control setting allows you to power on and off the storage unit
from a System z interface. If you plan to use the System z power control feature, be sure that
you order the System z power control feature. This feature comes with four power control
cables.

In a System z environment, the host must have the Power Sequence Controller (PSC) feature
installed to have the ability to turn on/off specific control units, like the DS8000. The control
unit is controlled by the host through the power control cable. The power control cable comes
with a standard length of 31 meters, so be sure to consider the physical distance between the
host and DS8000.


8.3.5 Storage area network connection


The DS8000 can be attached to a SAN environment through its Fibre Channel ports. SANs
provide the capability to interconnect open-systems hosts, S/390 and System z hosts, and
other storage systems.

A SAN allows your single Fibre Channel host port to have physical access to multiple Fibre
Channel ports on the storage unit. You may need to establish zones to limit the access (and
provide access security) of host ports to your storage ports. Take note that shared access to a
storage unit Fibre Channel port might come from hosts that support a combination of bus
adapter types and operating systems.

8.4 Remote mirror and copy connectivity


The DS8000 uses the high-speed Fibre Channel protocol for remote mirror and copy
connectivity. It supports connectivity with other DS8000s and DS6000s, as well as with the IBM
TotalStorage Enterprise Storage Server Model 800 and Model 750; the ESS must have
PPRC V2 (which supports Fibre Channel links) with the appropriate features installed.

Note: The DS8000 storage unit does not support ESCON links for the remote mirror and
copy functions —only Fibre Channel links for remote mirror and copy connectivity.

Make sure that you have a sufficient number of FCP paths assigned for your remote mirroring
between your source and target sites to address performance and redundancy issues. When
you plan to use both Metro Mirror and Global Copy modes between a pair of storage units,
IBM recommends that you use separate logical and physical paths for the Metro Mirror and
another set of logical and physical paths for the Global Copy.
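As a sketch only of the kind of path definition this planning feeds into, the DS CLI can list
candidate FCP ports and then establish redundant paths between a source and a target LSS.
The device IDs, WWNN, LSS numbers, and I/O port IDs below are placeholders, and the exact
parameters should be verified against the DS CLI User's Guide and the Copy Services
redbooks referenced at the end of this section:

dscli> lsavailpprcport -dev IBM.2107-75ABTV1 -remotedev IBM.2107-75ABTV2 -remotewwnn 5005076303FFCE63 00:00
dscli> mkpprcpath -dev IBM.2107-75ABTV1 -remotedev IBM.2107-75ABTV2 -remotewwnn 5005076303FFCE63 -srclss 00 -tgtlss 00 I0010:I0010 I0140:I0140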

Plan for the distance between the primary and secondary storage units so that you can
acquire fiber optic cables of the necessary length, and determine whether your Copy Services
solution requires additional hardware such as channel extenders or DWDM.

For detailed information refer to the redbooks IBM System Storage DS8000 Series: Copy
Services in Open Environments, SG24-6788 and IBM System Storage DS8000 Series: Copy
Services with System z servers, SG24-6787.

8.5 Disk capacity considerations


The effective capacity of the DS8000 is determined by several factors:
򐂰 The spares configuration
򐂰 The size of the installed disk drives
򐂰 The selected RAID configuration —RAID 5 or RAID 10, in two sparing combinations
򐂰 The storage type —FB or CKD

8.5.1 Disk sparing


The DS8000 assigns spare disks automatically. The first four array sites (each a set of eight
disk drives) on a Device Adapter (DA) pair will normally each contribute one spare to the DA
pair. A minimum of one spare is created for each array site defined until the following
conditions are met (a worked example follows the list):
򐂰 A minimum of four spares per DA pair


򐂰 A minimum of four spares of the largest capacity array site on the DA pair
򐂰 A minimum of two spares of capacity and rpm greater than or equal to the fastest array
site of any given capacity on the DA pair
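Applying these rules, for example, a DA pair populated entirely with array sites of identical
146 GB drives ends up with four spares, one taken from each of the first four array sites;
those array sites are therefore configured as 6+P or 3+3 arrays, while array sites added later
on the same DA pair can become 7+P or 4+4 arrays. If larger or faster drives are later added
to that DA pair, additional spares are taken from the new array sites until the minimums above
are again satisfied.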

The DDM sparing policies support the over-configuration of spares. This possibility may be of
interest to some installations because it allows the repair of some DDM failures to be deferred
until a later repair action is required.

Refer to IBM System Storage DS8000 Introduction and Planning Guide, GC35-0515 for more
details on the DS8000 sparing concepts. See also “Spare creation” on page 85.

8.5.2 Disk capacity


The DS8000 operates in either a RAID 5 or RAID 10 configuration. The following four RAID
configurations are possible:
򐂰 3+3 - RAID 10 configuration: the array consists of three data drives that are mirrored to
three copy drives. Two drives on the array site are used as spares.
򐂰 4+4 - RAID 10 configuration: the array consists of four data drives that are mirrored to four
copy drives.
򐂰 6+P - RAID 5 configuration: the array consists of six data drives and one parity drive. The
remaining drive on the array site is used as a spare.
򐂰 7+P - RAID 5 configuration: the array consists of seven data drives and one parity drive.

Table 8-9 helps you plan for the capacity of your DS8000 system.

Table 8-9 Disk drive set capacity for open systems and System z environments
Disk size    Physical capacity        Rank     Effective capacity of one rank in decimal GB (number of extents)
(GB)         per disk drive set       type     Rank of RAID 10 arrays                 Rank of RAID 5 arrays
             in decimal GB                     3+3               4+4                  6+P               7+P

73           1168                     FB       206.16 (192)      274.88 (256)         414.46 (386)      483.18 (450)
                                      CKD      204.34 (216)      272.45 (288)         410.57 (434)      479.62 (507)

146          2336                     FB       414.46 (386)      557.27 (519)         836.44 (779)      976.03 (909)
                                      CKD      411.51 (435)      549.63 (581)         825.86 (873)      963.03 (1018)

300          4800                     FB       842.96 (785)      1125.28 (1048)       1698.66 (1582)    1979.98 (1844)
                                      CKD      835.32 (883)      1114.39 (1178)       1675.38 (1771)    1954.45 (2066)

500          8000                     FB       1404.93 (1308)    1875.47 (1747)       2831.10 (2637)    3299.97 (3073)
                                      CKD      1392.20 (1472)    1857.32 (1963)       2792.30 (2952)    3257.42 (3443)


Note:
1. Physical capacities are in decimal gigabytes (GB). One GB is one billion bytes.
2. Please keep in mind the lower recommended usage for the 500 GB FATA drives, as
detailed in Section 4.4.6, “FATA versus Fibre Channel drives on the DS8000” on
page 62.

Table 8-9 shows the effective capacity of one rank in the different possible configurations. A
disk drive set contains 16 drives, which form two array sites. Capacity on the DS8000 is
added in increments of one disk drive set. The effective capacities in the table are expressed
in decimal gigabytes and as the number of extents.
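For example, based on Table 8-9, a single disk drive set of 146 GB drives whose two array
sites are both configured as 6+P RAID 5 arrays for FB storage yields 2 × 836.44 GB, or about
1673 GB of effective capacity, from 2336 GB of physical capacity.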

8.5.3 Fibre Channel ATA (FATA) disks considerations


When planning capacity and disk types, you can choose between FC and FATA disks. FATA
drives offer a cost-effective option for lower priority data such as various fixed content, data
archival, reference data, and near-line applications that require large amounts of storage
capacity for lighter workloads.

These drives are meant to complement, not compete with, the existing Fibre Channel drives,
because they are not intended for use in applications that require drive utilization duty cycles
greater than 20 percent. Intermixing the two drive types within the DS8000 is supported with
certain restrictions on physical placement and logical configuration.

For a detailed discussion on FATA drives you can refer to Chapter 4 Hardware Components,
sections 4.4.4 on page 56, 4.4.5 on page 58, and 4.4.6 on page 62.

8.6 Planning for growth


The DS8000 storage unit is a highly scalable storage solution. Features such as total storage
capacity, cache size, and host adapters can be easily increased by physically adding the
necessary hardware or by changing the needed licensed keys for Advanced Copy Services
features (as ordered).

Planning for future growth would normally suggest an increase in physical requirements, in
your installation area (including floor loading), electrical power, and environmental cooling.

A key feature that you can order for your dynamic storage requirements is Standby
Capacity on Demand (CoD). This offering is designed to provide you with the ability to tap into
additional storage and is particularly attractive if you have rapid or unpredictable growth, or if
you simply want the knowledge that the extra storage will be there when you need it. Standby
CoD allows you to access the extra storage capacity whenever you need it through a
nondisruptive activity. For more information on Capacity on Demand, see Section 21.2, “Using
Capacity on Demand (CoD)” on page 400.


Chapter 9. DS HMC planning and setup


This chapter discusses the planning activities needed for the set up of the required DS
Hardware Management Console (DS HMC) environment. In this chapter we cover the
following topics:
򐂰 DS HMC — Hardware Management Console overview
򐂰 DS HMC software components and communication
򐂰 Typical DS HMC environment set up
򐂰 Planning and setup of the DS HMC
򐂰 User management
򐂰 External DS HMC


9.1 DS HMC — Hardware Management Console overview


The DS HMC looks similar to a laptop. It consists of a workstation processor, keyboard,
monitor, modem, and Ethernet cables. The DS8000 comes with an internal DS HMC (feature
code 1100) in the base frame, together with a pair of Ethernet switches installed and cabled
to the processor complex or external DS HMC, or both. It is a focal point with multiple
functions including:
򐂰 Storage configuration
򐂰 LPAR management1
򐂰 Advanced Copy Services management
򐂰 Interface for local service personnel
򐂰 Remote service and support

The DS HMC is connected to the storage facility by way of redundant private Ethernet
networks. Figure 9-1 shows the back of a single DS HMC and a pair of Ethernet switches.


Figure 9-1 Rear view of the DS HMC and a pair of Ethernet switches

The DS HMC has two built-in Ethernet ports, one dual-port Ethernet PCI adapter, and one
PCI modem for asynchronous Call Home support. The DS HMC's private Ethernet ports,
shown in Figure 9-1, are configured into port 1 of each Ethernet switch to form the private
DS8000 network. The client Ethernet port indicated in the figure is the primary port used to
connect to the client network. The empty Ethernet port is normally not used. The
corresponding private Ethernet ports of the external DS HMC (FC1110) would be plugged
into port 2 of the switches. To interconnect two DS8000 base frames, FC1190 provides a pair
of 31 m Ethernet cables to connect port 16 of each switch in the second base frame into
port 15 of the first frame. If a second DS HMC is installed in the second DS8000, it remains
plugged into port 1 of its Ethernet switches. Each LPAR of a Storage Facility Image (SFI) is
connected via a redundant Ethernet connection to the internal network.

1. A processor LPAR is part of a Storage Facility Image. Unlike on System p5 or System i5
systems, the client cannot manage a processor LPAR directly.


9.2 DS HMC software components and communication


This section gives a brief overview of the DS HMC software components, interface options
and the logical flow of communication among the components.

9.2.1 Components of the DS HMC


The DS Hardware Management Console is an application that runs within a WebSphere
environment on the Linux-based DS8000 Management Console and consists of two servers,
the DS Storage Management server and the DS Network Interface server:
򐂰 DS SMS
The DS Storage Management server is the logical server that runs in a WebSphere
environment on the DS HMC and communicates with the outside world to perform
DS8000-specific tasks.
򐂰 DS NW IFS
The DS Network Interface is the logical server that also runs in the WebSphere
environment on the DS HMC, communicates with the DS SMS and also interacts with the
two controllers of the DS8000.

9.2.2 Logical flow of communication


The DS8000 provides several management interfaces. The interfaces are:
򐂰 DS Storage Manager (DS SM) graphical user interface (GUI)
򐂰 DS Command-Line Interface (DS CLI)
򐂰 DS Open Application Programming Interface (DS Open API)

Figure 9-2 shows the management interface components and the way they communicate.

Figure 9-2 DS HMC environment - Logical flow of communication


The GUI and the CLI are comprehensive, easy-to-use interfaces for a storage administrator to
perform DS8000 management tasks. They can be used interchangeably depending on the
particular task. The DS Open API provides an interface for storage management application
programs to the DS8000.

Installable code for the interface programs is available on CDs that are delivered with the
DS8000 unit. Subsequent versions are made available with DS8000 Licensed Internal Code
updates. All front ends for the DS8000 can be installed on any of the supported workstations.

Tip: We recommend that you have a directory structure in place where all software
components to be installed for the DS environment are stored, including the latest levels
from the Internet used for installation.

The DS Storage Management Server (DS SMS) runs in a WebSphere environment installed
in the DS HMC. The DS SMS provides the communication interface to the front end DS
Storage Manager (DS SM), which is running in a Web browser. The DS SMS also
communicates with the DS Network Interface Server (DS NW IFS), which is responsible for
communication with the two controllers of the DS8000.

9.2.3 DS Storage Manager


Communication with the DS Storage Manager graphical user interface (GUI) is established
using a Web browser on any supported network-connected workstation: enter in the Web
browser the IP address of the DS HMC and the port on which the DS SMS is listening, as
shown in Example 9-1. This is the easiest and probably the most commonly used method to
access the DS GUI.

Example 9-1 Connecting to the DS SMS from within a Web browser


To connect to the DS SMS from within a Web Browser, just type the address
http://<ip-address>:8451/DS8000

The DS Storage Manager can also be accessed locally from the DS8000 Management Console
using the Web browser that comes preinstalled on the DS HMC. See “Using the DS GUI on
the HMC” on page 150 for instructions on how to log on to the Management Console and
access the DS Storage Manager.

Another alternative is to install DS Storage Manager on a workstation and use it to


communicate with the DS HMC.

9.2.4 Command-Line Interface


The DS Command-Line Interface, which needs to be executed in the command environment
of a workstation, is a second option to communicate with the DS HMC. Before it can be used,
the DS CLI must first be installed on the workstation.

Note: The DS CLI cannot be used locally on the DS Hardware Management console.

Once the DS CLI has been installed on a workstation it can be used by just typing dscli in the
command environment. Multiple DS CLI commands can be integrated into a script that can be
executed by using the dscli command with the -script parameter.

The DS CLI provides three command modes:


򐂰 Interactive command mode


򐂰 Script command mode


򐂰 Single-shot command mode

To enter the interactive mode of the DS CLI just type dscli in a command prompt window and
follow the prompts to log on, as shown in Example 9-2. Once logged on, DS CLI commands
can be entered one at a time.

Example 9-2 CLI interactive mode


C:\IBM\DSCLI>dscli
Enter the primary management console IP address: 10.0.0.1
Enter the secondary management console IP address:
Enter your username: its4me
Enter your password:
Date/Time: November 9, 2005 9:47:13 AM EET IBM DSCLI Version: 5.1.0.204 DS:
IBM.2107-75ABTV1
IBM.2107-75ABTV2
dscli> lssi
Date/Time: November 9, 2005 9:47:23 AM EET IBM DSCLI Version: 5.1.0.204
Name ID Storage Unit Model WWNN State ESSNet
============================================================================
- IBM.2107-75ABTV1 IBM.2107-75ABTV0 9B2 5005076303FFC663 Online Enabled
- IBM.2107-75ABTV2 IBM.2107-75ABTV0 9B2 5005076303FFCE63 Online Enabled
dscli> exit

To call a script with DS CLI commands, the following syntax in a command prompt window of
a Windows workstation can be used:
dscli -script <script_filename> -hmc1 <ip-address> -user <userid> -passwd <password>

In Example 9-3, script file lssi.cli contains just one CLI command, the lssi command.

Example 9-3 CLI script mode


C:\IBM\DSCLI>dscli -script c:\ds8000\lssi.cli -hmc1 10.0.0.1 -user its4me -passwd its4pw
Date/Time: November 9, 2005 9:33:25 AM EET IBM DSCLI Version: 5.1.0.204
IBM.2107-75ABTV1
IBM.2107-75ABTV2
Name ID Storage Unit Model WWNN State ESSNet
============================================================================
- IBM.2107-75ABTV1 IBM.2107-75ABTV0 9B2 5005076303FFC663 Online Enabled
- IBM.2107-75ABTV2 IBM.2107-75ABTV0 9B2 5005076303FFCE63 Online Enabled

Example 9-4 shows how to run a single command from a workstation prompt. The -cfg flag
precedes a CLI profile that contains the IP address and user ID information for the DS HMC.

Example 9-4 CLI single-shot mode


C:\IBM\DSCLI>dscli -cfg 75abtv1.profile lssi
Date/Time: November 9, 2005 9:24:42 AM EET IBM DSCLI Version: 5.1.0.204
IBM.2107-75ABTV1
IBM.2107-75ABTV2
Name ID Storage Unit Model WWNN State ESSNet
============================================================================
- IBM.2107-75ABTV1 IBM.2107-75ABTV0 9B2 5005076303FFC663 Online Enabled
- IBM.2107-75ABTV2 IBM.2107-75ABTV0 9B2 5005076303FFCE63 Online Enabled


9.2.5 DS Open Application Programming Interface


Calling DS Open Application Programming Interfaces (DS Open APIs) from within a program
is a third option to implement communication with the DS HMC. Both DS CLI and DS Open
API communicate directly with the DS NW IFS.

The new Common Information Model (CIM) Agent (v5.2) for the DS8000 is Storage
Management Initiative Specification (SMI-S) 1.1 compliant. This agent, previously also known
as the Installation and Configuration Automation/Assistance Tool (ICAT), is used by storage
management applications such as TPC, TSM, VSS/VDS, and Director and Director Server
Storage Provisioning. Also, with compliance to more open standards, the agent can now be
accessed by software from third-party vendors, including Veritas/Symantec, HP/AppIQ, EMC,
and many other applications validated at the SNIA Interoperability Lab.

Starting with release v5.2.1, it is possible to install the CIM agent directly on the DS8000
HMC thus eliminating the need for a customer supplied server. However, with the HMC based
implementation, it is only possible to access the DS8000 managed by that HMC. For this
reason, users may opt to install it on an external system, allowing them to manage multiple
devices of different types (DS8000, DS6000 or ESS 800/Fxx).

Installing the agent directly on the DS8000 HMC has a few other limitations, including:
򐂰 Limited failover between the primary and secondary HMC
򐂰 WebSM must be used to enable the agent
򐂰 DSCIMCLI must be used to configure the agent
򐂰 No integration with the DS8000 GUI or DS CLI

The CIM agent is bundled with the MM Extensions CD and can be loaded using the CDA
procedure. After being loaded, the agent must be enabled using WebSM and configured
with DSCIMCLI.

9.2.6 Using the DS GUI on the HMC


The following procedure can be used to log on to the Management Console and access the
DS Storage Manager using the Web browser preinstalled on the DS HMC.
1. Open and turn on the Management Console. The Hardware Management Console login
window displays, prompting for user ID and password. After a successful login, the
WebSM interface window is displayed.
The predefined Management console user ID and password are:
– User ID: customer
– Password: cust0mer
After logging in the first time, it is best to change the password. If the password is lost or
forgotten, an IBM service representative will be needed to reset it.
2. Right-click with the mouse on an empty area of the desktop to open a Fluxbox. Place the
mouse over the Net selection in the Fluxbox. Another box is displayed with the words Net
and Browser. See Figure 9-3 on page 151.


Figure 9-3 HMC Fluxbox

3. Click Browser. The Web browser is started with no address bar and a Web page titled
WELCOME TO THE DS8000 MANAGEMENT CONSOLE is displayed; see Figure 9-4.

Figure 9-4 Management console Welcome panel

4. On the Welcome panel, click IBM SYSTEM STORAGE DS STORAGE MANAGER.


5. A certificate panel opens. Click Accept.
6. The IBM System Storage DS8000 SignOn panel opens. Proceed by entering a user ID
and password. The predefined user ID and password are:
– User ID: admin


– Password: admin
The password must be changed on first login. If someone has already logged on, check
with that person for the new password.
7. A Wand (password manager) panel opens. Select OK.

9.3 Typical DS HMC environment set up

Figure 9-5 Typical DS HMC environment setup

The typical setup for a DS HMC environment assumes that the following servers and
workstations exist. See Figure 9-5.
򐂰 DS Hardware Management Console (online configuration mode only)
One Management Console, with all the required software preinstalled, is always located
within the first DS8000 unit ordered. This can be used for a real-time (online)
configuration. Optionally, a second Management Console can be ordered and installed in a
client-provided rack or in a second DS8000.
򐂰 DS SM workstations with DS SM code installed (online/offline-configuration modes)
Client-provided workstation where DS Storage Manager has been installed. This can be
used for both real-time (online) and simulated (offline) configurations.
򐂰 Workstations with Web browser only (online configuration mode only)
User-provided workstations with a Web browser to access the DS Storage Manager on
the DS HMC. This can be used for a real-time (online) configuration.

Other supported HMC configurations include:


򐂰 1:2 - one HMC driving two DS8000s
򐂰 2:1 - one internal (FC1100) and one external (FC1110) HMC attached to one DS8000
򐂰 2:2 - two HMCs supporting two DS8000s, most likely an internal HMC in each DS8000
򐂰 2:3 - two HMCs supporting three DS8000s, most likely an HMC in the first two DS8000s
򐂰 2:4 - two HMCs supporting four DS8000s


9.4 Planning and setup of the DS HMC


It is possible to run a DS8000 implementation with one internal DS HMC or in redundant
mode with an additional external DS HMC.

Note: An implementation with only one DS HMC should only be used where high
availability of the storage environment is not needed.

9.4.1 Using the DS SM front end


The DS SM front end running in a Web browser can be used to do all physical and logical
configuration for the DS8000, including Copy Services functions. During initial planning it is
helpful to identify which tasks will be done using the DS SM front end.

Note: The DS CLI is usually a better option for mass updates due to web page load times.

Figure 9-6 shows the Welcome panel of the DS8000 Storage Manager. The main areas that
can be handled using the DS SM are:
򐂰 Monitoring and administration
򐂰 Hardware management
򐂰 Storage configuration
򐂰 Copy Services management

A Windows or Linux version of the DS HMC can be used to simulate configuration tasks in
offline mode. This option is not available with the internal or external DS HMCs.

Figure 9-6 Entry screen of DS SM


9.4.2 Using the DS CLI


Using the DS CLI for configuration and reporting purposes might have some advantages
whenever major configuration activity is needed —the initial configuration, for example.

Also, it is easy to integrate the CLI commands into existing scripts. This might, for example,
be needed where automation for backup and disaster recovery is running based on scripts.

Note: The DS CLI consists of commands to interact with the DS8000. Multiple commands
can be integrated in one DS CLI script. Programming logic needs to be implemented in the
software that uses the DS CLI scripts or DS CLI commands.

9.4.3 Using the DS Open API


In some environments, existing backup or disaster recovery solutions have been developed by
the client's own development organization. Those solutions may need to be adapted to the
logic of the DS8000 by using the new DS Open API. During the planning of the DS HMC, the
program components that need to be adapted to the DS8000 environment should be
identified.

9.4.4 Hardware and software setup


The following activities are needed to identify which code levels should be used and to plan
the installation activities.
򐂰 Plan for installation activities of the external HMC.
The installation activities for the optional external HMC need to be identified as part of the
overall project plan and agreed upon with the responsible IBM personnel.
򐂰 Code for DS SM workstations.
The CD that comes with every DS8000 contains DS SM code that can be installed on a
workstation to support simulated (offline) configuration. This includes WebSphere as well
as DB2, the Java Virtual Machine, and the DS-specific server codes for DS SMS and DS
NW IFS. A new version of DS SM code may become available with DS8000 microcode
update packages. The level of code for the DS SM workstations needs to be planned by
IBM and the client’s organization.
For the initial configuration, it is possible to configure the DS8000 offline on a DS SM
workstation and apply the changes to the DS8000 afterwards. To
configure a DS8000 in simulation mode, you must import the hardware configuration from
an eConfig order file or from an already configured DS8000, or manually enter the
configuration details into the Storage Manager. You can then modify the configuration
while disconnected from the network. The resultant configuration can then be exported
and applied to a new or unconfigured DS8000 Storage Unit.
򐂰 Plan for installation activities of DS SM workstations.
The installation activities for all workstations that need to communicate with the DS HMC
need to be identified as part of the overall project plan. This will include time and
responsibility information for the physical setup of the DS SM workstations (if any
additional are needed).
򐂰 Code for supported Web browser.
The Web browser to be used on any administration workstation should be a supported
one, as mentioned in the installation guide or in the Information Center for the DS8000. A
decision should be made as to which Web browser should be used. The code should then
be made available.


The Web browser is the only software that is needed on workstations that will do
configuration tasks online using the DS Storage Manager GUI.

9.4.5 Activation of Advanced Function licenses


When ordering a DS8000, the license and some optional features require activation as part of
the customization of the DS8000. Activation codes are currently needed for the following
licensed functions in the 931/932/9B2 Turbo models:
򐂰 Operating environment
򐂰 FICON/ESCON attachment
򐂰 Point-in-Time Copy
򐂰 Metro/Global Mirror
򐂰 Metro Mirror
򐂰 Global Mirror
򐂰 Remote Mirror for z/OS
򐂰 Parallel Access Volumes
򐂰 HyperPAV

Also, depending on your Metro/Global Mirror configuration, there are the Metro Mirror Add on,
and the Global Mirror Add on licensed functions for the Turbo models. For the 921/922/9A2
models Metro Mirror and Global Mirror are under the Remote Mirror and Copy (RMC)
licensed function, and the FICON/ESCON Attachment function does not apply.

To prepare for the activation of the license keys, the Disk storage feature activation (DSFA)
Internet page can be used to create activation codes and to download keyfiles, which then
need to be applied to the DS8000. The following information is needed for creating license
activation codes:
򐂰 Serial number of the DS8000
The serial number of a DS8000 can be taken from the front of the base frame (lower right
corner). If several machines have been delivered, this is the only way to match a serial
number to the machine located at a specific point in the computer center.
򐂰 Machine signature
The machine signature can only be obtained using the DS SM or the DS CLI after
installation of the DS8000 and DS HMC.
򐂰 License Function Authorization document
In most situations, the DSFA application can locate your 239x/2244 function authorization
record when you enter the DS8000 (242x/2107) serial number and signature. However, if
the function authorization record is not attached to the 242x/2107 record, you must assign
it to the 242x/2107 record in the DSFA application. In this situation, you will need the
239x/2244 serial number (which you can find on the License Function Authorization
document).
If you are activating codes for a new storage unit, the License Function Authorization
documents are included in the shipment of the storage unit. If you are activating codes for
an existing storage unit, IBM sends these documents to you in an envelope.

For more information on required DS8000 features and function authorizations, as well as
activation tasks, see Chapter 11, “Features and license keys” on page 197.

The Operating Environment license must be for a capacity greater than or equal to the total
physical capacity of the system. If it is not, you will not be able to configure any storage for a
new box or the new capacity for an upgrade. For each of the other features you need to order
a license for capacity greater than or equal to the capacity of the storage format with which it


will be used. For example, assume you have a 10 TB box with 4 TB of storage for CKD and 6
TB for FB. If you only wanted to use Metro Mirror for the CKD storage you would need to order
the Metro Mirror license for 4 TB.

Note: Applying increased feature activation codes is a concurrent action, but a license
reduction or deactivation is a disruptive action.

During planning, review the capacity ordered for each of the Copy Services functions. After
the features have been activated, verify that they match the capacity assigned to the Copy
Services functions in the DS8000.
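As a sketch of how the key application and this verification can be done with the DS CLI
(the key file name below is a placeholder, the storage image ID is reused from the earlier
examples, and the exact parameters should be verified against the DS CLI User's Guide and
Chapter 11, “Features and license keys” on page 197):

dscli> applykey -file keys_75ABTV1.xml IBM.2107-75ABTV1
dscli> lskey IBM.2107-75ABTV1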

9.4.6 Microcode upgrades


The following needs to be considered in regard to the microcode upgrade of the DS8000:
򐂰 Microcode changes
IBM may release changes to the DS8000 series Licensed Machine Code. IBM plans to
make most DS8000 series Licensed Machine Code changes available for download by the
DS8000 series system from the IBM System Storage technical support Web site. Please
note that not all Licensed Machine Code changes may be available via the support Web
site.
򐂰 Microcode install
The IBM CE will install the changes that IBM does not make available for you to download.
If the machine does not function as warranted and your problem can be resolved through
your application of downloadable Licensed Machine Code, you are responsible for
downloading and installing these designated Licensed Machine Code changes as IBM
specifies. Check whether the new microcode requires new levels of DS SM, DS CLI, and DS
Open API, and plan on upgrading them on the relevant workstations if necessary.
򐂰 Host prerequisites
When planning for initial installation or for microcode updates make sure that all
prerequisites for the hosts are identified correctly. Sometimes a new level is required for
the SDD as well. The Interoperability Matrix should be the primary source to identify
supported operating systems, HBAs, and hardware of hosts.
To prepare for the download of drivers, refer to the HBA Support Matrix referenced in the
Interoperability Matrix and make sure that the drivers are downloaded from the IBM Web
pages. This ensures that the drivers are used with the settings that correspond to the
DS8000, rather than with settings that might work with another storage subsystem but
would not work, or would not be optimal, with the DS8000.

Important: The Interoperability Matrix always reflects information regarding the latest
supported code levels. This does not necessarily mean that former levels of HBA
firmwares or drivers are no longer supported. If in doubt about any supported levels
contact your IBM representative.

The Interoperability Matrix and the HBA Support Matrix can respectively be found at the
IBM System Storage technical support Web site at:
http://www-03.ibm.com/servers/storage/disk/ds8000/pdf/ds8000-matrix.pdf
http://www-03.ibm.com/servers/storage/support/config/hba/index.wss
򐂰 Maintenance windows


Even though the microcode update of the DS8000 is a nondisruptive action, any
prerequisites identified for the hosts (for example, patches, new maintenance levels, or new
drivers) could make it necessary to schedule a maintenance window. The host
environments can then be upgraded to the needed level in parallel with the microcode
update of the DS8000.

For more information on microcode upgrades see Chapter 18, “Licensed machine code” on
page 375.

9.4.7 Time synchronization


For proper error analysis it is important to have the date and time information synchronized as
much as possible on all components in the DS8000 environment. This includes the DS8000,
the DS HMC, the DS SM workstations, and also the servers connected to the DS8000.

The setting of the date and time of the DS8000 is a maintenance activity that can only be
done by the IBM CE.

9.4.8 Monitoring with the DS HMC


You can receive notifications from the DS HMC through SNMP traps and e-mail. Notifications
contain information about your Storage Complex, such as serviceable events. You can
choose one or both notification methods.
򐂰 Simple Network Management Protocol (SNMP) traps
For monitoring purposes, the DS8000 uses SNMP traps. An SNMP trap can be sent to a
server in the client’s environment, perhaps with System Management Software, which
handles the trap based on the MIB delivered with the DS8000 software. A MIB containing
all traps can be used for integration purposes into System Management Software. Traps
supported are described in more detail in the documentation that comes along with the
microcode on the CDs provided by the IBM CE. The IP address where the traps should be
sent to needs to be configured during initial installation of the DS8000. For more
information on the DS8000 and SNMP see Chapter 19, “Monitoring with SNMP” on
page 381.
򐂰 E-mail
When you choose to enable e-mail notifications, e-mail messages are sent to all the e-mail
addresses that are defined on the DS HMC when the storage complex encounters a
serviceable event or must alert you to other information.
During the planning process, identify who needs to be notified. For the
e-mail notification, the IP address of an SMTP relay is needed.
򐂰 SIM notification
SIM is only applicable for System z servers. It allows you to receive a notification on the
system console in case of a serviceable event.

9.4.9 Call Home and Remote support


The DS HMC supports both outbound (Call Home) and inbound (Remote Services) remote
support.

Call Home is the capability of the DS HMC to contact IBM support to report a serviceable
event. Remote Services is the capability of IBM service representatives to connect to the DS
HMC to perform service tasks remotely. If allowed to do so by the setup of the client's
environment, an IBM Product Field Engineer (IBM PFE) could connect to the DS HMC to
perform detailed problem analysis, view error logs and problem logs, and initiate trace or
dump retrievals.

Remote support can be configured for dial-up connection through a modem or high-speed
virtual private network (VPN) Internet connection. The details to set up the remote support
functionality are described in the VPN Security and Implementation document, which can be
found on the DS8100 Support Web page under topic VPN Implementation:

http://www.ibm.com/servers/storage/support/disk/ds8100/installing.html

Also see Chapter 20, “Remote support” on page 391.

When the VPN connection is used, if there is a firewall in place to shield your network from
the open Internet, the firewall must be configured to allow the HMC to connect to the IBM
VPN servers. The HMC establishes connection to the following TCP/IP addresses:
򐂰 207.25.252.196 IBM Boulder VPN Server
򐂰 129.42.160.16 IBM Rochester VPN Server

You must also enable the following ports and protocols (an illustrative set of firewall rules
follows the list):

򐂰 Protocol ESP
򐂰 Protocol UDP Port 500
򐂰 Protocol UDP Port 4500
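As an illustration only (the DS8000 documentation does not prescribe firewall syntax), the
following sketch shows what such rules might look like on a Linux iptables-based firewall;
adapt the chain names, source addresses, and policy to your own firewall product:

# allow IKE (UDP 500) and NAT traversal (UDP 4500) from the HMC to the IBM VPN servers
iptables -A FORWARD -p udp --dport 500 -d 207.25.252.196 -j ACCEPT
iptables -A FORWARD -p udp --dport 4500 -d 207.25.252.196 -j ACCEPT
iptables -A FORWARD -p udp --dport 500 -d 129.42.160.16 -j ACCEPT
iptables -A FORWARD -p udp --dport 4500 -d 129.42.160.16 -j ACCEPT
# allow the ESP protocol (IP protocol 50) to the same servers
iptables -A FORWARD -p esp -d 207.25.252.196 -j ACCEPT
iptables -A FORWARD -p esp -d 129.42.160.16 -j ACCEPT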

The setup of the remote support environment is done by the IBM CE during initial installation.

9.5 User management


User management can be done using the DS CLI or the DS GUI. An administrator user ID is
pre-configured during the installation of the DS8000, using the following defaults:
User ID admin
Password admin

Attention: The password of the admin user ID will need to be changed before it can be
used. The GUI will force you to change the password when you first log in. The DS CLI will
allow you to log in, but will not allow you to issue any other commands until you have
changed the password. As an example, to change the admin user’s password to passw0rd,
use the following command: chuser -pw passw0rd admin. Once you have issued that
command, you can then issue any other command.

During the planning phase of the project, a worksheet or a script file should have been
established with a list of all people who need access to the DS GUI or the DS CLI. The
supported roles are:
򐂰 Administrator has access to all available commands.
򐂰 Physical operator has access to maintain the physical configuration (storage complex,
storage image, array, rank, and so on).
򐂰 Logical operator has access to maintain the logical configuration (logical volume, host,
host ports, and so on).
򐂰 Copy Services operator has access to all Copy Services functions and the same access
as the monitor group.
򐂰 Monitor group has access to all read-only list and show commands.
򐂰 No access can be used by the administrator to temporarily deactivate a user ID.


General password settings include the time period in days after which passwords expire and a
number that identifies how many failed logins are allowed.

Whenever a user is added, an initial password is entered by the administrator. During the first
sign-in, this password needs to be changed by the user. The user ID is deactivated if an invalid
password is entered more times than the limit defined by the administrator in the password
settings. Only a user with administrator rights can then reset the user ID with a new initial
password.

If the administrator is locked out because of too many invalid attempts, a procedure to reset
the administrator’s password can be obtained from your IBM representative.

Tip: User names and passwords are both case sensitive. If you create a user name called
Anthony, you cannot log on using the user name anthony. DS CLI commands, however, are
not case sensitive. So the commands LSUSER or LSuser or lsuser will all work.

The password for each user account is forced to adhere to the following rules:
򐂰 The length of the password must be between six and 16 characters.
򐂰 It must begin and end with a letter.
򐂰 It must have at least five letters.
򐂰 It must contain at least one number.
򐂰 It cannot be identical to the user ID.
򐂰 It cannot be a previous password.

9.5.1 User management using the DS CLI


The DS CLI allows the client to manage user IDs for the DS CLI and for the DS GUI. The
commands to support this are:
򐂰 mkuser
This command creates a user account that can be used with both DS CLI and the DS GUI.
In Example 9-5 we create a user called Pierre, who is in the op_storage group. His
temporary password is tempw0rd.

Example 9-5 Using the mkuser command to create a new user


dscli> mkuser -pw tempw0rd -group op_storage Pierre
Date/Time: 10 November 2005 20:38:56 IBM DSCLI Version: 5.1.0.204
CMUC00133I mkuser: User Pierre successfully created.

򐂰 rmuser
This command removes an existing user ID. In Example 9-6 we remove a user called
Enzio.

Example 9-6 Removing a user


dscli> rmuser Enzio
Date/Time: 10 November 2005 21:21:33 IBM DSCLI Version: 5.1.0.204
CMUC00135W rmuser: Are you sure you want to delete user Enzio? [y/n]:y
CMUC00136I rmuser: User Enzio successfully deleted.

򐂰 chuser
This command changes the password and/or group of an existing user ID. It is also used
to unlock a user ID that has been locked by exceeding the allowable login retry count. You
could also lock a user ID if desired. In Example 9-7 on page 160 we unlock the user,
change the password, and change the group membership for a user called Sharon. Because
the password was reset by an administrator, Sharon must change it again the first time she
uses that user ID.

Example 9-7 Changing a user with chuser


dscli> chuser -unlock -pw passw0rd -group monitor Sharon
Date/Time: 10 November 2005 22:55:43 IBM DSCLI Version: 5.1.0.204
CMUC00134I chuser: User Sharon successfully modified.

򐂰 lsuser
With this command, a list of all user IDs can be generated. In Example 9-8 we can see
four users.

Example 9-8 Using the lsuser command to list users


dscli> lsuser
Date/Time: 10 November 2005 21:14:18 IBM DSCLI Version: 5.1.0.204
Name Group State
==========================
Pierre op_storage active
admin admin active
Tamara op_volume active
Juergen monitor active

򐂰 showuser
The account details of user IDs can be displayed with this command. In Example 9-9 we
list the details of Arielle’s user ID.

Example 9-9 Using the showuser command to list user information
dscli> showuser Arielle
Date/Time: 10 November 2005 21:25:34 IBM DSCLI Version: 5.1.0.204
Name Arielle
Group op_volume
State active
FailedLogin 0

򐂰 managepwfile
This command creates or adds to an encrypted password file that will be placed onto the
local machine. This file can be referred to in a DS CLI profile. This allows you to run scripts
without specifying a DS CLI user password in clear text. If manually starting DS CLI you
can also refer to a password file with the -pwfile parameter. By default, the file is placed
in: c:\Documents and settings\<Windows user name>\DSCLI\security.dat or in
$HOME/dscli/security.dat (for non Windows based operating systems)
In Example 9-10 we manage our password file by adding the user ID called BenColeman.
The password is now saved in an encrypted file called security.dat.

Example 9-10 Using the managepwfile command


dscli> managepwfile -action add -name BenColeman -pw passw0rd
Date/Time: 10 November 2005 23:40:56 IBM DSCLI Version: 5.1.0.204
CMUC00206I managepwfile: Record 10.0.0.1/BenColeman successfully added to password file
C:\Documents and Settings\AnthonyV\dscli\security.dat.
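
Once the password file exists, a script can start the DS CLI without a clear text password.
The following sketch reuses the user ID and file path from Example 9-10 and an HMC host
name used elsewhere in this chapter; the -user parameter is an assumption here in addition
to the -pwfile parameter described above, so check the IBM System Storage DS8000:
Command-Line Interface User’s Guide, SC26-7916, for the exact invocation on your DS CLI
level.

C:\Program Files\IBM\dscli>dscli -hmc1 mitmuzik.ibm.com -user BenColeman -pwfile "C:\Documents and Settings\AnthonyV\dscli\security.dat"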

򐂰 chpass
This command lets you change two password rules: password expiration (days) and failed
logins allowed. In Example 9-11 on page 161 we change the expiration to 365 days and allow 5
failed logon attempts. If you set both values to zero, passwords never expire and unlimited
logon attempts are allowed, which is not recommended.

Example 9-11 Changing rules using the chpass command


dscli> chpass -expire 365 -fail 5
Date/Time: 10 November 2005 21:44:33 IBM DSCLI Version: 5.1.0.204
CMUC00195I chpass: Security properties successfully set.

򐂰 showpass
This command lists the properties for passwords (Password expiration days and Failed
logins allowed). In Example 9-12 we can see that passwords have been set to expire in
365 days and that 5 login attempts are allowed before a user ID is locked.

Example 9-12 Using the showpass command


dscli> showpass
Date/Time: 10 November 2005 21:44:45 IBM DSCLI Version: 5.1.0.204
Password Expiration 365 days
Failed Logins Allowed 5

The exact syntax for any DS CLI command can be found in the IBM System Storage DS8000:
Command-Line Interface User’s Guide, SC26-7916. You can also use the DS CLI help
command to get further assistance.

9.5.2 User management using the DS GUI


To work with user administration, sign on to the DS GUI. From the selection menu on the left,
select Real-time Manager → Monitor system → User administration. This will give you a
screen in which you can select a storage complex to display all defined user IDs for it (see
Figure 9-7). You would normally select localhost from the Storage Complex pull-down. You
can then select a user ID to work with.

Figure 9-7 User administration: Overview


The administrator can perform several tasks from the Select Action pull-down:
򐂰 Add User (DS CLI equivalent is mkuser)
򐂰 Modify User (DS CLI equivalent is chuser)
򐂰 Lock or Unlock User - Choice will toggle based on user state (DS CLI equivalent is chuser)
򐂰 Delete User (DS CLI equivalent is rmuser)
򐂰 Password Settings (DS CLI equivalent is chpass)

If you click a user name, it will bring up the Modify User panel.

Note: If a user who is not in the Administrator group logs onto the DS GUI and goes to the
User Administration panel, they will only be able to see their own user ID in the list. The
only action they will be able to perform is to change their password.

Selecting Add User will display a window in which a user can be added by entering the user
ID, the temporary password, and the role (see Figure 9-8). The role will decide what type of
activities can be performed by this user. In this window, the user ID can also be temporarily
deactivated by selecting Access none.

Figure 9-8 User administration: Adding a user

In Figure 9-8, a user is being added with the user name Frontdesk. This user is being placed
into the Monitor group.

9.6 External DS HMC


Customers may also decide to install an optional external DS HMC. The hardware is
equivalent to the internal DS HMC and needs to be installed in a client-supplied 19-inch IBM
or a non-IBM rack (1U server/1U display). Configuration management is achieved either in
real-time or offline configuration mode.

The external DS HMC is an optional priced feature. To help preserve console functionality,
neither the external nor the internal DS HMC can be used as a general purpose computing
resource.


In the next few sections we use the terms first and internal DS HMC interchangeably, and
second and external DS HMC interchangeably. The two DS HMCs run in a dual-active
configuration, so either DS HMC can be used at any time. The distinction between the
internal and external unit is only for clarity and ease of explanation.

9.6.1 External DS HMC advantages


Having an external DS HMC provides a number of advantages. Among the advantages are:
򐂰 Enhanced maintenance capability
Since the DS HMC is the only service personnel interface available, an external DS HMC
will greatly enhance maintenance operational capabilities if the internal DS HMC fails.

Tip: To ensure that IBM service representatives can quickly and easily access an
external DS HMC, place its rack within 15.2 m (50 ft) of the storage units it connects.

򐂰 Improved remote support


In many environments, the DS8000 and internal HMC will be secured behind a firewall in a
user’s internal LAN. In this case, it can be very difficult for IBM to provide remote support.
An external DS HMC can be configured in such a way that it is able to communicate with
both the DS8000 and IBM. Thus, the dual HMC configuration can greatly enhance remote
support capabilities.
򐂰 High availability for configuration operations
In open systems environments, all configuration commands must go through the HMC.
This is true regardless of whether you use the DS CLI, the DS SM, or the DS Open API.
An external DS HMC will allow these operations to continue to work despite a failure of the
internal DS HMC.
򐂰 High availability for Advanced Copy Services operations
In open systems environments, all Advanced Copy Services commands must also go
through the HMC. This is true regardless of whether you use the DS CLI, the DS SM, or
the DS Open API. An external DS HMC will allow these operations to continue to work
despite a failure of the internal DS HMC.

To take advantage of the high availability features of the second HMC (high availability
for configuration operations and Advanced Copy Services), you must configure the DS CLI or
the DS SM to use the second HMC.

When the user issues a configuration or copy services command, the DS CLI or DS SM will
send the command to the first HMC. If the first HMC is not available, it will automatically send
the command to the second HMC instead. Typically, the user does not have to re-issue the
command.

Any changes made using the first HMC are instantly reflected in the second HMC. There is no
caching done within the HMC, so there are no cache coherency issues. By first HMC, we
mean the HMC defined as the HMC1. It is even possible to define the external HMC as the
first HMC, and vice versa, but this is not typical.

9.6.2 Configuring the DS CLI to use a second HMC


The second HMC can either be specified on the command line or in the profile file being used
by the DS CLI. To specify the second HMC, use the -hmc2 parameter, as in Example 9-13.


Example 9-13 Using the -hmc2 parameter


C:\Program Files\IBM\dscli>dscli -hmc1 mitmuzik.ibm.com -hmc2 mitgasse.ibm.com
Enter your username: brandon
Enter your password:
Date/Time: November 8, 2005 12:59:50 PM CET IBM DSCLI Version: 5.1.0.204 DS:
IBM.2107-7503461

Alternatively, you can modify the following lines in the dscli.profile (or other profile) file:
# Management Console/Node IP Address(es)
# hmc1 and hmc2 are equivalent to -hmc1 and -hmc2 command options.
hmc1:mitmuzik.ibm.com
hmc2:mitgasse.ibm.com

After you make these changes, the DS CLI will use the second HMC in the unlikely event that
the first HMC fails. This change will allow you to perform both configuration and copy services
commands with full redundancy.

9.6.3 Configuring the DS Storage Manager to use a second HMC


If you connect directly to the DS Storage Manager on the internal HMC, and the HMC has an
internal failure, then you will not be able to continue your session. Instead, you must point
your Web browser to the second HMC and begin a new session.

One alternative is to install the DS Storage Manager on a separate system. In this way, even
if the internal HMC fails, the separate system will be able to continue your configuration or
copy services session by connecting to the external HMC.

The DS Storage Manager can be configured to use a second HMC by specifying both the first
and second HMC in the Real-time manager → Manage hardware → Storage complexes
screen. From the pull-down menu, choose Add. See Figure 9-9.

Figure 9-9 Adding a storage complex

On the next panel, check the Define Management console 2 check box, and add the IP
addresses of both HMC machines. Then click OK. Figure 9-10 on page 165 shows the adding
of two HMCs, 10.10.10.10 and 10.10.10.11.


Figure 9-10 Specifying an external HMC

At this point, the DS Storage Manager is configured to use the second HMC if the first HMC
should fail.


Chapter 10. Performance


This chapter discusses the performance characteristics of the DS8000 regarding the physical
and logical configuration. The considerations discussed in this chapter will help you when
planning the physical and logical setup.

In this chapter we cover the following topics:


򐂰 DS8000 hardware — performance characteristics
򐂰 Software performance enhancements — synergy items
򐂰 Performance and sizing considerations for open systems
򐂰 Performance and sizing considerations for System z


10.1 DS8000 hardware — performance characteristics


The DS8000 overcomes many of the architectural limits of the predecessor disk subsystems.
In this section we go through the different architectural layers of the DS8000 and discuss the
performance characteristics that differentiate the DS8000 from other disk subsystems.

10.1.1 Fibre Channel switched disk interconnection at the back end


Because SSA connectivity has not been further enhanced to increase the connectivity speed
beyond 40 MBps, Fibre Channel connected disks are used in the DS8000 back end. This
technology is commonly used to connect a group of disks in a daisy-chained fashion in a
Fibre Channel Arbitrated Loop (FC-AL).

There are some shortcomings with FC-AL. The most obvious ones are:
򐂰 Arbitration: disks compete for loop bandwidth.
򐂰 Failures within the FC-AL loop, particularly from intermittently failing components on the
loops and disks.
򐂰 Increased time to complete a loop operation as the number of loop devices increases.

For highly parallel operations, such as concurrent reads and writes with various transfer sizes,
these shortcomings reduce the total effective bandwidth of an FC-AL structure.

The DS8000 series overcomes the FC-AL shortcomings


The DS8000 uses the same Fibre Channel drives that are used in conventional FC-AL based
storage systems. To overcome the arbitration issue within FC-AL, the architecture is
enhanced by adding a switch-based approach and creating FC-AL switched loops, as shown
in Figure 10-1. This is referred to as a Fibre Channel switched disk subsystem.

Figure 10-1 Switched FC-AL disk subsystem


These switches use FC-AL protocol and attach FC-AL drives through a point-to-point
connection. The arbitration message of a drive is captured in the switch, processed and
propagated back to the drive, without routing it through all the other drives in the loop.

Performance is enhanced since both DAs connect to the switched Fibre Channel disk
subsystem back end as displayed in Figure 10-2. Note that each DA port can concurrently
send and receive data.

Figure 10-2 High availability and increased bandwidth connecting both DA to two logical loops

These two switched point-to-point loops to each drive, plus connecting both DAs to each
switch, account for the following:
򐂰 There is no arbitration competition or interference between one drive and all the other
drives, because there is no hardware in common for all the drives in the FC-AL loop. This
leads to increased bandwidth, utilizing the full speed of a Fibre Channel link for each
individual drive. Note that the external transfer rate of a Fibre Channel DDM is 200 MBps.
򐂰 The bandwidth is doubled over conventional FC-AL implementations, because each DA
supports two simultaneous operations, allowing two concurrent read operations and two
concurrent write operations at the same time.
򐂰 In addition to the superior performance, we must not forget the improved RAS over
conventional FC-AL. The failure of a drive is detected and reported by the switch. The
switch ports distinguish between intermittent failures and permanent failures. The ports
understand intermittent failures which are recoverable and collect data for predictive failure
statistics. If one of the switches itself fails, a disk enclosure service processor detects the
failing switch and reports the failure using the other loop. All drives can still connect
through the remaining switch.


This discussion has just outlined the physical structure. A virtualization approach built on top
of the high performance architecture design contributes even further to enhanced
performance. See Chapter 6, “Virtualization concepts” on page 91.

10.1.2 Fibre Channel device adapter


The DS8000 still relies on eight DDMs to form a RAID-5 or a RAID-10 array. These DDMs are
spread over two Fibre Channel loops, following the successful arrays across loops (AAL)
approach. With the virtualization approach and the concept of extents, the DS8000 device
adapters (DAs) map the virtualization level over the disk subsystem back end; see
Figure 10-3. For a detailed discussion on disk subsystem virtualization, refer to
Chapter 6, “Virtualization concepts” on page 91.

Figure 10-3 Fibre Channel device adapter with 2 Gbps ports

The RAID device adapter is built on PowerPC technology with four 2 Gbps Fibre Channel
ports and high function, high performance ASICs. Each port provides up to five times the
throughput of the SSA-based DA ports used in predecessor disk subsystems. Each single
Fibre Channel protocol processor satisfies both Fibre Channel 2 Gbps ports at full speed.

Note that each DA performs the RAID logic and frees up the processors from this task. The
actual throughput and performance of a DA is not only determined by the 2 Gbps ports and
hardware used, but also by the firmware efficiency.

10.1.3 Four-port host adapters


Before looking into the heart of the DS8000 series, we briefly review the host adapters and
their enhancements to address performance. Figure 10-4 on page 171 depicts the host
adapters. These adapters are designed to hold four Fibre Channel ports, which can be
configured to support either FCP or FICON. They are also enhanced in their configuration
flexibility and provide more logical paths —from 256 with an ESS FICON port to 1,280 per
FICON port on the DS8000 series.

Each port continues the tradition of providing industry-leading throughput and I/O rates for
FICON and FCP.


Figure 10-4 Host adapter with 4 Fibre Channel ports

With FC adapters that are configured for FICON, the DS8000 series provides the following
configuration capabilities:
򐂰 Either fabric or point-to-point topologies
򐂰 A maximum of 64 host adapter ports on the DS8100 Model 931, and a maximum of 128
host adapter ports on DS8300 Model 932 and Model 9B2
򐂰 A maximum of 509 logins per fibre-channel port
򐂰 A maximum of 8,192 logins per storage unit
򐂰 A maximum of 1280 logical paths on each fibre-channel port
򐂰 Access to all control-unit images over each FICON port
򐂰 A maximum of 512 logical paths per control unit image.

FICON host channels limit the number of devices per channel to 16,384. To fully access
65,280 devices on a storage unit, it is necessary to connect a minimum of four FICON host
channels to the storage unit. This way, using a switched configuration, you can expose 64
control-unit images (16,384 devices) to each host channel.

The front end with the 2 Gbps or 4 Gbps ports scales up to 128 ports for a DS8300. This
results in a theoretical aggregated host I/O bandwidth of 128 times 2 Gbps for the 2 Gbps
ports and outperforms an ESS by a factor of eight. The DS8100 still provides four times more
front-end bandwidth than an ESS.

10.1.4 POWER5+ — Heart of the DS8000 dual cluster design


The DS8000 series Turbo models incorporate the latest System p POWER5+ processor
technology. System p5 servers are capable of scaling from a 1-way to a 16-way SMP using
standard 4U building blocks. The DS8000 series Turbo Model 931 utilizes two-way processor
complexes, and the Turbo Models 932 and 9B2 utilize four-way processor complexes, in a
dual implementation for all three models.

The following sections discuss configuration and performance aspects based on the two-way
processor complexes used in the DS8100.

Among the most exciting capabilities the System p inherited from System z are the dynamic
LPAR mode and the micro partitioning capability. This System p-based functionality has the

potential to be exploited also in future disk storage server enhancements. For details on LPAR
implementation in the DS8000 see Chapter 3, “Storage system LPARs (logical partitions)” on
page 27.

Besides the self-healing features and advanced RAS attributes, the RIO-G structures provide
a very high I/O bandwidth interconnect with DAs and HAs to provide system-wide balanced
aggregated throughput from top to bottom. A simplified view is in Figure 10-5.

Figure 10-5 Standard System p5 2-way SMP processor complexes for DS8100 Model 931

The smallest processor complex within a DS8100 is the POWER5+ two-way SMP processor
complex. The dual-processor complex approach allows for concurrent microcode loads,
transparent I/O failover and failback support, and redundant, hot-swappable components.

Figure 10-6 DS8100-931 with four I/O enclosures


Figure 10-6 provides a less abstract view and outlines some details on the dual two-way
processor complex of a DS8100 Model 931, its connections to host servers through the HAs,
and its connections to the disk storage back end through the DAs.

The two processor complexes are interconnected through the System p-based RIO-G
interconnect technology, which includes up to four I/O enclosures that communicate equally
with either processor complex. Note that there is an affinity between the individual ranks of
the disk subsystem and either the left processor complex, server 0, or the right processor
complex, server 1. This affinity is established when an extent pool is created.

Each single I/O enclosure itself contains six Fibre Channel adapters:
򐂰 Two DAs which install in pairs
򐂰 Four HAs which install as required

Each adapter itself contains four Fibre Channel ports.

Although each HA can communicate with either server, there is some potential to optimize
traffic on the RIO-G interconnect structure. RIO-G provides full duplex communication with
1 GBps in either direction, and there is no arbitration. Figure 10-6 shows that the two left-most
I/O enclosures might communicate with server 0, each in full duplex, while the two right-most
I/O enclosures communicate with server 1, also in full duplex mode. This results in a potential
of 8 GBps in total just for this single structure. Basically, there is no affinity between HA and
server. As we see later, the server that owns certain volumes through its DA communicates
with the respective HA when connecting to the host.

High performance, high availability interconnect to the disk subsystem


Figure 10-7 depicts in some detail how the Fibre Channel switched back-end storage
connects to the processor complex.

Figure 10-7 Fibre Channel switched backend connect to processor complexes - partial view


All I/O enclosures within the RIO interconnect fabric are equally served from either processor
complex.

Each I/O enclosure contains two DAs. Each DA with its four ports connects to four switches to
reach out to two sets of 16 drives or disk drive modules (DDMs) each. Note that each 20-port
switch has two ports to connect to the next switch pair with 16 DDMs when vertically growing
within a DS8000. As outlined before, this dual two logical loop approach allows for multiple
concurrent I/O operations to individual DDMs or sets of DDMs and minimizes arbitration
through the DDM/switch port mini loop communication.

10.1.5 Vertical growth and scalability


Figure 10-8 shows a simplified view of the basic DS8000 series structure and how it accounts
for scalability.

Figure 10-8 DS8100 to DS8300 scale performance linearly - view without disk subsystems

Although Figure 10-8 does not display the back-end part, it can be derived from the number
of I/O enclosures, which suggests that the disk subsystem also doubles, as does everything
else, when switching from a DS8100 to a DS8300. Doubling the number of processors and
I/O enclosures also doubles the performance, or even more.

Again note here that a virtualization layer on top of this physical layout contributes to
additional performance potential.

10.2 Software performance enhancements — synergy items


There are a number of performance enhancements in the DS8000 that work together with the
software in the host and are collectively referred to as the synergy items. These items allow
the DS8000 to cooperate with the host systems in ways that benefit the overall performance
of the systems.

10.2.1 End to End I/O Priority — synergy with System p AIX and DB2
End to end I/O priority is a new addition, requested by IBM, to the SCSI T10 standard. This
feature allows trusted applications to override the priority given to each I/O by the operating
system. This is only applicable for raw volumes (no file system) and with the 64-bit kernel.
Currently, AIX supports this feature in conjunction with DB2. The priority is delivered to the
storage subsystem in the FCP Transport Header.

The priority of an AIX process can be 0 (no assigned priority) or any integer value from 1
(highest priority) to 15 (lowest priority). All I/O requests associated with a given process
inherit its priority value, but with end to end priority, DB2 can change this value for critical data
transfers. At the DS8000, the host adapter gives preferential treatment to higher priority I/O,
improving performance for specific requests deemed important by the application, such as
requests that are prerequisites for others (for example, DB2 logs).

10.2.2 Cooperative Caching — synergy with System p AIX and DB2


Another software related performance improvement is cooperative caching, a feature which
provides a way for the host to send cache management hints to the storage facility. Currently,
the host can indicate that the information just accessed is unlikely to be accessed again soon.
This decreases the retention period of the cached data, allowing the subsystem to conserve
its cache for data that is more likely to be reaccessed, improving the cache hit ratio.

With the implementation of Cooperative Caching the AIX operating system allows trusted
applications, such as DB2, to provide cache hints to the DS8000. This improves the
performance of the subsystem by keeping more of the repeatedly accessed data within the
cache. Cooperative caching is supported in System p AIX with Multipath I/O (MPIO) Path
Control Module (PCM) provided with Subsystem Device Driver (SDD). It is only applicable for
raw volumes (no file system) and with the 64 bit kernel.

10.2.3 Long Busy Wait Host Tolerance — synergy with System p AIX
Another new addition to the SCSI T10 standard is SCSI long busy wait, which provides the
target system a method to specify not only that it is busy, but also how long the initiator
should wait before retrying an I/O.

This information, provided in the Fibre Channel Protocol (FCP) status response, prevents the
initiator from retrying too soon only to fail again. This in turn reduces unnecessary requests
and potential I/O failures due to exceeding a set threshold for the number of retries. IBM
System p AIX supports SCSI long busy wait with MPIO, and it is also supported by the
DS8000.

10.2.4 HACMP-XD Extensions — synergy with System p AIX


HACMP-Extended Distance (unrelated to PPRC-XD) provides server/LPAR failover capability
over extended distances (up to 15 km). It can now also exploit the Metro Mirror function of the
DS8000 as a data replication mechanism between the primary and remote site. It is important
to note that the HACMP “Metro” distance is much less than the DS8000 Metro Mirror distance
of 300 km. The DS8000 requires no changes to be used in this fashion.


10.3 Performance and sizing considerations for open systems


To determine the optimal DS8000 layout, the I/O performance requirements of the
different servers and applications should be defined up front since they will play a large part in
dictating both the physical and logical configuration of the disk subsystem. Prior to designing
the disk subsystem, the disk space requirements of the application should be well
understood.

10.3.1 Workload characteristics


The answers to questions such as “How many host connections do I need?” and “How much
cache do I need?” always depend on the workload requirements (such as how many I/Os per
second per server, I/Os per second per gigabyte of storage, and so forth).

The information you need, ideally, to conduct detailed modeling includes:


򐂰 Number of I/Os per second
򐂰 I/O density
򐂰 Megabytes per second
򐂰 Relative percentage of reads and writes
򐂰 Random or sequential access characteristics
򐂰 Cache hit ratio

10.3.2 Cache size considerations for open systems


Cache sizes in the DS8000 series can be either 16, 32, 64, 128, or 256 GB —16 GB is only
available on the DS8100 model and 256 GB is only available on the DS8300 models.

The factors that have to be considered to determine the proper cache size are:
򐂰 The total amount of disk capacity that the DS8000 will hold
򐂰 The characteristic access density (I/Os per GB) for the stored data
򐂰 The characteristics of the I/O workload (cache friendly, unfriendly, standard; block size;
random or sequential; read/write ratio; I/O rate)

If you do not have detailed information regarding the access density and the I/O operations
characteristics, but you only know the usable capacity, you can estimate between 2 GB and
4 GB for the size of the cache per 1 TB of storage, as a general rule of thumb.
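
For example, for a DS8000 holding 20 TB of usable capacity, this rule of thumb suggests
roughly 40 GB to 80 GB of cache, so you would typically configure 64 GB, or 128 GB if the
workload is expected to grow or to be particularly cache friendly.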

10.3.3 Data placement in the DS8000


Once you have determined the disk subsystem throughput, the disk space and number of
disks required by your different hosts and applications, you have to make a decision regarding
the data placement.

To optimize the utilization of the DS8000 resources, follow these common data placement
guidelines:
򐂰 Equally spread the LUNs across the DS8000 servers. Spreading the LUNs equally on
rank group 0 and 1 will balance the load across the DS8000 units.
򐂰 Use as many disks as possible.
򐂰 Distribute across DA pairs and RIO-G loops.
򐂰 Stripe your logical volume across several ranks.
򐂰 Consider placing specific database objects (such as logs) on different ranks.


Note: Database logging usually consists of sequences of synchronous sequential writes.


Log archiving functions (copying an active log to an archived space) also tend to consist of
simple sequential read and write sequences. You should consider isolating log files on
separate arrays.

All disks in the storage disk subsystem should have roughly equivalent utilization. Any
disk that is used more than the other disks will become a performance bottleneck. A
practical method is to make extensive use of volume-level striping across disk drives.

10.3.4 Disk drive characteristics — capacity, speed and type


At the heart of disk subsystem performance is the disk itself. Anything about it that affects I/O
speed or throughput has a direct impact on the performance of the subsystem as a whole.
Key among these factors are the size of the disk drives, the rotational speed of the drives, and
the type of drive (and its intended usage).

Drive size and architecture dictate the amount of data per drive head. As the physical drive
size increases, so does the potential workload for arrays and logical volumes on those drives.
Keep this in mind when planning for solutions requiring high I/O rates and fast response times.
One way to counter this is with faster drive speeds, but for the best performance, use arrays
of small, high-speed drives (for example, 73 GB at 15K rpm). The higher rotational speed
reduces rotational latency, thus improving performance.

FATA disk drives


Also keep in mind that the new 500 GB FATA drives are both the largest and the slowest of
the drives available for the DS8000. This, combined with the lower utilization
recommendations and the potential for drive protection throttling, means that these drives are
definitely not the drives to use for high performance or heavy I/O applications. See 4.4.4,
“Fiber Channel ATA (FATA) disk drives overview” on page 56 for a detailed discussion about
the considerations and when to use FATA disk drives.

Disk Magic and Capacity Magic are the recommended approach


To decide which disk types best fulfill your workload needs, the best approach is to use your
installation workload data as input to the Disk Magic modeling tool. With your workload data
and the current configuration of your disk subsystem units, Disk Magic can establish a base
model from which it can project the necessary DS8000 units, and their configuration, that will
absorb the present workload and any future growth that you assume.

To estimate the number and capacity of disk drive sets needed to fulfill your storage capacity
requirements, you can use the Capacity Magic tool. It is an easy-to-use tool that also helps
you determine the requirements for any capacity growth that you want to estimate.

10.3.5 LVM striping


Striping is a technique for spreading the data in a logical volume across several disk drives in
such a way that the I/O capacity of the disk drives can be used in parallel to access data on
the logical volume. The primary objective of striping is very high performance reading and
writing of large sequential files, but there are also benefits for random access.

DS8000 logical volumes are composed of extents. An extent pool is a logical construct to
manage a set of extents. One or more ranks with the same attributes can be assigned to an
extent pool. One rank can be assigned to only one extent pool. To create the logical volume,

extents from one extent pool are concatenated. If an extent pool is made up of several ranks,
a LUN can potentially have extents on different ranks and so be spread over those ranks.

Note: We recommend assigning one rank per extent pool to control the placement of the
data. When creating a logical volume in an extent pool made up of several ranks, the
extents for this logical volume are taken from the same rank if possible.

However, to be able to create very large logical volumes, you must consider having extent
pools that span more than one rank. In this case, you will not control the placement of the
LUNs, which can lead to an unbalanced implementation, as shown on the right side of
Figure 10-9.

Figure 10-9 Spreading data across ranks (left: balanced implementation, LVM striping over one rank per
extent pool; right: non-balanced implementation, LUNs across ranks)

Combining extent pools made up of one rank with LVM striping over LUNs created on each
extent pool offers a balanced method to evenly spread data across the DS8000, as shown on
the left side of Figure 10-9.

Note: The recommendation is to use host striping wherever possible to distribute the
access patterns across the physical resources of the DS8000.

The stripe size


Each striped logical volume that is created by the host’s logical volume manager has a stripe
size that specifies the fixed amount of data stored on each DS8000 logical volume (LUN) at
one time.

The stripe size has to be large enough to keep sequential data relatively close together, but
not so large that the data ends up located on only a single array.

The recommended stripe sizes that should be defined using your host’s logical volume
manager are in the range of 4 MB to 64 MB. You should choose a stripe size close to 4 MB if
you have a large number of applications sharing the arrays and a larger size when you have
very few servers or applications sharing the arrays.
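
As an illustration of how such striping can be set up on a host, the following AIX LVM sketch
creates a volume group over four DS8000 LUNs, one from each of four single-rank extent
pools, and a logical volume striped across all four with a 16 MB strip size (within the 4 MB to
64 MB range recommended above). The hdisk names, volume group and logical volume
names, and sizes are assumptions for illustration only; verify the strip sizes supported by
your AIX level.

# four DS8000 LUNs, assumed to be hdisk4 to hdisk7, one per single-rank extent pool
mkvg -y datavg hdisk4 hdisk5 hdisk6 hdisk7

# logical volume of 400 logical partitions striped across all four LUNs
mklv -y datalv -t jfs2 -S 16M -u 4 datavg 400 hdisk4 hdisk5 hdisk6 hdisk7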


10.3.6 Determining the number of connections between the host and DS8000
When you have determined your workload requirements in terms of throughput, you have to
choose the appropriate number of connections to put between your open systems servers
and the DS8000 to sustain this throughput.

A Fibre Channel host port can sustain a data transfer rate of up to 206 MB/s. As a general
recommendation, you should have at least two FC connections between your hosts and your
DS8000.
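
For example, a host that must sustain roughly 600 MB/s of throughput needs at least three
such connections (3 x 206 MB/s is approximately 618 MB/s), and in practice four, to retain
headroom and path redundancy, assuming the workload can be spread across the adapters.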

10.3.7 Determining the number of paths to a LUN


When configuring the IBM DS8000 for an open systems host, a decision must be made
regarding the number of paths to a particular LUN, because the multipathing software allows
(and manages) multiple paths to a LUN. There are two opposing factors to consider when
deciding on the number of paths to a LUN:
򐂰 Increasing the number of paths increases availability of the data, protecting against
outages.
򐂰 Increasing the number of paths increases the amount of CPU used because the
multipathing software must choose among all available paths each time an I/O is issued.

A good compromise is between 2 and 4 paths per LUN.

10.3.8 Dynamic I/O load-balancing — Subsystem Device Driver (SDD)


The Subsystem Device Driver (SDD) is a pseudo device driver provided by IBM that is
designed to support multipath configuration environments with the DS8000. It resides in the
host system together with the native disk device driver.

The dynamic I/O load-balancing option (default) of SDD is recommended to ensure better
performance because:
򐂰 SDD automatically adjusts data routing for optimum performance. Multipath load
balancing of data flow prevents a single path from becoming overloaded, causing
input/output congestion that occurs when many I/O operations are directed to common
devices along the same input/output path.
򐂰 The path to use for an I/O operation is chosen by estimating the load on each adapter to
which each path is attached. The load is a function of the number of I/O operations
currently in process. If multiple paths have the same load, a path is chosen at random from
those paths.

For more information on the SDD, see Section 15.1.4, “Multipathing support — Subsystem
Device Driver (SDD)” on page 290.
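
As a quick check once SDD is installed, you can verify that all expected paths to the DS8000
LUNs are available and that I/O is being spread across them with the SDD query commands
sketched below; the output format varies by platform and SDD level, so only the commands
are shown.

# list the host adapters that SDD is using for multipathing
datapath query adapter

# list each SDD vpath device with the state and selection count of its paths
datapath query device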

10.3.9 Determining where to attach the host


When determining where to attach multiple paths from a single host system to I/O ports on a
host adapter to the storage facility image, the following considerations apply:
򐂰 Choose the attached I/O ports on different host adapters.
򐂰 Spread the attached I/O ports evenly between the four I/O enclosure groups.
򐂰 Spread the I/O ports evenly between the different RIO-G loops.

The DS8000 host adapters have no server affinity, but the device adapters and the rank have
server affinity. Illustrated in Figure 10-10 on page 180 is a host that is connected through two
FC adapters to two DS8000 host adapters located in different I/O enclosures. The host has

access to LUN1, which is created in the extent pool 1 controlled by the DS8000 server 0. The
host system sends read commands to the storage server. When a read command is
executed, one or more logical blocks are transferred from the selected logical drive through a
host adapter over an I/O interface to a host. In this case the logical device is managed by
server 0 and the data is handled by server 0. The read data to be transferred to the host must
first be present in server 0's cache. When the data is in the cache it is then transferred
through the host adapters to the host.

Figure 10-10 Dual port host attachment

10.4 Performance and sizing considerations for System z


Here we discuss some System z specific topics regarding the performance potential of the
DS8000. We also discuss the considerations to keep in mind when you configure and size a
DS8000 that replaces older storage hardware in System z environments.

10.4.1 Host connections to System z servers


Figure 10-11 on page 181 partially shows a configuration where a DS8000 connects to
FICON hosts. Note that this figure only indicates the connectivity to the Fibre Channel
switched disk subsystem through its I/O enclosure, symbolized by the rectangles.

Each I/O enclosure can hold up to four HAs. The example in Figure 10-11 shows only eight
FICON channels connected to the first two I/O enclosures. Not shown is a second FICON
director, which connects in the same fashion to the remaining two I/O enclosures to provide a
total of 16 FICON channels in this particular example. The DS8100 disk storage subsystem
provides up to 64 FICON channel ports. Again note the very efficient FICON implementation
in the DS8000 FICON ports.


Figure 10-11 DS8100 frontend connectivity example - partial view

10.4.2 Sizing the DS8000 to replace older disk models


We now present some sizing guidelines that can be used when the DS8000 replaces or takes
over the workload of an older disk subsystem model.

A sizing approach to follow would be to propose how many ESS 800s might be consolidated
into a DS8000 series model. From there you can derive the number of ESS 750s, ESS F20s,
and ESS E20s which can collapse into a DS8000. The older ESS models have a known
relationship to the ESS 800. Further considerations are, for example, the connection
technology used, like ESCON, FICON, or FICON Express channels, and the number of
channels.

Generally speaking, a properly configured DS8100 has the potential to provide the same or
better numbers than two ESS 800s. Since the ESS 800 has the performance capabilities of
two ESS F20s, a properly configured DS8100 can replace four ESS F20s. As the DS8000
series scales linearly, a well configured DS8300 has the potential to have the same or better
numbers concerning I/O rates, sequential bandwidth, and response time than two DS8100 or
four ESS 800s. Since the ESS 800 has roughly the performance potential of two ESS F20s, a
corresponding number of ESS F20s can be consolidated. This applies also to the ESS 750,
which has a similar performance behavior to that of an ESS F20.

When comparing the DS8000 series Turbo models 931, 932, and 9B2 against the
predecessor DS8000 series models 921, 922, and 9A2, consider that the Turbo models
feature the IBM POWER5+ processor which can enable up to a 15% performance
improvement in I/O operations per second in transaction processing workload environments
compared to the IBM POWER5 processor of the 921/922/9A2 models.


Disk Magic is the recommended approach


The previous are rules of thumb. Still, the best approach is to use your installation workload
RMF data as input to the Disk Magic modeling tool. With your workload data and the current
configuration of your disk subsystem units, Disk Magic can establish a base model from which
it can project the necessary DS8000 units, and their configuration, that will absorb the present
workload and any future growth that you assume.

10.4.3 DS8000 processor memory size


Cache, or processor memory in DS8000 terms, is especially important to System z based
I/O. Processor memory or cache in the DS8000 contributes to very high I/O rates and helps to
minimize I/O response time.

Processor memory or cache can grow to up to 256 GB in the DS8300 and to 128 GB for the
DS8100. This processor memory is subdivided into a data in cache portion, which holds data
in volatile memory, and a persistent part of the memory, which functions as NVS to hold
DASD fast write (DFW) data until de-staged to disk.

It is not just the pure cache size which accounts for good performance figures. Economical
use of cache, like 4 KB cache segments and smart, adaptive caching algorithms, are just as
important to guarantee outstanding performance. This is implemented in the DS8000 series.

Besides the potential for sharing data on a DS8000 processor memory level, cache is the
main contributor to good I/O performance.

Use your current configuration and workload data and consider these guidelines:
򐂰 Choose a cache size for the DS8000 series which has a similar ratio between cache size
and disk storage capacity to that of the currently used configuration.
򐂰 When you consolidate multiple disk storage units, configure the DS8000 processor
memory or cache size as the sum of all cache from your current disk storage units.

For example, consider a DS8100 replacing four ESS F20s with 3.2 TB and 16 GB cache
each. The ratio between cache size and disk storage for the ESS F20s is 0.5% with 16
GB/3.2 TB. The new DS8100 is configured with 18 TB to consolidate 4 x 3.2 TB plus some
extra capacity for growth. This would require 90 GB of cache to keep the original
cache-to-disk storage ratio. Round up to the next available memory size, which is 128 GB for
this DS8100 configuration.

This ratio of 0.5% cache to backstore is considered high performance for z/OS environments.
Standard performance suggests a cache to backstore ratio of 0.2%.

10.4.4 Channels consolidation


When sizing the DS8000, the number of channels you currently have in your configuration is
also a matter to consider. In that respect, the following applies:
򐂰 Keep the same number of ESCON channels when the DS8000 will be ESCON connected.
Since the maximum number of ESCON channel ports in a DS8100 is 32, and 64 for a
DS8300, this also determines the consolidation factor. Note that with ESCON channels
only, the potential of the DS8000 series cannot be fully exploited. You might consider
taking this opportunity to migrate from ESCON to FICON to improve performance and
throughput.
򐂰 When the connected host uses FICON channels with 1 Gbps technology and it will stay at
this speed as determined by the host or switch ports, then keep the same number of

FICON ports. So four ESS 800s with eight FICON channels each connected to IBM 9672
G5 or G6 servers might end up in a single DS8300 with 32 FICON channels.
򐂰 When migrating not only to DS8000 models, but also from 1 Gbps FICON to FICON
Express channels at 2 Gbps, you can consider consolidating the number of channels to
about 2/3 of the original number of channels. Use at least four FICON channels per
DS8000 —by the way, when we write about FICON channels, we mean FICON ports in
the disk storage units.
򐂰 Coming from FICON Express channels, you should keep a minimum of four FICON ports.
You might consider using 25% fewer FICON ports in the DS8000 than the aggregated
number of FICON 2 Gbps ports from the source environment. For example, when you
consolidate two ESS 800s with eight FICON 2 Gbps ports each to a DS8100, plan for a
minimum of 12 FICON ports on the DS8100.

Another example, with four ESS F20s each with eight FICON channels, might collapse into
about 20 FICON ports when changing to a connectivity speed of 2 Gbps for the target
DS8100 or DS8300.

10.4.5 Disk drives characteristics — capacity, speed and type


You can determine the number of ranks required not only based on the needed capacity, but
also depending on the workload characteristics in terms of access density, read to write ratio,
and hit rates.

You can approach this from the disk side and look at some basic disk figures. Fibre Channel
disks, for example, at 10k RPM provide an average seek time of approximately 5 ms and an
average latency of 3 ms. For transferring only a small block, the transfer time can be
neglected. This is an average 8 ms per random disk I/O operation or 125 I/Os per second. A
15k RPM disk provides about 200 random I/Os per second for small block I/Os. A combined
number of 8 disks is then good for 1,600 I/Os per second when they spin at 15k per minute.
Reduce the number by 12.5% when you assume a spare drive in the 8 pack. Assume further
a RAID-5 logic over the 8 packs.

Back at the host side, consider an example with 4,000 I/Os per second, a read to write ratio of
3 to 1, and 50% read cache hits. This leads to the following I/O numbers:
򐂰 3,000 read I/Os per second.
򐂰 1,500 read I/Os must read from disk.
򐂰 1,000 writes with RAID-5 and assuming the worst case results in 4,000 disk I/Os.
򐂰 This totals 5,500 disk I/Os.

With 15K rpm DDMs you need the equivalent of about four 8-packs to satisfy the I/O load from
the host in this example. Depending on the required capacity, you then decide the disk capacity,
provided each desired disk capacity has 15k rpm. When the access density is less and you
need more capacity, follow the example with higher capacity disks, which usually spin at a
slower speed like 10k rpm.

For a single disk drive, various disk vendors provide the disk specifications on their Internet
product sites, for example:
򐂰 A 146 GB DDM at 10K rpm delivers a sustained transfer rate between 38 and 68 MB/s, or
53 MB/s on average.
򐂰 A 73 GB DDM at 15K rpm transfers between 50 and 75 MB/s, or 62.5 MB/s on average.

The 73 GB DDMs have about 18% more sequential capability than the 146 GB DDM, but
60% more random I/O potential —the I/O characteristic is another aspect to consider when

deciding the disk and disk array size. While this discussion is theoretical in approach, it
provides a first impression.

Once the speed of the disk has been decided, the capacity can be calculated based on your
storage capacity needs and the effective capacity of the RAID configuration you will use. For
this you can use Table 8-9 on page 143.
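To translate a capacity requirement into a rank count, the same kind of quick estimate can be
coded. In the Python sketch below, the effective rank capacity is a placeholder value assumed
for illustration only; take the real effective capacities for your DDM size and RAID format
from Table 8-9 on page 143, and keep the larger of this count and the I/O-driven count.

# Illustrative only: number of ranks from a capacity requirement.
import math

required_capacity_gb = 10000    # decimal GB needed (example value)
effective_rank_gb    = 876      # assumed effective capacity of one rank; use Table 8-9 values

ranks_for_capacity = math.ceil(required_capacity_gb / effective_rank_gb)
print(ranks_for_capacity)       # -> 12 with these assumed values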

FATA disk drives


When analyzing the disk alternatives, keep in mind that the new 500 GB FATA drives are
both the largest and the slowest of the drives available for the DS8000. Combined with the
lower utilization recommendations and the potential for drive protection throttling, this
means that these drives are definitely not the drives to use for high performance or heavy
I/O applications. See 4.4.4, “Fiber Channel ATA (FATA) disk drives overview” on page 56 for a
detailed discussion of the considerations and when to use FATA disk drives.

Disk Magic and Capacity Magic are the recommended approach


The previous guidelines are rules of thumb. The best approach is still to use your
installation's workload RMF data as input to the Disk Magic modeling tool. With your
workload data and the current configuration of your disk subsystems, Disk Magic can
establish a base model from which it projects the necessary DS8000 units, and their
configuration, to absorb the present workload as well as any future growth that you assume.

To estimate the number and capacity of disk drive sets needed to fulfill your storage capacity
requirements, you can use the Capacity Magic tool. It is an easy-to-use tool that also helps
you determine the requirements for any capacity growth that you want to plan for.

10.4.6 Ranks and extent pools configuration


This section discusses how to group ranks into extent pools and what the implications are
with different grouping approaches.

Note the independence of LSSs from ranks in the DS8000. Because an LSS is congruent
with a System z LCU, we need to understand the implications. It is now possible for volumes
within the very same LCU (that is, the very same LSS) to reside in different ranks.

A horizontal pooling approach assumes that volumes within a logical pool of volumes, such
as all DB2 volumes, are evenly spread across all ranks. This is independent of how these
volumes are represented in LCUs. The following sections assume horizontal volume pooling
across ranks, which might be congruent with LCUs when ranks are mapped to LSSs
accordingly.
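As a conceptual illustration of horizontal pooling, the following Python sketch spreads the
volumes of one logical pool round-robin across a set of ranks. The rank and volume names are
invented for the example and do not correspond to any DS CLI syntax.

# Conceptual sketch only: horizontal pooling spreads the volumes of one logical
# pool (for example, all DB2 volumes) evenly across all available ranks.
ranks = ["R%d" % i for i in range(1, 13)]          # twelve ranks, as in the examples below

def spread_volumes(volume_names, ranks):
    """Round-robin placement, independent of which LCU/LSS each volume belongs to."""
    placement = {}
    for i, vol in enumerate(volume_names):
        placement[vol] = ranks[i % len(ranks)]
    return placement

db2_volumes = ["DB2%03d" % n for n in range(20)]
placement = spread_volumes(db2_volumes, ranks)
print(placement["DB2000"], placement["DB2001"], placement["DB2012"])   # -> R1 R2 R1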

Configure one extent pool for each single rank


When defining an extent pool, an affinity is created between this specific extent pool and a
server — see Chapter 6, “Virtualization concepts” on page 91 for a discussion of the
construct of an extent pool.

Due to the virtualization of the Fibre Channel switched disk subsystem, you might consider
creating as many extent pools as there are RAID ranks in the DS8000. This then works
similarly to the way the ESS works today. With this approach you can control the placement
of each single volume and where it ends up in the disk subsystem.

In the example in Figure 10-12 on page 185, each rank is in its own extent pool. The
even-numbered extent pools have an affinity to the left server, server 0. The odd-numbered extent
pools have an affinity to the right server, server 1. When a rank is subdivided into extents it
gets assigned to its own extent pool.

Figure 10-12 Extent pool affinity to processor complex with one extent pool for each rank

All volumes that are composed of extents from an extent pool therefore also have a
server affinity when I/Os are scheduled to these volumes.

This allows you to place certain volumes in specific ranks to avoid potential clustering of many
high activity volumes within the same rank. You can create SMS storage groups which are
congruent to these extent pools to ease the management effort of such a configuration. But
you can still assign multiple storage groups when you are not concerned about the placement
of less active volumes.

Figure 10-12 also indicates that there is no affinity, nor any particular preference, between
HAs and processor complexes or servers in the DS8000. In this example, either one of the
two HAs can address any volume in any of the ranks, which here range from rank 1 to rank
12. Note, however, that there is an affinity of DAs to the processor complex. A DA pair
connects to two switches, or respectively to two pairs of switches (see Figure 10-2 on
page 169). The first DA of the pair connects to the left processor complex, server 0. The
second DA of the pair connects to the other processor complex, server 1.

Minimize the number of extent pools


The other extreme is to create just two extent pools when the DS8000 is configured as CKD
storage only. You would then subdivide the disk subsystem evenly between both processor
complexes or servers as Figure 10-13 on page 186 shows.


Figure 10-13 Extent pool affinity to processor complex with pooled ranks in two extent pools

Again, all volumes residing in extent pool 0 have an affinity to the left processor complex,
server 0, and the volumes residing in extent pool 1 have an affinity to the right processor
complex, server 1.

When creating volumes, there is no straightforward approach to place certain volumes into
certain ranks. For example, when you create the first 20 DB2 logging volumes, they would be
allocated in a consecutive fashion in the first rank, and the corresponding rank would then
host all these 20 logging volumes. With Array Across Loops (AAL) and the high performance
and large bandwidth of the Fibre Channel switched disk subsystem, this might not be an
issue. You may still choose to control the placement of your most performance-critical
volumes, which might lead to a compromise between both approaches.

Plan for a reasonable number of extent pools


Figure 10-14 on page 187 presents a grouping of ranks into extent pools which follows a
similar pattern and discussion as for grouping volumes or volume pools into SMS storage
groups.

Create two general extent pools for all the average workload and the majority of the volumes
and subdivide these pools evenly between both processor complexes or servers. These pools
contain the majority of the installed ranks in the DS8000. Then you might consider two or four
smaller extent pools with dedicated ranks for high performance workloads and their
respective volumes. You may consider defining storage groups accordingly which are
congruent to the smaller extent pools.


Figure 10-14 Mix of extent pools (storage groups SGPRIM, SGLOG1, SGLOG2, SGHPC1, and SGHPC2 mapped to larger and smaller extent pools across the two servers)

Consider grouping the two larger extent pools into a single SMS storage group. SMS will
eventually spread the workload evenly across both extent pools. This allows a system
managed approach to place data sets automatically in the right extent pools.

With more than one DS8000 you might consider configuring each DS8000 in a uniform
fashion. We recommend grouping all volumes from all the large extent pools into one large
SMS storage group, SGPRIM. Cover the smaller, high performance extent pools through
discrete SMS storage groups for each DS8000.

With two of the configurations displayed in Figure 10-14, this ends up with one storage group,
SGPRIM, and six smaller storage groups. SGLOG1 contains Extent pool0 in the first DS8100
and the same extent pool in the second DS8100. Similar considerations are true for
SGLOG2. For example, in a dual logging database environment this allows you to assign
SGLOG1 to the first logging volume and SGLOG2 for the second logging volume. For very
demanding I/O rates and to satisfy a small set of volumes, you might consider keeping Extent
pool 4 and Extent pool 5 in both DS8100s separate, through four distinct storage groups,
SGHPC1-4.

Figure 10-14 shows, again, that there is no affinity between HA and processor complex or
server. Each I/O enclosure connects to either processor complex. But there is an affinity
between extent pool and processor complex and, therefore, an affinity between volumes and
processor complex. This requires some attention, as outlined previously, when you define
your volumes.

10.4.7 Parallel Access Volume — PAV


Parallel Access Volume (PAV) is one of the features originally introduced with the IBM
TotalStorage Enterprise Storage Server (ESS) that the DS8000 series has inherited. PAV is
an optional licensed function of the DS8000 for the z/OS and z/VM operating
systems, helping the System z servers running applications to concurrently share the same
logical volumes.

The ability to do multiple I/O requests to the same volume nearly eliminates IOSQ time, one
of the major components in z/OS response time. Traditionally, access to highly active volumes
has involved manual tuning, splitting data across multiple volumes, and more. With PAV and
the Workload Manager, you can almost forget about manual performance tuning. WLM
manages PAVs across all the members of a sysplex, too. This way, the DS8000 in conjunction
with z/OS has the ability to meet the performance requirements on its own.

Traditional z/OS behavior — without PAV


Traditional storage disk subsystems have allowed for only one channel program to be active
to a volume at a time, in order to ensure that data being accessed by one channel program
cannot be altered by the activities of some other channel program.

Figure 10-15 illustrates the traditional z/OS behavior without PAV, where subsequent
simultaneous I/Os to volume 100 are queued while volume 100 is still busy with a preceding
I/O.

Figure 10-15 Traditional z/OS behavior (one I/O to one volume at a time; subsequent I/Os are queued as UCB busy or device busy)

From a performance standpoint, it did not make sense to send more than one I/O at a time to
the storage disk subsystem, because the hardware could process only one I/O at a time.
Knowing this, the z/OS systems did not try to issue another I/O to a volume —in z/OS
represented by a Unit Control Block (UCB)— while an I/O was already active for that volume,
as indicated by a UCB busy flag; see Figure 10-15. Not only were the z/OS systems limited to
processing only one I/O at a time, but also the storage subsystems accepted only one I/O at a
time from different system images to a shared volume, for the same reasons mentioned
above.


Parallel I/O capability — z/OS behavior with PAV

Figure 10-16 z/OS behavior with PAV (concurrent I/Os to volume 100 using different UCBs; none is queued)

The DS8000 has the capability to do more than one I/O to a CKD volume. Using alias
addresses in addition to the conventional base address, a z/OS host can use several UCBs
for the same logical volume instead of one UCB per logical volume. For example, base
address 100 may have alias addresses 1FF and 1FE, which allows for three parallel I/O
operations to the same volume; see Figure 10-16.

This feature that allows parallel I/Os to a volume from one host is called Parallel Access
Volume (PAV).

The two concepts that are basic in the PAV functionality are:
򐂰 Base address: the base device address is the conventional unit address of a logical
volume. There is only one base address associated with any volume.
򐂰 Alias address: an alias device address is mapped to a base address. I/O operations to an
alias run against the associated base address storage space. There is no physical space
associated with an alias address. You can define more than one alias per base.

Alias addresses have to be defined to the DS8000 and in the IODF. This association is
predefined, and adding new aliases can be done non-disruptively. Still, the association
between base and alias is not fixed: an alias address can be reassigned to a different base
address by the z/OS Workload Manager.

For guidelines on PAV definition and support, see 16.3.2, “Parallel Access Volumes (PAV)
definition” on page 338.

PAV is an optional licensed feature of the DS8000 series


PAV is an optional licensed function on the DS8000 series, available with the PAV indicator
feature (2107 feature #0780, or 242x features #0780 and #78xx), and corresponding DS8000
series function authorization (2244-PAV, or 239x-LFA, with feature number 78xx). PAV also
requires the purchase of the FICON/ESCON Attachment feature in the Turbo models.


10.4.8 z/OS Workload Manager — Dynamic PAV tuning


It is not always easy to predict which volumes should have an alias address assigned, and
how many. Your software can automatically manage the aliases according to your goals. z/OS
can exploit automatic PAV tuning if you are using the z/OS Workload Manager (WLM) in Goal
mode. The WLM can dynamically tune the assignment of alias addresses. The Workload
Manager monitors the device performance and is able to dynamically reassign alias
addresses from one base to another if predefined goals for a workload are not met.

z/OS recognizes the aliases that are initially assigned to a base during the NIP (Nucleus
Initialization Program) phase. If dynamic PAVs are enabled, the WLM can reassign an alias to
another base by instructing the IOS to do so when necessary; see Figure 10-17.

Figure 10-17 WLM assignment of alias addresses (WLM can dynamically reassign an alias to another base)

The z/OS Workload Manager in Goal mode tracks the system workload and checks whether
the workloads are meeting the goals established by the installation; see Figure 10-18.

Figure 10-18 Dynamic PAVs in a sysplex (WLMs exchange performance information to decide which system can donate an alias when goals are not met because of IOSQ)


WLM also keeps track of the devices utilized by the different workloads, accumulates this
information over time, and broadcasts it to the other systems in the same sysplex. If WLM
determines that any workload is not meeting its goal due to IOSQ time, WLM will attempt to
find an alias device that can be reallocated to help this workload achieve its goal; see
Figure 10-18.

Actually there are two mechanisms to tune the alias assignment:


1. The first mechanism is goal based. This logic attempts to give additional aliases to a
PAV-enabled device that is experiencing IOS queue delays and is impacting a service
class period that is missing its goal.
2. The second is to move aliases to high contention PAV enabled devices from low
contention PAV devices. High contention devices are identified by having a significant
amount of IOS queue time (IOSQ). This tuning is based on efficiency rather than directly
helping a workload to meet its goal.

As mentioned before, the Workload Manager (WLM) must be in Goal mode to cause PAVs to
be shifted from one logical device to another.

The movement of an alias from one base to another is serialized within the sysplex. IOS
tracks a token for each PAV enabled device. This token is updated each time an alias change
is made for a device. IOS and WLM exchange the token information. When the WLM instructs
IOS to move an alias, WLM also presents the token. When IOS has started a move and
updated the token, all affected systems are notified of the change through an interrupt.
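The second, efficiency-based mechanism can be pictured with a small model. The following
Python sketch is only a conceptual illustration of moving an alias from a low-contention
device to a high-contention one; the real WLM and IOS logic, including the goal-based checks
and the sysplex-wide token handling described above, is considerably more sophisticated, and
the threshold values here are arbitrary assumptions.

# Conceptual model only: move an alias from a low-contention PAV device to a
# high-contention one, identified by its IOS queue time (IOSQ).
def rebalance_aliases(devices, iosq_threshold_ms=5.0):
    """devices: dict of base address -> {'iosq_ms': float, 'aliases': list}."""
    donors  = [d for d, s in devices.items() if s["iosq_ms"] < 1.0 and len(s["aliases"]) > 0]
    needers = [d for d, s in devices.items() if s["iosq_ms"] > iosq_threshold_ms]
    moves = []
    for needer in sorted(needers, key=lambda d: devices[d]["iosq_ms"], reverse=True):
        if not donors:
            break
        donor = donors.pop()
        alias = devices[donor]["aliases"].pop()
        devices[needer]["aliases"].append(alias)
        moves.append((alias, donor, needer))
    return moves

devices = {
    "0100": {"iosq_ms": 9.0, "aliases": []},                # high IOSQ: needs help
    "0110": {"iosq_ms": 0.2, "aliases": ["01F0", "01F1"]},  # low contention: can donate
}
print(rebalance_aliases(devices))   # -> [('01F1', '0110', '0100')]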

RMF reporting on PAV


RMF reports the number of exposures for each device in its Monitor/DASD Activity report and
in its Monitor II and Monitor III Device reports. RMF also reports which devices had a change
in the number of exposures. RMF reports all I/O activity against the base address —not by
the base and associated aliases. The performance information for the base includes all base
and alias activity.

10.4.9 PAV in z/VM environments


z/VM provides PAV support in the following ways:
򐂰 As traditionally supported, for VM guests as dedicated guests via the CP ATTACH command
or DEDICATE user directory statement
򐂰 Starting with z/VM 5.2.0, with APAR VM63952, VM supports PAV minidisks.

Figure 10-19 and Figure 10-20 illustrate PAV in a z/VM environment.

Figure 10-19 z/VM support of PAV volumes dedicated to a single guest virtual machine


Figure 10-20 Linkable minidisks for guests that do exploit PAV

In this way, PAV provides z/VM environments with greater I/O performance (throughput) by
reducing I/O queueing.

With the SPE introduced with z/VM 5.2.0 and APAR VM63952 additional enhancements are
available when using PAV with z/VM. For more information see Section 16.4, “z/VM
considerations” on page 340.

10.4.10 Multiple Allegiance


Normally, if any System z host image (server or LPAR) does an I/O request to a device
address for which the storage disk subsystem is already processing an I/O that came from
another System z host image, then the storage disk subsystem will send back a device busy
indication. This delays the new request and adds to processor and channel overhead —this
delay is shown in the RMF Pend time column.

Figure 10-21 Parallel I/O capability with Multiple Allegiance

The DS8000 accepts multiple I/O requests from different hosts to the same device address,
increasing parallelism and reducing channel overhead. In older storage disk subsystems, a
device had an implicit allegiance, that is, a relationship created in the control unit between the
device and a channel path group when an I/O operation is accepted by the device. The
allegiance causes the control unit to guarantee access (no busy status presented) to the
device for the remainder of the channel program over the set of paths associated with the
allegiance.

With Multiple Allegiance, the requests are accepted by the DS8000 and are all processed in
parallel, unless there is a conflict when writing data to a particular extent of the CKD logical
volume; see Figure 10-21 on page 192.

Still, good application software access patterns can improve the global parallelism by
avoiding reserves, limiting the extent scope to a minimum, and setting an appropriate file
mask, for example, if no write is intended.
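The serialization rule can be pictured with a small conceptual check. The following Python
sketch assumes a simplified representation of a channel program as an extent range plus a
write intent flag; it illustrates the rule described above and is not a model of the DS8000
microcode.

# Conceptual sketch only: two channel programs against the same CKD volume can run in
# parallel unless their extent ranges overlap and at least one of them intends to write.
def can_run_in_parallel(io_a, io_b):
    """Each I/O is (first_extent, last_extent, is_write); extents are inclusive."""
    a_first, a_last, a_write = io_a
    b_first, b_last, b_write = io_b
    overlap = a_first <= b_last and b_first <= a_last
    return not (overlap and (a_write or b_write))

print(can_run_in_parallel((0, 10, False), (5, 20, False)))   # True: two readers may overlap
print(can_run_in_parallel((0, 10, True),  (5, 20, False)))   # False: a write conflicts on overlap
print(can_run_in_parallel((0, 10, True),  (50, 60, True)))   # True: disjoint extents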

In systems without Multiple Allegiance, all except the first I/O request to a shared volume
were rejected, and the I/Os were queued in the System z channel subsystem, showing up as
PEND time in the RMF reports.

Multiple Allegiance provides significant benefits for environments running a sysplex, or
System z systems sharing access to data volumes. Multiple Allegiance and PAV can operate
together to handle multiple requests from multiple hosts.

10.4.11 HyperPAV
The DS8000 series offers enhancements to Parallel Access Volumes (PAV) with support for
HyperPAV, which is designed to enable applications to achieve equal or better performance
than PAV alone, while also using the same or fewer operating system resources.

PAV and Multiple Allegiance


Figure 10-22 illustrates the basic characteristics of how PAV operates.

Figure 10-22 Parallel Access Volumes — basic operation characteristics (aliases assigned to specific base addresses)


Multiple Allegiance and PAV allow multiple I/Os to be executed concurrently against the same
volume:
򐂰 With Multiple Allegiance, the I/Os are coming from different system images
򐂰 With PAV, the I/Os are coming from the same system image:
– Static PAV: aliases are always associated with the same base addresses
– Dynamic PAV: aliases are assigned up front but can be reassigned to any base
address as need dictates by means of the Dynamic Alias Assignment function of the
Workload Manager—reactive alias assignment.

HyperPAV
With HyperPAV, an on-demand, proactive assignment of aliases is now possible; see
Figure 10-23.

Figure 10-23 HyperPAV — basic operation characteristics (aliases kept in a pool for use as needed)

HyperPAV allows an alias address to be used to access any base on the same control unit
image, on a per I/O basis. This capability also allows different HyperPAV hosts to use one
alias to access different bases, which reduces the number of alias addresses required to
support a set of bases in a System z environment, with no latency in targeting an alias to a
base. This functionality is also designed to enable applications to achieve equal or better
performance than possible with the original PAV feature alone, while also using the same or
fewer operating system resources.
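The difference from static or dynamic PAV can be pictured with a small model. The following
Python sketch is a conceptual illustration only: aliases live in a pool on the logical control
unit and are borrowed for the duration of a single I/O, so the same alias can serve different
bases in succession. The class name and the address values are invented for the example.

# Conceptual sketch only: with HyperPAV, aliases are not bound to a base but are
# taken from a pool on the control unit image for the duration of a single I/O.
class HyperPavLcu:
    def __init__(self, alias_pool):
        self.free_aliases = list(alias_pool)

    def start_io(self, base):
        """Borrow any free alias for this I/O against 'base'; None means queue on the base UCB."""
        if not self.free_aliases:
            return None
        return self.free_aliases.pop()

    def end_io(self, alias):
        self.free_aliases.append(alias)        # the alias is immediately reusable by any base

lcu = HyperPavLcu(["01FE", "01FF"])
a1 = lcu.start_io("0100")    # alias 01FF borrowed for base 0100
a2 = lcu.start_io("0187")    # alias 01FE borrowed for a different base at the same time
lcu.end_io(a1)
print(lcu.start_io("0123"))  # -> '01FF', reused for yet another base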

Benefits of HyperPAV
HyperPAV has been designed to:
򐂰 Provide an even more efficient Parallel Access Volumes (PAV) function
򐂰 Help customers who implement larger volumes to scale I/O rates without the need for
additional PAV alias definitions
򐂰 Exploit FICON architecture to reduce overhead, improve addressing efficiencies, and
provide storage capacity and performance improvements:
– More dynamic assignment of PAV-aliases improves efficiency


– Number of PAV-aliases needed may be reduced, taking fewer from the 64K device
limitation and leaving more storage for capacity use
򐂰 Enable a more dynamic response to changing workloads
򐂰 Simplified management of aliases
򐂰 Enable users to stave off migration to larger volume sizes.

For support information see sections 16.3.3, “HyperPAV — z/OS support” on page 339 and
16.4.3, “PAV and HyperPAV — z/VM support” on page 340.

Optional licensed function


HyperPAV is an optional licensed function of the DS8000 series.

242x machine types


HyperPAV is an optional feature, available with the HyperPAV indicator features #0782 and
#7899, and the corresponding DS8000 series function authorization (239x-LFA HyperPAV,
feature number 7899). HyperPAV also requires the purchase of the PAV licensed feature and
the FICON/ESCON Attachment licensed feature.

2107 machine types


HyperPAV is an optional feature, available with the HyperPAV indicator feature number 0782
and the corresponding DS8000 series function authorization (2244-PAV HyperPAV feature
number 7899). HyperPAV also requires the purchase of one or more PAV licensed features
and the FICON/ESCON Attachment licensed feature; the FICON/ESCON Attachment licensed
feature applies only to the DS8000 Turbo Models 931, 932, and 9B2.

10.4.12 I/O Priority queueing


The concurrent I/O capability of the DS8000 allows it to execute multiple channel programs
concurrently, as long as the data accessed by one channel program is not altered by another
channel program.

Queueing of channel programs


When the channel programs conflict with each other and must be serialized to ensure data
consistency, the DS8000 will internally queue channel programs. This subsystem I/O queuing
capability provides significant benefits:
򐂰 Compared to the traditional approach of responding with a device busy status to an
attempt to start a second I/O operation to a device, I/O queuing in the storage disk
subsystem eliminates the overhead associated with posting status indicators and
re-driving the queued channel programs.
򐂰 Contention in a shared environment is eliminated. Channel programs that cannot execute
in parallel are processed in the order they are queued. A fast system cannot monopolize
access to a volume also accessed from a slower system. Each system gets a fair share.

Priority queueing
I/Os from different z/OS system images can be queued in priority order. The z/OS Workload
Manager makes use of this priority to prioritize I/Os from one system over the others. You
can activate I/O Priority Queuing in the WLM Service Definition settings. WLM has to run in
Goal mode.


Figure 10-24 I/O Priority Queueing (I/Os that must be serialized are queued in the DS8000 and executed in priority order)

When a channel program with a higher priority comes in and is put in front of the queue of
channel programs with lower priority, the priority of the low-priority programs is also
increased; see Figure 10-24. This prevents high-priority channel programs from dominating
lower priority ones and gives each system a fair share.
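This behavior can be modeled conceptually with a priority queue that ages waiting entries.
The following Python sketch illustrates the idea only and is not the DS8000 internal
algorithm; the aging step value is an arbitrary assumption.

# Conceptual model only: higher-priority channel programs are dispatched first,
# but queued lower-priority work is aged upward so it cannot be starved.
class PriorityQueueWithAging:
    def __init__(self, aging_step=1):
        self.queue = []                # list of [priority, name]
        self.aging_step = aging_step

    def enqueue(self, name, priority):
        self.queue.append([priority, name])

    def dispatch(self):
        if not self.queue:
            return None
        self.queue.sort(key=lambda e: e[0], reverse=True)   # highest priority first
        priority, name = self.queue.pop(0)
        for entry in self.queue:                            # age everything left behind
            entry[0] += self.aging_step
        return name

q = PriorityQueueWithAging()
q.enqueue("I/O from B, priority X'21'", 0x21)
q.enqueue("I/O from A, priority X'FF'", 0xFF)
print(q.dispatch())   # -> the X'FF' request runs first
print(q.dispatch())   # -> the X'21' request follows, its priority already aged upward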


Chapter 11. Features and license keys


This chapter discusses the activation of licensed functions. In this chapter we cover the
following topics:
򐂰 DS8000 licensed functions
򐂰 Activation of licensed functions
򐂰 Licensed scope considerations

For the planning activities related to the activation of the licensed functions see also the
discussion in 9.4.5, “Activation of Advanced Function licenses” on page 155.


11.1 DS8000 licensed functions


Many of the functions of the DS8000 series that we have discussed so far are optional
licensed functions that must be enabled before you can use them. For all the DS8000 models,
the licensed functions are enabled via a 242x/2107 licensed function indicator feature, plus a
239x/2244 licensed function authorization feature number, in the following way:
򐂰 For the Turbo Models 931, 932 and 9B2, with Enterprise Choice length of warranty, the
licensed functions are enabled via a pair of 242x- 931/932/9B2 licensed function indicator
feature numbers (#07xx and #7xxx) plus a 239x-LFA (Licensed Function Authorization)
feature number (#7xxx) —according to Table 11-1.
The x in 242x designates the machine type according to its warranty period, where x can
be either 1, 2, 3, or 4. For example, a 2424-9B2 machine type designates a DS8000 series
Turbo Model 9B2 with four years of warranty period.
The x in 239x can either be 6, 7, 8, or 9, according to the associated 242x base unit model
—2396 function authorizations apply to 2421 base units, 2397 to 2422, and so on. For
example, a 2399-LFA machine type designates a DS8000 Licensed Function
Authorization for a 2424 machine with four years of warranty period.
򐂰 For the Turbo Models 931, 932, and 9B2, without Enterprise Choice length of warranty, the
licensed functions are enabled via a 2107-931/932/9B2 licensed function indicator feature
number (#07xx) plus a 2244-model function authorization feature number (#7xxx)
—according to Table 11-2 on page 199.
򐂰 For the 921, 922 and 9A2 models, the licensed functions are enabled via a 2107-
921/922/9A2 licensed function indicator feature number (#07xx) plus a 2244-model
function authorization feature number (#7xxx) —according to Table 11-3 on page 199.

The 242x/2107 licensed function indicator feature numbers enable the technical activation of
the function subject to the client applying a feature activation code made available by IBM.
The 239x/2244 licensed function authorization feature numbers establish the extent of
authorization for that function on the 242x or 2107 machine for which it was acquired.

Table 11-1 DS8000 series Turbo Models 93x/9B2 with Enterprise Choice length of warranty
Licensed function (Turbo Models 93x/9B2 with Enterprise Choice warranty)    IBM 242x indicator feature numbers    IBM 239x function authorization model and feature numbers

Operating Environment 0700 and 70xx 239x Model LFA, 70xx

FICON/ESCON Attachment 0702 and 7090 239x Model LFA, 7090

Point-in-Time Copy 0720 and 72xx 239x Model LFA, 72xx

Metro/Global Mirror 0742 and 74xx 239x Model LFA, 74xx

Metro Mirror 0744 and 74xx 239x Model LFA, 74xx

Global Mirror 0746 and 74xx 239x Model LFA, 74xx

Metro Mirror Add on 0754 and 75xx 239x Model LFA, 75xx

Global Mirror Add on 0756 and 75xx 239x Model LFA, 75xx

Remote Mirror for z/OS 0760 and 76xx 239x Model LFA, 76xx

Parallel access volumes 0780 and 78xx 239x Model LFA, 78xx

HyperPAV 0782 and 7899 239x Model LFA, 7899


Table 11-2 DS8000 series Turbo Models 93x/9B2 without Enterprise Choice length of warranty
Licensed function (Turbo Models 93x/9B2 without Enterprise Choice warranty)    IBM 2107 indicator feature number    IBM 2244 function authorization model and feature numbers

Operating Environment 0700 2244-OEL, 70xx

FICON/ESCON Attachment 0702 2244-OEL, 7090

Point-in-Time Copy 0720 2244-PTC, 72xx

Metro/Global Mirror 0742 2244-RMC, 74xx

Metro Mirror 0744 2244-RMC, 74xx

Global Mirror 0746 2244-RMC, 74xx

Metro Mirror Add on 0754 2244-RMC, 75xx

Global Mirror Add on 0756 2244-RMC, 75xx

Remote Mirror for z/OS 0760 2244-RMZ, 76xx

Parallel access volumes 0780 2244-PAV, 78xx

HyperPAV 0782 2244-PAV, 7899

Table 11-3 DS8000 series Models 92x/9A2 licensed functions


Licensed function (Models 92x/9A2)    IBM 2107 indicator feature number    IBM 2244 function authorization model and feature numbers

Operating Environment 0700 2244-OEL, 70xx

Point-in-Time Copy 0720 2244-PTC, 72xx

Remote Mirror and Copy 0740 2244-RMC, 74xx

Metro/Global Mirror 0742 2244-RMC, 74xx

Remote Mirror for z/OS 0760 2244-RMZ, 76xx

Parallel Access Volumes 0780 2244-PAV, 78xx

HyperPAV 0782 2244-PAV, 7899

Note: For a detailed explanation of the features involved and the considerations you must
have when ordering DS8000 licensed functions refer to the announcement letters:
򐂰 IBM System Storage DS8000 Series (IBM 2107 and IBM 242x)
򐂰 IBM System Storage DS8000 — Function Authorizations (IBM 2244 or IBM 239x).

IBM announcement letters can be found at http://www.ibm.com/products

11.2 Activation of licensed functions


Activating the license keys of the DS8000 can be done after the IBM service representative
has completed the storage complex installation. Based on your 2244/239x licensed function
order you need to obtain the necessary keys from the IBM Disk Storage Feature Activation
(DSFA) Web site at: http://www.ibm.com/storage/dsfa


You may activate the license keys all at the same time (for example, on initial activation of the
storage unit) or activate them individually (for example, additional ordered keys).

Before connecting to the IBM DSFA site to obtain your feature activation codes, ensure that
you have the following items:
򐂰 The IBM License Function Authorization documents. If you are activating codes for a new
storage unit, these documents are included in the shipment of the storage unit. If you are
activating codes for an existing storage unit, IBM sends these documents to you in an
envelope.
򐂰 A diskette or USB memory device for downloading your activation codes into a file if you
cannot access the DS Storage Manager from the system that you are using to access the
DSFA Web site. Instead of downloading the activation codes in softcopy format, you can
also print the activation codes and manually enter them using the DS Storage Manager
GUI. However, this is slow and error prone because the activation keys are 32-character
long strings.

For a discussion of the activities in preparation to the activation of the licensed functions see
also 9.4.5, “Activation of Advanced Function licenses” on page 155.

11.2.1 Obtaining DS8000 machine information


In order to obtain license activation keys from the DSFA Web site, you need to know the serial
number and machine signature of your DS8000 unit. Use the following procedure to find the
required information.

Figure 11-1 DS8000 Storage Manager Sign On panel

1. Start the DS Storage Manager application. Log in using an ID with administrator access. If
this is the first time you are accessing the machine, contact your IBM CE for the user ID
and password. After successful login, the DS8000 Storage Manager Welcome panel
opens. Select, in order, Real-time manager and Manage hardware as shown in
Figure 11-1 on page 200.
2. In the My Work navigation panel on the left side, select Storage units. The Storage units
panel opens (Figure 11-2).

Figure 11-2 DS8000 Storage units panel

3. On the Storage units panel, select the storage unit by clicking the box to the left of it, then
click Properties in the Select Action pull-down list. The Storage Unit Properties panel opens
(Figure 11-3).

Figure 11-3 DS8000 Storage Unit Properties panel


4. On the Storage Unit Properties panel, click the General tab. Gather the following
information about your storage unit:
– From the MTMS field, note the machine's serial number. The Machine Type - Model
Number - Serial Number (MTMS) is a string that contains the machine type, model
number, and serial number. The last seven characters of the string are the machine's
serial number.
– From the Machine signature field, note the machine signature.

You can use Table 11-4 to document the information. You will later enter the information on
the IBM DSFA Web site.

Table 11-4 DS8000 machine information table


Property Your storage unit’s information

Machine’s serial number

Machine signature

11.2.2 Obtaining activation codes


Perform the following steps to obtain the activation codes.
1. At a computer with an Internet connection and a browser, connect to the IBM Disk Storage
Feature Activation (DSFA) Web site at: http://www.ibm.com/storage/dsfa
Figure 11-4 shows the DSFA home page.

Figure 11-4 IBM DSFA Web page

2. Click the DS8000 series (machine type 2107) link. This will bring you to the Select
DS8000 series machine page (Figure 11-5 on page 203).


Figure 11-5 DS8000 DSFA machine information entry page

Note: The examples we are discussing in this section of the book illustrate the activation of
the licensed functions for 2107-922 and 9A2 models. For this reason, the machine type and
function authorizations that you see in the screens correspond to Table 11-3 on page 199. For
the DS8000 series Turbo Models 93x/9B2, the machine types and function authorizations
correspond to Table 11-1 on page 198 and Table 11-2 on page 199.

3. Enter the machine information and click Submit. The View machine summary page opens
(Figure 11-6 on page 204).


Figure 11-6 DSFA View machine summary page

The View machine summary page shows the total purchased licenses and how much of
each is currently assigned. The example in Figure 11-6 shows a storage unit where all
licenses have already been assigned. When assigning licenses for the first time, the
Assigned field would show 0.0 TB.
4. Click the Manage activations link. The Manage activations page opens. Figure 11-7 on
page 205 shows the Manage activations page for a 2107 Model 9A2 with two storage
images. For each license type and storage image, enter the license scope (FB, CKD, or
All) and a capacity value (in TB) to be assigned to the storage image. The capacity values
are expressed in decimal terabytes with 0.1 TB increments. The sum of the storage image
capacity values for a license cannot exceed the total license value.


Figure 11-7 DSFA Manage activations page

5. When you have entered the values, click Submit. The View activation codes page opens,
showing the license activation codes for the storage images (Figure 11-8 on page 206).
Print the activation codes or click Download to save the activation codes in a file that you
can later import in the DS8000. The file will contain the activation codes for both storage
images.


Figure 11-8 DSFA View activation codes page

Note: In most situations, the DSFA application can locate your 2244/239x licensed function
authorization record when you enter the DS8000 (2107 or 242x) serial number and
signature. However, if the 2244/239x licensed function authorization record is not attached
to the 2107/242x record, you must assign it to the 2107/242x record using the Assign
function authorization link on the DSFA application. In this case, you need the 2244/239x
serial number (which you can find on the License Function Authorization document).

11.2.3 Applying activation codes using the GUI


Use this process to apply the activation codes on your DS8000 storage images using the DS
Storage Manager GUI. Once applied, the codes enable you to begin configuring storage on a
storage image.

Note: The initial enablement of any optional DS8000 licensed function is a concurrent
activity (assuming the appropriate level of microcode is installed on the machine for the
given function). The following activation activities are disruptive and require a machine IML
(Models 921, 931, 922, and 932) or reboot of the affected image (Models 9A2 and 9B2):
򐂰 Removal of a DS8000 licensed function to deactivate the function.
򐂰 A lateral change or reduction in the license scope. A lateral change is defined as
changing the license scope from fixed block (FB) to count key data (CKD) or from CKD
to FB. A reduction is defined as changing the license scope from all physical capacity
(ALL) to only FB or only CKD capacity.


Important: Before you begin this task, you must resolve any current DS8000 problems.
Contact IBM support for assistance in resolving these problems.

The easiest way to apply the feature activation codes is to download the activation codes from
the IBM Disk Storage Feature Activation (DSFA) Web site to your local computer and import
the file into the DS Storage Manager. If you can access the DS Storage Manager from the
same computer that you use to access the DSFA Web site, you can copy the activation codes
from the DSFA window and paste them into the DS Storage Manager window. The third
option is to manually enter the activation codes in the DS Storage Manager from a printed
copy of the codes.
1. In the My Work navigation panel on the DS Storage Manager Welcome screen, select, in
order, Real-time manager, Manage hardware, and Storage images. The Storage
images panel opens (Figure 11-9).

Figure 11-9 DS8000 Storage Images select panel

2. On the Storage images panel, select a storage image whose activation codes you want to
apply. Select Apply activation codes in the Select Action pull-down list. The Apply
Activation codes panel is displayed (Figure 11-10 on page 208).
If this is the first time you apply activation codes, the fields in the panel are empty;
otherwise, the current license codes and values are displayed in the fields and you can
modify or overwrite them, as appropriate.


Figure 11-10 DS8000 activation code input panel

Note: As already mentioned, the example we are presenting corresponds to a 2107-9A2
machine; for this reason, the Apply Activation codes panel looks as shown in Figure 11-10.
For a 93x/9B2 model, the Apply Activation codes panel looks as shown in Figure 11-11.

Figure 11-11 DS8000 Activation Code input panel — Turbo models


3. If you are importing your activation codes from a file that you downloaded from the DSFA
Web site, click Import key file. The Import panel is displayed (Figure 11-12).

Figure 11-12 DS8000 import key file panel

Enter the name of your key file, then click OK to complete the import process.
If you did not download your activation codes into a file, manually enter the codes into the
appropriate fields on the Apply Activation codes panel.
4. After you have entered the activation codes, either manually or by importing a key file, click
Apply. The Capacity and Storage type fields will now reflect the license information
contained in the activation codes, as in Figure 11-13.

Figure 11-13 Applied licenses

Click OK to complete the activation code apply process.

Note: For the 9A2 and 9B2 models, you need to perform the code activation process for
both storage images, one image at a time.

11.2.4 Applying activation codes using the DS CLI


The license keys can also be activated using the DS CLI. This is available only if the machine
Operating Environment License (OEL) has previously been activated and you have a console
with a compatible DS CLI program installed.


1. Use the showsi command to display the DS8000 machine signature; see Example 11-1.

Example 11-1 DS CLI showsi command


dscli> showsi ibm.2107-75abtv1
Date/Time: November 2, 2005 9:00:21 AM EET IBM DSCLI Version: 5.1.0.204 DS:
ibm.2107-75abtv1
Name -
desc -
ID IBM.2107-75ABTV1
Storage Unit IBM.2107-75ABTV0
Model 9A2
WWNN 5005076303FFC663
Signature 1234-5678-9012-3456
State Online
ESSNet Enabled
Volume Group V0
os400Serial -

2. Obtain your license activation codes from the IBM DSFA Web site. See 11.2.2, “Obtaining
activation codes” on page 202.
3. Use the applykey command to activate the codes and the lskey command to verify which
type of licensed features are activated for your storage unit.
a. Issue an applykey command at the dscli command prompt as follows. The -file
parameter specifies the key file; the second parameter specifies the storage image.
dscli> applykey -file c:\2107_7520780.xml IBM.2107-7520781
b. Verify that the keys have been activated for your storage unit by issuing the DS CLI
lskey command as shown in Example 11-2.

Example 11-2 Using lskey to list installed licenses


dscli> lskey IBM.2107-7520781
Date/Time: 3 January 2006 20:10:07 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
Activation Key Capacity (TB) Storage Type
==================================================
Flash Copy 105 All
Operating Environment 105 All
Parallel Access Volumes 105 CKD
Remote Mirror and Copy 55 FB
Remote Mirror for z/OS 105 CKD

Note: For 9A2 and 9B2 models, you need to perform the code activation process for both
storage images, for example, using the serial number from Example 11-2 that would be
IBM.2107-7520781 and IBM.2107-7520782.

For more details on the DS CLI, refer to IBM System Storage DS8000: Command-Line
Interface User´s Guide, SC26-7916.

11.3 Licensed scope considerations


For the Point-in-Time Copy (PTC) function and the remote mirror and copy functions, you
have the ability to set the scope of these functions to be FB, CKD, or ALL. You need to decide
what scope to set, as shown in Figure 11-7 on page 205. In that example, Image One of the
2107-9A2 has 16 TB of RMC and the user has currently decided to set the scope to All. If
instead it was set to FB, then the user would not be able to use RMC with any CKD volumes
that are later configured. However, it is possible to return to the DSFA Web site at a later time
and change the scope from CKD or FB to All, or from All to either CKD or FB. In every case, a
new activation code will be generated, which can be downloaded and applied.

11.3.1 Why you get a choice


Let us imagine a simple scenario where a machine has 20 TB of capacity. Of this, 15 TB is
configured as FB and 5 TB is configured as CKD. If we only wish to use Point-in-Time copy for
the CKD volumes, then we can purchase just 5 TB of Point-in-Time copy and set the scope of
the Point-in-Time copy activation code to CKD. Then if we later purchase 5 TB more storage
capacity but only use it for FB, then we do not need to purchase any more PTC license.

When deciding which scope to set, there are several scenarios to consider. Use Table 11-5 to
guide you in your choice. This table applies to both Point-in-Time copy and remote mirror and
copy functions.

Table 11-5 Deciding which scope to use


Scenario 1: This function will only ever be used by open systems hosts.
  Suggested scope setting: Select FB.
Scenario 2: This function will only ever be used by System z hosts.
  Suggested scope setting: Select CKD.
Scenario 3: This function will be used by both open systems and System z hosts.
  Suggested scope setting: Select All.
Scenario 4: This function is currently only needed by open systems hosts, but we may use it for
System z at some point in the future.
  Suggested scope setting: Select FB and change the scope to All if and when the System z
  requirement occurs.
Scenario 5: This function is currently only needed by System z hosts, but we may use it for open
systems hosts at some point in the future.
  Suggested scope setting: Select CKD and change the scope to All if and when the open systems
  requirement occurs.
Scenario 6: This function has already been set to All.
  Suggested scope setting: Leave the scope set to All. Changing it to CKD or FB at this point
  requires a disruptive outage.

Any scenario that changes from FB or CKD to All will not require an outage. If you choose to
change from All to either CKD or FB, then you will need to take a disruptive outage. If you are
absolutely certain that your machine will only ever be used for one storage type (for example,
only CKD or only FB), then you could also quite safely just use the All scope.
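The decision logic of Table 11-5 can be condensed into a few lines. The following Python
sketch simply restates the guidance above; it is an illustration only, not a licensing tool.

# Illustrative only: the scope guidance of Table 11-5 expressed as code.
def suggested_scope(uses_open_systems_now, uses_system_z_now, may_add_other_later=False):
    if uses_open_systems_now and uses_system_z_now:
        return "All"
    scope = "FB" if uses_open_systems_now else "CKD"
    if may_add_other_later:
        return scope + " (change to All if and when the other requirement occurs)"
    return scope

print(suggested_scope(True, False))         # -> FB  (scenario 1)
print(suggested_scope(False, True, True))   # -> CKD, later All  (scenario 5)
print(suggested_scope(True, True))          # -> All (scenario 3)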

11.3.2 Using a feature for which you are not licensed


In Example 11-3 on page 212, we have a machine where the scope of the Point-in-Time copy
license is set to FB. This means we cannot use Point-in-Time copy to create CKD
FlashCopies. When we try, the command fails. We can, however, create CKD volumes,
because the OEL key scope is All —note that the example corresponds to a 2107-92x/9A2
model, which does not require the FICON/ESCON Attachment licensed function as would be
the case with a 93x/9B2 Turbo Model.


Example 11-3 Trying to use a feature for which we are not licensed
dscli> lskey IBM.2107-7520391
Date/Time: 11 November 2005 19:01:44 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520391
Activation Key Capacity (TB) Storage Type
=================================================
Flash Copy 5 FB The scope is currently set to FB
Operating Environment 5 All
Remote Mirror and Copy 5 All

dscli> lsckdvol
Date/Time: 11 November 2005 19:01:52 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520391
Name ID accstate datastate configstate deviceMTM voltype orgbvols extpool cap (cyl)
======================================================================================
- 0000 Online Normal Normal 3390-3 CKD Base - P2 3339
- 0001 Online Normal Normal 3390-3 CKD Base - P2 3339

dscli> mkflash 0000:0001 We are not able to create CKD FlashCopies


Date/Time: 11 November 2005 19:01:58 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520391
CMUN03035E mkflash: 0000:0001: Copy Services operation failure: feature not installed

11.3.3 Changing the scope to All


As a follow on to the previous example, now in Example 11-4 we have logged onto DSFA and
changed the scope for the PTC license to All. We then apply this new activation code. We are
now able to perform a CKD FlashCopy.

Example 11-4 Changing the scope from FB to All


dscli> lskey IBM.2107-7520391
Date/Time: 11 November 2005 19:01:44 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520391
Activation Key Capacity (TB) Storage Type
=================================================
Flash Copy 5 FB The scope is currently set to FB
Operating Environment 5 All
Remote Mirror and Copy 5 All

dscli> applykey -key 1234-5678-9FEF-C232-51A7-429C-1234-5678 IBM.2107-7520391


Date/Time: 11 November 2005 19:12:35 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520391
CMUC00199I applykey: License Machine Code successfully applied to storage image
IBM.2107-7520391.

dscli> lskey IBM.2107-7520391


Date/Time: 11 November 2005 19:12:48 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520391
Activation Key Capacity (TB) Storage Type
=================================================
Flash Copy 5 All The scope is now set to All
Operating Environment 5 All
Remote Mirror and Copy 5 All

dscli> lsckdvol
Date/Time: 11 November 2005 19:12:52 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520391
Name ID accstate datastate configstate deviceMTM voltype orgbvols extpool cap (cyl)
======================================================================================
- 0000 Online Normal Normal 3390-3 CKD Base - P2 3339
- 0001 Online Normal Normal 3390-3 CKD Base - P2 3339

dscli> mkflash 0000:0001 We are now able to create CKD FlashCopies


Date/Time: 11 November 2005 19:12:55 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520391
CMUC00137I mkflash: FlashCopy pair 0000:0001 successfully created.


11.3.4 Changing the scope from All to FB


In Example 11-5 we decide to increase storage capacity for the entire machine. However, we
do not wish to purchase any more PTC licenses, as PTC is only being used by open systems
hosts and this new capacity will only be used for CKD storage. We therefore decide to change
the scope to FB, so we log on to the DSFA Web site and create a new activation code. We
then apply it, but discover that because this is effectively a downward change (decreasing the
scope) it does not apply until we have taken a disruptive outage on the DS8000.

Example 11-5 Changing the scope for All to FB


dscli> lskey IBM.2107-7520391
Date/Time: 11 November 2005 20:20:59 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520391
Activation Key Capacity (TB) Storage Type
=================================================
Flash Copy 5 All The scope is currently All
Operating Environment 5 All
Remote Mirror and Copy 5 All

dscli> applykey -key ABCD-EFAB-EF9E-6B30-51A7-429C-1234-5678 IBM.2107-7520391


Date/Time: 11 November 2005 20:33:59 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520391
CMUC00199I applykey: License Machine Code successfully applied to storage image
IBM.2107-7520391.

dscli> lskey IBM.2107-7520391


Date/Time: 11 November 2005 20:34:23 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520391
Activation Key Capacity (TB) Storage Type
=================================================
Flash Copy 5 FB The scope is now set to FB
Operating Environment 5 All
Remote Mirror and Copy 5 All

dscli> lsckdvol
Date/Time: 11 November 2005 20:34:33 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520391
Name ID accstate datastate configstate deviceMTM voltype orgbvols extpool cap (cyl)
======================================================================================
- 0000 Online Normal Normal 3390-3 CKD Base - P2 3339
- 0001 Online Normal Normal 3390-3 CKD Base - P2 3339

dscli> mkflash 0000:0001 But we are still able to create CKD FlashCopies
Date/Time: 11 November 2005 20:34:42 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520391
CMUC00137I mkflash: FlashCopy pair 0000:0001 successfully created.
dscli>

In this scenario we have made a downward license feature key change. We must schedule an
outage of the storage image. We should in fact only make the downward license key change
immediately before taking this outage.

Restriction: Making a downward license change and then not immediately performing a
reboot of the storage image is not supported. Do not allow your machine to be in a position
where the applied key is different to the reported key.

11.3.5 Applying insufficient license feature key


In this example, we have a scenario where a 2107-92x had 5 TB of OEL, FlashCopy, and
RMC. We increased storage capacity and therefore increased the license key for OEL and
RMC. However, we forgot to increase the license key for FlashCopy. In Example 11-6 we can
see the FlashCopy license is only 5 TB. However, we are still able to create FlashCopies.


Example 11-6 Insufficient FlashCopy license


dscli> lskey IBM.2107-7520391
Date/Time: 13 November 2005 21:34:25 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520391
Activation Key Capacity (TB) Storage Type
=================================================
Flash Copy 5 All
Operating Environment 10 All
Remote Mirror and Copy 10 All

dscli> mkflash 1800:1801


Date/Time: 13 November 2005 21:38:36 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520391
CMUC00137I mkflash: FlashCopy pair 1800:1801 successfully created.

At this point this is still a valid configuration. This is because the configured ranks on the
machine total less than 5 TB of storage. In Example 11-7 we then try to create a new rank
that would bring the total rank capacity above 5 TB. This command fails.

Example 11-7 Creating a rank when we are exceeding a license key


dscli> mkrank -array A1 -stgtype CKD
Date/Time: 13 November 2005 21:43:52 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520391
CMUN02403E mkrank: Unable to create rank: licensed storage amount has been exceeded

To configure the additional ranks, we must first increase the capacity of any installed license
that no longer covers the total configured capacity. In this example, that is the FlashCopy license.

11.3.6 Calculating how much capacity is used for CKD or FB


To calculate how much disk space is being used for CKD or FB storage, we need to combine
the output of two commands. There are some simple rules:
򐂰 License key values are decimal numbers. So 5 TB of license is 5000 GB.
򐂰 License calculations use the disk size number shown by the lsarray command.
򐂰 License calculations include the capacity of all DDMs in each array site.
򐂰 Each array site is eight DDMs.

To make the calculation, we use the lsrank command to determine which array each rank is
built on, and whether those ranks are being used for FB or CKD storage. Then we use the
lsarray command to find out the disk size being used by each array. Finally, we multiply the
disk size (73, 146, or 300 GB) by eight (for the eight DDMs in each array site).

In Example 11-8 on page 215, lsrank tells us that rank R0 uses array A0 for CKD storage.
Then lsarray tells us that array A0 uses 300 GB DDMs. So we multiply 300 (the DDM size)
by 8, giving us 300 x 8 = 2400 GB. This means we are using 2400 GB for CKD storage.

Now rank R4 in Example 11-8 is based on array A6. Array A6 uses 146 GB DDMs, so we
multiply 146 by 8, giving us 146 x 8 = 1168 GB. This means we are using 1168 GB for FB
storage.


Example 11-8 Displaying array site and rank usage


dscli> lsrank -dev IBM.2107-75ABTV1
Date/Time: 3 January 2006 20:46:06 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID Group State datastate Array RAIDtype extpoolID stgtype
==========================================================
R0 0 Normal Normal A0 5 P0 ckd
R4 0 Normal Normal A6 5 P4 fb
dscli> lsarray -dev IBM.2107-75ABTV1
Date/Time: 3 January 2006 20:45:31 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
Array State Data RAIDtype arsite Rank DA Pair DDMcap (10^9B)
====================================================================
A0 Assigned Normal 5 (6+P+S) S1 R0 0 300.0
A1 Assigned Normal 5 (6+P+S) S2 - 0 300.0
A2 Unassigned Normal 5 (6+P+S) S3 - 0 300.0
A3 Unassigned Normal 5 (6+P+S) S4 - 0 300.0
A4 Assigned Normal 5 (7+P) S5 - 0 146.0
A5 Assigned Normal 5 (7+P) S6 - 0 146.0
A6 Assigned Normal 5 (7+P) S7 R4 0 146.0
A7 Assigned Normal 5 (7+P) S8 R5 0 146.0

So for CKD scope licenses, we are using 2400 GB. For FB scope licenses, we are using 1168
GB. For licenses with a scope of All, we are using 3568 GB. Using the limits shown in
Example 11-6 on page 214 we are within scope for all licenses.

If we combine Example 11-6, Example 11-7, and Example 11-8, we can also see why the
mkrank command in Example 11-7 failed. In Example 11-7 we tried to create a rank using
array A1. Array A1 uses 300 GB DDMs. This means that for FB scope and All scope licenses,
we would consume an additional 300 x 8 = 2400 GB of licensed capacity. From Example 11-6,
we had only 5 TB of FlashCopy license with a scope of All, so the total configured capacity
cannot exceed 5000 GB. Since we are already using 3568 GB, the attempt to use 2400 GB
more fails, because 3568 plus 2400 equals 5968 GB, which is clearly more than 5000 GB. If
we increase the size of the FlashCopy license to 10 TB, then we can have 10000 GB of total
configured capacity, and the rank creation will then succeed.
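If you follow that approach, the DS CLI commands involved are the same ones shown earlier
in this chapter. The following sketch is illustrative only: the activation key string is a
placeholder for the key that you retrieve for your machine from the DSFA Web site, and the
command output is abbreviated.

dscli> applykey -key XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX IBM.2107-7520391
dscli> lskey IBM.2107-7520391 The FlashCopy license should now report 10 TB
dscli> mkrank -array A1 -stgtype CKD This time the rank creation succeeds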



Part 3. Storage Configuration
In this part we discuss the configuration tasks required on your DS8000. The topics covered
include:
򐂰 Configuration with DS Storage Manager GUI
򐂰 Configuration with Command Line interface



Chapter 12. Configuration flow


This chapter gives a brief overview of the tasks required to configure the storage in a DS8000
system.


12.1 Configuration work sheets


During installation, IBM customizes the setup of your storage complex based on information
that you provide in a set of customization work sheets. Each time you install a new storage
unit or Management Console, you must complete the customization work sheets before the
IBM service representatives can perform the installation.

The customization work sheets are very important and need to be completed before the
installation. It is important that this information is entered into the machine so that preventive
maintenance and the high availability of the machine are maintained.

The customization work sheets can be found in the IBM System Storage DS8000 Host
Systems Attachment Guide, SC26-7917.

The customization work sheets allow you to specify the initial setup for the following:
򐂰 Company information. This information allows IBM service personnel to contact you as
quickly as possible to access your storage complex.
򐂰 Management Console network settings. Allows you to specify the IP address and LAN
settings for your Management Console (MC).
򐂰 Remote support (includes Call Home and remote service settings). Allows you to specify
whether you want outbound (Call Home) or inbound (remote services) remote support.
򐂰 Notifications (includes SNMP trap and e-mail notification settings). Allows you to specify
the types of notifications that you want and others may want to receive.
򐂰 Power control. To select and control the various power modes for the storage complex.
򐂰 Control Switch settings. Allow you to specify certain DS8000 settings that affect host
connectivity. You are asked to enter these choices on the control switch settings work
sheet so that the service representative can set them during the installation of the
DS8000.

Important: IBM service representatives cannot install a storage unit or Management
Console until you provide them with the completed customization work sheets.

12.2 Configuration flow


The following list shows the tasks that need to be done when configuring storage in the
DS8000. The order of the tasks does not have to be exactly as shown here, some of the
individual tasks can be done in a different order.
1. Install License keys. Activate the license keys for the storage unit.
2. Create arrays. Configure the installed disk drives as either RAID 5 or RAID 10 arrays.
3. Create ranks. Assign each array to either a FB rank or a CKD rank.
4. Create extent pools. Define extent pools, associate each with either Server 0 or Server 1,
and assign at least one rank to each extent pool.
5. Configure I/O ports. Define the type of the Fibre Channel/FICON ports. The port type can
be either Switched Fabric, Arbitrated Loop, or FICON.
6. Create host connections. Define open systems hosts and their FC HBA world wide port
names.
7. Create volume groups. Create volume groups where FB volumes will be assigned and
select the host attachments for the volume groups.


8. Create open systems volumes. Create open systems FB volumes and assign them to one
or more volume groups.
9. Create System z LCUs. Define their type and other attributes such as SSID.
10. Create System z volumes. Create System z CKD base volumes and PAV aliases for them.

The actual configuration can be done using either the DS Storage Manager GUI or DS
Command-Line Interface, or a mixture of both. A novice user may prefer to use the GUI, while
a more experienced user may use the CLI, particularly for some of the more repetitive tasks
such as creating large numbers of volumes.
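
For reference, each of these steps maps to roughly one DS CLI command. The following
sketch is an illustration only: the array site, rank, pool, port, and volume IDs, the WWPN, and
the names are assumptions for a hypothetical machine, and some parameters are omitted.
See Chapter 14 for the complete syntax and procedures.

dscli> applykey -key XXXX-XXXX-XXXX-XXXX IBM.2107-75ABCDE (1) install license keys
dscli> mkarray -raidtype 5 -arsite S1 (2) create an array
dscli> mkrank -array A0 -stgtype fb (3) create a rank
dscli> mkextpool -rankgrp 0 -stgtype fb FB_pool_0 (4) create an extent pool
dscli> chrank -extpool P0 R0 (4) assign the rank to the pool
dscli> setioport -topology ficon I0000 (5) configure an I/O port
dscli> mkhostconnect -wwname 10000000C9123456 -hosttype pSeries host_1 (6) create a host connection
dscli> mkvolgrp -type scsimask AIX_VG_01 (7) create a volume group
dscli> mkfbvol -extpool P0 -cap 10 -volgrp V0 1000-1003 (8) create open systems volumes
dscli> mklcu -qty 1 -id 00 -ss FF00 (9) create a System z LCU
dscli> mkckdvol -extpool P1 -cap 3339 0000-0003 (10) create System z volumes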

For a more detailed discussion of how to perform the specific tasks, refer to:
򐂰 Chapter 11, “Features and license keys” on page 197
򐂰 Chapter 13, “Configuration with DS Storage Manager GUI” on page 223
򐂰 Chapter 14, “Configuration with Command Line interface” on page 267

General guidelines when configuring storage


Remember the following general guidelines when configuring storage on the DS8000:
򐂰 Create as many extent pools as there are ranks.
򐂰 Address groups (16 LCUs/LSSs) are all for CKD or all for FB.
򐂰 Volumes of one LCU/LSS can be allocated on multiple extent pools.
򐂰 An extent pool cannot contain both RAID 5 and RAID 10 ranks.
򐂰 CKD
– ESCON channels can only access devices in address group 0 (LSSs 00-0F).
– 3380 and 3390 type volumes can be intermixed in an LCU and extent pool.
򐂰 FB
– Create a volume group for each server unless LUN sharing is required.
– Place all ports for a single server in one volume group.
– If LUN sharing is required there are two options:
• Use separate volumes for servers and place LUNs in multiple volume groups.
• Place servers (clusters) and volumes to be shared in a single volume group.
򐂰 I/O Ports
– Distribute host connections of each type (ESCON, FICON, FCP) evenly across the I/O
enclosure.
– Typically, I/O ports are configured with access any, and access to the ports is then
controlled through SAN zoning.


Chapter 13. Configuration with DS Storage Manager GUI
The DS Storage Manager provides a graphical user interface (GUI) to configure the DS8000.
In this chapter we explain how to configure the storage on the DS8000 with the DS Storage
Manager using either the Real-time manager or the Simulated manager.

This chapter includes the following sections:


򐂰 DS Storage Manager — Overview
򐂰 Logical configuration process
򐂰 Real-time manager
򐂰 Simulated manager
򐂰 Examples of configuring DS8000 storage

In this chapter we discuss the use of the DS Storage Manager for storage configuration of the
DS8000, not for Copy Services configuration. For Copy Services configuration in the DS8000
using the DS SM refer to the following books:
򐂰 IBM System Storage DS8000 Series: Copy Services in Open Environments, SG24-6788
򐂰 IBM System Storage DS8000 Series: Copy Services with System z servers, SG24-6787


13.1 DS Storage Manager — Overview


To configure the storage on the DS8000 you use the DS Storage Manager (DS SM) which
provides a graphical user interface (GUI). You have two options to work with the DS Storage
Manager:
򐂰 Real-time (Online) configuration: This provides real-time management support for logical
configuration and Copy Services features for a network-attached storage unit.
Configuration changes are applied to the storage unit as they are being made.
򐂰 Simulated (Offline) configuration: This application allows the user to create logical
configurations when disconnected from the network. After creating the configuration, you
can save it and then apply it to a network-attached storage unit at a later time.

To connect to the DS8000 through the browser, enter the URL of either the default Storage
Hardware Management Console (DS HMC) or the optional DS HMC you may have
purchased. You can connect through either DS HMC, but we recommend that once you start
updating and modifying from one DS HMC, you continue to make your changes through that
DS HMC for the duration of the change. The URL consists of the TCP/IP address, as shown
in Figure 13-1, or a fully qualified name.

Figure 13-1 Entering the URL using the TCP/IP address for the DS HMC

The default user ID is admin and the default password is also admin. The first time you log on
you will be prompted to change the password. Make sure that the new user ID and password
you select for your local GUI installation match the user ID and password of the HMC that you
will connect to.

Note: It is always good practice to create alternate or backup user IDs in case one user ID
gets locked out. At least one of these user IDs and passwords must be the same as the
user ID and password on the HMC that you will connect to. If you intend to connect to
multiple HMCs that have different user IDs and passwords, then you should create a user
ID and password for each of these HMCs.

DS Storage Manager Welcome panel


After you log on you will see the DS Storage Manager Welcome panel; see Figure 13-2 on
page 225.


Figure 13-2 DS Storage Manager Welcome panel

In the Welcome panel of the DS8000 Storage Manager you find these options:
򐂰 Show all tasks: opens the Task Manager panel, from where you can end a task or switch
to another task.
򐂰 Hide Task List: hides the Task list and expands your work area.
򐂰 Toggle Banner: removes the banner with the IBM System Storage Manager logo and
expands the working space.
򐂰 Information Center: launches the Information Center, the online help for the DS8000. The
Information Center provides contextual help for each panel, but is also independently
accessible from the Internet.
򐂰 Close Task: closes the active task.
򐂰 Exit: logs off from the DS Storage Manager.

On the left side of the panel you have the navigation panel where you have the following
selections:
򐂰 Real-time manager
򐂰 Simulated manager

DS Storage Manager panel options


Figure 13-3 on page 226 shows an example of the Ranks panel. On this panel we explain
some important options that are common to a lot of the other panels on the DS Storage
Manager.



Figure 13-3 Example of the Ranks panel

The DS Storage Manager displays the configuration of your DS8000 in tables. To make this
more convenient there are several options you can use.
򐂰 To download the information from the table click Download. This can be useful if you want
to document your configuration. The file is in csv format and can be opened with a
spreadsheet program. This function is also useful if the table on the DS Storage Manager
consists of several pages; the csv file includes all pages.
򐂰 The Print button opens a new window with the table in HTML format and starts the printer
dialog of your PC if you want to print the table.
򐂰 The Select Action pull-down menu provides you with specific actions you can perform (for
example, Create).
򐂰 There are also buttons to set and clear filters so that only specific items are displayed in
the table (for example, only FB ranks will be shown in the table). This can be useful if you
have tables with large numbers of items.

13.2 Logical configuration process


You can perform the initial logical configuration using either the Real-time manager or
Simulated manager.

When configuring the storage with the Real-time manager the configuration actions are
performed as soon as the commands are issued. When configuring the storage with the
Simulated manager, the configuration commands will only be applied after exporting the
resultant configuration to an unconfigured DS8000 storage unit.

Attention: Changes to an installed configuration may only be made using the Real-time
manager.

The first step is the creation of the storage complex along with the definition of the hardware
of the storage unit. The storage unit may have one or more storage images.


Many of the logical configuration tasks can be performed independently, for example, you can
create host definitions before you create any storage definitions. However, some tasks do
have dependencies on previous items. This might determine the order in which you perform
the logical configuration. For example, you cannot create any host attachment definitions until
you have configured the I/O ports in the Storage Unit panel.

Many of the task wizards will take you into other wizards to create interdependent items, for
example, if you create a storage unit, but have no storage complex defined, the Create
Storage Unit wizard will take you into the Create Storage Complex wizard. This may seem
confusing, but as long as you have a comprehensive document of your proposed logical
configuration, you can follow the process through the wizards.

A top-down approach to performing the logical configuration is likely to be the most straightforward.
1. Start with defining the storage complex or importing the storage complex and then import
the storage unit.
2. Configure the I/O ports.
3. Create host definitions.
4. Create arrays and assign them to ranks.
5. Create extent pools, and add ranks to the extent pools.
6. Create open systems volumes and volume groups.
7. Create CKD LSSs and volumes.

Long Running Task panel


Some logical configuration tasks have dependencies on the successful completion of other
tasks, for example, you cannot create ranks on arrays until the array creation is complete. To
assist you in this process there is a Long Running Task Summary panel that reports the
status of these long-running tasks.

Figure 13-4 shows the successful completion of the step to import a storage complex.

Figure 13-4 Long-running task monitor

13.3 Real-time manager


When configuring the storage with the Real-time manager the configuration actions are
performed as soon as the commands are issued.


Creating a storage complex


If you are using the Real-time manager, the hardware configuration is retrieved directly from
the storage unit HMC. You can retrieve the hardware configuration by adding either the
storage complex or the storage unit.
1. From the DS Storage Manager select Real-time manager.
2. Select Manage hardware.
3. Click Storage Complexes.
4. Select Add Storage Complex from the drop-down list.

The Add Storage Complex page is displayed. Specify the IP address of the storage complex.
Click Ok. The storage complex that you added is available for selection on the Storage
Complexes main page. Alternatively, you can define a storage complex, and then add the
storage unit from the Storage Unit panel.

Configuring with the Real-time manager


Refer to Section 13.5, “Examples of configuring DS8000 storage” on page 239 to see how the
various storage configuration activities are performed with the DS Storage Manager.

13.4 Simulated manager


There are four starting points for creating a logical configuration for a DS8000 using the
Simulated manager:
򐂰 Import a configuration from an already configured DS8000.
򐂰 Import a hardware configuration from an econfig (IBM order) file.
򐂰 Import a hardware configuration from an XML file.
򐂰 Manually enter the configuration details.

The configuration commands will only be applied after exporting the resultant configuration to
an unconfigured DS8000 storage unit.

13.4.1 Import the configuration from the Storage HMC


This is the preferred method if you want to create a similar configuration for exporting
(applying) to an unconfigured DS8000. This would be especially valuable if installing multiple
DS8000s and all DS8000s are to have the same hardware and logical configuration.
Configure the first DS8000 by the chosen method, for example, using the DS CLI, the
Real-time DS Storage Manager, or the Simulated DS Storage Manager. Once the
configuration is successfully completed and verified, import that configuration into the
Simulated manager and modify if necessary. This new configuration can then be exported
and applied to one or more additional DS8000s.

The first step is to log on to the DS Storage Manager (SM) GUI:


1. Select Simulated manager.
2. Select Manage configuration files.
The panel changes to Manage configuration files: Simulated, as shown in Figure 13-5 on
page 229.


Figure 13-5 Manage configuration files: Simulated panel

In the Select Action pull-down we can select Default or Create New. We select Create
New. The next window contains the CMUG00098W warning message; see Figure 13-6.

Figure 13-6 Multiple open configuration files warning message

The CMUG00098W warning message is indicating that two configuration files are open:
– The Default file that we did not select (see the entry in State column of Figure 13-5)
– The new configuration file that we selected to be created
Do not choose Continue. Choose OK to close the Default file and to create the new
configuration file. The successful creation of the new configuration file is shown in
Figure 13-7.

Figure 13-7 New configuration file created

We click Close and View Summary.


The next panel will be the Long running task summary: Simulated panel, which shows the
task as Finished. We select Manage configuration files and are returned to the starting
Manage configuration files: Simulated panel; see Figure 13-8. In this panel we see that the
Default configuration file is Closed and the new file is Open.

Figure 13-8 New configuration file open and Default closed

Now, with the Default configuration file Closed and the new file Open we proceed.
3. Select Manage Hardware.
4. Select Storage Unit.

The steps above opened a new configuration file. The steps that follow import a configuration
from an existing DS8300, and the configuration information will be placed into this newly
opened file.

When the Storage units: Simulated panel is presented, select Select Action and the screen
will appear as shown in Figure 13-9.

Figure 13-9 Storage unit creation choices panel with pull-down

We are now ready to start the import of an existing configuration.


5. Select Import.
The Identify storage complex panel is displayed; see Figure 13-10 on page 231. In our
situation, there is no previously defined storage complex.


Figure 13-10 Import Storage Unit panel

6. In the Identify storage complex panel we enter the IP address and click Next.

Figure 13-11 Import Storage Unit panel

7. The next panel is the Select storage unit to import panel; see Figure 13-11. Here we select
the storage unit of interest and request that all available data be imported.
Click Next.
8. The result will be the general information panel, as shown in Figure 13-12 on page 232. In
this figure we have already made an entry in the Description field; initially this field is blank
and you can enter a brief description. The Nickname field can also be edited.


Figure 13-12 General storage unit info panel

Verify information, enter a description, and click Next.

Figure 13-13 Import Storage Unit verification panel

9. A Verification panel is then displayed; see Figure 13-13. Here you will need to verify the
information. Note that the configuration that is going to be imported is from a DS8300
(model 922) with advanced functions installed, the total capacity is just greater than 21 TB,
and there are 16 I/O adapters.
Verify the information and click Finish.
The next panel will be a long running task panel similar to that shown in Figure 13-14 on
page 233. This figure was captured while another configuration was being imported; thus,
the resource and the date are different from those of the specific import in our ongoing
example. The figure shows that the import activity is 56 percent complete. We simply wait
until the state changes to finished.


Figure 13-14 Import long running task panel

After some period of time the import activity will complete and a panel similar to that shown in
Figure 13-15 will be presented, indicating that the import has completed. Note that the
example import activity of the 28 TB machine took 2 minutes to complete.

Figure 13-15 Import Storage Unit finished panel

10. When the import is finished, click Close and View summary.


The Long running task summary: Simulated panel will then be displayed; see Figure 13-16
on page 234. In our example this panel remained in an active run mode, that is, the
top-most box continued to indicate that the task was still running.


Figure 13-16 Long running task summary: Simulated panel

Since we saw that the state was finished, we move on to the next activity.


11. Before starting the import of an existing configuration we selected Simulated manager,
Manage hardware, and Storage units (see Figure 13-9 on page 230). Now that same
panel has been changed (see Figure 13-17) to show just the imported storage unit.

Figure 13-17 Storage unit in Storage units: Simulated panel

We have now successfully completed the import of a configuration from an existing DS8300.

Examples from imported configuration


In this section we examine a small number of panels that show some details of the imported
configuration. We must ensure that the correct configuration file is open by selecting
Simulated manager and then Manage configuration files. We should see a panel similar to
the one shown in Figure 13-8 on page 230. We must be sure that the correct configuration file
is shown as Open in the State column. If necessary, open the correct configuration file.

Next select Manage Hardware and then Storage image. The panel presented will appear
similar to that shown in Figure 13-18 on page 235. Select the storage image to examine by
checking the appropriate box. Note that the only available storage image is serial number
00011. This implies the storage unit serial number is 00010. These are artificial numbers
supplied by the Simulated manager. This is the image that we want to examine.


Figure 13-18 Manage hardware, Storage Images

Now select Configure storage and then Arrays. The panel shown in Figure 13-19 will display.

Figure 13-19 Configure storage, arrays

Here you select the storage image. When the storage image is selected the panel will
automatically change to that shown in Figure 13-20. The panel displays 16 arrays.

Figure 13-20 Arrays displayed


Ranks and extent pools can be viewed by following a procedure very similar to the one just
outlined for arrays. It is also possible to see the configured ports, as shown in Figure 13-21:
select Manage hardware, then Storage images, and then Configure I/O ports in the Select
Action pull-down. For this configuration 64 ports are displayed with the appropriate details.
Also shown in Figure 13-21, on the left of the figure, are selections for:
򐂰 Open systems Volumes
򐂰 Open systems Volume groups
򐂰 System z LCUs
򐂰 System z Volumes

Figure 13-21 Additional information available in the configuration file

The above discussion and figures provide significant details for importing an existing
configuration and examining the details of that imported configuration.

13.4.2 Import the hardware configuration from an eConfig file


If you have the IBM eConfig (order) file you can import that configuration file into the
Simulated manager. Start with the eConfig file (.cfr file) in a location available to the Simulated
manager.

The first step is to log on to the DS Storage Manager (SM) GUI:


1. Select Simulated manager.
2. Select Manage configuration files.
The panel changes to Manage configuration files: Simulated, as shown in Figure 13-5 on
page 229. We could select Default or select Create New from the Select Action pull-down,
which is the option that we will take. The next window will contain the CMUG00098W
warning message indicating that two configuration files are open (see Figure 13-6 on
page 229).
Do not choose Continue. Choose OK to close the current open file and to create the new
configuration file.
3. Select Manage Hardware.
4. Select Storage Unit.
We now import a new configuration from an existing DS8300 eConfig (.cfr) file, and the
configuration information will be placed into this newly opened file. When the Storage unit:

Simulated panel is presented, select Select Action and the screen shown in Figure 13-22
will display.

Figure 13-22 Select import from econfig file

5. Select Import from econfig file.


The Import Storage Unit(s) From eConfig File: Simulated panel will be displayed; see
Figure 13-23.

Figure 13-23 econfig path selection

6. Locate the econfig file and click OK.

13.4.3 Manually enter the configuration details


You can manually enter the details of the hardware configuration, for example, if you need to
create a logical configuration using the Simulated manager for education purposes. In this
case you need to first create the storage complex, and then create the storage unit.
1. From the DS Storage Manager select Simulated Manager.
2. Select Manage hardware.
3. Click Storage Complexes.
4. Select Create from the drop-down list.

The Define properties panel will be displayed. Enter a nickname for the storage complex and
an optional description, as shown in Figure 13-24 on page 238.


Figure 13-24 Define storage complex properties

To input the hardware details you need to create a storage unit:


1. From the DS Storage Manager select Simulated Manager.
2. Select Manage hardware.
3. Click Storage Unit.
4. Select Create from the drop-down list.

Specify the machine model type for the storage unit. If you specify an LPAR machine you will
have the option to define each LPAR separately. Enter a nickname for the storage unit and an
optional description. You can also specify which storage complex this storage unit is
associated with:
1. Click Next.
2. Enter the number of packs of each DDM type. Click Add. Repeat for each DDM type.
3. Specify the licensed functions that you have purchased. Remember that you need to
process the authorized functions, including the operating environment license, before you
can configure the storage.
4. Click Next.
5. Specify the number of I/O adapters (cards) of each type. Each I/O adapter is four ports for
the FCP/FICON cards, and two ports for each ESCON card. Click Add.
6. Click Next.
7. Review the Verification panel. You can click Back to make any changes, or Finish to
accept the configuration.

Manually entering the hardware configuration is not recommended for a production
environment because of the potential to make a mistake in creating the hardware
configuration that would result in an invalid logical configuration. If the hardware configuration
that you specify does not match the installed configuration, you will not be able to apply the
logical configuration to the HMC.

13.4.4 Import the hardware configuration from an xml file


You can optionally import a configuration from an xml file. This xml file may contain a
complete logical configuration.
1. From the DS Storage Manager select Simulated Manager.
2. Select Manage Configuration Files.
3. Select Import from the drop-down list.
4. Specify the location of the xml file and click OK.

Once you have imported the file you can review the configuration:
1. From the DS Storage Manager select Simulated Manager.
2. Select Storage Units.
3. Select Properties from the drop-down list.

You can use the configuration files function to create cloned logical configurations for identical
storage units. You could also use the configuration files if you have an environment where you
need to completely replace a configuration.

13.5 Examples of configuring DS8000 storage


In the following sections we show for each configuration task an example of how you can
configure the DS8000 with the DS Storage Manager.

For each configuration task (for example, creating an array) the process will guide you
through different panels where you have to enter the necessary information. During this you
have the possibility to go back to do modifications or cancel the process. At the end of each
process you will get a verification panel where you can verify the information you entered
before you submit the task.

13.5.1 Configure I/O ports


Before you can assign host attachments to I/O ports you need to define the format of the I/O
ports. You do not need to configure ESCON ports because these are single function cards.
There are four FCP/FICON ports on each card, and these are each independently
configurable.
1. Select Manage hardware.
2. Click Storage Images.
The Storage images panel is displayed; see Figure 13-25 on page 240.


Figure 13-25 Configure I/O ports

3. Select Configure IO Ports from the drop-down menu, as shown in Figure 13-25
4. The Configure IO Ports panel is displayed; see Figure 13-26.
Here you select the ports that you want to format as FcSf, FC-AL, or FICON, and then
select the port format from the drop-down menu, as shown in Figure 13-26. The ports are
identified by their location: Rack - I/O enclosure - Card - Port.

Figure 13-26 Select port format

You will get a message warning you that if hosts are currently accessing these ports and
you reformat them, it is possible that the hosts will lose access to the ports. This is
because this step in the configuration is when you are selecting whether a port is to be
FICON or FCP.
5. You can repeat this step to format all ports to their required function.
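
The same task can also be done with the DS CLI using the setioport command. The port IDs
and topology values below are illustrative assumptions; list the installed ports first with
lsioport, and check the DS CLI reference for the exact topology keywords supported by your
code level.

dscli> lsioport IBM.2107-7520391 List the installed I/O ports and their current topology
dscli> setioport -topology ficon I0010 Format port I0010 for FICON attachment
dscli> setioport -topology scsi-fcp I0011 Format port I0011 for FCP switched fabric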


13.5.2 Configure logical host systems


In this section we show you how to configure host systems. This applies only for open system
hosts. A default FICON host definition is automatically created once you define an I/O port to
be a FICON port. A default ESCON host definition is automatically created if you specify
ESCON ports on the storage unit.

To find out which hosts are logged into the system you use the Query Host WWPNs panel;
see Figure 13-27.

Figure 13-27 Query Host WWPNs panel

This panel can be used to debug host access and switch configuration issues.

To create a new host system do the following:


1. Select Manage hardware.
2. Click Host Systems. The Host systems panel will display; see Figure 13-28.

Figure 13-28 Host systems panel


Figure 13-28 shows you existing host systems definitions in the Host systems panel of the
Real-time manager.

From this panel you can select one or more hosts and then select from the Select Action
pull-down menu a specific task (for example, Modify). To create a new host select Create
from the Select Action pull-down menu. The following process will guide you through the host
configuration; see Figure 13-29.

Figure 13-29 General host information panel

In the General host information panel, you have to enter the following information:
򐂰 Type: The host type; in our example we create a pSeries host. The pull-down menu gives
you a list of types you can select.
򐂰 Nickname: Name of the host.
򐂰 Description: Optionally, you can give a description of the host, for example, the TCP/IP
address of the host or the location of the host.

When you have entered the needed information, click Next to define the host ports.

Figure 13-30 Define host ports panel


The Define host attachments panel will be displayed; see Figure 13-30. In this panel you have
to enter the following information:
򐂰 Quantity of WWPN: The number of Fibre Channel adapters in your host through which you
want to access the DS8000.
򐂰 Quantity of volume groups: The number of VGs you want to assign to the host.
򐂰 Attachment Port Type: You have to specify whether the host is attached to the DS8000
over an FC switched fabric (P-P) or a direct FC arbitrated loop.
򐂰 Group ports to share a common set of volumes: If you check this box it means that the
adapters can access the same volumes from the DS8000.
򐂰 If you are using the Simulated manager, you can optionally specify the WWPN.

After you enter the information click Add.

You can repeat this for multiple groups of adapters, for example, if you want to create two
adapter definitions for a host. The Defined host ports list will be updated with the information.
In this example we defined two FC adapters in a FC switch fabric configuration and the
adapters are grouped.

Each host attachment has an identifier of the form host_n|pSeries|FcSf|(2)_0, where:


򐂰 host_n is the host nickname.
򐂰 pSeries is the host type.
򐂰 FcSf is the port format, in this case FCP switched.
򐂰 (2) indicates that there are two ports grouped in this host attachment.
򐂰 _0 indicates that this is the first definition with these characteristics.

When complete, click Next.

The Define host WWPN panel will be displayed; see Figure 13-31.

Figure 13-31 Define host WWPN panel

In the Define host WWPN panel, you have for each FC adapter a field where you have to
enter the World Wide Port Name (WWPN) of the adapter. On the previous panel we defined
that our host has two FC adapters; therefore, we have on the Define host WWPN panel two
fields to enter the WWPNs from the FC adapters.


If you are using the Simulated manager you can optionally specify the WWPN for each
attachment. Alternatively, you can leave this blank (or input a dummy value) and modify the
definition to retrieve the WWPN from a pull-down that will be populated once the host is
attached.

Click Next. The Specify Connection panel will be displayed; see Figure 13-32.

Figure 13-32 Specify Connection panel

In the Select storage images box, all storage images from the storage unit are listed. Select
the storage images you want to access from the server and click Apply Assignment.

When complete, click Next. This takes you to the Verification panel; see Figure 13-33.

Figure 13-33 Verification

In the Verification panel you have to check the information you entered during the process. If
you want to make modifications select Back or you can Cancel the process. After you verify
the information, click Finish to create the host system.

If you need to make changes to a host system definition you can go to the Host system panel,
select the host system, and then select Modify from the drop-down menu.
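
The equivalent DS CLI command is mkhostconnect. The WWPNs, volume group ID, and
names in the following sketch are illustrative assumptions only:

dscli> mkhostconnect -wwname 10000000C9123456 -hosttype pSeries -volgrp V11 p550_fcs0
dscli> mkhostconnect -wwname 10000000C9123457 -hosttype pSeries -volgrp V11 p550_fcs1
dscli> lshostconnect Verify the new host attachments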


13.5.3 Create arrays


In this section we explain how you can create an array with the DS Storage Manager. On the
DS8000 an array is built from one array site.
1. Select Configure storage.
2. Click Arrays.
The Arrays panel is displayed; see Figure 13-34.

Figure 13-34 Arrays panel

You can see in the Arrays panel in Figure 13-34 that more than ten arrays have already been
created. To create a new array, select Create from the drop-down menu.

You will be guided through the process to build a new array; see Figure 13-35.

Figure 13-35 Definition method panel

You have the option between:


򐂰 Create arrays automatically: The system will choose an array site to build the array. You
specify the quantity of arrays and RAID type. You can optionally specify the format of the
rank and add the array to a rank.
򐂰 Create custom arrays: You use the panel that follows to choose the RAID type and the
array site on which to create the array (see Figure 13-36).

In this example we proceed with Create custom arrays.

Figure 13-36 Array configuration (Custom) panel

The Array configuration (Custom) panel is displayed; see Figure 13-36.

In this panel you have to select the RAID type (it can be RAID 5 or RAID 10), and you have to
choose the array site from which the array will be built. In this example we create a RAID 5
array on array site S3. If you create multiple arrays at one time, the arrays will be created over
array sites distributed across the DAs. The array number is created by default.

In the Add arrays to ranks panel you have the possibility of adding the array to a rank.

Figure 13-37 Add arrays to ranks panel


If you want the array you are creating to be added to a rank then you have to select Add
these arrays to ranks, and also specify whether it should be a FB (fixed block) rank for open
system hosts or a CKD (count key data) rank for System z hosts; see Figure 13-37.

If you do not select Add these arrays to ranks you have to later assign the array to a rank.

Click Next. This will take you to the Verification panel; see Figure 13-38.

Figure 13-38 Verification panel

In this panel you can verify your specifications for the array. If everything is correct click
Finish to create the array. You can check the progress of the array creation by using the Long
Running Task monitor.

To delete arrays use the Delete option from the drop-down menu on the main Arrays panel. If
there are volumes allocated on the arrays to be deleted a warning is issued. However, you
can still proceed with deletion.
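
If you prefer the DS CLI, array creation is done with the mkarray command. The array site ID
below is an illustrative assumption; use lsarraysite to identify the unassigned array sites on
your machine.

dscli> lsarraysite List the array sites and their state
dscli> mkarray -raidtype 5 -arsite S3 Create a RAID 5 array on array site S3
dscli> lsarray Confirm that the new array was created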

13.5.4 Create ranks


In this section we show how you can create a rank with the DS Storage Manager if you did not
do this as part of the create array step. You must wait until the array creation is complete
before you can create ranks.
1. Select Configure storage.
2. Click Ranks.
This will take you to the Ranks panel; see Figure 13-39 on page 248.


Figure 13-39 Ranks panel

You can see that more than ten ranks have already been created. To create another rank,
choose Create from the Select Action pull-down menu and click Go. This will start the process
to create a rank.

First you have to select the array, or arrays, from which you want to build the ranks. In our
example only one array is available that we can choose; see Figure 13-40.

Figure 13-40 Select array for rank panel

When complete, click Next.

The Define rank properties panel will display; see Figure 13-41 on page 249.


Figure 13-41 Define rank properties panel

In the Define rank properties panel you decide if you want to create a FB (fixed block) rank for
open system servers or a CKD (Count Key Data) rank for System z server. In our example we
select Storage type FB.

Click Next to proceed.

Figure 13-42 Select extent pool panel


Optionally, you can select the extent pool to which you want to assign the rank (see
Figure 13-42), and then click Next to get the Verification panel. If you have not yet created
extent pools you can leave this step until later.

Figure 13-43 Verification panel

In the Verification panel (Figure 13-43) you can verify your specifications for the rank. If
everything is correct click Finish to create the rank. The rank number is generated by default.
You can monitor the progress of the Create rank task using the Long Running Task Monitor.

You can delete a rank using the Delete option from the drop-down menu in the Rank main
panel. If there are any volumes allocated on the rank this will result in a warning message, but
volumes will be deleted if the rank is deleted. You can also use the Modify option to change
the format of a rank, or remove a rank from an extent pool. You can only modify the properties
of a rank if there are no extents allocated.
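
The equivalent DS CLI command is mkrank, as already shown in Example 11-7. The array ID
in the following sketch is an illustrative assumption:

dscli> mkrank -array A3 -stgtype fb Create an FB rank on array A3
dscli> lsrank Wait until the rank state is Normal before using it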

13.5.5 Create extent pools


To create a new extent pool you can follow this procedure:
1. Select Configure storage.
2. Click Extent pools. The Extent pools panel is displayed; see Figure 13-44 on page 251.


Figure 13-44 Extent pools panel

To create a new extent pool select Create from the Select Action pull-down list. The Definition
method panel is displayed; see Figure 13-45.

Figure 13-45 Definition method panel

In the Definition method panel you select the definition method. You can choose between:
򐂰 Create extent pool automatically based on storage requirements: You specify the amount
of storage, the RAID type, and the format (FB or CKD) you need and the system will
automatically use unassigned ranks of the type you specify to fulfill your requirements.
򐂰 Create custom extent pool: You specify the ranks that you assign to the extent pool.

In this example we choose the custom option. Click Next.


Figure 13-46 Define properties panel

The Define properties panel is displayed; see Figure 13-46. In this panel we enter:
򐂰 Nickname: Name of the extent pool.
򐂰 Storage type: Select FB for open systems hosts or CKD for System z servers.
򐂰 RAID type: You can select RAID 5 or RAID 10.
򐂰 Server: ranks acquire server affinity from being added into an extent pool. It is
recommended that you balance ranks between server 0 and server 1.

When completed, click Next.

Figure 13-47 Create custom extent pool

The Select ranks panel is displayed; see Figure 13-47. Here you select the ranks that you
want to add to the extent pool.

In our example there are only two RAID 5, FB ranks defined so you can only select from these
ranks. We recommend that you create extent pools with one rank for maximum management
of your performance environment; however, you are required to create a minimum of one
extent pool for each disk format (FB or CKD). If you want to manage your environment with
the least number of extent pools, you should have one extent pool for each server for FB and
CKD data.

In the next panel (Figure 13-48) you can specify Reserved storage, which is the amount of
storage that you choose to reserve for later use. Capacity that is reserved cannot be used to
create volumes; volumes may only be allocated in reserved space once the space is explicitly
released using the Modify option. You might use reserved space to keep capacity in the
extent pool free for planned application growth. Usually you will enter 0, which means that you
can use all of the storage.

Figure 13-48 Reserve storage panel

Click Next. The Verification panel is displayed; see Figure 13-49

Figure 13-49 Verification panel

In this panel you can verify your specifications for the extent pool. If everything is correct click
Finish to create the extent pool.

If you create the extent pools before you create the ranks, you can add the ranks to the
created extent pools as part of rank creation. Alternatively, you can go to the Rank main panel
and select Add rank to extent pool from the drop-down menu.
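
With the DS CLI, the equivalent commands are mkextpool and chrank. The pool names, rank
groups, and IDs below are illustrative assumptions:

dscli> mkextpool -rankgrp 0 -stgtype fb FB_pool_srv0 Extent pool with affinity to server 0
dscli> mkextpool -rankgrp 1 -stgtype fb FB_pool_srv1 Extent pool with affinity to server 1
dscli> chrank -extpool P0 R1 Assign rank R1 to extent pool P0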


13.5.6 Create FB volumes


This section explains the creation of fixed block volumes.
1. Select Configure storage.
2. Select Open systems.
3. Click Volumes - Open system. Figure 13-50 shows the Open systems panel.

Figure 13-50 Volumes - Open systems

Choose Create from the Select Action pull-down menu. The Select extent pool panel will
display; see Figure 13-51.

Figure 13-51 Select extent pool panel

In the Select extent pool panel choose the previously created extent pool from which you want
to create the volume (you can choose only one extent pool). If you have not yet created extent
pools you can do so by clicking the button to enter the Create new extent pool wizard.

When finished with the previous task, you then have to define the volume characteristics.


Figure 13-52 Define volume characteristics panel

In the Define volume characteristics panel (see Figure 13-52) the fields are:
򐂰 Volume type: You can choose Standard Open System volumes in either binary (DS sizes)
or decimal (ESS sizes) for compatibility with ESS. In general, the DS sizes should be fine.
You can select either protected or unprotected iSeries volumes. Select unprotected iSeries
volumes if you are going to use iSeries mirroring.
򐂰 Select volume groups: You can select one or more volume groups to which you want to
assign the volumes. If you choose no volume group you can assign the volumes later to a
volume group by using the Modify option. Alternatively, you can click Create new group to
enter the Create volume group wizard.

When complete, click Next to define the volume properties; see Figure 13-53.

Figure 13-53 Define volume properties


In the Define volume properties panel you have to enter the following information:
򐂰 Quantity: The number of volumes you want to create. The calculator will tell you how many
volumes of your chosen size will fit in the available space in the extent pool.
򐂰 Size: Size of the volumes in GB (binary or decimal). The minimum allocation is 0.1 GB;
however, this will consume an entire 1 GB extent. The maximum LUN size is 2 TB. iSeries
volumes can only be created in the sizes supported by the operating system; these are
selectable from a drop-down menu.
򐂰 Select LSSs for volumes: If you select this check box you can specify the LSS for the
volumes. Only the LSSs that are available will be displayed. In this example we have only
even LSSs displayed because of selecting an extent pool associated with server 0.

You can assign the volumes to a specific LSS. This can be important if you want to use Copy
Services. You can have a maximum of 256 volumes in each LSS.

In this example we create two volumes with 5 GB each assigned to LSS 10; see Figure 13-53.

Figure 13-54 Create volume nicknames panel

In this example the two volumes will get the names MetroM0001 and MetroM0002; see
Figure 13-54. You can see this also in the verification panel; see Figure 13-55.

Figure 13-55 Verification panel


In the Verification panel you check the information you entered during the process. If you want
to make modifications select Back or you can Cancel the process. After you verify the
information click Finish to create the volume.
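
The DS CLI equivalent is the mkfbvol command. The extent pool, volume group, volume IDs,
and naming in the following sketch are illustrative assumptions that mirror this example (two
5 GB volumes in LSS 10):

dscli> mkfbvol -extpool P4 -cap 5 -name MetroM_#h -volgrp V0 1000-1001
dscli> lsfbvol 1000-1001 Verify the two new volumes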

13.5.7 Create volume groups


To create a volume group you can follow this procedure:
1. Select Configure storage.
2. Select Open systems.
3. Click Volume groups. The Volume groups panel will display; see Figure 13-56.

Figure 13-56 Volume groups panel

4. To create a new volume group select Create from the Action pull-down menu; see
Figure 13-56.

Figure 13-57 Define volume group properties panel

5. In the Define volume group properties panel (see Figure 13-57) enter the nickname for the
volume group and select the host type from which you want to access the volume group. If
you select one host (for example, System p), all other host types with the same addressing
method will be automatically selected. This does not affect the functionality of the volume
group; it will support the host type selected.
6. Select the host attachment for your volume group. You can select only one host
attachment. See Figure 13-58.

Figure 13-58 Select host attachments panel

In the Select volumes for group panel you have to select the volumes that should be included
in the volume group; see Figure 13-59.

Figure 13-59 Select volumes for group panel


If you have to select a large number of volumes, you can specify a filter so that only these
volumes are displayed in the list, and then you can select all.

Click Next to get the Verification panel; see Figure 13-60.

Figure 13-60 Verification panel

In the Verification panel you have to check the information you entered during the process. If
you want to make modifications select Back, or you can Cancel the process. After you verify
the information click Finish to create the volume group.
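
The DS CLI equivalent is the mkvolgrp command; showvolgrp and chhostconnect are useful
follow-up commands. The IDs and names in the following sketch are illustrative assumptions:

dscli> mkvolgrp -type scsimask -volume 1000-1001 AIX_VG_01
dscli> showvolgrp V3 List the volumes in the new volume group
dscli> chhostconnect -volgrp V3 1 Assign host connection 1 to the new volume group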

13.5.8 Create LCUs


In this section we show how you can create LCUs. This is only necessary for System z.
1. Select Configure storage.
2. Select zSeries.
3. Select LCUs. The LCUs panel will be displayed; see Figure 13-61.

Figure 13-61 LCUs panel

Four LCUs have already been created. To create one or more new LCUs, choose Create from
the Select Action pull-down menu; see Figure 13-61.


In the Select from available LCUs panel you can select the LCU IDs you want to create; see
Figure 13-62.

Figure 13-62 Select from available LCUs panel

In this example we create LCUs 05 and 06. When finished click Next.

Figure 13-63 Define LCU properties panel

In the Define LCU properties panel (see Figure 13-63) you can define the LCU properties.
򐂰 SSID: Enter the Subsystem ID (SSID) for the LCU. If you create multiple LCUs in one step
the SSIDs will be incremented.
򐂰 LCU type: Select the LCU type you want to create (3990 Mod.3, 3990 Mod. 3 for TPF, or
3990 Mod. 6).

The following parameters are important if you use Copy Services:

򐂰 Concurrent copy session time-out (sec.): The time in seconds that any logical device on
this LCU in a concurrent copy session stays in a long busy state before suspending a
concurrent copy session.
򐂰 z/OS Global Mirror Session time-out: The time in seconds that any logical device in an
XRC session stays long busy before suspending an XRC session. The long busy occurs
because the data mover has not off-loaded data when the logical device (or XRC session)
is no longer able to accept additional data. You can change the default time (in seconds)
by highlighting it and entering a new time.

Click Next to get to the Verification panel; see Figure 13-64.

Figure 13-64 Verification panel

On the Verification panel, check the information you entered during the process. If you want
to make modifications, select Back, or select Cancel to end the process. After you verify the
information, click Finish to create the LCU.
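
The same LCUs can also be created with the DS CLI mklcu command described in Chapter 14. A
minimal sketch that mirrors this GUI example, assuming an SSID starting value of FF05 (the
SSID values are illustrative only; use the values defined for your installation):

dscli> mklcu -qty 2 -id 05 -ss FF05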

13.5.9 Creating CKD volumes


In this section we describe how you can create and configure System z volumes. To create
System z volumes you can follow this procedure:
1. Select Configure storage.
2. Select zSeries.
3. Click Volumes - zSeries.

The Volumes - zSeries panel displays; see Figure 13-65 on page 262.


Figure 13-65 Volumes - zSeries panel

To create new System z volumes select Create from the Select Action pull-down menu.

There is another option in the drop-down menu, Define Address Allocation Policy, which lets
you set defaults for each address group: the start address and whether addresses are
incremented or decremented. You can override these defaults when you create the volumes.

Figure 13-66 Select extent pool panel

In the Select extent pool panel (see Figure 13-66) select the extent pool in which you want to
create the volumes. You can select only one extent pool in each step, although you can create
volumes across multiple LSSs in one step. To create volumes in multiple extent pools you
must repeat this step.


You then have to define the base volume characteristics; see Figure 13-67.

Figure 13-67 Define base volume characteristics

In the Define base volume characteristics panel, the fields are:


򐂰 Volume type: Select the volume model you want to create, for example, 3390 Mod. 9.
򐂰 LCU: Select the LCU in which you want to create the volumes. You can allow volumes to
be created across multiple LCUs by selecting more than one LCU. You can then choose
whether to spread the volumes equally across the selected LCUs, or to fill the first LCU
before spilling over to the second, and so on.

Click Next to define the base volume properties; see Figure 13-68.

Figure 13-68 Define base volume properties panel

In the Define base volume properties panel, the fields are:


򐂰 Quantity: The number of base volumes you want to create. The calculator shows how
many volumes of your selected size you can create in the extent pool that you selected.
It also tells you how many addresses are available in the LCUs that you have selected.
򐂰 Base start address: The address of the first volume you want to create. This defaults to
the value you specified in the address allocation policy.
򐂰 Ascending/Descending: Select the addressing order for the base volumes. This defaults to
the value you specified in the address allocation policy.

Click Next to create the nicknames for the volumes.

Figure 13-69 Create volume nicknames panel

In the Create volume nicknames panel we create the nicknames for the volumes; see Figure 13-69.

In this example we create the base volumes VOL0001, VOL0002, ... VOL0064. You can specify
a prefix and a suffix up to a total of 16 characters.

You have the option to use a hexadecimal addressing sequence typical of System z
addressing if you check the box Use hexadecimal sequence.
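
The equivalent DS CLI command is mkckdvol, described in Chapter 14. A minimal sketch, assuming
a CKD extent pool P2 in rank group 0 and 64 base addresses 0600-063F in LCU 06 (the extent pool
ID and addresses are illustrative only; the #h wildcard appends the hexadecimal volume address
to the nickname):

dscli> mkckdvol -extpool P2 -cap 3339 -name VOL_#h 0600-063F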

Define alias volumes


If you are going to define alias volumes for PAV use, you follow these steps:
1. After completing the preceding task for base volume creation, click Next to proceed with
the definition of the alias volumes.


Figure 13-70 Define alias assignment panel

2. In the Define alias assignment panel (see Figure 13-70) select the volumes for which you
want to create alias volumes (you can use the Select all check box to select them all), and
make the selections for the following:
– Starting address: In this field you have to enter the first alias address.
– Ascending/Descending: Select the addressing order for the aliases.
– Aliases/Per Volumes: In these two fields you have to enter how many aliases you want
to create for each selected base volume. This creates a ratio between base and alias
addresses that will be applied to all the volumes selected. The ratio can be multiple
aliases for each base address, or multiple base addresses to each alias; however, only
whole numbers and evenly divisible ratios are acceptable, for example:
• One alias: Two base addresses
• Two aliases: One base address
• Three aliases: Six base addresses
• But not three aliases: Two base addresses
In this example we are creating two aliases for each base address.
Click Add aliases and click Next to go to the verification panel; see Figure 13-71 on
page 266.


Figure 13-71 Verification panel

Verify that you entered the correct information before you click Finish.


Chapter 14. Configuration with the DS Command-Line Interface
In this chapter we explain how to configure the storage on the DS8000 with the DS
Command-Line interface (DS CLI). This chapter includes the following sections:
򐂰 DS Command-Line Interface — overview
򐂰 Configuring the I/O ports
򐂰 Configuring the DS8000 storage for FB volumes
򐂰 Configuring the DS8000 storage for CKD volumes
򐂰 Scripting the DS CLI

In this chapter we discuss the use of the DS CLI for storage configuration of the DS8000, not
for Copy Services configuration. For Copy Services configuration in the DS8000 using the DS
CLI refer to the following books:
򐂰 IBM System Storage DS8000: Command-Line Interface User's Guide, SC26-7916
򐂰 IBM System Storage DS8000 Series: Copy Services in Open Environments, SG24-6788
򐂰 IBM System Storage DS8000 Series: Copy Services with System z servers, SG24-6787


14.1 DS Command-Line Interface — overview


This section briefly discusses some basic use and setup characteristics of the DS CLI. The
following sections in this chapter discuss in more detail, with examples, how to perform the
DS8000 storage-related configuration tasks using the DS CLI. For detailed information on DS
CLI use and setup, see the publication IBM System Storage DS8000: Command-Line Interface
User's Guide, SC26-7916.

DS CLI help command


The DS CLI has an internal help system that can be used with each command or with the
help command. Example 14-1 shows the help command.

Note: The DS CLI is used for both the DS8000 and the DS6000. In most cases, the
commands operate in exactly the same way for both storage units. If you see a command
being run on a DS6000, it will most likely work in exactly the same manner on a DS8000.
Very few commands are unique to either disk subsystem.

Example 14-1 Displaying a list of all commands in DS CLI using the help command
dscli> help
applykey lsframe mkpprc setdialhome
chckdvol lshba mkpprcpath setflashrevertible
chextpool lshostconnect mkrank setioport
chfbvol lshosttype mkremoteflash setoutput
chhostconnect lshostvol mksession setplex
chlcu lsioport mkuser setremoteflashrevertible
chlss lskey mkvolgrp setsim
chpass lslcu offloadss setsmtp
chrank lslss pausegmir setsnmp
chsession lsportprof pausepprc setvpn
chsi lspprc quit showarray
chsp lspprcpath restorevolaccess showarraysite
chsu lsproblem resumegmir showckdvol
chuser lsrank resumepprc showcontactinfo
chvolgrp lsremoteflash resyncflash showextpool
clearvol lsserver resyncremoteflash showfbvol
closeproblem lssession reverseflash showgmir
commitflash lssi revertflash showgmircg
commitremoteflash lsstgencl revertremoteflash showgmiroos
dscli lssu rmarray showhostconnect
exit lsuser rmckdvol showioport
failbackpprc lsvolgrp rmextpool showlcu
failoverpprc managehostconnect rmfbvol showlss
freezepprc managepwfile rmflash showpass
help mkaliasvol rmgmir showplex
lsaddressgrp mkarray rmhostconnect showrank
lsarray mkckdvol rmlcu showsi
lsarraysite mkesconpprcpath rmpprc showsp
lsavailpprcport mkextpool rmpprcpath showsu
lsckdvol mkfbvol rmrank showuser
lsda mkflash rmremoteflash showvolgrp
lsddm mkgmir rmsession testcallhome
lsextpool mkhostconnect rmuser unfreezeflash
lsfbvol mklcu rmvolgrp unfreezepprc
lsflash mkpe setcontactinfo ver
dscli>


When configuring a DS8000 with the DS CLI, you are required to include the machine's ID (the
storage image ID) in nearly every command that is issued. If you do not want to type this ID
after each command, you can change the DS CLI profile. If you enter the serial number of the
machine and the HMC's network address into this profile, you do not have to include these
values in each command.

This profile can usually be found at the following location:

C:\Program Files\IBM\dscli\Profile\dscli.profile

A simple way to edit the profile is to do the following:


1. From the Windows desktop, double-click the DS CLI icon.
2. From the command window that opens, enter the command: cd profile
3. Now from the profile directory, enter the command notepad dscli.profile, as shown in
Example 14-2.

Example 14-2 Command prompt operation


C:\Program Files\ibm\dscli>cd profile
C:\Program Files\IBM\dscli\profile>notepad dscli.profile

4. Now you have notepad opened with the DS CLI profile. There are four lines you could
consider adding. Examples of these are shown in bold in Example 14-3.

Example 14-3 DS CLI profile example


#
# DS CLI Profile
#
# Management Console/Node IP Address(es)
# hmc1 and hmc2 are equivalent to -hmc1 and -hmc2 command options.
#hmc1:127.0.0.1
#hmc2:127.0.0.1

# Default target Storage Image ID


# "devid" and "remotedevid" are equivalent to
# "-dev storage_image_ID" and "-remotedev storeage_image_ID" command options,
respectively.
#devid: IBM.2107-AZ12341
#remotedevid:IBM.2107-AZ12341

devid: IBM.2107-75ABCDE
hmc1: 10.0.0.250

username: admin
password: passw0rd

5. Save the changed file as a new file and close notepad. You can then reference the new
profile using the -cfg parameter when starting DS CLI.

Attention: The default profile file created when you install DS CLI will potentially be
replaced every time you install a new version of DS CLI. It is a better practice to open the
default profile and then save it as a new file. You can then create multiple profiles and
reference the relevant profile file using the -cfg parameter.
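
For example, if you saved a copy of the default profile as lab8000.profile (a hypothetical file
name) in the profile directory, you could start the DS CLI with that profile as follows:

C:\Program Files\IBM\dscli>dscli -cfg profile\lab8000.profile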

Adding the serial number using the devid parameter and the HMC IP address using the hmc1
parameter is highly recommended. Adding the username and password parameters will certainly
simplify your DS CLI startup, but it is less recommended, because a password saved in a
profile file is stored in clear text. Anyone who has access to that file can read the
password.

Important: Take care if adding multiple devid and HMC entries. Only one should be
uncommented (or more literally, unhashed) at any one time. If you have multiple hmc1 or
devid entries, the DS CLI uses the one closest to the bottom of the profile.

14.2 Configuring the I/O ports


A good first step is to set the I/O ports to the desired topology. In Example 14-4 we list the I/O
ports using the lsioport command. Note that ports I0000-I0003 are on one card, while
I0100-I0103 are on another card.

Example 14-4 Listing the I/O ports


dscli> lsioport -dev IBM.2107-7503461
Date/Time: 29 October 2005 2:30:31 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
ID WWPN State Type topo portgrp
===============================================================
I0000 500507630300008F Online Fibre Channel-SW SCSI-FCP 0
I0001 500507630300408F Online Fibre Channel-SW SCSI-FCP 0
I0002 500507630300808F Online Fibre Channel-SW SCSI-FCP 0
I0003 500507630300C08F Online Fibre Channel-SW SCSI-FCP 0
I0100 500507630308008F Online Fibre Channel-LW FICON 0
I0101 500507630308408F Online Fibre Channel-LW SCSI-FCP 0
I0102 500507630308808F Online Fibre Channel-LW FICON 0
I0103 500507630308C08F Online Fibre Channel-LW FICON 0

There are three possible topologies for each I/O port:


SCSI-FCP Fibre Channel switched fabric (also called point to point)
FC-AL Fibre Channel arbitrated loop
FICON FICON, for System z hosts only

In Example 14-5 we set two I/O ports to the FICON topology and then check the results.

Example 14-5 Changing topology using setioport


dscli> setioport -topology ficon I0001
Date/Time: 27 October 2005 23:04:43 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00011I setioport: I/O Port I0001 successfully configured.
dscli> setioport -topology ficon I0101
Date/Time: 27 October 2005 23:06:13 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00011I setioport: I/O Port I0101 successfully configured.
dscli> lsioport
Date/Time: 27 October 2005 23:06:32 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
ID WWPN State Type topo portgrp
===============================================================
I0000 500507630300008F Online Fibre Channel-SW SCSI-FCP 0
I0001 500507630300408F Online Fibre Channel-SW FICON 0
I0002 500507630300808F Online Fibre Channel-SW SCSI-FCP 0
I0003 500507630300C08F Online Fibre Channel-SW SCSI-FCP 0
I0100 500507630308008F Online Fibre Channel-LW FICON 0
I0101 500507630308408F Online Fibre Channel-LW FICON 0
I0102 500507630308808F Online Fibre Channel-LW FICON 0
I0103 500507630308C08F Online Fibre Channel-LW FICON 0


14.3 Configuring the DS8000 storage for FB volumes


This section goes through examples of a typical DS8000 storage configuration when
attaching to open systems hosts. We do the DS8000 storage configuration by going through
the following steps:
1. Set I/O ports.
2. Install license keys.
3. Create arrays.
4. Create ranks.
5. Create extent pools.
6. Create volumes.
7. Create volume groups.
8. Create host connections.

The first task, set the I/O ports, has already been discussed in 14.2, “Configuring the I/O
ports” on page 270. The second task, install the license keys, has already been discussed in
11.2.4, “Applying activation codes using the DS CLI” on page 209.

14.3.1 Create array


In this next step we create the arrays. Before creating the arrays, it is a good idea to first list
the array sites. The command to do this is lsarraysite; see Example 14-6.

Attention: Remember that an array for a DS8000 can only contain one array site and a
DS8000 array site contains eight disk drive modules (DDMs).

Example 14-6 Listing array sites


dscli> lsarraysite
Date/Time: 27 October 2005 20:54:31 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
arsite DA Pair dkcap (10^9B) State Array
=============================================
S1 0 146.0 Unassigned -
S2 0 146.0 Unassigned -
S3 0 146.0 Unassigned -
S4 0 146.0 Unassigned -

In Example 14-6, we can see that there are four array sites and that we can therefore create
four arrays.

We can now issue the mkarray command to create arrays, as in Example 14-7. You will notice
that in this case we have used one array site (in the first array, S1) to create a single RAID 5
array. If we wished to create a RAID 10 array, we would have to change the -raidtype
parameter to 10 (instead of 5).

Example 14-7 Creating arrays with mkarray


dscli> mkarray -raidtype 5 -arsite S1
Date/Time: 27 October 2005 21:57:59 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00004I mkarray: Array A0 successfully created.
dscli> mkarray -raidtype 5 -arsite S2
Date/Time: 27 October 2005 21:58:24 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00004I mkarray: Array A1 successfully created.
dscli>
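
As a sketch only, if we instead wanted a RAID 10 array from one of the remaining array sites
shown in Example 14-6 (S3 in this illustration), the command would look like this:

dscli> mkarray -raidtype 10 -arsite S3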


We can now see what arrays have been created by using the lsarray command; see
Example 14-8.

Example 14-8 Listing the arrays with lsarray


dscli> lsarray
Date/Time: 27 October 2005 21:58:27 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Array State Data RAIDtype arsite Rank DA Pair DDMcap (10^9B)
=====================================================================
A0 Unassigned Normal 5 (6+P+S) S1 - 0 146.0
A1 Unassigned Normal 5 (6+P+S) S2 - 0 146.0

Example 14-8 shows the result of the lsarray command. We can see the type of RAID array
and the number of disks that are allocated to the array (in this example 6+P+S, which means
the usable space of the array is 6 times the DDM size), as well as the capacity of the DDMs
that are used and which array sites were used to create the arrays.

14.3.2 Create ranks


Once we have created all the arrays that are required, we then create the ranks using the
mkrank command. The format of the command is: mkrank -array Ax -stgtype xxx, where
xxx is either FB or CKD, depending on whether you are configuring for open systems or
System z hosts.

Once we have created all the ranks, we do an lsrank command. This command will display
all the ranks that have been created, which server the rank is attached to, the RAID type, and
the format of the rank—whether it is Fixed Block (FB) or Count Key Data (CKD).

Example 14-9 shows the mkrank commands and the result of a succeeding lsrank -l
command.

Example 14-9 Creating and listing ranks with mkrank and lsrank
dscli> mkrank -array A0 -stgtype fb
Date/Time: 27 October 2005 21:31:16 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00007I mkrank: Rank R0 successfully created.
dscli> mkrank -array A1 -stgtype fb
Date/Time: 27 October 2005 21:31:16 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00007I mkrank: Rank R1 successfully created.
dscli> lsrank -l
Date/Time: 27 October 2005 21:32:31 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
ID Group State datastate Array RAIDtype extpoolID extpoolnam stgtype exts usedexts
=======================================================================================
R0 - Unassigned Normal A0 5 - - fb 773 -
R1 - Unassigned Normal A1 5 - - fb 773 -

14.3.3 Create extent pools


The next step is to create extent pools. Keep the following points in mind when creating the
extent pools:
򐂰 The number of extent pools can range from one up to the number of existing ranks.
򐂰 Each extent pool has an associated rank group (either 0 or 1, for server 0 or server 1).
򐂰 Each extent pool has a -stgtype attribute, that is, it is either FB or CKD.
򐂰 Ideally, all ranks within an extent pool should have the same characteristics, that is, the
same DDM type and the same RAID type.


For ease of management, we create empty extent pools relating to the type of storage that is
in this pool. For example, create an extent pool for high capacity disk, create another for high
performance, and, if needed, extent pools for the CKD environment. For high capacity, you
would consider using 300 GB 10k rpm DDMs, while for high performance you might consider
73 GB 15k rpm DDMs.

It is also a good idea to note to which server the extent pool has an affinity.

Example 14-10 An extent pool layout plan


FB Extent Pool high capacity 300gb disks assigned to server 0 (FB_LOW_0)
FB Extent Pool high capacity 300gb disks assigned to server 1 (FB_LOW_1)
FB Extent Pool high performance 146gb disks assigned to server 0 (FB_High_0)
FB Extent Pool high performance 146gb disks assigned to server 1 (FB_High_1)
CKD Extent Pool High performance 146gb disks assigned to server 0 (CKD_High_0)
CKD Extent Pool High performance 146gb disks assigned to server 1 (CKD_High_1)

Example 14-10 shows one way the machine could be divided. Note that in Example 14-6 on
page 271 we had only four array sites, so we would clearly need more DDMs to support this
many extent pools.

Note that the mkextpool command forces you to name the extent pools. In Example 14-11 we
first create empty extent pools using mkextpool. We then list the extent pools to get their IDs.
Then we attach a rank to an empty extent pool using the chrank command. Finally we list the
extent pools again using lsextpool and note the change in capacity of the extent pool.

Example 14-11 Extent pool creation using mkextpool, lsextpool, and chrank
dscli> mkextpool -rankgrp 0 -stgtype fb FB_high_0
Date/Time: 27 October 2005 21:42:04 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00000I mkextpool: Extent Pool P0 successfully created.
dscli> mkextpool -rankgrp 1 -stgtype fb FB_high_1
Date/Time: 27 October 2005 21:42:12 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00000I mkextpool: Extent Pool P1 successfully created.
dscli> lsextpool
Date/Time: 27 October 2005 21:49:33 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
===========================================================================================
FB_high_0 P0 fb 0 below 0 0 0 0 0
FB_high_1 P1 fb 1 below 0 0 0 0 0
dscli> chrank -extpool P0 R0
Date/Time: 27 October 2005 21:43:23 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00008I chrank: Rank R0 successfully modified.
dscli> chrank -extpool P1 R1
Date/Time: 27 October 2005 21:43:23 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00008I chrank: Rank R1 successfully modified.
dscli> lsextpool
Date/Time: 27 October 2005 21:50:10 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
===========================================================================================
FB_high_0 P0 fb 0 below 773 0 773 0 0
FB_high_1 P1 fb 1 below 773 0 773 0 0

After having assigned a rank to an extent pool, we should be able to see this when we display
the ranks. In Example 14-12 on page 274 we can see that rank R0 is assigned to extpool P0.


Example 14-12 Displaying the ranks after assigning a rank to an extent pool
dscli> lsrank -l
Date/Time: 27 October 2005 22:08:42 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
ID Group State datastate Array RAIDtype extpoolID extpoolnam stgtype exts usedexts
===================================================================================
R0 0 Normal Normal A0 5 P0 FB_high_0 fb 773 0
R1 1 Normal Normal A1 5 P1 FB_high_1 fb 773 0

14.3.4 Creating FB volumes


We are now able to create volumes and volume groups. When we create them, we should try
to distribute them evenly across the two rank groups in the storage unit, and we should also
try to create the same number of volumes in each rank group.

The format of the command that we use is:


mkfbvol -extpool pX -cap xx -name high_fb_0_#h 1000-1003

In Example 14-13, we have created eight volumes, each with a capacity of 10 GB. The first
four volumes are assigned to rank group 0 and the second four are assigned to rank group 1.

Example 14-13 Creating fixed block volumes using mkfbvol


dscli> lsextpool
Date/Time: 27 October 2005 21:50:10 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
===========================================================================================
FB_high_0 P0 fb 0 below 773 0 773 0 0
FB_high_1 P1 fb 1 below 773 0 773 0 0
dscli> mkfbvol -extpool p0 -cap 10 -name high_fb_0_#h 1000-1003
Date/Time: 27 October 2005 22:24:15 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00025I mkfbvol: FB volume 1000 successfully created.
CMUC00025I mkfbvol: FB volume 1001 successfully created.
CMUC00025I mkfbvol: FB volume 1002 successfully created.
CMUC00025I mkfbvol: FB volume 1003 successfully created.
dscli> mkfbvol -extpool p1 -cap 10 -name high_fb_1_#h 1100-1103
Date/Time: 27 October 2005 22:26:18 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00025I mkfbvol: FB volume 1100 successfully created.
CMUC00025I mkfbvol: FB volume 1101 successfully created.
CMUC00025I mkfbvol: FB volume 1102 successfully created.
CMUC00025I mkfbvol: FB volume 1103 successfully created.

Looking closely at the mkfbvol command used in Example 14-13, we see that volumes
1000–1003 are in extpool P0. That extent pool is attached to rank group 0, which means
server 0. Now rank group 0 can only contain even numbered LSSs, so that means volumes in
that extent pool must belong to an even numbered LSS. The first two digits of the volume
serial number are the LSS number, so in this case, volumes 1000–1003 are in LSS 10.

For volumes 1100-1103 in Example 14-13, the first two digits of the volume serial number are
11, an odd LSS number, which signifies that these volumes belong to rank group 1. Also note
that the -cap parameter determines the size, but because the -type parameter was not used,
the default size is binary. So these volumes are 10 GB binary, which equates to 10,737,418,240
bytes. If we had used the parameter -type ess, the volumes would be decimally sized and
would be a minimum of 10,000,000,000 bytes.
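
As an illustration, a decimally sized volume could be created in the same extent pool with the
following command (the volume ID 1004 is a hypothetical next free address in LSS 10):

dscli> mkfbvol -extpool p0 -type ess -cap 10 -name high_fb_0_#h 1004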

Finally, in Example 14-13 we named the volumes using the naming base high_fb_0_#h. The
#h means that the hexadecimal volume number is used as part of the volume name. This can
be seen in Example 14-14 on page 275, where we list the volumes that we have created using
the lsfbvol command. We then list the extent pools to see how much space is left after the
volume creation.

Example 14-14 Checking the machine after creating volumes, by using lsextpool and lsfbvol
dscli> lsfbvol
Date/Time: 27 October 2005 22:28:01 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Name ID accstate datastate configstate deviceMTM datatype extpool cap (2^30B) cap (10^9B)
===========================================================================================================
high_fb_0_1000 1000 Online Normal Normal 2107-922 FB 512 P0 10.0 -
high_fb_0_1001 1001 Online Normal Normal 2107-922 FB 512 P0 10.0 -
high_fb_0_1002 1002 Online Normal Normal 2107-922 FB 512 P0 10.0 -
high_fb_0_1003 1003 Online Normal Normal 2107-922 FB 512 P0 10.0 -
high_fb_1_1100 1100 Online Normal Normal 2107-922 FB 512 P1 10.0 -
high_fb_1_1101 1101 Online Normal Normal 2107-922 FB 512 P1 10.0 -
high_fb_1_1102 1102 Online Normal Normal 2107-922 FB 512 P1 10.0 -
high_fb_1_1103 1103 Online Normal Normal 2107-922 FB 512 P1 10.0 -
dscli> lsextpool
Date/Time: 27 October 2005 22:27:50 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
===========================================================================================
FB_high_0 P0 fb 0 below 733 5 733 0 4
FB_high_1 P1 fb 1 below 733 5 733 0 4

Important: For the DS8000, the LSSs can be ID 00 to ID FE. The LSSs are in address
groups. Address group 0 is LSS 00 to 0F, address group 1 is LSS 10 to 1F, and so on. The
moment you create an FB volume in an address group, then that entire address group can
only be used for FB volumes. Be aware of this when planning your volume layout in a
mixed FB/CKD DS8000.

14.3.5 Creating volume groups


Fixed block volumes are assigned to open systems hosts using volume groups, not to be
confused with the term volume group as used in AIX. A fixed block volume can be a member of
multiple volume groups. Volumes can be added to or removed from volume groups as required.
Each volume group must be either SCSI MAP256 or SCSI MASK, depending on the SCSI
LUN address discovery method used by the operating system to which the volume group will
be attached.

Determining if an open systems host is SCSI MAP256 or SCSI MASK


First we determine what sort of SCSI host we are working with. Then we use the lshosttype
command with the -type parameter set to scsimask and then to scsimap256. In Example 14-15 we
can see the results of each command.

Example 14-15 Listing host types with the lshostype command


dscli> lshosttype -type scsimask
Date/Time: 27 October 2005 23:13:50 IBM DSCLI Version: 5.1.0.204
HostType Profile AddrDiscovery LBS
==================================================
Hp HP - HP/UX reportLUN 512
SVC San Volume Controller reportLUN 512
SanFsAIX IBM pSeries - AIX/SanFS reportLUN 512
pSeries IBM pSeries - AIX reportLUN 512
zLinux IBM zSeries - zLinux reportLUN 512
dscli> lshosttype -type scsimap256
Date/Time: 27 October 2005 23:13:58 IBM DSCLI Version: 5.1.0.204
HostType Profile AddrDiscovery LBS

=====================================================
AMDLinuxRHEL AMD - Linux RHEL LUNPolling 512
AMDLinuxSuse AMD - Linux Suse LUNPolling 512
AppleOSX Apple - OSX LUNPolling 512
Fujitsu Fujitsu - Solaris LUNPolling 512
HpTru64 HP - Tru64 LUNPolling 512
HpVms HP - Open VMS LUNPolling 512
LinuxDT Intel - Linux Desktop LUNPolling 512
LinuxRF Intel - Linux Red Flag LUNPolling 512
LinuxRHEL Intel - Linux RHEL LUNPolling 512
LinuxSuse Intel - Linux Suse LUNPolling 512
Novell Novell LUNPolling 512
SGI SGI - IRIX LUNPolling 512
SanFsLinux - Linux/SanFS LUNPolling 512
Sun SUN - Solaris LUNPolling 512
VMWare VMWare LUNPolling 512
Win2000 Intel - Windows 2000 LUNPolling 512
Win2003 Intel - Windows 2003 LUNPolling 512
iLinux IBM iSeries - iLinux LUNPolling 512
pLinux IBM pSeries - pLinux LUNPolling 512

Having determined the host type, we can now make a volume group. In Example 14-16 the
example host type we chose is AIX, and checking Example 14-15, we can see the address
discovery method for AIX is scsimask.

Example 14-16 Creating a volume group with mkvolgrp and displaying it


dscli> mkvolgrp -type scsimask -volume 1000-1002,1100-1102 AIX_VG_01
Date/Time: 27 October 2005 23:18:07 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00030I mkvolgrp: Volume group V11 successfully created.
dscli> lsvolgrp
Date/Time: 27 October 2005 23:18:21 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Name ID Type
=======================================
ALL CKD V10 FICON/ESCON All
AIX_VG_01 V11 SCSI Mask
ALL Fixed Block-512 V20 SCSI All
ALL Fixed Block-520 V30 OS400 All
dscli> showvolgrp V11
Date/Time: 27 October 2005 23:18:15 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Name AIX_VG_01
ID V11
Type SCSI Mask
Vols 1000 1001 1002 1100 1101 1102

In this example we added volumes 1000 to 1002 and 1100 to 1102; we did this to spread the
workload evenly across the two rank groups. We then listed all available volume groups using
lsvolgrp. Finally, we listed the contents of volume group V11, since this was the volume
group we created.

Clearly we may also want to add or remove volumes to this volume group at a later time. To
achieve this we use chvolgrp with the -action parameter. In Example 14-17 on page 277 we
added volume 1003 to the volume group V11. We display the results, and then removed the
volume.


Example 14-17 Changing a volume group with chvolgrp


dscli> chvolgrp -action add -volume 1003 V11
Date/Time: 27 October 2005 23:22:50 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00031I chvolgrp: Volume group V11 successfully modified.
dscli> showvolgrp V11
Date/Time: 27 October 2005 23:22:58 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Name AIX_VG_01
ID V11
Type SCSI Mask
Vols 1000 1001 1002 1003 1100 1101 1102
dscli> chvolgrp -action remove -volume 1003 V11
Date/Time: 27 October 2005 23:23:08 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00031I chvolgrp: Volume group V11 successfully modified.
dscli> showvolgrp V11
Date/Time: 27 October 2005 23:23:13 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Name AIX_VG_01
ID V11
Type SCSI Mask
Vols 1000 1001 1002 1100 1101 1102

Attention: Not all operating systems can deal with the removal of a volume. Consult your
operating system documentation to determine the safest way to remove a volume from a
host.

14.3.6 Creating host connections


The final step in the logical configuration process is to create host connections for your
attached hosts. You need to assign volume groups to those connections. Each host HBA
can only be defined once, and each hostconnect can only have one volume group assigned
to it. Remember, though, that a volume can be assigned to multiple volume groups.

In Example 14-18 we create a single host connection that represents one HBA in our
example AIX host. We use the -hosttype parameter with the host type we determined in
Example 14-15 on page 275, and we allocate the connection to volume group V11. At this point,
provided the SAN zoning is correct, the host should be able to see the LUNs in volume group V11.

Example 14-18 Creating host connections using mkhostconnect and lshostconnect


dscli> mkhostconnect -wwname 100000C912345678 -hosttype pSeries -volgrp V11 AIX_Server_01
Date/Time: 27 October 2005 23:28:03 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00012I mkhostconnect: Host connection 0000 successfully created.
dscli> lshostconnect
Date/Time: 27 October 2005 23:28:12 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Name ID WWPN HostType Profile portgrp volgrpID ESSIOport
=========================================================================================
AIX_Server_01 0000 100000C912345678 pSeries IBM pSeries - AIX 0 V11 all
dscli>

Note that you can also use just -profile instead of -hosttype. However, this is not
recommended. If you use the -hosttype parameter, it actually sets both values
(-profile and -hosttype), whereas using just -profile leaves the -hosttype column
unpopulated.

There is also the option in the mkhostconnect command to restrict access to only certain I/O
ports. This is done with the -ioport parameter. Restricting access in this way is usually not
necessary. If you wish to restrict access for certain hosts to certain I/O ports on the DS8000,
do this through zoning on your SAN switch.


Managing hosts with multiple HBAs


If you have a host with multiple HBAs, you have two considerations:
򐂰 For the GUI to consider multiple host connects to be used by the same server, the host
connects must have the same name. So in Example 14-19, host connects 0010 and 0011
would appear in the GUI as a single server with two HBAs. However, host connects 000E
and 000F would appear as two separate hosts even though in reality they are being used
by the same server. If you do not plan to use the GUI to manage host connections then
this is not a major consideration. Using more verbose hostconnect naming may make
management easier.
򐂰 If you wish to use a single command to change the assigned volume group of several
hostconnects at the same time, then you need to assign these hostconnects to a unique
port group and then use the managehostconnect command. This command changes the
assigned volume group for all hostconnects assigned to a particular port group.

When creating hosts you can specify the -portgrp parameter. By using a unique port group
number for each attached server, you can easily detect servers with multiple HBAs.

In Example 14-19 we have six host connections. By using the port group number we can see
there are three separate hosts, each with two HBAs. Port group 0 is used for all hosts that do
not have a port group number set.

Example 14-19 Using the portgrp number to separate attached hosts


dscli> lshostconnect
Date/Time: 14 November 2005 4:27:15 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
Name ID WWPN HostType Profile portgrp volgrpID
===========================================================================================
bench_tic17_fc0 0008 210000E08B1234B1 LinuxSuse Intel - Linux Suse 8 V1 all
bench_tic17_fc1 0009 210000E08B12A3A2 LinuxSuse Intel - Linux Suse 8 V1 all
p630_fcs0 000E 10000000C9318C7A pSeries IBM pSeries - AIX 9 V2 all
p630_fcs1 000F 10000000C9359D36 pSeries IBM pSeries - AIX 9 V2 all
p615_7 0010 10000000C93E007C pSeries IBM pSeries - AIX 10 V3 all
p615_7 0011 10000000C93E0059 pSeries IBM pSeries - AIX 10 V3 all
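
As a sketch, a host connection like p615_7 in Example 14-19 could have been created with a
command similar to the following (the WWPN, port group number, and volume group mirror that
example and are illustrative only):

dscli> mkhostconnect -wwname 10000000C93E007C -hosttype pSeries -portgrp 10 -volgrp V3 p615_7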

Changing host connections


If we wish to change a host connection we can use the chhostconnect command. This
command can be used to change nearly all parameters of the host connection except for the
WWPN. If you need to change the WWPN you will need to create a whole new host
connection. To change the assigned volume group use either chhostconnect to change one
hostconnect at a time, or use the managehostconnect command to simultaneously reassign
all the hostconnects in one port group.

14.3.7 Mapping open systems hosts disks to storage unit volumes


When you have assigned volumes to an open systems host, and you have then installed the
DS CLI on this host, you can run the DS CLI command lshostvol on this host. This command
will map assigned LUNs to open systems host volume names.

In this section we give examples for several operating systems. In each example we assign
some storage unit volumes (either DS8000 or DS6000) to an open systems host. We install
DS CLI on this host. We log on to this host and start DS CLI. It does not matter which HMC or
SMC we connect to with the DS CLI. We then issue the lshostvol command.


Important: The lshostvol command communicates only with the operating system of the
host on which the DS CLI is installed. You cannot run this command on one host to see the
attached disks of another host.

AIX: Mapping disks when MPIO is being used


In Example 14-20, we have an AIX server that uses MPIO. We have two volumes assigned to
this host, 1800 and 1801. Because MPIO is being used, we do not see the number of paths.
In fact from this display it is not possible to tell if MPIO is even installed. You would need to run
the pcmpath query device command to confirm the path count.

Example 14-20 lshostvol on an AIX host using MPIO


dscli> lshostvol
Date/Time: November 15, 2005 7:00:15 PM CST IBM DSCLI Version: 5.1.0.204
Disk Name Volume Id Vpath Name
==========================================
hdisk3 IBM.1750-1300819/1800 ---
hdisk4 IBM.1750-1300819/1801 ---

AIX: Mapping disks when SDD is being used


In Example 14-21, we have an AIX server that uses SDD. We have two volumes assigned to
this host, 1000 and 1100. Each volume has four paths.

Example 14-21 lshostvol on an AIX host using SDD


dscli> lshostvol
Date/Time: November 10, 2005 3:06:26 PM CET IBM DSCLI Version: 5.0.6.142
Disk Name Volume Id Vpath Name
============================================================
hdisk1,hdisk3,hdisk5,hdisk7 IBM.1750-1300247/1000 vpath0
hdisk2,hdisk4,hdisk6,hdisk8 IBM.1750-1300247/1100 vpath1

HP-UX: Mapping disks when SDD is not being used


In Example 14-22 we have an HP-UX host that does not have SDD installed. We have two volumes
assigned to this host, 1105 and 1106.

Example 14-22 lshostvol on an HP-UX host that does not use SDD
dscli> lshostvol
Date/Time: November 16, 2005 4:03:25 AM GMT IBM DSCLI Version: 5.0.4.140
Disk Name Volume Id Vpath Name
==========================================
c38t0d5 IBM.2107-7503461/1105 ---
c38t0d6 IBM.2107-7503461/1106 ---

HP-UX or Solaris: Mapping disks when SDD is being used


In Example 14-23 on page 280 we have a Solaris host that has SDD installed. We have two
volumes assigned to this host, 4205 and 4206. Each volume has two paths. The Solaris
command iostat -En can also produce similar information. The output of lshostvol on an
HP-UX host looks exactly the same, with each vpath made up of disks with c-t-d numbers
(controller, target, disk).


Example 14-23 lshostvol on a Solaris host that has SDD


dscli> lshostvol
Date/Time: November 10, 2005 3:54:27 PM MET IBM DSCLI Version: 5.1.0.204
Disk Name Volume Id Vpath Name
==================================================
c2t1d0s0,c3t1d0s0 IBM.2107-7520781/4205 vpath2
c2t1d1s0,c3t1d1s0 IBM.2107-7520781/4206 vpath1

Solaris: Mapping disks when SDD is not being used


In Example 14-24 we have a Solaris host that does not have SDD installed. It instead uses an
alternative multi-pathing product. We have two volumes assigned to this host, 4200 and 4201.
Each volume has two paths. The Solaris command iostat -En can also produce similar
information.

Example 14-24 lshostvol on a Solaris host that does not have SDD
dscli> lshostvol
Date/Time: November 10, 2005 3:58:29 PM MET IBM DSCLI Version: 5.1.0.204
Disk Name Volume Id Vpath Name
==========================================
c6t1d0 IBM-2107.7520781/4200 ---
c6t1d1 IBM-2107.7520781/4201 ---
c7t2d0 IBM-2107.7520781/4200 ---
c7t2d1 IBM-2107.7520781/4201 ---

Windows: Mapping disks when SDD is not being used


In Example 14-25 we run lshostvol on a Windows host that does not use SDD. There is no
multi-pathing software installed. The disks are listed by Windows Disk number. If you wish to
know which disk is associated with which drive letter, you will need to look at Windows Disk
manager.

Example 14-25 lshostvol on a Windows host that does not use SDD
dscli> lshostvol
Date/Time: 11. November 2005 12:02:26 CET IBM DSCLI Version: 5.1.0.204
Disk Name Volume Id Vpath Name
==========================================
Disk0 IBM.1750-1300247/1400 ---
Disk1 IBM.1750-1300247/1401 ---
Disk2 IBM.1750-1300247/1402 ---
Disk3 IBM.1750-1300247/1403 ---

14.4 Configuring the DS8000 storage for CKD volumes


To configure the DS8000 storage for CKD use, you follow almost exactly the same steps as for
FB volumes, with one additional step, the creation of Logical Control Units (LCUs):
1. Set I/O ports.
2. Install License keys if necessary.
3. Create arrays.
4. Create CKD ranks.
5. Create CKD extent pools.
6. Create LCUs (Logical Control Units).
7. Create CKD volumes.


You do not have to create volume groups or host connects for CKD volumes. Provided there
are I/O ports in FICON mode, access to CKD volumes by FICON hosts will be granted
automatically.

14.4.1 Create array


Array creation for CKD is exactly the same as for fixed block (FB); see “Create array” on
page 271.

14.4.2 Ranks and extent pool creation


When creating ranks and extent pools, you need to specify -stgtype ckd, as shown in
Example 14-26.

Example 14-26 Rank and extent pool creation for ckd


dscli> mkrank -array A0 -stgtype ckd
Date/Time: 28 October 2005 0:05:31 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00007I mkrank: Rank R0 successfully created.
dscli> lsrank
Date/Time: 28 October 2005 0:07:51 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
ID Group State datastate Array RAIDtype extpoolID stgtype
==============================================================
R0 - Unassigned Normal A0 5 - ckd
dscli> mkextpool -rankgrp 0 -stgtype ckd CKD_High_0
Date/Time: 28 October 2005 0:13:53 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00000I mkextpool: Extent Pool P0 successfully created.
dscli> chrank -extpool P2 R0
Date/Time: 28 October 2005 0:14:19 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00008I chrank: Rank R0 successfully modified.
dscli> lsextpool
Date/Time: 28 October 2005 0:14:28 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvol
===========================================================================================
CKD_High_0 2 ckd 0 below 252 0 287 0 0

14.4.3 Logical control unit creation


When creating volumes for a CKD environment, you are required to create Logical Control
Units (LCUs) before creating the volumes. In Example 14-27 you can see what happens if
you try to create a CKD volume without creating an LCU first.

Example 14-27 Trying to create CKD volumes without an LCU


dscli> mkckdvol -extpool p2 -cap 1113 -name ZOS_ckd_#h 1200
Date/Time: 28 October 2005 0:36:12 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUN02308E mkckdvol: Query failure: logical subsystem does not exist.

So first we must use the mklcu command. The format of the command is:
mklcu -qty XX -id XX -ss XX

Then, to display the LCUs that we have created, we can use the lslcu command.

In Example 14-28 on page 282, we create two LCUs using mklcu, and then list the created
LCUs using lslcu. Note that by default, the LCUs that were created are 3990-6.


Example 14-28 Creating a logical control unit with mklcu


dscli> mklcu -qty 2 -id 00 -ss FF00
Date/Time: 28 October 2005 16:53:17 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00017I mklcu: LCU 00 successfully created.
CMUC00017I mklcu: LCU 01 successfully created.
dscli> lslcu
Date/Time: 28 October 2005 16:53:26 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
ID Group addrgrp confgvols subsys conbasetype
=============================================
00 0 0 0 0xFF00 3990-6
01 1 0 0 0xFF01 3990-6

Also note that because we created two LCUs (using the parameter -qty 2), the first LCU, ID 00
(an even number), belongs to rank group 0, while the second LCU, ID 01 (an odd number),
belongs to rank group 1; both are in address group 0, as the lslcu output shows. By placing
the LCUs into both rank groups we maximize performance by spreading the workload across
both servers of the DS8000.

Note: For the DS8000, the CKD LCUs can be ID 00 to ID FE. The LCUs will fit into one of
16 address groups. Address group 0 is LCUs 00 to 0F, address group 1 is LCUs 10 to 1F,
and so on. If you create a CKD LCU in an address group, then that address group cannot
be used for FB volumes. Likewise, if there were, for instance, FB volumes in LSS 40 to 4F
(address group 4), then that address group cannot be used for CKD. Be aware of this
when planning the volume layout in a mixed FB/CKD DS8000.

14.4.4 Create CKD volumes


Having created an LCU, we can now create CKD volumes, using the mkckdvol command. The
format of the mkckdvol command is:
mkckdvol -extpool pX -cap 1113 -name zOS_ckd_#h 00xx-00xx

The major difference to note here is that the capacity is specified in cylinders. To avoid
wasting space, use a capacity that is a multiple of the CKD extent size of 1113 cylinders
(the capacity of a 3390 Model 1); for example, 3339 cylinders corresponds to a 3390 Model 3.

In Example 14-29 we create a single 3390-3 volume using 3339 cylinders.

Example 14-29 Creating CKD volumes using mkckdvol


dscli> mkckdvol -extpool p2 -cap 3339 -name zOS_ckd_#h 0000
Date/Time: 28 October 2005 17:04:31 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00021I mkckdvol: CKD Volume 0000 successfully created.
dscli> lsckdvol
Date/Time: 28 October 2005 17:04:37 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Name ID accstate datastate configstate deviceMTM voltype orgbvols extpool cap (cyl)
==============================================================================================
zOS_ckd_0000 0000 Online Normal Normal 3390-3 CKD Base - P2 3339

Remember that we can only create CKD volumes in LCUs that we have already created. In
our examples, trying to create, for instance, volume 0200 would fail with the same message
seen in Example 14-27 on page 281. This is because we only created LCU IDs 00 and 01,
meaning that all CKD volumes must be in the address range 00xx (LCU ID 00) or 01xx (LCU ID
01).


You also need to be aware that volumes in even numbered LCUs must be created from an
extent pool that belongs to rank group 0, while volumes in odd numbered LCUs must be
created from an extent pool in rank group 1.
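
For example, to create a 3390-3 volume in LCU 01, an extent pool in rank group 1 is needed. A
sketch, assuming such a CKD extent pool P3 exists (the pool ID and volume address are
illustrative only):

dscli> mkckdvol -extpool p3 -cap 3339 -name zOS_ckd_#h 0100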

14.5 Scripting the DS CLI


Because the DS CLI is shell based, it lends itself very well to being scripted. You can either
call single DS CLI commands in a script, one at a time as needed, or you can start the DS
CLI and point it to a script.

14.5.1 Single command mode


A simple example of calling DS CLI in a script is to create a Windows batch file and place
individual DS CLI commands in the batch file. Each command starts the DS CLI environment,
executes the DS CLI command, and then terminates DS CLI.

In Example 14-30 we have created a file called samplebat.bat. Because it is a Windows batch
file and we are issuing individual DS CLI commands, it can also contain any command that
can be executed in a Windows command prompt.

Example 14-30 Contents of a simple windows BAT file


@ECHO OFF
REM This is a sample windows BAT file that can display the config of a machine
REM Output is logged to a file called output.txt
dscli lsarraysite > output.txt
dscli lsarray >> output.txt
dscli lsrank >> output.txt
dscli lsextpool >> output.txt
type samplebat.bat

Having creating the BAT file, we can run it and display the output file. An example is shown in
Example 14-31. We run the batch file samplebat.bat and the command output is displayed.

Example 14-31 Executing a BAT file with DS CLI commands in it


D:\>samplebat.bat
Date/Time: 28 October 2005 23:02:32 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
arsite DA Pair dkcap (10^9B) State Array
=============================================
S1 0 73.0 Unassigned -
S2 0 73.0 Unassigned -
S3 0 73.0 Unassigned -
S4 0 73.0 Unassigned -
S5 0 73.0 Unassigned -
S6 0 73.0 Unassigned -
Date/Time: 28 October 2005 23:02:39 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00234I lsarray: No Array found.
Date/Time: 28 October 2005 23:02:47 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00234I lsrank: No Rank found.
Date/Time: 28 October 2005 23:02:53 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00234I lsextpool: No Extent Pool found.

D:\>


14.5.2 Script mode


If you want to run a script that only contains DS CLI commands, then you can start DS CLI in
script mode. The main thing to remember here is that the script that DS CLI executes can only
contain DS CLI commands.

In Example 14-32 we have the contents of a DS CLI script file. Note that it only contains DS
CLI commands, though comments can be placed in the file using a hash (#). One advantage
of using this method is that scripts written in this format can be used by the DS CLI on any
operating system on which you can install the DS CLI.

Example 14-32 Example of a DS CLI script file


# Sample ds cli script file
# Comments can appear if hashed
lsarraysite
lsarray
lsrank

In Example 14-33, we start the DS CLI using the -script parameter and specifying the name
of the script that contains the commands from Example 14-32.

Example 14-33 Executing DS CLI with a script file


C:\Program Files\ibm\dscli>dscli -script sample.script
Date/Time: 28 October 2005 23:06:47 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
arsite DA Pair dkcap (10^9B) State Array
=============================================
S1 0 73.0 Unassigned -
S2 0 73.0 Unassigned -
S3 0 73.0 Unassigned -
S4 0 73.0 Unassigned -
S5 0 73.0 Unassigned -
S6 0 73.0 Unassigned -
Date/Time: 28 October 2005 23:06:52 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00234I lsarray: No Array found.
Date/Time: 28 October 2005 23:06:53 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00234I lsrank: No Rank found.

C:\Program Files\ibm\dscli>
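
The same script file can be used on other platforms. For example, on an AIX or Linux host,
assuming the DS CLI was installed in /opt/ibm/dscli (the path depends on your installation),
the invocation would look like this:

/opt/ibm/dscli/dscli -script /home/admin/sample.script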


Part 4. Host considerations

In this part we discuss the specific host considerations that you may need to take into account
when implementing the DS8000 with your chosen platform. The following host platforms are presented:
򐂰 Open systems considerations
򐂰 System z considerations
򐂰 System i considerations



Chapter 15. Open systems considerations


This chapter discusses the specifics for attaching the DS8000 to host systems running the
following operating system platforms:
򐂰 Windows
򐂰 AIX
򐂰 Linux
򐂰 OpenVMS
򐂰 VMware
򐂰 Sun Solaris
򐂰 HP-UX

Also, some general considerations are discussed at the beginning of this chapter.


15.1 General considerations


In this section we cover some topics that are not specific to a single operating system:
available documentation, links to additional information, and considerations common to
all platforms.

15.1.1 Getting up to date information


This section provides a list of online resources where detailed and up-to-date information
about supported configurations, recommended settings, device driver versions, and so on,
can be found. Due to the high innovation rate in the IT industry, the support information is
updated frequently. Therefore it is advisable to visit these resources regularly and check for
updates.

The DS8000 Interoperability Matrix


The DS8000 Interoperability Matrix always provides the latest information about supported
platforms, operating systems, HBAs, and SAN infrastructure solutions. It contains detailed
specifications about models and versions. It also lists special support items, such as boot
support, and exceptions. It can be found at:
http://www-1.ibm.com/servers/storage/disk/DS8000/interop.html

The IBM HBA Search Tool


For information about supported Fibre Channel HBAs and the recommended or required
firmware and device driver levels for all IBM storage systems, you can visit the IBM HBA
Search Tool site, sometimes also referred to as the Fibre Channel host bus adapter firmware
and driver level matrix:
http://knowledge.storage.ibm.com/servers/storage/support/hbasearch/interop/hbaSearch.do

For each query, select one storage system and one operating system only; otherwise the
output of the tool will be ambiguous. You will be shown a list of all supported HBAs together
with the required firmware and device driver levels for your combination. Furthermore, you
can select a detailed view for each combination with more information, quick links to the HBA
vendors’ Web pages and their IBM supported drivers, and a guide to the recommended HBA
settings.

The DS8000 Host Systems Attachment Guide


The IBM System Storage DS8000 Host Systems Attachment Guide, SC26-7917, guides you
in detail through all the steps that are required to attach an open system host to your DS8000
storage system. It is available at the following Web site by clicking Documentation:
http://www-1.ibm.com/support/servers/storage/support/disk/ds8300/installing.html

The general link to installation documentation


Sometimes links are changed. A good starting point to find documentation and
troubleshooting information is available at:
http://www-1.ibm.com/servers/storage/support/disk/ds8100/installing.html
http://www-1.ibm.com/servers/storage/support/disk/ds8300/installing.html

The System Storage Proven program


IBM has introduced the System Storage Proven program to help clients identify storage
solutions and configurations that have been pre-tested for interoperability. It builds on IBM's


already extensive interoperability efforts to develop and deliver products and solutions that
work together with third-party products.

The System Storage Proven Web site provides more detail on the program, as well as the list
of pre-tested configurations:
http://www.ibm.com/servers/storage/proven/index.html

HBA vendor resources


All of the Fibre Channel HBA vendors have Web sites that provide information about their
products, facts, and features, as well as support information. These sites will be useful when
the IBM resources are not sufficient, for example, when troubleshooting an HBA driver. Be
aware that IBM cannot be held responsible for the content of these sites.

QLogic Corporation
The Qlogic Web site can be found at:
http://www.qlogic.com

QLogic maintains a page that lists all the HBAs, drivers, and firmware versions that are
supported for attachment to IBM storage systems:
http://www.qlogic.com/support/ibm_page.html

Emulex Corporation
The Emulex home page is:
http://www.emulex.com

They also have a page with content specific to IBM storage systems:
http://www.emulex.com/ts/docoem/framibm.htm

JNI/AMCC
AMCC took over the former JNI, but still markets FC HBAs under the JNI brand name. JNI
HBAs are supported for DS8000 attachment to Sun systems. The home page is:
http://www.amcc.com

Their IBM storage specific support page is:


http://www.amcc.com/drivers/IBM.html

Atto
Atto supplies HBAs, which IBM supports for Apple Macintosh attachment to the DS8000.
Their home page is:
http://www.attotech.com

They have no IBM storage specific page. Their support page is:
http://www.attotech.com/support.html

Downloading drivers and utilities for their HBAs requires registration.

Platform and operating system vendors pages


The platform and operating system vendors also provide lots of support information to their
clients. Go there for general guidance about connecting their systems to SAN-attached
storage. However, be aware that in some cases you will not find information that helps you
with third-party products. You should always check with IBM about interoperability and support from


IBM in regard to these products. It is beyond the scope of this redbook to list all the vendors'
Web sites.

15.1.2 Boot support


For most of the supported platforms and operating systems you can use the DS8000 as a
boot device. The DS8000 Interoperability Matrix provides detailed information about boot
support.

The IBM System Storage DS8000 Host Systems Attachment Guide, SC26-7917, helps you
with the procedures necessary to set up your host in order to boot from the DS8000.

The IBM System Storage Multipath Subsystem Device Driver User’s Guide, SC30-4131, also
helps with identifying the optimal configuration and lists the steps required to boot from
multipathing devices.

15.1.3 Additional supported configurations (RPQ)


There is a process for cases where a desired configuration is not represented in the support
matrix. This process is called Request for Price Quotation (RPQ). Clients should contact their
IBM storage sales specialist or IBM Business Partner for submission of an RPQ. Initiating the
process does not guarantee the desired configuration will be supported. This depends on the
technical feasibility and the required test effort. A configuration that equals or is similar to one
of the already approved ones is more likely to get approved than a completely different one.

15.1.4 Multipathing support — Subsystem Device Driver (SDD)


To ensure maximum availability most clients choose to connect their open systems hosts
through more than one Fibre Channel path to their storage systems. With an intelligent SAN
layout this protects you from failures of FC HBAs, SAN components, and host ports in the
storage subsystem.

Most operating systems, however, cannot deal natively with multiple paths to a single disk.
This puts data integrity at risk, because multiple write requests can be issued to the same
data and nothing ensures the correct order of writes.

To utilize the redundancy and increased I/O bandwidth you get with multiple paths, you need
an additional layer in the operating system’s disk subsystem to recombine the multiple disks
seen by the HBAs into one logical disk. This layer manages path failover, should a path
become unusable, and balancing of I/O requests across the available paths.

Subsystem Device Driver (SDD)


For most operating systems that are supported for DS8000 attachment, IBM provides the
IBM Subsystem Device Driver (SDD) free of charge to provide the following functionality:
򐂰 Enhanced data availability through automatic path failover and failback
򐂰 Increased performance through dynamic I/O load-balancing across multiple paths
򐂰 Ability for concurrent download of licensed internal code
򐂰 User configurable path-selection policies for the host system

IBM Multi-path Subsystem Device Driver (SDD) provides load balancing and enhanced data
availability capability in configurations with more than one I/O path between the host server
and the DS8000. SDD performs dynamic load balancing across all available preferred paths
to ensure full utilization of the SAN and HBA resources.

SDD can be downloaded from:


http://www.ibm.com/servers/storage/support/software/sdd/downloading.html

When you click the Subsystem Device Driver downloads link, you will be presented with a
list of all operating systems for which SDD is available. Selecting one leads you to the
download packages, the user's guide, and additional support information.

The user’s guide, the IBM System Storage Multipath Subsystem Device Driver User’s Guide,
SC30-4131, contains all the information that is needed to install, configure, and use SDD for
all supported operating systems.

Note: SDD and RDAC, the multipathing solution for the IBM System Storage DS4000
series, can coexist on most operating systems, as long as they manage separate HBA
pairs. Refer to the DS4000 series documentation for detailed information.

Other multipathing solutions


Some operating systems come with native multipathing software, for example:
򐂰 SUN StorEdge Traffic Manager for SUN Solaris.
򐂰 HP PVLinks for HP-UX.
򐂰 IBM AIX native multipathing (MPIO).
򐂰 IBM OS/400 V5R3 multipath support.
򐂰 In addition, there are third-party multipathing solutions, such as Veritas DMP, which is part
of Veritas Volume Manager.

Most of these solutions are also supported for DS8000 attachment, although the scope may
vary. There may be limitations for certain host bus adapters or operating system versions.
Always consult the DS8000 Interoperability Matrix for the latest information.

15.2 Windows
DS8000 supports Fibre Channel attachment to Microsoft Windows 2000/2003 servers. For
details regarding operating system versions and HBA types see the DS8000 Interoperability
Matrix, available at:
http://www.ibm.com/servers/storage/disk/DS8000/interop.html

The support includes cluster service and the use of the DS8000 as a boot device. Booting is
currently supported with host adapters QLA23xx (32-bit or 64-bit) and LP9xxx (32-bit only).
For a detailed
discussion about SAN booting (advantages, disadvantages, potential difficulties, and
troubleshooting), we highly recommend the Microsoft document Boot from SAN in Windows
Server 2003 and Windows 2000 Server, available at:
http://www.microsoft.com/windowsserversystem/wss2003/techinfo/plandeploy/BootfromSANinWi
ndows.mspx

15.2.1 HBA and operating system settings


Depending on the host bus adapter type, several HBA and driver settings may be required.
Refer to the IBM System Storage DS8000 Host Systems Attachment Guide, SC26-7917, for
the complete description of these settings. Although the volumes can be accessed with other
settings, too, the values recommended there have been tested for robustness.

To ensure optimum availability and recoverability when you attach a storage unit to a
Windows 2000/2003 host system, we recommend setting the TimeOutValue associated with
the host adapters to 60 seconds. The operating system uses the TimeOutValue parameter to
bind its recovery actions and responses to the disk subsystem. The value


is stored in the Windows registry at:


HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk\TimeOutValue

The value has the data type REG_DWORD and should be set to 0x0000003c hexadecimal
(60 decimal).
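
As an illustration only, assuming the reg.exe command-line tool is available on your server, the
value could be set and verified from a command prompt as shown below; you can also edit it
directly with the Registry Editor:

rem Set the disk time-out value to 60 seconds (0x3c); verify the recommended value
rem for your HBA and driver level before applying it
reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 60 /f

rem Display the value to confirm the change
reg query HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue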

15.2.2 SDD for Windows


An important task with a Windows host is the installation of the SDD multipath driver. Ensure
that SDD is installed before adding additional paths to a device, otherwise the operating
system could lose the ability to access existing data on that device. For details, refer to the
IBM System Storage Multipath Subsystem Device Driver User’s Guide, SC30-4131.

In Figure 15-1 you see an example of two disks connected by four paths to the server. You
see two IBM 2107900 SDD Disk Devices as real disks on Windows. The IBM 2107900 SCSI
Disk Device is hidden by SDD. The Disk manager view is shown in Figure 15-2.

Figure 15-1 SDD devices on Windows Device manager

Figure 15-2 Disk manager view


Note: Newly assigned disks are normally discovered automatically; if not, go to Disk
Management and rescan the disks, or go to the Device Manager and scan for hardware changes.

SDD datapath query


For datapath query device the option -l is added to mark the non-preferred paths with an
asterisk.

Example 15-1 is a sample output of the command datapath query device -l. You see that
all paths of the DS8000 are used, because the DS8000 does not implement the concept of a
preferred path, as the DS6000 does.

Example 15-1 datapath query device -l


Microsoft Windows [Version 5.2.3790]
(C) Copyright 1985-2003 Microsoft Corp.

C:\Program Files\IBM\Subsystem Device Driver>datapath query device -l

Total Devices : 2

DEV#: 0 DEVICE NAME: Disk2 Part0 TYPE: 2107900 POLICY: OPTIMIZED


SERIAL: 75065711000
LUN IDENTIFIER: 6005076303FFC0B60000000000001000
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 22 0
1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 32 0
2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 40 0
3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 32 0

DEV#: 1 DEVICE NAME: Disk3 Part0 TYPE: 2107900 POLICY: OPTIMIZED


SERIAL: 75065711001
LUN IDENTIFIER: 6005076303FFC0B60000000000001001
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 6 0
1 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 4 0
2 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 4 0
3 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 2 0

Another helpful command is datapath query wwpn, shown in Example 15-2. It helps you to
get the World Wide Port Name (WWPN) of your Fibre Channel adapter.

Example 15-2 datapath query WWPN

C:\Program Files\IBM\Subsystem Device Driver>datapath query wwpn


Adapter Name PortWWN
Scsi Port2: 210000E08B037575
Scsi Port3: 210000E08B033D76

The commands datapath query essmap and datapath query portmap are not available.

Mapping SDD devices to Windows drive letters


When assigning DS8000 LUNs to a Windows host, it may be advantageous to understand
which Windows drive letter is using which DS8000 LUN. To do this you need to use the


information displayed by the datapath query device command combined with the
information displayed in the Windows Disk Management panel.

In our example of Figure 15-1 on page 292, if we listed the vpaths we could see that SDD
DEV#: 0 has DEVICE NAME: Disk2. We could also see the serial number of the disk is
75065711000, which breaks out as LUN ID 1000 on DS8000 serial 7506571. We then need
to look at the Windows Disk Management panel, an example of which is shown in Figure 15-2
on page 292. From this we can see that Disk 2 is Windows drive letter E. If you can view this
document in color you can see the blue circle in Figure 15-2. SDD DEV#: 1 corresponds to
the red circle around Windows drive letter G.

Now that we have mapped the LUN ID to a Windows drive letter, if drive letter E were no
longer required on this Windows server, we could safely unassign LUN ID 1000 on the
DS8000 with serial number 7506571, knowing that we had removed the correct drive.

Support for Windows 2000 and Windows 2003 clustering


SDD 1.5.x.x does not support I/O load balancing in a Windows Server clustering environment.
SDD 1.6.0.0 (or later) is required to support load balancing in Windows clustering. When
running Windows clustering, clustering failover might not occur when the last path is being
removed from the shared resources. See Microsoft article Q294173 for additional information, at:
http://support.microsoft.com/default.aspx?scid=kb;en-us;Q294173

Windows does not support dynamic disks in the MSCS environment.

Considerations in Windows 2000/2003 clustering environments


There are subtle differences in the way that SDD handles path reclamation in a Windows
clustering environment compared to a non-clustering environment. When the Windows server
loses a path in a non-clustering environment, the path condition changes from open to dead
and the adapter condition changes from active to degraded. The adapter and path condition
will not change until the path is made operational again. When the Windows server loses a
path in a clustering environment, the path condition changes from open to dead and the
adapter condition changes from active to degraded. However, after a period of time, the path
condition changes back to open and the adapter condition changes back to normal, even if
the path has not been made operational again.

Note: The adapter goes to DEGRAD state when there are active paths left on the adapter.
It goes to FAILED state when there are no active paths.

The datapath set adapter # offline command operates differently in a clustering
environment as compared to a non-clustering environment. In a clustering environment, the
datapath set adapter offline command does not change the condition of the path if the
path is active or being reserved. If you issue the command, the following message is
displayed: to preserve access some paths left online.

15.2.3 Boot support


When booting from the FC storage systems, special restrictions apply:
򐂰 With Windows 2000, you should not use the same HBA as both the FC boot device and
the clustering adapter. The reason for this is the usage of SCSI bus resets by MSCS to
break up disk reservations during quorum arbitration. Because a bus reset cancels all
pending I/O operations to all FC disks visible to the host via that port, an MSCS-initiated
bus reset may cause operations on the C:\ drive to fail.


򐂰 With Windows 2003, MSCS uses target resets. See the Microsoft technical article
Microsoft Windows Clustering: Storage Area Networks at:
http://www.microsoft.com/windowsserver2003/techinfo/overview/san.mspx
򐂰 Windows Server 2003 allows the boot disk and the cluster server disks to be hosted on the
same bus. However, you need to use Storport miniport HBA drivers for this functionality to
work. This is not a supported configuration in combination with drivers of other types (for
example, SCSI port miniport or full port drivers).
򐂰 If you reboot a system with adapters while the primary path is in a failed state, you must
manually disable the BIOS on the first adapter and manually enable the BIOS on the
second adapter. You cannot enable the BIOS for both adapters at the same time. If the
BIOS for both adapters is enabled at the same time and there is a path failure on the
primary adapter, the system will stop with an INACCESSIBLE_BOOT_DEVICE error upon
reboot.

15.2.4 Windows Server 2003 VDS support


With Windows Server 2003 Microsoft introduced the Virtual Disk Service (VDS). It unifies
storage management and provides a single interface for managing block storage
virtualization. This interface is vendor and technology neutral, and is independent of the layer
where virtualization is done, operating system software, RAID storage hardware, or other
storage virtualization engines.

VDS is a set of APIs that uses two sets of providers to manage storage devices. The built-in
VDS software providers enable you to manage disks and volumes at the operating system
level. VDS hardware providers supplied by the hardware vendor enable you to manage
hardware RAID Arrays. Windows Server 2003 components that work with VDS include the
Disk Management Microsoft Management Console (MMC) snap-in; the DiskPart
command-line tool; and the DiskRAID command-line tool, which is available in the Windows
Server 2003 Deployment Kit. Figure 15-3 shows the VDS architecture.


Figure 15-3 Microsoft VDS architecture


For a detailed description of VDS, refer to the Microsoft Windows Server 2003 Virtual Disk
Service Technical Reference at:
http://www.microsoft.com/Resources/Documentation/windowsserv/2003/all/techref/en-us/W2K3
TR_vds_intro.asp

The DS8000 can act as a VDS hardware provider. The implementation is based on the DS
Common Information Model (CIM) agent, a middleware application that provides a
CIM-compliant interface. The Microsoft Virtual Disk Service uses the CIM technology to list
information and manage LUNs. See the IBM System Storage DS Open Application
Programming Interface Reference, GC35-0516, for information about how to install and
configure VDS support.

The following sections present examples of VDS integration with advanced functions of the
DS8000 storage systems that became possible with the implementation of the DS CIM agent.

Volume Shadow Copy Service


The Volume Shadow Copy Service provides a mechanism for creating consistent
point-in-time copies of data, known as shadow copies. It integrates IBM System Storage
FlashCopy to produce consistent shadow copies, while also coordinating with business
applications, file-system services, backup applications, and fast-recovery solutions.

For more information refer to:


http://www.microsoft.com/resources/documentation/WindowsServ/2003/all/techref/en-us/w2k3
tr_vss_how.asp

What is necessary to use these functions


To use these functions you need an installed CIM client. This CIM client requires a DS CLI
client to communicate with the DS8000 or the DS6000, or an ESS CLI client if it is going to
communicate with an ESS. On each server you need the IBM API support for Microsoft
Volume Shadow Copy Service; see Figure 15-4.


Figure 15-4 VSS installation infrastructure


After the installation of these components, described in IBM System Storage DS Open
Application Programming Interface Reference, GC35-0516, you have to:
򐂰 define a VSS_FREE volume group and virtual server
򐂰 define a VSS_RESERVED volume group and virtual server
򐂰 assign volumes to the VSS_FREE volume group.

The default WWPN for the VSS_FREE virtual server is 50000000000000; for the
VSS_RESERVED virtual server it is 50000000000001. These disks are available to the server
as a pool of free disks. If you want to have different pools of free disks, you can define your
own WWPN for another pool; see Example 15-3.

Example 15-3 ESS Provider Configuration Tool Commands Help

C:\Program Files\IBM\ESS Hardware Provider for VSS>ibmvssconfig.exe /?

ESS Provider Configuration Tool Commands


----------------------------------------
ibmvssconfig.exe <command> <command arguments>

Commands:
/h | /help | -? | /?
showcfg
listvols <all|free|vss|unassigned>
add <volumeID list> (separated by spaces)
rem <volumeID list> (separated by spaces)

Configuration:
set targetESS <5-digit ESS Id>
set user <CIMOM user name>
set password <CIMOM password>
set trace [0-7]
set trustpassword <trustpassword>
set truststore <truststore location>
set usingSSL <YES | NO>
set vssFreeInitiator <WWPN>
set vssReservedInitiator <WWPN>
set FlashCopyVer <1 | 2>
set cimomPort <PORTNUM>
set cimomHost <Hostname>
set namespace <Namespace>

With the ibmvssconfig.exe listvols command you can also verify what volumes are
available for VSS in the VSS_FREE pool; see Example 15-4.

Example 15-4 VSS list volumes at free pool

C:\Program Files\IBM\ESS Hardware Provider for VSS>ibmvssconfig.exe listvols free


Listing Volumes...

LSS   Volume     Size                   Assigned to
---------------------------------------------------
10    003AAGXA   5.3687091E9 bytes      5000000000000000
11    103AAGXA   2.14748365E10 bytes    5000000000000000

Disks that are unassigned in your disk subsystem can also be assigned to the VSS_FREE
pool with the add command; a sample invocation is shown after Example 15-5. In
Example 15-5 on page 298 we verify the volumes available for VSS.


Example 15-5 VSS list volumes available for vss

C:\Program Files\IBM\ESS Hardware Provider for VSS>ibmvssconfig.exe listvols vss


Listing Volumes...

LSS   Volume     Size                   Assigned to
---------------------------------------------------
10    001AAGXA   1.00000072E10 bytes    Unassigned
10    003AAGXA   5.3687091E9 bytes      5000000000000000
11    103AAGXA   2.14748365E10 bytes    5000000000000000
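
As an illustration only, the unassigned volume listed above could be added to the VSS_FREE
pool with the add command. The volume ID shown here is simply taken from the listvols
output; verify the exact ID format that your provider version expects:

C:\Program Files\IBM\ESS Hardware Provider for VSS>ibmvssconfig.exe add 001AAGXA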

How to use VSS and VDS for backup


Figure 15-5 shows a scenario for using VSS and VDS for backup. More detailed
information can be found in IBM TotalStorage Business Continuity Solutions Guide,
SG24-6547.


Figure 15-5 VSS VDS example

Geographically Dispersed Sites (GDS)


IBM System Storage Continuous Availability for Windows (formerly GDS for MSCS) is
designed to provide high availability and a disaster recovery solution for clustered Microsoft
Server environments. It integrates Microsoft Cluster Service (MSCS) and the Metro Mirror
feature of the DS8000. It is designed to allow Microsoft Cluster installations to span
geographically dispersed sites and help protect clients from site disasters or storage system
failures.

IBM has a GDS service offering to deliver solutions in this area; visit the IBM Web site and
see the Services & Industry Solutions page for more information.

For more information see IBM System Storage DS8000 Series: Copy Services in Open
Environments, SG24-6788.


15.3 AIX
This section covers items specific to the IBM AIX operating system. It is not intended to
repeat the information that is contained in other publications. We focus on topics that are not
covered in the well-known literature or are important enough to be repeated here.

For complete information refer to IBM System Storage DS8000 Host Systems Attachment
Guide, SC26-7917.

15.3.1 Finding the World Wide Port Names


In order to allocate DS8000 disks to a System p server, the World Wide Port Name (WWPN)
of each of the System p Fibre Channel adapters has to be registered in the DS8000. You can
use the lscfg command to find out these names, as shown in Example 15-6.

Example 15-6 Finding Fibre Channel adapter WWN


lscfg -vl fcs0
fcs0 U1.13-P1-I1/Q1 FC Adapter

Part Number.................00P4494
EC Level....................A
Serial Number...............1A31005059
Manufacturer................001A
Feature Code/Marketing ID...2765
FRU Number.................. 00P4495
Network Address.............10000000C93318D6
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C93318D6
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U1.13-P1-I1/Q1

You can also print the WWPN of an HBA directly by running


lscfg -vl <fcs#> | grep Network

The # stands for the instance of each FC HBA you want to query.
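
If the server has several Fibre Channel adapters, a small shell loop, shown here only as a
convenience sketch, lists the WWPN of each one:

# List the network address (WWPN) of every fcs adapter on the system
for i in $(lsdev -Cc adapter -F name | grep fcs); do
  echo "$i: $(lscfg -vl $i | grep Network)"
done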

15.3.2 AIX multipath support


The DS8000 supports two methods of attaching AIX hosts:
򐂰 Subsystem Device Driver (SDD)
򐂰 AIX MPIO with PCM (SDDPCM)

MPIO and SDD cannot coexist on the same server. See the following sections for a detailed
discussion and considerations.


15.3.3 SDD for AIX


The following file sets are needed for SDD:
򐂰 devices.sdd.51.rte, devices.sdd.52.rte, devices.sdd.53.rte, or devices.sdd.433.rte,
depending on the OS version
򐂰 devices.fcp.disk.ibm.rte

The following file sets should not be installed and must be removed:
򐂰 devices.fcp.disk.ibm.mpio.rte
򐂰 devices.sddpcm.52.rte, or devices.sddpcm.53.rte

For datapath query device the option -l is added to mark the non-preferred paths in a
storage unit. This option can be used in addition to the existing datapath query device
commands. In Example 15-7, DS8000 disks are mixed with DS6000 devices. Because the
DS8000 does not implement the concept of a preferred data path, you see non-preferred paths
marked with an asterisk (*) only for the DS6000 volumes. On the DS8000 all paths are used.

Example 15-7 datapath query device -l on AIX

root@sanh70:/ > datapath query device -l

Total Devices : 4

DEV#: 0 DEVICE NAME: vpath0 TYPE: 2107900 POLICY: Optimized


SERIAL: 75065711002
LUN IDENTIFIER: 6005076303FFC0B60000000000001002
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk1 OPEN NORMAL 843 0
1 fscsi0/hdisk3 OPEN NORMAL 906 0
2 fscsi1/hdisk5 OPEN NORMAL 900 0
3 fscsi1/hdisk8 OPEN NORMAL 867 0

DEV#: 1 DEVICE NAME: vpath1 TYPE: 2107900 POLICY: Optimized


SERIAL: 75065711003
LUN IDENTIFIER: 6005076303FFC0B60000000000001003
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk2 CLOSE NORMAL 0 0
1 fscsi0/hdisk4 CLOSE NORMAL 0 0
2 fscsi1/hdisk6 CLOSE NORMAL 0 0
3 fscsi1/hdisk9 CLOSE NORMAL 0 0

DEV#: 2 DEVICE NAME: vpath2 TYPE: 1750500 POLICY: Optimized


SERIAL: 13AAGXA1000
LUN IDENTIFIER: 600507630EFFFC6F0000000000001000
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk10 OPEN NORMAL 2686 0
1* fscsi0/hdisk12 OPEN NORMAL 0 0
2 fscsi1/hdisk14 OPEN NORMAL 2677 0
3* fscsi1/hdisk16 OPEN NORMAL 0 0

DEV#: 3 DEVICE NAME: vpath3 TYPE: 1750500 POLICY: Optimized


SERIAL: 13AAGXA1100
LUN IDENTIFIER: 600507630EFFFC6F0000000000001100
==========================================================================


Path# Adapter/Hard Disk State Mode Select Errors


0* fscsi0/hdisk11 CLOSE NORMAL 0 0
1 fscsi0/hdisk13 CLOSE NORMAL 0 0
2* fscsi1/hdisk15 CLOSE NORMAL 0 0
3 fscsi1/hdisk17 CLOSE NORMAL 0 0

The datapath query portmap command shows the usage of the ports. In Example 15-8 you
see a mixed DS8000 and DS6000 disk configuration as seen by the server. For the DS6000,
the datapath query portmap command uses capital letters for the preferred paths and
lowercase letters for non-preferred paths; this does not apply to the DS8000.

Example 15-8 datapath query portmap on AIX


root@sanh70:/ > datapath query portmap
BAY-1(B1) BAY-2(B2) BAY-3(B3) BAY-4(B4)
ESSID DISK H1 H2 H3 H4 H1 H2 H3 H4 H1 H2 H3 H4 H1 H2 H3 H4
ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD
BAY-5(B5) BAY-6(B6) BAY-7(B7) BAY-8(B8)
H1 H2 H3 H4 H1 H2 H3 H4 H1 H2 H3 H4 H1 H2 H3 H4
ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD
13AAGXA vpath2 Y--- ---- ---- ---- y--- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
13AAGXA vpath3 o--- ---- ---- ---- O--- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
7506571 vpath0 -Y-- ---- ---- ---- ---- ---- -Y-- ---- ---- ---- ---- ---- ---- ---- ---- ----
7506571 vpath1 -O-- ---- ---- ---- ---- ---- -O-- ---- ---- ---- ---- ---- ---- ---- ---- ----

Y = online/open y = (alternate path) online/open


O = online/closed o = (alternate path) online/closed
N = offline n = (alternate path) offline
- = path not configured
PD = path down

Note: 2105 devices' essid has 5 digits, while 1750/2107 device's essid has 7 digits.

Sometimes the lsvpcfg command helps you get an overview of your configuration. You can
easily count how many physical disks there are, with which serial number, and how many
paths. See Example 15-9.

Example 15-9 lsvpcfg command


root@sanh70:/ > lsvpcfg
vpath0 (Avail pv sdd_testvg) 75065711002 = hdisk1 (Avail ) hdisk3 (Avail ) hdisk5 (Avail ) hdisk8 (Avail )
vpath1 (Avail ) 75065711003 = hdisk2 (Avail ) hdisk4 (Avail ) hdisk6 (Avail ) hdisk9 (Avail )

There are also some other valuable features in SDD for AIX:
򐂰 Enhanced SDD configuration methods and migration.
SDD has a feature in the configuration method to read the pvid from the physical disks and
convert the pvid from hdisks to vpaths during the SDD vpath configuration. With this
feature, you can skip the process of converting the pvid from hdisks to vpaths after
configuring SDD devices. Furthermore, SDD migration can skip the pvid conversion
process. This greatly reduces the SDD migration time, especially in environments with a
large number of SDD devices and an extensive LVM configuration.
򐂰 Allow mixed volume groups with non-SDD devices in hd2vp, vp2hd, and dpovgfix.
Mixed volume groups are supported by the three SDD LVM conversion scripts hd2vp,
vp2hd, and dpovgfix. These scripts allow pvid conversion even if the volume group
consists of SDD-supported devices and non-SDD-supported devices. The non-SDD
devices allowed are IBM RDAC, EMC PowerPath, NEC MPO, and Hitachi Dynamic Link
Manager devices.


򐂰 Migration option for large device configuration.


SDD offers an environment variable, SKIP_SDD_MIGRATION, for you to customize the
SDD migration or upgrade to maximize performance. The SKIP_SDD_MIGRATION
environment variable is an option available to permit the bypass of the SDD automated
migration process backup, restoration, and recovery of LVM configurations and SDD
device configurations. This variable can help decrease the SDD upgrade time if you
choose to reboot the system after upgrading SDD.
For details about these features see the IBM System Storage Multipath Subsystem Device
Driver User’s Guide, SC30-4131.

15.3.4 AIX Multipath I/O (MPIO)


AIX MPIO is an enhancement to the base OS environment that provides native support for
multi-path Fibre Channel storage attachment. MPIO automatically discovers, configures, and
makes available every storage device path. The storage device paths are managed to provide
high availability and load balancing of storage I/O. MPIO is part of the base kernel and is
available for AIX 5.2 and AIX 5.3.

The base functionality of MPIO is limited. It provides an interface for vendor-specific Path
Control Modules (PCMs) which allow for implementation of advanced algorithms.

IBM provides a PCM for DS8000 that enhances MPIO with all the features of the original
SDD. It is called SDDPCM and is available from the SDD download site.

For basic information about MPIO see the online guide AIX 5L System Management
Concepts: Operating System and Devices from the AIX documentation site, at:
http://publib16.boulder.ibm.com/pseries/en_US/aixbman/admnconc/hotplug_mgmt.htm#mpioconcepts

The management of MPIO devices is described in the online guide System Management
Guide: Operating System and Devices for AIX 5L from the AIX documentation site, at:
http://publib16.boulder.ibm.com/pseries/en_US/aixbman/baseadmn/manage_mpio.htm

For information about SDDPCM commands, refer to the IBM System Storage Multipath
Subsystem Device Driver User’s Guide, SC30-4131. The SDDPCM Web site is located at:
http://www-1.ibm.com/servers/storage/support/software/sdd/index.html

Benefits of MPIO
There are some reasons to prefer MPIO with SDDPCM to traditional SDD:
򐂰 Performance improvements due to direct integration with AIX
򐂰 Better integration if different storage systems are attached
򐂰 Easier administration through native AIX commands

Restrictions and considerations


The following considerations apply:
򐂰 Default MPIO is not supported on DS8000.
򐂰 MPIO with PCM is currently not supported in an HACMP environment.
򐂰 SDDPCM and SDD cannot coexist on an AIX server. If a server connects to both ESS
storage devices and DS family storage devices, all devices must be configured either as
non-MPIO-capable devices or as MPIO-capable devices.
򐂰 If you choose to use MPIO with SDDPCM instead of SDD, you have to remove the regular
DS8000 Host Attachment Script and install the MPIO version of it. This script identifies the


DS8000 volumes to the operating system as MPIO manageable. Of course, you cannot have
SDD and MPIO/SDDPCM on a given server at the same time.
򐂰 A point worth considering when deciding between SDD and MPIO is that the IBM System
Storage SAN Volume Controller does not support MPIO at this time. For updated
information refer to:
http://www.ibm.com/servers/storage/software/virtualization/svc/index.html

Setup and use


The following filesets are needed for MPIO on AIX:
򐂰 devices.common.IBM.mpio.rte
򐂰 devices.fcp.disk.ibm.mpio.rte
򐂰 devices.sddpcm.52.rte or devices.sddpcm.53.rte, depending on the OS level

The following filesets are not needed and must be removed:


򐂰 devices.sdd.52.rte
򐂰 devices.fcp.disk.ibm.rte

Unlike with SDD, each disk is presented only once, and you can use normal AIX
commands; see Example 15-10. Each DS8000 disk is seen only once, as IBM MPIO
FC 2107.

Example 15-10 MPIO lsdev


root@san5198b:/ > lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1p-20-02 IBM MPIO FC 2107
hdisk3 Available 1p-20-02 IBM MPIO FC 2107

Like SDD, MPIO with PCM supports the preferred path concept of the DS6000; in the DS8000
there are no preferred paths. As with SDD, the load balancing algorithm can be changed.

Example 15-11 shows a pcmpath query device command for a mixed environment, with two
DS8000s and one DS6000 disk.

Example 15-11 MPIO pcmpath query device


root@san5198b:/ > pcmpath query device

DEV#: 2 DEVICE NAME: hdisk2 TYPE: 2107900 ALGORITHM: Load Balance


SERIAL: 75065711100
==========================================================================
Path# Adapter/Path Name State Mode Select Errors
0 fscsi0/path0 OPEN NORMAL 1240 0
1 fscsi0/path1 OPEN NORMAL 1313 0
2 fscsi0/path2 OPEN NORMAL 1297 0
3 fscsi0/path3 OPEN NORMAL 1294 0

DEV#: 3 DEVICE NAME: hdisk3 TYPE: 2107900 ALGORITHM: Load Balance


SERIAL: 75065711101
==========================================================================
Path# Adapter/Path Name State Mode Select Errors
0 fscsi0/path0 CLOSE NORMAL 0 0
1 fscsi0/path1 CLOSE NORMAL 0 0
2 fscsi0/path2 CLOSE NORMAL 0 0
3 fscsi0/path3 CLOSE NORMAL 0 0


DEV#: 4 DEVICE NAME: hdisk4 TYPE: 1750500 ALGORITHM: Load Balance


SERIAL: 13AAGXA1101
==========================================================================
Path# Adapter/Path Name State Mode Select Errors
0* fscsi0/path0 OPEN NORMAL 12 0
1 fscsi0/path1 OPEN NORMAL 3787 0
2* fscsi1/path2 OPEN NORMAL 17 0
3 fscsi1/path3 OPEN NORMAL 3822 0

All other commands work as with SDD, such as pcmpath query essmap or pcmpath query
portmap. In Example 15-12 you see these commands in a mixed environment with two
DS8000 disks and one DS6000 disk.

Example 15-12 MPIO pcmpath queries in a mixed DS8000 and DS6000 environment
root@san5198b:/ > pcmpath query essmap
Disk Path P Location adapter LUN SN Type Size LSS Vol Rank C/A S Connection port RaidMod
e
------- ----- - ---------- -------- -------- ------------ ---- --- --- ---- --- - ----------- ---- -------
-
hdisk2 path0 1p-20-02[FC] fscsi0 75065711100 IBM 2107-900 5.0 17 0 0000 17 Y R1-B1-H1-ZB 1 RAID5
hdisk2 path1 1p-20-02[FC] fscsi0 75065711100 IBM 2107-900 5.0 17 0 0000 17 Y R1-B2-H3-ZB 131 RAID5
hdisk2 path2 1p-20-02[FC] fscsi0 75065711100 IBM 2107-900 5.0 17 0 0000 17 Y R1-B3-H4-ZB 241 RAID5
hdisk2 path3 1p-20-02[FC] fscsi0 75065711100 IBM 2107-900 5.0 17 0 0000 17 Y R1-B4-H2-ZB 311 RAID5
hdisk3 path0 1p-20-02[FC] fscsi0 75065711101 IBM 2107-900 5.0 17 1 0000 17 Y R1-B1-H1-ZB 1 RAID5
hdisk3 path1 1p-20-02[FC] fscsi0 75065711101 IBM 2107-900 5.0 17 1 0000 17 Y R1-B2-H3-ZB 131 RAID5
hdisk3 path2 1p-20-02[FC] fscsi0 75065711101 IBM 2107-900 5.0 17 1 0000 17 Y R1-B3-H4-ZB 241 RAID5
hdisk3 path3 1p-20-02[FC] fscsi0 75065711101 IBM 2107-900 5.0 17 1 0000 17 Y R1-B4-H2-ZB 311 RAID5
hdisk4 path0 * 1p-20-02[FC] fscsi0 13AAGXA1101 IBM 1750-500 10.0 17 1 0000 07 Y R1-B1-H1-ZA 0 RAID5
hdisk4 path1 1p-20-02[FC] fscsi0 13AAGXA1101 IBM 1750-500 10.0 17 1 0000 07 Y R1-B2-H1-ZA 100 RAID5
hdisk4 path2 * 1p-28-02[FC] fscsi1 13AAGXA1101 IBM 1750-500 10.0 17 1 0000 07 Y R1-B1-H1-ZA 0 RAID5
hdisk4 path3 1p-28-02[FC] fscsi1 13AAGXA1101 IBM 1750-500 10.0 17 1 0000 07 Y R1-B2-H1-ZA 100 RAID5

root@san5198b:/ > pcmpath query portmap


BAY-1(B1) BAY-2(B2) BAY-3(B3) BAY-4(B4)
ESSID DISK H1 H2 H3 H4 H1 H2 H3 H4 H1 H2 H3 H4 H1 H2 H3 H4
ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD
BAY-5(B5) BAY-6(B6) BAY-7(B7) BAY-8(B8)
ESSID DISK H1 H2 H3 H4 H1 H2 H3 H4 H1 H2 H3 H4 H1 H2 H3 H4
ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD
7506571 hdisk2 -Y-- ---- ---- ---- ---- ---- -Y-- ---- ---- ---- ---- -Y-- ---- -Y-- ---- ----
7506571 hdisk3 -O-- ---- ---- ---- ---- ---- -O-- ---- ---- ---- ---- -O-- ---- -O-- ---- ----
13AAGXA hdisk4 y--- ---- ---- ---- Y--- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----

Y = online/open y = (alternate path) online/open


O = online/closed o = (alternate path) online/closed
N = offline n = (alternate path) offline
- = path not configured
? = path information not available
PD = path down

Note: 2105 devices' essid has 5 digits, while 1750/2107 device's essid has 7 digits.

Note that the non-preferred path asterisk is only for the DS6000.

Determine the installed SDDPCM level


You use the same command as for SDD, lslpp -l "*sdd*", to determine the installed level of
SDDPCM. It will also tell you whether you have SDD or SDDPCM installed.

SDDPCM software provides useful commands such as:


򐂰 pcmpath query device to check the configuration status of the devices
򐂰 pcmpath query adapter to display information about adapters
򐂰 pcmpath query essmap to display each device, path, location, and attributes


Useful MPIO commands


The lspath command displays the operational status for the paths to the devices, as shown in
Example 15-13. It can also be used to read the attributes of a given path to an MPIO-capable
device.

Example 15-13 lspath command result


{part1:root}/ -> lspath |pg
Enabled hdisk0 scsi0
Enabled hdisk1 scsi0
Enabled hdisk2 scsi0
Enabled hdisk3 scsi7
Enabled hdisk4 scsi7
...
Missing hdisk9 fscsi0
Missing hdisk10 fscsi0
Missing hdisk11 fscsi0
Missing hdisk12 fscsi0
Missing hdisk13 fscsi0
...
Enabled hdisk96 fscsi2
Enabled hdisk97 fscsi6
Enabled hdisk98 fscsi6
Enabled hdisk99 fscsi6
Enabled hdisk100 fscsi6

The chpath command is used to perform change operations on a specific path. It can either
change the operational status or tunable attributes associated with a path. It cannot perform
both types of operations in a single invocation.

The rmpath command unconfigures or undefines, or both, one or more paths to a target
device. It is not possible to unconfigure (undefine) the last path to a target device using the
rmpath command. The only way to do this is to unconfigure the device itself (for example, by
using the rmdev command).

Refer to the man pages of the MPIO commands for more information.
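
As an illustration only, with the device and adapter names taken from the examples above
(they will differ on your system), a single path could be disabled and later removed like this:

# Temporarily disable the path to hdisk2 that goes through adapter fscsi0
chpath -l hdisk2 -p fscsi0 -s disable

# Permanently remove the path definition after it has been disabled
rmpath -l hdisk2 -p fscsi0 -d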

15.3.5 LVM configuration


In AIX, all storage is managed by the AIX Logical Volume Manager (LVM). It virtualizes
physical disks to be able to dynamically create, delete, resize, and move logical volumes for
application use. To AIX our DS8000 logical volumes appear as physical SCSI disks. There
are some considerations to take into account when configuring LVM.

LVM striping
Striping is a technique for spreading the data in a logical volume across several physical disks
in such a way that all disks are used in parallel to access data on one logical volume. The
primary objective of striping is to increase the performance of a logical volume beyond that of
a single physical disk.

In the case of a DS8000, LVM striping can be used to distribute data across more than one
array (rank).

Refer to Chapter 10, “Performance” on page 167 for a more detailed discussion of methods to
optimize performance.
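
The following is a minimal sketch, assuming a volume group datavg that was built on four SDD
vpath devices; all names and sizes are examples only. It creates a logical volume of 128
logical partitions striped across the four devices with a 64 KB strip size:

# Create a striped logical volume across four vpath devices
mklv -y stripedlv -S 64K datavg 128 vpath0 vpath1 vpath2 vpath3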


LVM mirroring
LVM has the capability to mirror logical volumes across several physical disks. This improves
availability, because if one disk fails, another disk still holds a copy of the data. When
creating mirrored copies of logical volumes, make sure that the copies are indeed distributed
across separate disks.

With the introduction of SAN technology, LVM mirroring can even provide protection against a
site failure. Using long wave Fibre Channel connections, a mirror can be stretched up to a 10
km distance.
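
Again as a minimal sketch with example names, the following creates a logical volume with
two copies and the strict allocation policy, so that the two copies cannot be placed on the
same disk:

# Two copies (-c 2), strict allocation (-s y): one copy on each of the two vpath devices
mklv -y mirrorlv -c 2 -s y datavg 64 vpath0 vpath1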

15.3.6 AIX access methods for I/O


AIX provides several modes to access data in a file system. It may be important for
performance to choose the right access method.

Synchronous I/O
Synchronous I/O occurs while you wait. An application’s processing cannot continue until the
I/O operation is complete. This is a very secure and traditional way to handle data. It ensures
consistency at all times, but can be a major performance inhibitor. It also doesn’t allow the
operating system to take full advantage of functions of modern storage devices, such as
queueing, command reordering, and so on.

Asynchronous I/O
Asynchronous I/O operations run in the background and do not block user applications. This
improves performance, because I/O and application processing run simultaneously. Many
applications, such as databases and file servers, take advantage of the ability to overlap
processing and I/O. They have to take measures to ensure data consistency, though. You can
configure, remove, and change asynchronous I/O for each device using the chdev command
or SMIT.

Tip: If the number of async I/O (AIO) requests is high, the recommendation is to increase
maxservers to approximately the number of simultaneous I/Os there might be. In most
cases, it is better to leave the minservers parameter at the default value, since the AIO
kernel extension generates additional servers if needed. Look at the CPU utilization of the
AIO servers: if the utilization is even across all of them, they are all being used, and you
may want to try increasing their number in this case. Running pstat -a allows you to see
the AIO servers by name, and running ps -k shows them as the name kproc.
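
On AIX 5.2 and 5.3 the legacy AIO subsystem is represented by the aio0 device, and its
tunables can be displayed and changed with lsattr and chdev. The value below is only an
illustration; derive yours from the number of concurrent I/Os you observe:

# Display the current AIO settings
lsattr -El aio0

# Raise maxservers; with -P the change is made in the ODM and takes effect at the next reboot
chdev -l aio0 -a maxservers=256 -P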

Direct I/O
An alternative I/O technique called Direct I/O bypasses the Virtual Memory Manager (VMM)
altogether and transfers data directly from the user’s buffer to the disk and vice versa. The
concept behind this is similar to raw I/O in the sense that they both bypass caching at the file
system level. This reduces CPU overhead and makes more memory available to the
database instance, which can make more efficient use of it for its own purposes.

Direct I/O is provided as a file system option in JFS2. It can be used either by mounting the
corresponding file system with the mount -o dio option, or by opening a file with the
O_DIRECT flag specified in the open() system call. When a file system is mounted with the
-o dio option, all files in the file system use Direct I/O by default.


Direct I/O benefits applications that have their own caching algorithms by eliminating the
overhead of copying data twice, first between the disk and the OS buffer cache, and then from
the buffer cache to the application’s memory.

For applications that benefit from the operating system cache, Direct I/O should not be used,
because all I/O operations would be synchronous. Direct I/O also bypasses the JFS2
read-ahead. Read-ahead can provide a significant performance boost for sequentially
accessed files.

Concurrent I/O
In 2003, IBM introduced a new file system feature called Concurrent I/O (CIO) for JFS2. It
includes all the advantages of Direct I/O and also relieves the serialization of write accesses.
It improves performance for many environments, particularly commercial relational
databases. In many cases, the database performance achieved using Concurrent I/O with
JFS2 is comparable to that obtained by using raw logical volumes.

A method for enabling the concurrent I/O mode is to use the mount -o cio option when
mounting a file system.
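
As a short sketch with example device and mount point names, the same JFS2 file system
could be mounted with either option, depending on what the application needs:

# Mount a JFS2 file system with Direct I/O
mount -o dio /dev/datalv /data

# Mount the same file system with Concurrent I/O instead
mount -o cio /dev/datalv /data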

15.3.7 Boot device support


The DS8100 and DS8300 are supported as boot devices on System p servers that have Fibre
Channel boot capability. This support is not currently available for the IBM eServer
BladeCenter. Refer to IBM System Storage DS8000 Host Systems Attachment Guide,
SC26-7917, for additional information.

15.4 Linux
Linux is an open source UNIX-like kernel, originally created by Linus Torvalds. The term
Linux is often used to mean the whole operating system of GNU/Linux. The Linux kernel,
along with the tools and software needed to run an operating system, are maintained by a
loosely organized community of thousands of (mostly) volunteer programmers.

There are several organizations (distributors) that bundle the Linux kernel, tools, and
applications to form a distribution, a package that can be downloaded or purchased and
installed on a computer. Some of these distributions are commercial; others are not.

15.4.1 Support issues that distinguish Linux from other operating systems
Linux is different from the other, proprietary, operating systems in many ways:
򐂰 There is no one person or organization that can be held responsible or called for support.
򐂰 Depending on the target group, the distributions differ largely in the kind of support that is
available.
򐂰 Linux is available for almost all computer architectures.
򐂰 Linux is rapidly changing.

All these factors make it difficult to promise and provide generic support for Linux. As a
consequence, IBM has decided on a support strategy that limits the uncertainty and the
amount of testing.

IBM only supports the major Linux distributions that are targeted at enterprise clients:
򐂰 RedHat Enterprise Linux
򐂰 SUSE Linux Enterprise Server


򐂰 Asianux (Red Flag Linux)

These distributions have release cycles of about one year, are maintained for five years, and
require the user to sign a support contract with the distributor. They also have a schedule for
regular updates. These factors mitigate the issues listed previously. The limited number of
supported distributions also allows IBM to work closely with the vendors to ensure
interoperability and support. Details about the supported Linux distributions can be found in
the DS8000 Interoperability Matrix:
http://www-1.ibm.com/servers/storage/disk/DS8000/interop.html

There are exceptions to this strategy when the market demand justifies the test and support
effort.

15.4.2 Reference material


There is a lot of information available that helps you set up your Linux server to attach it to a
DS8000 storage subsystem.

The DS8000 Host Systems Attachment Guide


The IBM System Storage DS8000 Host Systems Attachment Guide, SC26-7917, provides
instructions to prepare an Intel IA-32-based machine for DS8000 attachment, including:
򐂰 How to install and configure the FC HBA
򐂰 Peculiarities of the Linux SCSI subsystem
򐂰 How to prepare a system that boots from the DS8000

It is not very detailed with respect to the configuration and installation of the FC HBA drivers.

Implementing Linux with IBM disk storage


The redbook Implementing Linux with IBM Disk Storage, SG24-6261, covers several
hardware platforms and storage systems. It is not yet updated with information about the
DS8000. The details provided for the attachment to the IBM Enterprise Storage Server (ESS
2105) are mostly valid for the DS8000, too. Read it for information regarding storage
attachment:
򐂰 Via FCP to an IBM eServer System z running Linux
򐂰 To an IBM eServer System p running Linux
򐂰 To an IBM eServer BladeCenter running Linux

It can be downloaded from:


http://publib-b.boulder.ibm.com/abstracts/sg246261.html

Linux with System z and ESS: Essentials


The redbook Linux with zSeries and ESS: Essentials, SG24-7025, provides much information
about Linux on System z and the ESS. It also describes in detail how the Fibre Channel
(FCP) attachment of a storage system to zLinux works. It does not, however, describe the
actual implementation. This information is at:
http://www.redbooks.ibm.com/redbooks/pdfs/sg247025.pdf

Getting Started with zSeries Fibre Channel Protocol


The redpaper Getting Started with zSeries Fibre Channel Protocol, REDP0205 is an older
publication (last updated in 2003) that provides an overview of Fibre Channel (FC) topologies
and terminology, and instructions to attach open systems (Fixed Block) storage devices via
FCP to an IBM eServer System z running Linux. It can be found at:
http://www.redbooks.ibm.com/redpapers/pdfs/redp0205.pdf


Other sources of information


Numerous hints and tips, especially for Linux on System z, are available on the IBM
Redbooks technotes page:
http://www.redbooks.ibm.com/redbooks.nsf/tips/

IBM System z dedicates its own Web page to storage attachment via FCP:
http://www.ibm.com/servers/eserver/zseries/connectivity/ficon_resources.html

The System z connectivity support page lists all supported storage devices and SAN
components that can be attached to a System z server. There is an extra section for FCP
attachment:
http://www.ibm.com/servers/eserver/zseries/connectivity/#fcp

The whitepaper ESS Attachment to United Linux 1 (IA-32) is available at:


http://www.ibm.com/support/docview.wss?uid=tss1td101235

It is intended to help users to attach a server running an enterprise-level Linux distribution


based on United Linux 1 (IA-32) to the IBM 2105 Enterprise Storage Server. It provides very
detailed step-by-step instructions and a lot of background information about Linux and SAN
storage attachment.

Another whitepaper, Linux on IBM eServer pSeries SAN - Overview for Customers, describes
in detail how to attach SAN storage (ESS 2105 and DS4000, formerly FAStT) to a System p
server running Linux:
http://www.ibm.com/servers/eserver/pseries/linux/whitepapers/linux_san.pdf

Most of the information provided in these publications is valid for DS8000 attachment,
although much of it was originally written for the ESS 2105.

15.4.3 Important Linux issues


Linux treats SAN-attached storage devices like conventional SCSI disks. The Linux SCSI I/O
subsystem has some peculiarities that are important enough to be described here, even if
they show up in some of the publications listed in the previous section.

Some Linux SCSI basics


Within the Linux kernel, device types are defined by major numbers. The instances of a given
device type are distinguished by their minor number. They are accessed through special
device files. For SCSI disks, the device files /dev/sdx are used, with x being a letter from a
through z for the first 26 SCSI disks discovered by the system and continuing with aa, ab, ac,
and so on, for subsequent disks. Due to the mapping scheme of SCSI disks and their
partitions to major and minor numbers, each major number allows for only 16 SCSI disk
devices. Therefore we need more than one major number for the SCSI disk device type.
Table 15-1 shows the assignment of special device files to major numbers.

Table 15-1 Major numbers and special device files


Major number    First special device file    Last special device file
8               /dev/sda                     /dev/sdp
65              /dev/sdq                     /dev/sdaf
66              /dev/sdag                    /dev/sdav
71              /dev/sddi                    /dev/sddx
128             /dev/sddy                    /dev/sden
129             /dev/sdeo                    /dev/sdfd
135             /dev/sdig                    /dev/sdiv

Each SCSI device can have up to 15 partitions, which are represented by the special device
files /dev/sda1, /dev/sda2, and so on. The mapping of partitions to special device files and
major and minor numbers is shown in Table 15-2.

Table 15-2 Minor numbers, partitions, and special device files


Major number   Minor number   Special device file   Partition
8              0              /dev/sda              All of 1st disk
8              1              /dev/sda1             1st partition of 1st disk
...
8              15             /dev/sda15            15th partition of 1st disk
8              16             /dev/sdb              All of 2nd disk
8              17             /dev/sdb1             1st partition of 2nd disk
...
8              31             /dev/sdb15            15th partition of 2nd disk
8              32             /dev/sdc              All of 3rd disk
...
8              255            /dev/sdp15            15th partition of 16th disk
65             0              /dev/sdq              All of 17th disk
65             1              /dev/sdq1             1st partition of 17th disk
...            ...

Missing device files


The Linux distributors do not always create all the possible special device files for SCSI disks.
If you attach more disks than there are special device files available, Linux will not be able to
address them. You can create missing device files with the mknod command. The mknod
command requires four parameters in a fixed order:
򐂰 The name of the special device file to create
򐂰 The type of the device: b stands for a block device, c for a character device
򐂰 The major number of the device
򐂰 The minor number of the device

Refer to the man page of the mknod command for more details. Example 15-14 on page 311
shows the creation of special device files for the seventeenth SCSI disk and its first three
partitions.


Example 15-14 Create new special device files for SCSI disks
mknod /dev/sdq b 65 0
mknod /dev/sdq1 b 65 1
mknod /dev/sdq2 b 65 2
mknod /dev/sdq3 b 65 3

After creating the device files you may have to change their owner, group, and file permission
settings to be able to use them. Often, the easiest way to do this is by duplicating the settings
of existing device files, as shown in Example 15-15. Be aware that after this sequence of
commands, all special device files for SCSI disks have the same permissions. If an
application requires different settings for certain disks, you have to correct them afterwards.

Example 15-15 Duplicating the permissions of special device files


knox:~ # ls -l /dev/sda /dev/sda1
brw-rw---- 1 root disk 8, 0 2003-03-14 14:07 /dev/sda
brw-rw---- 1 root disk 8, 1 2003-03-14 14:07 /dev/sda1
knox:~ # chmod 660 /dev/sd*
knox:~ # chown root:disk /dev/sda*

Managing multiple paths


If you assign a DS8000 volume to a Linux system through more than one path, it will see the
same volume more than once. It will also assign more than one special device file to it. To
utilize the path redundancy and increased I/O bandwidth, you need an additional layer in the
Linux disk subsystem to recombine the multiple disks seen by the system into one, to manage
the paths, and to balance the load across them.

The IBM multipathing solution for DS8000 attachment to Linux on Intel IA-32 and IA-64
architectures, IBM System p, and System i is the IBM Subsystem Device Driver (SDD). SDD
for Linux is available in the Linux RPM package format for all supported distributions from the
SDD download site. It is proprietary and binary only. It only works with certain kernel versions
with which it was tested. The readme file on the SDD for Linux download page contains a list
of the supported kernels.

The version of the Linux Logical Volume Manager that comes with all current Linux
distributions does not support its physical volumes being placed on SDD vpath devices.

What is new with SDD 1.6


The following is new:
򐂰 Red Hat and Red Flag do not allow an rpm upgrade or removal while SDD is in use. This can be overridden with the --nopre and --nopreun flags to rpm, but because SUSE does not support these flags, the feature is not available on SUSE (--noscripts also prevents required post conditions from running, so it is not an option).
򐂰 Tracing is now turned on by default for SDD. The SDD driver logs are saved to
/var/log/sdd.log and the sddsrv daemon logs are saved to /var/log/sddsrv.log.
򐂰 As part of the new performance improvements, an optimized sequential policy has been separated out from the optimized policy, and a round-robin sequential policy has been added. The optimized sequential policy is now the default policy on Linux. Both sequential policies base the path selection on whether the I/O is sequential; if it is not, they fall through to the existing optimized (load balanced) or round-robin policies. Highly sequential I/O can see a significant performance improvement, and non-sequential I/O should perform the same as without the sequential policy in place.


򐂰 Non-root users can now open a vpath device. Previously, only the root user had this privilege.

Note: SDD is not available for Linux on System z. SUSE Linux Enterprise Server 8 for System z comes with built-in multipathing provided by a patched Logical Volume Manager. At the time of writing, there is no multipathing support for Red Hat Enterprise Linux for System z.

Limited number of SCSI devices


Due to the design of the Linux SCSI I/O subsystem in the Linux Kernel Version 2.4, the
number of SCSI disk devices is limited to 256. Attaching devices through more than one path
reduces this number. If, for example, all disks were attached through four paths, only up to 64
disks could be used.

Important: The latest update to the SUSE Linux Enterprise Server 8, Service Pack 3 uses
a more dynamic method of assigning major numbers and allows the attachment of up to
2304 SCSI devices.

SCSI device assignment changes


Linux assigns special device files to SCSI disks in the order they are discovered by the
system. Adding or removing disks can change this assignment. This can cause serious
problems if the system configuration is based on special device names (for example, a file
system that is mounted using the /dev/sda1 device name). You can avoid some of them by
using:
򐂰 Disk labels instead of device names in /etc/fstab (see the sketch after this list)
򐂰 LVM logical volumes instead of /dev/sdxx devices for file systems
򐂰 SDD, which creates a persistent relationship between a DS8000 volume and a vpath
device regardless of the /dev/sdxx devices
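For illustration, a minimal sketch of the first approach: labeling a file system and mounting it by label in /etc/fstab, assuming an ext3 file system (the device, the label name ds8k_data, and the mount point /data are hypothetical):

# Assign a label to the file system on the DS8000 volume
e2label /dev/sdb1 ds8k_data

# /etc/fstab entry that refers to the label instead of the device name
LABEL=ds8k_data  /data  ext3  defaults  0 2

Because the label travels with the file system, the entry remains valid even if the volume is later discovered as a different /dev/sdxx device.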

RedHat Enterprise Linux (RH-EL) multiple LUN support


RH-EL by default is not configured for multiple LUN support. It will only discover SCSI disks
addressed as LUN 0. The DS8000 provides the volumes to the host with a fixed Fibre
Channel address and varying LUN. Therefore RH-EL 3 will see only one DS8000 volume
(LUN 0), even if more are assigned to it.

Multiple LUN support can be added with an option to the SCSI midlayer Kernel module
scsi_mod. To have multiple LUN support added permanently at boot time of the system, add
the following line to the file /etc/modules.conf: options scsi_mod max_scsi_luns=128.

After saving the file, rebuild the module dependencies by running: depmod -a

Now you have to rebuild the Initial RAM Disk, using the command:
mkinitrd <initrd-image> <kernel-version>

Issue mkinitrd -h for more help information. A reboot is required to make the changes
effective.
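Putting these steps together, a minimal sketch of the whole sequence follows; the initrd image name and kernel version are illustrative and depend on your installation:

# Enable multiple LUN support in the SCSI midlayer
echo "options scsi_mod max_scsi_luns=128" >> /etc/modules.conf

# Rebuild module dependencies and the initial RAM disk, then reboot
depmod -a
mkinitrd /boot/initrd-2.4.21-4.EL.img 2.4.21-4.EL
reboot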

Fibre Channel disks discovered before internal SCSI disks


In some cases, when the Fibre Channel HBAs are added to a RedHat Enterprise Linux
system, they will be automatically configured in a way that they are activated at boot time,
before the built-in parallel SCSI controller that drives the system disks. This will lead to shifted
special device file names of the system disk and can result in the system being unable to boot
properly.


To prevent the FC HBA driver from being loaded before the driver for the internal SCSI HBA
you have to change the /etc/modules.conf file:
򐂰 Locate the lines containing scsi_hostadapterx entries, where x is a number.
򐂰 Reorder these lines: First come the lines containing the name of the internal HBA driver
module, then the ones with the FC HBA module entry.
򐂰 Renumber lines: No number for the first entry, 1 for the second, 2 for the third, and so on.

After saving the file, rebuild the module dependencies by running: depmod -a

Now you have to rebuild the Initial RAM Disk, using the command:
mkinitrd <initrd-image> <kernel-version>

Issue mkinitrd -h for more help information. If you reboot now, the SCSI and FC HBA drivers
will be loaded in the correct order.

Example 15-16 shows how the /etc/modules.conf file should look with two Adaptec SCSI
controllers and two QLogic 2340 FC HBAs installed. It also contains the line that enables
multiple LUN support. Note that the module names will be different with different SCSI and
Fibre Channel adapters.

Example 15-16 Sample /etc/modules.conf


scsi_hostadapter aic7xxx
scsi_hostadapter1 aic7xxx
scsi_hostadapter2 qla2300
scsi_hostadapter3 qla2300
options scsi_mod max_scsi_luns=128

Adding FC disks dynamically


The commonly used way to discover newly attached DS8000 volumes is to unload and reload
the Fibre Channel HBA driver. However, this action is disruptive to all applications that use
Fibre Channel attached disks on this particular host.

A Linux system can recognize newly attached LUNs without unloading the FC HBA driver.
The procedure slightly differs depending on the installed FC HBAs.

In the case of QLogic HBAs, issue the command:
echo "scsi-qlascan" > /proc/scsi/qla2300/<adapter-instance>

With Emulex HBAs, issue the command:
sh force_lpfc_scan.sh "lpfc<adapter-instance>"

This script is not part of the regular device driver package and must be downloaded separately from:
http://www.emulex.com/ts/downloads/linuxfc/rel/201g/force_lpfc_scan.sh

It requires the tool dfc to be installed under /usr/sbin/lpfc.

In both cases the command must be issued for each installed HBA, with the
<adapter-instance> being the SCSI instance number of the HBA.

After the FC HBAs rescan the fabric, you can make the new devices available to the system
with the command echo "scsi add-single-device s c t l" > /proc/scsi/scsi.

The quadruple s c t l is the physical address of the device:


򐂰 s is the SCSI instance of the FC HBA.
򐂰 c is the channel (in our case always 0).
򐂰 t is the target address (usually 0, except if a volume is seen by a HBA more than once).
򐂰 l is the LUN.
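Putting these pieces together, a hedged sketch for a host with two QLogic HBAs (SCSI instances 2 and 3 in this example) after a new volume has been assigned as LUN 2; the instance numbers and the address quadruple are illustrative:

# Let both QLogic HBAs rescan the fabric
echo "scsi-qlascan" > /proc/scsi/qla2300/2
echo "scsi-qlascan" > /proc/scsi/qla2300/3

# Register the new volume (channel 0, target 0, LUN 2) with the SCSI midlayer on both paths
echo "scsi add-single-device 2 0 0 2" > /proc/scsi/scsi
echo "scsi add-single-device 3 0 0 2" > /proc/scsi/scsi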


The new volumes are added after the already existing ones. The following examples illustrate
this. Example 15-17 shows the original disk assignment as it existed since the last system
start.

Example 15-17 SCSI disks attached at system start time


/dev/sda - internal SCSI disk
/dev/sdb - 1st DS8000 volume, seen by HBA 0
/dev/sdc - 2nd DS8000 volume, seen by HBA 0
/dev/sdd - 1st DS8000 volume, seen by HBA 1
/dev/sde - 2nd DS8000 volume, seen by HBA 1

Example 15-18 shows the SCSI disk assignment after one more DS8000 volume is added.

Example 15-18 SCSI disks after dynamic addition of another DS8000 volume
/dev/sda - internal SCSI disk
/dev/sdb - 1st DS8000 volume, seen by HBA 0
/dev/sdc - 2nd DS8000 volume, seen by HBA 0
/dev/sdd - 1st DS8000 volume, seen by HBA 1
/dev/sde - 2nd DS8000 volume, seen by HBA 1
/dev/sdf - new DS8000 volume, seen by HBA 0
/dev/sdg - new DS8000 volume, seen by HBA 1

The mapping of special device files is now different than it would have been if all three
DS8000 volumes had already been present when the HBA driver was loaded. In other words,
if the system is now restarted, the device ordering will change to what is shown in
Example 15-19.

Example 15-19 SCSI disks after dynamic addition of another DS8000 volume and reboot
/dev/sda - internal SCSI disk
/dev/sdb - 1st DS8000 volume, seen by HBA 0
/dev/sdc - 2nd DS8000 volume, seen by HBA 0
/dev/sdd - new DS8000 volume, seen by HBA 0
/dev/sde - 1st DS8000 volume, seen by HBA 1
/dev/sdf - 2nd DS8000 volume, seen by HBA 1
/dev/sdg - new DS8000 volume, seen by HBA 1

Gaps in the LUN sequence


The QLogic HBA driver cannot deal with gaps in the LUN sequence. When it tries to discover
the attached volumes, it probes for the different LUNs, starting at LUN 0 and continuing until it
reaches the first LUN without a device behind it.

When assigning volumes to a Linux host with QLogic FC HBAs, make sure LUNs start at 0
and are in consecutive order. Otherwise the LUNs after a gap will not be discovered by the
host. Gaps in the sequence can occur when you assign volumes to a Linux host that are
already assigned to another server.

The Emulex HBA driver behaves differently: It always scans all LUNs up to 127.

15.4.4 Troubleshooting and monitoring


In this section we discuss topics related to troubleshooting and monitoring.


The /proc pseudo file system


The /proc pseudo file system is maintained by the Linux kernel and provides dynamic
information about the system. The directory /proc/scsi contains information about the
installed and attached SCSI devices.

The file /proc/scsi/scsi contains a list of all attached SCSI devices, including disk, tapes,
processors, and so on. Example 15-20 shows a sample /proc/scsi/scsi file.

Example 15-20 Sample /proc/scsi/scsi file


knox:~ # cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: IBM-ESXS Model: DTN036C1UCDY10F Rev: S25J
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi0 Channel: 00 Id: 08 Lun: 00
Vendor: IBM Model: 32P0032a S320 1 Rev: 1
Type: Processor ANSI SCSI revision: 02
Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: IBM Model: 2107900 Rev: .545
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi2 Channel: 00 Id: 00 Lun: 01
Vendor: IBM Model: 2107900 Rev: .545
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi2 Channel: 00 Id: 00 Lun: 02
Vendor: IBM Model: 2107900 Rev: .545
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi3 Channel: 00 Id: 00 Lun: 00
Vendor: IBM Model: 2107900 Rev: .545
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi3 Channel: 00 Id: 00 Lun: 01
Vendor: IBM Model: 2107900 Rev: .545
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi3 Channel: 00 Id: 00 Lun: 02
Vendor: IBM Model: 2107900 Rev: .545
Type: Direct-Access ANSI SCSI revision: 03

There is also an entry in /proc for each HBA, with driver and firmware levels, error counters,
and information about the attached devices. Example 15-21 shows the condensed content of
the entry for a QLogic Fibre Channel HBA.

Example 15-21 Sample /proc/scsi/qla2300/x


knox:~ # cat /proc/scsi/qla2300/2
QLogic PCI to Fibre Channel Host Adapter for ISP23xx:
Firmware version: 3.01.18, Driver version 6.05.00b9
Entry address = c1e00060
HBA: QLA2312 , Serial# H28468
Request Queue = 0x21f8000, Response Queue = 0x21e0000
Request Queue count= 128, Response Queue count= 512
.
.
Login retry count = 012
Commands retried with dropped frame(s) = 0

SCSI Device Information:


scsi-qla0-adapter-node=200000e08b0b941d;
scsi-qla0-adapter-port=210000e08b0b941d;
scsi-qla0-target-0=5005076300c39103;


SCSI LUN Information:


(Id:Lun)
( 0: 0): Total reqs 99545, Pending reqs 0, flags 0x0, 0:0:81,
( 0: 1): Total reqs 9673, Pending reqs 0, flags 0x0, 0:0:81,
( 0: 2): Total reqs 100914, Pending reqs 0, flags 0x0, 0:0:81,

Performance monitoring with iostat


The iostat command can be used to monitor the performance of all attached disks. It is shipped with every major Linux distribution, but not necessarily installed by default. It reads data provided by the kernel in /proc/stat and prints it in a human-readable format. See the man page of iostat for more details.
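For example, the following hedged invocation prints extended per-device statistics every 5 seconds for three intervals (option names may vary slightly between sysstat versions):

iostat -x 5 3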

The generic SCSI tools


The SUSE Linux Enterprise Server comes with a set of tools that allow low-level access to SCSI devices, called the sg tools. They talk to the SCSI devices through the generic SCSI layer, which is represented by the special device files /dev/sg0, /dev/sg1, and so on.

By default SLES 8 provides sg device files for up to 16 SCSI devices (/dev/sg0 through
/dev/sg15). Additional sg device files can be created using the command mknod. After creating
new sg devices you should change their group setting from root to disk. Example 15-22
shows the creation of /dev/sg16, which would be the first one to create.

Example 15-22 Creation of new device files for generic SCSI devices
mknod /dev/sg16 c 21 16
chgrp disk /dev/sg16

Useful sg tools are:


򐂰 sg_inq /dev/sgx prints SCSI Inquiry data, such as the volume serial number.
򐂰 sg_scan prints the /dev/sg → scsihost, channel, target, LUN mapping.
򐂰 sg_map prints the /dev/sd → /dev/sg mapping.
򐂰 sg_readcap prints the block size and capacity (in blocks) of the device.

򐂰 sginfo prints SCSI inquiry and mode page data; it also allows manipulating the mode pages.
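As a hedged illustration of how these tools can be combined (the sg device number refers to the device created in Example 15-22 and is purely illustrative):

sg_scan -i              # list generic SCSI devices with their inquiry strings
sg_map                  # show the /dev/sg to /dev/sd mapping
sg_inq /dev/sg16        # print inquiry data, including the volume serial number
sg_readcap /dev/sg16    # print block size and capacity of the device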

15.5 OpenVMS
DS8000 supports FC attachment of OpenVMS Alpha systems with operating system
Version 7.3 or later. For details regarding operating system versions and HBA types, see the
DS8000 Interoperability Matrix, available at:
http://www.ibm.com/servers/storage/disk/ds8000/interop.html

The support includes clustering and multiple paths (exploiting the OpenVMS built-in
multipathing). Boot support is available via Request for Price Quotations (RPQ).

15.5.1 FC port configuration


With early DS8000 code levels, the OpenVMS FC driver had some limitations in handling FC error recovery. The operating system could react to some situations with MountVerify conditions that are not recoverable. Affected processes could hang and eventually stop.


Instead of writing a special OpenVMS driver, it was decided to handle this in the DS8000 host adapter microcode. As a result, it became a general rule not to share DS8000 FC ports between OpenVMS and non-OpenVMS hosts.

Important: The DS8000 FC ports used by OpenVMS hosts must not be accessed by any
other operating system, not even accidentally. The OpenVMS hosts have to be defined for
access to these ports only, and it must be ensured that no foreign HBA (without definition
as an OpenVMS host) is seen by these ports. Conversely, an OpenVMS host must have
access only to the DS8000 ports configured for OpenVMS compatibility.

You must dedicate storage ports for only the OpenVMS host type. Multiple OpenVMS
systems can access the same port. Appropriate zoning must be enforced from the beginning.
Wrong access to storage ports used by OpenVMS hosts may clear the OpenVMS-specific
settings for these ports. This might remain undetected for a long time—until some failure
happens, and by then I/Os might be lost. It is worth mentioning that OpenVMS is the only
platform with such a restriction (usually, different open systems platforms can share the same
DS8000 FC adapters).

The restrictions listed in this section apply only if your DS8000 licensed machine code is at a version earlier than 5.0.4; with later versions, the restrictions are removed. Note that you can display the versions of the DS CLI, the DS Storage Manager, and the licensed machine code by using the DS CLI command ver -l.

15.5.2 Volume configuration


OpenVMS Fibre Channel devices have device names according to the schema: $1$DGA<n>

With the following elements:


򐂰 The first portion of the device name ($1$) is the allocation class (a decimal number in the
range 1–255). FC devices always have the allocation class 1.
򐂰 The following two letters encode the drivers, where the first letter denotes the device class
(D = disks, M = magnetic tapes) and the second letter the device type (K = SCSI, G =
Fibre Channel). So all Fibre Channel disk names contain the code DG.
򐂰 The third letter denotes the adapter channel (from range A to Z). Fibre Channel devices
always have the channel identifier A.
򐂰 The number <n> is the User-Defined ID (UDID), a number from the range 0–32767,
which is provided by the storage system in response to an OpenVMS-special SCSI inquiry
command (from the range of command codes reserved by the SCSI standard for vendor’s
private use).

OpenVMS does not identify a Fibre Channel disk by its path or SCSI target/LUN like other
operating systems. It relies on the UDID. Although OpenVMS uses the WWID to control all FC
paths to a disk, a Fibre Channel disk that does not provide this additional UDID cannot be
recognized by the operating system.

In the DS8000, the volume nickname acts as the UDID for OpenVMS hosts. If the character
string of the volume nickname evaluates to an integer in the range 0–32767, then this integer
is replied as the answer when an OpenVMS host asks for the UDID.

The DS CLI command chfbvol -name 21 1001 assigns the OpenVMS UDID 21 to the
DS8000 volume 1001 (LSS 10, volume 01). Thus the DS8000 volume 1001 will appear as an
OpenVMS device with the name $1$DGA21 or $1$GGA21. The DS CLI command lshostvol
shows the DS8000 volumes with their corresponding OpenVMS device names.
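As a brief sketch, assigning UDIDs to two volumes could look like the following; the volume IDs and UDID values are examples only, and the -dev storage image parameter may also be required depending on your DS CLI session setup:

dscli> chfbvol -name 21 1001
dscli> chfbvol -name 22 1002

Volume 1001 then appears on the OpenVMS host as $1$DGA21, and volume 1002 as $1$DGA22.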


The DS management utilities do not enforce UDID rules. They accept incorrect values that
are not valid for OpenVMS. It is possible to assign the same UDID value to multiple DS8000
volumes. However, because the UDID is in fact the device ID seen by the operating system,
several consistency rules have to be fulfilled. These rules are described in detail in the
OpenVMS operating system documentation; see HP Guidelines for OpenVMS Cluster
Configurations at http://h71000.www7.hp.com/doc/72final/6318/6318pro.html:
򐂰 Every FC volume must have a UDID that is unique throughout the OpenVMS cluster that
accesses the volume. The same UDID may be used in a different cluster or for a different
stand-alone host.
򐂰 If the volume is planned for MSCP serving, then the UDID range is limited to 0–9999 (by
operating system restrictions in the MSCP code).

OpenVMS system administrators tend to use elaborate schemes for assigning UDIDs, coding
several hints about physical configuration into this logical ID, for instance, odd/even values or
reserved ranges to distinguish between multiple data centers, storage systems, or disk
groups. Thus they must be able to provide these numbers without additional restrictions
imposed by the storage system. In the DS8000, UDID is implemented with full flexibility, which
leaves the responsibility about restrictions to the user.

In Example 15-23, we configured a DS8000 volume with the UDID 8275 for OpenVMS
attachment. This gives the OpenVMS Fibre Channel disk device $1$DGA8275. You see the
output from the OpenVMS command show device/full $1$DGA8275. The OpenVMS host
has two Fibre Channel HBAs with names PGA0 and PGB0. Because each HBA accesses
two DS8000 ports, we have four I/O paths.

Example 15-23 OpenVMS volume configuration


$ show device/full $1$DGA8275:

Disk $1$DGA8275: (NFTE18), device type IBM 2107900, is online, file-oriented


device, shareable, device has multiple I/O paths, served to cluster via MSCP
Server, error logging is enabled.

Error count 0 Operations completed 2


Owner process "" Owner UIC [SYSTEM]
Owner process ID 00000000 Dev Prot S:RWPL,O:RWPL,G:R,W
Reference count 0 Default buffer size 512
Current preferred CPU Id 9 Fastpath 1
Host name "NFTE18" Host type, avail Compaq AlphaServer GS60 6/525, yes
Alternate host name "NFTE17" Alt. type, avail Compaq AlphaServer GS60 6/525, yes
Allocation class 1

I/O paths to device 5


Path MSCP (NFTE17), primary path.
Error count 0 Operations completed 0
Path PGA0.5005-0763-0319-8324 (NFTE18), current path.
Error count 0 Operations completed 1
Path PGA0.5005-0763-031B-C324 (NFTE18).
Error count 0 Operations completed 1
Path PGB0.5005-0763-0310-8324 (NFTE18).
Error count 0 Operations completed 0
Path PGB0.5005-0763-0314-C324 (NFTE18).
Error count 0 Operations completed 0

The DS CLI command lshostvol displays the mapping of DS8000 volumes to host system
device names. More details regarding this command can be found in the IBM System Storage
DS8000: Command-Line Interface User's Guide, SC26-7916.


15.5.3 Command Console LUN


HP StorageWorks FC controllers use LUN 0 as Command Console LUN (CCL) for
exchanging commands and information with in-band management tools. This concept is
similar to the Access LUN of IBM System Storage DS4000 (FAStT) controllers.

Because the OpenVMS FC driver has been written with StorageWorks controllers in mind,
OpenVMS always considers LUN 0 as CCL, never presenting this LUN as a disk device. On
HP StorageWorks HSG and HSV controllers, you cannot assign LUN 0 to a volume.

The DS8000 assigns LUN numbers per host using the lowest available number. The first
volume that is assigned to a host becomes this host’s LUN 0; the next volume is LUN 1, and
so on.

Because OpenVMS considers LUN 0 as CCL, the first DS8000 volume assigned to the host
cannot be used even when a correct UDID has been defined. So we recommend creating the
first OpenVMS volume with a minimum size as a dummy volume for use as the CCL. Multiple
OpenVMS hosts, even in different clusters, that access the same storage system can share
the same volume as LUN 0, because there will be no other activity to this volume. In large
configurations with more than 256 volumes per OpenVMS host or cluster, it might be
necessary to introduce another dummy volume (when LUN numbering starts again with 0).

Defining a UDID for the CCL is not required by the OpenVMS operating system. However, the OpenVMS documentation suggests that you always define a unique UDID, because this identifier causes the creation of a CCL device visible to the OpenVMS show device command and other tools.
Although an OpenVMS host cannot use the LUN for any other purpose, you can display the
multiple paths to the storage device, and diagnose failed paths. Fibre Channel CCL devices
have the OpenVMS device type GG.

In Example 15-24, the DS8000 volume with volume ID 100E is configured as an OpenVMS
device with UDID 9998. Because this was the first volume in the volume group, it became
LUN 0 and thus the CCL. Please note that the volume WWID, as displayed by the show
device/full command, contains the DS8000 World-Wide Node ID (6005-0763-03FF-C324)
and the DS8000 volume number (100E).

Example 15-24 OpenVMS command console LUN


$ show device/full $1$GGA9998:

Device $1$GGA9998:, device type Generic SCSI device, is online, shareable,


device has multiple I/O paths.

Error count 0 Operations completed 1


Owner process "" Owner UIC [SYSTEM]
Owner process ID 00000000 Dev Prot S:RWPL,O:RWPL,G:RWPL,W:RWPL
Reference count 0 Default buffer size 0
WWID 01000010:6005-0763-03FF-C324-0000-0000-0000-100E

I/O paths to device 4


Path PGA0.5005-0763-0319-8324 (NFTE18), primary path, current path.
Error count 0 Operations completed 1
Path PGA0.5005-0763-031B-C324 (NFTE18).
Error count 0 Operations completed 0
Path PGB0.5005-0763-0310-8324 (NFTE18).
Error count 0 Operations completed 0
Path PGB0.5005-0763-0314-C324 (NFTE18).
Error count 0 Operations completed 0


The DS CLI command chvolgrp provides the flag -lun which can be used to control which
volume becomes LUN 0.

15.5.4 OpenVMS volume shadowing


OpenVMS disks can be combined in host-based mirror sets, called OpenVMS shadow sets.
This functionality is often used to build disaster-tolerant OpenVMS clusters.

The OpenVMS shadow driver has been designed for disks according to DEC’s Digital
Storage Architecture (DSA). This architecture, forward-looking in the 1980s, includes some
requirements that are handled by today’s SCSI/FC devices with other approaches. Two such
things are the forced error indicator and the atomic revector operation for bad-block
replacement.

When a DSA controller detects an unrecoverable media error, a spare block is revectored to
this logical block number, and the contents of the block are marked with a forced error. This
causes subsequent read operations to fail, which is the signal to the shadow driver to execute
a repair operation using data from another copy.

However, there is no forced error indicator in the SCSI architecture, and the revector
operation is nonatomic. As a substitute, the OpenVMS shadow driver exploits the SCSI
commands READ LONG (READL) and WRITE LONG (WRITEL), optionally supported by
some SCSI devices. These I/O functions allow data blocks to be read and written together
with their disk device error correction code (ECC). If the SCSI device supports
READL/WRITEL, OpenVMS shadowing emulates the DSA forced error with an intentionally
incorrect ECC. For details see Scott H. Davis, Design of VMS Volume Shadowing Phase II —
Host-based Shadowing, Digital Technical Journal Vol. 3 No. 3, Summer 1991, archived at:
http://research.compaq.com/wrl/DECarchives/DTJ/DTJ301/DTJ301SC.TXT

The DS8000 provides volumes as SCSI-3 devices and thus does not implement a forced
error indicator. It also does not support the READL and WRITEL command set for data
integrity reasons.

Usually the OpenVMS SCSI Port Driver recognizes if a device supports READL/WRITEL, and
the driver sets the NOFE (no forced error) bit in the Unit Control Block. You can verify this
setting with the SDA utility: After starting the utility with the analyze/system command, enter
the show device command at the SDA prompt. Then the NOFE flag should be shown in the
device’s characteristics.

The OpenVMS command for mounting shadow sets provides the qualifier /override=no_forced_error to support non-DSA devices. To avoid possible problems (performance loss, unexpected error counts, or even removal of members from the shadow set), we recommend that you apply this qualifier.
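As an illustration only, mounting a two-member shadow set of DS8000 volumes with this qualifier might look like the following; the device names, shadow set name, and volume label are hypothetical, and the exact qualifiers should be verified against the OpenVMS documentation:

$ MOUNT/SYSTEM DSA21: /SHADOW=($1$DGA21:,$1$DGA22:) /OVERRIDE=NO_FORCED_ERROR DATA01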

15.6 VMware
The DS8000 currently supports VMware's high-end virtualization product, ESX Server, starting with Version 2.5. The supported guest operating systems are Windows 2000, Windows Server 2003, SUSE Linux SLES 8, and Red Hat Enterprise Linux 2.1 and 3.0. This information is likely to change, so check the Interoperability Matrix for complete, up-to-date information: http://www.ibm.com/servers/storage/disk/ds8000/interop.html

A great deal of useful information is available in the IBM System Storage DS8000 Host
Systems Attachment Guide, SC26-7917. This section is not intended to duplicate that publication, but rather it provides more information about optimizing your VMware
environment as well as a step-by-step guide to setting up ESX Server with the DS8000.

Other VMware products, such as GSX Server and Workstation, are not intended for the datacenter-class environments where the DS8000 is typically used. Certain other products, such as VMotion and VirtualCenter, may be supported on a case-by-case basis. Before using the techniques described below, check with IBM and the latest Interoperability Matrix for the supportability of these techniques.

15.6.1 What is new in VMware ESX Server 2.5


The complete list of the new features in VMware ESX Server 2.5 is available from the VMware Web site; here we focus on the storage-related features. One significant enhancement
is related to direct SAN access (also known as raw LUN access) from the virtual machines.
VMware introduced this support previously, but the capabilities have been improved with the
addition of Raw Device Mappings (RDMs). Using RDMs can improve your virtual machine
performance and reduce overhead.

ESX Server 2.5 also introduced support for booting directly from the SAN. This feature is not
yet fully supported with the DS8000, though support may be available via RPQ. See the
DS8000 Interoperability Matrix for the latest information on support for SAN boot.

15.6.2 VMware disk architecture


Each of the virtual machines (VMs) can access one or more virtual disks (VM Disk0, Disk1,
and so on). The virtual disks can either be virtual machine disk (.dsk) files stored on a
VMFS-2 volume, or they can be raw disks from the SAN.

Figure 15-6 The logical disk structure of ESX Server

In Figure 15-6, VM1, VM2, and VM3 use .dsk files stored on a SAN disk, while VM4 directly
uses a raw SAN disk. VMs can also use the physical server’s local storage (as the Console OS does), but these disks tend not to be as fast or reliable as SAN disks. Both the virtual machine .dsk files and the raw disks represent what is seen as physical disks by the guest OS.

15.6.3 VMware setup and configuration


These are the high-level steps that need to be done in order to use DS8000 disks with your
virtual machines.

Assigning LUNs to the ESX Server machine


Assign the LUNs that you want your virtual machines to use to your ESX Server machine’s
HBAs. One method of doing this volume assignment is to use the DS CLI. When making the
host connections, it is important to use the flags -addrdiscovery lunpolling, -lbs 512, and
-profile VMware. Another option is to use the -hosttype VMware parameter. When making
the volume groups, you should use the parameter -type scsimap256.
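A hedged DS CLI sketch of this assignment follows; the volume IDs, WWPN, and object names are placeholders, V11 stands for the volume group ID returned by mkvolgrp, and the exact syntax (including any required -dev storage image parameter) should be verified against the DS CLI documentation:

dscli> mkvolgrp -type scsimap256 -volume 1100-1103 ESX_vg1
dscli> mkhostconnect -wwname 210000E08B012345 -hosttype VMware -volgrp V11 esx_host1

Here the -hosttype VMware parameter takes the place of specifying the individual -addrdiscovery, -lbs, and -profile flags.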

As with other operating systems, you should have multiple paths from your server to the
DS8000 to improve availability and reliability. Normally, the LUNs would show up as multiple
separate devices, but VMware contains native multipathing software that automatically
conceals the redundant paths. Therefore, multipathing software may not be needed on your
guest operating systems.

As with other operating systems, you should also use persistent binding. See the IBM System
Storage DS8000 Host Systems Attachment Guide, SC26-7917, on why persistent binding is
important and how to configure it for VMware.

Figure 15-7 Storage Management Failover Paths


After the LUNs are assigned properly, you will be able to see them from the VMware
administration console, under Options → Storage Management → Failover Paths. You
may have to tell VMware to refresh its disks by selecting Rescan SAN in the upper right-hand
corner. Figure 15-7 shows the LUNs assigned to the ESX Server and the paths to each of the
LUNs.

Assigning LUNs to the guest operating system


Now that the ESX Server machine can see the DS8000 LUNs, they can be presented to the
guest operating system in one of three ways.

Note: The virtual machine should be powered off before adding hardware.

To select the appropriate type, select the appropriate VM, choose the Hardware tab and then
Add Device.
򐂰 Option 1: Formatting these disks with the VMFS: This option maximizes the virtualization
features that are possible, and allows the guest operating system to use the special
features of VMFS volumes. However, this mode has the most overhead of the three
options.
– After clicking Add Device, choose Hard Disk and then select the Blank type.
– Select the options for the new hard disk that are appropriate for your environment.
򐂰 Option 2: Passing the disk through to the guest OS as a raw disk in physical compatibility
mode: No further virtualization occurs; the OS will write its own file system onto that disk
directly, just as it would in a standalone environment, without an underlying VMFS
structure. I/Os pass through the virtualization layer with minimal modification. This option
requires the least overhead.
– After clicking Add Device, choose Hard Disk then select the System LUN/Disk type.
– On the next panel, choose the compatibility mode of Physical.
– Select the options for the new hard disk that are appropriate for your environment.
򐂰 Option 3: Passing the disk through to the guest OS as a raw disk in virtual compatibility
mode: This mode allows the VM to take advantage of disk modes and other features,
including redo logs. Please see the VMware documentation on the four different disk
modes: persistent, nonpersistent, undoable, and append.
– After clicking Add Device, choose Hard Disk then select the System LUN/Disk type.
– On the next screen, choose the compatibility mode of Virtual.
– Select the options for the new hard disk that are appropriate for your environment.

In Figure 15-8 on page 324, Virtual Disk (SCSI 0:1) is a VMware virtual disk, while Virtual
Disk (SCSI 0:2) is a physical SAN LUN and Virtual Disk (SCSI 0:3) is a virtual SAN LUN. This
VM also contains a local virtual disk, Virtual Disk (SCSI 0:0), which has the guest operating
system installed.


Figure 15-8 Virtual Disk device types

After powering up the server, notice how in Figure 15-9, the raw disk in Physical compatibility
mode shows up as an IBM 2107900 device, while the other three disks (the local disk, the
SAN disk formatted with VMFS, and the SAN disk in virtual compatibility mode) all show up
as VMware virtual disks.

Figure 15-9 Device Management by the guest operating system


Now the disks can be formatted and used as with any regular disks. System LUNs in physical
compatibility mode have the additional advantage that SCSI commands pass down to the
hardware with minimal modifications. As a result, system administrators can use the DS CLI
command lshostvol to map the virtual machine disks to DS8000 disks.

15.7 Sun Solaris


As with the previous models, the IBM System Storage DS8000 series continues to provide
extensive support for Sun operating systems. Currently, the DS8000 supports Solaris 8, 9,
and 10 on a variety of platforms. It also supports VERITAS Cluster Server and Sun Cluster.
The Interoperability Matrix provides complete information on supported configurations,
including information about supported host bus adapters, SAN switches, and multipathing
technologies: http://www.ibm.com/servers/storage/disk/ds8000/interop.html

A great deal of useful information is available in the IBM System Storage DS8000 Host
Systems Attachment Guide, SC26-7917. This section is not intended to duplicate that
publication, but rather it provides more information about optimizing your Sun Solaris
environment as well as a step-by-step guide on using Solaris with the DS8000.

15.7.1 Locating the WWPNs of your HBAs


Before you can assign LUNs to your server, you will need to locate the WWPNs of the
server’s HBAs. One popular method for locating the WWPNs is to scan the
/var/adm/messages file. Often, the WWPN will only show up in the file after a reboot. Also, the
string to search for depends on the type of HBA that you have. Specific details are available in
the IBM System Storage DS8000 Host Systems Attachment Guide, SC26-7917.

In many cases, you will also be able to use the prtconf command to list the WWPNs, as
shown in Example 15-25.

Example 15-25 Listing the WWPNs


# prtconf -vp | grep port-wwn
port-wwn: 21000003.ba43fdc1
port-wwn: 210000e0.8b099408
port-wwn: 210000e0.8b0995f3
port-wwn: 210000e0.8b096cf6
port-wwn: 210000e0.8b098f08

15.7.2 Solaris attachment to DS8000


Solaris uses the LUN polling method in order to discover DS8000 LUNs. For this reason,
each Solaris host is limited to 256 LUNs from the DS8000. LUNs can be assigned using any
of the supported DS8000 user interfaces, including the DS Command-Line Interface (DS
CLI), the DS Storage Manager (DS SM), and the DS Open Application Programming
Interface (DS Open API). When using the CLI, you should make the host connections using
the flags -addrdiscovery lunpolling, -lbs 512, and -profile “SUN - Solaris”. Another
option is to use the -hosttype Sun parameter. When making the volume groups, you should
use the parameter -type scsimap256.

As with other operating systems, you should use persistent binding with Solaris and DS8000.
If you do not use persistent binding, it is possible that Solaris will assign a different SCSI
device identifier (SCSI ID) than the one it had been using previously. This can happen if a new
device is added to the SAN, for example. In this case, you will have to re-configure your
applications or your operating system.


The methods of enabling persistent binding differ depending on your host bus adapter. The
IBM System Storage DS8000 Host Systems Attachment Guide, SC26-7917, contains the
recommended HBA settings for each supported type.

15.7.3 Multipathing in Solaris


As with other operating systems, you should use multiple paths between the DS8000 and
your Solaris server. Multiple paths help to maximize the reliability and performance of your
operating environment. The DS8000 supports three different multipathing technologies on
Solaris.

First, IBM provides the System Storage Multipath Subsystem Device Driver (SDD) as a part
of the DS8000 at no extra charge. Next, Sun Solaris contains native multipathing software
called the StorEdge Traffic Manager Software (STMS). STMS is commonly known as MPxIO
(multiplexed I/O) in the industry, and the remainder of this section will refer to this technology
as MPxIO. Finally, IBM supports VERITAS Volume Manager (VxVM) Dynamic Multipathing
(DMP), a part of the VERITAS Storage Foundation suite.

The multipathing technology that you should use depends a great deal on your operating
environment and, of course, your business requirements. There are some limitations
depending on your operating system version, your host bus adapters, and whether you use
clustering. Details are available in the IBM System Storage DS8000 Host Systems
Attachment Guide, SC26-7917.

One difference between the multipathing technologies is in whether they suppress the
redundant paths to the storage. MPxIO and DMP both suppress all paths to the storage
except for one, and the device appears to the application as a single-path device. SDD, on the
other hand, allows the original paths to be seen, but creates its own virtual device (called a
vpath) for applications to use.

If you assign LUNs to your server before you install multipathing software, you can see each
of the LUNs show up as two or more devices, depending on how many paths you have. In
Example 15-26, the iostat -nE command shows that the volume 75207814206 appears
twice—once as c2t1d1 on the first HBA, and once as c3t1d1 on the second HBA.

Example 15-26 Device listing without multipath software


# iostat -nE
c2t1d1 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: IBM Product: 2107900 Revision: .212 Serial No: 75207814206
Size: 10.74GB <10737418240 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c2t1d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: IBM Product: 2107900 Revision: .212 Serial No: 75207814205
Size: 10.74GB <10737418240 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c3t1d1 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: IBM Product: 2107900 Revision: .212 Serial No: 75207814206
Size: 10.74GB <10737418240 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c3t1d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: IBM Product: 2107900 Revision: .212 Serial No: 75207814205
Size: 10.74GB <10737418240 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0

IBM System Storage Multipath Subsystem Device Driver (SDD)


SDD is available from your local IBM support team, or it can be downloaded from the Internet.
Both the SDD software and supporting documentation are available from this IBM Web site:
http://www.ibm.com/servers/storage/support/software/sdd/index.html

After the SDD software is installed, you can see that the paths have been grouped into virtual
vpath devices. Example 15-27 shows the output of the showvpath command.

Example 15-27 Output of the showvpath command


# /opt/IBMsdd/bin/showvpath
vpath1: Serial Number : 75207814206
c2t1d1s0 /devices/pci@6,4000/fibre-channel@2/sd@1,1:a,raw
c3t1d1s0 /devices/pci@6,2000/fibre-channel@1/sd@1,1:a,raw

vpath2: Serial Number : 75207814205


c2t1d0s0 /devices/pci@6,4000/fibre-channel@2/sd@1,0:a,raw
c3t1d0s0 /devices/pci@6,2000/fibre-channel@1/sd@1,0:a,raw

For each device, the operating system creates a node in the /dev/dsk and /dev/rdsk
directories. After SDD is installed, you can see these new vpaths by listing the contents of
those directories. Note that with SDD, the old paths are not suppressed. Instead, new vpath
devices show up as /dev/rdsk/vpath1a, for example. When creating your volumes and file
systems, be sure to use the vpath device instead of the original device.

SDD also offers some parameters that you can tune for your environment. Specifically, SDD
offers three different load balancing schemes:
򐂰 Failover
– No load balancing
– Second path is used only if the preferred path fails
򐂰 Round robin
– Paths to use are chosen at random (but different paths than most recent I/O)
– If only two paths, then they alternate
򐂰 Load balancing
– Path chosen based on estimated path load
– Default policy

The policy can be set through the use of the datapath set device policy command.
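For example, the following hedged commands display the current configuration and switch a device to the round-robin policy (the device number 0 is illustrative):

datapath query device
datapath set device 0 policy rr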

StorEdge Traffic Manager Software (MPxIO)


On Solaris 8 and Solaris 9 systems, MPxIO is available as an operating system patch. You
must install these patches in order to use MPxIO. On Solaris 10 systems, MPxIO is installed
by default. In all cases, it needs to be enabled and configured before it can be used with the
DS8000.

Before you enable MPxIO, you will want to configure your host bus adapters. Issue the cfgadm
-la command to see the current state of your adapters. Example 15-28 on page 328 shows
two adapters, c3 and c4, of type fc.


Example 15-28 cfgadm -la command output


# cfgadm -la
Ap_Id Type Receptacle Occupant Condition
c3 fc connected unconfigured unknown
c4 fc connected unconfigured unknown

Note how the command reports that both adapters are unconfigured. To configure the adapters, issue the command cfgadm -c configure cX (where X is the adapter number, 3 and 4 in this case). Now, both adapters should show up as configured.

Note: The cfgadm -c configure command is not necessary in Solaris 10.

To configure your MPxIO, you will need to first enable it by editing the
/kernel/drv/scsi_vhci.conf file. For Solaris 10, you will need to edit the
/kernel/drv/fp.conf file instead. Find and change the mpxio-disable parameter to no:
mpxio-disable="no";

Next, add the following stanza to supply the vendor identification (VID) and product
identification (PID) information to MPxIO in the /kernel/drv/scsi_vhci.conf file:
device-type-scsi-options-list =
"IBM 2107900", "symmetric-option";
symmetric-option = 0x1000000;

The vendor string must be exactly 8 bytes, so you must type IBM followed by 5 spaces. Finally,
the system must be rebooted. After the reboot, MPxIO will be ready to be used.

For more information about MPxIO, including all the MPxIO commands and tuning
parameters, see the Sun Web site: http://www.sun.com/storage/software/.

VERITAS Volume Manager Dynamic Multipathing (DMP)


Before using VERITAS Volume Manager (VxVM) DMP, a part of the VERITAS Storage
Foundation suite, you should download and install the latest Maintenance Pack. You will also
need to download and install the Array Support Library (ASL) for the DS8000. Both of these
packages are available from: http://support.veritas.com/.

During device discovery, the vxconfigd daemon compares the serial numbers of the different
devices. If two devices have the same serial number, then they are the same LUN, and DMP
will combine the paths. Listing the contents of the /dev/vx/rdmp directory will show only one
set of devices.

The vxdisk path command also demonstrates DMP’s path suppression capabilities. In
Example 15-29, devices c6t1d0s2 and c7t2d0s2 are combined into c6t1d0s2.

Example 15-29 vxdisk path command output


# vxdisk path
SUBPATH DANAME DMNAME GROUP STATE
c6t1d0s2 c6t1d0s2 Ethan01 Ethan ENABLED
c7t2d0s2 c6t1d0s2 Ethan01 Ethan ENABLED
c6t1d1s2 c7t2d1s2 Ethan02 Ethan ENABLED
c7t2d1s2 c7t2d1s2 Ethan02 Ethan ENABLED
c6t1d2s2 c7t2d2s2 Ethan03 Ethan ENABLED
c7t2d2s2 c7t2d2s2 Ethan03 Ethan ENABLED
c6t1d3s2 c7t2d3s2 Ethan04 Ethan ENABLED
c7t2d3s2 c7t2d3s2 Ethan04 Ethan ENABLED


Now, you create volumes using the device name listed under the DANAME column. In
Figure 15-10, a volume is created using four disks, even though there are actually eight paths.

Figure 15-10 VERITAS DMP disk view

As with other multipathing software, DMP provides a number of parameters that you can tune
in order to maximize the performance and availability in your environment. For example, it is
possible to set a load balancing policy to dictate how the I/O should be shared between the
different paths. It is also possible to select which paths get used in which order in case of a
failure.

Complete details about the features and capabilities of DMP can be found on the VERITAS
Web site: http://www.veritas.com.

15.8 HP-UX
The DS8000 attachment is supported with HP-UX Version 11i or later. To provide a fault-tolerant connection to the DS8000, the HP multipathing software PVLINKS or the IBM Multipath Subsystem Device Driver (SDD) can be used.

This section provides a basic step-by-step configuration for attaching an HP host, up to the point where the host is able to run I/O to the DS8000 devices. It is not intended to repeat the information that is contained in other publications.


15.8.1 Available documentation


For the latest available supported HP-UX configuration and required software patches, refer
to the DS8000 Interoperability Matrix at:
http://www.ibm.com/servers/storage/disk/ds8000/interop.html.

For preparing the host to attach the DS8000, refer to the Infocenter at:
http://publib.boulder.ibm.com/infocenter/ds8000ic/index.jsp?topic=/com.ibm.storage
.ess.console.base.help.doc/f2c_attchnghpux_1tlxvv.html and select Configuring →
Attaching Hosts → Hewlett-Packard Server (HP-UX) host attachment.

For Installation of the SDD refer to IBM System Storage Multipath Subsystem Device Driver
User’s Guide, SC30-4131. The User’s Guide is available at the download page for each
individual SDD Operating System Version at:
http://www.ibm.com/support/dlsearch.wss?rs=540&tc=ST52G7&dc=D430.

15.8.2 DS8000-specific software depots


For HP-UX there are two additional DS8000-specific software depots available:
򐂰 IBM System Storage Multipath Subsystem Device Driver (SDD)
򐂰 IBM System Storage DS8000 Command-Line Interface (DS CLI)

SDD is a multipathing software with policy-based load balancing on all available paths to the
DS8000. For automation purposes of the Copy Services, storage management, and storage
allocation, the DS CLI should be installed on the host.

For the latest version of SDD and the DS CLI on your host, you can either use the version that
is delivered with the DS8000 Microcode bundle or you can download the latest available SDD
version from: http://www.ibm.com/support/dlsearch.wss?rs=540&tc=ST52G7&dc=D430.

The latest available ISO-image for the DS CLI CD can be downloaded from:
ftp://ftp.software.ibm.com/storage/ds8000/updates/CLI/.

15.8.3 Configuring the DS8000 on a HP-UX host


To configure the server on the DS8000, a host connection has to be defined using the DS Storage Manager GUI or the DS CLI, with the HP host using the host type HP. This host type automatically configures the DS8000 to present its volumes in the HP-UX preferred method. The DS8000 volume group needs to be specified with the access mode scsimask.

After the volumes on the DS8000 have been configured, install the SDD, then connect your host to the fabric. Once the host is configured in the fabric, discover the devices using the ioscan command. Example 15-30 on page 331 shows that the DS8000 devices have been discovered successfully, but the devices cannot be used yet, as no special device files are available.


Example 15-30 Discovered DS8000 devices without a special device file


# ioscan -fnC disk

Class I H/W Path Driver S/W State H/W Type Description


===========================================================================
disk 0 0/0/2/0.0.0 sdisk CLAIMED DEVICE SEAGATE ST318203LC
/dev/dsk/c2t0d0 /dev/rdsk/c2t0d0
disk 1 0/0/2/1.2.0 sdisk CLAIMED DEVICE HP DVD-ROM 304
/dev/dsk/c3t2d0 /dev/rdsk/c3t2d0
disk 350 0/3/0/0.1.25.0.34.0.5 sdisk CLAIMED DEVICE IBM 2107900
disk 351 0/3/0/0.1.25.0.34.0.6 sdisk CLAIMED DEVICE IBM 2107900
disk 354 0/6/0/0.1.24.0.34.0.5 sdisk CLAIMED DEVICE IBM 2107900
disk 355 0/6/0/0.1.24.0.34.0.6 sdisk CLAIMED DEVICE IBM 2107900

To create the missing special device files, there are two options: the first is a reboot of the host, which is disruptive; the alternative is to run the command insf -eC disk, which reinstalls the special device files for all devices of the class disk. After creating the special device files, the ioscan output should look like Example 15-31.

Example 15-31 Discovered DS8000 devices with a special device file


ioscan -fnC disk
Class I H/W Path Driver S/W State H/W Type Description
===========================================================================
disk 0 0/0/2/0.0.0 sdisk CLAIMED DEVICE SEAGATE ST318203LC
/dev/dsk/c2t0d0 /dev/rdsk/c2t0d0
disk 1 0/0/2/1.2.0 sdisk CLAIMED DEVICE HP DVD-ROM 304
/dev/dsk/c3t2d0 /dev/rdsk/c3t2d0
disk 350 0/3/0/0.1.25.0.34.0.5 sdisk CLAIMED DEVICE IBM 2107900
/dev/dsk/c36t0d5 /dev/rdsk/c36t0d5
disk 351 0/3/0/0.1.25.0.34.0.6 sdisk CLAIMED DEVICE IBM 2107900
/dev/dsk/c36t0d6 /dev/rdsk/c36t0d6
disk 354 0/6/0/0.1.24.0.34.0.5 sdisk CLAIMED DEVICE IBM 2107900
/dev/dsk/c38t0d5 /dev/rdsk/c38t0d5
disk 355 0/6/0/0.1.24.0.34.0.6 sdisk CLAIMED DEVICE IBM 2107900
/dev/dsk/c38t0d6 /dev/rdsk/c38t0d6

Once the volumes are visible, as in Example 15-31, volume groups (VGs), logical volumes, and file systems can be created. If you have multiple paths to your DS8000, note that you have to use the /dev/dsk/vpathX devices for creating the VGs.
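A minimal sketch of creating a volume group on a vpath device follows; the vpath number, volume group name, minor number, and logical volume size are illustrative, and on a host without SDD the corresponding /dev/dsk/cXtXdX and /dev/rdsk/cXtXdX devices would be used instead:

# Prepare the physical volume on the raw vpath device
pvcreate /dev/rdsk/vpath11

# Create the volume group directory and group file (the minor number must be unique)
mkdir /dev/vgds8k
mknod /dev/vgds8k/group c 64 0x010000

# Create the volume group, a 1024 MB logical volume, and a VxFS file system
vgcreate /dev/vgds8k /dev/dsk/vpath11
lvcreate -L 1024 -n lvol1 /dev/vgds8k
newfs -F vxfs /dev/vgds8k/rlvol1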

15.8.4 Multipathing
The IBM Multipath Subsystem Device Driver (SDD) is multipathing software that is capable of policy-based load balancing on all available paths to the DS8000. The load balancing is a major advantage.

PVLINKS is multipathing software that is built into the HP-UX LVM. It only performs a path failover to an alternate path once the primary path is no longer available. In a poorly designed failover configuration this can produce a performance bottleneck, because all devices are accessed through only one adapter while the additional adapters remain idle.

Take care that, once SDD is installed, the /dev/dsk/vpathX devices are the ones used to create VGs.


If you have installed the SDD on an existing machine and you want to migrate your devices to
become vpath devices, use the command hd2vp, which will convert your volume group to
access the vpath devices instead of the /dev/dsk/cXtXdX devices.

Example 15-32 shows the output of the DS CLI command lshostvol. This command is an
easy way of displaying the relationship between disk device files (paths to the DS8000), the
configured DS8000 LUN serial number, and the assigned vpath device.

Example 15-32 dscli command lshostvol


dscli> lshostvol
Date/Time: November 18, 2005 7:01:17 PM GMT IBM DSCLI Version: 5.0.4.140
Disk Name Volume Id Vpath Name
================================================
c38t0d5,c36t0d5 IBM.2107-7503461/1105 vpath11
c38t0d6,c36t0d6 IBM.2107-7503461/1106 vpath10
c38t0d7,c36t0d7 IBM.2107-7503461/1107 vpath9
c38t1d0,c36t1d0 IBM.2107-7503461/1108 vpath8

For the support of MC/ServiceGuard with SDD, refer to the latest version of the DS8000
Interoperability Matrix.

SDD troubleshooting
When all DS8000 volumes are visible after claiming them with ioscan, but are not configured
by the SDD, you can run the command cfgvpath -r to perform a dynamic re-configuration of
all SDD devices.
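
A minimal troubleshooting sequence, assuming the volumes are already assigned to the host,
could look like the following sketch:

# Rescan the hardware and claim the DS8000 devices
ioscan -fnC disk
# Create any missing special device files without a reboot
insf -eC disk
# Dynamically reconfigure the SDD vpath devices
cfgvpath -r
# Verify that all paths are now configured under their vpath devices
datapath query device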

Link error handling with HP-UX

If a Fibre Channel link to the DS8000 fails, SDD automatically takes the path offline and
brings it back online once the link is established again. Example 15-33 shows the messages
that SDD posts to the syslog when a link goes away and comes back.

Example 15-33 Sample syslog.log entries for the SDD link failure events
Nov 10 17:49:27 dwarf vmunix: WARNING: VPATH_EVENT: device = vpath8 path = 0 offline
Nov 10 17:49:27 dwarf vmunix: WARNING: VPATH_EVENT: device = vpath9 path = 0 offline
Nov 10 17:49:27 dwarf vmunix: WARNING: VPATH_EVENT: device = vpath10 path = 0 offline
Nov 10 17:50:15 dwarf vmunix: WARNING: VPATH_EVENT: device = vpath11 path = 0 offline

.....

Nov 10 17:56:12 dwarf vmunix: NOTICE: VPATH_EVENT: device = vpath9 path = 0 online
Nov 10 17:56:12 dwarf vmunix: NOTICE: VPATH_EVENT: device = vpath8 path = 0 online
Nov 10 17:56:12 dwarf vmunix: NOTICE: VPATH_EVENT: device = vpath10 path = 0 online
Nov 10 17:56:12 dwarf vmunix: NOTICE: VPATH_EVENT: device = vpath11 path = 0 online



Chapter 16. System z considerations


This chapter discusses the specifics for attaching the DS8000 to System z hosts. The
following topics are covered:
򐂰 Connectivity considerations
򐂰 Operating systems prerequisites and enhancements
򐂰 z/OS considerations
򐂰 z/VM considerations
򐂰 VSE/ESA and z/VSE considerations


16.1 Connectivity considerations


The DS8000 storage unit connects to System z hosts via ESCON and FICON channels, with
the addition of Fibre Channel Protocol (FCP) connectivity for Linux for System z hosts.

ESCON
For optimum availability, spread the ESCON host adapters across all I/O enclosures. For
good performance, and depending on your workload characteristics, use at least eight
ESCON ports on four ESCON host adapters in the storage unit.

Note: When using ESCON channels, only volumes in address group 0 can be accessed.
For this reason, if you will have a mix of CKD and FB volumes in the storage image, you
may want to reserve the first 16 LSSs (00-0F) for the ESCON-accessed CKD volumes.

FICON
You also need to check for dependencies in the host hardware driver level and the supported
feature codes. Your IBM service representative can help you determine your current hardware
driver level on your mainframe processor complex. Examples of limited host server feature
support are (FC 3319) FICON Express2 LX and (FC 3320) FICON Express2 SX, which are
available only for the z890 and z990 host server models.

Linux FCP connectivity

For ESCON and FICON, you can use either direct or switched attachment to attach a storage
unit to a System z host system that runs SUSE SLES 8 or 9 or Red Hat Enterprise Linux 3.0
with current maintenance updates.

FCP attachment to Linux on System z is supported only through a switched-fabric
configuration. You cannot attach the host through a direct configuration.

16.2 Operating systems prerequisites and enhancements


The minimum software levels required to support the DS8000 are:
򐂰 z/OS 1.4+
򐂰 z/VM 4.4 or z/VM 5.1
򐂰 VSE/ESA 2.7 or z/VSE 3.1
򐂰 TPF 4.1 with PTFs
򐂰 Linux SuSE SLES 8 or 9 for System z
򐂰 Red Hat Enterprise Linux 3.0

Nevertheless, check the most recent edition of the DS8000 Interoperability Matrix to see the
list of supported operating systems at:
http://www-03.ibm.com/servers/storage/disk/ds8000/interop.html

Important: Always review the latest edition of the Interoperability Matrix and the
Preventive Service Planning (PSP) bucket of the 2107 for software updates.

The PSP information can be found on the Resource Link Web site at:
http://www-1.ibm.com/servers/resourcelink/svc03100.nsf?OpenDatabase


16.3 z/OS considerations


In this section we discuss the program enhancements that z/OS has implemented to support
the characteristics of the DS8000. We also give guidelines for the definition of PAVs.

16.3.1 z/OS program enhancements (SPEs)


The relevant Data Facility Storage Management Subsystem (DFSMS) small program
enhancements (SPEs) introduced in z/OS for DS8000 support are:
򐂰 Scalability support
򐂰 Large volume support (LVS)
򐂰 Read availability mask support
򐂰 Initial Program Load (IPL) enhancements
򐂰 DS8000 device definition
򐂰 Read control unit and device recognition for DS8000
򐂰 Performance statistics
򐂰 Resource Measurement Facility (RMF)
򐂰 Migration considerations
򐂰 Coexistence considerations

Many of these program enhancements are initially available as APARs and PTFs for the
current releases of z/OS, and are later integrated into the following releases of z/OS. For this
reason we recommend you review the DS8000 PSP bucket for your current release of z/OS.

Scalability support
IOS recovery was originally designed for a small number of devices per control unit, and a
unit check was presented on all devices at failover. This did not scale well with a DS8000,
which can scale up to 65,280 devices. Under these circumstances, a failover could cause
CPU or spin lock contention, exhaust storage below the 16M line, or both.

Starting with z/OS 1.4 and higher with the DS8000 software support, the IOS recovery has
been improved by consolidating unit checks at an LSS level instead of each disconnected
device. This consolidation shortens the recovery time as a result of I/O errors. This
enhancement is particularly important since the DS8000 has a much higher number of
devices compared to the predecessors (ESS IBM 2105). In the IBM 2105, we had 4K devices
and in the DS8000 we have up to 65,280 devices in a storage facility.

Benefits
With the enhanced scalability support, the following benefits are possible:
򐂰 Common storage (CSA) usage (above and below the 16M line) is reduced.
򐂰 IOS large block pool for error recovery processing and attention, and state change
interrupt processing, is located above the 16M line, thus reducing storage demand below
the 16M line.
򐂰 Unit control block (UCB) pinning and event notification facility (ENF) signalling during
channel path recovery are reduced.
򐂰 These scalability enhancements provide additional performance improvements by:
– Bypassing dynamic pathing validation in channel recovery for reduced recovery I/Os.
– Reducing elapsed time, by reducing the wait time in channel path recovery.

Large volume support (LVS)


As installations approach the limit of 64K UCBs, there is a need to find a way to stay within
this limit. The ESS 2105 supported volumes of up to 32,760 cylinders. This gave the capability to


remain within the 64K limit. But as today’s storage facilities tend to expand to even larger
capacities, the 64K limitation is being approached at a very fast rate. This leaves no choice
but to plan for even larger volume sizes.

Support has been enhanced to expand volumes to 65,520 cylinders, using existing 16 bit
cylinder addressing. This is often referred to as 64K cylinder volumes. Components and
products such as DADSM/CVAF, DFSMSdss, ICKDSF, and DFSORT, previously shipped with
32,760 cylinders, now also support 65,520 cylinders. Checkpoint restart processing now
supports a checkpoint data set that resides partially or wholly above the 32,760 cylinder
boundary.

With the new LVS volumes, the VTOC has the potential to grow very large. Callers such as
DFSMSdss have to read the entire VTOC to find the last allocated DSCB —in cases where
the VTOC is very large, performance degradation could be experienced. A new interface is
implemented to return the high allocated DSCB on volumes initialized with an INDEX VTOC.
DFSMSdss uses this interface to limit VTOC searches and improve performance. The VTOC
has to be within the first 64K-1 tracks, while the INDEX can be anywhere on the volume.

Read availability mask support


Dynamic CHPID Management (DCM) allows the user to define a pool of channels that are
managed by the system. The channels are added and deleted from control units based on
workload importance and availability needs. DCM attempts to avoid single points of failure
when adding or deleting a managed channel by not selecting an interface on the control unit
on the same I/O card.

Control unit single point of failure information was specified in a table and had to be updated
for each new control unit. Instead, with the present enhancement, we can use the Read
Availability Mask (PSF/RSD) command to retrieve the information from the control unit. By
doing this, there is no need to maintain a table for this information.

Initial Program Load (IPL) enhancements


During the IPL sequence the channel subsystem selects a channel path to read from the
SYSRES device. Certain types of I/O errors on a channel path will cause the IPL to fail even
though there are alternate channel paths which may work. For example, consider a situation
where there is a bad switch link on the first path but good links on the other paths. In this
case, you could not IPL since the same faulty path was always chosen.

The channel subsystem and z/OS were enhanced to retry I/O over an alternate channel path.
This circumvents IPL failures that were due to the selection of the same faulty path to read
from the SYSRES device.

DS8000 device definition


To exploit the increase in the number of LSSs that can be added in the DS8000 —255 LSSs,
the unit must be defined as 2107 in the HCD/IOCP. The host supports 255 logical control
units when the DS8000 is defined as UNIT=2107. You must install the appropriate software to
support this. If you do not have the required software support installed, you can define the
DS8000 as UNIT=2105 —in this case only the 16 LCUs of Address Group 0 can be used.

Both storage images of a 9A2 or 9B2 LPAR-capable machine must be defined either as 2107
or as 2105. Do not define one image as UNIT=2107 and the other as UNIT=2105.

Starting with z9-109 processors, users can define an additional subchannel set with ID 1 (SS
1) on top of the existing subchannel set (SS 0) in a channel subsystem. With this additional
subchannel set, you can configure more than 2*63K devices for a channel subsystem. With
z/OS V1R7, you can define Parallel Access Volume (PAV) alias devices (device types 3380A,


3390A) of the 2105, 2107 and 1750 DASD control units to SS 1. Device numbers may be
duplicated across channel subsystems and subchannel sets.

Read control unit and device recognition for DS8000


The host system informs the attached DS8000 of its capabilities, indicating that it supports the
native DS8000 control unit and devices. The DS8000 then returns only information that is
supported by the attached host system in the self-description data, such as read data
characteristics (RDC), sense ID, and read configuration data (RCD).

The following commands display device type 2107 in their output:


򐂰 DEVSERV QDASD and PATHS command responses
򐂰 IDCAMS LISTDATA COUNTS, DSTATUS, STATUS and IDCAMS SETCACHE

Performance statistics
Two new sets of performance statistics that are reported by the DS8000 were introduced.
Since a logical volume is no longer allocated on a single RAID rank with a single RAID type or
single device adapter pair, the performance data is now provided with a set of rank
performance statistics and extent pool statistics. The RAID rank reports are no longer
produced by RMF and IDCAMS LISTDATA batch reports; instead, RMF and IDCAMS LISTDATA
are enhanced to report the logical volume statistics that are provided on the DS8000.

These reports consist of back-end counters that capture the activity between the cache and
the ranks in the DS8000 for each individual logical volume. These rank and extent pool
statistics are disk-system-wide instead of volume-wide only.

Resource Measurement Facility (RMF)


RMF support for the DS8000 was added via an SPE for z/OS 1.4 (APAR number OA06476,
PTFs UA90079 and UA90080). RMF has been enhanced to provide Monitor I and III support
for the DS8000 series. The Disk Systems Postprocessor report contains two new sections:
Extent Pool Statistics and Rank Statistics —these statistics are generated from SMF record
74 subtype 8:
򐂰 The Extent Pool Statistics section provides capacity and performance information about
allocated disk space. For each extent pool, it shows the real capacity and the number of
real extents.
򐂰 The Rank Statistics section provides measurements about read and write operations in
each rank of an extent pool. It also shows the number of arrays and the array width of all
ranks. These values show the current configuration. The wider the rank, the more
performance capability it has. By changing these values in your configuration, you can
influence the throughput of your work.

Also, new response and transfer statistics are available with the Postprocessor Cache Activity
report generated from SMF record 74 subtype 5. These statistics are provided at the
subsystem level in the Cache Subsystem Activity report and at the volume level in the Cache
Device Activity report. In detail, RMF provides the average response time and byte transfer
rate per read and write requests. These statistics are shown for the I/O activity (called host
adapter activity) and transfer activity from hard disk to cache and vice-versa (called disk
activity).

New reports have been designed for reporting FICON channel utilization. RMF also provides
support for remote mirror and copy link utilization statistics. This support is delivered by APAR
OA04877 —PTFs are available for z/OS V1R4.


Note: RMF cache reporting and the results of a LISTDATA STATUS command report a
cache size that is half the actual size. This is because the information returned represents
only the cluster to which the logical control unit is attached. Each LSS on the cluster
reflects the cache and NVS size of that cluster. z/OS users will find that only the SETCACHE
CFW ON | OFF command is supported, while other SETCACHE command options (for example,
DEVICE, SUBSYSTEM, DFW, NVS) are not accepted.

Migration considerations
The DS8000 is supported as an IBM 2105 for z/OS systems without the DFSMS and z/OS
small program enhancements (SPEs) installed. This allows customers to roll the SPE to each
system in a sysplex without having to take a sysplex-wide outage. An IPL will have to be taken
to activate the DFSMS and z/OS portions of this support.

Coexistence considerations
Support for the DS8000 running in 2105 mode on systems without this SPE installed is
provided. It consists of the recognition of the DS8000 real control unit type and device codes
when it runs in 2105 emulation on these down-level systems.

Input/Output definition files (IODF) created by HCD may be shared on systems that do not
have this SPE installed. Additionally, existing IODF files that define IBM 2105 control unit
records for a 2107 subsystem should be able to be used as long as 16 or fewer logical
subsystems are configured in the DS8000.

16.3.2 Parallel Access Volumes (PAV) definition


Parallel Access Volumes (PAV) enables a single System z server image to simultaneously
process multiple I/O operations to the same logical volume, which can help to significantly
reduce device queue delays. This is achieved by defining multiple addresses per volume.
With Dynamic PAV, the assignment of addresses to volumes can be automatically managed
to help meet performance objectives and reduce overall queuing. For further discussion about
PAV, see Section 10.4.7, “Parallel Access Volume — PAV” on page 187.

Each Parallel Access Volume (PAV) alias device must have an associated alias address
defined in the HCD/IOCP. Issue the ‘D M=CHP(xx)’ system command to verify that the aliases
are bound where they are expected to be. PAV aliases can be dynamically moved among
base addresses within the same LSS.

Wherever possible use dynamic PAV with Workload Manager to maximize the benefit of your
available aliases. The correct number of aliases for your workload can be determined from
analysis of RMF data. The PAV Tool available at
http://www-03.ibm.com/servers/eserver/zseries/zos/unix/bpxa1ty2.html#pavanalysis

can be used to analyze PAV usage.

In the absence of workload data to model, consider the following rule of thumb: define six
aliases for each FICON channel to the LSS. Also, you may use Table 16-1 on page 339 as a
conservative recommendation for base to alias ratios.
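
As a hedged illustration, with numbers that are assumptions for the example rather than
measured values: an LSS reached through four FICON channels would be given 4 x 6 = 24
aliases under this rule of thumb. The ratios in Table 16-1 lead to a similar order of magnitude;
at 0.33 aliases per base for volumes of up to 3,339 cylinders, an LSS with about 72 base
devices would also be assigned roughly 72 x 0.33 = 24 aliases for dynamic PAV.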


Table 16-1 Base to alias ratios — guideline


Size of base device            Number of aliases       Number of aliases
(number of cylinders)          for dynamic PAV         for static PAV

1 - 3339                       0.33                    1
3340 - 6678                    0.66                    2
6679 - 10,017                  1                       3
10,018 - 16,695                1.33                    4
16,696 - 23,373                1.66                    5
23,374 - 30,051                2                       6
30,052 - 40,068                2.33                    7
40,069 - 50,085                2.66                    8
50,086 - 60,102                3                       9
60,103 and larger              3.33                    10

More information regarding dynamic PAVs can be found on the Internet at:
http://www.ibm.com/s390/wlm/

RMF considerations
RMF reports all I/O activity against the Base PAV address —not by the Base and associated
Aliases. The performance information for the Base includes all Base and Alias activity.

MIH values considerations


The DS8000 provides a recommended interval of 30 seconds as part of the Read
Configuration Data. z/OS uses this information to set its Missing Interrupt Handler (MIH)
value.

Missing Interrupt Handler times for PAV alias addresses must not be set. An alias device
inherits the MIH of the base address to which it is bound and it is not possible to assign an
MIH value to an alias address. Alias devices are not known externally and are only known and
accessible by IOS. If an external method is used to attempt to set the MIH on an alias device
address, an IOS090I message will be generated. For example, the following message will be
observed for each attempt to set the MIH on an alias device:
IOS090I alias-device-number IS AN INVALID DEVICE

Tip: When setting MIH times in the IECIOSxx member of SYS1.PARMLIB, do not use
device ranges that include alias device numbers.

16.3.3 HyperPAV — z/OS support


DS8000 series users can benefit from enhancements to PAV with support for HyperPAV.
HyperPAV allows an alias address to be used to access any base address on the same
control unit image on a per-I/O basis. More on HyperPAV characteristics can be found in
Section 10.4.11, “HyperPAV” on page 193.

HyperPAV is supported starting with z/OS 1.8. For prior levels of operating system, the
support is provided as a small program enhancement (SPE) back to z/OS 1.6.


16.4 z/VM considerations


In this section we discuss specific considerations that are relevant when attaching a DS8000
series to a z/VM environment.

16.4.1 Connectivity
z/VM provides the following connectivity:
򐂰 z/VM supports FICON attachment as 3990 Model 3 or 6 controller
򐂰 Native controller mode 2105, 2107, and 1750 are supported on z/VM 5.2.0 with APAR
VM63952
– Brings support on par with z/OS
– z/VM simulates controller mode support by each guest
򐂰 z/VM supports FCP attachment for Linux systems running as a guest.
򐂰 z/VM itself supports FCP attached SCSI disks —starting with z/VM 5.1.0.

16.4.2 Supported DASD types and LUNs


z/VM supports the following ECKD DASD types:
򐂰 3390 Models 2, 3, and 9, including the 32,760 and 65,520 cylinder custom volumes.
򐂰 3390 Model 2 and 3 in 3380 track compatibility mode

z/VM also provides the following support when using FCP attachment:
򐂰 FCP attached SCSI LUNs as emulated 9336 Model 20 DASD
򐂰 1 TB SCSI LUNs

16.4.3 PAV and HyperPAV — z/VM support


z/VM provides PAV support. In this section we provide some basic support information, useful
when you have to implement PAV in a z/VM environment.

Additional z/VM technical information for PAV support can be found on the z/VM Technical
Web site at: http://www.vm.ibm.com/storman/pav/.

Also, for further discussion on PAV see Section 10.4.9, “PAV in z/VM environments” on
page 191.

z/VM guest support for dedicated PAVs


z/VM allows a guest z/OS to use PAV and dynamic PAV tuning as dedicated volumes. The
following considerations apply:
򐂰 Alias and base addresses must be attached to the z/OS guest. You need a separate
ATTACH for each alias. You should attach the base and its aliases to the same guest.
򐂰 A base cannot be attached to SYSTEM if one of its aliases is attached to that guest. This
means that you cannot use PAVs for Full Pack minidisks. The QUERY PAV command is
available for authorized (class B) users to query base and alias addresses: QUERY PAV
rdev and QUERY PAV ALL.
򐂰 To verify that PAV aliases are bound to the correct bases use the command ‘QUERY CHPID
xx’ combined with ‘QUERY PAV rdev-rdev’, where xx is the CHPID whose device addresses
should be displayed showing the addresses and any aliases, and rdev is the real device
address.


PAV minidisk support (SPE)


Starting with z/VM 5.2.0, with APAR VM63952, VM supports PAV minidisks. With this small
program enhancement (SPE) z/VM provides:
򐂰 Support of PAV minidisks
򐂰 Workload balancing for guests that do not exploit PAV (e.g. CMS)
򐂰 Real I/O dispatcher queues minidisk I/O across system attached alias volumes
򐂰 Provides linkable minidisks for guests that do exploit PAV (e.g. z/OS and Linux)
򐂰 PAVALIAS parameter of the DASDOPT and MINIOPT user directory statements or the CP
DEFINE ALIAS command create alias minidisks (fullpack and non-fullpack) for exploiting
guests
򐂰 Dynamic alias to base re-association is supported for guests that exploit PAV for dedicated
volumes and for minidisks under restricted conditions

HyperPAV support
It is the intention of IBM to provide HyperPAV support in the next z/VM release. Note that all
statements regarding IBM plans, directions, and intent are subject to change or withdrawal
without notice.

16.4.4 MIH
z/VM sets its MIH value to 1.25 times the value that the hardware reports. With the DS8000
reporting an MIH value of 30 seconds, z/VM therefore uses an MIH value of approximately
37.5 seconds. This allows the guest, which uses the 30-second value, to detect a missing
interrupt and react to it before z/VM does.

16.5 VSE/ESA and z/VSE considerations


The following considerations apply regarding VSE/ESA and z/VSE support:
򐂰 An APAR is required for VSE 2.7 to exploit large volume support.
򐂰 VSE defaults the MIH timer value to 180 seconds. This can be changed to the suggested
DS8000 value of 30 seconds by using the SIR MIH command, which is documented in
Hints and Tips for VSE/ESA 2.7, available for download from the VSE/ESA Web site
at:
http://www.ibm.com/servers/eserver/zseries/zvse/documentation/



Chapter 17. System i considerations


This chapter discusses the specifics for the DS8000 attachment to System i. The following
topics are covered in this chapter:
򐂰 Supported environment
򐂰 Logical volume sizes
򐂰 Protected versus unprotected volumes
򐂰 Adding volumes to System i configuration
򐂰 Multipath
򐂰 Sizing guidelines
򐂰 Migration
򐂰 Boot from SAN
򐂰 AIX on IBM System i
򐂰 Linux on IBM System i

For further information on these topics, refer to the redbook iSeries and IBM TotalStorage: A
Guide to Implementing External Disk on eserver i5, SG24-7120.


17.1 Supported environment


Not all hardware and software combinations for OS/400 support the DS8000. This section
describes the hardware and software prerequisites for attaching the DS8000.

17.1.1 Hardware
The DS8000 is supported on all System i models that support Fibre Channel attachment for
external storage. Fibre Channel is supported on all models from the 8xx onward. AS/400
models 7xx and earlier only supported SCSI attachment for external storage, so they cannot
support the DS8000.

There are three Fibre Channel adapters for System i. All support the DS8000:
򐂰 2766 2 Gigabit Fibre Channel Disk Controller PCI
򐂰 2787 2 Gigabit Fibre Channel Disk Controller PCI-X
򐂰 5760 4 Gigabit Fibre Channel Disk Controller PCI-X

Each adapter requires its own dedicated I/O processor.

The System i Storage Web page provides information about current hardware requirements,
including support for switches. This can be found at:
http://www.ibm.com/servers/eserver/iseries/storage/storage_hw.html

17.1.2 Software
The System i must be running V5R2 or V5R3 (i5/OS) or later level of OS/400. In addition, at
the time of writing, the following PTFs were required:
򐂰 V5R2: MF33327, MF33301, MF33469, MF33302, SI14711 and SI14754
򐂰 V5R3: MF33328, MF33845, MF33437, MF33303, SI14690, SI14755 and SI14550

Prior to attaching the DS8000 to System i, you should check for the latest PTFs, which may
have superseded those shown here.

17.2 Logical volume sizes


OS/400 is supported on DS8000 as Fixed Block storage. Unlike other Open Systems using
FB architecture, OS/400 only supports specific volume sizes and these may not be an exact
number of extents. In general, these relate to the volume sizes available with internal devices,
although some larger sizes are now supported for external storage only. OS/400 volumes are
defined in decimal gigabytes (10^9 bytes).

Table 17-1 on page 345 gives the number of extents required for different System i volume
sizes.


Table 17-1 OS/400 logical volume sizes


Model type                     OS/400 device   Number of       Extents   Unusable      Usable
Unprotected     Protected      size (GB)       LBAs                      space (GiB)   space %

2107-A81        2107-A01       8.5             16,777,216      8         0.00          100.00
2107-A82        2107-A02       17.5            34,275,328      17        0.66          96.14
2107-A85        2107-A05       35.1            68,681,728      33        0.25          99.24
2107-A84        2107-A04       70.5            137,822,208     66        0.28          99.57
2107-A86        2107-A06       141.1           275,644,416     132       0.56          99.57
2107-A87        2107-A07       282.2           551,288,832     263       0.13          99.95

GiB represents “Binary Gigabytes” (2^30 bytes) and GB represents “Decimal Gigabytes” (10^9 bytes).

Note: Logical volumes of size 8.59 and 282.2 are not supported as System i Load Source
Unit (boot disk) where the Load Source Unit is to be located in the external storage server.

When creating the logical volumes for use with OS/400, you will see that in almost every
case, the OS/400 device size does not match a whole number of extents, and so some space
will be wasted. You should also note that the #2766 and #2787 Fibre Channel Disk Adapters
used by System i can only address 32 LUNs, so creating more, smaller LUNs will require
more Input Output Adapters (IOAs) and their associated Input Output Processors (IOPs). For
more sizing guidelines for OS/400, refer to “Sizing guidelines” on page 363.
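
As a worked illustration of this rounding, using the 17.5 GB model from Table 17-1 and
assuming the DS8000 fixed block extent size of 1 GiB: 34,275,328 LBAs x 512 bytes =
17,548,967,936 bytes, or approximately 16.34 GiB. This is rounded up to 17 extents, so about
0.66 GiB of the allocated space is unusable and roughly 96% of the 17 extents is usable,
which matches the values shown in Table 17-1.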

17.3 Protected versus unprotected volumes


When defining OS/400 logical volumes, you must decide whether these should be protected
or unprotected. This is simply a notification to OS/400 – it does not mean that the volume is
protected or unprotected. In reality, all DS8000 LUNs are protected, either RAID-5 or
RAID-10. Defining a volume as unprotected means that it is available for OS/400 to mirror that
volume to another of equal capacity — either internal or external. If you do not intend to use
OS/400 (host based) mirroring, you should define your logical volumes as protected.

Under some circumstances, you may wish to mirror the OS/400 Load Source Unit (LSU) to a
LUN in the DS8000. In this case, only one LUN should be defined as unprotected; otherwise,
when mirroring is started to mirror the LSU to the DS8000 LUN, OS/400 will attempt to mirror
all unprotected volumes.

17.3.1 Changing LUN protection


Although it is possible to change a volume from protected to unprotected (or vice versa) using
the DS CLI, you should be extremely careful when doing this. If the volume is not assigned to
any System i, or is non-configured, you can change the protection. However, if it is configured,
you should not change the protection. If you wish to do so, you must first delete the logical
volume.

This will return the extents used for that volume into the Extent Pool. You will then be able to
create a new logical volume with the correct protection after a short period of time (depending


on the number of extents being returned to the extent pool). This is unlike ESS E20, F20 and
800 where the entire array containing the logical volume had to be reformatted.

However, before deleting the logical volume on the DS8000, you must first remove it from the
OS/400 configuration (assuming it was still configured). This is an OS/400 task which is
disruptive if the disk is in the System ASP or User ASPs 2-32 because it requires an IPL of
OS/400 to completely remove the volume from the OS/400 configuration. This is no different
from removing an internal disk from an OS/400 configuration. Indeed, deleting a logical
volume on the DS8000 is similar to physically removing a disk drive from an System i. Disks
can be removed from an Independent ASP with the IASP varied off without IPLing the
system.

17.4 Adding volumes to System i configuration


Once the logical volumes have been created and assigned to the host, they will appear as
non-configured units to OS/400. This may be some time after being created on the DS8000.
At this stage, they are used in exactly the same way as non-configured internal units. There is
nothing particular to external logical volumes as far as OS/400 is concerned. You should use
the same functions for adding the logical units to an Auxiliary Storage Pool (ASP) as you
would for internal disks.

17.4.1 Using 5250 interface


Adding disk units to the configuration can be done either using the text (5250 terminal mode)
interface with Dedicated Service Tools (DST) or System Service Tools (SST), or with the
System i Navigator GUI. The following example shows how to add a logical volume in the
DS8000 to the System ASP, using green screen SST:
1. Start System Service Tools STRSST and sign on.
2. Select option 3, Work with disk units as shown in Figure 17-1.

System Service Tools (SST)

Select one of the following:

1. Start a service tool


2. Work with active service tools
3. Work with disk units
4. Work with diskette data recovery
5. Work with system partitions
6. Work with system capacity
7. Work with system security
8. Work with service tools user IDs

Selection
3

F3=Exit F10=Command entry F12=Cancel

Figure 17-1 System Service Tools menu

3. Select Option 2, Work with disk configuration as shown in Figure 17-2 on page 347.


Work with Disk Units

Select one of the following:

1. Display disk configuration


2. Work with disk configuration
3. Work with disk unit recovery

Selection
2

F3=Exit F12=Cancel

Figure 17-2 Work with Disk Units menu

4. When adding disk units to a configuration, you can add them as empty units by selecting
Option 2 or you can choose to allow OS/400 to balance the data across all the disk units.
Normally, we recommend balancing the data. Select Option 8, Add units to ASPs and
balance data as shown in Figure 17-3.

Work with Disk Configuration

Select one of the following:

1. Display disk configuration


2. Add units to ASPs
3. Work with ASP threshold
4. Include unit in device parity protection
5. Enable remote load source mirroring
6. Disable remote load source mirroring
7. Start compression on non-configured units
8. Add units to ASPs and balance data
9. Start device parity protection

Selection
8

F3=Exit F12=Cancel

Figure 17-3 Work with Disk Configuration menu

5. Figure 17-4 on page 348 shows the Specify ASPs to Add Units to panel. Specify the ASP
number next to the desired units. Here we have specified ASP1, the System ASP. Press
Enter.


Specify ASPs to Add Units to

Specify the ASP to add each unit to.

Specify Serial Resource


ASP Number Type Model Capacity Name
21-662C5 4326 050 35165 DD124
21-54782 4326 050 35165 DD136
1 75-1118707 2107 A85 35165 DD006

F3=Exit F5=Refresh F11=Display disk configuration capacity


F12=Cancel

Figure 17-4 Specify ASPs to Add Units to

6. The Confirm Add Units panel will appear for review as shown in Figure 17-5. If everything
is correct, press Enter to continue.

Confirm Add Units

Add will take several minutes for each unit. The system will
have the displayed protection after the unit(s) are added.

Press Enter to confirm your choice for Add units.


Press F9=Capacity Information to display the resulting capacity.
Press F12=Cancel to return and change your choice.

Serial Resource
ASP Unit Number Type Model Name Protection
1 Unprotected
1 02-89058 6717 074 DD004 Device Parity
2 68-0CA4E32 6717 074 DD003 Device Parity
3 68-0C9F8CA 6717 074 DD002 Device Parity
4 68-0CA5D96 6717 074 DD001 Device Parity
5 75-1118707 2107 A85 DD006 Unprotected

F9=Resulting Capacity F12=Cancel

Figure 17-5 Confirm Add Units

7. Depending on the number of units you are adding, this step could take some time. When it
completes, display your disk configuration to verify the capacity and data protection.

17.4.2 Adding volumes to an Independent Auxiliary Storage Pool


Independent Auxiliary Storage Pools (IASPs) can be switchable or private. Disks are added to
an IASP using the System i navigator GUI. In this example, we are adding a logical volume to
a private (non-switchable) IASP.
1. Start System i Navigator. Figure 17-6 on page 349 shows the initial panel.


Figure 17-6 System i Navigator initial panel

2. Expand the System i to which you wish to add the logical volume and sign on to that
server as shown in Figure 17-7.

Figure 17-7 System i Navigator Signon to System i window

3. Expand Configuration and Service, Hardware, and Disk Units as shown in Figure 17-8
on page 350.


Figure 17-8 System i Navigator Disk Units

4. You will be asked to sign on to SST as shown in Figure 17-9. Enter your Service tools ID
and password and press OK.

Figure 17-9 SST Signon

5. Right-click Disk Pools and select New Disk Pool.


6. The New Disk Pool wizard appears. Click Next.
7. On the New Disk Pool dialog shown in Figure 17-10 on page 351, select Primary from the
pull-down for the Type of disk pool, give the new disk pool a name and leave Database to
default to Generated by the system. Ensure the disk protection method matches the type
of logical volume you are adding. If you leave it unchecked, you will see all available disks.
Select OK to continue.


Figure 17-10 Defining a new disk pool

8. A confirmation panel like that shown in Figure 17-11 will appear to summarize the disk
pool configuration. Select Next to continue.

Figure 17-11 New disk pool - welcome

9. Now you need to add disks to the new disk pool. On the Add to disk pool screen, click the
Add disks button as shown in Figure 17-12 on page 352.


Figure 17-12 Add disks to disk pool

10.A list of non-configured units similar to that shown in Figure 17-13 appears. Highlight the
disks you want to add to the disk pool and click Add.

Figure 17-13 Choose the disks to add to the disk pool

11.A confirmation screen appears as shown in Figure 17-14 on page 353. Click Next to
continue.


Figure 17-14 Confirm disks to be added to disk pool

12.A summary of the Disk Pool configuration similar to Figure 17-15 appears. Click Finish to
add the disks to the Disk Pool.

Figure 17-15 New disk pool summary

13.Take note of and respond to any message dialogs which appear. After taking action on any
messages, the New Disk Pool Status panel shown in Figure 17-16 on page 354 displays
and shows progress. This step may take some time, depending on the number and size of
the logical units being added.


Figure 17-16 New disk pool status

14.When complete, click OK on the information panel shown in Figure 17-17.

Figure 17-17 Disks added successfully to disk pool

15.The new Disk Pool can be seen on System i Navigator Disk Pools in Figure 17-18.

Figure 17-18 New disk pool shown on System i Navigator

16.To see the logical volume, as shown in Figure 17-19 on page 355, expand Configuration
and Service, Hardware, Disk Pools and click the disk pool you just created.


Figure 17-19 New logical volumes shown on System i Navigator.

17.5 Multipath
Multipath support was added for external disks in V5R3 of i5/OS (also known as OS/400
V5R3). Unlike other platforms which have a specific software component, such as Subsystem
Device Driver (SDD), multipath is part of the base operating system. At V5R3 and V5R4, up
to eight connections can be defined from multiple I/O adapters on an System i server to a
single logical volume in the DS8000. Each connection for a multipath disk unit functions
independently. Several connections provide availability by allowing disk storage to be utilized
even if a single path fails.

Multipath is important for System i because it provides greater resilience to SAN failures,
which can be critical to OS/400 due to its single-level storage architecture. Multipath is not
available for System i internal disk units, but the likelihood of path failure is much lower with
internal drives. External attachment has more points where problems can occur: long fibre
cables and SAN switches, the increased possibility of human error when configuring switches
and external storage, and concurrent maintenance on the DS8000, which can make some
paths temporarily unavailable.

Many System i customers still have their entire environment in the System ASP and loss of
access to any disk will cause the system to fail. Even with User ASPs, loss of a UASP disk will
eventually cause the system to stop. Independent ASPs provide isolation such that loss of
disks in the IASP will only affect users accessing that IASP while the rest of the system is
unaffected. However, with multipath, even loss of a path to disk in an IASP will not cause an
outage.


Prior to multipath being available, some clients used OS/400 mirroring to two sets of disks,
either in the same or different external disk subsystems. This provided implicit dual-path as
long as the mirrored copy was connected to a different IOP/IOA, BUS, or I/O tower. However,
this also required two copies of data. Since disk level protection is already provided by
RAID-5 or RAID-10 in the external disk subsystem, this was sometimes seen as
unnecessary.

With the combination of multipath and RAID-5 or RAID-10 protection in the DS8000, we can
provide full protection of the data paths and the data itself without the requirement for
additional disks.

17.5.1 Avoiding single points of failure


In Figure 17-20, there are fifteen single points of failure, excluding the System i itself and the
DS8000 storage facility. Failure points 9-12 will not be present if you do not use an Inter
Switch Link (ISL) to extend your SAN. An outage to any one of these components (either
planned or unplanned) would cause the system to fail if IASPs are not used (or the
applications within an IASP if they are).

1. IO Frame
2. BUS
3. IOP
4. IOA
5. Cable
6. Port
7. Switch
8. Port
9. ISL
10. Port
11. Switch
12. Port
13. Cable
14. Host Adapter
15. IO Drawer

Figure 17-20 Single points of failure

When implementing multipath, you should provide as much redundancy as possible. As a
minimum, multipath requires two IOAs connecting the same logical volumes. Ideally, these
should be on different buses and in different I/O racks in the System i. If a SAN is included,
separate switches should also be used for each path. You should also use host adapters in
different I/O drawer pairs in the DS8000. Figure 17-21 on page 357 shows this.


Figure 17-21 Multipath removes single points of failure

Unlike other systems, which may only support two paths (dual-path), OS/400 V5R3 supports
up to eight paths to the same logical volumes. As a minimum, you should use two, although
some small performance benefits may be experienced with more. However, since OS/400
multipath spreads I/O across all available paths in a round-robin manner, there is no load
balancing, only load sharing.

17.5.2 Configuring multipath


System i has three I/O adapters that support DS8000:
򐂰 2766 2 Gigabit Fibre Channel Disk Controller PCI
򐂰 2787 2 Gigabit Fibre Channel Disk Controller PCI-X
򐂰 5760 4 Gigabit Fibre Channel Disk Controller PCI-X

All can be used for multipath, and there is no requirement for all paths to use the same type of
adapter. Each of these adapters can address up to 32 logical volumes; this does not change
with multipath support. When deciding how many I/O adapters to use, your first priority should
be to consider the performance throughput of the IOA, since this limit may be reached before
the maximum number of logical units. See “Sizing guidelines” on page 363 for more
information on sizing and performance guidelines.

Figure 17-22 on page 358 shows an example where 48 logical volumes are configured in the
DS8000. The first 24 of these are assigned via a host adapter in the top left I/O drawer in the
DS8000 to a Fibre Channel I/O adapter in the first System i I/O tower or rack. The next 24
logical volumes are assigned via a host adapter in the lower left I/O drawer in the DS8000 to
a Fibre Channel I/O adapter on a different BUS in the first System i I/O tower or rack. This
would be a valid single path configuration.


Figure 17-22 Example of multipath with System i

To implement multipath, the first group of 24 logical volumes is also assigned to a Fibre
Channel I/O adapter in the second System i I/O tower or rack via a host adapter in the lower
right I/O drawer in the DS8000. The second group of 24 logical volumes is also assigned to a
Fibre Channel I/O adapter on a different BUS in the second System i I/O tower or rack via a
host adapter in the upper right I/O drawer.

17.5.3 Adding multipath volumes to System i using 5250 interface


If using the 5250 interface, sign on to SST and perform the following steps as described in
“Using 5250 interface” on page 346.
1. Option 3, Work with disk units.
2. Option 2, Work with disk configuration.
3. Option 8, Add units to ASPs and balance data.

You will then be presented with a panel similar to Figure 17-23 on page 359. The values in the
Resource Name column show DDxxx for single path volumes and DMPxxx for those which
have more than one path. In this example, the 2107-A85 logical volume with serial number
75-1118707 is available through more than one path and reports in as DMP135.
4. Specify the ASP to which you wish to add the multipath volumes.


Specify ASPs to Add Units to

Specify the ASP to add each unit to.

Specify Serial Resource


ASP Number Type Model Capacity Name
21-662C5 4326 050 35165 DD124
21-54782 4326 050 35165 DD136
1 75-1118707 2107 A85 35165 DMP135

F3=Exit F5=Refresh F11=Display disk configuration capacity


F12=Cancel

Figure 17-23 Adding multipath volumes to an ASP

Note: For multipath volumes, only one path is shown. In order to see the additional
paths, see “Managing multipath volumes using System i Navigator” on page 360.

5. You are presented with a confirmation screen as shown in Figure 17-24. Check the
configuration details and if correct, press Enter to accept.

Confirm Add Units

Add will take several minutes for each unit. The system will
have the displayed protection after the unit(s) are added.

Press Enter to confirm your choice for Add units.


Press F9=Capacity Information to display the resulting capacity.
Press F12=Cancel to return and change your choice.

Serial Resource
ASP Unit Number Type Model Name Protection
1 Unprotected
1 02-89058 6717 074 DD004 Device Parity
2 68-0CA4E32 6717 074 DD003 Device Parity
3 68-0C9F8CA 6717 074 DD002 Device Parity
4 68-0CA5D96 6717 074 DD001 Device Parity
5 75-1118707 2107 A85 DMP135 Unprotected

F9=Resulting Capacity F12=Cancel

Figure 17-24 Confirm Add Units

17.5.4 Adding multipath volumes to System i using System i Navigator


The System i Navigator GUI can be used to add volumes to the System, User or Independent
ASPs. In this example, we are adding a multipath logical volume to a private (non-switchable)
IASP. The same principles apply when adding multipath volumes to the System or User
ASPs.

Follow the steps outlined in “Adding volumes to an Independent Auxiliary Storage Pool” on
page 348.


When you get to the point where you will select the volumes to be added, you will see a panel
similar to that shown in Figure 17-25. Multipath volumes appear as DMPxxx. Highlight the
disks you want to add to the disk pool and click Add.

Figure 17-25 Adding a multipath volume

Note: For multipath volumes, only one path is shown. In order to see the additional paths,
see “Managing multipath volumes using System i Navigator” on page 360.

The remaining steps are identical to those in “Adding volumes to an Independent Auxiliary
Storage Pool” on page 348.

17.5.5 Managing multipath volumes using System i Navigator


All units are initially created with a prefix of DD. As soon as the system detects that there is
more than one path to a specific logical unit, it will automatically assign a unique resource
name with a prefix of DMP for both the initial path and any additional paths.

When using the standard disk panels in System i Navigator, only a single (the initial) path is
shown. The following steps show how to see the additional paths.

To see the number of paths available for a logical unit, open System i Navigator and expand
Configuration and Service, Hardware, and Disk Units as shown in Figure 17-26 on
page 361 and click All Disk Units. The number of paths for each unit is shown in column
Number of Connections visible on the right of the panel. In this example, there are 8
connections for each of the multipath units.


To see the other connections to a logical unit, right click on the unit and select Properties, as
shown in Figure 17-26.

Figure 17-26 Selecting properties for a multipath logical unit


You now get the General Properties tab for the selected unit, as shown in Figure 17-27. The
first path is shown as Device 1 in the box labelled Storage.

Figure 17-27 Multipath logical unit properties

To see the other paths to this unit, click the Connections tab, as shown in Figure 17-28,
where you can see the other seven connections for this logical unit.

Figure 17-28 Multipath connections


17.5.6 Multipath rules for multiple System i hosts or partitions

When you use multipath disk units, you must consider the implications of moving IOPs and
multipath connections between nodes. You must not split multipath connections between
nodes, either by moving IOPs between logical partitions or by switching expansion units
between systems. If two different nodes both have connections to the same LUN in the
DS8000, both nodes might potentially overwrite data from the other node.

The system enforces the following rules when you use multipath disk units in a
multiple-system environment:
򐂰 If you move an IOP with a multipath connection to a different logical partition, you must
also move all other IOPs with connections to the same disk unit to the same logical
partition.
򐂰 When you make an expansion unit switchable, make sure that all multipath connections to
a disk unit will switch with the expansion unit.
򐂰 When you configure a switchable independent disk pool, make sure that all of the required
IOPs for multipath disk units will switch with the independent disk pool.

If a multipath configuration rule is violated, the system issues warnings or errors to alert you
of the condition. It is important to pay attention when disk unit connections are reported
missing. You want to prevent a situation where a node might overwrite data on a LUN that
belongs to another node.

Disk unit connections might be missing for a variety of reasons, but especially if one of the
preceding rules has been violated. If a connection for a multipath disk unit in any disk pool is
found to be missing during an IPL or vary on, a message is sent to the QSYSOPR message
queue.

If a connection is missing, and you confirm that the connection has been removed, you can
update Hardware Service Manager (HSM) to remove that resource. Hardware Service
Manager is a tool for displaying and working with system hardware from both a logical and a
packaging viewpoint, an aid for debugging Input/Output (I/O) processors and devices, and for
fixing failing and missing hardware. You can access Hardware Service Manager in System
Service Tools (SST) and Dedicated Service Tools (DST) by selecting the option to start a
service tool.

17.5.7 Changing from single path to multipath


If you have a configuration where the logical units were only assigned to one I/O adapter, you
can easily change to multipath. Simply assign the logical units in the DS8000 to another I/O
adapter and the existing DDxxx drives will change to DMPxxx and new DMPxxx resources
will be created for the new path.

17.6 Sizing guidelines


Figure 17-29 on page 364 shows the process you can use to size external storage on
System i. Ideally, you should have OS/400 Performance Tools reports, which can be used to
model an existing workload. If these are not available, you can use workload characteristics
from a similar workload to understand the I/O rate per second and the average I/O size. For
example, the same application may be running at another site and its characteristics can be
adjusted to match the expected workload pattern on your system.


Figure 17-29 Process for sizing external storage

17.6.1 Planning for arrays and DDMs


In general, although it is possible to use 146 GB and 300 GB 10K RPM DDMs, we
recommend that you use 73 GB 15K RPM DDMs for System i production workloads. The
larger, slower drives may be suitable for less I/O intensive work, or for those workloads which
do not require critical response times (for example, archived data or data which is high in
volume but low in use such as scanned images).

For workloads with critical response times, you may not want to use all the capacity in an
array. For 73 GB DDMs, you may plan to use about 300 GB of capacity per 8-drive array. The
remaining capacity can be used for infrequently accessed data, for example archive data,
data that is high in volume but low in use such as scanned images, or FlashCopy target
volumes, so that it does not impact the I/Os per second on those arrays.

For very high write environments, you may also consider using RAID-10, which offers a higher
I/O rate per GB than RAID-5 as shown in Table 17-3 on page 366. However, the majority of
System i workloads do not require this.

17.6.2 Cache
In general, System i workloads do not benefit from large cache. Still, depending on the
workload (as shown in OS/400 Performance Tools System, Component and Resource
Interval reports) you may see some benefit in larger cache sizes. However, in general, with
large System i main memory sizes, OS/400 Expert Cache can reduce the benefit of external
cache.


17.6.3 Number of System i Fibre Channel adapters


The most important factor to take into consideration when calculating the number of Fibre
Channel adapters in the System i is the throughput capacity of the adapter and IOP
combination.

Since this guideline is based only on System i adapters and Access Density (AD) of System i
workload, it doesn't change when using the DS8000.

Note: Access Density (AD) is the ratio that results from dividing the average I/Os per second
by the occupied disk space capacity, that is, I/Os per second per GB. These values can be
obtained from the OS/400 System, Component, and Resource Interval performance reports.

Table 17-2 shows the approximate capacity which can be supported with various IOA/IOP
combinations.

Table 17-2 Capacity per I/O Adapter


I/O Adapter I/O Processor Capacity per IOA Rule of thumb

2787 2844 1022/AD 500GB

2766 2844 798/AD 400GB

2766 2843 644/AD 320GB

5760 2844 See note***

Note: ***Size the same capacity per 5760 adapter as for the 2787 on a 2844. For transfer
sizes larger than 16 KB, size about 50% more capacity than for the 2787 adapter.

For most System i workloads, Access Density is usually below 2, so if you do not know it, the
Rule of thumb column is a typical value to use.
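
To illustrate how the Capacity per IOA column is applied, the following Python sketch (not an IBM tool; the coefficients are simply copied from Table 17-2, and the function name is hypothetical) computes the approximate capacity one IOA/IOP combination can support for a given Access Density:

# Approximate System i capacity per Fibre Channel IOA, using the
# "Capacity per IOA" coefficients from Table 17-2 (coefficient / AD).
CAPACITY_COEFFICIENT = {
    ("2787", "2844"): 1022,
    ("2766", "2844"): 798,
    ("2766", "2843"): 644,
}

def capacity_per_ioa(ioa, iop, access_density):
    """Return the approximate capacity (GB) supportable by one IOA/IOP pair."""
    return CAPACITY_COEFFICIENT[(ioa, iop)] / access_density

# A 2787 adapter on a 2844 IOP with an Access Density of 2 I/O per second per GB
print(capacity_per_ioa("2787", "2844", 2.0))   # about 511 GB, close to the 500 GB rule of thumb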

17.6.4 Size and number of LUNs


As discussed in “Logical volume sizes” on page 344, OS/400 can only use fixed logical
volume sizes. As a general rule of thumb, we recommend that you should configure more
logical volumes than actual DDMs. As a minimum, we recommend 2:1. For example, with 73
GB DDMs, you should use a maximum size of 35.1 GB LUNs. The reason for this is that
OS/400 does not support command tag queuing. Using more, smaller LUNs can reduce I/O
queues and wait times by allowing OS/400 to support more parallel I/Os.

From the values in Table 17-2, you can calculate the number of System i Fibre Channel
adapters for your required System i disk capacity. As each I/O adapter can support a
maximum of 32 LUNs, divide the capacity per adapter by 32 to give the approximate average
size of each LUN.

For example, assume you require 2 TB of capacity and are using 2787 I/O adapters with 2844
I/O processors. If you know the access density, calculate the capacity per I/O adapter;
otherwise, use the rule of thumb. Assuming the rule of thumb of 500 GB per adapter, we would
require four I/O adapters to support the workload. If variable LUN sizes were possible, we
could support 32 LUNs of 15.6 GB per I/O adapter. However, because OS/400 only supports
fixed volume sizes, we would instead use 28 volumes of 17.54 GB, giving approximately
492 GB per adapter.
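
The following Python sketch reproduces the arithmetic of this example. It is illustrative only: the helper name is hypothetical, and the list of fixed OS/400 volume sizes is an assumption here (see “Logical volume sizes” on page 344 for the authoritative values).

import math

MAX_LUNS_PER_IOA = 32
# Assumed fixed OS/400 logical volume sizes in GB (verify against "Logical volume sizes")
OS400_VOLUME_SIZES_GB = [8.59, 17.54, 35.16, 70.56]

def size_adapters(required_capacity_gb, capacity_per_ioa_gb):
    """Return (adapters, LUN size, LUNs per adapter, usable GB per adapter)."""
    adapters = math.ceil(required_capacity_gb / capacity_per_ioa_gb)
    capacity_per_adapter = required_capacity_gb / adapters
    ideal_lun_size = capacity_per_adapter / MAX_LUNS_PER_IOA
    # Use the smallest fixed volume size that is at least the ideal (variable) LUN size
    lun_size = min(s for s in OS400_VOLUME_SIZES_GB if s >= ideal_lun_size)
    luns_per_adapter = math.floor(capacity_per_adapter / lun_size)
    return adapters, lun_size, luns_per_adapter, luns_per_adapter * lun_size

# 2 TB requirement, 500 GB per adapter rule of thumb
print(size_adapters(2000, 500))   # (4, 17.54, 28, ~491 GB per adapter)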


17.6.5 Recommended number of ranks


As a general guideline, you may consider 1500 disk operations/sec for an average RAID rank.

When considering the number of ranks, take into account the maximum disk operations per
second per rank as shown in Table 17-3. These are measured at 100% DDM Utilization with
no cache benefit and with the average I/O being 4KB. Larger transfer sizes will reduce the
number of operations per second.

Based on these values, you can calculate how many host I/Os per second each rank can
handle at the recommended utilization of 40%. This is shown for workload read/write ratios of
70% read and 50% read in Table 17-3.

Table 17-3 Disk operations per second per RAID rank


RAID rank type                  Disk ops/sec   Host I/O /sec (70% read)   Host I/O /sec (50% read)

RAID-5 15K RPM (7 + P)          1700           358                        272

RAID-5 10K RPM (7 + P)          1100           232                        176

RAID-5 15K RPM (6 + P + S)      1458           313                        238

RAID-5 10K RPM (6 + P + S)      943            199                        151

RAID-10 15K RPM (3 + 3 + 2S)    1275           392                        340

RAID-10 10K RPM (3 + 3 + 2S)    825            254                        220

RAID-10 15K RPM (4 + 4)         1700           523                        453

RAID-10 10K RPM (4 + 4)         1100           338                        293

As can be seen in Table 17-3, RAID-10 can support higher host I/O rates than RAID-5.
However, you must balance this against the reduced effective capacity of a RAID-10 rank
when compared to RAID-5.
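
The Host I/O columns in Table 17-3 can be approximated with the following Python sketch. The write-penalty factors used here (4 back-end disk operations per host write for RAID-5, 2 for RAID-10) are a standard assumption, not values quoted from the table:

def host_io_per_rank(disk_ops_per_sec, read_ratio, write_penalty, utilization=0.40):
    """Host I/Os per second a rank can sustain at the given utilization."""
    disk_ops_per_host_io = read_ratio + write_penalty * (1 - read_ratio)
    return disk_ops_per_sec * utilization / disk_ops_per_host_io

# RAID-5 15K RPM (7 + P): 1700 disk ops/sec
print(round(host_io_per_rank(1700, 0.70, 4)))   # ~358 (70% read)
print(round(host_io_per_rank(1700, 0.50, 4)))   # ~272 (50% read)
# RAID-10 15K RPM (4 + 4): 1700 disk ops/sec
print(round(host_io_per_rank(1700, 0.70, 2)))   # ~523 (70% read)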

17.6.6 Sharing ranks between System i and other servers


As a general guideline consider using separate extent pools for System i workload and other
workloads. This will isolate the I/O for each server.

However, you may consider sharing ranks when the other servers’ workloads have a
sustained low disk I/O rate compared to the System i I/O rate. Generally, System i has a
relatively high I/O rate, while that of other servers may be lower, often below one I/O per
second per GB.

As an example, a Windows file server with a large data capacity may normally have a low I/O
rate with fewer peaks and could share ranks with System i. However, SQL, database, or other
application servers may show higher rates with peaks, and we recommend using separate
ranks for these servers.

Unlike on its predecessor, the ESS, capacity used for logical units on the DS8000 can be
reused without reformatting the entire array. The decision to mix platforms on an array is now
purely one of performance, because the disruption previously experienced on the ESS to
reformat the array no longer exists.


17.6.7 Connecting via SAN switches


When connecting DS8000 systems to System i via switches, plan for I/O traffic from multiple
System i adapters to share one DS8000 host port, and zone the switches accordingly.
DS8000 host adapters can be shared between System i and other platforms.

Based on available measurements and experience with the ESS 800, we recommend
planning no more than four System i I/O adapters per host port on the DS8000.

For a current list of switches supported under OS/400, refer to the System i Storage Web site
at: http://www-1.ibm.com/servers/eserver/iseries/storage/storage_hw.html.

17.7 Migration
For many System i customers, migrating to the DS8000 will be best achieved using traditional
Save/Restore techniques. However, there are some alternatives you may wish to consider.

17.7.1 OS/400 mirroring


Although it is possible to use OS/400 to mirror the current disks (either internal or external) to
a DS8000 and then remove the older disks from the System i configuration, this is not
recommended because both the source and target disks must initially be unprotected. If
moving from internal drives, these would normally be protected by RAID-5 and this protection
would need to be removed before being able to mirror the internal drives to DS8000 logical
volumes.

Once an external logical volume has been created, it always keeps its model type, either
protected or unprotected. Therefore, once a logical volume has been defined as unprotected
to allow it to be a mirror target, it cannot be converted back to a protected model, and it
remains a candidate for all future OS/400 mirroring, whether you want this or not.

17.7.2 Metro Mirror and Global Copy


Depending on the existing configuration, it may be possible to use Metro Mirror or Global
Copy to migrate from an ESS to a DS8000 (or indeed, any combination of external storage
units which support Metro Mirror and Global Copy). For further discussion on Metro Mirror
and Global Copy, see Chapter 7, “Copy Services” on page 107 or refer to the Redbook IBM
System Storage DS8000 Series: Copy Services in Open Environments, SG24-6788.

Consider the example shown in Figure 17-30 on page 368. Here, the System i has its internal
Load Source Unit (LSU) and possibly some other internal drives. The ESS provides additional
storage capacity. Using Metro Mirror or Global Copy, it is possible to create copies of the ESS
logical volumes in the DS8000.

When ready to migrate from the ESS to the DS8000, you should do a complete shutdown of
the System i, unassign the ESS LUNs and assign the DS8000 LUNs to the System i. After
IPLing the System i, the new DS8000 LUNs will be recognized by OS/400, even though they
are different models and have different serial numbers.

Note: It is important to ensure that both the Metro Mirror or Global Copy source and target
copies are not assigned to the System i at the same time because this is an invalid
configuration. Careful planning and implementation is required to ensure this does not
happen, otherwise unpredictable results may occur.



Figure 17-30 Using Metro Mirror to migrate from ESS to DS8000

The same setup can also be used if the ESS LUNs are in an IASP, although in this case the
System i does not require a complete shutdown: varying off the IASP in the ESS, unassigning
the ESS LUNs, assigning the DS8000 LUNs, and varying the IASP back on has the same
effect.

Clearly, you must also take into account the licensing implications for Metro Mirror and Global
Copy.

Note: This is a special case of using Metro Mirror or Global Copy and will only work if the
same System i is used, along with the LSU to attach to both the original ESS and the new
DS8000. It is not possible to use this technique to a different System i.

17.7.3 OS/400 data migration


It is also possible to use native OS/400 functions to migrate data from existing disks to the
DS8000, whether the existing disks are internal or external. When you assign the new
DS8000 logical volumes to the System i, initially they are non-configured (see “Adding
volumes to System i configuration” on page 346 for more details). If you add the new units
and choose to spread data, OS/400 will automatically migrate data from the existing disks
onto the new logical units.

You can then use the OS/400 command STRASPBAL TYPE(*ENDALC) to mark the units to
be removed from the configuration as shown in Figure 17-31 on page 369. This can reduce
the down time associated with removing a disk unit. This will keep new allocations away from
the marked units.


Start ASP Balance (STRASPBAL)

Type choices, press Enter.

Balance type . . . . . . . . . . > *ENDALC *CAPACITY, *USAGE, *HSM...


Storage unit . . . . . . . . . . 1-4094
+ for more values

Bottom
F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display
F24=More keys

Figure 17-31 Ending allocation for existing disk units

When you subsequently run the OS/400 command STRASPBAL TYPE(*MOVDTA) all data
will be moved from the marked units to other units in the same ASP, as shown in
Figure 17-32. Clearly you must have sufficient new capacity to allow the data to be migrated.

Start ASP Balance (STRASPBAL)

Type choices, press Enter.

Balance type . . . . . . . . . . > *MOVDTA *CAPACITY, *USAGE, *HSM...


Time limit . . . . . . . . . . . 1-9999 minutes, *NOMAX

Bottom
F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display
F24=More keys

Figure 17-32 Moving data from units marked *ENDALC

You can specify a time limit that the function is to run for each ASP being balanced or the
balance can be set to run to completion. If the balance function needs to be ended prior to
this, use the End ASP Balance (ENDASPBAL) command. A message will be sent to the
system history (QHST) log when the balancing function is started for each ASP. A message
will also be sent to the QHST log when the balancing function completes or is ended.

If the balance function is run for a few hours and then stopped, it will continue from where it
left off when the balance function restarts. This allows the balancing to be run during off hours
over several days.

In order to finally remove the old units from the configuration, you will need to use Dedicated
Service Tools (DST) and re-IPL the system (or partition).

Using this method allows you to remove the existing storage units over a period of time.
However, it does require that both the old and new units are attached to the system at the
same time so it may require additional IOPs and IOAs if migrating from an ESS to a DS8000.

It may be possible in your environment to re-allocate logical volumes to other IOAs, but
careful planning and implementation will be required.


17.8 Boot from SAN


Traditionally, System i hosts have required the use of an internal disk as a boot drive or load
source unit (LSU). The support for boot from SAN, introduced in i5/OS V5R3M5 and later,
removes this requirement.

17.8.1 Boot from SAN and cloning


Cloning is a new concept for iSeries. Previously, to create a new system image, you had to
perform a full installation of the SLIC and i5/OS.

Boot from SAN support enables you to take advantage of some of the advanced features
available with the DS8000 series and Copy Services functions. One of these functions is
known as FlashCopy; this function allows you to perform a near instantaneous copy of the
data held on a LUN or group of LUNs. Therefore, when you have a system that only has
external LUNs with no internal drives, you are able to create a clone of your system.

Important: When we refer to a clone, we are referring to a copy of a system that only uses
external LUNs. Boot (or IPL) from SAN is therefore a prerequisite for this.

Considerations when cloning:


򐂰 You need enough free capacity on your external storage unit to accommodate the clone.
Additionally, remember that Copy Services functions are resource intensive for the
external storage unit; running them during normal business hours could impact
performance.
򐂰 You should not attach a clone to your network until you have resolved any potential
conflicts that the clone has with the parent system.

17.8.2 Why consider cloning


By using the cloning capability, you can create a complete copy of your entire system in
moments. You can then use this copy in any way you please: for example, to minimize your
backup window, to protect yourself from a failure during an upgrade, or as a fast way to
provide yourself with a backup or test system. All of these tasks can be done with minimal
impact to your production operations.

17.9 AIX on IBM System i


With the announcement of the IBM System i5™ (models 520, 550, 570, and 595), it is now
possible to run AIX in a partition on the i5. This can be either AIX 5L V5.2 or V5.3. All
supported functions of these operating system levels are supported on i5, including HACMP
for high availability and external boot from Fibre Channel devices.

The following i5 I/O adapters are required to attach a DS8000 directly to an i5 AIX partition:
򐂰 0611 Direct Attach 2 Gigabit Fibre Channel PCI
򐂰 0625 Direct Attach 2 Gigabit Fibre Channel PCI-X

It is also possible for the AIX partition to have its storage virtualized, whereby a partition
running OS/400 hosts the AIX partition's storage requirements. In this case, if a DS8000 is
used, it is attached to the OS/400 partition using either of the following I/O adapters:
򐂰 2766 2 Gigabit Fibre Channel Disk Controller PCI


򐂰 2787 2 Gigabit Fibre Channel Disk Controller PCI-X

For more information on running AIX in an i5 partition, refer to the i5 Information Center at:
򐂰 http://publib.boulder.ibm.com/infocenter/iseries/v1r2s/en_US/index.htm?info/iphat
/iphatlparkickoff.htm

Note: AIX will not run in a partition on 8xx and earlier System i hosts.

17.10 Linux on IBM System i


Since OS/400 V5R1, it has been possible to run Linux in a System i partition. On System i
models 270 and 8xx, the primary partition must run OS/400 V5R1 or higher and Linux runs in
a secondary partition. On later i5 systems (models i520, i550, i570, and i595), Linux can run
in any partition.

On both hardware platforms, the supported versions of Linux are:


򐂰 SUSE Linux Enterprise Server 9 for POWER
(New 2.6 Kernel based distribution also supports earlier System i servers)
򐂰 RedHat Enterprise Linux AS for POWER Version 3
(Existing 2.4 Kernel based update 3 distribution also supports earlier System i servers)

The DS8000 requires the following System i I/O adapters to attach directly to a System i or
i5 Linux partition:
򐂰 0612 Linux Direct Attach PCI
򐂰 0626 Linux Direct Attach PCI-X

It is also possible for the Linux partition to have its storage virtualized, where a partition
running i5/OS hosts the Linux partition's storage. This storage can be made up of any
supported storage, such as a mix of internal storage and DS8000s. To use the DS8000 for
this hosted storage running under the i5/OS partition, use either of the following I/O adapters:
򐂰 2766 2 Gigabit Fibre Channel Disk Controller PCI
򐂰 2787 2 Gigabit Fibre Channel Disk Controller PCI-X

More information on running Linux in a System i partition can be found in the System i
Information Center at:
򐂰 V5R2 at: http://publib.boulder.ibm.com/iseries/v5r2/ic2924/index.htm.
򐂰 V5R3 at: http://publib.boulder.ibm.com/infocenter/iseries/v5r3/ic2924/index.htm.

For running Linux in an i5 partition, check the i5 Information Center at:


򐂰 http://publib.boulder.ibm.com/infocenter/iseries/v1r2s/en_US/info/iphbi/iphbi.pdf


Part 5. Maintenance and upgrades
The subjects covered in this part include:
򐂰 Licensed machine code
򐂰 Monitoring with SNMP
򐂰 Remote support
򐂰 Capacity upgrades and CoD


Chapter 18. Licensed machine code


In this chapter we discuss considerations related to the planning and installation of new
licensed machine code (LMC) bundles on the DS8000. The following topics are covered in
this chapter:
򐂰 How new microcode is released
򐂰 Installation process
򐂰 Concurrent and non-concurrent updates
򐂰 HMC code updates
򐂰 Host adapter firmware updates
򐂰 Loading the code bundle
򐂰 Post-installation activities
򐂰 Planning and application


18.1 How new microcode is released


The various components of the DS8000 use firmware that can be updated as new releases
become available. These components include device adapters, host adapters, power
supplies, Fibre Channel interface cards, and processor complexes. In addition, the microcode
and internal operating system that runs on the HMCs and on each partition of the processor
complexes can be updated. As IBM continues to develop the DS8000, new functional
features will also be released through new licensed machine code (LMC) levels.

When IBM releases new microcode for the DS8000, it is released in the form of a bundle. The
term bundle is used because a new code release may include updates for various DS8000
components. These updates are tested together and then the various code packages are
bundled together into one unified release. In general, when referring to what code level is
being used on a DS8000, the term bundle should therefore be used. However, components
within the bundle will have their own revision levels. For instance, bundle 6.0.500.53 contains
DS CLI Version 5.0.500.134 and DS Storage Manager Version 5.0.5.0302. It is important that
you always match your DS CLI version to the bundle installed on your DS8000.

18.2 Installation process


The installation process involves several stages.
1. The S-HMC code will be updated. The new code version will be supplied on CD or
downloaded via FTP. This may potentially involve updates to the internal Linux version of
the S-HMC, updates to the S-HMC licensed machine code, and updates to the firmware of
the S-HMC hardware.
2. New DS8000 licensed machine code (LMC) will be loaded onto the S-HMC and from there
to the internal storage of each server.
3. Occasionally, new PPS and RPC firmware may be released. New firmware can be loaded
into each Rack Power Control (RPC) card and Primary Power Supply (PPS) directly from
the S-HMC. Each RPC and PPS would be quiesced, updated, and resumed one at a time
until all have been updated.
4. Occasionally, new firmware for the hypervisor, service processor, system planar, and I/O
enclosure planars may be released. This firmware can be loaded into each device directly
from the S-HMC. Activation of this firmware may require each processor complex to be
shut down and rebooted, one at a time. This would cause each server on each processor
complex to fail over its logical subsystems to the server on the other processor complex.
Certain updates may not require this step, or it may occur without processor reboots.
5. Updates to the server operating system (currently AIX 5.2) plus updates to the internal
LMC will be performed. Every server in a storage image would be updated one at a time.
Each update would cause each server to fail over its logical subsystems to its partner
server on the other processor complex. This process would also update the firmware
running in each device adapter owned by that server.
6. Updates to the host adapters will be performed. For FICON/FCP adapters, these updates
will impact each adapter for less than 2.5 seconds, and should not affect connectivity. If an
update were to take longer than this, multi-pathing software on the host, or CUIR (for
ESCON and FICON), will be used to direct I/O to a different host adapter.

While the installation process described above may seem complex, it will not require a great
deal of user intervention. The code installer will normally just start the process and then
monitor its progress using the S-HMC.


Attention: An upgrade of your DS8000 microcode may require that you upgrade the level
of your DS CLI version and may also require that you upgrade your S-HMC version. Check
with your IBM representative on the description and contents of the release bundle.

18.3 Concurrent and non-concurrent updates


The DS8000 allows for concurrent microcode updates. This means that code updates can be
installed with all attached hosts up and running with no interruption to your business.

There is also the ability to install microcode update bundles non-concurrently, with all
attached hosts shut down. However, this should not be necessary. This method is usually only
employed at DS8000 installation time.

18.4 HMC code updates


The microcode that runs on the HMC normally gets updated as part of a new code bundle.
The S-HMC can hold up to six different versions of code. Each server can hold three different
versions of code (the previous version, the active version, and the next version).

The HMC portion of the bundle should be updated one to two days before the rest of the
bundle is installed. This is because, prior to the update of the Storage Facility Image (SFI), a
pre-verification process is run to ensure that no pre-existing conditions need to be corrected
before the SFI code update. Because the HMC code update also updates this pre-verification
routine, you should update the HMC code first and then run the updated pre-verification
routine. If problems are detected, there are still one to two days before the scheduled code
installation window in which to correct them.

18.5 Host adapter firmware updates


One of the final steps in the concurrent code load process is the update of the host adapters.
Normally every code bundle will contain new host adapter code. For Fibre Channel cards,
regardless of whether they are being used for open systems attachment, or System z
(FICON) attachment, the update process is concurrent to the attached hosts. The Fibre
Channel cards use a technique known as adapter fast-load. This allows them to switch to the
new firmware in less than 2 seconds. This fast update means that single-pathed hosts, hosts
that boot from Fibre Channel, and hosts that do not have multipathing software do not need to
be shut down during the update; they can keep operating during the host adapter update
because the update is so fast. It also means that no SDD path management should be
necessary.

Remote mirror and copy path considerations


For remote mirror and copy paths that use Fibre Channel ports, there are no considerations.
The ability to perform a fast-load means that no interruption will occur to the remote mirror
operations.

Control Unit Initiated Reconfiguration


For ESCON adapters that are being used for System z host access, there is no ability to
perform a fast-load. However, the DS8000 supports Control Unit Initiated Reconfiguration
(CUIR). With CUIR, the paths to each ESCON adapter will be automatically varied offline, so
that the adapter can be updated. Once the adapter is updated, the paths are automatically


brought online and the process moves to the next ESCON adapter. The use of CUIR removes
any need for operator intervention. CUIR is not used for remote mirror and copy paths.

CUIR for general maintenance


Control Unit Initiated Reconfiguration prevents loss of access to volumes in System z
environments due to wrong path handling. This function automates channel path
management in System z environments in support of selected DS8000 service actions.
Control Unit Initiated Reconfiguration is available for the DS8000 when operated in the z/OS
and z/VM environments. The CUIR function automates channel path vary on and vary off
actions to minimize manual operator intervention during selected DS8000 service actions.

CUIR allows the DS8000 to request that all attached system images set all paths required for
a particular service action to the offline state. System images with the appropriate level of
software support will respond to such requests by varying off the affected paths, and either
notifying the DS8000 subsystem that the paths are offline, or that it cannot take the paths
offline. CUIR reduces manual operator intervention and the possibility of human error during
maintenance actions, at the same time reducing the time required for the maintenance. This
is particularly useful in environments where there are many systems attached to a DS8000.

18.6 Loading the code bundle


When new code bundles are released, they are placed onto the Internet at this location:
ftp://ftp.software.ibm.com/storage/ds8000/updates/

When it comes time to copy the new code bundle onto the DS8000, there are two ways to
achieve this:
򐂰 Load the new code bundle onto the HMC using CDs.
򐂰 Download the new code bundle directly from IBM using FTP.

The ability to download the code from IBM eliminates the need to order or burn CDs.
However, it may require firewall changes to allow the HMC to connect using FTP to the site
listed above.
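
As a simple illustration (not an IBM-provided tool), the following Python sketch uses the standard ftplib module to list the bundle files available in the FTP directory quoted above, for example to check what the HMC would be downloading; it assumes anonymous FTP access is permitted through your firewall:

from ftplib import FTP

# List the DS8000 code bundle files available on the IBM FTP server
with FTP("ftp.software.ibm.com") as ftp:
    ftp.login()                          # anonymous login
    ftp.cwd("storage/ds8000/updates")
    for name in ftp.nlst():              # names of available bundles/directories
        print(name)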

18.6.1 Code update schedule example


You should plan for two code updates per year. Assuming the update is performed on a
Saturday, a proposed schedule for each update could be:
Thursday: Copy or download the new code bundle to the HMCs. Update the HMCs to the
new code bundle. Run the updated pre-verification process. Resolve any issues raised by
the pre-verification process.
Saturday: Update the SFIs.

Note that the actual time required for the concurrent code load will vary based on the bundle
you are currently running and the bundle you are going to. It is not possible to state here how
long updates will take. Always consult with your IBM Service Representative.


18.7 Post-installation activities


Once the code bundle has been installed, you may need to perform the following tasks:
1. Upgrade the DS CLI. For the majority of new release code bundles, there will be a new
release of the DS CLI. Make sure you upgrade to the new version of the DS CLI to take
advantage of any improvements IBM has made.
2. Upgrade the DS HMC. The majority of customers will not use a stand-alone HMC, since
the DS HMC provides all required functions except the offline configurator. If, however, you
have chosen to use a stand-alone HMC, then it may also need to be upgraded.

18.8 Planning and application


IBM may release changes to the DS8000 series Licensed Machine Code. IBM plans to make
most DS8000 series Licensed Machine Code changes available for download by the DS8000
series system from the IBM System Storage technical support Web site. Please note that not
all Licensed Machine Code changes may be available via the support Web site. If the
machine does not function as warranted and your problem can be resolved through your
application of downloadable Licensed Machine Code, you are responsible for downloading
and installing these designated Licensed Machine Code changes as IBM specifies. IBM has
responsibility for installing changes that IBM does not make available for you to download.

The DS8000 series includes many enhancements to make the Licensed Machine Code
change process simpler, quicker, and more automated. If you would prefer, you may request
IBM to install downloadable Licensed Machine Code changes; however, you may be charged
for that service.


Chapter 19. Monitoring with SNMP


This chapter provides information about the SNMP notifications and messages for the
DS8000. In this chapter we cover the following topics:
򐂰 SNMP overview
򐂰 SNMP notifications
򐂰 SNMP configuration


19.1 SNMP overview


SNMP (Simple Network Management Protocol) has become a standard for monitoring an IT
environment. With SNMP a system can be monitored, and event management, based on
SNMP traps, can be automated.

SNMP is an industry-standard set of functions for monitoring and managing TCP/IP-based
networks. SNMP includes a protocol, a database specification, and a set of data objects. A
set of data objects forms a Management Information Base (MIB).

SNMP provides a standard MIB that includes information such as IP addresses and the
number of active TCP connections. The actual MIB definitions are encoded into the agents
running on a system.

MIB-2 is the Internet standard MIB that defines over 100 TCP/IP specific objects, including
configuration and statistical information such as:
򐂰 Information about interfaces
򐂰 Address translation
򐂰 IP, ICMP (Internet-control message protocol), TCP, and UDP

SNMP can be extended through the use of the SNMP Multiplexing protocol (the SMUX
protocol) to include enterprise-specific MIBs that contain information related to a specific
environment or application. A management agent (a SMUX peer daemon) retrieves and
maintains information about the objects defined in its MIB, and passes this information on to a
specialized network monitor or network management station (NMS).

The SNMP protocol defines two terms, agent and manager, instead of the client and server
used in many other TCP/IP protocols:

19.1.1 SNMP agent


An SNMP agent is a daemon process that provides access to the MIB objects on IP hosts
that the agent is running on. The agent can receive SNMP get or SNMP set requests from
SNMP managers and can send SNMP trap requests to SNMP managers.

Agents send traps to the SNMP manager to indicate that a particular condition exists on the
agent system, such as the occurrence of an error. In addition, the SNMP manager generates
traps when it detects status changes or other unusual conditions while polling network
objects.

19.1.2 SNMP manager


An SNMP manager can be implemented in two ways. An SNMP manager can be
implemented as a simple command tool that can collect information from SNMP agents. An
SNMP manager also can be composed of multiple daemon processes and database
applications. This type of complex SNMP manager provides you with monitoring functions
using SNMP. It typically has a graphical user interface for operators. The SNMP manager
gathers information from SNMP agents and accepts trap requests sent by SNMP agents.

19.1.3 SNMP trap


A trap is a message sent from an SNMP agent to an SNMP manager without a specific
request from the SNMP manager.


SNMP defines six generic types of traps and allows definition of enterprise-specific traps. The
trap structure conveys the following information to the SNMP manager:
򐂰 Agent’s object that was affected
򐂰 IP address of the agent that sent the trap
򐂰 Event description (either a generic trap or enterprise-specific trap, including trap number)
򐂰 Time stamp
򐂰 Optional enterprise-specific trap identification
򐂰 List of variables describing the trap

19.1.4 SNMP communication


The SNMP manager sends SNMP get, get-next, or set requests to SNMP agents, which
listen on UDP port 161, and the agents send back a reply to the manager. The SNMP agent
can be implemented on any kind of IP host, such as UNIX workstations, routers, and network
appliances.

You can gather various information on the specific IP hosts by sending the SNMP get and
get-next request, and can update the configuration of IP hosts by sending the SNMP set
request.

The SNMP agent can send SNMP trap requests to SNMP managers, which listen on UDP
port 162. The SNMP trap requests sent from SNMP agents can be used to send warning,
alert, or error notification messages to SNMP managers.

Figure 19-1 SNMP architecture and communication

Note that you can configure an SNMP agent to send SNMP trap requests to multiple SNMP
managers. Figure 19-1 illustrates the characteristics of SNMP architecture and
communication.
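
As a minimal illustration of the manager side (this is not DS8000 or IBM tooling), the following Python sketch listens on UDP port 162 and prints each datagram it receives; a real SNMP manager would decode the ASN.1/BER-encoded trap PDU:

import socket

# SNMP managers receive traps on UDP port 162 (binding to it usually requires privileges)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 162))

while True:
    data, (addr, port) = sock.recvfrom(4096)
    print(f"Received {len(data)} bytes of SNMP trap data from {addr}:{port}")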


19.1.5 Generic SNMP security


The SNMP protocol uses the community name for authorization. Most SNMP
implementations use the default community name public for read-only community, and
private for a read-write community. In most cases, a community name is sent in a plain-text
format between the SNMP agent and manager. Some SNMP implementations have
additional security features, such as the restriction of the accessible IP addresses.

Therefore, you should be careful about SNMP security. At the very least:
򐂰 Do not use the default community name (public and private).
򐂰 Do not allow access to hosts that are running the SNMP agent, from networks or IP hosts
that do not necessarily require access.

You may want to physically secure the network to which you would send SNMP packets by
using a firewall, because community strings are included as plain text in SNMP packets.

19.1.6 Management Information Base (MIB)


The objects, which you can get or set by sending SNMP get or set requests, are defined as a
set of databases called the Management Information Base (MIB). The structure of the MIB is
defined as an Internet standard in RFC 1155; the MIB forms a tree structure.

Most hardware and software vendors provide you with extended MIB objects to support their
own requirements. The SNMP standards allow this extension by using the private sub-tree,
called the enterprise-specific MIB. Because each vendor has a unique MIB sub-tree under
the private sub-tree, there is no conflict among vendors' MIB extensions.

19.1.7 SNMP trap request


An SNMP agent can send SNMP trap requests to SNMP managers to inform them of the
change of values or statuses on the IP host where the agent is running. There are seven
predefined types of SNMP trap requests, as shown in Table 19-1.

Table 19-1 SNMP trap request types


Trap type Value Description

coldStart 0 Restart after a crash.

warmStart 1 Planned restart.

linkDown 2 Communication link is down.

linkUp 3 Communication link is up.

authenticationFailure 4 Invalid SNMP community string was used.

egpNeighborLoss 5 EGP neighbor is down.

enterpriseSpecific 6 Vendor specific event happened.

A trap message contains pairs of an OID and a value, as shown in Table 19-1, to indicate the
cause of the trap. You can use type 6, the enterpriseSpecific trap type, when you have to
send messages that do not fit the other predefined trap types, for example, a disk I/O error or
an application that is down. You can also set an integer value field called Specific Trap on
your trap message.


19.1.8 DS8000 SNMP configuration


SNMP for the DS8000 is designed in such a way that the DS8000 only sends out traps in
case of a notification. The traps can be sent to a defined IP address.

The DS8000 does not have an SNMP agent installed that can respond to SNMP polling. The
SNMP Community Name can be set within the DS Storage Manager GUI or the DS CLI. The
default Community Name is set to public.

The management server that is configured to receive the SNMP traps will receive all the
generic trap 6 and specific trap 3 messages, which will be sent out in parallel with the Call
Home to IBM.

Before configuring SNMP for the DS8000, you are required to get the destination address for
the SNMP trap and also the port information on which the Trap Daemon will listen.

Tip: The standard port for SNMP traps is port 162.

19.2 SNMP notifications


The HMC of the DS8000 will send an SNMPv1 trap in two cases:
򐂰 A serviceable event was reported to IBM via Call Home
򐂰 An event occurred in the Copy Services configuration or processing

A serviceable event will be posted as a generic trap 6 specific trap 3 message. The specific
trap 3 is the only event that is being sent out for serviceable events. For reporting Copy
Services events generic trap 6 and specific traps 100, 101, 102, 200, 202, 210, 211, 212, 213,
214, 215, 216, or 217 will be sent out.

19.2.1 Serviceable event using specific trap 3


In Example 19-1 we see the contents of generic trap 6 specific trap 3. The trap will hold the
information about the serial number of the DS8000, the event number that is associated with
the manageable events from the HMC, the reporting Storage Facility Image (SFI), the system
reference code (SRC), and the location code of the part that is logging the event.

The SNMP trap will be sent out in parallel to a Call Home for service to IBM.

Example 19-1 SNMP special trap 3 of an DS8000


Nov 14, 2005 5:10:54 PM CET
Manufacturer=IBM
ReportingMTMS=2107-922*7503460
ProbNm=345
LparName=null
FailingEnclosureMTMS=2107-922*7503460
SRC=10001510
EventText=2107 (DS 8000) Problem
Fru1Loc=U1300.001.1300885
Fru2Loc=U1300.001.1300885U1300.001.1300885-P1

For open events in the event log, a trap will be sent out every 8 hours until the event is closed.


19.2.2 Copy Services event traps


For the state changes in a remote copy services environment there are 13 different traps
implemented. The traps 1xx are sent out for a state change of a physical link connection. The
2xx traps are sent out for state changes in the logical copy services setup. For all of these
events, no Call Home will be generated and IBM will not be notified.

This chapter describes only the trap messages and the circumstances under which the
DS8000 sends them. For detailed information about these functions and terms, refer to IBM
System Storage DS8000 Series: Copy Services with System z servers, SG24-6787, and IBM
System Storage DS8000 Series: Copy Services in Open Environments, SG24-6788.

Physical connection events


Traps in the 1xx range report a state change of the physical links. The trap is sent out if a
physical remote copy link is interrupted. The link trap is sent from the primary system. The
PLink and SLink columns are only used by the 2105 ESS disk unit.

If one or several links (but not all links) are interrupted, a trap 100, as shown in Example 19-2,
is posted and will indicate that the redundancy is degraded. The RC column in the trap will
represent the reason code for the interruption of the link — reason codes are listed in
Table 19-2 on page 387.

Example 19-2 Trap 100: Remote mirror links degraded


PPRC Links Degraded
UNIT: Mnf Type-Mod SerialNm LS
PRI: IBM 2107-922 75-20781 12
SEC: IBM 2107-9A2 75-ABTV1 24
Path: Type PP PLink SP SLink RC
1: FIBRE 0143 XXXXXX 0010 XXXXXX 15
2: FIBRE 0213 XXXXXX 0140 XXXXXX OK

If all links are interrupted, a trap 101, as shown in Example 19-3, is posted. This event
indicates that no communication between the primary and the secondary system is possible
anymore.

Example 19-3 Trap 101: Remote mirror links down


PPRC Links Down
UNIT: Mnf Type-Mod SerialNm LS
PRI: IBM 2107-922 75-20781 10
SEC: IBM 2107-9A2 75-ABTV1 20
Path: Type PP PLink SP SLink RC
1: FIBRE 0143 XXXXXX 0010 XXXXXX 17
2: FIBRE 0213 XXXXXX 0140 XXXXXX 17

Trap 102, as shown in Example 19-4, is sent once one or more of the interrupted links are
available again and the DS8000 can communicate over them.

Example 19-4 Trap 102: Remote mirror links up


PPRC Links Up
UNIT: Mnf Type-Mod SerialNm LS
PRI: IBM 2107-9A2 75-ABTV1 21
SEC: IBM 2107-000 75-20781 11
Path: Type PP PLink SP SLink RC
1: FIBRE 0010 XXXXXX 0143 XXXXXX OK
2: FIBRE 0140 XXXXXX 0213 XXXXXX OK


Table 19-2 Remote mirror and copy path reason codes


Reason Code Description

00 No path.

01 ESCON path established.

02 Initialization failed. ESCON link reject threshold exceeded when attempting to send ELP or RID
frames.

03 Time out. No reason available.

04 No resources available at primary for the logical path establishment.

05 No resources available at secondary for the logical path establishment.

06 Secondary CU Sequence Number or Logical Sub-system number mismatch.

07 Secondary CU SS ID mismatch or failure of the I/O that collects secondary information for validation.

08 ESCON link is offline. This is caused by the lack of light detection coming from a host, peer, or switch.

09 Establish failed but will retry when conditions change.

0A The primary control unit port or link cannot be converted to channel mode since a logical path is
already established on the port or link. The establish paths operation will not be retried within the
control unit automatically.

0B Reserved for use by StorageTek.

10 Configuration error. The source of the error is one of the following:


򐂰 The specification of the SA ID does not match the installed ESCON adapter cards in the primary
controller.
򐂰 For ESCON paths, the secondary control unit destination address is zero and an ESCON
Director (switch) was found in the path.
򐂰 For ESCON paths, the secondary control unit destination address is non-zero and an ESCON
Director does not exist in the path. That is, the path is a direct connection.

11 Reserved.

12 Reserved.

13 / OK Fibre path established.

14 Fibre Channel Path Link Down.

15 Fibre Channel Path Retry Exceeded.

16 Fibre Channel Path Secondary Adapter not PPRC capable. This could be due to:
򐂰 Secondary Adapter not configured properly, or does not have the correct microcode loaded.
򐂰 The secondary adapter is already a target of 32 different ESSs, DS8000s, or DS6000s.

17 Fibre Channel Path Secondary Adapter not available.

18 Fibre Channel Path Primary Login Exceeded.

19 Fibre Channel Path Secondary Login Exceeded.

Remote copy events


If you have configured Consistency Groups and a volume within this Consistency Group is
suspended due to a write error to the secondary device, trap 200, as shown in Example 19-5
on page 388, will be sent out. One trap per LSS that is configured with the Consistency
Group option will be sent. This trap could be handled by automation software like eRCMF to
freeze this Consistency Group.

Example 19-5 Trap 200: LSS-Pair Consistency Group remote mirror pair error
LSS-Pair Consistency Group PPRC-Pair Error
UNIT: Mnf Type-Mod SerialNm LS LD SR
PRI: IBM 2107-922 75-03461 56 84 08
SEC: IBM 2107-9A2 75-ABTV1 54 84

Trap 202, as shown in Example 19-6, will be sent out if a remote copy pair goes into a
suspended state. The trap contains the serial number (SerialNm) of the primary and
secondary machine, the LSS (LS), and the logical device (LD). To avoid SNMP trap flooding,
the number of SNMP traps for the LSS is throttled. The complete suspended pair information
is represented in the summary. The last row of the trap represents the suspend state for all
pairs in the reporting LSS. The suspended pair information is a 64-character hexadecimal
string. By converting this hex string into binary, each bit represents a single device: if the bit
is 1, the device is suspended; otherwise, the device is still in full duplex mode.

Example 19-6 Trap 202: Primary remote mirror devices on LSS suspended due to error
Primary PPRC Devices on LSS Suspended Due to Error
UNIT: Mnf Type-Mod SerialNm LS LD SR
PRI: IBM 2107-922 75-20781 11 00 03
SEC: IBM 2107-9A2 75-ABTV1 21 00
Start: 2005/11/14 09:48:05 CST
PRI Dev Flags (1 bit/Dev, 1=Suspended):
C000000000000000000000000000000000000000000000000000000000000000
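
Because the bitmap is compact, a small helper makes it easier to read. The following Python sketch is illustrative only; the assumption that the first (most significant) bit corresponds to device 0x00 should be verified in your environment:

def suspended_devices(hex_flags):
    """Return the device positions whose bit is 1 in the PRI Dev Flags string."""
    bits = bin(int(hex_flags, 16))[2:].zfill(len(hex_flags) * 4)
    return [position for position, bit in enumerate(bits) if bit == "1"]

flags = "C000000000000000000000000000000000000000000000000000000000000000"
print(suspended_devices(flags))   # [0, 1]: the first two devices are flagged as suspended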

Trap 210, as shown in Example 19-7, is sent out when a Consistency Group in a Global Mirror
environment was successfully formed.

Example 19-7 Trap210: Global Mirror initial Consistency Group successfully formed
2005/11/14 15:30:55 CET
Asynchronous PPRC Initial Consistency Group Successfully Formed
UNIT: Mnf Type-Mod SerialNm
IBM 2107-922 75-20781
Session ID: 4002

Trap 211, as shown in Example 19-8, will be sent out if the Global Mirror setup has entered a
severe error state, in which no further attempts are made to form a Consistency Group.

Example 19-8 Trap 211: Global Mirror Session is in a fatal state


Asynchronous PPRC Session is in a Fatal State
UNIT: Mnf Type-Mod SerialNm
IBM 2107-922 75-20781
Session ID: 4002

Trap 212, as shown in Example 19-9 on page 389, is sent out when a Consistency Group
could not be created in a Global Mirror relation. Some of the reasons could be:
򐂰 Volumes have been taken out of a copy session.
򐂰 The remote copy link bandwidth might not be sufficient.
򐂰 The FC link between the primary and secondary system is not available.


Example 19-9 Trap 212: Global Mirror Consistency Group failure - Retry will be attempted
Asynchronous PPRC Consistency Group Failure - Retry will be attempted
UNIT: Mnf Type-Mod SerialNm
IBM 2107-922 75-20781
Session ID: 4002

Trap 213, as shown in Example 19-10, will be sent out when a Consistency Group in a Global
Mirror environment could be formed after a previous Consistency Group formation failure.

Example 19-10 Trap 213: Global Mirror Consistency Group successful recovery
Asynchronous PPRC Consistency Group Successful Recovery
UNIT: Mnf Type-Mod SerialNm
IBM 2107-9A2 75-ABTV1
Session ID: 4002

Trap 214, as shown in Example 19-11, will be sent out if a Global Mirror Session is terminated
using the DS CLI command rmgmir or the corresponding GUI function.

Example 19-11 Trap 214: Global Mirror Master terminated


2005/11/14 15:30:14 CET
Asynchronous PPRC Master Terminated
UNIT: Mnf Type-Mod SerialNm
IBM 2107-922 75-20781
Session ID: 4002

Trap 215, as shown in Example 19-12, will be sent out if, in the Global Mirror environment,
the Master has detected a failure to complete the FlashCopy commit. The trap is sent after a
number of commit retries have failed.

Example 19-12 Trap 215: Global Mirror FlashCopy at remote site unsuccessful
Asynchronous PPRC FlashCopy at Remote Site Unsuccessful
A UNIT: Mnf Type-Mod SerialNm
IBM 2107-9A2 75-ABTV1
Session ID: 4002

Trap 216, as shown in Example 19-13, will be sent out if a Global Mirror Master cannot
terminate the Global Copy relationship at one of its Subordinates (slaves). This might occur if
the Master is terminated with rmgmir but cannot terminate the copy relationship on the
Subordinate. The user may need to run rmgmir against the Subordinate to prevent any
interference with other Global Mirror sessions.

Example 19-13 Trap 216: Global Mirror Subordinate termination unsuccessful


Asynchronous PPRC Slave Termination Unsuccessful
UNIT: Mnf Type-Mod SerialNm
Master: IBM 2107-922 75-20781
Slave: IBM 2107-921 75-03641
Session ID: 4002

Trap 217, as shown in Example 19-14 on page 390, will be sent out if a Global Mirror
environment was suspended by the DS CLI command pausegmir or the corresponding GUI
function.


Example 19-14 Trap 217: Global Mirror paused


Asynchronous PPRC Paused
UNIT: Mnf Type-Mod SerialNm
IBM 2107-9A2 75-ABTV1
Session ID: 4002

19.3 SNMP configuration


SNMP for the DS8000 is designed to send out traps as notifications. The traps can be sent
to up to two defined IP addresses. The DS8000 does not have an SNMP agent installed that
can respond to SNMP polling. Also, the SNMP community name is fixed and set to public.

SNMP preparation on the DS HMC


During the planning for the installation (see 9.4.8, “Monitoring with the DS HMC” on
page 157), the IP addresses of the management system are provided to the IBM service
personnel, who apply this information during the installation. The service personnel can also
configure the HMC to either send notification for every serviceable event, or send notification
only for those events that Call Home to IBM.

The network management server that is configured on the HMC will receive all the generic
trap 6 specific trap 3 messages, which are sent out in parallel with any Call Home to IBM.

SNMP preparation with the DS CLI


The configuration for receiving the Copy Services related traps will be done using the DS CLI.
Example 19-15 shows how SNMP will be enabled using the chsp command.

Example 19-15 configuring the SNMP using dscli


dscli> chsp -snmp on -snmpaddr 10.10.10.11,10.10.10.12
Date/Time: November 16, 2005 10:14:50 AM CET IBM DSCLI Version: 5.1.0.204
CMUC00040I chsp: Storage complex IbmStoragePlex_2 successfully modified.

dscli> showsp
Date/Time: November 16, 2005 10:15:04 AM CET IBM DSCLI Version: 5.1.0.204
Name IbmStoragePlex_2
desc ATS #1
acct -
SNMP Enabled
SNMPadd 10.10.10.11,10.10.10.12
emailnotify Disabled
emailaddr -
emailrelay Disabled
emailrelayaddr -
emailrelayhost -

SNMP preparation for the management software


For the DS8000 you can use the ibm2100.mib file, which will be delivered with the DS CLI
CD. Alternatively, you can download the latest version of the DS CLI CD image from:
ftp://ftp.software.ibm.com/storage/ds8000/updates/CLI/


Chapter 20. Remote support


In this chapter we discuss remote support for the DS8000 which includes call home and
remote support via VPN. The following topics are covered in this chapter:
򐂰 Call Home and remote support
򐂰 Connections
򐂰 Optional firewall setup guidelines
򐂰 Additional remote support options


20.1 Call Home and remote support


The DS HMC supports both outbound (Call Home) and inbound (Remote Services) remote
support. In this section we give an overview of these functions and in the following sections
we discuss the connectivity options.

20.1.1 Call Home for service


Call Home is the capability of the DS HMC to contact IBM support services to report a
problem. This is referred to as Call Home for service. The DS HMC provides machine
reported product data (MRPD) information to IBM by way of the Call Home facility. The MRPD
information includes installed hardware, configurations, and features.

The storage complex uses the Call Home method to send heartbeat information to IBM, and
by doing this ensures that the DS HMC is able to initiate a Call Home to IBM in the case of an
error. Should the heartbeat information not reach IBM, a service call to the client is initiated by
IBM to investigate the status of the DS8000. The Call Home can either be configured for
modem or Internet setup. The Call Home service can only be initiated by the DS HMC.

20.1.2 Remote support


IBM Service personnel located outside of the client facility log in to the DS HMC to provide
service and support. The methods available for IBM to connect to the DS HMC are configured
by the IBM SSR at the direction of the client, and may include dial-up only access or access
through the high-speed Internet connection.

For the connection, a virtual private network (VPN) tunnel is established using the IP Security
(IPSec) protocol. The VPN tunnel is an end-to-end IPSec connection between the HMC and
the IBM network.

Allowing the HMC to also set up a VPN connection to IBM adds redundancy to your
environment: the HMC can use the VPN tunnel over Ethernet to do the Call Home to IBM, or
fall back to the modem line if the Call Home over the network cannot be established.

Tip: For detailed information, IP addresses, and port information about remote support
using an Internet connection, refer to the document VPN Security and Implementation,
which is available in the DS8000 Document Library under the topic VPN implementation at:
http://www-03.ibm.com/servers/storage/support/disk/ds8100/installing.html
or directly access this URL to download the PDF document:
http://www-1.ibm.com/support/docview.wss?&rs=1114&uid=ssg1S1002693

20.2 Connections
The remote support can either be configured for modem or Internet setup.

20.2.1 Dial-up connection


This is a low-speed asynchronous modem connection to a telephone line, suitable only for
small data transfers. When configuring a dial-up connection, have the following information
available:
򐂰 Which dialing mode will be used, either tone or pulse
򐂰 Whether a dialing prefix is required when dialing an outside line


20.2.2 Secure high-speed connection


This connection is through a high-speed Ethernet connection that can be configured through
a secure virtual private network (VPN) Internet connection to ensure authentication and data
encryption. IBM has chosen to use a graphical interface (WebSM) for servicing the storage
facility, and for the problem determination activity logs, error logs, and diagnostic dumps that
may be required for effective problem resolution. These logs can be significant in size. For this
reason, a high-speed connection would be the ideal infrastructure for effective support
services.

A remote connection can be configured to meet the following client requirements:


򐂰 Allow call on error (machine detected).
򐂰 Allow connection for a few days (client initiated).
򐂰 Allow remote error investigation (service initiated).

Note: IBM recommends that the DS HMC be connected to the client’s public network over
a secure VPN connection, instead of a dial-up connection.

20.2.3 Establish a remote support connection


In cases where a remote support session may only be established manually, the VPN tunnel
can be established by the service representative directly from the service panels available
on the HMC, or via the DS CLI. Example 20-1 shows the DS CLI command setvpn, which
provides a client with the ability to establish a VPN tunnel to IBM, so that IBM remote support
personnel can connect to the DS8000.

Example 20-1 DS CLI command setvpn


Date/Time: October 31, 2005 1:54:28 PM CET IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
dscli> setvpn -action connect
Date/Time: October 31, 2005 1:54:47 PM CET IBM DSCLI Version: 5.1.0.204
CMUC00232I setvpn: Secure connection is started successfully through the network.
dscli>

20.2.4 Example of connection to IBM


If the microcode running on the HMC decides to notify IBM about the current state of the
system, the HMC initiates a VPN tunnel via the Internet or the phone line and submits the Call
Home data. Then the VPN tunnel is disconnected. The possible connection flow is shown in
Figure 20-1.
shown in Figure 20-1.


Figure 20-1 DS HMC Call Home flow


Figure 20-2 shows the process of establishing a VPN connection. When IBM support
personnel need to connect to the DS8000, they call in over a modem connection and are
presented with an ASCII terminal screen. This screen requests a one-time password, which
is generated from a presented key by an IBM system that calculates the one-time password.
Once access is granted, the support user establishes a VPN tunnel back to IBM.

The figure shows three stages:
1. Request connection using ASCII Terminal session
2. VPN tunnel established back from HMC
3. Logon to HMC using WebSM

Figure 20-2 VPN establish process flow

Depending on the outbound configuration made during the setup of the HMC, the HMC uses
the modem or the network connection to establish the VPN tunnel. The support person then
needs to log into an IBM system, which acts as the other end of the VPN tunnel. Only the
workstation from which the VPN tunnel was opened can connect to this tunnel. The tunnel
stops automatically when the support person logs out, or if it is not used by IBM support
within 10 minutes.

20.2.5 Support authorization via remote support


There are three authorization levels available for the DS8000, as described in Table 20-1.

Table 20-1 DS8000 remote support authority levels


Authorization level Allowed actions

Remote Establish a VPN session back to IBM.


Work with the standard support functions using the WebSM GUI.

PE Establish a VPN session back to IBM.


Work with advanced support functions using the WebSM GUI.

Developer Allowed root access, using ssh, but only if a PE user is logged in via
WebSM.

The DS8000 implements a challenge-response authentication scheme for these three
authorization levels. The goal is to ensure that only specially trained support personnel
gain the level of privilege that matches their expertise.


20.3 Optional firewall setup guidelines


The optional firewall shown in Figure 20-3 can be added for additional security. With a
stateful firewall, the rules would look like this (a hedged example follows the list):
򐂰 The HMC is allowed to establish connections to the IBM VPN servers only, and only
return traffic from these servers is allowed back to the HMC.
򐂰 To use the DS CLI, the stateful firewall also needs a rule that allows computers in the
client network to establish connections to the HMC, while the HMC cannot establish
connections to other computers in the intranet.
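
As an illustration only, the following sketch shows how these two rules might be expressed
on a Linux-based stateful packet filter using iptables. The tool choice and all addresses are
assumptions for the sketch (10.1.1.50 stands for the DS HMC, 192.0.2.10 for one IBM VPN
server, and 10.1.0.0/16 for the client network); take the real IBM VPN server addresses from
the VPN implementation document referenced earlier and translate the logic to your own
firewall product.

# HMC may open connections to the IBM VPN server only; allow only the replies back
iptables -A FORWARD -s 10.1.1.50 -d 192.0.2.10 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A FORWARD -s 192.0.2.10 -d 10.1.1.50 -m state --state ESTABLISHED,RELATED -j ACCEPT
# Workstations in the client network may open connections to the HMC (for example, for
# the DS CLI), but the HMC may not open new connections into the intranet
iptables -A FORWARD -s 10.1.0.0/16 -d 10.1.1.50 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A FORWARD -s 10.1.1.50 -d 10.1.0.0/16 -m state --state ESTABLISHED -j ACCEPT
# Everything else is dropped
iptables -A FORWARD -j DROP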

Figure 20-3 DS8000 DS HMC remote support connection

20.4 Additional remote support options


The following are additional remote support options to consider.

20.4.1 VPN implementation using an external HMC


If an IPSec connection between the HMC and IBM cannot be established through the client
intranet, an external HMC (FC 1110) can be used. As shown in Figure 20-4 on page 396, an
external HMC located between the two firewalls in the demilitarized zone (DMZ) of the
network can make the IPSec connection to IBM. However, this setup requires additional
considerations: your management clients should be able to communicate with both the
internal and the external HMC; otherwise, they may not be able to use the DS Storage
Manager or the DS CLI if the internal HMC is not reachable on the network.


Figure 20-4 DS8000 HMC alternate remote support connection

20.4.2 Data offload using File Transfer Protocol


If an IPSec connection is not possible, the HMC can be configured so that support data is
automatically offloaded from the HMC using the File Transfer Protocol (FTP). This traffic can
be examined at the firewall. To configure this, nine different methods are available for
authorizing at firewalls or FTP proxy servers. The FTP offload has the advantage that IBM
service personnel can log in to the HMC over the modem line while support data is
transmitted to IBM via FTP. When a direct FTP connection to the IBM server is not possible,
the FTP offload can be configured to use an FTP server internal to the client network. The
client then needs to set up an automated way to transfer this data from the internal FTP
server to the IBM server.
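
If a client internal FTP server is used as the intermediate drop point, the required automation
can be as simple as a scheduled script that pushes newly arrived files on to IBM. The
following is a minimal sketch only: the local directory, the target server name, and the target
directory are placeholders that must be replaced with the values in your support
documentation, and a production version would need logging and error handling.

#!/bin/sh
# Sketch: forward support data collected from the HMC to the IBM FTP server
cd /ftp/hmc-offload || exit 1
for f in *; do
  [ -f "$f" ] || continue
  ftp -n <IBM_FTP_server> <<EOF
user anonymous youraddress@example.com
binary
cd <target_directory>
put $f
quit
EOF
  mv "$f" sent/
done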



Chapter 21. Capacity upgrades and CoD


This chapter discusses aspects of implementing capacity upgrades and Capacity on Demand
(CoD) with the DS8000. The following topics are covered in the present chapter:
򐂰 Installing capacity upgrades
򐂰 Using Capacity on Demand (CoD)


21.1 Installing capacity upgrades


Storage capacity can be ordered and added to the DS8000 unit via disk drive sets. A disk
drive set contains 16 disk drive modules (DDMs) of the same capacity and the same speed
(revolutions per minute, or rpm). Disk drives are available in:
򐂰 73 GB (15k rpm)
򐂰 146 GB (10k rpm)
򐂰 146 GB (15k rpm)
򐂰 300 GB (10k rpm)
򐂰 500 GB (7,200 rpm, FATA)

The disk drives are installed in storage enclosures. A storage enclosure interconnects the
DDMs to the Fibre Channel switch cards that connect to the device adapters. Each storage
enclosure contains a redundant pair of Fibre Channel switch cards. See Figure 21-1.

Figure 21-1 DS8000 disk enclosure assembly

The storage enclosures are always installed in pairs, one enclosure in the front of the unit and
one enclosure in the rear. A storage enclosure pair can be populated with one or two disk
drive sets (16 or 32 DDMs). All DDMs in a disk enclosure pair must be of the same type
(capacity and speed). If a disk enclosure pair is populated with only 16 DDMs, disk drive filler
modules will be installed in the vacant DDM slots. This is to maintain correct cooling airflow
throughout the enclosure.

Each storage enclosure attaches to two separate device adapters (DAs). The DAs are the
RAID adapter cards that connect the processor complexes to the DDMs. The DS8000 DA
cards always come as a redundant pair, so we refer to them as DA pairs.

Physical installation and testing of the device adapters, storage enclosure pairs, and DDMs is
performed by your IBM Service Representative. After the additional capacity is added
successfully, the new storage will appear as additional un-configured array sites.

You may need to obtain new license keys and apply them to the storage image before you
start configuring the new capacity; see Chapter 11, “Features and license keys” on page 197.


You will not be able to create ranks using the new capacity if this causes your machine to
exceed its license key limits.

21.1.1 Installation order of upgrades


Individual machine configurations will vary, so it is not possible to give an exact pattern for the
order in which every storage upgrade will be installed. This is because it is possible to order a
machine with multiple under-populated storage enclosures (SEs) across the device adapter
(DA) pairs. This is done to allow future upgrades to be performed with the least physical
changes. It should be noted, however, that all storage upgrades are concurrent, in that adding
capacity to a DS8000 does not require any down time.

As a general rule, when adding capacity to a DS8000, storage hardware is populated in the
following order:
1. DDMs are added to under-populated enclosures. Whenever you add 16 DDMs to a
machine, eight DDMs are installed into the front storage enclosure and eight into the rear
storage enclosure.
2. Once the first storage enclosure pair on a DA pair is fully populated with DDMs (32 DDMs
total), a second storage enclosure pair can be added to that DA pair.
3. Once a DA pair has two fully populated storage enclosure pairs (64 DDMs total), we add
another DA pair. The base frame supports two DA pairs. The first expansion frame
supports an additional two DA pairs for model 931 (for a total of four DA pairs), or an
additional four DA pairs for a model 932 or 9B2 (for a total of six DA pairs). In addition,
models 932 and 9B2 can have a second expansion frame that supports an additional two
DA pairs (for a total of 8 pairs). The DA cards are installed into the I/O enclosures that are
located at the bottom of the racks. They are not located in the storage enclosures.
4. The last stage varies according to model:
– For a model 931, once we have installed four DA pairs, and each DA pair has two fully
populated storage enclosure pairs (that is, we have 4 DA pairs with 64 DDMs on each
pair), then we start adding extra storage enclosure pairs to two of the DA pairs (the DA
pairs numbered 2 and 0). We can do this until these DA pairs have four storage
enclosure pairs each (128 DDMs per DA pair). At this point two of the DA pairs have 64
DDMs and two of the DA pairs have 128 DDMs. Both racks are now fully populated and
no more capacity can be added.
– For a model 932 or 9B2, once we have installed eight DA pairs, and each DA pair has
two fully populated storage enclosure pairs (that is 8 DA pairs with 64 DDMs on each
pair), then we start adding extra storage enclosure pairs to two of the DA pairs (the DA
pairs numbered 2 and 0). We can do this until these DA pairs have four storage
enclosure pairs each (128 DDMs per DA pair). At this point six of the DA pairs have 64
DDMs and two of the DA pairs have 128 DDMs. All three racks are now fully populated
and no more capacity can be added.

21.1.2 Checking how much capacity is installed


There are four DS CLI commands you can use to check how many DAs, SEs, and DDMs are
installed in your DS8000. These are lsda, lsstgencl, lsddm, and lsarraysite.

In Example 21-1 on page 400 we use these commands to list the DAs, the SEs, the DDMs,
and the array sites in a DS8000. Because there are 64 DDMs in the example machine, only
part of the list of DDMs is shown. If you use the -l parameter for these commands you will
receive additional information. In this example we have one DA pair (two device adapters) and
two storage enclosure pairs (four storage enclosures). Because we have 64 DDMs, we have


eight array sites (since each array site consists of eight DDMs). In this example six of the
array sites are in use, and two are Unassigned (meaning that no array is using that array site).

Example 21-1 Commands to display storage in a DS8000


First we list the device adapters:
dscli> lsda -l IBM.2107-7503461
Date/Time: 18 November 2005 19:52:05 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
ID State loc FC Server DA pair interfs
=========================================================================================================
IBM.1300-001-00886/R-1-P1-C3 Online U1300.001.1300886-P1-C3 - 0 2 0x0230,0x0231,0x0232,0x0233
IBM.1300-001-00887/R-1-P1-C6 Online U1300.001.1300887-P1-C6 - 1 2 0x0360,0x0361,0x0362,0x0363

Now we list the storage enclosures:


dscli> lsstgencl IBM.2107-7503461
Date/Time: 18 November 2005 19:35:21 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
ID Interfaces interadd stordev cap (GB) RPM
=====================================================================================
IBM.2107-D01-00106/R1-S22 0x0231,0x0361,0x0230,0x0360 0x1 16 146.0 10000
IBM.2107-D01-00110/R1-S11 0x0233,0x0363,0x0232,0x0362 0x0 16 146.0 10000
IBM.2107-D01-00125/R1-S21 0x0231,0x0361,0x0230,0x0360 0x0 16 146.0 10000
IBM.2107-D01-00196/R1-S12 0x0233,0x0363,0x0232,0x0362 0x1 16 146.0 10000

Now we list the DDMs:


dscli> lsddm IBM.2107-7503461
Date/Time: 18 November 2005 19:36:13 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
ID DA Pair dkcap (10^9B) dkuse arsite State
===============================================================================
IBM.2107-D01-00106/R1-P1-D1 2 146.0 array member S5 Normal
IBM.2107-D01-00106/R1-P1-D2 2 146.0 array member S4 Normal
IBM.2107-D01-00106/R1-P1-D3 2 146.0 array member S1 Normal
IBM.2107-D01-00106/R1-P1-D4 2 146.0 array member S6 Normal
....
Now we list the array sites:
dscli> lsarraysite -l -dev IBM.2107-7503461
Date/Time: 18 November 2005 19:51:42 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
arsite DA Pair dkcap (10^9B) diskrpm State Array
===================================================
S1 2 146.0 10000 Assigned A4
S2 2 146.0 10000 Assigned A5
S3 2 146.0 10000 Assigned A6
S4 2 146.0 10000 Assigned A7
S5 2 146.0 10000 Assigned A0
S6 2 146.0 10000 Assigned A1
S7 2 146.0 10000 Unassigned A2
S8 2 146.0 10000 Unassigned A3

21.2 Using Capacity on Demand (CoD)


IBM offers capacity on demand (CoD) solutions that are designed to meet the changing
storage needs of rapidly growing e-businesses. This section discusses CoD on the DS8000.

There are various rules about CoD and these are explained in the IBM System Storage
DS8000 Introduction and Planning Guide, GC35-0515. The purpose of this chapter is to
explain aspects of implementing a DS8000 that has CoD disk packs.


21.2.1 What is Capacity on Demand


The Standby CoD offering is designed to provide you with the ability to tap into additional
storage and is particularly attractive if you have rapid or unpredictable growth, or if you simply
want the knowledge that the extra storage will be there when you need it.

In many database environments, it is not unusual to have very rapid growth in the amount of
disk space required for your business. This can create a problem if there is an unexpected
and urgent need for disk space and no time to raise a purchase order or wait for the disk to be
delivered.

With this offering, up to four Standby CoD disk drive sets (64 disk drives) can be factory or
field installed into your system. To activate the storage, you simply configure the disk drives
for use logically, a nondisruptive activity that does not require intervention from IBM. Upon
activation of any portion of a Standby CoD disk drive set, you must place an order with IBM
to initiate billing for the activated set. At that time, you can also order replacement CoD disk
drive sets.

This offering allows you to purchase licensed functions based upon your machine's physical
capacity, excluding un-configured Standby CoD capacity. This can help improve your cost of
ownership, because your extent of IBM authorization for licensed functions can grow at the
same time that your disk capacity needs to grow.

Contact your IBM representative to obtain additional information regarding Standby CoD
offering terms and conditions.

21.2.2 How you can tell if a DS8000 has CoD


A common question is how to determine if a DS8000 has CoD disks installed. There are two
important indicators that you need to check for:
򐂰 Is the CoD indicator present in the Disk Storage Feature Activation (DSFA) Web site?
򐂰 What is the OEL limit displayed by the lskey DS CLI command?

Checking on the DSFA Web site


To check for the CoD indicator on the DSFA Web site you need to get the machine signature
and then log on to DSFA to view the machine summary. First log on to the DS CLI as shown
in Example 21-2. You need to execute the showsi -fullid command to display the machine
signature. The signature is a unique value that can only be accessed from the machine.

Example 21-2 Displaying the machine signature


C:\Program Files\IBM\dscli\profile>dscli
Date/Time: 20 October 2005 23:58:24 IBM DSCLI Version: 5.1.0.204 DS:
IBM.2107-75ABCDE
dscli> showsi -fullid IBM.2107-75ABCDE
Date/Time: 20 October 2005 23:59:20 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABCDE
Name ATS_3_EXP
desc -
ID IBM.2107-75ABCDE
Storage Unit IBM.2107-75ABCD0
Model 922
WWNN 5005076303FFABCD
Signature 1234-5678-9abc-defg
State Online
ESSNet Enabled
Volume Group IBM.2107-75ABCDE/V0
os400Serial -


Now using the signature, log on to the DSFA Web site at:
https://www-306.ibm.com/storage/dsfa/index.jsp.

On the View Machine Summary tab you will see whether the CoD indicator is on or off. In
Figure 21-2 you can see an example of a 2107-921 machine that has the CoD indicator.

Figure 21-2 CoD machine

If instead you see 0900 Non-Standby CoD then the CoD feature has not been ordered for
your machine.

Checking your machine to see if CoD is present


Normally, new features or feature limits are activated using the DS CLI applykey command.
However, CoD does not have a discrete key. Instead, the CoD feature is installed as part of
the Operating Environment License (OEL) key. The interesting thing is that an OEL key that
activates CoD changes the feature limit from the limit that you have paid for to the largest
possible number. In Example 21-3 you can see how the OEL key is changed. The machine in
this example is licensed for 50 TB of OEL but actually has 52 TB of disk installed (as it has 2
TB of CoD disks). If the user attempts to create ranks using the final 2 TB of storage, the
command fails because it would exceed the OEL limit. However, once the new OEL key with
CoD is installed, the OEL limit increases to an enormous number (9 million TB). This means
that rank creation will succeed for the last 2 TB of storage.

Example 21-3 Applying an OEL key that contains CoD


dscli> lskey IBM.2107-75ABCDE
Date/Time: 23 September 2005 12:52:22 IBM DSCLI Version: 5.1.0.41 DS:
IBM.2107-75ABCDE
Activation Key Capacity (TB) Storage Type
==================================================
Operating Environment 50 All
dscli> applykey -key 1234-5678-9ABC-DEF0-1234-5678-1234-5678 IBM.2107-75ABCDE
Date/Time: 23 September 2005 12:52:25 IBM DSCLI Version: 5.1.0.41 DS: IBM.2107-75ABCDE
CMUC00199I applykey: License Machine Code successfully applied to storage image
IBM.2107-75ABCDE.
dscli> lskey IBM.2107-75ABCDE
Date/Time: 23 September 2005 12:52:35 IBM DSCLI Version: 5.1.0.41 DS: IBM.2107-753ABCDE
Activation Key Capacity (TB) Storage Type
================================================
Operating Environment 9999999 All


Note: Older levels of DS8000 microcode may show the OEL key where CoD is present as
being 9223372 TB instead of 9999999 TB. This is not a problem. CoD is still present.

21.2.3 Using the CoD storage


In this section we review the tasks required to start using CoD storage.

How many array sites can be CoD


If CoD storage is installed, there will be a maximum of 64 CoD disk drives. Because 16 drives
make up a drive set, a better way to state this is that a machine can have up to four drive sets
of CoD disk. Because eight drives are used to create an array site, a maximum of
eight array sites of CoD can potentially exist in a machine. If a machine has, for example, 384
disk drives installed, of which 64 disk drives are CoD, then there would be a total of 48 array
sites, of which 8 are CoD.

How a user can tell how many CoD array sites have been ordered
From the machine itself, there is no way to tell how many of the array sites in a machine are
CoD array sites versus array sites you can start using right away. During the machine order
process this should be clearly understood and documented.

Which array sites are the CoD array sites


Using the example of a machine with 48 array sites, of which 8 represent CoD disks, the user
should leave 8 of the array sites un-configured. In other words, start using array sites until
eight remain un-configured. This assumes that all the disk drives are the same size. It is
possible to order CoD drive sets of different sizes. In this case, the user needs to understand
how many of each size have been ordered and ensure that the correct number of array sites
of each size are left unused until they are needed.

How the user starts using the CoD array sites


Simply use the standard DS CLI (or DS GUI) commands to configure storage starting with
mkarray then mkrank and so on. Once the ranks are members of an Extent Pool, then volumes
can be created. See Chapter 13, “Configuration with DS Storage Manager GUI” on page 223,
and Chapter 14, “Configuration with Command Line interface” on page 267.
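
As an illustration only, the DS CLI sequence for bringing one CoD array site into use might
look like the following sketch. The storage image ID, the array site (S7), and the array, rank,
extent pool, and volume IDs are placeholders; the actual IDs are assigned by the system,
and the full set of parameters is described in the chapters referenced above.

dscli> mkarray -dev IBM.2107-75ABCDE -raidtype 5 -arsite S7
dscli> mkrank -dev IBM.2107-75ABCDE -array A7 -stgtype fb
dscli> mkextpool -dev IBM.2107-75ABCDE -rankgrp 0 -stgtype fb CoD_pool
dscli> chrank -dev IBM.2107-75ABCDE -extpool P4 R7
dscli> mkfbvol -dev IBM.2107-75ABCDE -extpool P4 -cap 100 -name cod_vol 1000-1003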

What if a user accidentally configures a CoD array site


Using the example of a machine with 48 array sites, of which 8 represent CoD disks, if the
user accidentally configures 41 array sites but did not intend to start using the CoD disks yet,
then the rmarray command should immediately be used to return that array site to an
unassigned state. If volumes have been created and those volumes are in use, then the user
has started to use the CoD arrays and should contact IBM to inform IBM that the CoD storage
is now in use.
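
For example (the IDs shown are illustrative), you could list the arrays to identify the one that
was just created and then delete it so that its array site returns to the Unassigned state:

dscli> lsarray -dev IBM.2107-75ABCDE
dscli> rmarray -dev IBM.2107-75ABCDE A40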

What you do once the CoD array sites are in use


Once you have started to use the CoD array sites (remember that IBM requires that a
Standby CoD disk drive set be activated within twelve months of the date of installation, and
that all such activation is permanent), IBM should be contacted so that the CoD indicator can
be removed from the machine; you must place an order with IBM to initiate billing for the
activated set. At that time, you can also order replacement Standby CoD disk drive sets. The
OEL key will need to be changed to reflect that CoD is no longer needed on the machine. If
new CoD disk is ordered and installed, a new key will also be installed.


Appendix A. Data migration


This appendix provides information useful for planning the methods and tools you will use
when migrating data from other disk systems to the DS8000 storage subsystem.
The topics presented in this appendix include:
򐂰 Data migration in open systems environments
򐂰 Data migration in System z environments
򐂰 IBM Migration Services
򐂰 Summary


Data migration in open systems environments


There are numerous methods that can be used to migrate data from one storage system to
another. In the following sections we briefly describe the most common ones and list their
advantages and disadvantages:
򐂰 Migrating with basic copy commands
򐂰 Migrating using volume management software
򐂰 Migrating using backup and restore methods

Migrating with basic copy commands


Using copy commands is the simplest way to move data from one storage disk subsystem to
another. Examples of UNIX commands are:
򐂰 cp
򐂰 cpio
򐂰 tar
򐂰 dump
򐂰 backup, restore

Examples for Windows:


򐂰 scopy
򐂰 xcopy
򐂰 robocopy
򐂰 drag and drop

These commands are available on every system supported for DS8000 attachment, but work
only with data organized on file systems. Data can be copied between file systems with
different sizes. Therefore, these methods can be used for consolidation of small volumes into
large ones.

The most significant disadvantage of this method is that it is disruptive. To preserve data
consistency, the applications writing to the data being migrated have to be interrupted for the
duration of the copy process. Furthermore, some copy commands cannot preserve advanced
metadata, such as access control lists or permissions.
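
As hedged illustrations only (the paths and drive letters are placeholders), the following
commands copy a file tree while preserving ownership and permissions on UNIX, and data
plus security information (ACLs) on Windows with robocopy:

cd /old_fs && tar cf - . | ( cd /new_fs && tar xpf - )

robocopy S:\ T:\ /E /COPYALL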

Important: If your storage systems are attached through multiple paths, make sure that
the multipath drivers for the old storage and the new storage can coexist on the same
system. If not, you have to revert the host to a single path configuration and remove the
incompatible driver from the system before you attach the new storage system.

Copy raw devices


For raw data there are tools that allow you to read and write disk devices directly, such as the
dd command. They copy the data and its organizational structure (metadata) without having
any intelligence about it. Therefore, they cannot be used for consolidation of small volumes
into large ones. Special care has to be taken when data and its metadata are kept in separate
places: both have to be copied and realigned on the target system, because either one is
useless without the other.

This method also requires the disruption of the applications writing to the data for the
complete process.
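
A hedged example with dd on AIX, assuming the application is stopped and that hdisk2 and
hdisk10 are source and target devices of identical size and geometry (the device names are
placeholders):

dd if=/dev/rhdisk2 of=/dev/rhdisk10 bs=1024k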


Migrating using volume management software


Logical Volume Managers (LVMs) are available for most open systems platforms. On the
Windows platform, the equivalent is the Logical Disk Manager (LDM). The LVM or LDM
creates a layer of virtualization within the operating system.

The basic functionality every volume management software provides is to:


򐂰 Extend logical volumes across several physical disks.
򐂰 Stripe data across several physical disks to improve performance.
򐂰 Mirror data for high availability and migration.

The LUNs provided by a DS8000 appear to the LVM or LDM as physical SCSI disks.

Usually the process is to set up a mirror of the data on the old disks to the new LUNs, wait
until it is synchronized, and split it at the cut over time. Some volume management software
provides commands that automate this process.

The biggest advantage of using the volume management software for data migration is that
the process can be totally non-disruptive, as long as the operating system allows you to add
and remove devices dynamically. Due to the virtualization nature of volume management
software, it also allows for all kinds of consolidation.

The major disadvantage of the volume management software mirroring method is that it
requires a lot of system administration intervention and attention. Production data is
manipulated while production is running, and it requires host cycles when the synchronization
of the data is running.

Attention: If you are planning to use volume management software functions to migrate
the data, be aware of limitations such as the maximum number of physical disks in the
same volume group or volume set. Also, if you are consolidating volumes of different
sizes, check the procedures to verify that this is possible.

Mirroring volumes using AIX Logical Volume Manager


In this section we show how to mirror a volume group using the AIX LVM commands. We are
showing the mirrorvg process in the rootvg volume group, but you can use this procedure for
other volume groups in your system.
1. Install or connect the destination disk drive on the system and run the cfgmgr command
that will configure the disk on the operating system. The lspv output shows our original
volume as hdisk0 and the new volume where we are mirroring the data as hdisk1:
root:/ > lspv
hdisk0 000007cac12a0429 rootvg active
hdisk1 None None
2. Use the chdev command to set a PVID to the new volume:
root:/ > chdev -l hdisk1 -a pv=yes
hdisk1 changed
root:/ > lspv
hdisk0 000007cac12a0429 rootvg active
hdisk1 000007ca7b577c70 None
3. Add the new volume to the same volume group of the original volume:
root:/ > extendvg rootvg hdisk1
root:/ > lspv
hdisk0 000007cac12a0429 rootvg active
hdisk1 000007ca7b577c70 rootvg active


The content of the volume group is shown with the lsvg -l command. Note that the ratio
between the LPs and PPs columns is 1 to 1, which means that there is only one physical
copy of each logical partition:
root:/ > lsvg -l rootvg
rootvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
hd5 boot 1 1 1 closed/syncd N/A
hd6 paging 8 8 1 open/syncd N/A
hd8 jfs2log 1 1 1 open/syncd N/A
hd4 jfs2 1 1 1 open/syncd /
hd2 jfs2 21 21 1 open/syncd /usr
hd9var jfs2 1 1 1 open/syncd /var
hd3 jfs2 1 1 1 open/syncd /tmp
hd1 jfs2 1 1 1 open/syncd /home
hd10opt j fs2 1 1 1 open/syncd /opt
lg_dumplv sysdump 80 80 1 open/syncd N/A
download jfs2 200 200 1 open/syncd /downloads
4. Run the mirrorvg command to create the relationship and start the copy of the data:
root:/ > mirrorvg rootvg hdisk1
After the mirroring process ends, the lsvg -l command produces the following output. The
ratio between the LPs and PPs columns is now 1 to 2, which means that each logical
partition is stored on two physical volumes:
root:/ > lsvg -l rootvg
rootvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
hd5 boot 1 2 2 closed/syncd N/A
hd6 paging 8 16 2 open/syncd N/A
hd8 jfs2log 1 2 2 open/syncd N/A
hd4 jfs2 1 2 2 open/syncd /
hd2 jfs2 21 42 2 open/syncd /usr
hd9var jfs2 1 2 2 open/syncd /var
hd3 jfs2 1 2 2 open/syncd /tmp
hd1 jfs2 1 2 2 open/syncd /home
hd10opt jfs2 1 2 2 open/syncd /opt
lg_dumplv sysdump 80 160 1 open/syncd N/A
download jfs2 200 400 2 open/syncd /downloads
Now the volume group is mirrored and the data is consistent on the two volumes, as shown
in the LV STATE column, which indicates the status syncd for all logical volumes.
If you want to remove the mirror, you can use the following command:
#unmirrorvg <vg_name> <hdisk#>
If you want to remove the hdisk1 and keep the hdisk0 active, run the following command:
#unmirrorvg rootvg hdisk1
If you want to remove the hdisk0 and keep the hdisk1 active, run the following command:
#unmirrorvg rootvg hdisk0

Note: You can use the SMIT utility to perform these procedures by accessing the fast path
smit mirrorvg to create a mirror or smit unmirrorvg to remove a mirror.

Mirroring volumes using Windows Logical Disk Manager


Dynamic disks were first introduced with Windows 2000 and provide features that the basic
disks do not. One of these features is the ability to create fault-tolerant volumes. We show in
this section how to create a mirror using the Logical Disk Manager with dynamic disks.


In this example we have two volumes, Disk 8 and Disk 9. The drive letter S: is associated with
Disk 8, which is the current volume running on the system. Disk 9 is the new disk that will be
part of the mirror. See Figure A-1.

Figure A-1 Preparing to mirror

Note: To configure new disks on the system after connecting them to the Windows server,
run the Rescan Disks function in Disk Management.


Figure A-2 shows how to convert the new volume to dynamic disk.

Figure A-2 Convert disk to dynamic

Now, with Disk 9 as a dynamic disk, the system is ready to initiate the mirroring process. As
Figure A-3 illustrates, right-click the source volume (S:) and choose the Add Mirror option.

Figure A-3 Accessing Add Mirror panel


The Add Mirror window will be displayed with a list of available disks. Mark the chosen disk
and then click Add Mirror; see Figure A-4.

Figure A-4 Selecting disks in Add Mirror panel

The synchronization process starts automatically. At this time you will see that both
volumes, Disk 8 and Disk 9, are assigned to the same drive letter S:; see Figure A-5.

Figure A-5 Synchronization process running


Figure A-6 shows the volumes after the synchronization process has finished.

Figure A-6 Synchronization process finished

The next panels show you how to remove a mirror. You can access this option by
right-clicking the selected volume. You have two options:
򐂰 Break Mirrored Volume
The selected volume keeps the original drive letter, and the other volume is automatically
assigned another letter. From this point on, the synchronization process no longer
occurs; the drives have different drive letters, but the data is still on both of them.
򐂰 Remove Mirror
If you choose to remove the mirror, a window is displayed asking you which volume you
want to remove. After the process completes, the selected volume becomes a free disk
to the operating system, with no drive letter and no data on it.


In Figure A-7 we select the option Break Mirrored Volume from Disk 8.

Figure A-7 Break Mirrored Volume option

After you confirm the operation, you will see that Disk 8 changes to drive letter E:; see
Figure A-8. The data is still available, but the disks will not be fault-tolerant.

Figure A-8 Break Mirrored Volume finished


Figure A-9 shows you the disks with the mirror, and this time we selected the Remove Mirror
option. A window opens in which you select which disk will be removed. We select Disk 8.

Figure A-9 Remove Mirror panel

After selecting Remove Mirror, the selected volume becomes available to the operating
system with no drive letter and no data on it. See Figure A-10.

Figure A-10 Remove mirror finished


Migrating using backup and restore methods


Data centers have different ways to back up and restore data. These methods can be used for
data migration. We list this approach here because it shares the common advantages and
disadvantages with the approaches discussed previously, although the tools will not always
be provided natively by the operating system.

All open system platforms and many applications provide native backup and restore
capabilities. They may not be very sophisticated sometimes, but they are often suitable in
smaller environments. In large data centers it is customary to have a common backup
solution across all systems. Either can be used for data migration.

The backup and restore options allow for consolidation because the tools are aware of the
data structures they handle.

One significant difference from most of the other methods is that this method may not require
the source and target storage systems to be connected to the hosts at the same time.

Some of the most common software packages that provide this facility are:
򐂰 IBM Tivoli Storage Manager (TSM)
򐂰 Legato Networker
򐂰 BrightStor ARCserve
򐂰 Veritas NetBackup

Data migration in System z environments


Because today’s business environments do not allow the interruption of the data processing
services, it is crucial to migrate the data to the new storage systems as smoothly as possible.
The configuration changes and the actual data migration ought to be transparent to the users
and applications, with no or only minimal impact on data availability. This requires you to plan
for non-disruptive migration methods and to guarantee data integrity at all times.

This section discusses methods for migrating data from existing disk storage systems onto
the DS8000. Our intention here is to show the possibilities that you have and not provide a
detailed step-by-step migration process description. The following topics are discussed:
򐂰 Data migration based on physical volume migration
򐂰 Data migration based on logical data set migration
򐂰 Combination of physical and logical data migration
򐂰 Copy Services based migration

Data migration based on physical volume migration


We refer here to the physical full volume migration. This requires the same device geometry
on the source and target volumes. The device geometry is defined by the track capacity and
the number of tracks per cylinder. Usually this is not an issue because over time the device
geometry of the IBM 3390 volume has become a quasi standard, and most installations have
used this standard. For organizations still using other device geometry (for example, 3380), it
might be worthwhile to consider a device geometry conversion, if possible. This requires
moving the data on a logical level, that is on a data set level, which allows a reblocking during
the migration from 3380 to 3390.


Physical full volume migration is possible with the following tools:


򐂰 Software based:
– DFSMSdss
– TDMF
– FDRPAS
򐂰 Software and hardware based:
– System z Piper
– z/OS Global Mirror (XRC)
򐂰 Hardware based:
– Global Mirror
– Global Copy
– FlashCopy in combination with either Global Mirror or Global Copy, or both

Data migration based on logical data set migration


Logical data migration is a data set by data set migration that maintains catalog entries as
the data is moved between volumes; therefore, it is not a volume-based migration. This is the
cleanest way to migrate data, and it also allows device conversion, for example, from 3380 to
3390. It also transparently supports multivolume data sets. Logical data migration is a
software-only approach and does not rely on particular volume characteristics or device
geometries.

The following software products and components can be used for logical data migration:
򐂰 DFSMS allocation management.
򐂰 Allocation management by CA-ALLOC.
򐂰 DFSMSdss.
򐂰 DFSMShsm.
򐂰 FDR.
򐂰 System utilities like:
– IDCAMS with REPRO, EXPORT/IMPORT commands
– IEBCOPY for Partitioned Data Sets (PDS) or Partitioned Data Sets Extended (PDSE)
– ICEGENER as part of DFSORT, which can handle sequential data but not VSAM data
sets, which also applies to IEBGENER
򐂰 CA-Favor.
򐂰 CA-DISK or ASM2.
򐂰 Database utilities for data that is managed by certain database managers like DB2 or IMS.
CICS as a transaction manager usually uses VSAM data sets.

Combination of physical and logical data migration


The following approach combines physical and logical data migration:
򐂰 Physical full volume copy to larger capacity volume when both volumes have the same
device geometry (same track size and same number of tracks per cylinder).
򐂰 Use COPYVOLID to keep the original volume label and not confuse catalog management.
You can still locate the data on the target volume through standard catalog search.
򐂰 Adjust the VTOC of the target volume to make the larger volume size visible to the
system, using the ICKDSF REFORMAT command either to refresh the VTOC (REFVTOC)
or to expand it (EXTVTOC). Expanding the VTOC requires you to delete and rebuild the
VTOC index using EXTINDEX in the REFORMAT command.


򐂰 Then perform the logical data set copy operation to the larger volumes. This allows you to
use either DFSMSdss logical copy operations or the system-managed data approach.

When a point is reached where no more data can be moved because the remaining data sets
are in use all the time, some downtime has to be scheduled to move the remaining data. This
might require you to run DFSMSdss jobs from a system that has no active allocations on the
volumes that need to be emptied.

Copy Services based migration


Using the DS8000 Copy Services functions is basically a hardware based method. The
DS8000 remote mirror and copy functions that can be used for data migration are discussed
in length in IBM System Storage DS8000 Series: Copy Services with System z servers,
SG24-6787.

IBM Migration Services


This is the easiest way to migrate data, because IBM will assist you throughout the complete
migration process. In several countries IBM offers migration services. Check with your IBM
sales representative about migration services for your specific environment and needs.

For additional information about available migration services you can refer to the following
IBM Web site: http://www-1.ibm.com/servers/storage/services/disk.html.

Summary
This appendix shows that there are many ways to accomplish data migration. Thorough
analysis of the current environment, evaluation of the requirements, and planning are
necessary. Once you decide on one or more migration methods, refer to the documentation of
the tools you want to use to define the exact sequence of steps to take. Special care must be
exercised when data is shared between more than one host.

The migration might be used as an opportunity to consolidate volumes at the same time.


Appendix B. Tools and service offerings


This appendix gives information about tools available to help you when doing planning,
management, migration, and analysis activities with your DS8000. In this appendix we also
reference the sites where you can find information on the service offerings that are available
from IBM to help you in several of the activities related to the DS8000 implementation.


Capacity Magic
Because of the additional flexibility and configuration options they provide, it becomes a
challenge to calculate the raw and net storage capacity of disk subsystems like the DS8000
and the DS6000. The user has to invest considerable time, needs an in-depth technical
understanding of how spare and parity disks are assigned, and must take into consideration
the simultaneous use of disks with different capacities and configurations that deploy both
RAID 5 and RAID 10.

Capacity Magic, a product from IntelliMagic, performs the physical (raw) to effective (net)
capacity calculations automatically, taking into consideration all applicable rules and the
provided hardware configuration, that is, the number and type of disk drive sets.

Capacity Magic is designed as an easy-to-use tool with a single main dialog. It offers a
graphical interface that allows you to enter the disk drive configuration of a DS8000, DS6000,
or ESS 800; the number and type of disk drive sets; and the RAID type. With this input,
Capacity Magic calculates the raw and net storage capacities; the tool also includes
functionality to display the number of extents that are produced per rank.

Figure B-1 Configuration screen

Figure B-1 shows the configuration screen that Capacity Magic provides for you to specify the
desired number and type of disk drive sets.

Figure B-2 on page 421 shows the resulting output report that Capacity Magic produces. This
report is also helpful to plan and prepare the configuration of the storage in the DS8000, as it
also displays extent count information.


Figure B-2 Capacity Magic output report

Note: Capacity Magic is a product of IntelliMagic and is protected by a license code. For
more information, visit the following Web site: http://www.intellimagic.net/en/.

Disk Magic
Disk Magic, a product from IntelliMagic, is a Windows-based disk subsystem performance
modeling tool. It supports disk subsystems from multiple vendors, but it offers the most
detailed support for IBM subsystems.

The first release was issued as an OS/2 application in 1994, and since then Disk Magic has
evolved from supporting storage control units such as the IBM 3880 and 3990 to supporting
modern, integrated, advanced-function disk subsystems such as the DS8000, DS6000, ESS,
DS4000, and the SAN Volume Controller.

A critical design objective for Disk Magic is to minimize the amount of input the user must
enter, while offering a rich and meaningful modeling capability. The following list provides
some examples of what Disk Magic can model, but it is by no means complete:
򐂰 Move the current I/O load to a different disk subsystem model
򐂰 Merge the I/O load of multiple disk subsystems into a single one
򐂰 Insert a SAN Volume Controller in an existing disk configuration
򐂰 Increase the current I/O load
򐂰 Implement a storage consolidation
򐂰 Increase the disk subsystem cache size
򐂰 Change to larger capacity disk drives
򐂰 Upgrade from SCSI to Fibre Channel host adapters
򐂰 Use fewer or more Logical Unit Numbers (LUN)
򐂰 Activate Metro Mirror


Figure B-3 shows some of the panels that Disk Magic provides for you to input data. The
example shows the panels for input of host adapter information and disk configuration in a
hypothetical case of a DS8000 attaching open systems. Modeling results are presented
through tabular reports.

Figure B-3 Disk Magic Interfaces and Open Disk input screens

Note: Disk Magic is a product of IntelliMagic and is protected by a license code. For more
information, visit the following Web site: http://www.intellimagic.net/en/.

IBM TotalStorage Productivity Center for Disk


The IBM TotalStorage Productivity Center is a standard software package for managing
complex storage environments. One subcomponent of this package is IBM TotalStorage
Productivity Center for Disk, which is designed to help reduce the complexity of managing
SAN storage devices by allowing administrators to configure, manage, and monitor the
performance of storage from a single console.

Productivity Center for Disk is designed to:


򐂰 Configure multiple storage devices from a single console
򐂰 Monitor and track the performance of SAN attached Storage Management Interface
Specification (SMI-S) compliant storage devices
򐂰 Enable proactive performance management by setting performance thresholds based on
performance metrics and the generation of alerts

IBM TotalStorage Productivity Center for Disk centralizes management of networked storage
devices that implement the SNIA SMI-S specification, which includes the IBM System
Storage DS family and SAN Volume Controller (SVC). It is designed to help reduce storage
management complexity and costs while improving data availability, centralizing management
of storage devices through open standards (SMI-S), enhancing storage administrator
productivity, increasing storage resource utilization, and offering proactive management of
storage devices. IBM TotalStorage Productivity Center for Disk offers the ability to discover


storage devices using the Service Location Protocol (SLP) and provides the ability to
configure devices, in addition to gathering event and error logs and launching device-specific
applications or elements.

For more information see Managing Disk Subsystems using IBM TotalStorage Productivity
Center, SG24-7097. Also, refer to:
http://www.ibm.com/servers/storage/software/center/index.html.

Disk Storage Configuration Migrator


IBM offers a service named Disk Storage Configuration Migrator. The purpose of this service
is to migrate the logical configuration of your currently installed ESS disk subsystems to the
DS8000 or DS6000 storage subsystems that you are installing.

Within this service, IBM will propose a possible configuration for the target disk subsystem
that is based on the information the user provides in a questionnaire and the configuration of
the currently used storage disk systems.

Figure B-4 Disk storage configuration migrator

The standard CLI interfaces of the ESS and the DS8000 are used to read, modify, and write
the logical and Copy Services configuration. All information is saved in a data set in the
provided database on a workstation. Via the Graphical User Interface (GUI), the user
information gets merged with the hardware information and it is then applied to the DS8000
subsystem. See Figure B-4.

Note: This approach could also be used to convert DS4000, HSG80, EMC, and Hitachi to
DS8000.

For additional information see the following Web site:


http://web.mainz.de.ibm.com/ATSservices


IBM Global Technology Services — service offerings


IBM can assist you in deploying IBM System Storage DS8000 and DS6000 subsystems, IBM
TotalStorage Productivity Center, and SAN Volume Controller solutions. IBM Global Services
has the right knowledge and expertise to reduce your system and data migration workload, as
well as the time, money and resources needed to achieve a system-managed environment.

For more information, contact your IBM representative or IBM Business Partner, or visit
http://ibm.com/services/

For details on available services, contact your IBM representative or visit the following:
http://www.ibm.com/services/
http://www-03.ibm.com/servers/storage/services/disk.html

For details on available IBM Business Continuity and Recovery Services, contact your IBM
representative or visit http://www.ibm.com/services/continuity

For details on education offerings related to specific products, visit


http://www.ibm.com/services/learning/index.html

Select your country, and then select the product as the category.


Appendix C. Project plan


This appendix shows part of a skeleton for a project plan. Only the main topics are included;
further detail can be added within each individual project.


Project plan skeleton

Figure C-1 Page 1 of project plan skeleton


Figure C-2 Page 2 of project plan skeleton


Related publications

The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this redbook.

IBM Redbooks
For information about ordering these publications, see “How to get IBM Redbooks” on
page 431. Note that some of the documents referenced here may be available in softcopy
only.
򐂰 IBM System Storage DS8000 Series: Copy Services with System z servers, SG24-6787
򐂰 IBM System Storage DS8000 Series: Copy Services in Open Environments, SG24-6788
򐂰 IBM TotalStorage Business Continuity Solutions Guide, SG24-6547
򐂰 The IBM TotalStorage Solutions Handbook, SG24-5250
򐂰 IBM TotalStorage Productivity Center V2.3: Getting Started, SG24-6490
򐂰 Managing Disk Subsystems using IBM TotalStorage Productivity Center, SG24-7097
򐂰 IBM TotalStorage: Integration of the SAN Volume Controller, SAN Integration Server, and
SAN File System, SG24-6097
򐂰 IBM TotalStorage: Introducing the SAN File System, SG24-7057

If you are implementing Copy Services in a mixed technology environment you may be
interested in referring to the following redbooks on the ESS and the DS6000.
򐂰 IBM TotalStorage Enterprise Storage Server Implementing ESS Copy Services in Open
Environments, SG24-5757
򐂰 IBM TotalStorage Enterprise Storage Server Implementing ESS Copy Services with
IBM eServer zSeries, SG24-5680
򐂰 IBM System Storage DS6000 Series: Copy Services with System z servers, SG24-6782
򐂰 IBM System Storage DS6000 Series: Copy Services in Open Environments, SG24-6783
򐂰 DFSMShsm ABARS and Mainstar Solutions, SG24-5089
򐂰 Practical Guide for SAN with pSeries, SG24-6050
򐂰 Fault Tolerant Storage - Multipathing and Clustering Solutions for Open Systems for the
IBM ESS, SG24-6295
򐂰 Implementing Linux with IBM Disk Storage, SG24-6261
򐂰 Linux with zSeries and ESS: Essentials, SG24-7025

Other publications
These publications are also relevant as further information sources. Note that some of the
documents referenced here may be available in softcopy only.
򐂰 IBM System Storage DS8000: Command-Line Interface User's Guide, SC26-7916
򐂰 IBM System Storage DS8000: Host Systems Attachment Guide, SC26-7917


򐂰 IBM System Storage DS8000: Introduction and Planning Guide, GC35-0515


򐂰 IBM System Storage Multipath Subsystem Device Driver User’s Guide, SC30-4131
򐂰 IBM System Storage DS8000: User’s Guide, SC26-7915
򐂰 IBM System Storage DS Open Application Programming Interface Reference, GC35-0516
򐂰 IBM System Storage DS8000 Messages Reference, GC26-7914
򐂰 z/OS DFSMS Advanced Copy Services, SC35-0248
򐂰 Device Support Facilities: User’s Guide and Reference, GC35-0033

Online resources
These Web sites and URLs are also relevant as further information sources:
򐂰 IBM System Storage DS8000 Information Center
http://publib.boulder.ibm.com/infocenter/ds8000ic/index.isp
򐂰 IBM Disk Storage Feature Activation (DSFA) Web site
http://www.ibm.com/storage/dsfa
򐂰 The PSP information
http://www-1.ibm.com/servers/resourcelink/svc03100.nsf?OpenDatabase
򐂰 Documentation for the DS8000
http://www.ibm.com/servers/storage/support/disk/2107.html
򐂰 The Interoperability Matrix
http://www-1.ibm.com/servers/storage/disk/ds8000/interop.html
򐂰 Fibre Channel host bus adapter firmware and driver level matrix
http://knowledge.storage.ibm.com/servers/storage/support/hbasearch/interop/hbaSearch.do
򐂰 ATTO
http://www.attotech.com/
򐂰 Emulex
http://www.emulex.com/ts/dds.html
򐂰 JNI
http://www.jni.com/OEM/oem.cfm?ID=4
򐂰 QLogic
http://www.qlogic.com/support/ibm_page.html
򐂰 IBM
http://www.ibm.com/storage/ibmsan/products/sanfabric.html
򐂰 McDATA
http://www.mcdata.com/ibm/
򐂰 Cisco
http://www.cisco.com/go/ibm/storage
򐂰 CIENA
http://www.ciena.com/products/transport/shorthaul/cn2000/index.asp
򐂰 CNT


http://www.cnt.com/ibm/
򐂰 Nortel
http://www.nortelnetworks.com/
򐂰 ADVA
http://www.advaoptical.com/

How to get IBM Redbooks


You can search for, view, or download Redbooks, Redpapers, Hints and Tips, draft
publications and Additional materials, as well as order hardcopy Redbooks or CD-ROMs, at
this Web site:
ibm.com/redbooks

Help from IBM


IBM Support and downloads
ibm.com/support

IBM Global Services


ibm.com/services


Index
Control Unit Initiated Reconfiguration see CUIR
A cooling 86
AAL 54–55 disk enclosure 66
benefits 55 rack cooling fans 87
activate licenses Copy Services
applying activation codes using the DS CLI 209 event traps 386
functions 198 interfaces 124
address groups 101 CUIR 82
Advanced Function Licenses 155
activation 155
AIX 299 D
boot support 307 DA 50
I/O access methods 306 Fibre Channel 170
LVM configuration 305 daemon 382
on iSeries 370 data migration 415
WWPN 299 backup and restore 415
AIX MPIO 302 Disk Magic 421
applying activation codes using the DS CLI 209 Disk Storage Configuration Migrator 423
architecture 42 IBM Migration Services 417
array sites 94 logical migration 416
arrays 54, 94 open systems environments 406
arrays across loops see AAL physical and logical data migration 416
physical migration 415
summary 417
B volume management software 407
backup and restore 415 zSeries environments 415
base frame 40 data placement 105
battery backup assemblies 66 Data Set FlashCopy 112
battery backup unit see BBU DDMs 55
BBU 87 hot plugable 86
boot support 290 device adapter see DA
Business Continuity 4 disk
business continuity 8 raw 323
disk drive set 7
C disk drives
cables 138 capacity 143
cache management 44 FATA 56, 177
Call Home 385 disk enclosure 50
Call Home/call back 392 power and cooling 66
Capacity Magic 420 Disk Magic 421
CCL Disk Storage Configuration Migrator 423
Command Console LUN (CCL) see CCL disk subsystem 50
CKD volumes 98 disk virtualization 93
allocation and deletion 99 DS API 108
community name 384 DS CLI 108, 126, 140
components 39 applying activation codes 209
used in the DS HMC environment 147 configuring second HMC 163
configuration console
FC port 316 user management 159
volume 317 DS Command-Line Interface see DS CLI
configuration flow 220 DS HMC
configuring the DS8000 271, 280 Call Home/call back 392
Configuring using DS CLI external 162
configuring the DS8000 271, 280 DS HMC planning 145
Consistency Group FlashCopy 113 activation of Advanced Function Licenses 155


components used in the DS HMC environment 147 DA


environment setup 152 data migration 406
hardware and software setup 154 data migration in zSeries environments 415
host prerequisites 156 data placement 176
latest DS8000 microcode 156 DDMs 55
logical flow of communication 147 disk drive set 7
machine signature 155 disk enclosure 50
maintenance windows 156 Disk Magic 421
order confirmation code 155 Disk Storage Configuration Migrator 423
serial number 155 disk subsystem 50
setup 153 distinguish Linux from other operating systems 307
technical environment 147 DS CLI console
time synchronization 157 DS HMC planning 145
using DS SM frontend DS HMC setup 153
using the DS CLI 154 DS8100 Model 921 18
using the DS Open API 154 environment setup 152
DS Open API 126, 154 EPO 41, 88
DS Open Application Interface see DS Open API ESCON 138
DS SM 108, 153 existing reference materials 308
configuring second HMC 164 expansion enclosure 54
user management 161 expansion frame 41
using frontend external DS HMC 162
DS Storage Manager 125 FC port configuration 316
DS Storage Manager see DS SM Fibre Channel disk drives 6
DS6000 Fibre Channel/FICON 138
business continuity 8 FICON 13
Capacity Magic 420 floor type and loading 134
compared to DS8000 12 frames 40
comparison with other DS family members 11 general considerations 288
dynamic LUN/volume creation and deletion 10 HA
large LUN and CKD volume support 10 hardware and software setup 154
simplified LUN masking 10 hardware overview 6
SNMP configuration 385 HBA and operating system settings 291
usermanagement using DS CLI 159 host adapter 6
DS8000 host interface and cables 138
AAL host prerequisites 156
activate license functions 198 I/O enclosure 49
activation of Advanced Function Licenses 155 I/O priority queuing 13
AIX 299 IBM Migration Services 417
AIX MPIO 302 IBM Redbooks 429
applying activation codes using the DS CLI 209 Information Lifecycle Management 4
architecture 42 Infrastructure Simplification 4
arrays 54 input voltage 137
backup and restore 415 interoperability 9
base frame 40 Linux 307
battery backup assemblies 66 Linux issues 309
boot support 290 logical flow of communication 147
Business Continuity 4 logical migration 416
cache management 44 machine signature 155
Call Home/call back 392 maintenance windows 156
Capacity Magic 420 microcode 156
Command Console LUN model comparison 22
common set of functions 11 model naming conventions 16
compared to DS6000 12 model upgrades 26
compared to ESS 11 modular expansion 40
components 39 multipathing support 290
components used in the DS HMC environment 147 multiple allegiance 13
configuration flow 220 network connectivity planning 139
configuring 271, 280 online resources 430
considerations prior to installation 132 OpenVMS 316


openVMS volume shadowing 320 updated and detailed information 288


order confirmation code 155 using DS SM frontend
other publications 429 using the DS CLI 154
p5 570 41 using the DS Open API 154
PAV volume configuration 317
performance 12, 167 volume management software 407
physical and logical data migration 416 VSE/ESA 341
physical migration 415 Windows 291
placement of data 105 Windows Server 2003 VDS support 295
planning for growth 144 z/OS considerations 335
positioning 11 z/OS Metro/Global Mirror 9
power and cooling 65 z/VM considerations 340
power connectors 136 zSeries performance 13
power consumption 137 DS8100
power control 88 Model 921 18
power control features 137 processor memory 48
Power Line Disturbance (PLD) 138 DS8300
power requirements 136 Copy Services 35
POWER5 6, 46, 171 FlashCopy 36
PPS 66 LPAR 30
prerequisites and enhancements 334 LPAR benefits 36
processor complex 46 LPAR implementation 31
processor memory 48 LPAR security 34
project plan skeleton 426 Model 9A2 configuration options 33
project planning 223 processor memory 48
rack operator panel 41 Remote Mirror and Copy 36
RAS dsk 322
Remote Mirror and Copy connectivity 142 dynamic LUN/volume creation and deletion 10
remote power control 141
remote support 140
RIO-G 48 E
room space and service clearance 135 EPO 41, 88
RPC ESCON 81, 128, 138
SAN connection 142 architecture 64
SARC 44 distances 64
scalability 10, 23 Remote Mirror and Copy 64
benefits 24, 26 supported servers 64
for capacity 23 ESS
for performance 24 compared to DS8000 11
SDD 300 ESS 800
SDD for Windows 292 Capacity Magic 420
serial number 155 Ethernet
series overview 4 switches 67, 89
server-based 44 expansion frame 41
service 10 extent pools 96
service processor 48 external DS HMC 162
setup 10
S-HMC 66 F
SMP 44 FATA 56
spares 54 FATA disk drives 177
sparing considerations 142 capacity 143
SPCN differences with FC 57
storage capacity 7 evolution of ATA 56
stripe size 178 performance 184
summary 417 positioning vs FC 58
supported environment 7 the right application 60
switched FC-AL 53 vs. Fibre Channel 62
technical environment 147 FC port configuration 316
time synchronization 157 FC-AL
troubleshooting and monitoring 314 non-switched 51


overcoming shortcomings 168 AIX MPIO 302


switched 6 boot support 290
Fibre Attached Technology Adapted 56 Command Console LUN 319
Fibre Channel distinguish Linux from other operating systems 307
distances 65 existing reference materials 308
host adapters 64 FC port configuration 316
Fibre Channel/FICON 138 general considerations 288
FICON 13, 81 HBA and operating system settings 291
host adapters 64 Linux 307
fixed block LUNs 98 Linux issues 309
FlashCopy 36, 108, 370 multipathing support 290
benefits 110 OpenVMS 316
Consistency Group 113 openVMS volume shadowing 320
data set 112 prerequisites and enhancements 334
establish on existing RMC primary 113–114 SDD 300
inband commands 9, 115 SDD for Windows 292
Incremental 111 support issues 307
incremental 8 supported configurations (RPQ) 290
Multiple Relationship 112 troubleshooting and monitoring 314
multiple relationship 8 updated and detailed information 288
no background copy 110 VDS support 295
options 111 volume configuration 317
persistent 115 VSE/ESA 341
Refresh Target Volume 111 Windows 291
floor type and loading 134 z/OS considerations 335
frames 40 z/VM considerations 340
base 40 zSeries 334
expansion 41 HyperPAV 13
frontend Hypervisor 74
functions
activate license 198
I
I/O enclosure 49, 66
G I/O priority queuing 13
GDS for MSCS 298 IASP 348
general considerations 288 IBM Migration Services 417
Geographically Dispersed Sites for MSCS see GDS for IBM Redbooks 429
MSCS IBM TotalStorage Multipath Subsystem Device Driver see
Global Copy 9, 108, 117, 123, 367 SDD
Global Mirror 9, 108, 117, 124 IBM Virtualization Engine
how works 118 LPAR 28
inband commands 9, 115
Incremental FlashCopy 8, 111
H Independent Auxiliary Storage Pool 348
HA 6, 63 Independent Auxiliary Storage Pools see IASP
hardware Information Lifecycle Management 4
setup 154 Infrastructure Simplification 4
host Input Output Adapter (IOA) 345
interface 138 Input Output Processor (IOP) 345
prerequisites microcode 156 input voltage 137
host adapter installation
four port 170 DS8000 checklist 132
host adapter see HA iSeries 370
host adapters adding multipath volumes using 5250 interface 358
Fibre Channel 64 adding volumes 346
FICON 64 adding volumes to an IASP
host attachment 102 adding volumes using iSeries Navigator 359
host connection AIX on 370
zSeries 82 avoiding single points of failure 356
host considerations cache 364
AIX 299 changing from single path to multipath 363


changing LUN protection 345 M


configuring multipath 357 machine signature 155
connecting via SAN switches 367 maintenance windows 156
Global Copy 367 Management Information Base (MIB) 382
hardware 344 Metro Mirror 9, 108, 116, 123, 367
Linux 371 MIB 382, 384
logical volume sizes 344 microcode updates
LUNs 99 installation process 376
managing multipath volumes using iSeries Navigator migrating using volume management software 407
360 mirroring 306
Metro Mirror 367 modular expansion 40
migration to DS8000 367 multipathing support 290
multipath 355 multiple allegiance 13
multipath rules for multiple iSeries systems or parti- Multiple Relationship FlashCopy 8, 112
tions 363
number of fibre channel adapters 365
OS/400 data migration 368 N
OS/400 mirroring 367 network connectivity planning 139
planning for arrays and DDMs 364 NMS 382
protected versus unprotected volumes 345 non-volatile storage see NVS
recommended number of ranks 366 NVS 78
sharing ranks with other servers 366 NVS recovery
size and number of LUNs 365
sizing guidelines 363
software 344
O
online resources 430
using 5250 interface 346
open systems
cache size 176
L performance 176
large LUN and CKD volume support 10 sizing 176
Linux 307 OpenVMS 316
on iSeries 371 openVMS volume shadowing 320
Linux issues 309 order confirmation code 155
logical migration 416 OS/400 data migration 368
logical partition see LPAR OS/400 mirroring 367
logical subsystem see LSS other publications 429
logical volumes 97
LPAR 28, 70
application isolation 30
P
p5 570 41
benefits 36
panel
Copy Services 35
rack operator 41
DS8300 30
Parallel Access Volumes see PAV
DS8300 implementation 31
partitioning
Hypervisor 74
concepts 28
increased flexibility 30
PAV 13
production and test environments 29
performance
security through Power Hypervisor 34
data placement 176
storage facility image 30
FATA disk drives 62, 177, 184
why? 29
FATA positioning 58
LSS 100
FATA the right application 60
LUN 370
LVM striping 177
LUNs
open systems 176
allocation and deletion 99
determing the connections 179
fixed block 98
determining the number of paths to a LUN 179
iSeries 99
where to attach the host 179
masking 10
workload characteristics 176
LVM
z/OS 180
configuration 305
channel consolidation 182
mirroring 306
configuration recommendations 184
striping 177, 305
connect to zSeries hosts 180


disk array sizing 183 considerations prior to installation 132


processor memory size 182 physical planning 131
Persistent FlashCopy 115 roles 132
PFA 86 skeleton 426
physical and logical data migration 416 project plan skeleton 426
physical migration 415 project planning 223
physical planning 131 information required 133
delivery and staging area 133 PTC 110
floor type and loading 134
host interface and cables 138
input voltage 137 R
network connectivity planning 139 rack operator panel 41
planning for growth 144 rack power control cards see RPC
power connectors 136 RAID-10
power consumption 137 AAL 85
power control features 137 drive failure 84
Power Line Disturbance (PLD) 138 implementation 84
power requirements 136 theory 84
Remote Mirror and Copy connectivity 142 RAID-5
remote power control 141 drive failure 84
remote support 140 implementation 83
room space and service clearance 135 theory 83
sparing considerations 142 ranks 95
storage area network connection 142 RAS 69
placement of data 105 CUIR
planning disk scrubbing 86
DS Hardware Management Console 129 disk subsystem 82
logical 129 disk path redundancy 82
physical 129 EPO 88
project 129 fault avoidance 72
planning for growth 144 first failure data capture 72
positioning 11 host connection availability 79
power 66 Hypervisor 74
BBU I/O enclosure 75
building power lost 87 metadata 76
disk enclosure 66 microcode updates 88
fluctuation protection 87 installation process 376
I/O enclosure 66 naming 70
PPS NVS recovery
processor enclosure 66 permanent monitoring 72
RPC 87 PFA
power and cooling 65, 86 power and cooling 86
BBU processor complex 72
PPS RAID-10 84
rack cooling fans 87 RAID-5 83
RPC 87 RIO-G 75
power connectors 136 server 75
power consumption 137 server failover and failback 76
power control features 137 S-HMC 89
Power Hypervisor 34 spare creation 85
Power Line Disturbance (PLD) 138 raw disk 323
power requirements 136 Recovery Point Objective see RPO
POWER5 6, 46, 171 Redbooks Web site 431
PPS 86 Contact us xxvi
Predictive Failure Analysis see PFA reference materials 308
primary power supply see PPS related publications 429
processor complex 46, 71 help from IBM 431
processor enclosure how to get IBM Redbooks 431
power 66 online resources 430
project plan other publications 429
reliability, availability, serviceability see RAS


Remote Mirror and Copy 36, 108, 142 SNMP agent 382–383
ESCON 64 SNMP manage 382
Remote Mirror and Copy function see RMC SNMP trap 382, 384
Remote Mirror and Copy see RMC SNMP trap request 382
remote power control 141 software
remote support 140 setup 154
Requests for Price Quotation see RPQ spares 54, 85
RIO-G 48 floating 85
RMC 9, 115 sparing considerations 142
Global Copy 117 SPCN 48
Global Mirror 117 Standby Capacity on Demand see Standby CoD
Metro Mirror 116 Standby CoD 7
room space 135 storage area network connection 142
RPC 65 storage capacity 7
RPO 124 storage complex 70
RPQ 290 storage facility image 30, 70
addressing capabilities 37
hardware components 32
S I/O resources 32
SAN 81 processor and memory allocations 32
SAN LUNs 370 RIO-G interconnect separation 32–33
SARC 12, 44 Storage Hardware Management Console see S-HMC
SATA 56 storage unit 70
scalability 10 stripe
DS8000 size 178
scalability 174 summary 417
SDD 12, 179, 300 switched FC-AL 6
for Windows 292 advantages 52
Sequential Prefetching in Adaptive Replacement Cache DS8000 implementation 53
see SARC System i
serial number 155 protected volume 345
server system power control network see SPCN
RAS 75
server-based SMP 44
service clearance 135 T
service processor 48 time synchronization 157
settings tools
HBA and operating system 291 Capacity Magic 420
S-HMC 7, 66, 89, 124 TPC for Disk 422
simplified LUN masking 10 trap 382, 384
sizing troubleshooting and monitoring 314
open systems 176
z/OS 180
SMUX 382 U
SNMP 382 user management using DS SM 161
configuration 385, 390 user management
Copy Services event traps 386 using DS CLI 159
notifications 385 using DS SM 161
preparation for the management software 390
preparation on the DS HMC 390 V
preparation with DS CLI 390 VDS support
trap 101 386 Windows Server 2003 295
trap 202 388 virtualization
trap 210 388 abstraction layers for disk 93
trap 211 388 address groups 101
trap 212 388 array sites 94
trap 213 389 arrays 94
trap 214 389 benefits 106
trap 215 389 concepts 91
trap 216 389 definition 92
trap 217 389


extent pools 96
hierarchy 104
host attachment 102
logical volumes 97
ranks 95
storage system 92
volume group 103
VMFS 323
volume groups 103
volumes
CKD 98
VSE/ESA 341

W
Windows 291
SDD 292
WWPN 299

Z
z/OS
considerations 335
VSE/ESA 341
z/OS Global Mirror 9, 108, 122, 124
z/OS Metro/Global Mirror 9, 119, 122
z/VM considerations 340
zSeries 415
host connection 82
host considerations 334
performance 13
prerequisites and enhancements 334

Back cover

IBM System Storage DS8000 Series: Architecture and Implementation

Plan, install and configure the DS8000 for high-availability and efficient utilization

Turbo models, POWER5+ technology, FATA drives, and performance features

Learn about the DS8000 design characteristics, and management

This IBM Redbook will help you with the planning, installation, and configuration of the IBM System Storage™ DS8000 series. It will help you plan, design, and implement a new installation, or migrate from an existing one. This publication also discusses the architecture and components of the DS8000, and it provides information useful for those who will manage the DS8000. It includes hints and tips derived from users' experience for efficient installation and use.

The DS8000 series considerably extends the high-performance and virtualization capabilities of its predecessors and is available in several Turbo models with IBM POWER5+™ dual two-way or dual four-way processor complex implementations, as well as a Storage System LPAR capable model. All models use a switched Fibre Channel design that enables larger growth capacities, plus reliability and service characteristics apt for the most demanding requirements. The DS8000's increased power and extended connectivity make it suitable for environments with multiple servers, both for open systems and for System z™.

The DS8000 Turbo models also feature: point-in-time and remote mirror and copy functions for business continuance solutions; 500 GB FATA drives for capacities up to 320 TB; a 4 Gb Fibre Channel/FICON host adapter connectivity option; plus a set of enhancements that are discussed in this publication.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information:
ibm.com/redbooks

SG24-6786-02    ISBN 073849447X