DFSMS Release 10

Technical Update
One-stop guide to all of the enhancements to DFSMS!

MUST-HAVE information for installation planning!

Many worked examples!

Hyeong-Ge Park
Frank Byrne
Kohji Otsutomo

ibm.com/redbooks
SG24-6120-00

International Technical Support Organization

DFSMS Release 10 Technical Update

November 2000
Take Note!
Before using this information and the product it supports, be sure to read the general information in
Appendix A, “Special notices” on page 189.

First Edition (November 2000)

This edition applies to DFSMS Release 10 for use with OS/390 Version 2 Release 10, Program Number
5657-A01.

Comments may be addressed to:


IBM Corporation, International Technical Support Organization
Dept. 471F Building 80-E2
650 Harry Road
San Jose, California 95120-6099

When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the
information in any way it believes appropriate without incurring any obligation to you.

© Copyright International Business Machines Corporation 2000. All rights reserved.


Note to U.S. Government Users – Documentation related to restricted rights – Use, duplication or disclosure is
subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.
Contents

Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii

Tables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xi

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
The team that wrote this redbook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Comments welcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv

Chapter 1. Introduction to DFSMS Release 10 . . . . . . . . . . . . . . . . . . . .1


1.1 Overview of DFSMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1
1.1.1 DFSMSdfp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1
1.1.2 DFSMSdss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1
1.1.3 DFSMShsm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
1.1.4 DFSMSrmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
1.2 Overview of DFSMS Release 10 functional enhancements . . . . . . . . .2
1.2.1 Performance enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
1.2.2 Improved availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
1.2.3 Improved system throughput . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
1.2.4 Improved removable media management with DFSMSrmm . . . . .5
1.3 Statement of direction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6
1.4 Ordering information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6
1.5 Considerations on coexistence/migration . . . . . . . . . . . . . . . . . . . . . . .7
1.5.1 Applying coexistence PTFs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7
1.5.2 DFSMS’s coexistence PTFs versus OS/390’s “N-3” policy . . . . . .7
1.5.3 Reference materials regarding coexistence . . . . . . . . . . . . . . . . .8
1.6 New/modified system messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8

Chapter 2. DFSMSdfp enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . .9


2.1 VSAM data striping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9
2.1.1 Background of this enhancement . . . . . . . . . . . . . . . . . . . . . . . . .9
2.1.2 How does DFSMSdfp Release 10 improve this function? . . . . . . 10
2.1.3 How to use VSAM data striping . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.4 How to migrate to VSAM striped data sets . . . . . . . . . . . . . . . . . 15
2.1.5 How the system allocates space for VSAM striped data sets . . . 16
2.1.6 Worked examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.1.7 Other considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.2 Large tape block sizes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.2.1 Background of this enhancement . . . . . . . . . . . . . . . . . . . . . . . . 36
2.2.2 How to write tape blocks greater than 32,760 . . . . . . . . . . . . . . . 36
2.2.3 Considerations on using large tape block sizes . . . . . . . . . . . . . 39
2.2.4 IBM supplied programs and large tape block sizes . . . . . . . . . . . 45

2.2.5 Programming considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
2.2.6 Summary of recommendations on using large tape block sizes . 56
2.2.7 Worked examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
2.2.8 Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
2.3 UNIT=AFF ACS support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
2.3.1 Background of this enhancement . . . . . . . . . . . . . . . . . . . . . . . . 60
2.3.2 How does DFSMS Release 10 solve the problem? . . . . . . . . . . . 62
2.3.3 Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
2.4 DADSM rename of duplicate data sets . . . . . . . . . . . . . . . . . . . . . . . . 64
2.4.1 Background of this enhancement . . . . . . . . . . . . . . . . . . . . . . . . 64
2.4.2 How does DFSMS Release 10 solve the problem? . . . . . . . . . . . 65
2.4.3 How to rename a duplicate data set . . . . . . . . . . . . . . . . . . . . . . 65
2.4.4 Considerations on renaming data sets . . . . . . . . . . . . . . . . . . . . 70
2.5 High speed tape positioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
2.5.1 Background of this enhancement . . . . . . . . . . . . . . . . . . . . . . . . 71
2.5.2 How does DFSMSdfp and DFSMSrmm solve the problem? . . . . 74
2.5.3 How to use this function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
2.5.4 Worked examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
2.6 Enhanced catalog sharing availability enhancement . . . . . . . . . . . . . . 76
2.6.1 Background of this enhancement . . . . . . . . . . . . . . . . . . . . . . . . 76
2.6.2 How does DFSMSdfp Release 10 improve this function? . . . . . . 77
2.6.3 Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77

Chapter 3. DFSMShsm enhancements . . . . . . . . . . . . . . . . . . . . . . . . . 79


3.1 Multiple DFSMShsm hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.1.1 Background of this enhancement . . . . . . . . . . . . . . . . . . . . . . . . 79
3.1.2 How does DFSMShsm Release 10 solve the problem? . . . . . . . 80
3.1.3 Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.1.4 Worked examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
3.2 Fast subsequent migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
3.2.1 Background of this enhancement . . . . . . . . . . . . . . . . . . . . . . . . 95
3.2.2 How does DFSMShsm Release 10 improve this function? . . . . . 98
3.2.3 How to use this new function . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
3.2.4 Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
3.2.5 Worked examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
3.3 Data set backup enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
3.3.1 Background of these enhancements . . . . . . . . . . . . . . . . . . . . . 106
3.3.2 How does DFSMShsm Release 10 improve this function? . . . . 110
3.3.3 Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
3.3.4 Worked example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
3.4 ABARS support for large tape block sizes . . . . . . . . . . . . . . . . . . . . 131
3.4.1 Background of this enhancement . . . . . . . . . . . . . . . . . . . . . . . 131
3.4.2 ABACKUP/ARECOVER data sets with large tape block sizes. . 133

3.4.3 Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

Chapter 4. DFSMSrmm enhancements . . . . . . . . . . . . . . . . . . . . . . . . 135


4.1 Virtual Tape Server (VTS) support enhancement . . . . . . . . . . . . . . . 135
4.1.1 Background of this enhancement . . . . . . . . . . . . . . . . . . . . . . . 135
4.1.2 How does DFSMSrmm Release 10 improve this function? . . . . 142
4.1.3 Export/import processing scenarios . . . . . . . . . . . . . . . . . . . . . 144
4.1.4 How to migrate to the VTS enhanced support environment. . . . 150
4.1.5 Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
4.2 Volume set management support . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
4.2.1 Background of this enhancement . . . . . . . . . . . . . . . . . . . . . . . 153
4.2.2 How does DFSMSrmm Release 10 improve this function? . . . . 154
4.2.3 Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
4.3 Using 3-way audit support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
4.3.1 Background of this enhancement . . . . . . . . . . . . . . . . . . . . . . . 159
4.3.2 How does DFSMSrmm improve this function? . . . . . . . . . . . . . 161
4.3.3 CDS maintenance scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
4.3.4 How MEND/MEND(SMSTAPE) works. . . . . . . . . . . . . . . . . . . . 164
4.3.5 Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
4.4 Pre-ACS interface/ACS support . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
4.4.1 Background of this enhancement . . . . . . . . . . . . . . . . . . . . . . . 166
4.4.2 How does DFSMSrmm Release 10 improve this function? . . . . 170
4.4.3 Migrating from EDGUX100 management to ACS management . 172
4.4.4 Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
4.5 Providing OPC batch loader sample JCL . . . . . . . . . . . . . . . . . . . . . 175
4.5.1 Background of this enhancement . . . . . . . . . . . . . . . . . . . . . . . 175
4.5.2 Understanding OPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
4.5.3 Batch job flow DFSMSrmm provides. . . . . . . . . . . . . . . . . . . . . 180
4.5.4 How to use this function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
4.5.5 When manual interventions are required . . . . . . . . . . . . . . . . . 184
4.5.6 Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
4.6 Miscellaneous enhancements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
4.6.1 Fast tape positioning support . . . . . . . . . . . . . . . . . . . . . . . . . . 185
4.6.2 Large tape block size support . . . . . . . . . . . . . . . . . . . . . . . . . . 185
4.7 Sample LISTDATASET and LISTVOLUME output . . . . . . . . . . . . . . 186

Appendix A. Special notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189

Appendix B. Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193


B.1 IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
B.2 IBM Redbooks collections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
B.3 Other resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
B.4 Referenced Web sites. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194

How to get IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
IBM Redbooks fax order form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196

Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213

IBM Redbooks review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217


Figures

1. Non-striping versus striping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9


2. Example of data class definition requesting extended format data set . . . 11
3. DEVSERV QDASD output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
4. An example of storage class definition with SDR attribute . . . . . . . . . . . . 13
5. An example of storage class definition requesting guaranteed space. . . . 14
6. An example of the storage class ACS routine . . . . . . . . . . . . . . . . . . . . . . 15
7. Non-guaranteed space request divides primary quantity by stripe count . 16
8. Guaranteed space request allocates primary space on each volume . . . . 17
9. Secondary space allocation when all of the stripes have enough space . . 18
10. Secondary space allocation when insufficient space for one stripe . . . . . 19
11. Non-striped data set uses primary amount to extend to another volume . . 20
12. Striped VSAM data set extends up to primary quantity X volume count . . 21
13. Striped VSAM data set also takes candidate volumes into account . . . . . 22
14. Guaranteed space VSAM striped data set should have same amount . . . 23
15. The meanings of asterisks may vary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
16. Physical layout of a VSAM striped data set . . . . . . . . . . . . . . . . . . . . . . . . 25
17. An example of non-striped KSDS which has an adequate index CI size. . 27
18. An index CI that is too small to hold all index entries for a data CA . . . . . 28
19. A multi-layered data set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
20. Data load time comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
21. Down-level system cannot open VSAM striped data sets . . . . . . . . . . . . . 35
22. /*JOBPARM SYSAFF can be used to have system affinity . . . . . . . . . . . . 38
23. IHADFA macro expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
24. Data class has new Block Size Limit parameter . . . . . . . . . . . . . . . . . . . . 42
25. New allocation requesting that large block should not go to DASD. . . . . . 43
26. An example of IFHSTATR output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
27. Traditional BDW format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
28. Extended BDW format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
29. The location of the length-read field. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
30. Macro expansion of ARA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
31. ARA extended information segment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
32. Example for locating ARA extended information segment. . . . . . . . . . . . . 55
33. An example of RENAME macro. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
34. Data set separation on an SL tape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
35. Logical and physical views of tape data sets in a volume . . . . . . . . . . . . . 73
36. Data set positioning time comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
37. Multiple OS/390 images to run multiple DFSMShsm before Release 10 . 80
38. HSMplex: multiple DFSMShsm address spaces across two OS/390s . . . 81
39. HSM1 as MAIN host and HSM2 as AUX host . . . . . . . . . . . . . . . . . . . . . . 84
40. Intermixing Release 10 and pre-Release 10 in an HSMplex . . . . . . . . . . . 85


41. The definition top panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
42. Multiple DFSMShsm performed better than single DFSMShsm . . . . . . . . 94
43. Data movement through migration function . . . . . . . . . . . . . . . . . . . . . . . . 95
44. DFSMShsm invalidates control records after recall . . . . . . . . . . . . . . . . . . 96
45. DFSMShsm creates new migration copy and updates control records . . . 97
46. DFSMShsm sets a bit on in the catalog record . . . . . . . . . . . . . . . . . . . . . 99
47. DFSMShsm does not make new ML2 tape copy if it can reconnect . . . . 100
48. Example of DFSMShsm active log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
49. Example of REPORT DAILY command output . . . . . . . . . . . . . . . . . . . . 104
50. Performance comparison — normal ML2 migration and reconnection . . 105
51. Data set backup is single task under DFSMShsm pre-Release 10 . . . . . 107
52. Moving backup versions from ML1 impacts primary volume processing . 108
53. Inconveniences on command data set backup . . . . . . . . . . . . . . . . . . . . 109
54. DFSMShsm Release 10 has up to 64 data set backup tasks . . . . . . . . 110
55. Third task cannot allocate tape device when only 2 devices available . . 111
56. TARGET(DASD) uses ML1 DASDs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
57. TARGET(TAPE) uses tape backup volumes . . . . . . . . . . . . . . . . . . . . . . 113
58. DFSMShsm decides the best device when no TARGET parameter . . . . . . 115
59. DFSMShsm selects target device based on the size of data sets . . . . . . 117
60. DFSMShsm deletes idle tasks beyond MAXIDLETASKS . . . . . . . . . . . . 118
61. Each tape task sets its own timer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
62. Active task does not demount tape volume until current request done . . 121
63. Automatic backup and SWITCHTAPES . . . . . . . . . . . . . . . . . . . . . . . . . 123
64. DFSMShsm Concurrent Copy support overview . . . . . . . . . . . . . . . . . . . 127
65. Recover takeaway uses GRS to communicate with other DFSMShsm. . 128
66. Down-level DFSMShsm ignores new keyword . . . . . . . . . . . . . . . . . . . . 129
67. Command data set backup performance comparison . . . . . . . . . . . . . . . 130
68. ABARS creates a set of backups from primary, migration volumes . . . . 132
69. Down-level system fails ABACKUP if data set has large tape block . . . . 134
70. Down-level system cannot recover data set with large tape block size . . 134
71. Overview of VTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
72. Overview of export processing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
73. Converting from CA-1 to DFSMSrmm . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
74. Spilled data set in a specific mount request. . . . . . . . . . . . . . . . . . . . . . . 154
75. Different management policies for multi-volume data sets . . . . . . . . . . . 154
76. A Volume set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
77. Manage as a set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
78. Manage as a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
79. Automatic chain status maintenance of Case 1 . . . . . . . . . . . . . . . . . . . . 157
80. Automatic chain status maintenance of Case 2 . . . . . . . . . . . . . . . . . . . . 157
81. Tape volume information in an SMS managed tape library . . . . . . . . . . . 160
82. System-managed tape environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
83. Non-system-managed tape environment. . . . . . . . . . . . . . . . . . . . . . . . . 169


84. Non-system-managed environment of Release 10 . . . . . . . . . . . . . . . . . 172
85. OPC components. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
86. Overview of OPC job scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
87. OPC event-triggered tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
88. OPC resources and example of job scheduling . . . . . . . . . . . . . . . . . . . . 180
89. Default sample job flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
90. Sample LISTDATASET output. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
91. Sample LISTVOLUME output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187

Tables

1. OS/390 Version 2 Release 10 base feature code . . . . . . . . . . . . . . . . . . . . 6


2. Feature code for DFSMSdss, DFSMShsm, and DFSMSrmm . . . . . . . . . . . 6
3. Coexistence PTFs for VSAM data striping. . . . . . . . . . . . . . . . . . . . . . . . . 34
4. INFO=AMCAP return area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
5. Optimum and maximum block size by device type . . . . . . . . . . . . . . . . . . 53
6. Large tape block size performance comparison . . . . . . . . . . . . . . . . . . . . 57
7. Coexistence PTFs for large tape block size. . . . . . . . . . . . . . . . . . . . . . . . 58
8. MAIN and AUX host differences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
9. CC parameter specification, authority, device capability, and results . . . 126
10. Coexistence APAR/PTF list for command data set backup. . . . . . . . . . . 129
11. Toleration APAR/PTF for ABARS large tape block size support . . . . . . . 133
12. Location priority number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
13. Summary of the DFSMSrmm enhancement for stacked volumes . . . . . . 143
14. MEND/MEND(SMSTAPE) processing . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
15. Applications configured . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182

Preface

DFSMS, formerly known as DFSMS/MVS, continues to add enhancements to
performance, availability, system throughput, and usability for data access
and storage management.

DFSMS Release 10 is the first release of DFSMS that is available solely with
OS/390. DFSMS Release 10 is packaged and shipped with OS/390 Version 2
Release 10 and offers the ease of installation, integration, and maintenance
inherent in the OS/390 product.

This IBM Redbook provides an in-depth description of all the new
enhancements made to DFSMS Release 10. This book is designed to help
storage administrators plan, install, and migrate to DFSMS Release 10.

The team that wrote this redbook


This redbook was produced by a team of specialists from around the world
working at the International Technical Support Organization San Jose Center.

Hyeong-Ge Park is a Storage Specialist at the International Technical
Support Organization, San Jose Center. HG joined the ITSO in January 2000.
Before this, HG worked in the Field Support Organization, in IBM Japan.

Frank Byrne is an Advisory Specialist in IBM UK. He has 31 years of
experience in support of OS/390 and its predecessors. His areas of expertise
include the implementation of Parallel Sysplex. He has written extensively on
DFSMSdfp.

Kohji Otsutomo is an Information Technology Engineer in IBM Japan.
He has 4 years of experience in the high-end storage product support field.
He holds a degree in electronics from Chiba University in Japan. His areas of
expertise include IBM 3494, Virtual Tape Server, and other tape products.
He has written extensively on DFSMSrmm.

Thanks to the following people for their invaluable contributions to this project:

From Storage Subsystems Division, San Jose:


Jim Becker
Stephen Branch
Jean Chang
Jerry Codde
Pat Choi


Ed Daray
John Humphrey
Victor Liang
Savur Rao
Wayne Rhoten

From Storage Subsystems Division, Tucson:


Cuong Le
Lyn L. Ashton
Tony Pearson
John Thompson
Henry Valenzuera

From IBM UK:


Mike Wood

From IBM Germany:


Andreas Henicke
Guenter Wilden

From IBM Japan:


Seiei Fujiwara

From International Technical Support Organization, Poughkeepsie Center:


Robert Haimowitz

From International Technical Support Organization, San Jose Center:


Emma Jacobs
Yvonne Lyon
Nigel Morton
Claudia Traver


Comments welcome
Your comments are important to us!

We want our Redbooks to be as helpful as possible. Please send us your
comments about this or other Redbooks in one of the following ways:
• Fax the evaluation form found in “IBM Redbooks review” on page 217 to
the fax number shown on the form.
• Use the online evaluation form found at ibm.com/redbooks
• Send your comments in an Internet note to redbook@us.ibm.com

Chapter 1. Introduction to DFSMS Release 10

In this chapter, we provide an introduction to DFSMS Release 10, formerly
known as DFSMS/MVS.

1.1 Overview of DFSMS


DFSMS/MVS, now simply called DFSMS, is a software suite that
automatically manages data from creation to expiration. DFSMS provides
allocation control for availability and performance, backup/restore, and
disaster recovery services, space management, and tape management.

DFSMS consists of DFSMSdfp, an element of OS/390; and DFSMSdss,
DFSMShsm, and DFSMSrmm, features of OS/390. We briefly describe these
functions and their roles.

1.1.1 DFSMSdfp
DFSMSdfp provides a foundation for storage management, data
management, and program management.

System-managed storage helps storage administrators manage storage
allocation efficiently. It can optimize DASD data placement based on the
storage administration policy you defined. System-managed storage provides
more granular storage management along with DFSMShsm.

Application program interfaces, referred to as access methods, are the
foundation of data management. These are used to create, modify, delete,
read, or write data on DASDs, tapes, optical disks, and printers. These
access methods can be considered the I/O drivers for OS/390 and various
applications, including database management subsystems (DBMSs) such as
CICS/VSAM, IMS, or DB2 for OS/390.

DFSMS binder and loader are the foundation for program management. They
provide functions to create, load, modify, list, read, transport, and copy
executable programs.

1.1.2 DFSMSdss
DFSMSdss provides comprehensive DASD data manipulation functions. You
can use DFSMSdss for data movement and replication, for eliminating DASD
free-space fragmentation, and for data backup and recovery at either the
data set or volume level, as well as for data set and volume conversion to
system-managed storage.


1.1.3 DFSMShsm
DFSMShsm provides both automatic and manual storage management
functions which help storage administrators manage their storage more
efficiently. For example, DFSMShsm moves unreferenced data to a lower
hierarchy of storage automatically, and will make it available to applications
again if they need to access it. DFSMShsm can also make backups
automatically whenever data is changed, or can dump entire volumes. You
can also have DFSMShsm take backups through TSO commands, batch
programs, or the application programming interface for DFSMShsm.
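
As a simple illustration, a TSO user could request a backup of a single data
set with the DFSMShsm user command shown below (the data set name is
hypothetical):

HBACKDS 'PROD.PAYROLL.MASTER'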

1.1.4 DFSMSrmm
DFSMSrmm helps you manage your removable media, such as tape
cartridges, reels, and optical volumes. DFSMSrmm provides a central on-line
inventory of the resources in your removable media library and in storage
locations outside your removable media library. For example, DFSMSrmm
can keep track of the usage of tape cartridges at both the volume and data
set levels by interacting with DFSMSdfp and/or automatic tape libraries, and
create movement reports based on the administration policy you defined.

1.2 Overview of DFSMS Release 10 functional enhancements


DFSMS Release 10 continues to add enhancements to performance,
availability, system throughput, and usability for data access and storage
management. In the following sections we provide an overview of all the
enhancements included in DFSMS Release 10.

1.2.1 Performance enhancements


In this section, we describe enhancements that improve performance.

1.2.1.1 VSAM striping


IBM introduced sequential data set striping with DFSMS 1.1, providing
significant throughput improvements for large sequential accesses. In
DFSMS Release 10, VSAM can now also take advantage of data set striping.
Enabling VSAM data sets to be striped across multiple volumes allows
applications like DB2 and VSAM to substantially reduce run times and
shorten batch windows.


1.2.1.2 Large tape block size
Today data is growing at an exponential rate. In order to manage this data on
tape, higher capacity tape media like IBM's 3590 is becoming the standard. In
order to fully exploit this new denser media, IBM has provided support for
tape block sizes larger than 32,760 bytes. Tape block sizes up to 262,144
bytes for 3590 and up to 65,535 bytes for non-3590 are now supported, to
enable applications to fill tapes much faster.
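
As a hedged JCL sketch (the data set name is hypothetical), a DD statement
can now request a block size above 32,760 bytes for a 3590 tape data set:

//BIGBLK   DD DSN=PROD.ARCHIVE.DATA,DISP=(NEW,CATLG),
//            UNIT=3590,LABEL=(1,SL),
//            DCB=(RECFM=FB,LRECL=16384,BLKSIZE=262144)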

1.2.2 Improved availability


In this section, we describe enhancements which contribute to improved
availability.

1.2.2.1 UNIT=AFF support for tape libraries


UNIT=AFF is used to minimize the number of tape drives required for a job as
well as to stack multiple data sets on a tape. Starting with DFSMS 1.3,
allocation failures could be prevented for data set stacking (with VOL=REF=
or VOL=SER=) as the automatic class selection (ACS) routines could
determine whether the referenced DD was directed to system-managed
(SMS) DASD, SMS tape, or non-SMS devices. This could not be done for
non-stacking jobs (without VOL=REF=) that used UNIT=AFF.

New with DFSMS Release 10, the ACS routines can now determine whether
a referenced DD resides on SMS DASDs, SMS tapes, or non-SMS devices
when data sets are not stacked. This permits direct allocation using the ACS
routines and prevents job failures. Without JCL changes, this allows you to
direct allocations to either disk or tape based on the characteristics of the
data sets, rather than simply on the fact that UNIT=AFF was specified.
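
A minimal storage class ACS fragment along these lines, assuming the
read-only volume variables carry values such as 'REF=ST' for a reference
that resolves to SMS-managed tape (the class name is hypothetical):

PROC STORCLAS
  IF &ANYVOL = 'REF=ST' THEN DO      /* REFERENCED DD IS ON SMS TAPE */
    SET &STORCLAS = 'SCTAPE'
    EXIT
  END
END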

1.2.3 Improved system throughput


In this section, we describe enhancements which contribute to improved
system throughput.

1.2.3.1 DFSMShsm data set command backups directly to tape


Previously, data set command backups were written first to ML1 DASD, then
subsequently copied to tape during the backup window. New with DFSMS
Release 10, data set command backups can be written directly to tape, and
optionally can be duplexed. This will allow you to run command backups
without the fear of filling ML1 DASD (which could fail migrations as well as
large data set backups) and gain the extra security of duplication.
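
For example, a storage administrator might direct a command backup
straight to tape with something like the following sketch (the data set name
is hypothetical; the TARGET keyword is described in Chapter 3):

HSEND BACKDS PROD.DB2.ARCHIVE1 TARGET(TAPE)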


1.2.3.2 DFSMShsm multi-tasking data set command backups
Prior to DFSMS Release 10, command data set backups were single
threaded. New with DFSMS R10, DFSMShsm will support up to 64
concurrent command data set backups to either ML1 DASD or tape. This will
increase the rate at which DFSMShsm can perform command data set
backups, and allow more batch applications to take advantage of it.
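
The task counts are set through SETSYS; we believe the new keyword looks
roughly like this sketch (the task counts are illustrative, and Chapter 3
gives the full syntax):

SETSYS DSBACKUP(DASD(TASKS(8)) TAPE(TASKS(4)))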

1.2.3.3 DFSMShsm Concurrent Copy on data set backups


Prior to DFSMS Release 10, subsequent job steps could not be started until
the job step taking a backup using Concurrent Copy was physically
completed. In DFSMS Release 10, subsequent job steps can be started as
soon as the backup with Concurrent Copy is logically completed. This
concurrent copy enhancement, along with backup direct to tape and 64
concurrent backup tasks, allows you to reduce the application batch window.

1.2.3.4 DFSMShsm fast subsequent migration


Prior to DFSMS Release 10, when a migrated data set was recalled, even
only for read access, the entire data set had to be migrated again. New with
DFSMS Release 10, data sets that are recalled from ML2 tape, but not
changed or deleted, can be reconnected back to the original ML2 tape. This
eliminates the unnecessary data movement resulting from re-migration, and
may reduce the need to perform RECYCLE processing against these tapes.
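
As a sketch, we believe reconnection is enabled through a SETSYS
TAPEMIGRATION option along these lines (Chapter 3 describes the exact
syntax and its considerations):

SETSYS TAPEMIGRATION(RECONNECT(ALL))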

1.2.3.5 Multiple DFSMShsm address spaces


Prior to DFSMS Release 10, only one DFSMShsm address space was
allowed per MVS system image. This address space performed backups,
migrations, expiration, recalls and recoveries. New with DFSMS Release 10,
multiple DFSMShsm address spaces can be started in the same OS/390
system image. This will reduce the contention, and increase the number of
tasks available for space management, incremental backup and full volume
dump processing. Each host address space can be assigned different tasking
levels and functions to be performed, and each can be assigned a different
service level through workload manager (WLM). For example, recalls
processed by the main HSM host can have higher priority than backups
performed by an auxiliary host.

Up to 39 HSM hosts can share a common set of control data sets. These
hosts can be all on a single MVS system image, or spread over several
systems.

ABARS is not impacted by this change. The main host will still manage up to
64 ABARS secondary address spaces per MVS image.
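
As a sketch of what this might look like from the console, two hosts could be
started on one image with separate startup procedures, each identifying the
host and its role (the procedure names and keywords here are hypothetical;
Chapter 3 shows the real definitions):

S DFHSM1,HOST=1,HOSTMODE=MAIN
S DFHSM2,HOST=2,HOSTMODE=AUX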


1.2.4 Improved removable media management with DFSMSrmm
In this section, we describe enhancements which contribute to improved
removable media management.

1.2.4.1 Multi-volume set retention and movement


DFSMSrmm now provides an option to process multi-volume, multi-data set
tapes as aggregates for retention and movement. This capability makes
DFSMSrmm more compatible with other products during migration to
DFSMSrmm and gives you more flexibility in managing your data.
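
We believe this behavior is selected in the EDGRMMxx parmlib member; a
fragment might look like the following sketch, assuming RETAINBY/MOVEBY-style
operands on the OPTION command (other operands omitted; Chapter 4
describes the support in detail):

OPTION RETAINBY(SET) MOVEBY(SET)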

1.2.4.2 Tivoli OPC sample job for DFSMSrmm tasks


With DFSMS Release 10, a sample job will be provided which can be used to
run the Tivoli OPC batch loader utility to set up DFSMSrmm as an application
whose scheduling is managed by OPC.

1.2.4.3 Pre-ACS interface support


DFSMSrmm can now provide its pool name (via MSPOOL) and management
value (via MSPOLICY) to the ACS routines to help you manage your tape
data set allocations.

1.2.4.4 DFSMSrmm SMS ACS support


DFSMSrmm now calls the SMS ACS routines to enable management class
and storage groups to be used for non-system-managed tape data sets. SMS
management class names can be used to replace the use of exit-set VRS
management values for policy management, allowing all policy management
decisions for tape to be made in SMS ACS processing. SMS storage group
names can be used to replace or extend the exit-based, or system-based
scratch pooling supported by DFSMSrmm.
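
Combined with the pre-ACS variables described above, a storage group ACS
fragment for scratch pooling might look like this sketch (the pool and storage
group names are hypothetical):

PROC STORGROUP
  IF &MSPOOL = 'SCRTCH1' THEN        /* POOL NAME FROM DFSMSrmm */
    SET &STORGROUP = 'SGTAPE1'
END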

1.2.4.5 IBM3494 Virtual Tape Server support enhancement


DFSMSrmm has added support for a volume type of “stacked” to allow
identification of the stacked volumes in a Virtual Tape Server (VTS) and direct
management of these volumes when exported. These stacked volumes will
now be assigned to specific slots when moved to storage locations which
require shelf management. Logical volumes are no longer assigned to slots at
storage locations.
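
As an illustration of the direction, we would expect a stacked volume to be
defined to DFSMSrmm with a TSO subcommand along these lines (the volser
is hypothetical, and the exact operand is our assumption; see Chapter 4):

RMM ADDVOLUME ST0001 TYPE(STACKED)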

1.2.4.6 Fast tape positioning


DFSMSrmm enables the use of high speed search for applications that do not
use this function. Along with DFSMSdfp, DFSMSrmm records the starting
and ending tape block IDs, and requests that these IDs be used when files
are read and when more data is written to a tape volume.


1.2.4.7 Three-way audit support
With DFSMS Release 10, DFSMSrmm will have the ability to audit the data in
its control data set against (or synchronized with) the tape configuration
database (TCDB) or volume catalog, and the IBM 3494 Library Manager
database, including both the logical and the stacked volumes in an export
capable VTS.
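
This audit builds on the EDGUTIL utility; a hedged JCL sketch follows (the
control data set name is hypothetical, and Chapter 4 explains the SMSTAPE
option):

//AUDIT    EXEC PGM=EDGUTIL,PARM='VERIFY(SMSTAPE)'
//SYSPRINT DD SYSOUT=*
//MASTER   DD DSN=RMM.PROD.MASTER,DISP=SHR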

1.3 Statement of direction


IBM intends to allow batch programs and CICS online applications to
concurrently share VSAM data for read and write processing. This capability
will allow CICS applications to stay online along with many batch update
applications to help meet the 24 x 7 data availability requirement.

1.4 Ordering information


DFSMS Release 10 is the first release of DFSMS that is available solely with
OS/390. DFSMS Release 10 is packaged and shipped with OS/390 Version 2
Release 10, so you cannot order DFSMS Release 10 as a separate product
(unlike DFSMS/MVS Version 1).

Table 1 shows the available feature code for OS/390 Version 2 Release 10
base.
Table 1. OS/390 Version 2 Release 10 base feature code

  3480 tape cartridge    4mm DAT
  6113                   6112

The OS/390 Version 2 Release 10 base feature includes DFSMSdfp. The
other DFSMS Release 10 components are available as optional priced
features. Table 2 shows the available feature codes for them.
Table 2. Feature code for DFSMSdss, DFSMShsm, and DFSMSrmm

  3480    4mm DAT    DFSMSdss    DFSMShsm        DFSMSrmm
  5976    5713       O           O               O
  5017    5723       O           Not included    O

Please note that you now have only these two options, whereas prior to
DFSMS Release 10 you could select other combinations of these
components.


1.5 Considerations on coexistence/migration
In this section we describe some considerations regarding coexistence or
fallback to pre-DFSMS Release 10 systems.

1.5.1 Applying coexistence PTFs


If you plan to fall back to the current level of system you are using after
migrating to OS/390 Version 2 Release 10, or if you plan to share data sets
between OS/390 Version 2 Release 10 and other releases, you need to apply
coexistence PTFs to the pre-DFSMS Release 10 systems so that the data
sets supported by DFSMS and the catalogs you are sharing are not
damaged. Note that you also need to take care of compatibility between
releases for program products and/or your own applications. For example, if
you migrate DB2 for OS/390 to the latest release, you need to be concerned
with compatibility or toleration among DB2 releases, as DFSMS has no
knowledge of its logical structure.

1.5.2 DFSMS’s coexistence PTFs versus OS/390’s “N-3” policy


As you may know, IBM provides coexistence support for four consecutive
releases of the OS/390 system. (Note that support was announced for the
coexistence of OS/390 Version 2 Release 6 and OS/390 Version 2 Release 10;
this is an exception to the general “N-3” policy.) On the other hand,
coexistence PTFs are available for prior DFSMS/MVS releases which are still
in service.

For example, sharing data sets between OS/390 Version 2 Release 10 and
OS/390 Version 2 Release 4 with DFSMS/MVS Version 1 Release 4 is
beyond the scope of “N-3”. However, we provide coexistence PTFs for
DFSMS/MVS Version 1 Release 4. This is to prevent your data sets and
catalogs from being corrupted. If you do not involve any sysplex functions as
a part of data set sharing, you can share data sets beyond “N-3”. However,
you need to be careful about certain DFSMS functions which involve
OS/390’s sysplex services, such as VSAM RLS sharing; your applications or
DBMS may also exploit these services. If any of these situations could apply
to your installation, it is VERY important that you keep your installations
within four consecutive OS/390 releases, so you can be confident that your
installations are fully supported.


1.5.3 Reference materials regarding coexistence
In the following chapters, we describe the details of the DFSMS Release 10
enhancements item by item, and we also explain related maintenance where
there are any concerns.

However, to make sure that you have the latest and most complete
information, we recommend that you refer to the latest version of the
following manuals:
• OS/390 DFSMS Migration, SC26-7329
• OS/390 Planning for Installation, GC28-1726

Also, you need to ask your IBM service representative to get the latest
maintenance information. Preventive service package (PSP) bucket
information is available under the OS390R10 entry.

1.6 New/modified system messages


For new/modified system messages for DFSMS Release 10, refer to
the following manual:
• OS/390 Summary of Message Changes, GC28-1499

This manual also includes new/modified MVS messages. For complete
message descriptions, refer to the following manuals:
• OS/390 MVS System Messages, Vol. 1, GC28-1784
• OS/390 MVS System Messages, Vol. 2, GC28-1785
• OS/390 MVS System Messages, Vol. 3, GC28-1786
• OS/390 MVS System Messages, Vol. 4, GC28-1787
• OS/390 MVS System Messages, Vol. 5, GC28-1788


Chapter 2. DFSMSdfp enhancements

In this chapter, we describe the following DFSMSdfp enhancements made in
DFSMS Release 10:
• VSAM data striping
• Large tape block size
• UNIT=AFF ACS support
• DADSM rename for a duplicate data set
• High speed tape positioning
• Enhanced catalog sharing availability enhancement

2.1 VSAM data striping


Here we describe the new DFSMSdfp capability that provides data striping
support for VSAM data sets.

2.1.1 Background of this enhancement


Striping is a technique to improve the performance of data sets which are
processed sequentially and which require a read or write transfer rate that is
greater than the capabilities of a single volume. This is achieved by splitting
the data set into segments and spreading those segments over sufficient
volumes to achieve the required data rate (see Figure 1).

[Figure: in the non-striping case, I/O uses only one path to transfer blocks D1
through D4 in sequence; in the striping case, I/O uses multiple paths to
transfer D1 through D4 in parallel, one stripe per volume]

Figure 1. Non-striping versus striping


The striping technique has been available for non-VSAM data sets since
DFSMS/MVS Version 1 Release 1. However, it has not been available for
VSAM data sets.

2.1.2 How does DFSMSdfp Release 10 improve this function?


Now you can shorten the elapsed time when you need to reorganize your
database, or when your batch job processes VSAM data sequentially, by
using VSAM striped data sets with DFSMS Release 10.

2.1.3 How to use VSAM data striping


Here we describe how to use VSAM data striping.

2.1.3.1 VSAM organizations supported


VSAM striping is available for all types of extended format VSAM data sets:
• Key-sequenced data set (KSDS)
The index component is not striped. Only the data component is striped.
• Entry-sequenced data set (ESDS)
• Relative record data set (RRDS)
• Variable-length relative record data set (VRRDS)
• Linear data set (LDS)

However, the following attributes/access modes are not supported:


• Alternate index data sets
• KEYRANGE
• IMBED
Even if you specify IMBED, it will be ignored and no error messages will
be issued. We do not recommend that you specify REPLICATE, even
though it is still valid for extended format data sets. The DASD
subsystems which support extended format should have cache. Since
REPLICATE writes each index record on a track as many times as it will
fit, this will consume cache resources and make the cache hit ratio worse.
This is because these replicated records are unique physical records from
the hardware’s viewpoint.
Use REPLICATE only when you allocate non-extended format VSAM data
sets on native DASD, such as IBM 3390 behind IBM 3990 Model 2 storage
control.


• REUSE
Since REUSE is not supported, you cannot open a VSAM striped data set
with the RESET parameter.
• RLS access
• Improved CI access (ICI)

2.1.3.2 How to “stripe” a VSAM data set


Striped data sets must be system-managed and in extended format; they can
only be allocated through the automatic class selection (ACS) routines.

2.1.3.3 Defining a data class for VSAM striping


In order to allocate extended format data sets, you need to assign a data
class with the Data Set Name Type = EXT attribute to a data set. You can define
a data class through the interactive storage management facility (ISMF).
Figure 2 is an example of the data class definition panel.

DATA CLASS DEFINE Page 3 of 3


Command ===>

SCDS Name . . . : SYS1.SMS.SCDS


Data Class Name : STRIPED

To DEFINE Data Class, Specify:


Data Set Name Type . . . . . . EXT (EXT, HFS, LIB, PDS or blank)
If Ext . . . . . . . . . . . R (P=Preferred, R=Required or blank)
Extended Addressability . . . N (Y or N)
Record Access Bias . . . . . (S=System, U=User or blank)
Reuse . . . . . . . . . . . . . N (Y or N)
Initial Load . . . . . . . . . S (S=Speed, R=Recovery or blank)
Spanned / Nonspanned . . . . . (S=Spanned, N=Nonspanned or blank)
BWO . . . . . . . . . . . . . . (TC=TYPECICS, TI=TYPEIMS, NO or blank)
Log . . . . . . . . . . . . . . (N=NONE, U=UNDO, A=ALL or blank)
Logstream Id . . . . . . . . .
Space Constraint Relief . . . . N (Y or N)

Figure 2. Example of data class definition requesting extended format data set

Extended format needs certain hardware


Extended format data sets require storage control units that support
non-synchronous I/O operation. If the system cannot find DASDs behind such
storage controls when it allocates a data set with Data Set Name Type = EXT, it
will fail the allocation, or it will allocate the data set in non-extended format,
depending on the If Ext field specification.


If you are not sure your DASDs support extended format data sets, we
recommend that you issue the DEVSERV QDASD command for volumes that
belong to a storage group into which you want to allocate extended format
data sets. Figure 3 shows an example of DEVSERV QDASD output.

DEVSERV QDASD,6600
IEE459I 17.09.38 DEVSERV QDASD 631
UNIT VOLSER SCUTYPE DEVTYPE CYL SSID SCU-SERIAL DEV-SERIAL EF-CHK
6600 SS6600 2105E20 2105 3339 8906 0113-xxxxx-12089 **OK**
**** 1 DEVICE(S) MET THE SELECTION CRITERIA
**** 0 DEVICE(S) FAILED EXTENDED FUNCTION CHECKING

Figure 3. DEVSERV QDASD output

If the EF-CHK field shows **OK**, the volume is eligible for extended format
data sets.

2.1.3.4 Defining a storage class


You use a storage class to specify the number of stripes you want. There are
two ways to specify the number of stripes:
• Using Sustained Data Rate (SDR) attribute
• Using Guaranteed Space attribute and non-zero SDR

These two methods are explained in the following sections.

Using Sustained Data Rate (SDR) attribute


One way to specify the number of stripes is to assign a storage class with a
non-zero Sustained Data Rate attribute. Figure 4 shows an example of such
a storage class definition.


STORAGE CLASS DEFINE Page 1 of 2
Command ===>

SCDS Name . . . . . : SYS1.SMS.SCDS


Storage Class Name : STRIPED
To DEFINE Storage Class, Specify:
Description ==>
==>
Performance Objectives
Direct Millisecond Response . . . . (1 to 999 or blank)
Direct Bias . . . . . . . . . . . . (R, W or blank)
Sequential Millisecond Response . . (1 to 999 or blank)
Sequential Bias . . . . . . . . . . (R, W or blank)
Initial Access Response Seconds . . (0 to 9999 or blank)
Sustained Data Rate (MB/sec) . . . 16 (0 to 999 or blank)
Availability . . . . . . . . . . . . (C, P ,S or N)
Accessibility . . . . . . . . . . . (C, P ,S or N)
Backup . . . . . . . . . . . . . . (Y, N or Blank)
Versioning . . . . . . . . . . . . (Y, N or Blank)

Figure 4. An example of storage class definition with SDR attribute

SDR is a numeric value which represents the data transfer rate required to
process the data set. The system uses SDR to derive the number of stripes.
The value is divided by four if the DASD volumes are 3390 track format and
by three if they are 3380 track format, and the result is the number of stripe
volumes. Since Figure 4 specifies SDR=16, the system tries to allocate a
data set with this storage class across four 3390 volumes or six 3380
volumes. For more detailed information, refer to the manual OS/390
DFSMSdfp Storage Administration Reference, SC26-7331.

Using Guaranteed Space attribute and non-zero SDR


The other way to specify the number of stripes is to assign a storage class
with Guaranteed Space = Y and a non-zero SDR value, and to specify volume
serial numbers or a unit count at the time of allocation. Figure 5 is an
example of a storage class definition with the Guaranteed Space attribute:


STORAGE CLASS DEFINE Page 2 of 2
Command ===>

SCDS Name . . . . . : SYS1.SMS.SCDS


Storage Class Name : STRIPED

To DEFINE Storage Class, Specify:

Guaranteed Space . . . . . . . . . Y (Y or N)
Guaranteed Synchronous Write . . . (Y or N)
CF Cache Set Name . . . . . . . . (up to 8 chars or blank)
CF Direct Weight . . . . . . . . . (1 to 11 or blank)
CF Sequential Weight . . . . . . . (1 to 11 or blank)

Figure 5. An example of storage class definition requesting guaranteed space

The system derives the number of stripes from the maximum of: the number
of volume serial numbers specified, including asterisks (*); the unit count on
the JCL; or the unit count in the data class. For example, if you define a
VSAM data set through the IDCAMS DEFINE CLUSTER command with
VOLUMES(* * * * * *) and it has a Guaranteed Space attribute and a non-zero
SDR, the system will try to allocate the data set across six volumes. Or, if
you allocate a VSAM data set through a JCL DD card with the UNIT=(xxxx,5)
keyword, the system will try to allocate the data set across five volumes.
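
For instance, a DD statement like the following sketch requests five stripes
through the unit count (the data set name is hypothetical, and we assume the
ACS routines assign a guaranteed space storage class with a non-zero SDR,
such as the STRIPED class in Figure 5, together with an extended format
data class):

//VSAMSTR  DD DSN=PROD.CRIT.ESDS,DISP=(NEW,CATLG),
//            RECORG=ES,LRECL=200,SPACE=(CYL,(120,12)),
//            UNIT=(3390,5)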

The many anomalies related to the use of guaranteed space with specific
volume specifications also apply to VSAM, consistent with the current
volume selection implementation for non-VSAM. We strongly recommend
that you avoid the use of specific volume specifications with the Guaranteed
Space attribute.

SMS does not guarantee the number of stripes you request


Both of these methods represent a request for volumes; there is no guarantee
that the system will honor the number of stripes you requested. The system
may reduce the number of stripes. The following are some considerations on
volume selection for striped data sets:
• Volumes must support extended format.
• Volumes with QUINEW status will also be candidates for striping.
• The VTOC index must be enabled, and free space information must be
kept in sync with that of the VTOC index.
• SMS will not select a volume which will go beyond
ALLOCATION/MIGRATION HIGH THRESHOLD.

For more complete information, refer to Chapter 4, “Defining Storage Groups”,
in the OS/390 DFSMSdfp Storage Administration Reference, SC26-7331.


2.1.3.5 Modifying ACS routines
After you have prepared the system constructs for striped data sets, you
need to modify your ACS routines so that the data sets covered by your
storage administration policy are assigned these constructs and get striped.
Figure 6 is an example of the storage class ACS routine. You might need to
modify the ACS routines for other constructs, depending on your
administration policy.

PROC STORCLAS
:
:
FILTLIST STRIPE INCLUDE(*.CRIT.** ) /* @02 */
:
:

IF &DSN = &STRIPE THEN DO /* &STRIPE DATA SETS SHOULD @02 */


SET &STORCLAS = 'STRIPED' /* BE STRIPED (SEE FILTLIST) @02 */
EXIT /* NOTE THAT IT SHOULD HAVE @02 */
END /* A DATA CLASS W/ EXTENDED @02 */
/* FORMAT TO BE STRIPED @02 */
:
:
END

Figure 6. An example of the storage class ACS routine

2.1.4 How to migrate to VSAM striped data sets


You can take the following steps to convert existing VSAM data sets to VSAM
striped data sets (a sample job follows the list):
• Issue the command IDCAMS LISTCAT ALL against the original data set to
identify the names of its components, and to obtain other related
information.
• Allocate a new VSAM striped data set.
• Use IDCAMS REPRO to copy data from the original data set to the new
VSAM striped data set.
• Delete the existing data set.
• Use IDCAMS ALTER NEWNAME to rename the new VSAM striped data
set to the original name. We recommend that you rename, not only the
cluster name, but also the names of its components.
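
A minimal IDCAMS sketch of these steps, assuming hypothetical data set
names and a KSDS with an 8-byte key (your DEFINE attributes should match
the LISTCAT output of the original data set):

//CONVERT  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER(NAME(PROD.CRIT.KSDS.NEW) -
         DATA(NAME(PROD.CRIT.KSDS.NEW.DATA)) -
         INDEX(NAME(PROD.CRIT.KSDS.NEW.INDEX)) -
         INDEXED KEYS(8 0) RECORDSIZE(200 200) -
         CYLINDERS(120 12) -
         DATACLASS(STRIPED) STORAGECLASS(STRIPED))
  REPRO INDATASET(PROD.CRIT.KSDS) OUTDATASET(PROD.CRIT.KSDS.NEW)
  DELETE PROD.CRIT.KSDS CLUSTER
  ALTER PROD.CRIT.KSDS.NEW NEWNAME(PROD.CRIT.KSDS)
  ALTER PROD.CRIT.KSDS.NEW.DATA NEWNAME(PROD.CRIT.KSDS.DATA)
  ALTER PROD.CRIT.KSDS.NEW.INDEX NEWNAME(PROD.CRIT.KSDS.INDEX)
/*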


2.1.5 How the system allocates space for VSAM striped data sets
In this section, we describe how the system allocates space for striped data
sets. In order to simplify further discussion, we refer to single striped data
sets (in which the number of stripes is 1) as non-striped data sets, and we
refer to multiple striped data sets (in which the number of stripes is greater
than 1) as striped data sets.

2.1.5.1 Primary space allocation for non-guaranteed space request


If you specify the stripe count through SDR, the system uses the following
formula to derive the space required for each volume.

p = (P ÷ N)

In this formula, p is the amount of space per volume, P is the primary quantity
you request, and N is the actual number of stripes the system determined. In
short, the primary quantity you specify is the total space amount for the
complete data set for a non-guaranteed space request. Figure 7 shows an
example of how the system allocates space when you request 120 cylinders
of primary space with SDR=16.

[Figure: DEFINE CLUSTER(NAME(...) CYLS(120) VOLUMES(* * *) ...) with
Guaranteed Space=N and SDR=16 implies a stripe count of 4; 30 cylinders
are allocated on each of four 3390 volumes]

Figure 7. Non-guaranteed space request divides primary quantity by stripe count

The system will divide the primary quantity by four, since SDR=16 implies a
stripe count of four. Assuming there are at least four volumes available to
satisfy this request, and the system does not reduce the number of stripes, it
will allocate 30 cylinders on each volume.

Note: If you actually allocate a VSAM striped data set with this example, the
system will adjust the amount you specified, CYL(120), and it will allocate
120 cylinders and eight tracks in total, that is, 30 cylinders and two tracks on
each volume. We describe in detail how the system adjusts allocation for
VSAM striped data sets in “Space amount calculation for VSAM striped data
sets” on page 26.


2.1.5.2 Primary space allocation for guaranteed space request
If you make a guaranteed space request, the system will allocate the primary
quantity you specified on each volume. Therefore, the total space is the
primary quantity multiplied by the number of volumes you request. Figure 8
illustrates how the system allocates space when you request 120 cylinders of
primary space with guaranteed space.

[Figure: DEFINE CLUSTER(NAME(...) CYLS(120) VOLUMES(* * *) ...) with
Guaranteed Space=Y; with SDR=0 the stripe count is 1 (non-striping), with a
non-zero SDR the stripe count is 3, and in both cases 120 cylinders are
allocated on each 3390 volume]

Figure 8. Guaranteed space request allocates primary space on each volume

When SDR is zero


If an allocation request has guaranteed space along with SDR=0, the system
will not create a striped data set, but will create a non-striped multi-volume
data set. In this figure, the system allocates 360 cylinders in total, 120
cylinders on each volume, as the VOLUMES parameter has three asterisks (*).

When SDR is a non-zero value


If an allocation request has guaranteed space along with a non-zero SDR, the
system will try to create a striped data set. Assuming that the system does
not reduce the number of stripes, the number of stripes will match the number
of entries in the VOLUMES parameter, regardless of the SDR value. In this
figure, the system allocates 360 cylinders in total, with 120 cylinders on each
volume, as the VOLUMES parameter has three asterisks (*).

2.1.5.3 Secondary allocation


Unlike primary space allocation, there is no difference in secondary space
allocation between guaranteed space and non-guaranteed space requests,
except that a guaranteed space request honors volume serial numbers when
explicitly specified as candidate volumes.


The space amount of each volume is derived by the following formula:

s = (S ÷ N)

In this formula, s is the amount of space per volume, S is the secondary
quantity you request, and N is the actual number of stripes the system
determined. In short, the secondary quantity you specify is the total amount
for the complete data set per extension. This is analogous to primary space
allocation for a non-guaranteed space request.

Note: VSAM striped data sets always use the secondary amount you
specified to perform a secondary allocation. Other system-managed VSAM
data sets can use the primary amount for secondary allocation if the
corresponding data class has the Add’l Volume Amount=P attribute, but this
attribute is ignored for VSAM striped data sets.

All stripes are extended at the same time


When a VSAM striped data set is extended, all stripes are extended at the
same time. The system tries to extend a data set on a stripe basis. Figure 9
shows an example of what occurs when all of the stripes have enough space.

Volumes A, B, C, and D each hold a primary extent (P). At extension, each
stripe obtains a secondary extent (S) on its own volume.

Figure 9. Secondary space allocation when all of the stripes have enough space



VSAM allocates space on volume A, and then on B, C, and D.

Figure 10 shows an example of what occurs when one of the stripes does not
have enough space.

Volumes A, B, C, and D each hold a primary extent (P) and a secondary
extent (S). Volume B is then filled by other data sets, so the next
secondary extent for that stripe is allocated on a new volume, E.

Figure 10. Secondary space allocation when there is insufficient space for one stripe

Unlike non-VSAM striped data sets, VSAM striped data sets can extend to
other volumes.

When the system finds that volume B does not have enough space to extend,
it extends to another volume, E. This example assumes that the data set has
a candidate volume in the catalog, and the system selects volume E to
satisfy the allocation request. If the data set does not have any candidate
volumes in its catalog entries, the extension will fail.

Note that the system will not let stripes share a volume when extending. This
is why the data set in the figure extends to volume E, and not to A, C, or D.

When secondary quantity is zero


There is a special case when a secondary value of zero has been specified.
For a non-striped data set, specifying a value of zero means that a
secondary extent cannot be allocated on the current volume; either the
primary space is allocated on a new volume, or the extend fails. The normal
objective of this specification is, for performance reasons, to force a data
set to be spread over multiple volumes. Figure 11 illustrates how a
non-striped, non-guaranteed space data set is extended.



DEFINE CLUSTER( ... CYLS(120 0) VOLUMES( * * *))

Volume A holds 120 cylinders; each extension allocates another 120
cylinders of primary space on a new volume (B, then C).

120 cylinders per volume/Up to (120 x 3) cylinders in total

Figure 11. Non-striped data set uses primary amount to extend to another volume

When this data set extends to another volume, the primary quantity of 120
cylinders is allocated on volume B, and then on volume C. If it is a
guaranteed space data set, it will have 120 cylinders of space on each
volume at primary allocation.

Since a striped data set is already on multiple volumes, the zero specification
is interpreted in a different way. In this case, it is assumed that the
requirement is to extend each stripe, by an amount up to the value of the
primary allocation. As with normal secondary allocation, the new extents can
be on the existing volumes, or on new ones as long as the data set has
candidate volumes.

Figure 12 assumes that a three-striped, non-guaranteed space data set is
allocated. It also assumes there are no candidate volumes defined for the
data set. As you can see, a striped VSAM data set can extend up to the
primary quantity multiplied by the volume count.



DEFINE CLUSTER( ... CYLS(120 0))

Volumes A, B, and C each start with 40 cylinders. Each extension adds
another 40 cylinders to every stripe on the same volumes.

Up to (40 x 3) cylinders per stripe/Up to (120 x 3) cylinders in total

Figure 12. Striped VSAM data set extends up to primary quantity x volume count

Since this is a non-guaranteed space data set, each volume has 40 cylinders
of space at primary allocation. When the data set extends, it uses the
primary amount of 40 cylinders. Each stripe can extend up to 120 cylinders,
and the data set up to 360 cylinders in total. If VSAM cannot obtain space
on the existing volumes, it will not extend the data set, as the data set
has no candidate volumes.

If the data set is three-striped, guaranteed space, and has no candidate
volumes, it will have 120 cylinders of space on each volume at primary
allocation, and the system cannot extend the data set, as no candidate
volumes are available. However, it already has 360 cylinders of space in
total, so there is no difference in the maximum amount of available space
between guaranteed space and non-guaranteed space VSAM striped data sets.



Figure 13 also assumes that a three-striped, non-guaranteed space data set
is allocated. Let us consider one more assumption — that we have added a
candidate volume after the primary allocation.

DEFINE CLUSTER( ... CYLS(120 0))
ALTER ADDVOLUMES(*)

Volumes A, B, and C each start with 40 cylinders. With the added candidate
volume, each stripe can obtain one more 40-cylinder extent than in Figure 12.

Up to (40 x 4) cylinders per stripe/Up to (120 x 4) cylinders in total

Figure 13. Striped VSAM data set also takes candidate volumes into account

Since this is a non-guaranteed space data set, each volume has 40 cylinders
of space at primary allocation. This is the same as in Figure 12 on page 21.

When this data set extends, it uses the primary amount of 40 cylinders. Each
stripe can extend up to 160 cylinders, and the data set up to 480 cylinders
in total, as the system takes candidate volumes into account. If the system
cannot obtain space on the existing volumes, it will extend to another
volume, as the data set has a candidate volume.

Figure 14 assumes that a three-striped, guaranteed space data set is
allocated. Let us consider one more assumption: that we have added a
candidate volume after the primary allocation, just as in Figure 13.



DEFINE CLUSTER( ... CYLS(120 0))
ALTER ADDVOLUMES(*)

Volumes A, B, and C each start with 120 cylinders. Each extension adds
40 cylinders (the primary amount divided by the stripe count) to every
stripe.

Up to (40 x 4) cylinders per stripe/Up to (120 x 4) cylinders in total

Figure 14. Guaranteed space VSAM striped data set has the same maximum amount

Since this is a guaranteed space data set, it has 120 cylinders of space on
each volume at primary allocation. It can extend up to 160 cylinders per
stripe, and up to 480 cylinders in total. The amount used for secondary
allocation is the primary amount divided by the stripe count; therefore,
VSAM uses 40 cylinders in this example. It can extend to another volume, as
long as the data set has a candidate volume.

As you can see from these two examples, there is no difference in the
maximum amount of space available between guaranteed space and
non-guaranteed space.

2.1.5.4 Considerations on extending data sets


In this section, we describe some considerations on extending data sets.

Maximum size allowed for a VSAM striped data set


Each stripe can extend up to 255 extents, and there can be up to 123 extents
per volume. This is the same as for non-striped VSAM data sets. Since the
maximum stripe count is 16, the maximum number of extents per striped data
set is 4,080 (255 x 16). Again, remember that only data components can be
striped.



Candidate volumes
As we have described, candidate volumes must have been defined in order
to obtain secondary space on a new volume. Because stripes cannot share
volumes, the number of candidates must be sufficient to allow for the
possibility that all stripes will need to extend to new volumes.
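As an illustration (the component name is ours; the command follows the
ALTER ADDVOLUMES usage shown in Figure 13), you could reserve one
nonspecific candidate volume per stripe for a four-striped data set:

ALTER ITSO.STRIPED.KSDS.DATA -
      ADDVOLUMES(* * * *)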

If guaranteed space is being used, then the use of asterisks should be
reviewed. Prior to DFSMS Release 10, the asterisks were used to indicate the
number of volumes that may be used for the data set. In DFSMS Release 10,
the asterisks indicate the number of stripe volumes, and therefore cannot be
used to indicate candidate volumes. See Figure 15 for an example.

Format               Guaranteed Space   SDR    Resulting data set
Non extended format  Y                  -      Non-striped multi-volume data set (no candidate volumes)
Non extended format  N                  -      Non-striped multi-volume data set (7 candidate volumes)
Extended format      Y                  0      Non-striped multi-volume data set (no candidate volumes)
Extended format      Y                  <> 0   8-striped data set (no candidate volumes)
Extended format      N                  16     4-striped data set (4 candidate volumes)
Extended format      N                  32     8-striped data set (no candidate volumes)

Figure 15. The meanings of asterisks may vary

When a data set is defined and VOLUMES(* * * * * * * *) is specified, and all
of the DASDs are in 3390 track format, some of the possible meanings of this
could be as follows:
• The SDR is 16, 4 volumes are allocated as primary, and 4 are available as
candidates.



• The SDR is 32, 8 volumes are allocated as primary, and none are
available as candidates.
• Guaranteed space is being used, 8 volumes are allocated as primary, and
none are available as candidates.

2.1.5.5 VSAM structure and space calculation


Figure 16 shows a layout of the physical structure of a VSAM striped data set.

VOLA                       VOLB                       VOLC

CI 0  CI 3  CI 6  CI 9     CI 1  CI 4  CI 7  CI 10    CI 2  CI 5  CI 8  CI 11
CI 12 CI 15 CI 18 CI 21    CI 13 CI 16 CI 19 CI 22    CI 14 CI 17 CI 20 CI 23
CI 24 CI 27 CI 30 CI 33    CI 25 CI 28 CI 31 CI 34    CI 26 CI 29 CI 32 CI 35
CI 36 CI 39 CI 42 CI 45    CI 37 CI 40 CI 43 CI 46    CI 38 CI 41 CI 44 CI 47

CI 48 CI 51 CI 54 CI 57    CI 49 CI 52 CI 55 CI 58    CI 50 CI 53 CI 56 CI 59
CI 60 CI 63 CI 66 CI 69    CI 61 CI 64 CI 67 CI 70    CI 62 CI 65 CI 68 CI 71
CI 72 CI 75 CI 78 CI 81    CI 73 CI 76 CI 79 CI 82    CI 74 CI 77 CI 80 CI 83
CI 84 CI 87 CI 90 CI 93    CI 85 CI 88 CI 91 CI 94    CI 86 CI 89 CI 92 CI 95

Each row of control intervals (CIs) is one track; a control area (CA)
spans the corresponding tracks on all three volumes.

Figure 16. Physical layout of a VSAM striped data set

A control interval (CI) is the minimal unit the system uses when it makes I/O
requests. It contains logical records and control information, such as the
amount of free space remaining in the CI. A CI consists of one or more
equal-length physical blocks. You can specify the CI size manually, or the
system can determine it based on your request. Once the CI size has been
determined, the system also decides the size of each physical block and the
number of physical blocks per CI.

A control area (CA) is a fixed-length contiguous area in which CIs are
grouped together. The system allocates space in amounts that are multiples
of the CA size. When you use a KSDS, an index CI in the lowest index level
(referred to as the sequence set) controls all CIs in a data CA.

As you can see in the figure, data is striped by CI for VSAM striped data
sets, and a CA encompasses all of the stripes.



CA size calculation for VSAM striped data sets
In the case of non-striped VSAM data sets, the CA size is the minimum of the
primary quantity and the secondary quantity. If that minimum is larger than
15 tracks, the CA size will be 15 tracks; if it is smaller than one track,
the CA size will be one track.

For VSAM striped data sets, the CA size calculation is different. A CA is
spread equally across all of the stripe volumes. Therefore, the maximum CA
size is changed to 16 tracks, as a striped VSAM data set can have up to 16
stripes, and the minimum CA size is the number of tracks equal to the stripe
count. The CA size calculation is based on the primary quantity and
secondary quantity you specify, and the number of stripes. You can use the
following rules along with those values to figure out the CA size that the
system would derive:
• If the stripe count is greater than 8, the stripe count will be the CA size.
• Otherwise:
  - If the stripe count is equal to or greater than the minimum of the
    primary quantity in tracks and the secondary quantity, the stripe count
    will be the CA size.
  - Else, the value derived by the following formula will be the CA size, if
    it is equal to or smaller than 16. If it is greater than 16, then the
    value minus the stripe count will be the CA size.

CA size = INT(minimum(primary, secondary, 15) ÷ stripe count + 0.9) × stripe count
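In this formula, primary and secondary are the quantities in tracks, and INT
truncates to an integer, so adding 0.9 effectively rounds the division up. To
check it against the worked examples in this chapter: for the four-striped
data set allocated with 120 cylinders in “Space amount calculation for VSAM
striped data sets” below (assuming a secondary quantity of at least 15
tracks), INT(15 ÷ 4 + 0.9) × 4 = 4 × 4 = 16, which is not greater than 16, so
the CA size is 16 tracks. For the seven-striped data set in 2.1.6.1,
INT(15 ÷ 7 + 0.9) × 7 = 3 × 7 = 21, which is greater than 16, so the CA size
is 21 - 7 = 14 tracks.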

Space amount calculation for VSAM striped data sets


The system converts the quantity you request into tracks, and rounds up to
the nearest multiple of the CA size. Assume you try to allocate a four-striped
VSAM data set and you request 120 cylinders of space. The system would
determine the CA size as 16 tracks (refer to “CA size calculation for VSAM
striped data sets” on page 26). 120 cylinders, which is 1,800 tracks, is not
divisible by 16, so the allocation amount is rounded up to 1,808 tracks. If
this is a guaranteed space request, the system will try to allocate 1,808
tracks on each stripe. In the case of non-guaranteed space, the system will
try to allocate 452 tracks on each stripe.



Be careful about index CI size when KSDS is defined
Since the system adjusts the CA size to a multiple of the stripe count, a CA
may contain more data than you expect.

For example, if you define a non-striped KSDS with TRACKS(1 1), the size of
a data CA will be one track. Assume that four data CIs fit into a track, and
that a sequence set CI can hold up to four index entries. On this
assumption, all data CIs in a data CA can be used (see Figure 17).

When non-striped...
DEFINE CLUSTER(...TRACKS( 1 1))

A sequence set CI holding four index entries controls a one-track data CA
on VOLA holding four data CIs:

4 entries/Sequence set CI >= 4 CIs/CA

Figure 17. An example of non-striped KSDS which has an adequate index CI size

However, what if this is a 3-striped KSDS with the same definition? In this
case, a sequence set (the lowest level index) CI should hold 12 index entries
to control all CIs in a CA, as the CA size is three tracks. If we make the
same assumption about the index CI size, it can hold only four index
entries; therefore, eight out of 12 data CIs in a data CA cannot be used
(see Figure 18).



When striped...
DEFINE CLUSTER(...TRACKS( 1 1))

The data CA now spans a track on each of VOLA, VOLB, and VOLC and holds
12 data CIs, but a sequence set CI still holds only four index entries:

4 entries/Sequence set CI < 12 CIs/CA

Figure 18. An index CI that is too small to hold all index entries for a data CA

You need to ensure that the index CI size is big enough to hold index entries
for all data CIs in a CA, in order to avoid wasting space.
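For example, here is a sketch of a DEFINE that sets the index CI size
explicitly (the names and sizes are ours, for illustration only; choose a
CONTROLINTERVALSIZE large enough to hold one index entry per data CI in a CA):

DEFINE CLUSTER(NAME(ITSO.STRIPED.KSDS) -
               TRACKS(1 1) -
               KEYS(8 0) -
               RECORDSIZE(100 100)) -
       DATA(NAME(ITSO.STRIPED.KSDS.DATA)) -
       INDEX(NAME(ITSO.STRIPED.KSDS.INDEX) -
             CONTROLINTERVALSIZE(2048))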

2.1.5.6 Layering concept


When a striped data set is accessed, the I/O is performed in parallel to each
volume with a stripe extent on it. The volumes taking part in the I/O operation
are referred to as a layer. Whenever the data set extends to new volumes, a
new layer is created, and the data set becomes multi-layered. Figure 19
shows the relationship between layer and space allocation.

Layer 1:
  VOLA  Single primary allocation
  VOLB  Single primary allocation
  VOLC  Single primary allocation
Layer 2:
  VOLA  Single secondary allocation
  VOLD  Single secondary allocation
  VOLC  Single secondary allocation
Layer 3:
  VOLE  Two secondary allocations
  VOLF  Two secondary allocations
  VOLG  Two secondary allocations

Figure 19. A multi-layered data set



• VOLA, VOLB, and VOLC were used for the primary allocation and are
  considered Layer 1.
• When the data set extended for the first time, the extent for stripe two
  was allocated on VOLD. The new extents for stripes one and three were
  allocated on the primary volumes. Layer 2 has now been created,
  consisting of VOLA, VOLD, and VOLC.
• The next time the data set extends, all three stripes have their new extents
  on new volumes, VOLE, VOLF, and VOLG. Layer 3 has now been created.
• The next time the data set extends, all three stripes have their new extents
  on the same volumes as the previous extents. As this set of volumes is
  already represented by Layer 3, a new layer is not created.

Usually you do not need to be concerned about the existence of layers, as


there is no user control of layers. However, it might be helpful to know that a
layer can be a target of extent reduction when DFSMSdss moves or restores
VSAM striped data sets.

2.1.6 Worked examples


In this section, we introduce some worked examples.

2.1.6.1 LISTCAT output


There are several fields in the LISTCAT output which are relevant to striping.

For example, we defined an ESDS with 2500 cylinders of primary space and 500
cylinders of secondary space. The storage group has an SDR value of 28.

An output of IDCAMS LISTCAT of the data set, before any data was loaded,
showed the following fields:
ATTRIBUTES
STRIPE-COUNT-----------7
ALLOCATION
SPACE-TYPE---------TRACK HI-A-RBA------1920307200
SPACE-PRI----------37506 HI-U-RBA---------------0
SPACE-SEC-----------7504
EXTENTS----------------7

The cylinder allocation values have been converted to tracks and rounded up
to the next highest multiple of 14, because the stripe count is 7 and the CA
size is 14 (as discussed in 2.1.5.5, “VSAM structure and space calculation”
on page 25).



High allocated RBA is maintained for all volumes for all stripes
After initial allocation, there are 7 volume records, all of which contain:
TRACKS--------------5358
TRACKS/CA--------------2
HI-A-RBA------1920307200
HI-U-RBA---------------0

As you can see, the high allocated RBA is that of the whole cluster and not of
the individual stripe.

The number of tracks per CA is shown as 2, which is the per-volume value,
as the stripe count is 7. The true number of tracks per CA is 14 (as
discussed in “CA size calculation for VSAM striped data sets” on page 26);
this value is not listed anywhere.

High used RBA is maintained on the first volume only


After data had been loaded, which caused the data set to extend, the
LISTCAT showed, for the cluster:
ALLOCATION
SPACE-TYPE---------TRACK HI-A-RBA------2304512000
SPACE-PRI----------37506 HI-U-RBA------2211840000
SPACE-SEC-----------7504
EXTENTS---------------14

For stripe 1:
ALLOCATION
HI-A-RBA------2304512000 EXTENT-NUMBER----------2
HI-U-RBA------2211840000 EXTENT-TYPE--------X'00'
LOW-RBA----------------0 TRACKS-------5378
HIGH-RBA------1920307199
LOW-RBA-------1920307200 TRACKS-------1072
HIGH-RBA------2304511999

For stripes 2 - 7:
HI-A-RBA------2304512000 EXTENT-NUMBER----------2
HI-U-RBA---------------0 EXTENT-TYPE--------X'00'
LOW-RBA---------------0 TRACKS----------5358
HIGH-RBA------1920307199
LOW-RBA-------1920307200 TRACKS----------1072
HIGH-RBA------2304511999

As you can see, only the volume record for stripe 1 contains a value for
the high used RBA.



2.1.6.2 Practical experience
Figure 20 shows the result of our performance test.

Bar chart: elapsed time in seconds (scale 0 to 1,000) for a non-striped to
non-striped data load versus an 8-striped to 8-striped data load.

Figure 20. Data load time comparison

We prepared approximately 2.1 gigabytes (GB) of sequential data on a
non-striped non-VSAM data set and an 8-striped non-VSAM data set. We
also prepared a non-striped ESDS and an 8-striped ESDS, both of which had
enough capacity to contain 2.1 GB of data. Then we measured the elapsed
time of the data loading, using the IDCAMS REPRO command.

Note that the purpose of the performance measurement is only to give you a
general idea of how useful this new function is. The test we made was not
formal, and we do not guarantee that you would get the same results, since
there are many factors that affect performance measurement, such as I/O
configuration, software configuration, workload distribution, and so on.

2.1.7 Other considerations


In this section, we describe other considerations on VSAM striping.

2.1.7.1 Considerations on performance


Here are some considerations on performance criteria.

Bulk I/Os perform better at sequential operation


In the example shown in Figure 16 on page 25, three CIs may be read in
parallel during sequential processing. Each of the volumes can complete its
I/O operation in any order but, because the data has to be presented in the
correct sequence to the reading program, the data will remain buffered until
it is next in sequence. Therefore, the overall performance of the data set
will be determined by the response time of the slowest volume. In order to
minimize this effect, we recommend that you transfer a large amount of data
at a time. If you read only as many CIs as the stripe count, only one CI is
read from each stripe; if the size of each CI is small, you will not realize
a gain in performance. We recommend that you use system-managed buffering
(SMB), rather than specifying the BUFND parameter.

Random I/Os and VSAM striping


Generally speaking, VSAM striping has no negative impact on random (or
direct) I/Os, as only one CI is accessed per request. However, since
sequential I/O on striped data sets can keep all of the striped volumes busy,
although for a shorter period of time, it can affect direct I/Os for other
data sets on these volumes. This effect is minimized with the Parallel Access
Volume/Multiple Allegiance (PAV/MA) and I/O priority queueing capabilities of
the IBM Enterprise Storage Server (ESS).

However, this does not mean that migrating non-striped VSAM data sets to
striped VSAM data sets is not worth doing, because data sets are rarely
accessed only randomly during their life cycle. In almost all cases,
sequential processing is also involved, such as loading data into VSAM data
sets, making backup copies of them, or passing data sequentially to another
program like DFSORT. Batch processing is invariably sequential in nature.
Since VSAM striping improves this sequential processing, the overall
performance of your applications should improve.

In addition, VSAM striping will help improve performance even for direct
processing in the following cases:
• When the CA split process is involved.
When your application processes a KSDS, CA splits may occur. During
the split process, the system moves half of the data in a CA to a new CA.
Since the system performs this series of I/Os sequentially, the process for
a striped data set can be completed faster than that of a non-striped data
set, as a CA is striped across multiple volumes.
• When single volume data sets are converted to striped data sets.
When you use a single volume data set and an application makes concurrent
I/O requests against the data set, volume-level I/O contention can be
observed unless you use ESS with its PAV/MA features. Since VSAM striping
spreads data across multiple volumes, the chance of volume-level I/O
contention is lower than with a single volume data set. The same would be
true of a non-striped multi-volume data set, if you have chosen to allocate
non-striped multi-volume data sets for this purpose.



Note: For some applications, you may have used key range data sets or
multi-volume data sets, and the application’s design could have been
bound to these organizations, so that the application could avoid volume
level contention. Although rare, if this situation applies to one of your
applications, then migrating these data sets to VSAM striped data sets
could actually make the application’s performance even worse, because
the I/Os which could be processed concurrently before migration may not
be processed concurrently after migration.
In this case, you can use ESS with its PAV/MA features instead. PAV allows
concurrent I/O requests against the same logical volume from a single
OS/390 system, and MA allows concurrent I/O requests against the same
logical volume from multiple OS/390 systems. Therefore, the chance of
volume-level contention is decreased, while you still get the benefit of
loading and unloading data by striping. If you cannot use PAV, you can
leave these data sets as they are, or you can try running your applications
with striped data sets on a trial basis before running them in production.

RAID DASD subsystems and VSAM striping


Today's latest DASD subsystems, including the ESS, use RAID architecture,
and the ESS performs striping I/Os internally whenever possible.

So why use VSAM striping? The reason involves channel speed versus
physical device speed. When an application transfers data to or from a
device, it uses only one channel path, so the maximum data bandwidth per
volume is 17 MB per second when an ESCON channel is used.

On the other hand, the ESS's lower interface has 40 MB per second of
bandwidth per direction, and each device adapter has two paths for read and
two paths for write. This is much faster than the bandwidth of a single
ESCON channel, so striping from the host side is worth doing.

Note: 17 MB/sec and 40 MB/sec are the theoretical limits of the respective
interface speeds. The actual data transfer rate may vary.

DASD and channel resources


For a non-striped data set, the major factors affecting performance are the
channel capacity and the usage of other data sets on the volume. A striped
data set has these considerations for each volume containing a stripe.

When you use VSAM striping, the volumes in the storage group should be
spread across DASD subsystems as much as possible. Otherwise, they will
be competing for the same channel and subsystem resources.



The channel and DASD subsystems must have sufficient spare capacity to
process the additional workload.

Input or output device for VSAM striped data sets


The limiting factor on performance is most likely to be the device to which
data is being transferred. Although the striped data set transfer rate is
maintained at the maximum allowed by the stripe count, the application
elapsed time is based on the processing requirements of all the I/O events
that make up the application. If the bottleneck is the transfer rate of a
3590 attached through a 17 MB/sec ESCON channel, transferring the striped
data at a much higher rate than that may not improve the elapsed time.

If a data set is being backed up to tape and the maximum performance is
required, an intermediate striped data set could be required.

For example, if an active data set with a stripe count of five has to be backed
up when Concurrent Copy is not usable, then perform the following steps:
1. Create an intermediate data set, which also has a stripe count of five.
2. Stop access to the active data set.
3. Copy the active data set to the intermediate data set.
4. Allow access to the active data set.
5. Copy the intermediate data set to tape.
6. Delete the intermediate data set.

2.1.7.2 Coexistence with supported DFSMS/MVS releases


Table 3 shows the list of coexistence APARs/PTFs required for supported
DFSMS/MVS releases. These are coexistence PTFs for VSAM data striping.
Table 3. Coexistence PTFs for VSAM data striping

APAR      DFSMS/MVS 1.5.0   DFSMS/MVS 1.4.0   DFSMS/MVS 1.3.0   DFSMS/MVS 1.2.0
OW41297   UW67759           UW67758           N/A               N/A
OW23994   N/A               N/A               UW35958           UW35957

N/A: Not Applicable

The purpose of these PTFs is the same: to prevent a down-level system from
corrupting VSAM striped data sets. However, the PTFs work a bit differently
(see Figure 21), so this information may help you plan.



Conditions checked when a VSAM striped data set is opened on a down-level
system:
- It is not a KSDS
- A KSDS is accessed as an ESDS
- The number of volumes is not equal to the number of stripes

DFSMS/MVS V1R5 + UW67759 (OS/390 V2R9):  IEC161I 255(001)-255,
                                         IEC161I 255(002)-255,
                                         IEC070I 255(003)-255
DFSMS/MVS V1R4 + UW67758 (OS/390 V2R6):  IEC161I 255(004)-255,
                                         IEC070I 255(003)-255
DFSMS/MVS V1R3 + UW35958 (OS/390 V1R1):  IEC161I 037(140-078)
DFSMS/MVS V1R2 + UW35957 (MVS/ESA SP V5.2.2): IEC161I 037(140-078)

Figure 21. Down-level system cannot open VSAM striped data sets

DFSMS/MVS Version 1 Release 5 with coexistence PTF UW67759 allows you
to process a VSAM striped data set created on DFSMS Release 10 if it meets
all of the following conditions when opened:
• It is a striped KSDS.
  Even so, you cannot open a component of the KSDS separately.
• Each stripe has only one volume.

Otherwise, the system will not allow you to open the data set. Even if all of
the above criteria are met, there is no support for extension. The system
will fail the request with the error message IEC070I, as you can see from
Figure 21, if the data set needs to extend (even within the same volume).

DFSMS/MVS V1.4.0 with coexistence PTF UW67758 does not allow you to open
a VSAM striped data set under any circumstances.



2.2 Large tape block sizes
In this section, we describe the new DFSMSdfp capability, which allows you
to have a tape block size greater than 32,760 bytes for tape data sets.

2.2.1 Background of this enhancement


Most tape devices used today support block sizes of up to 64 kilobytes (KB),
and systems other than OS/390 can create files with block sizes greater than
32 KB. Until now, the OS/390 standard sequential access methods, QSAM and
BSAM, have only supported a maximum block size of 32,760 bytes; to process
larger block sizes, the OS/390 user has had to write a program using EXCP
processing. For example, DFSMSdss performs I/O with its own EXCP, so it
supports a 64 KB block size.

2.2.1.1 Benefits of larger block sizes


Larger block sizes make more efficient use of the channel and tape
subsystem resources, because more data is transferred with each I/O
operation. This is particularly true for the 3590 tape subsystem, which is
capable of supporting block sizes of up to 256K.

2.2.1.2 Why not DASD as well?


Data sets on disk are not an issue as far as exchanging data between
organizations is concerned.

There would be little increase in the track capacity of the widely used 3380
and 3390 device types. In addition to this, the major reason is that there are a
large number of programs, both supplied by vendors and written in-house,
which browse/edit disk data sets, all of which would need to be changed.

2.2.2 How to write tape blocks greater than 32,760


In order to use this capability, you need to change your application
programs; the JCL may need to be changed, too. In this section, we explain
these various considerations. To simplify further discussion, we refer to a
tape block larger than 32,760 bytes as a “large tape block”, and to the
program capability of processing those large blocks as the “large block
interface” (LBI).



2.2.2.1 Tell the system to use LBI through DCBE
The critical change, for the programmer, is the requirement to have a DCBE
control block. You can use any of the following to tell the system to use LBI:
• Use COBOL
If you run programs written in COBOL, you do not have to take any
specific action to use LBI, as the run-time library provided by the
Language Environment for OS/390 Version 2 Release 10 enables LBI.
Refer to 2.2.4.6, “High level languages” on page 48 for further discussion.
• Use QSAM or BSAM through High Level Assembler directly.
  You also need to do either of the following:
  - Specify the BLKSIZE parameter on the DCBE macro.
    You do not have to code the actual block size to be used in the
    BLKSIZE parameter, as the decision will be made elsewhere.
  - Set the DCBEULBI bit on.
    This is a bit in the DCBE. Setting this bit on indicates that the program
    intends to use LBI. Refer to the IHADCBE macro expansion to locate
    the bit.
Without one of these changes (except for COBOL programs), LBI cannot take
place. You can test the DCBESLBI bit after OPEN to see whether the program
can actually use LBI.
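The following is a minimal sketch of the assembler changes (the DDNAME,
labels, and the other DCB attributes are our own assumptions):

TAPEOUT  DCB   DDNAME=DDOUT,DSORG=PS,MACRF=PM,RECFM=VB,LRECL=80,     X
               DCBE=TAPEDCBE
TAPEDCBE DCBE  BLKSIZE=0          request LBI; 0 lets the system choose
         :
         OPEN  (TAPEOUT,(OUTPUT))
*        After OPEN, test the DCBESLBI bit (mapped by the IHADCBE
*        macro) to verify that LBI is actually in effect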

2.2.2.2 Tape labels supported


You can read and/or write a tape data set which has a large block size if it
has IBM standard labels (SL), nonstandard labels (NSL), or no labels (NL).
You can also perform bypass label processing (BLP). There is NO support for
ANSI, ISO, or FIPS labeled tapes.

The location of block size information in IBM Standard labels


If SL tapes are being processed within a program, you need to allow for the
following change.

The existing fields describing the block size in the HDR2/EOV2/EOF2 labels
are 5 bytes long and hold the value in EBCDIC. When large blocks are
written, these fields will be zero (all X'F0's), and offsets 71 to 80 will
contain the block size in EBCDIC.

2.2.2.3 JCL changes


In this section, we discuss some changes that have been made to the JCL
interface.



Maximum BLKSIZE is changed to 2 gigabytes
The maximum value for the BLKSIZE parameter on a DD statement on a
pre-OS/390 Version 2 Release 10 system is 32760. In OS/390 Version 2
Release 10, the maximum value is now 2 gigabytes, which allows for future
tape devices that may support larger block sizes. You can code BLKSIZE in
bytes, kilobytes (1 kilobyte = 1,024 bytes), megabytes (1 megabyte = 1,024
kilobytes), or gigabytes (1 gigabyte = 1,024 megabytes). For example, if you
specify BLKSIZE=64K on a DD statement, this will mean BLKSIZE=65536.

Unlike in a pre-OS/390 Version 2 Release 10 system, the decision as to
whether the value is valid for the device is made at OPEN time. On a
pre-Release 10 system, a BLKSIZE value greater than 32,760 is detected as a
JCL error, and the job is not executed. If you have a multi-access spool
(MAS) environment containing both OS/390 Version 2 Release 10 and
pre-Release 10 systems, you may want to give JCL which specifies large tape
block sizes an affinity to an OS/390 Version 2 Release 10 system, as such
JCL would fail with a JCL error on a pre-Release 10 system.

Figure 22 is an example of some JCL which has a system affinity.

//HGPARKA JOB MSGCLASS=X,NOTIFY=&SYSUID
/*JOBPARM SYSAFF=SC63
//STEP1   EXEC PGM=QSAMLBI
//DDOUT   DD DSN=ITSO.TAPE.DSET,DISP=OLD
:
//

Figure 22. /*JOBPARM SYSAFF can be used to have system affinity

Note: We do not recommend that you create large block tape data sets until
all of the systems in your MAS have migrated to OS/390 Version 2 Release 10.
We also do not recommend that you specify the BLKSIZE parameter, as this
may create a dependency on certain devices.

New BLKSZLIM parameter


A new parameter, BLKSZLIM, has been introduced. It is used to assign the
maximum value to the block size that is permitted, when the system
calculates a system determined block size (SDB). Just like BLKSIZE, you can
code BLKSZLIM in bytes, kilobytes (1 kilobyte = 1,024 bytes), megabytes
(1 megabyte = 1,024 kilobytes), or gigabytes (1 gigabyte = 1,024 megabytes).
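For example (the data set name and unit are our own), the following DD
statement lets the system determine the block size but caps it at 256 KB:

//DDOUT  DD DSN=ITSO.TAPE.DSET,DISP=(NEW,CATLG),UNIT=3590,
//          BLKSIZE=0,BLKSZLIM=256K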

We describe SDB and BLKSZLIM in more detail in 2.2.3.1, “System
determined block size” on page 39.



2.2.3 Considerations on using large tape block sizes
In this section, we describe some considerations on using the large tape
block size enhancement.

2.2.3.1 System determined block size


The following sections assume that the reader has an understanding of the
system determined block size (SDB). For those readers who are not familiar
with SDB, we give a brief explanation of its function below.

If a data set is being opened for output and there is no block size specified, or
if BLKSIZE=0 on return from the DCB OPEN exit and the installation OPEN
exit, the system will calculate the optimum value to be used. This is
described, in detail, in the section on BLKSIZE in the manual, OS/390
DFSMS Using Data Sets, SC26-7339. Here we describe the fundamental
principles.

Each device type has a block size which gives the best compromise between
space utilization and performance. For example, the maximum track capacity
of a 3390 is 56,664 bytes, but the standard access methods only support a
maximum block size of 32,760. If 32,760 is used, only one block can be
written per track, and approximately 24 KB would be wasted. The optimum
size in this case is half-track blocking, that is, a block size of
approximately 28 KB, which allows two blocks to be written per track with
minimal wastage.

For tapes, there is no penalty on the larger block size, and the system will use
a block size as close to 32,760 as possible in a pre-OS/390 Version 2
Release 10 system. In DFSMS Release 10, for programs which use LBI,
SMS may derive a large tape block size. We describe some factors that
would affect SDB, and how SMS determines SDB for LBI programs.

New TAPEBLKSZLIM and COPYSDB parameters in DEVSUPxx

Two new parameters have been introduced in PARMLIB member DEVSUPxx:
TAPEBLKSZLIM and COPYSDB.

TAPEBLKSZLIM sets a system-level default which will be used as a maximum
value when the system calculates a system determined block size for a tape
device. To prevent programs from accidentally using values greater than
32,760, and possibly creating tapes that cannot be read back, the default is
32,760. We strongly recommend that you leave the value at the default until
you have migrated all of your systems to OS/390 Version 2 Release 10.



COPYSDB sets the system-level default for the SDB keyword used by
IEBGENER. We describe COPYSDB in detail in 2.2.4.1, “IEBGENER” on
page 45.
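As an illustration, a DEVSUPxx member could contain entries such as the
following (the values are our own; 262144 is 256 KB):

TAPEBLKSZLIM=262144,
COPYSDB=INPUT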

The system sets these values in the data facilities area (DFA), an area that
programs can use to find information about DFP and DFSMS.

Figure 23 is an excerpt from the IHADFA DSECT=YES macro expansion,
showing these new values.

+DFACPSDB EQU X'F0' COPYSDB VALUES


+DFACPSNO EQU B'00010000' COPYSDB = NO
+DFACPSYE EQU B'00100000' COPYSDB = YES
+DFACPSSM EQU B'00100000' COPYSDB = SMALL (same as YES)
+DFACPSIN EQU B'00110000' COPYSDB = INPUT
+DFACPSLA EQU B'01000000' COPYSDB = LARGE
:
:
+DFABLKSZ DC 0D'0',F'0,32760' LIMIT ON SYSTEM DETERMINED BLOCK @LPA
+* SIZE. Default is 32760. May be @LPA
+* overridden by module IEAVNP16.
:

Figure 23. IHADFA macro expansion

Refer to MACLIB(IHADFA) and the manual, OS/390 DFSMSdfp Advanced
Services, SC26-7330, for complete information about the DFA.

Note that you cannot change these values dynamically. If you need to change
these values, you need to re-IPL the system after you have modified
PARMLIB(DEVSUPxx).

Tape devices can now report the block sizes they support
Two new fields have been added to the UCB extension for tape devices; they
contain the maximum and optimum block sizes supported by the device. The
system obtains these values from the tape hardware and sets them in the
UCB extension when a tape device is brought online, if possible. You have
no control over these values.

How, then, does the system select SDB for an LBI program?
The system picks the first non-zero number in the following order of
preference as the block size limit value:
1. BLKSZLIM on the DD statement, if coded
2. Block size limit in a data class, if such a data class is assigned to the
data set



Refer to 2.2.3.3, “System-managed storage and SMS ACS routines” on
page 41 for more information about the data class support.
3. TAPEBLKSZLIM in DEVSUPxx, if coded
4. 32,760 if none of the above are present

Then the system compares the block size limit and the optimum block size
available in the UCB extension and selects the smaller value as the limit.

In summary, the system never selects a block size which would go beyond
the physical tape devices’ capabilities.
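For example (the numbers are ours, for illustration only): suppose a DD
statement has no BLKSZLIM, no data class block size limit applies, and
DEVSUPxx specifies TAPEBLKSZLIM=262144. The block size limit is then
262,144; if the allocated drive reports an optimum block size of 65,536 in
its UCB extension, the system uses 65,536 as the limit.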

2.2.3.2 User specified block size


When you specify the BLKSIZE parameter through a DD statement, DCBE
macro, DCB OPEN exit, or installation OPEN exit, the system tries to honor
the value you specified, and SDB is not used. Below we describe some
considerations when a non-zero BLKSIZE is supplied.

What if BLKSIZE is not compatible with the allocated tape device?


If the BLKSIZE parameter on a DD statement or in DCBE has a non-zero
value, the system will check the value against the value in the UCB extension
for the tape device allocated. If BLKSIZE is greater than the maximum value
in the UCB extension, the job will get the error message IEC141I 013-68.

What if BLKSIZE is a large block and the program cannot use LBI?
Remember that the program cannot support LBI unless you code the DCBE
macro with the BLKSIZE parameter (or set the DCBEULBI bit on).

If you specify a large tape block size in the DCB macro and no DCBE macro
with the BLKSIZE parameter, the result will be unpredictable, as the macro
stores the right-most half-word. For example, if you specify BLKSIZE=69632
(X’11000’ in hexadecimal), the block size stored in DCB will be 4096 (X’1000’
in hexadecimal) and the program may work successfully with a block size that
you would not expect, or it may get the error message IEC141I 013-20 or
013-68.

Or, if you specify BLKSIZE on a DD statement that is larger than 32,760 and
your program does not have the DCBE macro with the BLKSIZE parameter,
the block size in DCB is left as zero, and SDB is used. Since the program
does not support LBI, the system selects SDB within 32,760.

2.2.3.3 System-managed storage and SMS ACS routines


In this section, we describe considerations on using system-managed storage
along with large tape block size.



New Block Size Limit parameter in data class
Some attributes in a data class are available for both system-managed data
sets and non-managed data sets. In addition, you can assign a data class,
not only to DASD data sets, but also to tape data sets. After application of
APAR OW44469/OW44470, a data class can have an attribute regarding the
block size limit. Figure 24 shows an example of a data class definition panel:

Panel Utilities Scroll Help


------------------------------------------------------------------------------
DATA CLASS DEFINE Page 3 of 3
Command ===>

SCDS Name . . . : SYS1.SMS.SCDS


Data Class Name : DCGT32

To DEFINE Data Class, Specify:


Data Set Name Type . . . . . . (EXT, HFS, LIB, PDS or blank)
If Ext . . . . . . . . . . . (P=Preferred, R=Required or blank)
Extended Addressability . . . N (Y or N)
Record Access Bias . . . . . (S=System, U=User or blank)
Reuse . . . . . . . . . . . . . N (Y or N)
Initial Load . . . . . . . . . R (S=Speed, R=Recovery or blank)
Spanned / Nonspanned . . . . . (S=Spanned, N=Nonspanned or blank)
BWO . . . . . . . . . . . . . . (TC=TYPECICS, TI=TYPEIMS, NO or blank)
Log . . . . . . . . . . . . . . (N=NONE, U=UNDO, A=ALL or blank)
Logstream Id . . . . . . . . .
Space Constraint Relief . . . . N (Y or N)
Reduce Space Up To (%) . . . (0 to 99 or blank)
Block Size Limit . . . . . . . (32760 to 2GB or blank)
Use ENTER to perform Verification; Use UP Command to View previous Panel;
Use HELP Command for Help; Use END Command to Save and Exit; CANCEL to Exit

Figure 24. Data class has new Block Size Limit parameter

The Block Size Limit parameter has the same meaning as the JCL BLKSZLIM
parameter. If a data class with this attribute is assigned to a data set, the system
takes the value from the data class when the corresponding DD statement does
not have a BLKSZLIM parameter.

New &BLKSIZE ACS read-only variable


You might redirect some types of new data set allocations (which are
supposed to create tape data sets) to DASDs by using ACS routines and
system-managed storage, so that you can use both DASD and tapes more
efficiently. This method is commonly known as tape mount management.

Now that DFSMS Release 10 has large tape block support, you need to make
sure that jobs which intend to make a data set with large tape blocks are not
redirected to DASDs. For example, if an LBI program with a 256K BLKSIZE
specification in DCBE tries to open a data set which is directed to DASD by
ACS routines, it will get the error message IEC141I 013-68.

In order to avoid such a situation, your ACS routine can find out the BLKSIZE
value on a DD statement, through the &BLKSIZE ACS read-only variable.



Figure 25 shows an example of ACS routines.

PROC STORCLAS
:
:
IF &BLKSIZE GT 32760 THEN DO /* IF BLOCKSIZE > 32760 @03 */
SET &STORCLAS = 'SCTAPE' /* DIRECT ALLOCATION TO @03 */
EXIT /* TAPE LIBRARY @03 */
END /* @03 */
:
:
:
END

Figure 25. New allocation requesting that large block should not go to DASD

2.2.3.4 SMF record changes


SMF records have been changed to accommodate large tape block sizes. In
this section, we describe the changes made to SMF records. Refer to the
manual, OS/390 MVS System Management Facilities, GC28-1783, for complete
information. We also recommend that you refer to the macro expansions of
these SMF records.

SMF type 14 and 15 records


An SMF type 14 or 15 record is written during CLOSE or EOV processing.
When the block size is larger than 32,767, the JFCB does not contain the
block size. Because of this, the JFCB section in an SMF type 14 or 15 record
does not contain block size information. If an LBI program processes a tape
data set, regardless of the block size, the SMF type 14 or 15 record will
have 8 bytes of block size information (SMF14LBS) in a new section, called
the Additional Data Set Characteristic (ADC) section, in the Extended
Information Segment. The high order 4 bytes of SMF14LBS should be zero, so
retrieving the low order 4 bytes should be sufficient.

We recommend that you retrieve the block size information from the ADC
section, if it is present; otherwise, retrieve the block size from the copy of
the JFCB (JFCBBLKSI).

SMF type 21 records


An SMF type 21 record is written when a tape volume is demounted. The
record has a 2-byte field to hold the block size, and it has been changed to
allow a 4-byte block size: when a new flag, SMF21LB, in the SMF21FL1 field
is on, the block size is present in a new 4-byte field, SMF21LBS.



In addition, SMF21 is enhanced to hold a bigger STARTIO (SSCH) count.
When a new flag, SMF21LS, in the SMF21FL1 field is on, a new 4-byte field
SMF21LST contains the number of I/Os issued against the tape volume
during the mount period. This is especially useful for the latest high capacity
devices, such as the 3590.

We recommend that you take the following steps to retrieve the block size
and SSCH count information:
1. Check that SMF21LBS is valid.
   Test SMF21FL1 with SMF21LB (X’20’). If it is on, SMF21LBS will be valid.
2. If it is not valid, use SMF21BSZ.
3. Check that SMF21LST is valid.
   Test SMF21FL1 with SMF21LS. If it is on, SMF21LST will be valid.
4. If it is not valid, use SMF21SIO.

Note that the block size information is not always available in the record.
The system takes the block size information from the DCBE, but this is not
always available, depending on how a volume is demounted. For example,
consider the case when the system unallocates a tape device as part of job
step termination, and a volume mounted on the device needs to be
demounted. In this case, the job step (program) which used the volume has
already finished and control has returned to the system; therefore, the DCBE
no longer exists in virtual storage.

SMF type 30 records


An SMF type 30 record has EXCP sections which contain I/O information.
This record is written at various times, such as when a job ends, or when an
SMF accounting interval has passed. Details are available in the manual,
OS/390 MVS System Management Facilities, GC28-1783.

A new 8-byte field, SMF30XBS, has been added to hold the large tape block
size value. SMF30XBS should always contain a valid block size; therefore,
retrieving this new field alone should be sufficient. Like SMF14LBS, the high
order 4 bytes of SMF30XBS should be zero, so retrieving the low order 4
bytes should be sufficient.

2.2.3.5 LOGREC changes


The outboard record (OBR) and miscellaneous data record (MDR) are written
when I/O error occurs and the system attempts to recovery, when a tape
volume is demounted. Both records have a 2-byte block size field, and these
are changed to have 4-byte fields.



OBR record
The OBR record now has a new 4-byte block size field at offset 76. When a
new bit (X'04') in the flag byte at offset 2 is set on, this field contains
the block size.

MDR record
The MDR record now has a new 4-byte block size field at offsets 34 through
37 for IBM 3590 tape devices. For non-3590 tape devices, the block size
field remains unchanged; offsets 36 through 37 hold a 2-byte block size.

2.2.4 IBM supplied programs and large tape block sizes


In this section, we describe considerations on using IBM supplied programs
that implement the large tape block size feature.

2.2.4.1 IEBGENER
IEBGENER has been changed to process large tape block sizes. Here we
describe major changes made to IEBGENER and some considerations on
using IEBGENER.

Now IEBGENER has PARM keyword


In DFSMS Release 10, IEBGENER interprets the PARM parameter on a JCL
EXEC statement, so that you can tell IEBGENER how it should treat the block
size of an output data set when its block size is not present (or is zero).
Any of the following values are valid:
• SDB=LARGE
Specifies to use SDB greater than 32760 when the output data set is tape.
If the output data set is DASD, IEBGENER will use SDB no matter what
block size the input has. Remember that SDB for DASD is always less
than 32,760.
• SDB=SMALL
Specifies to use block size equal to or smaller than 32,760. If the output
data set is DASD, IEBGENER will use SDB no matter what block size the
input has.
• SDB=INPUT
Specifies to use large tape block size only when the input tape data sets
has block size greater than 32,760. If the output data set is DASD,
IEBGENER will use SDB no matter what block size the input has.



• SDB=YES
This has the same meaning as SDB=SMALL.
• SDB=NO
Specifies not to use SDB. The output data set will have the same block
size as the input data set. However, IEBGENER will use SDB for DASD
output data sets, if the input data set is tape and has large tape block size.

Note: The description of these values in the manual OS/390 DFSMSdfp
Utilities, SC26-7343, may be incomplete; the YES and NO values have been
omitted from some editions of the manual.

Important Notice: IEBGENER will issue the error message IEB302I INVALID
PARAMETER LIST and terminate processing if any invalid keywords are
present in PARM. Prior to DFSMS Release 10, IEBGENER simply ignored
parameters supplied through the PARM keyword on the JCL EXEC statement.
You might have JCL which invokes IEBGENER with PARM coded by mistake;
such JCL worked with pre-DFSMS Release 10 systems, but IEBGENER in
DFSMS Release 10 no longer ignores invalid PARM parameters.

IEBGENER picks default from COPYSDB in DEVSUPxx


If you do not provide PARM, IEBGENER will pick the default from
COPYSDB= in PARMLIB(DEVSUPxx). Valid parameters are any of these:
• LARGE
• SMALL
• INPUT
• YES
• NO

Each value has the same meaning as the corresponding SDB= value described
in “Now IEBGENER has PARM keyword” on page 45. If you do not supply
PARM to IEBGENER and do not have COPYSDB in DEVSUPxx, IEBGENER
will use SDB=INPUT as the default.

The actual block size written


If no DCB information is specified on the output DD statement, IEBGENER
will issue message IEB352I. This message states that the input block size
has been copied, but this is not necessarily the case: TAPEBLKSZLIM limits
the SDB values, so blocks greater than 32 KB might not be written, even if
LARGE or INPUT is specified.

If you need a specific block size for the output data set, you must code the
BLKSIZE parameter explicitly, rather than let the system choose. If you need



the maximum block size that is supported by the device, then you must code
BLKSIZE=0 along with BLKSZLIM=2G on the output DD statement.
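For example, here is a sketch of an IEBGENER job that requests large tape
blocks (the data set names and unit are our own):

//COPYLBI  EXEC PGM=IEBGENER,PARM='SDB=LARGE'
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=ITSO.INPUT.DATA,DISP=SHR
//SYSUT2   DD DSN=ITSO.LARGE.TAPE,DISP=(NEW,CATLG),UNIT=3590,
//            BLKSIZE=0,BLKSZLIM=2G
//SYSIN    DD DUMMY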

2.2.4.2 DFSORT Release 14 and ICEGENER


The APAR PQ35111 (PTF UQ99323) has added functional enhancements to
DFSORT Release 14. As a part of the enhancements, ICEGENER uses the
SDB= parameter, just like IEBGENER. This is important when your
installation uses ICEGENER instead of IEBGENER. The available
parameters and the meanings are the same as those of IEBGENER, except
that ICEGENER may not pick the same block size as the input block size
when you specify SDB=NO. The IBM supplied default is SDB=INPUT;
therefore it agrees with IEBGENER’s default.

ICEGENER picks the default from ICEMAC


Note that ICEGENER takes the default from ICEMAC when no parameter is
specified, while IEBGENER takes COPYSDB in DEVSUPxx. In order to
maintain compatibility between these functions as much as possible, any
changes made to the COPYSDB default should also be made to ICEMAC.

The actual block size written


ICEGENER and SORT will issue messages ICE088I and ICE090I, which
detail the input and output block sizes.

If you need a specific block size for the output data set, you must code the
BLKSIZE parameter explicitly, rather than let the system choose. If you need
the maximum block size that is supported by the device, then you must code
BLKSIZE=0 along with BLKSZLIM=2G on the output DD statement.

2.2.4.3 IFHSTATR
IFHSTATR can be used to print the SMF type 21 records. As we described in
2.2.3.4, “SMF record changes” on page 43, the format of SMF type 21 record
is changed to have a 4-byte block size field and a 4-byte STARTIO count field.

IFHSTATR scales these values in the output when the BLOCK SIZE field or
the USAGE SIO field in the type 21 record exceeds 99,999. For example, if
the field is greater than 99,999 but less than 1,000,000, it is scaled to
multiples of 1,000 and the letter “K” is appended. If the field is greater
than 999,999, it is scaled to multiples of 1,000,000 and the letter “M” is
appended.

Note that in this case the meanings of “K” and “M” differ from the meanings of
“K” and “M” in the BLKSIZE value on the DD statement. On the DD
Statement, they mean multiples of 1,024 and 1,048,576 respectively.



Figure 26 shows an example of IFHSTATR output.

1 MAGNETIC TAPE ERROR STATISTICS BY VOLUME 00/254


0VOLUME TIME DEV T/U BLOCK TAPE TEMP TEMP TEMP PRM PRM NOISE ERASE CLEAN USAGE MBYTES MBYTES
SERIAL DATE OF DAY ADR SER MODE SIZE FORMAT READ READB WRITE RD WRT BLOCK GAPS ACTS SIO READ WRITTEN
+______ _____ ________ ____ _____ ____ _____ ______ _____ _____ _____ ___ ___ _____ _____ ____ _____ ________ ________
TST105 00254 04:39:30 0B92 0A965 RF 16384 N/A 0 0 0 0 0 N/A 0 0 492 6 0
TST101 00254 04:41:27 0B90 0A965 RF 16384 N/A 0 0 0 0 0 N/A 0 0 3608 55 0
TST102 00254 04:44:02 0B90 0A965 RF 16384 N/A 0 0 0 0 0 N/A 0 0 8486 131 0
TST105 00254 04:44:14 0B93 0A965 RF 16384 N/A 0 0 0 0 0 N/A 0 0 30686 476 0
TST109 00254 04:45:16 0B92 0A965 RF 16384 N/A 0 0 0 0 0 N/A 0 0 28466 442 0
TST101 00254 04:45:19 0B90 0A965 RF 16384 N/A 0 0 0 0 0 N/A 0 0 1382 20 0
TST107 00254 04:50:36 0B91 0A965 RF 131K N/A 0 0 0 0 0 N/A 0 0 170K 0 7999
TST101 00254 04:51:22 0B90 0A965 RF 16384 N/A 0 0 0 0 0 N/A 0 0 940 13 0
TST109 00254 04:54:00 0B93 0A965 RF 16384 N/A 0 0 0 0 0 N/A 0 0 30240 469 0
TST110 00254 04:54:13 0B90 0A965 RF 16384 N/A 0 0 0 0 0 N/A 0 0 16922 262 0
TST105 00254 04:54:37 0B92 0A965 RF 16384 N/A 0 0 0 0 0 N/A 0 0 33794 525 0
TST104 00254 04:56:42 0B93 0A965 RF 16384 N/A 0 0 0 0 0 N/A 0 0 20028 310 0

Figure 26. An example of IFHSTATR output

2.2.4.4 Volume Mount Analyzer (VMA) changes


VMA is a standard tool for analyzing how efficiently tape devices are used.
This tool uses SMF type 14/15 records as source information. Since these
SMF records may contain large tape block sizes in the SMF14LBS field, VMA
takes this into account when extracting records.

2.2.4.5 IDCAMS
The IDCAMS REPRO command does not support large tape block sizes.

2.2.4.6 High level languages


Existing programs written in COBOL do not require any changes to get this
support; all COBOL programs that run under OS/390 Version 2 Release 10
can benefit. This is especially true for programs which specify BLOCK
CONTAINS 0 in the file section. If you need to code a BLOCK CONTAINS phrase
that goes beyond the 32,760 block size, then you need to use COBOL for
OS/390 & VM Version 2 Release 2. Refer to the manual, COBOL for OS/390 and
VM Programming Guide Version 2 Release 2, SC26-9049-05, for more
information.

2.2.5 Programming considerations


In this section, we describe considerations on programming when using LBI.

2.2.5.1 The legacy problem


The information relating to block size is generally held in half-word fields.
In theory, the maximum supported block size could be increased from 32,760
to 65,535 without exceeding the half-word limitation, but this could cause
problems for programs that use the information as a signed half-word integer.



2.2.5.2 Supported logical record length
This change supports a block size greater than 32,760. No change has been
made to the maximum logical record size.

2.2.5.3 OPEN for UPDAT


OPEN for UPDAT allows a program to update records in place. You cannot use
this option with LBI. However, if you use LBI only to process large tape
block sizes, the lack of this support should not pose a problem, because you
cannot update tape data records in place anyway.

If your LBI program tries to open a data set with this option, the system will
issue the error message IEC141I 013-FE.

2.2.5.4 OPTCD=H
OPTCD=H is used to bypass VSE embedded checkpoint records. If your LBI
program tries to open a data set with this option, the system will issue the
error message IEC141I 013-FD. Because existing tapes do not contain data
sets with large tape block sizes together with VSE checkpoint records, this
should not be a problem.

2.2.5.5 Variable length records and the BDW


A data set consisting of variable blocked (VB) records has control information
which describes the block, the block descriptor word (BDW), and each record
in the block, the record descriptor word (RDW).

As we explained in 2.2.5.2, “Supported logical record length” on page 49, the
maximum logical record length supported is unchanged; therefore, the format
of the RDW is unchanged. However, DFSMS Release 10 introduces a new BDW
format for large tape block sizes.

Figure 27 shows the BDW format that does not support blocks longer than
32,760 bytes.

Bit 0:       0
Bits 1-15:   Length of block
Bits 16-31:  Reserved

Figure 27. Traditional BDW format



In order to support large tape block sizes, reserved fields have been used.
Figure 28 shows the new BDW format for tape block sizes greater than
32,760.

Bit 0:       1
Bits 1-31:   Length of block

Figure 28. Extended BDW format

As you can see, bit 0 indicates whether the BDW has the new format. When
bit 0 is 1, the BDW has the new format, which we refer to as the extended
BDW format. When bit 0 is 0, the BDW has the traditional format, which we
refer to as the non-extended BDW format.
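
As a worked example of our own: a 100,000-byte block carries an extended BDW
of X'800186A0', that is, bit 0 set to 1 followed by the 31-bit length
X'000186A0' (100,000 in hexadecimal).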

When you use QSAM to create variable length blocked records, you do not have
to manage the BDW format yourself, because QSAM maintains it for you.
However, when you use BSAM, you are responsible for maintaining this format
so that LBI application programs using QSAM can process the variable blocked
records correctly.

QSAM can process the extended BDW format correctly even when the block
length in the field is equal to or less than 32,760. However, we recommend
that you use the extended BDW format only when the block length is greater
than 32,760, unless you can be sure that all of your OS/390 application
programs support LBI and recognize the extended BDW format.

2.2.5.6 Buffer pool management


When using QSAM to process large tape block sizes, you must let the system
build the buffer pool automatically during OPEN. You cannot use the
GETPOOL, BUILD, or BUILDRCD macros to build a buffer pool for large block
sizes, because these macros cannot build buffers that hold blocks greater
than 32,760 bytes. If, during OPEN, the system finds that the DCB contains
the address of a buffer control block, and that the buffer length is smaller
than the data set block size, you will get the error message IEC141I 013-4C.
When you issue CLOSE, the system frees the buffer pool built by OPEN, so you
do not have to issue the FREEPOOL macro even if your application repeats
OPEN and CLOSE frequently.



When using BSAM to process large tape block sizes, we recommend that you
allocate data areas or buffers with the GETMAIN, STORAGE, or CPOOL macros,
rather than having the system allocate them during OPEN. If you do specify a
non-zero BUFNO parameter in the DCB to have the system build a buffer pool
during OPEN, BUFL must be zero. You will get the error message
IEC141I 013-4C if you supply both a non-zero BUFNO and a non-zero BUFL, and
the resulting buffer length is less than the block size you specified
through the DCB or DCBE.
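
For example, a BSAM program could acquire a single 256 KB buffer above the
16 MB line with the STORAGE macro. This is our own minimal sketch, not taken
from the product documentation; register usage is illustrative:

         STORAGE OBTAIN,LENGTH=262144,LOC=31    GET 256 KB ABOVE THE LINE
         LR    2,1                    R1 RETURNS THE BUFFER ADDRESS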

2.2.5.7 BUFNO for large block


If the block size is greater than 32,767, QSAM gets two buffers by default,
and if the block size is equal to or less than 32,767, QSAM gets five
buffers by default. Therefore, a large block whose size is between 32,761
and 32,767 gets five buffers, while blocks of 32,768 bytes and up get two
buffers by default. For example, if you use QSAM with the latest 3590 drive
(256 KB blocks), QSAM would obtain up to 512 KB of buffer storage by
default. You may want to consider using the RMODE31=BUFF parameter in the
DCBE macro along with LBI, as this tells QSAM to obtain buffers above the
16 MB line.

2.2.5.8 Using RECFM=U for large tape block sizes


RECFM=U, the undefined record format, can be used to process sequential
records however you want; it is your responsibility to maintain the logical
format of the records. When you use the U format, you need to tell the
system the actual length of the block being written, or the maximum possible
block size being read. Before OS/390 Version 2 Release 10, either the
BLKSIZE field or the LRECL field in the DCB was used to determine the
length. However, an LBI program cannot use these fields, because they cannot
hold values beyond 32,760, while LBI in theory allows block sizes up to
2 GB. In this section, we describe various considerations on using RECFM=U
with QSAM or BSAM.

Use DCBEBLKSI instead of DCBBLKSI or DCBLRECL


When you use LBI, you must not use the BLKSIZE field (DCBBLKSI) or the
LRECL field (DCBLRECL) in the DCB to indicate the actual length of the block
to be written, or the maximum possible block size to be read. Instead, use
BLKSIZE in the DCBE (DCBEBLKSI) with either BSAM or QSAM. With BSAM, you
must specify ‘S’ as the length in your READ/WRITE macros, and the system
will use DCBEBLKSI; with LBI, you cannot specify the length explicitly in
the READ/WRITE macro.
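
As an illustration, here is our own sketch of a BSAM DCB/DCBE pair that
requests LBI for writing 256 KB blocks; the DD name and labels are
placeholders:

OUTDCB   DCB   DDNAME=OUTTAPE,DSORG=PS,MACRF=(W),RECFM=U,DCBE=OUTDCBE
OUTDCBE  DCBE  BLKSIZE=262144         CODING BLKSIZE IN THE DCBE REQUESTS LBI
*        STORE THE ACTUAL BLOCK LENGTH IN DCBEBLKSI BEFORE EACH WRITE,
*        THEN LET THE SYSTEM PICK IT UP THROUGH THE 'S' LENGTH OPERAND:
         WRITE DECB1,SF,OUTDCB,BUFFER,'S'
         CHECK DECB1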



Determining actual block length read
Now you can use a new field called IOBLENRD to determine the actual length
of the block that the system has read. You must NOT subtract residual count
from the block size or retrieve DCBLRECL like before. You should check the
contents of IOBLENRD before issuing other I/Os regarding access methods.
Figure 29 illustrates how you can access IOBLENRD.

QSAM: IOBLENRD is 4 bytes before the area addressed by DCBIOBA (DCB+44).

         GET   INDCB,BUFFER
         ...
         L     WORKREG1,DCB+44        POINT AT THE IOB (DCBIOBA)
         S     WORKREG1,CONST4        BACK UP 4 BYTES TO IOBLENRD
         L     WORKREG2,0(WORKREG1)   LOAD THE ACTUAL LENGTH READ

BSAM: IOBLENRD is 12 bytes before the area addressed by the word at DECB+16.

         READ  DECB1,SF,INDCB,BUFFER,'S'
         CHECK DECB1
         L     WORKREG1,DECB1+16      POINT AT THE STATUS AREA
         S     WORKREG1,CONST12       BACK UP 12 BYTES TO IOBLENRD
         L     WORKREG2,0(WORKREG1)   LOAD THE ACTUAL LENGTH READ

Figure 29. The location of the length-read field

If you use BSAM, you need to issue the CHECK macro to make sure that the
corresponding read operation has completed. After that, you can test as
shown in the above example.

Note that this method works only when you do not use chained scheduling.
Chained scheduling is an I/O technique that issues multiple read or write
channel command words (CCWs) in a single channel program; it is also known
as command chaining. For QSAM, specifying BUFNO=1 should be sufficient to
ensure that chained scheduling is not used. For BSAM, specifying NCP less
than 2, or issuing each WRITE and CHECK macro as a pair, has the same
effect.

2.2.5.9 DEVTYPE INFO=AMCAP


A new keyword, AMCAP, has been added to the INFO parameter of the DEVTYPE
macro. DEVTYPE INFO=AMCAP provides information about the maximum and optimum
block sizes supported by the device you query. Table 4 shows the 32 bytes of
information returned by DEVTYPE INFO=AMCAP.



Table 4. INFO=AMCAP return area

Offset  Bytes  Description
0(0)    1      Flags. Bit 0 on means that BSAM, QSAM, and (if
               DASD) BPAM support the large block interface and
               that the block size limit is in the next double
               word.
1(1)    7      Reserved, currently set to zeros.
8(8)    8      Maximum block size supported. If you specify a DD
               name to DEVTYPE for a data set concatenation, this
               value is the largest for any of the DDs. On output,
               OPEN does not allow a block size that exceeds this
               value except with EXCP. On certain cartridge tape
               drives, exceeding this limit can cause bypassing of
               hardware buffering. This value can exceed 32,760
               for a magnetic tape or dummy data set and therefore
               require EXCP or LBI to use this value as a block
               size. In the future, IBM may support values that
               exceed 32,760 for other device types.
16(10)  8      Recommended optimum block size. This is less than
               or equal to the maximum block size supported. Above
               this length the device might be less efficient or
               less reliable. If you specify a DD name to DEVTYPE
               for a data set concatenation, this value is the
               largest for any of the DDs. Consult the hardware
               documentation for further information.
24(18)  8      Maximum unspanned logical record length supported
               by BSAM, QSAM, or BPAM. Various types of data sets
               on the device might have various maximum record
               lengths. Therefore, if UCBLIST was coded on DEVTYPE
               and not a DD name, this value is the smallest for
               the possible data set types for BSAM, QSAM, and
               BPAM.

Table 5 shows the values that would be returned as optimum and maximum
values, as a response to DEVTYPE INFO=AMCAP.
Table 5. Optimum and maximum block size by device type

Device type   Optimum value                    Maximum value
DASD          Half track (in most cases)       32,760
Reel tape     32,760                           32,760
3480, 3490    65,535                           65,535
3590          262,144 (256 KB), except on      262,144 (256 KB)
              some older models, on which
              it is 229,376 (224 KB)
DUMMY         16                               5,000,000

For example, you can issue this macro against a DD that you are about to
open. If the device allocated for the DD is a 3590, the macro returns
262,144 (256 KB) as the maximum block size. You can then set up an
appropriate DCBE parameter and open the data set.

Note that DEVTYPE INFO=AMCAP returns binary zeros when the program runs
under a pre-OS/390 Version 2 Release 10 system.
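
The following is our own minimal sketch of how the query might be coded; the
positional ddname/area operand form is an assumption, so check the DEVTYPE
macro documentation for the exact syntax:

         DEVTYPE DDNAM,AREA,INFO=AMCAP   QUERY MAXIMUM/OPTIMUM BLOCK SIZES
         ...
DDNAM    DC    CL8'OUTTAPE'            DD NAME, LEFT-JUSTIFIED AND PADDED
AREA     DS    XL32                    32-BYTE RETURN AREA (SEE TABLE 4)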

2.2.5.10 RDJFCB
The block size value in the JFCB (JFCBLKSZ) has a half-word length;
therefore, it cannot hold the information when the JCL specifies a BLKSIZE
value greater than 32,760 on the respective DD statement. The JFCB also has
no information about the BLKSZLIM or TAPEBLKSZLIM values.

For this reason, the response to an RDJFCB allocation retrieval has been
modified to allow a program to determine the block size and block size
limit values that the JFCB cannot hold.

The allocation retrieval area (ARA) header, mapped by the IHAARA macro,
has been modified. Figure 30 shows the header part of the expansion of
IHAARA macro.

ARA DSECT
ARALEN DS H Length of ARA info
ARAFLG DS B ARA flags
ARAXINF EQU X'80' ARA Extended Information Segment present
ARAXINOF DS B Offset in double words to Extended Info Segment
:

Figure 30. Macro expansion of ARA

As you can see, the system sets bit 0 of ARAFLG on when the respective DD
statement has a BLKSIZE value greater than 32,760 and/or a value supplied
through BLKSZLIM/TAPEBLKSZLIM. You can retrieve these values from the ARA
extended information segment, which the IHAARA macro also maps. Figure 31
shows the mapping of the ARA extended information segment.

ARAXINFO DSECT
ARAXINLN DS H Length of Extended Info Segment
DS 6B Reserved
ARAXBLKS DS DL8 Blksize
ARABKSLM DS DL8 Blksize limit for DD

Figure 31. ARA extended information segment

The ARA is pointed to by the field ARLAREA in the data returned by RDJFCB,
which is mapped by the IHAARL macro. Figure 32 shows a small sample program
that illustrates how to retrieve this information.

LBRDJFCB CSECT
LBRDJFCB AMODE 24
LBRDJFCB RMODE 24
BAKR 14,0 SAVE REGISTERS to LINKAGE STACK
BASR 12,0 USE GR12 AS BASE REGISTER
USING *,12
PSTART SR 7,7
SR 8,8
RDJFCB LBTEST READ THE JFCB
ICM 7,B'1111',ARLAREA OBTAIN THE ADDRESS OF ARA
USING ARA,7 ESTABLISH ADDRESSABILITY TO ARA
TM ARAFLG,ARAXINF EXTENDED INFO SEGMENT PRESENT?
BNO FLAGOFF BRANCH IF NO
ICM 8,B'0001',ARAXINOF INSERT THE OFFSET IN DWORDS
SLL 8,3(0) MULTIPLY BY 8
AR 8,7 POSITION TO EXTENDED INFO SEGMENT
USING ARAXINFO,8 ESTABLISH ADDRESSABILITY TO XINFO
* :
* PROCESS AS REQUIRED
* :
PR RETURN TO CONTROL PROGRAM
FLAGOFF DS 0H IF INFO SEGMENT NOT PRESENT..
* :
* PROCESS AS REQUIRED
* :
RETURN PR RETURN TO CONTROL PROGRAM
LBTEST DCB DDNAME=LBTEST,MACRF=(GM),EXLST=READAXA
READAXA DS 0F
DC X'13' REQUESTS ARA
DC AL3(ARLAREA)
DC X'80000000' END OF EXLST
ARLAREA IHAARL DSECT=NO
IHAARA MAPPING MACRO FOR ARA
END LBRDJFCB

Figure 32. Example for locating ARA extended information segment



2.2.6 Summary of recommendations on using large tape block sizes
The following are our recommendations for using large tape block sizes:
• TAPEBLKSZLIM should be 32760.
There are two ways to achieve this:
- Do not specify TAPEBLKSZLIM in PARMLIB(DEVSUPxx).
It will be 32760 by default.
- Set TAPEBLKSZLIM to 32760.
After you have migrated all of your systems, including disaster recovery
sites to OS/390 Version 2 Release 10 (or, DFSMS Release 10), you may
consider using a value bigger than 32760.
Remember that this parameter can only be changed by an IPL, which
makes it possible for systems in a sysplex to have different defaults.
• Do not code a BLKSIZE value.
We recommend that you do not specify BLKSIZE, other than zero, unless
a specific value is required.
• Specify BLKSZLIM=2G on a DD statement, if required.
If you need the maximum block size supported by the device for a certain
application, we recommend that you specify BLKSZLIM on the DD statement, as
shown in the example after this list. This allows the device type to be
changed without changing the JCL: if BLKSZLIM=2G is specified, the block
size limit is always the device maximum, no matter which device is used.
• Consider the implications of using large tape block sizes.
You need to consider the following implications in production before all of
the systems are at the OS/390 Version 2 Release 10 level:
- Jobs that process large tape blocks will require system affinity.
This could mean changes to the scheduling system, during the
transition period — changes which will have to be removed when the
upgrade is complete.
- Jobs that are run on the wrong system might fail with JCL errors, or
give unexpected abends.
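
For example (a sketch; the data set name is a placeholder), a DD statement
that lets the block size rise to the maximum of whatever device is allocated
could be coded as:

//OUT DD DSN=PROD.BIGBLOCK.TAPE,UNIT=3590,DISP=(NEW,CATLG),
// RECFM=FB,LRECL=4096,BLKSZLIM=2G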



2.2.7 Worked examples
2.2.7.1 Performance measurements
We did not attempt any extensive testing, because performance is a complex
subject with many contributing factors, but we did run some very basic jobs to
illustrate some of the results that can be achieved.

We ran a job that wrote internally generated records to a 3590 tape. The
system was lightly loaded and there was no other tape activity.

The job was run three times writing 32K, 64K, and 256K blocks with tape unit
compression turned off, and then three times with compression turned on.
The results were as shown in Table 6.
Table 6. Large tape block size performance comparison

Block size Compression EXCP count Elapsed time

32K no 104,800 6 m, 54 s

64K no 55,250 6 m, 48 s

256K no 13,334 6 m, 52 s

32K yes 104,800 5 m, 38 s

64K yes 55,260 4 m, 42 s

256K yes 13,334 3 m, 38 s

The results for the tests without compression show no gain for the increase in
tape block size; the tests with compression show substantial gains. The
reasons for this difference are as follows.

When data is transferred from the host to the 3590, it is buffered in the
control unit and then written on the tape in a standard block size of
384 KB. These two operations have different bandwidths: the 3590 can receive
a maximum of 17 MB per second from the host channel and transfer a maximum
of 9 MB per second to the tape.

Because we were able to drive the channel at its maximum rate, the limiting
factor for the uncompressed data was the control unit to tape bandwidth.

The data we were generating was highly compressible. When we allowed the
control unit to perform compression, the limiting factor was the channel
speed, and the advantage of using larger blocks became clear.

In real-life cases, even low-priority batch jobs should benefit from an
increase in block size, because they make more effective use of each I/O
operation by transferring more data each time.



2.2.8 Considerations
In this section, we describe considerations on using large tape block sizes.

2.2.8.1 Coexistence with supported DFSMS/MVS releases


Below, we describe considerations on coexistence with supported
DFSMS/MVS releases.

New &BLKSIZE ACS read-only variable


As explained in “New &BLKSIZE ACS read-only variable” on page 42, a new ACS
read-only variable, &BLKSIZE, has been introduced. If your SMS configuration
is shared between Release 10 and pre-Release 10 systems and you plan to test
&BLKSIZE in your ACS routines, be aware that the variable is not available
on pre-Release 10 systems, where it always returns 0 (zero).

Coexistence PTFs
Table 7 shows a list of coexistence APARs and their corresponding PTFs
regarding large tape block sizes.
Table 7. Coexistence PTFs for large tape block size

APAR\PTF     DFSMS/MVS    DFSMS/MVS    DFSMS/MVS    DFSMS/MVS
             V1.5.0       V1.4.0       V1.3.0       V1.2.0

OW41030 UW63366 UW63365 UW63364 UW63363

OW40629 UW62977 UW62976 UW62975 UW62974

OW40414 UW61954 UW61953 UW61952 UW61951

Following are the details of each APAR:


• OW41030
This APAR prevents a pre-DFSMS Release 10 system from processing a tape
data set that has large tape block sizes.
After applying the fix for this APAR, the system checks the record length
description in the IBM standard label to see whether the data set has large
tape block sizes. If it does, the system fails the open and issues the error
message IEC146I 513-2C.
• OW40629
This APAR prevents a pre-DFSMS Release 10 system from processing a variable
blocked record that has an extended BDW format. Even though APAR OW41030 can
prevent a down-level system from opening such a data set, users could still
attempt to process it by overriding the RECFM, BLKSIZE, and LRECL
specifications.
After applying the fix for this APAR, when the system detects an extended
BDW format, it returns a Wrong Length Record error to the application, as if
the error had come from the channel subsystem. If a program does not provide
any error recovery routines, it will get the message IEC020I 001-4. If
IEBGENER encounters the Wrong Length Record error, it will issue message
IEB351 with “WRNG.LEN.RECORD” text, and ICEGENER will issue the message
ICE061A 5 I/O ERROR, indicating a channel status word (CSW) of X’0C40’
(Channel End, Device End, Wrong Length Record).
Note that you will not get this error message if you attempt to process a
tape data set that has large tape block sizes with RECFM=U. For example,
assume that your program attempts to read a data set with a 256 KB tape
block size by using RECFM=U and BLKSIZE=32760 on a down-level system. The
system just reads the left-most 32,760 bytes of each block and does not
return any error status. When RECFM=U is in effect, the system builds the
channel program with the Suppress Incorrect Length (SLI) bit on. The SLI bit
tells the channel subsystem not to report a mismatch between the requested
byte count and the actual block length read, so the channel subsystem does
not report the Wrong Length Record condition even though it detects it.
If you attempt to process large tape block sizes with RECFM=F(B) or V(B) on
a down-level system, the system returns a Wrong Length Record error to the
application, because it does not set the SLI bit on for these record
formats.
• OW40414
This APAR allows the IFHSTATR program on down-level systems to recognize the
new SMF type 21 record format, which we described in 2.2.3.4, “SMF record
changes” on page 43.
After applying this APAR, IFHSTATR on a down-level system gives you the same
output as IFHSTATR at DFSMS Release 10. Refer back to 2.2.4.3, “IFHSTATR” on
page 47 for more information about IFHSTATR.

2.3 UNIT=AFF ACS support


In this section, we describe this new enhancement made to the UNIT=AFF
ACS support for tape libraries.



2.3.1 Background of this enhancement
First, we review the UNIT=AFF processing; then we describe the problem
that was fixed by this enhancement.

2.3.1.1 What is “UNIT=AFF”?


UNIT=AFF is a technique that can be used to reduce the number of tape
devices allocated for a job step. As an example, consider the following JCL:
//STEP1 EXEC PGM=TAPEIO,...
//DD1 DD DSN=A,UNIT=CART,VOL=SER=A,...
//DD2 DD DSN=B,UNIT=CART,VOL=SER=B,...
//DD3 DD DSN=C,UNIT=CART,VOL=SER=C,...
//SYSPRINT DD SYSOUT=*,...

Assume that the program TAPEIO requests tape devices from an esoteric group
named CART. In this example, the system allocates a device for each DD in
this job step, so three devices are allocated in total. However, if TAPEIO
does not process those DD resources concurrently, one device is sufficient
for the program. You can use UNIT=AFF as in the following code sample:
//STEP1 EXEC PGM=TAPEIO,...
//DD1 DD DSN=A,UNIT=CART,VOL=SER=A,...
//DD2 DD DSN=B,UNIT=AFF=DD1,VOL=SER=B,...
//DD3 DD DSN=C,UNIT=AFF=DD2,VOL=SER=C,...
//SYSPRINT DD SYSOUT=*,...

In this example, the system allocates a tape device for DD1 and uses the
same device for DD2 and DD3. Therefore, the job step does not have to
allocate more than one device.

This has been a very common technique in many installations, and it is known
as unit affinity. To simplify the discussion that follows, we refer to a DD
resource that is referenced by other DDs through UNIT=AFF as a referenced
DD, and to a DD resource that references another DD through UNIT=AFF as a
referencing DD. In the previous JCL example, DD1 is a referenced DD, DD2 is
both a referencing DD and a referenced DD, and DD3 is a referencing DD.

2.3.1.2 What was the problem?


Since system-managed storage was introduced, the system has had the ability
to ignore a request made on a DD resource and redirect the allocation to
another device type, through the SMS ACS routines. Therefore, it is possible
to allocate a system-managed data set even though the original DD specifies
a tape data set, or vice versa.

By using this logic, the tape mount management (TMM) methodology was
invented and is now in common use. The objective of TMM is to utilize tape
capacity as much as possible. If you use system-managed storage, you can
redirect tape data set allocations to DASD volumes without changing the JCL.
Once data sets have been allocated on DASD volumes, you can have DFSMShsm
move them to tape volumes, which is known as migration. By using this
technique, you can stack data sets on tape volumes and utilize your storage
more efficiently.

To achieve this transparently, ACS routines must be smart enough to avoid
errors, and unit affinity was one of the problems in implementing TMM. For
example, if a referenced DD has been redirected to system-managed DASD, the
referencing DD should also be directed to DASD; otherwise, the system fails
the allocation, because the referencing DD is requesting unit affinity to a
DASD device.

&UNIT returns “AFF=” for referencing DDs


However, before DFSMS Release 10, there was not enough information about how
a referencing DD was coded: the system simply set &UNIT to the value “AFF=”
when the ACS routines got control to allocate a referencing DD.

If you could determine where the referencing DD should go by using this
“AFF=” value along with the data set name, program name, or other
information available through the other ACS read-only variables, there was
no problem. However, doing this was very difficult, especially for older JCL
resources that were not designed for use with system-managed storage.

Tape libraries and unit affinity


System-managed tape libraries had another difficulty. When you implement IBM
3494 tape libraries, you need to allocate tape devices in the libraries
through ACS routines, unless you use basic tape library support (BTLS). Your
installation might not have had system-managed storage before, in which case
you would need to implement it just to use the tape libraries. The easiest
way is to redirect an allocation to a system-managed tape library whenever
UNIT specifies a tape device number or a tape esoteric name.

However, this method does not work for referencing DDs, because their &UNIT
variable is “AFF=”. If you had system-managed tape libraries only, this
would not be a problem, as redirecting the referencing DDs to
system-managed libraries unconditionally would be sufficient. However, you
might have both system-managed libraries and non-system-managed tape
devices. If your JCL needs to allocate a non-system-managed tape device and
have unit affinity to that device, the ACS implementation would again be
difficult.

VOL=REF and unit affinity


Similar situations have existed for volume affinity, which refers to DDs
that request the same volumes through the VOL=REF parameter on a DD
statement. For volume affinity, the system has provided information about
the data set referenced by the VOL=REF parameter through the
&ANYVOL/&ALLVOL ACS variables since DFSMS/MVS V1.3.0. However, equivalent
UNIT=AFF support had not been provided.

2.3.2 How does DFSMS Release 10 solve the problem?


In DFSMS Release 10, the system sets the &UNIT variable according to the
characteristics of the referenced DD. The possible values are as follows:
• “AFF=SMSD”
The DD references another DD through the UNIT=AFF keyword. The referenced
DD is a system-managed data set; therefore, the data set is on a
system-managed DASD volume.
• “AFF=SMST”
The DD references another DD through the UNIT=AFF keyword. The referenced
DD is a tape data set on a system-managed tape volume; therefore, the data
set resides in a system-managed tape library.
• “AFF=NSMS”
The DD references another DD through the UNIT=AFF keyword. The referenced
DD is neither a system-managed (DASD) data set nor a tape data set in a
system-managed library.

This enables the ACS routines to have an accurate view of the allocation.
Here is a simple example of how this could be used:


:
:
IF &UNIT = 'ATLTAPE' | &UNIT = 'AFF=SMST' THEN DO
  SET &STORGRP = 'ATLSG'
  EXIT
END

This would make sure that a unit affinity to an SMS tape would be assigned to
storage group ATLSG, which would include the tape library.



2.3.3 Considerations
In this section, we describe some considerations on using UNIT=AFF ACS
support for tape libraries.

2.3.3.1 Considerations on ACS routines


Below, we describe considerations on ACS routines.

Review your current ACS routines


If you migrate your system to OS/390 Version 2 Release 10, you need to
review your existing ACS routines. If your ACS routines contain logic like
the following, you MUST change it:
IF &UNIT = 'AFF=' THEN
  SET...
  ...
END

As we have stated, DFSMS Release 10 no longer sets &UNIT to “AFF=”; instead,
it sets the more detailed values described above.

Test &ANYVOL/&ALLVOL first


You might have some JCL that makes a NEW-to-OLD reference by using the
VOL=REF parameter. For example:
//DD1 DD DSN=A,UNIT=CART,VOL=SER=ABC,DISP=OLD,...
//DD2 DD DSN=B,UNIT=AFF=DD1,VOL=REF=*.DD1,DISP=NEW,...

If the ACS routines get control for JCL like this example, &UNIT will have a
null value. Therefore, if you test the &UNIT variable first and then make
further decisions, you will not get the result you expect. To avoid this
situation, test &ALLVOL/&ANYVOL first to see whether a DD involves VOL=REF
processing, and then test the other ACS variables.
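
For illustration, here is our own sketch of such a test; the 'REF='
comparison values are the ones documented for &ALLVOL/&ANYVOL, but verify
them against your level of the Storage Administration Reference:

IF &ALLVOL = 'REF=ST' THEN DO  /* VOL=REF TO A DATA SET ON SMS TAPE */
  SET &STORGRP = 'ATLSG'
  EXIT
END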

You also need to consider the value “STK=.” for data set stacking.
Since DFSMS/MVS V1.3.0, &UNIT can have the value “STK=.” if the system
detects a data set stacking condition. For this reason, the system may
invoke the ACS routines up to three times. If a DD has UNIT=AFF, the system
sets the new &UNIT value introduced in DFSMS Release 10 on the first call,
and then sets &UNIT to “STK=.” on subsequent calls for data set stacking
conditions. Refer to the manual, OS/390 DFSMSdfp Storage Administration
Reference, SC26-7331, for more information about data set stacking.

2.3.3.2 Coexistence with supported MVS releases


UNIT=AFF ACS support is not available on earlier DFSMS/MVS releases.
Therefore, you cannot implement the new &UNIT values in your ACS routines
until you have migrated all of your down-level systems to OS/390 Version 2
Release 10.

2.4 DADSM rename of duplicate data sets


In this section, we describe the new DFSMSdfp capability which allows you to
rename non-system-managed and non-VSAM data sets that are considered
to be “in use”.

2.4.1 Background of this enhancement


There can be times when it is necessary to rename a data set that is
enqueued under the SYSDSN major name in a GRSplex. This can occur
when building systems, and a data set of the same name, but on a different
volume, is “in use” by a long-running function. For example, when you clone a
system, you could use DFSMSdss FULL COPY to make a full copy of a
production system’s system residence volume. After you have copied the
volume, you might need to rename the data set names so that you can
identify each system’s system data sets names easily, such as
SYS2.LINKLIB for the second system and SYS3.LINKLIB for the third
system.

However, you cannot rename a data set that is allocated by system components
or by a long-running procedure. Load module libraries registered in the
LNKLST are a good example: the system address spaces LLA and XCFAS have had
them allocated since the system was IPLed, so these data sets are ENQ’d.
Prior to DFSMS Release 10, the system did not allow you to rename a data set
that has the same name as a data set “in use”.

The system considers a data set to be “in use” if the data set name is
ENQ’d. Since the ENQ resources do not contain any volume serial number
information, the system cannot determine which data set is actually “in use”
and ENQ’d when multiple data sets have identical names. To be on the safe
side, the system has to reject such requests, to avoid the severe errors
that could be caused by renaming a data set that is actually “in use”.

For this reason, you cannot rename SYS1.LINKLIB on the copied volume, even
though you know that neither LLA nor XCFAS uses the copied SYS1.LINKLIB. The
only solution was to stop the LLA address space and tell XCFAS to unallocate
the LNKLST data sets through the MODIFY XCFAS,UNALLOCATE LNKLST command, but
you might not want to perform these operations, as they would degrade module
fetch performance on the production system.



2.4.2 How does DFSMS Release 10 solve the problem?
Now, with DFSMS Release 10, you can rename a data set that has the same name
as an “in-use” data set, through an ISPF panel or a programming interface.
To simplify the discussion that follows, we refer to such a data set as a
duplicate data set.

2.4.3 How to rename a duplicate data set


There are two ways to rename a duplicate data set:
• Using ISPF
• Using an application programming interface

In this section, we describe the common requirements for both methods of
performing this function; then we describe how to use each method.

2.4.3.1 Common requirements


In order to rename a duplicate data set, you must meet all of the following
requirements:
• The data set must not be system-managed.
• New RACF profile:
A new RACF profile, STGADMIN.DPDSRN.olddsname, must be added to the
FACILITY class, and you need at least READ authority to this profile. The
profile can hold up to 23 characters for olddsname, the name of the data set
you want to rename. You may consider using a generic name such as
STGADMIN.DPDSRN.SYS1.*.
• RACF DATASET profile for the target data set name:
Renaming a data set is considered to be creating a new data set, so you need
ALTER authority to the DATASET profile under which the new data set name
falls.

These requirements are common to both methods.
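
As an illustration, the FACILITY class profile could be defined with RACF
commands like the following sketch; the generic profile name and user ID are
placeholders:

RDEFINE FACILITY STGADMIN.DPDSRN.SYS1.* UACC(NONE)
PERMIT STGADMIN.DPDSRN.SYS1.* CLASS(FACILITY) ID(STGADM1) ACCESS(READ)
SETROPTS RACLIST(FACILITY) REFRESH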

2.4.3.2 Using ISPF for renaming


Use the following steps to rename a duplicate data set through ISPF. Assume
that the TSO user performing the operation already meets the above
requirements.



1. Invoke the Data Set Utility (Option 3.2) and the following screen will
appear:

Menu RefList Utilities Help

Data Set Utility


Option ===>

A Allocate new data set C Catalog data set


R Rename entire data set U Uncatalog data set
D Delete entire data set S Short data set information
blank Data set information V VSAM Utilities

ISPF Library:
Project . .
Group . . .
Type . . . .

Other Partitioned, Sequential or VSAM Data Set:


Data Set Name . . .
Volume Serial . . . (If not cataloged, required for option "C")

Data Set Password . . (If password protected)

2. Type “R” in the option field, specify the data set name you want to rename
and the volume serial number where it resides, and then press the Enter
key.

Menu RefList Utilities Help

Data Set Utility


Option ===>

A Allocate new data set C Catalog data set


R Rename entire data set U Uncatalog data set
D Delete entire data set S Short data set information
blank Data set information V VSAM Utilities

ISPF Library:
Project . .
Group . . .
Type . . . .

Other Partitioned, Sequential or VSAM Data Set:


Data Set Name . . . SYS1.LINKLIB
Volume Serial . . . ITSOR1 (If not cataloged, required for option "C")

Data Set Password . . (If password protected)



3. If a data set is cataloged, the following warning message will appear:

Menu RefList Utilities Help

Rename Data Set


Command ===>

Data Set Name . : SYS1.LINKLIB


Volume Serial . : O10RA1

Enter new name below:

ISPF Library:
Project . .
Group . . .
Type . . . .

Other Partitioned or Sequential Data Set:


Data Set Name . .

Enter "/" to select option


Catalog the new data set name

You have specified a volume serial for the data set you want renamed. The
data set is also cataloged on that volume. In addition to renaming the data
set, you should select the "Catalog the new data set name" selection field
if you want the data set cataloged.

Press the Enter key after you have confirmed.



4. Type the desired name and press the Enter key:

Menu RefList Utilities Help

Rename Data Set


Command ===>

Data Set Name . : SYS1.LINKLIB


Volume Serial . : O10RA1

Enter new name below:

ISPF Library:
Project . .
Group . . .
Type . . . .

Other Partitioned or Sequential Data Set:


Data Set Name . .

Enter "/" to select option


Catalog the new data set name

5. You will get the following error message:

IEC614I RENAME FAILED - RC 008, DIAGNOSTIC INFORMATION IS (040B0446) ,


IKJACCT,ITSOR1,SYS1.LINKLIB
***

Press the Enter key.



6. The following screen will appear:

Menu RefList Utilities Help

Rename Data Set In Use


Command ===>

Data Set Name . : SYS1.LINKLIB


Volume . . . . : ITSOR1

The system detected that a data set with the above name is in use
(possibly on another system) but it cannot determine whether it is the
data set you wish to rename. If it is the same data set and any program
has it open, renaming it could cause serious system and data integrity
problems.

You have the extra security authority to rename the data set even though
its name is in use. Refer to the DFSMS documentation on the RENAME macro
for further information.

Instructions:
Press ENTER to override data set name protection and rename the data
set.
Enter CANCEL or EXIT to cancel the rename request.

The ISPF rename operation tries to rename the data set as usual, but it gets
an error from the system stating that it could not rename the data set,
because the data set name is ENQ’d. The system also tells ISPF that the
requester has STGADMIN.DPDSRN authority, so ISPF asks the user whether it
should try the rename again.
If you press Enter here, ISPF issues the rename request to the system again,
but this time it tells the system “You can rename it, even though the name
is ENQ’d”. When the system gets the request, it checks the RACF profile
again to verify that the requester is authorized. After the system has
verified the authorization, it renames the data set.

2.4.3.3 Using an application programming interface for renaming


To rename a duplicate data set from a program, you use the RENAME macro, the
same programming interface as before. However, in order to rename a
duplicate data set, you must set a specific bit in the CAMLST macro
expansion. Figure 33 shows a sample program that renames SYS1.LINKLIB on the
volume ITSO01 to SYS2.LINKLIB.



MAIN CSECT
MAIN AMODE 31
MAIN RMODE ANY
USING *,15
BAKR 14,0
BAS 13,ENTER
SAVE DS 18F
DROP 15
USING SAVE,13
ENTER DS 0H
OI PLIST+2,X'10' SET 'RENAME DUPLICATE (IN-USE) NAME' BIT
RENAME PLIST ISSUE THE RENAME REQUEST
LTR 15,15 CHECK THE RETURN CODE
BNZ ERROR NONZERO RETURN CODE: RENAME FAILED
B EOJ
ERROR LR 11,0
ABEND 111,DUMP
EOJ PR
PLIST CAMLST RENAME,OLD_NAME,NEW_NAME,VOLLIST
OLD_NAME DC CL44'SYS1.LINKLIB'
NEW_NAME DC CL44'SYS2.LINKLIB'
VOLLIST DC H'1' NUMBER OF VOLUMES
DC X'3030200F' UCB DEVICE TYPE (3390 DASD)
DC CL6'ITSO01' VOLUME SERIAL NUMBER
DC H'0' DATA SET SEQUENCE NUMBER
END

Figure 33. An example of RENAME macro

Remember that the user assigned to the program must have the required
authority, as we explained in 2.4.3.1, “Common requirements” on page 65.
Otherwise, the program will get an error code. In the case of this sample
program, it will issue a user abend with completion code 111.

2.4.4 Considerations on renaming data sets


The system does not check whether the data set being renamed is actually
“in use” or not. If you have the authority and tell the system to rename the
data set, the system will perform the function unconditionally. Therefore, you
need to be fully aware of how powerful and dangerous this function is. You
must avoid renaming a data set which is actually open, as this might cause
various errors.

2.5 High speed tape positioning


In this section, we describe the high speed tape positioning enhancement,
which improves the performance of multi-file processing.



2.5.1 Background of this enhancement
The IBM 3590 models B and E are capable of recording between 10 GB and
120 GB, depending on the model type, tape length, and degree of compression.
To exploit this capacity, it is likely that there will be multiple data sets
on a tape volume. In such cases, the traditional method of positioning to
the start of any data set other than the first can take a significant amount
of time. We now describe how the system positions to the tape data set you
request.
2.5.1.1 Tape data sets


Figure 34 shows how tape data sets are recorded in IBM standard label
format.

IBM standard label format


VOL1 HDR1 HDR2 TM DATA ... TM EOF1 EOF2 TM HDR1 HDR2 TM DATA ...

Figure 34. Data set separation on an SL tape

An SL tape has a volume label (VOL1) to identify the volume, and each data
set has a group of header labels (HDR1, HDR2) and user records followed by a
group of trailer labels (EOF1, EOF2). Header and trailer labels contain the
information required to identify the data set and its characteristics. As
you can see in the figure, a tape mark (TM) separates label groups and user
records. A TM is a special record for a tape device; the system uses it as a
delimiter of data blocks.

2.5.1.2 Current positioning method


When an application needs to position to a data set that is in the middle of
a tape volume, the system needs to validate not only the data set name in
the HDR1 label, but also the label sequence. This must be done so that the
system can be sure it has reached the correct location, because non-labeled
data could have been written right after the trailer labels by using the
bypass label processing (BLP) option.

To achieve this, the system uses the forward space file (FSF) channel
command. FSF tells the tape device to position to the next tape mark from
the current position. As we described earlier, tape marks separate the label
groups from the data, so if the system issues an FSF command when the tape
is positioned at the beginning of the first user record, the tape device
will position to the beginning of the trailer label group.



For example, assume a program opens the fifth data set. The following
operations take place:
1. Read the VOL1 label to make sure that this is the correct tape.
2. Read the HDR1 label for the first data set.
3. Issue FSF; the tape stops at the TM at the end of the first data set.
4. Read the EOF1 label for the first data set.
5. Read the HDR1 label for the second data set.
6. Repeat Step 3 to Step 5 until the EOF1 label for the fourth data set has
been read.
7. Read the HDR1 label for the fifth data set.

The data set is now ready to be processed.

This technique was adequate for the early tape units, with limited capacity,
and where the physical recording of the data matched the logical sequence.
That is, the second data set would always be further along the tape than the
first, and so on.

2.5.1.3 The IBM 3590 and tape positioning


With the IBM 3590, there is a significant difference between the physical and
the logical view of the data. Physically, the data is written until the end of the
tape is reached; then the motion is reversed, and the next data block is
written with the tape moving back towards its start point. The 3590 model B
does this four times in each direction, and the model E does it eight times in
each direction. This gives a layout which has similarities to a DASD cylinder,
with eight or sixteen tracks. For the purposes of this explanation, we will refer
to the data as being written on a track. Figure 35 shows an illustration of a
tape with five data sets on it.



Logical view

Data Set 1 Data Set 2 Data Set 3 Data Set 4 Data Set 5

Physical view

Track 1 Data Set 1 Data Set 2

Track 2 Data Set 3 Data Set 2

Track 3 Data Set 3 Data Set 4

Track ... Data Set 5

Figure 35. Logical and physical views of tape data sets in a volume

To position to the fifth data set from the beginning of the tape, using FSF, the
tape would change direction four times. This is clearly not very efficient, as
this means that the tape heads have passed the start of the fifth data set
three times. Also, the tape needs to be stopped at each TM in order to verify
label groups, and then be restarted to find the next TM. This start/stop
operation is significant, because the tape has a maximum speed of
approximately 18 kilometers per hour (11 MPH), and this takes some time to
reach, and more time to stop.

2.5.1.4 High speed search processing


Since the introduction of the IBM 3480 with the IDRC feature, IBM tape units
have written data to the cartridges in a standard block size that is not
related to the block size observed by the host system. For example, the IBM
3490 writes 128 KB blocks, and the IBM 3590 writes 384 KB blocks.

Each block has an identification associated with it, which you can obtain
from the subsystem by using the Read Block ID channel command or, from BSAM,
by using the NOTE TYPE=ABS macro. The Block ID can be used with the Locate
Block ID CCW to position the tape directly to a specific point, rather than
positioning sequentially. This function is referred to as high speed search.



A high speed search interface has been available for programs that use OPEN
TYPE=J to connect to tape data sets. You can store a Block ID in a certain
field of the JFCB copy and tell the system to do a high speed search (that
is, to issue a Locate Block ID channel command). DFSMShsm is a good example
of using this function to position tape volumes.

The contents of the Block ID enable the 3590 to identify which track and
approximately how far along that track the block will be. The length of a 3590
extended cartridge is 600 meters; positioning to a data set with Locate Block
will always have a tape movement of less than this amount. Using FSF, the
tape could move up to 9,600 meters, on a Model E. As FSF tape movement is
5 meters per second, the saving of elapsed time could be considerable.

In the example in Figure 35 on page 73, if Locate Block ID had been used,
the processing would have been as follows:
1. Read the VOL1 to make sure that this is the correct tape.
2. Issue the Locate Block CCW.
3. Read the EOF for data set 4.
4. Read the HDR1 for data set 5.

2.5.2 How does DFSMSdfp and DFSMSrmm solve the problem?


Because DFSMSrmm controls and records the placement of data sets on
tape volumes, it is possible to move directly to the start of a data set without
reading the labels of the other data sets on the volume.

Using an interface with OPEN/CLOSE/EOV processing, DFSMSrmm obtains the
Block IDs of the start and end of a data set and the Block ID of the end of
the volume. This information is recorded in the DFSMSrmm control data set.

The information supplied to OPEN, to position the volume, will depend on the
disposition of the data set:
• For a new data set: Provide the Block ID of the end of the volume.
• For a data set that is to be extended (DISP=MOD): Provide the last Block
ID of the data set.
• For a data set that is to be read: Provide the first Block ID of the data set.

Except for the first data set on a volume, Locate Block ID is always faster
than FSF, and the gain is more significant when there are a large number of
data sets on the volume.



2.5.2.1 Damaged tapes
Another difference between FSF and Locate Block is that the Locate Block
does not have scan all of the tape surface in order to reach the required data
set. Therefore, if a tape is damaged, it might be possible to recover data sets
beyond the point of damage, using high speed search, which would not be
possible with FSF.

2.5.3 How to use this function


All you need is OS/390 Version 2 Release 10 and DFSMSrmm. The system and
DFSMSrmm automatically record and retrieve Block IDs and use them for high
speed data set positioning whenever possible.

Alternatively, you can build your own tape management system that interacts
with the system’s OPEN/CLOSE/EOV interface. Refer to the manual, OS/390
DFSMS Installation Exits, SC26-7392.

2.5.4 Worked examples


We performed some basic testing using a 3590 Model B. For the first test, we
wrote 21 data sets on a cartridge. The size of the data sets was such that the
last one was near to the end of the tape. We made a comparison when
DFSMSrmm was active and when DFSMSrmm was not active.

For the second test, we wrote four data sets on a cartridge. These were
larger data sets than in the first test, and the last one was written on the
second track. We made a comparison when DFSMSrmm was active and
when DFSMSrmm was not active. Figure 36 shows the test results.



[Bar chart: data set positioning time in seconds (scale 0 to 120) for four
cases: no RMM with 21 data sets, RMM with 21 data sets, no RMM with 4 data
sets, and RMM with 4 data sets.]

Figure 36. Data set positioning time comparison

In the first pair of tests, the tape heads had to move the same distance
along the tape. There is a difference in elapsed time of 59 seconds, which
is approximately 3 seconds per data set.

In the second pair of tests, the tape heads had less distance to move using
Locate Block. The difference in elapsed time was approximately 38 seconds.

The conclusion is that there is a substantial saving in elapsed time when
there are a large number of small data sets on a volume, and when data sets
are written on tracks other than the first.

2.6 Enhanced catalog sharing availability enhancement


In this section, we describe the enhancement that has been made to the
enhanced catalog sharing (ECS) feature.

2.6.1 Background of this enhancement


ECS, which was introduced with DFSMS/MVS V1.5.0, is a method of sharing
catalogs in a sysplex. It gives almost equivalent performance for both
shared and non-shared catalogs, while maintaining the integrity of the
shared catalogs. The redbook Enhanced Catalog Sharing and Management,
SG24-5594, gives a detailed description of this facility.


In the original implementation of ECS, only one Coupling Facility (CF)
structure was supported; if this structure failed for any reason, catalog
sharing reverted to VVDS sharing mode.
2.6.1.1 Structure rebuilds


Although CF structure failures are extremely rare, there can be occasions,
such as a power failure, when a structure is lost. In DFSMS/MVS V1.5.0, a
structure failure meant that catalog management reverted to the DASD-based
VVDS sharing mode. This had performance implications, and the installation
had to take manual action to restore the ECS capability.

2.6.2 How does DFSMSdfp Release 10 improve this function?


In DFSMSdfp Release 10, ECS performs a user-managed rebuild of the
structure in the following circumstances:
• When ECS detects a loss of CF connectivity or a CF structure failure
• When the operator issues the SETXCF command:
SETXCF START,REBUILD,STRNAME=SYSIGGCAS_ECS

When the structure has been rebuilt successfully, catalog management
restores the status that was in effect before the rebuild, so no operator
intervention is required to activate ECS mode again.

For this to be successful, the normal criteria for CF placement and free space
availability should have been followed. This will ensure that there will be
sufficient space available in another CF for the structure to be allocated.

In the unlikely event of the rebuild failing, catalog sharing will revert to VVDS
mode until a structure is made available.

2.6.3 Considerations
In this section, we describe considerations on the ECS enhancement.

2.6.3.1 Coexistence with supported DFSMS/MVS releases


ECS has been available since DFSMS/MVS V1.5.0. If any of your systems
sharing a catalog are prior to DFSMS/MVS V1.5.0, you cannot use ECS.

If a sysplex has a mixed environment, that is, a mix of DFSMS Release 10 and
DFSMS/MVS V1.5.0 systems, DFSMSdfp Release 10 recognizes that the earlier
systems cannot take part in the rebuild. In this case, VVDS sharing mode is
used, and manual intervention is required to re-establish ECS mode. No
coexistence PTFs are required for this enhancement.



Chapter 3. DFSMShsm enhancements

In this chapter, we describe the following DFSMShsm enhancements made in
DFSMS Release 10:
• Multiple DFSMShsm hosts
• Fast subsequent migration
• Data set backup enhancements
• ABARS support for large tape block sizes

3.1 Multiple DFSMShsm hosts


In this section, we describe the new DFSMShsm capability which allows you
to run multiple DFSMShsm address spaces in an OS/390 system.

3.1.1 Background of this enhancement


DFSMShsm is a program that helps you, as a storage administrator, manage
your storage by backing up critical data sets, migrating aged data sets to
inexpensive storage devices, dumping volumes, and so on. By its nature, such
a data management program has to handle a great deal of data, and while
DFSMShsm is handling data, system components have to serialize internal
resources during some phases.

In particular, the SYSZTIOT ENQ resource has been seen to be a bottleneck
for DFSMShsm performance. SYSZTIOT is an ENQ resource that is widely used by
system components when handling VTOCs, allocating devices, and so on; many
data manipulations require SYSZTIOT on behalf of their processes.

Because the SYSZTIOT ENQ is held at the address space level, running
multiple DFSMShsm address spaces lets DFSMShsm process more data. Before
DFSMShsm Release 10, you could not run multiple DFSMShsm address spaces
within one OS/390 system; therefore, you needed to configure an HSMplex:
each OS/390 system had one DFSMShsm address space, and all of the DFSMShsm
address spaces shared the same control data sets (CDSs), journal, and
storage pools.

As shown in Figure 37, multiple OS/390 images were needed to run multiple
DFSMShsm before Release 10.



[Diagram: with DFSMS/MVS V1.5.0, each OS/390 V2R9 image could run only one
DFSMShsm address space; running multiple DFSMShsm V1.5.0 address spaces
required multiple OS/390 V2R9 images (A and B), each with its own DFSMShsm,
sharing the DFSMShsm control data sets (MCDS, BCDS, OCDS) and journal.]

Figure 37. Multiple OS/390 images to run multiple DFSMShsm before Release 10

Another inconvenience could occur if a tape device failed while data sets
were being recalled. DFSMShsm may sometimes hang because of a tape device
failure, and you may need to cancel the DFSMShsm address space while other
DFSMShsm tasks, which have nothing to do with the recall tasks, are still
working.

3.1.2 How does DFSMShsm Release 10 solve the problem?


DFSMShsm Release 10 allows you to bring up multiple DFSMShsm hosts in a
single OS/390 system. An HSMplex is now understood as a combination of
multiple DFSMShsm hosts across multiple OS/390 systems.

DFSMShsm has a new startup parameter:

HOSTMODE={MAIN|AUX}

You can have only one MAIN host per OS/390 system, and you can have
multiple AUX hosts per OS/390 system. When you configure an HSMplex
across several OS/390 systems, you can have multiple MAIN hosts and AUX
hosts, as long as you have a MAIN host per OS/390 and the total number of
DFSMShsm hosts does not exceed 39.



Note: When you bring up a DFSMShsm host, you need to specify a host ID that
is unique within the HSMplex. You can use any of the characters 0 to 9, A to
Z, @, #, and $ as a host ID. Since the host ID must be a single character,
the maximum number of DFSMShsm hosts that can coexist is 39 (10 digits, 26
letters, and 3 national characters).

Figure 38 shows an example of an HSMplex that consists of multiple DFSMShsm
address spaces across two OS/390 systems.

[Diagram: two OS/390 V2R10 systems (A and B), each running a DFSMShsm MAIN
host plus AUX hosts 1 through n; all hosts share a single set of DFSMShsm
control data sets (MCDS, BCDS, OCDS) and the journal.]

Figure 38. HSMplex: multiple DFSMShsm address spaces across two OS/390s

3.1.3 Considerations
In this section, we describe several considerations on using multiple
DFSMShsm hosts.

3.1.3.1 Specifying CDSSHR, CDSQ, and CDSR parameters


These parameters specify how you want DFSMShsm to serialize its control data
sets across an HSMplex. In order to bring up multiple DFSMShsm address
spaces, you need to specify these startup parameters in one of the following
ways:



• All MAIN and AUX hosts have CDSSHR=RLS.
This specifies that you want DFSMShsm to use VSAM record level sharing
(RLS) as a serialization technique. You need to configure a VSAM RLS
environment correctly in order to have DFSMShsm use VSAM RLS for its
control data sets. Minimally, you need to satisfy all of the following
requirements:
- You have a Coupling Facility.
- You have GRS or an equivalent product. However, if you do not need to
configure an HSMplex across multiple systems, then you do not have
to configure GRS.
- All DFSMShsm hosts in an HSMplex are in the same GRS
configuration.
• All MAIN and AUX hosts have both CDSSHR=YES and CDSQ=YES.
These parameters specify that DFSMShsm hosts share control data sets,
and they use global ENQs as a serialization technique. You need to have
GRS or an equivalent product, and all DFSMShsm hosts in an HSMplex
must be in the same GRS configuration. However, if you plan to use
DFSMShsm under a single OS/390, then GRS or an equivalent product is
not required.

3.1.3.2 Differences between MAIN host and AUX hosts


Table 8 shows the functional differences between the MAIN and AUX hosts.
Table 8. MAIN and AUX host differences

                                            MAIN host           AUX host
Maximum number of hosts in an OS/390        1                   39 (38 if a MAIN host is up)*
Which JES is supported?                     Both JES2 and JES3  JES2 only
Can TSO DFSMShsm commands be processed?     YES                 NO
Can implicit recall be processed?           YES                 NO
Can DFSMShsm commands through MODIFY/STOP   YES                 YES
be processed?
Can it be a primary host?                   YES                 YES
Can it perform ABARS commands?              YES                 NO

* This number applies only if there are no other DFSMShsm hosts on other
OS/390 systems making up the HSMplex.



A primary host is a DFSMShsm host which can perform the following HSM
functions:
• Backing up control data sets as the first phase of automatic backup
• Backing up data sets that have migrated before being backed up
• Moving backup versions of data sets from migration level 1 (ML1) volumes
to backup volumes
• Deleting expired dump copies automatically
• Deleting excess dump VTOC copy data sets

You can have a DFSMShsm host perform these level functions by specifying
HOST='nY', where n is the host ID and the second character is Y, or by
specifying PRIMARY=YES, which is a new parameter in DFSMShsm Release 10.

If you plan to have multiple DFSMShsm hosts in an OS/390 system, we
recommend that you define an AUX host as a primary host so that you can
off-load such workloads from the MAIN host.

3.1.3.3 Setting up multiple DFSMShsm hosts in an OS/390


Here, we describe some hints on setting up an environment with multiple
DFSMShsm hosts.

HSM and PDA logs cannot be shared among DFSMShsm hosts


Each DFSMShsm host needs its own set of HSM and/or PDA logs, as these
data sets cannot be shared among DFSMShsm hosts.

DFSMShsm startup procedure
You can prepare a startup procedure for each DFSMShsm host, or just
prepare one member and use different parameters when starting
DFSMShsm. The following is an example of a DFSMShsm startup procedure:
//DFSMSHSM PROC CMD=00, USE PARMLIB MEMBER ARCCMD00 FOR CMDS
// STR=00, PARMLIB MEMBER FOR STARTUP PARMS
// EMERG=NO, SETS HSM INTO NON-EMERGENCY MODE
// SIZE=0M, REGION SIZE FOR DFSMSHSM
// DDD=50, MAX DYNAMICALLY ALLOCATED DATASETS
// HOST=1, PROC.UNIT ID AND LEVEL FUNCTIONS
// PRIMARY=NO, LEVEL FUNCTIONS
// HOSTMODE=MAIN HOSTMODE
//******************************************************************
//DFSMSHSM EXEC PGM=ARCCTL,DYNAMNBR=&DDD,REGION=&SIZE,TIME=1440,
// PARM=('EMERG=&EMERG','CMD=&CMD',
// 'UID=HSM','HOST=&HOST','STR=&STR',
// 'PRIMARY=&PRIMARY','HOSTMODE=&HOSTMODE')
//*****************************************************************/
//HSMPARM DD DSN=SYS1.PARMLIB,DISP=SHR
//MSYSOUT DD SYSOUT=A
//MSYSIN DD DUMMY
//SYSPRINT DD SYSOUT=A,FREE=CLOSE
//SYSUDUMP DD SYSOUT=A



//*
//*
//MIGCAT DD DSN=HSM.MCDS,DISP=SHR
//JOURNAL DD DSN=HSM.JRNL,DISP=SHR
//ARCLOGX DD DSN=HSM.LOGX&HOST,DISP=OLD
//ARCLOGY DD DSN=HSM.LOGY&HOST,DISP=OLD
//ARCPDOX DD DSN=HSM.PDOX&HOST,DISP=OLD
//ARCPDOY DD DSN=HSM.PDOY&HOST,DISP=OLD
//*

This procedure assumes that the respective host ID is appended to each of
the HSM/PDA log data set names, and that these data sets have already been
allocated.

For example, if you would like to use this procedure to start a DFSMShsm
host as a MAIN host with no primary function, and another host as an AUX
host with primary function, you issue the following two startup commands:
S DFSMSHSM.HSM1,HOST=1,PRIMARY=NO,HOSTMODE=MAIN
S DFSMSHSM.HSM2,HOST=2,PRIMARY=YES,HOSTMODE=AUX

HSM1 uses the HSM.LOGX1, HSM.LOGY1, HSM.PDOX1, and HSM.PDOY1
data sets, and HSM2 uses HSM.LOGX2, HSM.LOGY2, HSM.PDOX2, and
HSM.PDOY2 for its own use. They share the same control data sets
HSM.MCDS, HSM.BCDS, HSM.OCDS, and HSM.JRNL (see Figure 39).

Figure 39. HSM1 as MAIN host and HSM2 as AUX host



Sharing ARCCMDxx among DFSMShsm hosts
AUX hosts cannot process the following commands:
• SETSYS CSALIMITS
• SETSYS ABARS...
• HOLD ABACKUP/ARECOVER
• RELEASE ABACKUP/ARECOVER

The AUX host will issue error messages and ignore these commands. If you
would like to put them into ARCCMDxx and you do not want to see error
messages regarding these commands, you need to use the ONLYIF
command so that these commands are directed to the MAIN host only.
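For example (a sketch only; we assume host ID 1 is the MAIN host, and the
CSALIMITS value is illustrative), a shared ARCCMDxx member could direct
such a command to the MAIN host as follows:

ONLYIF HSMHOST(1)
SETSYS CSALIMITS(MAXIMUM(100))

The ONLYIF HSMHOST(1) statement causes the command that immediately
follows it to be processed only by the host whose ID is 1.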

3.1.3.4 Coexistence with supported DFSMS/MVS releases


Intermixing DFSMShsm Release 10 and pre-Release 10 hosts in an HSMplex
is supported (see Figure 40).

Figure 40. Intermixing Release 10 and pre-Release 10 in an HSMplex

3.1.3.5 New monitor commands


The new HSM command QUERY IMAGE will show you active DFSMShsm
hosts within an OS/390 system. The following is an example of this command
output:



F HSM1,Q IMAGE
ARC0101I QUERY IMAGE COMMAND STARTING ON HOST=1
ARC0250I HOST PROCNAME JOBID ASID MODE
ARC0250I 1 HSM 29752 0046 MAIN
ARC0250I 2 HSM 29754 0043 AUX
ARC0101I QUERY IMAGE COMMAND COMPLETED ON HOST=1

3.1.3.6 Determining from where DFSMShsm exits should get control


If you have DFSMShsm installation exits that need to determine from which
DFSMShsm address space they get control, or that need to refer to variables
in the in-storage DFSMShsm control blocks, then you need to identify the
respective MCVT correctly. The sample program ARCTPEXT supplied in
SAMPLIB should help you understand how to obtain the procedure name and
the MCVT for the desired DFSMShsm address space. For more information,
refer to the manual, OS/390 DFSMShsm Diagnosis Reference Guide,
LY35-0112.

3.1.4 Worked examples


In this section, we introduce some worked examples regarding multiple
DFSMShsm address spaces.

3.1.4.1 Multiple DFSMShsm address spaces and WLM


Since we can now have multiple DFSMShsm address spaces in an OS/390
system, you might want to give them different priorities. For example, when
you want DFSMShsm AUX hosts dedicated to backup functions, running
these hosts at a lower priority than the MAIN host would make sense, as the
MAIN host could get recall requests which should be processed as soon as
possible.

You can achieve this by using either workload manager (WLM) compatible
mode or goal mode. Note that the DPRTY parameter on the EXEC statement
no longer works with OS/390. You can still code it, but the system simply
ignores it without issuing a message. Also note that IBM intends to
discontinue the support for WLM compatible mode. Therefore, we
recommend that you migrate your system policies to WLM goal mode, if your
installation has not already implemented it.

For this reason, our worked example is based on WLM goal mode, and we
describe how to set it up. Therefore, your installation should also be in goal
mode, in order to follow our worked example here.



The overview of WLM goal mode
WLM tunes the system workload based on the service requirements you
define. The place where you define your service requirements is called a
policy. Within the policy, you define service classes, each of which describes
how an entity needs to be managed from the point of view of system
resources. The service classes are assigned to the units of work running on
the system, and WLM tunes the system workload so that it can meet the
service requirement defined for each unit of work (UOW).

We recommend that you refer to the manual, OS/390 MVS Planning:
Workload Management, GC28-1761.

Preparing a startup procedure for each host


We brought up two DFSMShsm hosts and prepared a separate startup
procedure for each host, so that we could make the classification rule
simpler. The MAIN host used the startup procedure HSM1, and the AUX host
used HSM2.

The classification rule is used to define how you want to assign service
classes you defined to units of work. The simplest way is to use the jobname
or startup procedure name. For this reason, we separated the startup
procedures.

WLM has a default service class called SYSSTC, which is assigned to any
started task that does not match a classification rule you defined. SYSSTC
has the next highest priority after system components; started tasks in
SYSSTC run at the fixed dispatching priority (15,14). This class is suitable for
the MAIN host, as it needs to respond quickly to implicit recall requests from
other users, even from database management subsystems; therefore, we did
not define a specific service class for the MAIN host. The example here gives
a lower-priority service class to the HSM2 startup procedure, the AUX host.



Creating a service definition
When you invoke WLM from the ISPF main panel and press Enter, the first
panel you will see is the following screen:

File Help
--------------------------------------------------------------------------

Command ===> ______________________________________________________________

Choose Service Definition

Select one of the following options.


3 1. Read saved definition
2. Extract definition from WLM
couple data set
3. Create new definition

ENTER to continue

The service definition is where you define policies, service classes, and so
on. We selected option 3 to create a new definition.

Defining a policy
We defined the definition name as HSMTEST, as shown in Figure 41.

File Utilities Notes Options Help


--------------------------------------------------------------------------
Functionality LEVEL001 Definition Menu WLM Appl LEVEL011
Command ===> ______________________________________________________________

Definition data set . . : none

Definition name . . . . . HSMTEST (Required)


Description . . . . . . . DFSMShsm function test

Select one of the


following options. . . . . 1__ 1. Policies
2. Workloads
3. Resource Groups
4. Service Classes
5. Classification Groups
6. Classification Rules
7. Report Classes
8. Service Coefficients/Options
9. Application Environments
10. Scheduling Environments

Figure 41. The definition top panel

Once a service definition has been created or selected, all tasks regarding
service level definition are made in reference to this definition. The panel
shown in Figure 41 is the root menu for each task discussed below.

From this panel, we selected 1 to work with policies; a policy is where you
define your desired service requirements. Since this was a new service
definition, no policy had yet been defined in it, so we then selected 1 to
define a policy.

Service-Policy Notes Options Help


--------------------------------------------------------------------------
Create a Service Policy
Command ===> ______________________________________________________________

Enter or change the following information:

Service Policy Name . . . . . HSMPOLCY (Required)


Description . . . . . . . . . Policy designed for DFSMShsm

The policy is just a name which represents the total set of service
requirements. Therefore, at this phase, you can name it whatever you want,
just as with the definition name. We named it HSMPOLCY.

Defining a workload
Then, we went back to the definition top panel (Figure 41 on page 88), and
chose 2 to define a workload.

Workload Notes Options Help


--------------------------------------------------------------------------
Create a Workload
Command ===> ______________________________________________________________

Enter or change the following information:

Workload Name . . . . . . . . HSMWORK (Required)


Description . . . . . . . . . ________________________________

A workload is similar to a policy: it represents a group of service level
requirements which is to be managed and monitored. Just as with policies,
you can name a workload whatever you want. We named it HSMWORK.

Defining a service class


After defining a workload, we went back to the definition top menu again
(Figure 41 on page 88) and chose 4 to define a service class.

Service-Class Notes Options Help


--------------------------------------------------------------------------
Create a Service Class Row 1 to 1 of 1
Command ===> ______________________________________________________________

Service Class Name . . . . . . HSMAUX (Required)


Description . . . . . . . . . Service class for hsm AUX hosts
Workload Name . . . . . . . . HSMWORK (name or ?)
Base Resource Group . . . . . ________ (name or ?)
Cpu Critical . . . . . . . . . NO (YES or NO)

Specify BASE GOAL information. Action Codes: I=Insert new period,


E=Edit period, D=Delete period.

---Period--- ---------------------Goal---------------------
Action # Duration Imp. Description
I_
******************************* Bottom of data ********************************



You define the service level in a service class. We named the new service
class HSMAUX, made it part of the HSMWORK workload, and typed I in the
Action field to define the service level for the HSM AUX host.

Service-Class Notes Options Help


- ----------------------------
Choose a goal type for period 1 ss Row 1 to 1 of 1
C _____________________________

S 3_ 1. Average response time ired)


D 2. Response time with percentile or hsm AUX hosts
W 3. Execution velocity or ?)
B 4. Discretionary or ?)
C or NO)

S I=Insert new period,


E

---Period--- ---------------------Goal---------------------
Action # Duration Imp. Description
I
******************************* Bottom of data ********************************

As you can see, there are several kinds of goals you can choose as a service
level. We chose 3 to define the service level based on CPU usage.

Service-Class Notes Options Help


- ----------------------------
Choose a goal type for period 1 ss Row 1 to 1 of 1
C _____________________________

S 3 1. Average response time ired)


D osts
W Execution velocity goal
B
C Enter an execution velocity for period 1

S Velocity . . . 40 (1-99) period,


E
Importance . . 1 (1=highest, 5=lowest)
Duration . . . _________ (1-999,999,999, or ----------
A none for last period)

* *********************

Then we specified 40 as the target execution velocity. The bigger the value
you specify, the higher the priority a unit of work will get. Since this is an AUX
host, it does not need to run at top priority.

Defining a classification rule


The classification rule is the place where you define your administration
policy about how you want to assign service classes to units of work in your
installation.



Subsystem-Type View Notes Options Help
--------------------------------------------------------------------------
Subsystem Type Selection List for Rules Row 1 to 14 of 14
Command ===> ______________________________________________________________

Action Codes: 1=Create, 2=Copy, 3=Modify, 4=Browse, 5=Print, 6=Delete,


/=Menu Bar

------Class-------
Action Type Description Service Report
__ ASCH Use Modify to enter YOUR rules
__ CB Use Modify to enter YOUR rules
__ CICS Use Modify to enter YOUR rules
__ DB2 Use Modify to enter YOUR rules
__ DDF Use Modify to enter YOUR rules
__ IMS Use Modify to enter YOUR rules
__ IWEB Use Modify to enter YOUR rules
__ JES Use Modify to enter YOUR rules
__ LSFM Use Modify to enter YOUR rules
__ MQ Use Modify to enter YOUR rules
__ OMVS Use Modify to enter YOUR rules
__ SOM Use Modify to enter YOUR rules
3_ STC Use Modify to enter YOUR rules
__ TSO Use Modify to enter YOUR rules
******************************* Bottom of data ********************************

We selected STC to define a rule for DFSMShsm, as a DFSMShsm host runs
as a started task, and entered 3 to modify the rule.

Subsystem-Type Xref Notes Options Help


--------------------------------------------------------------------------
Modify Rules for the Subsystem Type Row 1 to 1 of 1
Command ===> ____________________________________________ SCROLL ===> PAGE

Subsystem Type . : STC Fold qualifier names? Y (Y or N)


Description . . . Use Modify to enter YOUR rules

Action codes: A=After C=Copy M=Move I=Insert rule


B=Before D=Delete row R=Repeat IS=Insert Sub-rule
More ===>
--------Qualifier-------- -------Class--------
Action Type Name Start Service Report
DEFAULTS: ________ ________
____ 1 TN__ HSM2____ ___ HSMAUX__ ________
****************************** BOTTOM OF DATA ******************************

We entered TN and HSM2 as the qualifier, and HSMAUX as the service class.
TN stands for transaction name or jobname. Therefore, the rule we defined
meant: “Use the service class HSMAUX for the job HSM2”.

Since we did not define any other rules, the startup procedure HSM1 for the
MAIN host would get SYSSTC, which has a much higher priority than
HSMAUX.

Installing the definition and activating the policy


At this point, we had finished setting up the minimal policy set, and it was
ready to activate. We returned to the top definition menu (Figure 41 on
page 88) and moved the cursor to Utilities, located in the first row, and
pressed Enter.



File Utilities Notes Options Help
----- ----------------
Funct 1. Install definition Appl LEVEL011
Comma 2. Extract definition _________________
3. Activate service policy
Defin 4. Allocate couple data set
5. Allocate couple data set using CDS values
Defin
Description . . . . . . . DFSMShsm function test

Select one of the


following options. . . . . ___ 1. Policies
2. Workloads
3. Resource Groups
4. Service Classes
5. Classification Groups
6. Classification Rules
7. Report Classes
8. Service Coefficients/Options
9. Application Environments
10. Scheduling Environments
--------------------------------------------------------------------

First, we needed to install the service definition on the system, so we
selected 1 to install. If there is already a service definition, the panel will ask
you whether it is OK to overwrite it. After the definition had been installed, a
policy in the definition had to be activated. Using the same Utilities menu, we
selected 3 to activate the HSMPOLCY policy we had made.

File Utilities Notes Options Help


-
F Policy Selection List Row 1 to 1 of 1
C Command ===> ________________________________________________________

D The following is the current Service Definition installed on the WLM


couple data set.
D
D Name . . . . : HSMTEST

S Installed by : HGPARK from system SC63


f Installed on : 2000/09/11 at 18:07:35

Select the policy to be activated with "/"

Sel Name Description


/ HSMPOLCY Policy designed for DFSMShsm
************************** Bottom of data ***************************



SDSF output example
The following shows the screen images after activating the policy:

Display Filter View Print Options Help


-------------------------------------------------------------------------------
SDSF DA SC63 SC63 PAG 0 SIO 8 CPU 5/ 5 LINE 1-2 (2)
COMMAND INPUT ===> SCROLL ===> CSR
NP JOBNAME StepName ProcStep JobID Owner C Pos DP Real Paging SIO
HSM1 HSM1 DFSMSHSM STC02352 STC NS FE 3609 0.00 0.00
HSM2 HSM2 DFSMSHSM STC02467 STC NS F9 1331 0.00 0.00

Display Filter View Print Options Help


-------------------------------------------------------------------------------
SDSF DA SC63 SC63 PAG 0 SIO 8 CPU 5/ 5 LINE 1-2 (2)
COMMAND INPUT ===> SCROLL ===> CSR
NP JOBNAME U% Workload SrvClass SP ResGroup Server Quiesce ECPU-Time ECPU%
HSM1 5 SYSTEM SYSSTC 1 NO 274.83 0.00
HSM2 5 HSMWORK HSMAUX 1 NO 0.61 0.00

As you can see, HSM2 runs in the HSMAUX service class under the
HSMWORK workload, and it has a lower dispatching priority (DP F9) than
HSM1 (DP FE). HSM2's dispatching priority may vary, depending on the
system workload.

3.1.4.2 Performance measurements


We compared the performance of primary space management for two test
cases. We prepared 14 primary volumes (each containing 80 data sets of
7.2 MB each) and 14 ML1 volumes, and we tested the following two cases:
• Single DFSMShsm host
We brought up only one DFSMShsm host. It had 14 volume migration
tasks and performed primary space management.
• Dual DFSMShsm hosts
We brought up two DFSMShsm hosts in an OS/390 system. Each had
seven volume migration tasks, and they performed primary space
management concurrently.



Figure 42 shows the test results.


Figure 42. Multiple DFSMShsm performed better than single DFSMShsm

Both tests were performed under the same hardware and software
configurations. Please remember that the purpose of this figure is only to give
you a general idea of how this function can improve DFSMShsm
performance. We do not guarantee that you would get the same results as
shown in this figure, since many factors affect performance measurements,
such as I/O configuration, software configuration, and workload distribution.

3.2 Fast subsequent migration


In this section, we describe new enhancements made to the migration
function, referred to as fast subsequent migration.



3.2.1 Background of this enhancement
Migration, one of DFSMShsm’s most powerful functions, moves unreferenced
data sets to a lower level of the storage hierarchy. A typical data management
policy is to have DFSMShsm move unreferenced data sets, in compressed
format, to DASD volumes referred to as migration level 1 (ML1) volumes, and
keep them there for a certain period. If these data sets still have not been
referenced, they are then moved to tape volumes, which are referred to as
migration level 2 (ML2) volumes. Alternatively, data sets may be moved
directly to ML2 volumes (see Figure 43).

Figure 43. Data movement through migration function



Assume that we have a data set called ITSO.DSET, which has been migrated
to ML2 tape volume HSMC00. When the data set is being recalled,
DFSMShsm allocates space on primary DASD and restores the contents from
the tape migration copy. DFSMShsm marks the respective control record in
DFSMShsm control data sets as invalid, but never overwrites the tape
migration copy (see Figure 44).

Figure 44. DFSMShsm invalidates control records after recall



When the data set is being migrated to an ML2 tape volume again, prior to
DFSMShsm Release 10, DFSMShsm did not reuse the existing tape copy,
even if the data set had not been modified since it was originally migrated.
Rather, DFSMShsm created a new tape copy and updated the respective
control records (see Figure 45).

Figure 45. DFSMShsm creates new migration copy and updates control records

This means that the data already written on ML2 remains invalid, and the
space remains unused until the tape is processed by RECYCLE. It also
means that the migration process has to perform data movement, even
though a copy already exists that could be used instead.



3.2.2 How does DFSMShsm Release 10 improve this function?
DFSMShsm Release 10 reuses existing tape migration copies when it is
going to migrate data sets to ML2 tape volumes, whenever it is possible. We
call this process reconnection. If DFSMShsm can reconnect a data set to
the existing ML2 tape copy, it does not have to perform any data movement;
rather, it just updates the control records and the catalog record to indicate
that the data set has been migrated.

3.2.3 How to use this new function


In order to use this function, you need to issue the following SETSYS
commands to DFSMShsm:
• SETSYS USERDATASETSERIALIZATION
This parameter indicates that data sets will be protected by data set
enqueues. If you are sharing data sets between OS/390 images, you will
also need GRS or an equivalent product that propagates this protection to
other OS/390 images.
• SETSYS TAPEMIGRATION(RECONNECT(ALL|ML2DIRECTEDONLY))
This is a new SETSYS parameter for this function. By issuing this
command, you assert that only DFSMShsm modifies the change bit in the
format 1 DSCB, as this function depends on that bit. We explain below
how this works, so that you can understand the importance of this
command:
- ALL specifies that DFSMShsm tries to reconnect the ML2 tape migration
copy anyway, regardless of whether a data set is supposed to be
placed on ML1 DASDs or ML2 tape volumes.
- ML2DIRECTEDONLY specifies that DFSMShsm tries to reconnect only when
the data set is supposed to be placed on ML2 again.
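As a minimal ARCCMDxx sketch (choosing ALL here is our assumption;
specify ML2DIRECTEDONLY instead if you want reconnection only for data
sets directed back to ML2):

SETSYS USERDATASETSERIALIZATION
SETSYS TAPEMIGRATION(RECONNECT(ALL))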

3.2.3.1 How does fast subsequent migration work?


Even though you have specified these SETSYS commands, DFSMShsm may
not perform reconnection when it cannot be sure that the existing ML2 tape
copy is still identical to the data set. Let us explain how fast subsequent
migration works, so that you can understand all of the factors that affect this
function.

When a data set that has been migrated to ML2 tape is recalled, DFSMShsm
checks whether both SETSYS USERDATASETSERIALIZATION and SETSYS
TAPEMIGRATION(RECONNECT(ALL|ML2DIRECTEDONLY)) are in effect. If they are,
DFSMShsm sets a bit in the respective catalog record indicating that this
data set is a candidate for reconnection (see Figure 46).



Figure 46. DFSMShsm sets a bit on in the catalog record

When the data set has aged again and become eligible for migration,
DFSMShsm sees if the data set is a candidate for reconnection by checking
the bit. If it is eligible, DFSMShsm will see if all of the following are true:
• SETSYS USERDATASETSERIALIZATION takes effect.
• SETSYS TAPEMIGRATION(RECONNECT(ALL|ML2DIRECTEDONLY)) takes effect.
• When SETSYS TAPEMIGRATION(RECONNECT(ML2DIRECTEDONLY)) takes effect, and
the data set is being migrated through volume migration, the data set
should be migrated to ML2 tape directly.
For example, SETSYS TAPEMIGRATION(DIRECT) should take effect if the data
set is not system-managed, or it should have a management class with
Level 1 Days Non-Usage = 0 attribute if it is system-managed.
When the data set is being migrated through command data set migration,
DFSMShsm tries to reconnect, depending upon whether ALL or
ML2DIRECTEDONLY has been specified.
• Migration control records for the data set still exist.
• The creation date of the data set is the same as the creation date stored in
the migration control record.



• The data set has not been backed up since it was recalled.
• The change bit in format1 DSCB is off.
• The old ML2 migration copy exists and it does not span multiple volumes.
• The data set does not have any alternate indexes if it is a VSAM data set.
• The migration request is not for extent reduction, not through the MIGRATE
command with the CONVERT keyword, and not through the ARCHMIG
macro with the FORCEML1=YES parameter.
• If ARCMDEXT exists and makes a decision for the data set, it allows
DFSMShsm to reconnect.

If all of the above conditions are met, DFSMShsm will reconnect the data set.
That is, it will update the migration control records/catalogs so that it can use
the existing ML2 copy again. DFSMShsm does not have to make a new ML2
tape copy when it can reconnect (see Figure 47).

Figure 47. DFSMShsm does not make new ML2 tape copy if it can reconnect

If any of the above conditions are not met, DFSMShsm tries to migrate the
data set as usual. That is, it will make a new ML2 copy.



3.2.4 Considerations
In this section, we describe some considerations on using this function.

3.2.4.1 Changed bit in format 1 DSCB (DS1CHA)


In order to reconnect to the old tape migration copy, DFSMShsm needs to
make sure that the data set has not been changed since it was recalled.
DFSMShsm examines the changed bit in the format 1 DSCB (DS1CHA) to
determine whether the data set has been modified, as we described in 3.2.3.1,
“How does fast subsequent migration work?” on page 98. For this reason,
you need to ensure that no products other than DFSMShsm can reset the
change bit. Turning this bit on is not a problem, as DFSMShsm uses the
usual migration process for data sets with the changed bit on.

Even though DFSMShsm is the only software that touches the changed bit,
you need to make sure that you do not use a dump class which has the
RESET attribute. When you take a volume dump using the BACKVOL DUMP
command or automatic dump processing with such a dump class,
DFSMShsm resets the changed bit, while it leaves the catalog records as is.
Therefore, DFSMShsm might reconnect the old migration copy, which is no
longer valid.
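For example (a sketch only; the dump class name and retention period are
illustrative), make sure such dump classes carry the NORESET attribute:

DEFINE DUMPCLASS(ONSITE NORESET RETENTIONPERIOD(28))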

You must not specify SETSYS TAPEMIGRATION(RECONNECT(ALL|ML2DIRECTEDONLY))
until you have confirmed all of the above requirements. In other words, by
specifying this parameter, you declare that all the changed bits are
trustworthy.

3.2.4.2 MCDS size could increase

Before DFSMShsm Release 10, migration control records had no value once
the migrated data sets were recalled.

Note: The major purpose of retaining these records for a certain period is to
avoid unnecessary CI splits when certain ranges of data sets repeatedly
migrate and are recalled.

You can control the retention period for these records through the first
parameter of SETSYS MIGRATIONCLEANUPDAYS, and DFSMShsm deletes
them based on your specification during migration cleanup processes.

As we described in 3.2.3.1, “How does fast subsequent migration work?” on
page 98, the fast subsequent migration function needs the migration control
records of recalled data sets. Retaining these records longer can therefore
increase the space used in the MCDS.



The third MIGRATIONCLEANUPDAYS parameter
To address the concerns we described above, DFSMShsm Release 10
provides a third parameter for the SETSYS MIGRATIONCLEANUPDAYS
command:

SETSYS MIGRATIONCLEANUPDAYS(recalldays statdays reconnectdays)

DFSMShsm uses the third parameter to retain migration control records for
reconnection candidate data sets. For example, a data set recalled from an
ML2 tape volume can be a candidate for reconnection. DFSMShsm keeps the
migration control record of a reconnection candidate data set for the
predicted migration period plus reconnectdays. DFSMShsm calculates the
predicted migration period as the migration date minus the last reference
date. For example, if a data set was last referenced on a certain day and
migrated two weeks later, the predicted migration period is 14 days.

Note: The default value for reconnectdays is 3.
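For example (the values are illustrative only), the following command sets
recalldays to 10, statdays to 30, and reconnectdays to 10, so that the
migration control record of a reconnection candidate is kept for its predicted
migration period plus 10 days:

SETSYS MIGRATIONCLEANUPDAYS(10 30 10)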

3.2.4.3 Coexistence with supported DFSMS/MVS releases


Data sets recalled from a down-level DFSMShsm host are not candidates for
reconnection. If a data set is recalled from a down-level DFSMShsm host,
and DFSMShsm Release 10 needs to migrate the data set, DFSMShsm will
not reconnect; rather, it will use the normal migration technique.

3.2.4.4 ARCMDEXT changes


You might want to use ARCMDEXT so that you can control the migration
function on an exception basis. The input data structure has been changed so
that you can tell DFSMShsm whether or not you want to allow reconnection.
Please refer to the manual, OS/390 DFSMS Installation Exits, SC26-7392, for
more information.



3.2.5 Worked examples
We allocated 1,280 data sets of 7.2 MB each evenly across a storage group
containing 16 3390-3 volumes. We used the following management class and
storage group definitions:

Panel Utilities Scroll Help


_______________________________________________________________________________
MANAGEMENT CLASS DEFINE Page 2 of 5
Command ===>

SCDS Name . . . . . . : SYS1.SMS.SCDS


Management Class Name : MCGOML2

To DEFINE Management Class, Specify:

Partial Release . . . . . . . . . N (Y, C, YI, CI or N)

Migration Attributes
Primary Days Non-usage . . . . 0 (0 to 9999 or blank)
Level 1 Days Non-usage . . . . 0 (0 to 9999, NOLIMIT or blank)
Command or Auto Migrate . . . . BOTH (BOTH, COMMAND or NONE)

GDG Management Attributes


# GDG Elements on Primary . . . (0 to 255 or blank)
Rolled-off GDS Action . . . . . (MIGRATE, EXPIRE or blank)

Use ENTER to Perform Verification; Use UP/DOWN Command to View other Panels;
Use HELP Command for Help; Use END Command to Save and Exit; CANCEL to Exit.

Panel Utilities Help


______________________________________________________________________________
POOL STORAGE GROUP DEFINE
Command ===>

SCDS Name . . . . . : SYS1.SMS.SCDS


Storage Group Name : SGLSS67
To DEFINE Storage Group, Specify:
Description ==> TEST STORAGE GROUP FOR SMS R10 REDBOOK
==>
Auto Migrate . . Y (Y, N, I or P) Migrate Sys/Sys Group Name . .
Auto Backup . . Y (Y or N) Backup Sys/Sys Group Name . .
Auto Dump . . . N (Y or N) Dump Sys/Sys Group Name . . .

Dump Class . . . (1 to 8 characters)


Dump Class . . . Dump Class . . .
Dump Class . . . Dump Class . . .

Allocation/migration Threshold: High . . 2 (1-99) Low . . 1 (0-99)


Guaranteed Backup Frequency . . . . . . NOLIMIT (1 to 9999 or NOLIMIT)

DEFINE SMS Storage Group Status . . . N (Y or N)


Use ENTER to Perform Verification and Selection;
Use HELP Command for Help; Use END Command to Save and Exit; CANCEL to Exit.

As you can see, the management class specifies that data sets should go to
ML2 directly. We then had DFSMShsm perform primary space management,
and it migrated a total of 1,248 data sets to ML2 tape volumes. We recalled all
of these data sets to primary storage and modified some of them. Then we
had DFSMShsm migrate them again.



Figure 48 shows an example of the DFSMShsm active log:

ARC0520I PRIMARY SPACE MANAGEMENT STARTING

ARC0522I SPACE MANAGEMENT STARTING ON VOLUME HG6600(SMS) AT 14:56:01 ON 2000/08/23, SYSTEM SC63
ARC0522I SPACE MANAGEMENT STARTING ON VOLUME HG6700(SMS) AT 14:54:01 ON 2000/08/23, SYSTEM SC63
ARC0522I SPACE MANAGEMENT STARTING ON VOLUME HG6601(SMS) AT 14:56:01 ON 2000/08/23, SYSTEM SC63
ARC0522I SPACE MANAGEMENT STARTING ON VOLUME HG6701(SMS) AT 14:54:01 ON 2000/08/23, SYSTEM SC63
:
ARC0734I ACTION=MIG-RCN FRVOL=HG6600 TOVOL=TST108 TRACKS= 150 RC= 0, REASON= 0, AGE= 0, DSN=HGPARK.S0000
ARC0734I ACTION=MIG-RCN FRVOL=HG6600 TOVOL=TST108 TRACKS= 150 RC= 0, REASON= 0, AGE= 0, DSN=HGPARK.S0001
:
:
ARC0734I ACTION=MIG-RCN FRVOL=HG6706 TOVOL=TST110 TRACKS= 150 RC= 0, REASON= 0, AGE= 0, DSN=PARKHG.T6630
ARC0734I ACTION=MIG-RCN FRVOL=HG6706 TOVOL=TST110 TRACKS= 150 RC= 0, REASON= 0, AGE= 0, DSN=PARKHG.T6631
ARC0734I ACTION=MIGRATE FRVOL=HG6707 TOVOL=TST104 TRACKS= 150 RC= 0, REASON= 0, AGE= 0, DSN=HGPARK.T6702
ARC0734I ACTION=MIG-RCN FRVOL=HG6706 TOVOL=TST110 TRACKS= 150 RC= 0, REASON= 0, AGE= 0, DSN=PARKHG.T6632
ARC0734I ACTION=MIG-RCN FRVOL=HG6706 TOVOL=TST110 TRACKS= 150 RC= 0, REASON= 0, AGE= 0, DSN=PARKHG.T6633
ARC0734I ACTION=MIG-RCN FRVOL=HG6706 TOVOL=TST110 TRACKS= 150 RC= 0, REASON= 0, AGE= 0, DSN=PARKHG.T6634
ARC0734I ACTION=MIGRATE FRVOL=HG6605 TOVOL=TST101 TRACKS= 150 RC= 0, REASON= 0, AGE= 0, DSN=PARKHG.S6532
ARC0734I ACTION=MIGRATE FRVOL=HG6607 TOVOL=TST105 TRACKS= 150 RC= 0, REASON= 0, AGE= 0, DSN=HGPARK.S6718
ARC0734I ACTION=MIG-RCN FRVOL=HG6706 TOVOL=TST110 TRACKS= 150 RC= 0, REASON= 0, AGE= 0, DSN=PARKHG.T6635
ARC0734I ACTION=MIG-RCN FRVOL=HG6706 TOVOL=TST110 TRACKS= 150 RC= 0, REASON= 0, AGE= 0, DSN=PARKHG.T6638
ARC0521I PRIMARY SPACE MANAGEMENT ENDED SUCCESSFULLY

Figure 48. Example of DFSMShsm active log

As you can see in this figure, the keyword MIG-RCN appearing in the
ARC0734I message is new. It indicates that the data set was reconnected
rather than migrated with the normal technique.

Figure 49 shows the output of the REPORT DAILY command:

1--DFSMSHSM STATISTICS REPORT ------- AT 17:07:25 ON 2000/08/23 FOR SYSTEM=SC63

DAILY STATISTICS REPORT FOR 00/08/23

STARTUPS=003, SHUTDOWNS=002, ABENDS=000, WORK ELEMENTS PROCESSED=008555, BKUP VOL RECYCLED=00000, MIG VOL RECYCLED=00000
DATA SET MIGRATIONS BY VOLUME REQUEST= 0005227, DATA SET MIGRATIONS BY DATA SET REQUEST= 00000, BACKUP REQUESTS= 0000000
EXTENT REDUCTIONS= 0000000 RECALL MOUNTS AVOIDED= 01329 RECOVER MOUNTS AVOIDED= 00000
FULL VOLUME DUMPS= 000000 REQUESTED, 00000 FAILED; DUMP COPIES= 000000 REQUESTED, 00000 FAILED
FULL VOLUME RESTORES= 000000 REQUESTED, 00000 FAILED; DATASET RESTORES= 000000 REQUESTED, 00000 FAILED
ABACKUPS= 00000 REQUESTED,00000 FAILED; EXTRA ABACKUP MOUNTS=00000
DATA SET MIGRATIONS BY RECONNECTION = 001184, NUMBER OF TRACKS RECONNECTED TO TAPE = 00177600

NUMBER ------READ-------- -----WRITTEN------ ------REQUESTS---- AVERAGE ------AVERAGE TIME-------


HSM FUNCTION DATASETS TRK/BLK BYTES TRK/BLK BYTES SYSTEM USER FAILED AGE QUEUED WAIT PROCESS TOTAL

MIGRATION
PRIMARY - LEVEL 1 0002731 00409650 019264690K 00010924 000391678K 002731 00000 00000 00000 0000 00000 00003 00003
SUBSEQUENT MIGS 0000000 00000000 000000000K 00000000 000000000K 000000 00000 00000 00000 0000 00000 00000 00000
PRIMARY - LEVEL 2 0002496 00374400 017606990K 00000000 009278448K 002496 00000 00000 00000 0000 00000 00001 00001
RECALL
LEVEL 1 - PRIMARY 0003276 00013104 000469852K 00491400 023109170K 000000 07279 04003 00000 0115 00000 00002 00117
LEVEL 2 - PRIMARY 0001335 00000000 009441088K 00200250 009417189K 000000 01340 00005 00000 0508 00000 00002 00510
DELETE
MIGRATE DATA SETS 0000000 00000000 000000000K 00000000 000000000K 000000 00000 00000 00000 0000 00000 00000 00000
PRIMARY DATA SETS 0000000 00000000 000000000K 00000000 000000000K 000000 00000 00000 00000 0000 00000 00000 00000
BACKUP
DAILY BACKUP 0000000 00000000 000000000K 00000000 000000000K 000000 00000 00000 00000 0000 00000 00000 00000
SUBSEQUENT BACKUP 0000000 00000000 000000000K 00000000 000000000K 000000 00000 00000 00000 0000 00000 00000 00000
DELETE BACKUPS 0000000 00000000 000000000K 00000000 000000000K 000000 00000 00000 00000 0000 00000 00000 00000
RECOVER
BACKUP - PRIMARY 0000000 00000000 000000000K 00000000 000000000K 000000 00000 00000 00000 0000 00000 00000 00000
RECYCLE
BACKUP - SPILL 0000000 00000000 00000000 000000 00000 00000 00000 0000 00000 00000 00000
MIG L2 - MIG L2 0000000 00000000 00000000 000000 00000 00000 00000 0000 00000 00000 00000

Figure 49. Example of REPORT DAILY command output



Figure 50 shows a performance comparison between normal ML2 migration
and a mixture of fast subsequent migration and normal migration.

Note: The text appearing in bold-face letters in Figure 49 is the new
information about fast subsequent migration. There are no changes to the
PRIMARY - LEVEL 2 row; that report accumulates data sets which have been
migrated through reconnection as well as through normal migration.

Figure 50. Performance comparison — normal ML2 migration and reconnection

As we described in 3.1.4, “Worked examples” on page 86, we do not
guarantee that you would get the same results as ours.

Both tests were made under a single DFSMShsm address space. Please note
that the elapsed time includes tape mounting/demounting overhead.

The purpose of showing this figure is to demonstrate that this function has
the potential to improve migration performance, since DFSMShsm does not
physically read or write user data sets when it can reconnect to existing ML2
tape copies. We designed our test so that DFSMShsm would reconnect more
than 90 percent of the data sets. In reality, your reconnection ratio could
differ considerably from that in our test.



3.3 Data set backup enhancements
In this section, we describe new enhancements made to the data set backup
function.

3.3.1 Background of these enhancements


DFSMShsm has a function to take a data set level backup on demand. You
can use the following methods to have DFSMShsm take a data set level
backup:
• ARCHBACK macro:
This macro provides a programming interface to DFSMShsm services.
• ARCINBAK program:
This program, which is known as in-line backup, allows you to take
backups from within a batch job step.
• BACKDS command:
You can issue this command through the MODIFY operator command,
or through the TSO HSEND command.
• HBACKDS command:
This is the TSO command that allows TSO users to request data set
backups.

To simplify further discussion, we refer to ARCHBACK, ARCINBAK,
BACKDS, and HBACKDS as data set backup commands, unless otherwise
noted.
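For reference, a minimal in-line backup job might look like the following
sketch (the data set names are illustrative; each BACKnnnn DD statement
names a data set to be backed up, and ARCPRINT receives the messages):

//ITSOPROD JOB MSGCLASS=X
//STEP1    EXEC PGM=ARCINBAK
//ARCPRINT DD SYSOUT=*
//BACK0001 DD DSN=ITSO.DSET001,DISP=OLD
//BACK0002 DD DSN=ITSO.DSET002,DISP=OLD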

Below, we describe the enhancements made to the data set backup function.

3.3.1.1 Command data set backup was a single task


If you need backups of the data sets used in your batch applications, you
might want to back them up before your batch applications start, or after
they have finished. You may have considered having DFSMShsm back up
your data sets by issuing data set backup commands for this purpose, so that
you do not have to manage the inventory of backups yourself, and can simply
let DFSMShsm take care of them.

However, if you have many data sets that need to be backed up in the batch
job window, you may have given up on this idea after discovering that
DFSMShsm could back up only one data set at a time. This is because the
command data set backup requests were processed under a single task, prior
to DFSMShsm Release 10 (see Figure 51).




Figure 51. Data set backup is single task under DFSMShsm pre-Release 10

3.3.1.2 Backup versions were first made on ML1 DASDs


The command data set backup task first creates backup copies on ML1
DASD volumes. Then a DFSMShsm primary host (one started with the
HOST='xY' parameter or, in Release 10, PRIMARY=YES) moves them to backup
volumes as a part of automatic backup processing.

The original intention of this design was to return control to the requestor as
soon as possible, and let DFSMShsm move the backup copies to an
appropriate device category at a later time. This design makes sense as long
as the number of data sets backed up by commands is small enough, even
though it involves double data movement.

However, as the number of backup copies on ML1 DASDs increases, the
DFSMShsm primary host has to spend more time moving backup versions
from ML1 DASDs to backup volumes before performing volume-level backup
functions. For this reason, your automatic backup window could end without
making all of the necessary backups, and you might need to make the backup
window longer than the current setting (see Figure 52).



Figure 52. Moving backup versions from ML1 impacts primary volume processing

In addition, making backup versions on ML1 DASDs can have the following
side effects:
• Lack of space for migration may cause failures.
The primary purpose of ML1 DASDs is not to keep backup copies, but to
keep inactive data sets. If backup copies occupy most of the available
space on ML1 DASDs, further migration requests could fail due to lack of
space.
• A backup of a data set may not fit on the ML1 volumes.
When a backup copy does not fit on the ML1 DASDs, DFSMShsm fails
the backup request, unless you have ML1 OVERFLOW volumes (available
since APAR OW07781) and the backup copy fits on them. In order to
prevent command data set backups from failing, you may need to define
more ML1 OVERFLOW volumes. However, if the data set does not even fit
on the ML1 OVERFLOW volumes, the backup is certain to fail, whereas
backing up to tape volumes does not have this limitation.



Refer to Figure 53 for an illustration of these problems.

Figure 53. Inconveniences on command data set backup



3.3.2 How does DFSMShsm Release 10 improve this function?
In this section, we explain how DFSMShsm Release 10 solves these issues
of data set backup.

3.3.2.1 Data set backup tasks run up to 64


DFSMShsm Release 10 can have up to 64 command data set backup tasks
running concurrently. It also allows you to choose tape devices as the target.
You can use the following commands to control the number of concurrent
tasks:
• SETSYS DSBACKUP(DASD(TASKS(mm)))
In this command, mm is the maximum number of tasks, each of which
makes backup copies on ML1 DASDs. When you set mm to 0, DFSMShsm
does not allow you to make backup copies on ML1 DASDs. The default is
two DASD tasks.
• SETSYS DSBACKUP(TAPE(TASKS(nn)))
In this command, nn is the maximum number of tasks, each of which
makes backup copies on tapes. When you set nn to 0, DFSMShsm does
not allow you to make backup on tapes. The default is two tape tasks.

The sum of mm and nn must be less than or equal to 64 (see Figure 54).

Figure 54. DFSMShsm Release 10 can have up to 64 data set backup tasks
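For example (the task counts are illustrative), the following command allows
up to four DASD backup tasks and up to eight tape backup tasks:

SETSYS DSBACKUP(DASD(TASKS(4)) TAPE(TASKS(8)))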



Note that these numbers are the maximum number of tasks you allow
DFSMShsm to attach; they do not mean that DFSMShsm will always run that
many command data set backup tasks. DFSMShsm dynamically increases
the number of command data set backup tasks based on its own decisions,
so that it can perform at its best with fewer system resources.

For this reason, you may see only one tape command data set backup task
running, even if you have specified TAPE(TASKS(4)) and four backup-to-tape
requests are queued. This is different from other DFSMShsm tasks. For
example, if you specify SETSYS MAXMIGRATIONTASKS(4) and there are four
volumes to be processed during primary space management, DFSMShsm
will run four volume migration tasks.

We recommend that you specify nn as the number of tape devices you can
reserve for command data set backup tasks. If nn is bigger than the number
of devices available, the additional task may not be able to allocate a tape
device (see Figure 55).

Figure 55. Third task cannot allocate tape device when only 2 devices available



3.3.2.2 You can choose the target device type to take backups
DFSMShsm Release 10 provides TARGET(DASD|TAPE) as a new optional
parameter of any data set backup commands, to allow you to choose the
target device type where you would like to make your backup copies through
these commands.

Use TARGET(DASD) to direct backups to ML1 DASDs
The following command example lets DFSMShsm back up the data set
ITSO.SANJOSE.DATASET on an ML1 DASD first:
HBACKDS ITSO.SANJOSE.DATASET TARGET(DASD)

The backup version will then be moved to backup volumes when a
DFSMShsm primary host receives the RELEASE BACKUP or the FREEVOL
ML1BACKUPVERSIONS command (if automatic backup is not scheduled for
the day), or when the automatic backup function starts on a DFSMShsm
primary host. This process of moving backup versions is the same as before
Release 10 (see Figure 56).

Figure 56. TARGET(DASD) uses ML1 DASDs



Use TARGET(TAPE) to direct backups to tape volumes
The following command example lets DFSMShsm back up the data set,
ITSO.WHOLEORG.DATASET, directly to a tape backup volume (see Figure 57):
HBACKDS ITSO.WHOLEORG.DATASET TARGET(TAPE)

Figure 57. TARGET(TAPE) uses tape backup volumes

If there is a partial backup volume (one not marked as full), DFSMShsm will
select it first. If there are no partial backup volumes, DFSMShsm selects a
volume differently, depending on how you have specified the SETSYS
SELECTVOLUME parameter.



If you have specified SETSYS SELECTVOLUME(SPECIFIC), DFSMShsm will try to
pick a backup tape volume from its inventory which meets all of the following
requirements:
• ADDVOL’d as a tape BACKUP volume:
SPILL volumes cannot be candidates.
• Not marked as FULL:
The tape must not be marked as FULL.
• DAILY volume:
If a specific day is assigned, DFSMShsm tries to honor it. However, if a
data set backup task has already used a tape backup volume and still has
it mounted, the task will go ahead and use that volume, regardless of
whether the assigned day differs from the day when the task is scheduled
to take a backup.

If no volume in its inventory meets these criteria, or if you have specified
SETSYS SELECTVOLUME(SCRATCH), a data set backup task will make a
non-specific volume request when the task has not yet used a tape volume.
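For example, to have data set backup tasks request scratch volumes rather
than specific volumes from the DFSMShsm inventory, you would specify:

SETSYS SELECTVOLUME(SCRATCH)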

Otherwise, let DFSMShsm choose the best place for the backup
The following command example lets DFSMShsm back up the data set
ITSO.WHEREVER.DATASET on whatever device DFSMShsm considers the best
(see Figure 58):
HBACKDS ITSO.WHEREVER.DATASET



Figure 58. DFSMShsm decides the best device when no TARGET parameter

DFSMShsm takes the following factors into account:

• Number of DASD and tape data set backup tasks:
If you have specified SETSYS DSBACKUP(TAPE(TASKS(0))), DFSMShsm will not
choose tape devices.
If you have specified SETSYS DSBACKUP(DASD(TASKS(0))), DFSMShsm will not
choose ML1 DASDs.
• WAIT or NOWAIT:
Each data set backup command has these two options. The WAIT option
specifies that you want to wait until DFSMShsm has finished taking the
backup, so that you can be sure you have the copy before proceeding with
further operations. The NOWAIT option specifies that you do not have to
wait; you get control back right after DFSMShsm has accepted your
request.



If you request a backup with the NOWAIT option, DFSMShsm chooses a
tape device, since the requestor is not affected by mount/demount delay
time. However, if you have specified SETSYS DSBACKUP(TAPE(TASKS(0))),
DFSMShsm will take the backup on ML1 DASD, even if you specify
NOWAIT.
If you request a backup with the WAIT option, and both tape devices and
ML1 DASDs are available for data set backup, DFSMShsm considers the
size of the data set being backed up, based on the specification you made
on the new SETSYS command described below.
Tape devices read and write data quickly in sequential fashion; however,
mount/demount delays affect their overall performance. If a data set is big
enough to justify these delay times, the backup should go to a tape device.
If a data set is too small to justify these delay times, the backup should go
to ML1 DASD, as the requestor might want DFSMShsm to return as soon
as it has taken the backup.
• SETSYS DSBACKUP(DASDSELECTIONSIZE(maximum standard))
In order to determine where the backup should go, DFSMShsm uses the
values maximum and standard to categorize data sets into the following
three types:
- LARGE
If a data set is bigger than maximum, the backup copy goes to tape.
- MEDIUM
If a data set is bigger than standard but smaller than maximum,
DFSMShsm prefers tape over DASD. DFSMShsm may select DASD if
a tape data set backup task would take a long time to begin
processing the request.
- SMALL
If a data set is smaller than standard, DFSMShsm prefers DASD over
tape. DFSMShsm may select tape if a DASD data set backup task
would take a long time to begin processing the request.
If you do not specify DASDSELECTIONSIZE, DFSMShsm uses the
defaults of 3,000 kilobytes (KB) for maximum and 250 KB for standard.
Refer to Figure 59 for an illustration of these concepts.



Figure 59. DFSMShsm selects target device based on the size of data sets
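For example (these are the default values), the selection thresholds are
specified as a DSBACKUP subparameter:

SETSYS DSBACKUP(DASDSELECTIONSIZE(3000 250))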

3.3.2.3 Controlling tape resources


Since DFSMShsm Release 10 allows you to use tape devices directly for
command data set backups, it has also added several commands which can
be used to control tape devices. You need to customize these settings
carefully so that you can exploit this enhancement efficiently.

Use SETSYS... DEMOUNTDELAY to keep tape volumes mounted
The following command lets you specify how long you want tape command
data set backup tasks to keep tape volumes mounted after they have finished
working and there are no additional backup-to-tape requests to process:
SETSYS DSBACKUP(TAPE(DEMOUNTDELAY(MAXIDLETASKS(mm) MINUTES(min))))

If you do not specify MAXIDLETASKS, or you specify MAXIDLETASKS(0),
DFSMShsm detaches tape command data set backup tasks after they have
finished processing and there is no more work to do. In other words, when a
tape command data set backup task has finished writing a backup copy on a
tape volume and there are no backup requests left to process, DFSMShsm
demounts the volume.

When you have enough tape devices to assign to DFSMShsm even for such
command operations, it is better to keep tape volumes allocated, as mounting
or unloading tape volumes takes time, even with automatic tape library
devices such as the IBM 3494 Tape Library Data Server. By keeping tape
volumes allocated even after command backup tasks have finished using
them, DFSMShsm does not have to select or mount a backup tape volume
again.

Refer to Figure 60, and assume you have specified:

SETSYS DSBACKUP(TAPE(TASKS(10) DEMOUNTDELAY(MAXIDLETASKS(2))))

If DFSMShsm actually dispatches three tape data set backup tasks, one of
the three tasks will be detached after it has finished working, and its backup
tape volume will be demounted. The remaining two tasks will keep their
volumes mounted.

Figure 60. DFSMShsm deletes idle tasks beyond MAXIDLETASKS

Since the tape drives are still allocated and the tapes are still mounted, the
idle tasks can process future backup requests without waiting for a tape to be
mounted.

Now, what is the MINUTES parameter used for? This parameter specifies how
long you would like to keep idle tape command backup tasks alive. The
value only applies to tasks which have no work, but remain alive due to the
MAXIDLETASKS specification. If at least min minutes have passed since an idle
task processed its last backup request, DFSMShsm detaches the task and
unallocates the tape device, and the tape volume used by the task is
unloaded.

Refer to Figure 61, and assume you have specified:

SETSYS DSBACKUP(TAPE(DEMOUNTDELAY(MAXIDLETASKS(2) MINUTES(30))))

Figure 61. Each tape task sets its own timer

Each task sets its own timer when it goes idle. The figure shows that
DFSMShsm detaches Task 3, as it has been idle for more than 30 minutes.
Task 2 is still idle, as 30 minutes have not yet passed since it went idle. If
another tape backup request comes along, Task 2 will reset its timer,
process the request, and set the timer again when it goes back to idle
status.

If you specify 1440 as min, idle tasks remain alive indefinitely, unless you
shut down DFSMShsm, command data set backup tasks are held, or a
SWITCHTAPES event occurs. If you specify a non-zero MAXIDLETASKS but do not
specify MINUTES, DFSMShsm uses MINUTES(60) as the default.
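
For example, the following specification (the values are illustrative, not a recommendation) keeps up to two idle tape tasks, and therefore their mounted volumes, alive indefinitely until one of the events above occurs:

SETSYS DSBACKUP(TAPE(DEMOUNTDELAY(MAXIDLETASKS(2) MINUTES(1440))))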

Use DEFINE SWITCHTAPES...TIME to schedule demounting
If you keep tape command data set backup tasks alive by having non-zero
MAXIDLETASKS and non-zero MINUTES in the SETSYS..DEMOUNTDELAY
parameter, these idle tasks will not only keep tape volumes mounted, but also
keep tape devices allocated. However, you may want to keep a set of backup
tape volumes at another location. Or, you may want to free those tape
devices and make them available to another DFSMShsm operation, such as
automatic backup, dump, or migration.

Since the MINUTES parameter is a relative specification, it cannot be used to
guarantee that these tape devices or volumes are freed when you need them
for other operations. You can use the following commands to tell DFSMShsm
when you want to free those resources:
• DEFINE SWITCHTAPES(DSBACKUP(TIME(hhmm|0)))
In this command, hhmm specifies the local time of the processor where
DFSMShsm runs. For example, if you specify TIME(1730) and backup tape
volumes have been kept mounted, DFSMShsm will demount them and
free those tape devices at 5:30 PM. If a task is still working at the time
specified, the task will demount the tape volume after it has finished
processing the current request, and then the device will be unallocated
(see Figure 62).

Figure 62. Active task does not demount tape volume until current request done

If you specify 0, which is the default when you do not specify the TIME
parameter, DFSMShsm does not demount the tape volume or free the
tape devices. Idle tasks therefore remain alive and tape volumes stay
mounted unless you shut down DFSMShsm, command data set backup
tasks are held, or the time specified in SETSYS DSBACKUP..MINUTES has
passed.
• DEFINE SWITCHTAPES(DSBACKUP(AUTOBACKUPEND))
If you specify AUTOBACKUPEND, DFSMShsm will demount tape volumes when
automatic backup processing ends.
You might wonder what the automatic backup function has to do
with demounting volumes. Let us explain this briefly. When you schedule
automatic backup, DFSMShsm performs volume-level backup processing.
That is, volume backup tasks process DFSMShsm-managed
volumes during the window you specified through the SETSYS
AUTOBACKUPSTART command.

After DFSMShsm finishes processing all volumes, or the window has
passed, DFSMShsm retries backing up those data sets which were in use
when the volume backup tasks tried to back them up. Since data set backup
tasks process these retries, your SETSYS ...DEMOUNTDELAY specification also
affects them. For this reason, tape backup volumes may be kept mounted
even after automatic backup processing (including those retries) has
finished. This is inconvenient, especially when you need to keep tape
backup volumes produced by automatic backup at an off-site location, as
some of them may still be mounted if there are backup retries.
If you specify AUTOBACKUPEND, DFSMShsm demounts the tape volumes
after the backup retries have finished. In other words, the AUTOBACKUPEND
parameter helps you identify all of the tape backup volumes produced by
automatic backup processing.
Figure 63 illustrates how AUTOBACKUPEND works with automatic backup
processing.

Figure 63. Automatic backup and SWITCHTAPES (the figure shows messages ARC0720I, ARC0699I, ARC0721I, ARC0253I, and ARC0254I issued as automatic backup, backup retries, and SWITCHTAPES processing proceed)

As you can see in this figure, a new message, ARC0699I, is issued after
automatic volume backup processing has finished. In DFSMShsm
pre-Release 10 you would see message ARC0721I at this point, even though
backup retries could still follow for data sets which were in use during
volume backup processing.

Specify an action when tape volume is demounted


You can specify the PARTIALTAPE parameter along with the TIME parameter to
tell DFSMShsm how you want it to treat the tape backup volume after it has
been demounted. The following three options are available (a combined
example follows the list):
• DEFINE SWITCHTAPES(DSBACKUP(PARTIALTAPE(MARKFULL)))
DFSMShsm marks the tape volume as full, so that the volume will not be a
candidate for backup volume selection. If you use the IBM Virtual Tape
Server (VTS), you might want to use this option along with the SETSYS
SELECTVOLUME(BACKUP(SCRATCH)) setting, as a VTS may take longer to
mount a specific logical volume than to mount a scratch volume.
• DEFINE SWITCHTAPES(DSBACKUP(PARTIALTAPE(REUSE)))
DFSMShsm does not mark the tape volume as full, so the volume can
be a candidate for backup volume selection again.
• DEFINE SWITCHTAPES(DSBACKUP(PARTIALTAPE(SETSYS)))
DFSMShsm treats the tape volume according to the SETSYS PARTIALTAPE
setting.
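
For instance, the following definition (the time value is ours, for illustration) schedules a daily demount at 5:30 PM local time and marks the partially filled backup tapes full when they are demounted:

DEFINE SWITCHTAPES(DSBACKUP(TIME(1730) PARTIALTAPE(MARKFULL)))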

3.3.2.4 You can use Concurrent Copy to take a backup

Another enhancement made to the data set backup function is exploitation of
the Concurrent Copy function. Concurrent Copy allows you to take a
point-in-time copy while applications continue to access the data. This
function requires DFSMSdss and certain types of storage hardware, such as
the IBM 2105 Enterprise Storage Server or the IBM 9393 RAMAC Virtual
Array.

DFSMShsm Release 10 allows you to specify whether or not you want to use
the Concurrent Copy technique for data set backup commands. Before
Release 10, DFSMShsm could use Concurrent Copy only for
system-managed data sets which had an appropriate Backup Copy
Technique attribute in their management class. So there was no way to have
DFSMShsm back up non-system-managed data sets using Concurrent Copy,
or to override the Backup Copy Technique attribute of system-managed data
sets.

Even for system-managed data sets, DFSMShsm pre-Release 10 would not
return control to the user when Concurrent Copy initialization completed.
From the requesting user's point of view, this was no different from a
standard backup technique, although other users could access the data set
sooner, since DFSMShsm had already released its ENQs on the data set.

DFSMShsm Release 10 provides the new CC parameter for data set backup
commands to allow you to take a backup using Concurrent Copy technique.
The CC parameter has the following format:
CC(STANDARD|PREFERRED|REQUIRED LOGICALEND|PHYSICALEND)
- STANDARD specifies that you want to use standard backup methods.
DFSMShsm backs up your data sets without using concurrent copy.
- PREFERRED specifies that concurrent copy is the preferred backup
method that you want to use for backup, if it is available. If concurrent
copy is not available or the user has no authorization to use the CC
parameter on the command, DFSMShsm ignores the PREFERRED
parameter and backs up the data set by using standard backup
methods.
- REQUIRED specifies that concurrent copy must be used as the backup
method, and the data set backup fails if concurrent copy is not
available or if the user has no authorization to use the CC parameter.
Note: DFSMShsm determines if you have authorization to use
Concurrent Copy by checking the RACF profile,
STGADMIN.ADR.DUMP.CNCURRNT.
- PHYSICALEND specifies that control returns to applications or users only
after the backup physically completes.
- LOGICALEND specifies that control returns to the application or user when
concurrent copy initialization completes.
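
As a usage sketch (the data set name is hypothetical), the following command requests a backup that must use Concurrent Copy and returns control as soon as Concurrent Copy initialization completes:

HBACKDS ITSO.SAMPLE.DSET CC(REQUIRED LOGICALEND)

If Concurrent Copy were unavailable, or the user lacked authority, this request would fail; with CC(PREFERRED LOGICALEND) it would instead fall back to a standard backup.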

Table 9 shows how DFSMShsm treats a data set backup request when you
specify the CC keyword.

Table 9. CC parameter specification, authority, device capability, and results

CC method   End option               Authorized to use   Data set on a device    Result
                                     Concurrent Copy?    supporting CC?
STANDARD    PHYSICALEND              YES/NO              YES/NO                  Non-CC *1
STANDARD    LOGICALEND               YES/NO              YES/NO                  Fail (ARC1313I)
PREFERRED   LOGICALEND               YES                 YES                     CC/LOGICAL *2
PREFERRED   PHYSICALEND              YES                 YES                     CC/PHYSICAL *3
PREFERRED   LOGICALEND/PHYSICALEND   YES                 NO                      Non-CC
PREFERRED   LOGICALEND/PHYSICALEND   NO                  YES                     Non-CC
PREFERRED   LOGICALEND/PHYSICALEND   NO                  NO                      Non-CC
REQUIRED    LOGICALEND               YES                 YES                     CC/LOGICAL
REQUIRED    PHYSICALEND              YES                 YES                     CC/PHYSICAL
REQUIRED    LOGICALEND/PHYSICALEND   YES                 NO                      Fail (ARC1368I)
REQUIRED    LOGICALEND/PHYSICALEND   NO                  YES                     Fail (ARC1359I)
REQUIRED    LOGICALEND/PHYSICALEND   NO                  NO                      Fail (ARC1359I)

*1: DFSMShsm takes the backup, but does not use Concurrent Copy. The requestor is
notified when the backup has physically completed.

*2: DFSMShsm uses Concurrent Copy. The requestor is notified when Concurrent Copy
initialization completes.

*3: DFSMShsm uses Concurrent Copy. The requestor is notified when the backup has
physically completed.

Figure 64 shows an overview of the DFSMShsm Concurrent Copy support.

Figure 64. DFSMShsm Concurrent Copy support overview (the actual point-in-time copy is maintained by the system software and the storage control; the drawing is for explanatory purposes only and does not describe the actual implementation of Concurrent Copy)

In this figure, the user issues the command HBACKDS CC(REQUIRED LE) to
take a backup of ITSO.DSET. DFSMShsm accepts the command, serializes
the data set, and has DFSMSdss execute Concurrent Copy. After DFSMSdss
has finished initializing the Concurrent Copy session, it notifies DFSMShsm.
DFSMShsm then releases the data set, notifies the requester that backup
processing has ended, and continues to take the backup in the background.

3.3.2.5 Recover takeaway functions provided

Depending on how you set up these tape management policies, as we have
explained, your tape backup volumes might stay mounted longer than before.
While a data set backup task is holding a backup tape volume, DFSMShsm
may get a recover request which needs that volume to be satisfied. If
this happens, DFSMShsm Release 10 will take the volume away from the
backup task and make it available for the recover request whenever possible.
This is due to a series of takeaway functions which have been implemented
through APARs.

If DFSMShsm is sharing data across multiple OS/390 images, this “recover
takeaway” function requires GRS or an equivalent product to work, so you
need to put your HSMplex in the same GRS configuration (see Figure 65).
This differs from other takeaway functions.

Figure 65. Recover takeaway uses GRS to communicate with other DFSMShsm

3.3.3 Considerations
In this section, we describe some considerations on using the data set
backup functions.

3.3.3.1 MAIN host and AUX host


A DFSMShsm MAIN host can process all kinds of command data set backup
requests (the HBACKDS command, the HSEND BACKDS command, the
ARCHINBAK program, and the ARCHBACK macro).

A DFSMShsm AUX host can process command data set backup requests
only through the MODIFY operator command interface.
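
For example, on an AUX host an operator could drive a data set backup through the MODIFY interface like this (a sketch; the started task name HSMAUX and the data set name are ours):

F HSMAUX,BACKDS ITSO.SAMPLE.DSET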

3.3.3.2 Coexistence with supported DFSMS/MVS releases


Here, we describe some considerations on coexistence.

Backup copies usability


These enhancements are only available in DFSMShsm Release 10.
However, no changes were made to CDS formats or backup copies for these
enhancements, so you can recover a data set on a pre-Release 10 system
from a backup version made on Release 10, or vice versa.

Recover takeaway is not available on DFSMShsm pre-Release 10
Please remember that the recover takeaway function is not available on
DFSMShsm pre-Release 10. Therefore, you cannot recover a data set on a
pre-Release 10 system from a tape volume which is being used by a tape
backup task on Release 10.

Coexistence APAR
Table 10 shows an APAR/PTF list for this function.

Table 10. Coexistence APAR/PTF list for command data set backup

APAR      DFSMS/MVS 1.5.0   DFSMS/MVS 1.4.0   DFSMS/MVS 1.3.0   DFSMS/MVS 1.2.0
OW46125   UW73892           UW73891           UW73890           UW73889

After applying the respective PTF for this APAR, down-level DFSMShsm will
issue a warning message if you request a command data set backup with the
CC and/or TARGET keywords, and will take the backup onto ML1 DASD (see
Figure 66).

Figure 66. Down-level DFSMShsm ignores new keyword (message ARC1070I is issued and the backup goes to ML1 DASD)

3.3.4 Worked example
We measured the elapsed time for making four data set backups in three
cases. Each data set had 1,000 cylinders of space allocated on a 3390-3
format volume, and all of the allocated space was filled with data. Figure 67
shows a performance comparison of the three cases.

Figure 67. Command data set backup performance comparison (elapsed time in seconds for three cases: a single task with no CC, 4 tasks with no CC, and 4 tasks with CC LE)

All of these tests were made under a single DFSMShsm address space in
DFSMS Release 10.

The left bar shows the performance when issuing four backup requests at a
time, using the CC(STANDARD PE) TARGET(DASD) parameters on each
request. Before issuing those requests, we set the maximum number of
DASD data set backup tasks to 1. The performance is therefore similar to
DFSMShsm pre-Release 10, which has a single data set backup command
task and takes its backups to ML1 DASD.

The middle bar shows the performance when issuing four backup requests
at a time, using the CC(STANDARD PE) TARGET(DASD) parameters on each
request. Before issuing those requests, we set the maximum number of
DASD data set backup tasks to 4. We confirmed from the RMF Monitor II
device activity report that all of the data sets were being backed up
concurrently.

The right bar shows the performance when issuing four backup requests at a
time, using the CC(REQUIRED LE) TARGET(DASD) parameters on each
request. Again, we set the maximum number of DASD data set backup tasks
to 4 and confirmed from the RMF Monitor II device activity report that all of
the data sets were being backed up concurrently. Note that the measured
elapsed time ends when the program ends; in this case, DFSMShsm was
still performing backup operations after the program had ended.

The purpose of this figure is to demonstrate the benefit of data set backup
multi-tasking. We do not guarantee that you would get the same results,
since many factors affect performance measurement, such as I/O
configuration, software configuration, workload distribution, and so on.

3.4 ABARS support for large tape block sizes


In this section, we describe the ABARS support enhancement that was made
in DFSMS Release 10.

3.4.1 Background of this enhancement


ABARS is a unique function of DFSMShsm that addresses disaster recovery.
You can use ABARS to back up an aggregate group of data sets. An
aggregate backup can consist of the following data sets:
• Data sets that reside on primary volumes:
The term “primary volumes” refers to the DASD volumes holding data sets
that application programs can access directly.
ABARS supports both system-managed and non-system-managed data
sets.
• Migrated data sets:
Unlike DFSMSdss, ABARS can back up data sets which have been
migrated to either ML1 or ML2 storage.
• Tape data sets:
ABARS can back up tape data sets.

In addition, you can define the following resources:
• Data sets that accompany an aggregate backup group:
For example, a tape data set that spans multiple volumes is best brought
to the recovery site as-is, rather than having ABARS back it up as part of
an aggregate group.
• Data sets that only need to be allocated:
Some kinds of data sets do not need to be restored; they only need to be
allocated or defined. For example, consider an application that creates
GDG data sets.

ABARS makes tape backup volumes of the aggregate group, based on the
definitions you specified.

Since DFSMS Release 10 supports large tape block sizes (block sizes
greater than 32,760 bytes), you may want your applications to use this
capability. You may also want to include such tape data sets in an aggregate
group and have ABARS back them up (see Figure 68).

Figure 68. ABARS creates a set of backups from primary and migration volumes

3.4.2 ABACKUP/ARECOVER data sets with large tape block sizes
ABACKUP is enhanced to include user tape data sets with large tape block
sizes, and to write them as part of an aggregate group. ARECOVER is
enhanced to recover such data sets from an aggregate group.

Refer to 2.2, “Large tape block sizes” on page 36, for more information about
large tape block size support.

3.4.3 Considerations
In this section, we describe some considerations on ABARS support for large
tape block sizes.

Recovery site should be at OS/390 Version 2 Release 10 level

If you plan to include tape data sets which have a large tape block size in an
aggregate backup group, you also need an OS/390 Version 2 Release 10
system at your recovery site, as a pre-Release 10 system cannot perform
ARECOVER successfully when an aggregate backup contains an image of
tape data sets with a large tape block size. Otherwise, you need to apply the
coexistence PTF so that you avoid unpredictable results during the
ARECOVER process.

Coexistence PTFs for supported DFSMS/MVS releases


Table 11 contains a list of APARs provided for supported DFSMS/MVS
releases.

Table 11. Toleration APAR/PTF for ABARS large tape block size support

APAR      DFSMS/MVS 1.5.0   DFSMS/MVS 1.4.0   DFSMS/MVS 1.3.0   DFSMS/MVS 1.2.0
OW41865   UW68028           UW68027           UW68026           UW68025

After applying the respective fix for the APAR, down-level DFSMShsm will
fail an ABACKUP request with an error message if it finds a tape data set
with a large tape block size that it would have to process (see Figure 69).

Figure 69. Down-level system fails ABACKUP if a data set has a large tape block size (message ARC6172E)

Down-level DFSMShsm will also fail an ARECOVER request for a data set if
the data set had a large tape block size, and will issue an error message
(see Figure 70).

Figure 70. Down-level system cannot recover data set with large tape block size

Chapter 4. DFSMSrmm enhancements

In this chapter, we describe the following DFSMSrmm enhancements made
in DFSMS Release 10:
• Virtual Tape Server support enhancement
• Volume set management support
• Pre-ACS and ACS support
• Tivoli OPC example
• Miscellaneous enhancements

4.1 Virtual Tape Server (VTS) support enhancement


In this section, we describe the VTS support enhancement made to
DFSMSrmm in Release 10.

4.1.1 Background of this enhancement


Here, we describe the background of this enhancement. Since this
enhancement cannot be discussed without some knowledge of tape
management through DFSMSrmm, as well as of the VTS advanced function,
we first give an overview of DFSMSrmm inventory management processing,
particularly location management. Then we provide an overview of the VTS
advanced function, followed by a detailed description of the enhancement.

4.1.1.1 Overview of DFSMSrmm inventory management


DFSMSrmm manages removable media resources. Management policies
must be defined to DFSMSrmm as vital record specifications (VRSs). The
inventory management processing tasks are performed periodically to:
• Compare all resources with VRSs and assign the management policy to
each resource (VRSEL)
• Assign a location to be moved to the volumes according to the
management policy (DSTORE)
• Return expired volumes to scratch status according to the management
policy (EXPROC)
• Create a CDS extract file which is used to create various reports such as
volume movement or inventory (RPTEXT)
• Back up CDS and journal and clear journal (BACKUP)

You can perform these tasks by scheduling (running) a program called
EDGHSKP, commonly known as a housekeeping job. The keywords in
parentheses in the above list are the parameters for EDGHSKP. For example,
to have DFSMSrmm return expired volumes to scratch status, you code a
JCL EXEC statement like the following:

//S1 EXEC PGM=EDGHSKP,PARM='EXPROC'

You can specify multiple keywords by separating them with a comma (,). For
more information, refer to Chapter 13, “Performing Inventory Management”,
in the DFSMSrmm V1R5 Implementation and Planning Guide,
SC26-4932-06.
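
For instance, a single housekeeping step that runs vital record processing, storage location assignment, and expiration together might look like this (a sketch; schedule whichever combination your installation needs):

//S1 EXEC PGM=EDGHSKP,PARM='VRSEL,DSTORE,EXPROC'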

4.1.1.2 Overview of location management


Location management is one of the inventory management tasks. The
practical operation of location management is as follows:
1. Define a VRS to move a resource to a vault location. The VRS can be
defined for either a volume or a data set.
2. Run VRSEL processing. DFSMSrmm assigns a required location to
volumes based on the VRS definition you made in step 1. When a volume
contains multiple data sets, individual data sets on the volume might have
different vault locations. In this case, DFSMSrmm uses the location
priority number to determine the required location to be vaulted. For
details, see “What is the location priority number?” on page 136.
3. Run DSTORE processing. DFSMSrmm assigns the destination to the
volume according to the result of step 2. Also, DFSMSrmm sets the
in-transit flag to yes, which means that the volume is in transit. If the tape
volume is in the tape library, the in-transit flag is not set to yes until the
cartridge is ejected from the tape library. VRSEL and DSTORE can be run
together in a single batch job step or separately.
4. Create volume movement reports. For details, see 4.1.1.4, “Creating
volume movement reports” on page 137.
5. Move the volume to the destination location according to the volume
movement reports.
6. Confirm the volume movement to DFSMSrmm. Issue the RMM CHANGEVOLUME
volser CONFIRMMOVE command. The current location of the volume
changes, and in-transit changes to no.

4.1.1.3 What is the location priority number?

If a tape volume contains multiple data sets, individual data sets might have
different locations. DFSMSrmm can pick only one of these locations, as one
physical volume cannot be placed in multiple locations at the same time.
DFSMSrmm therefore needs to pick the most preferable location among
them, and it does this using the location priority number; each location must
have a location priority number.

DFSMSrmm has four locations by default: SHELF, LOCAL, REMOTE, and
DISTANT. These correspond, for example, to a vault in the computer room,
one in an adjacent building, a secured room in the next city, and a secure
room in another state, respectively. Table 12 shows the location priority
number of each location.

Table 12. Location priority number

Location name   Location priority number
REMOTE          100
DISTANT         200
LOCAL           300
SHELF           5000

Lower numbers have higher priority. In this scheme, the larger the location
priority number, the closer the location is to the on-site shelf, and
DFSMSrmm prefers more distant locations. Assume a tape volume in SHELF
contains two data sets: when one data set is supposed to move to LOCAL
and the other to REMOTE, DFSMSrmm picks REMOTE as the location for
the move.

In addition to these default locations, you can define any named on-site or
vault location, and its corresponding location priority number, with the
LOCDEF statement in the EDGRMMxx PARMLIB member.
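
As an illustrative sketch (the location name, type, and priority are ours; see the LOCDEF description in the manual cited below for the full parameter list), a vault location definition might look like this:

LOCDEF LOCATION(VAULT) TYPE(STORAGE) PRIORITY(150)

With PRIORITY(150), a volume whose data sets are destined for both VAULT and LOCAL (300) would move to VAULT, while REMOTE (100) would still win over VAULT.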

For more details about the location priority number, see Section 6.2, “Defining
Storage Locations: LOCDEF” in the OS/390 DFSMSrmm Implementation and
Customization Guide, SC26-7334.

4.1.1.4 Creating volume movement reports


Librarians have to know which volumes are to be moved to other locations.
You can produce a volume movement report for this purpose. Any of the
following reporting tools can be used to create movement reports:
• EDGRPTD utility:
EDGRPTD is already installed if you order any DFSMS priced features
along with the OS/390 basic features. EDGRPTD creates volume
movement reports and inventory management reports. The format of the
reports is fixed, and cannot be tailored.
• EDGJRPT member in SAMPLIB:
This IBM-supplied sample job creates several types of volume movement
and inventory management reports. EDGJRPT is a reference job which
demonstrates how you can extract the data you need from the
DFSMSrmm control data set and how to present it, so you can build your
own job by modifying this sample.
• EDGJVLTM member in SAMPLIB:
This IBM-supplied sample job creates a report about volumes which are
moved to a distant location. Like EDGJRPT, it can easily be customized
by modifying the JCL, so you can build your own job from it. The only
difference between EDGJRPT and EDGJVLTM is the output format of
the reports.
• Roll-your-own (RYO) user-developed tool:
The format of the CDS extract file is disclosed as a part of the general use
programming interfaces (GUPIs), so the user can develop any format of
report from the extract file.

Refer to the manual OS/390 DFSMSrmm Reporting, SC26-7335, for more
information.

4.1.1.5 Overview of VTS

The IBM 3494-B18 Virtual Tape Server (VTS) is an automated tape library
which emulates 3490E tape drives and two types of tape media (cartridge
system tape, known as CST, and enhanced capacity cartridge system tape,
known as ECCST) to a connected host system. Emulated tape drive units are
referred to as logical (or virtual) drives, and emulated cartridges are referred
to as virtual volumes.

The VTS has a number of 3590 tape drives (B or E models) physically writing
to 3590 type J cartridges. These physical resources are referred to as
physical drives and volumes.

When a host system writes a data set on a logical volume, the data is written
to the tape volume cache (TVC), which is a disk cache in the VTS unit, and
then the VTS controller automatically stacks the logical volumes onto a
physical volume. A physical volume is therefore also referred to as a stacked
volume.

The existence of these physical resources is completely transparent to the
host systems. From a host system's point of view, it uses the VTS as if it were
3490E drives and cartridges in a 3494 tape library (see Figure 71). Thus, the
VTS can use today's high capacity tape cartridges more efficiently, while it
also provides high performance.

Figure 71. Overview of VTS

4.1.1.6 Overview of the export/import function

Since logical volumes are not real cartridges, they cannot be ejected from
the tape library as real tape cartridges can be.

To solve this problem, a VTS export/import function was provided as an
advanced function feature with DFSMS/MVS Version 1 Release 4. With this
function, it is possible to export selected logical volumes from a VTS, or to
import them into a VTS.

Export processing moves the logical volumes specified by the user to an
empty stacked volume which is selected by the VTS controller (Figure 72).

Note: This is not a copy operation; therefore, there are no duplicate logical
volume images in the VTS.

Export processing is done as follows:
1. Create an export list volume file on any of the logical volumes in a VTS.
This informs the VTS which logical volumes are to be exported. The
export list volume file contains the volume serial numbers of the logical
volumes to be exported.

2. Issue the LIBRARY EXPORT,volser operator command.
In this command, volser specifies the logical volume on which you wrote
the export list volume file. The system and the VTS start the export
processing after receiving the command.
3. After the export processing has completed, an operator can eject the
exported stacked volume from the 3494 through the library manager (LM)
console.

In a DFSMSrmm environment, an exported stacked volume is also referred
to as a container.

Figure 72. Overview of export processing

Import processing copies logical volumes from a stacked volume into a VTS.
After the import processing has completed, host systems can use the
imported logical volumes. Note that the stacked volume still holds valid
images of the logical volumes that have been imported.

Import processing is done as follows:
1. Insert a stacked volume which contains exported logical volumes into the
tape library, and set the import category for the volume using the LM
console.
2. Create an import list volume file on any of the logical volumes in a VTS.
This informs the VTS which logical volumes are to be imported. The list
contains the volume serial numbers of the logical volumes to be imported.

3. Issue the LIBRARY IMPORT,volser operator command.
In this command, volser specifies the volume serial number of the logical
volume, which contains the import list volume file. The system and the
VTS start the import processing after receiving the command.
4. When you no longer need the exported stacked volume, you can use the
stacked volume as an empty stacked cartridge in the VTS, or eject the
cartridge from the VTS. Both operations require the LM console.

For more details about VTS itself, or advanced functions such as software
and hardware requirements, detailed operational procedures, and
considerations, refer to the IBM Redbook: IBM Magstar Virtual Tape Server:
Planning, Implementing, and Monitoring, SG24-2229.

4.1.1.7 The VTS advanced function basic support

Before the VTS export/import function, stacked volumes were completely
transparent to connected host systems, and could not be ejected with data
on them. Location management of the stacked volumes was therefore not
necessary.

For systems with the export/import function, DFSMSrmm was enhanced to
store information about a stacked volume, as a part of the logical volume
information, when logical volumes are exported. This was introduced by the
following APARs:
• DFSMS/MVS V1.4.0: OW36349
• DFSMS/MVS V1.5.0: OW36350

In this book, we refer to this support as the VTS basic support.

Tracking of stacked volumes is now enabled

With these APARs, when logical volumes are exported, DFSMSrmm stores
the volume serial number of the container volume to which the logical
volumes have been exported into a new in-container field. You can check
where your logical volumes have been exported by issuing the following
command:
RMM LISTVOLUME volser STOR

You issue this command against the logical volumes you want to investigate,
and then check the in-container field in the command output.

4.1.1.8 What was the problem with the VTS basic support?
Here, we will discuss some inconveniences of the VTS basic support.

Location management is not based on stacked volumes
VRSEL and DSTORE processing still assign the required location or
destination on a logical volume basis, not on a container basis. However,
what is moved is the physical exported stacked volume, not the logical
volume images. Location management should therefore be done at the
stacked volume level rather than at the logical volume level.

EDGRPTD, enhanced for stacked volumes, is somewhat ambiguous

If you use movement reports created by EDGRPTD, location management
based on logical volumes still works, as the VTS basic support also
enhanced EDGRPTD to create a movement report for stacked volumes. To
achieve this, EDGRPTD with basic support checks the in-container field of
each logical volume. Furthermore, although this is unlikely to happen, if the
logical volumes in a container are supposed to be moved to different
locations, it is uncertain where to move the stacked volume. In this case,
EDGRPTD checks the location priority number to determine the destination
for the stacked volume.

However, if you want to use a reporting tool other than EDGRPTD to create a
movement report for container volumes, you need to modify it to take the
in-container field into account, just as EDGRPTD does.

4.1.2 How does DFSMSrmm Release 10 improve this function?

DFSMSrmm Release 10 now creates a volume record for a stacked volume
when export processing is executed. You can also create volume records for
stacked volumes by issuing the following command:
RMM ADDVOLUME/CHANGEVOLUME volser TYPE(STACKED)

DSTORE processing is also modified to assign a destination to stacked
volumes, but not to logical volumes, while VRSEL processing still assigns the
required location for each logical volume.

If the required locations of the logical volumes in the same stacked volume
differ, DSTORE uses the location priority number to determine the
destination for that stacked volume.

The status of a stacked volume is always MASTER. Its volume record
contains a stacked volume count, which shows the number of logical
volumes in it.

Table 13 shows a summary of this enhancement.

Table 13. Summary of the DFSMSrmm enhancement for stacked volumes

When does DFSMSrmm create a stacked volume record?
- VTS basic support (DFSMS/MVS 1.4.0 and 1.5.0): It never creates a
stacked volume record.
- DFSMS Release 10 VTS support enhancement: When logical volumes are
exported to an empty stacked volume; the record contains information
about the exported logical volumes.

A logical volume record
- VTS basic support: Contains information about a stacked volume if the
logical volume is exported.
- DFSMS Release 10: Contains no stacked volume information.

VRSEL processing
- VTS basic support: Assigns the required location to each logical volume.
- DFSMS Release 10: Assigns the required location to each logical volume.

DSTORE processing
- VTS basic support: Assigns the same destination as the required location
to each logical volume.
- DFSMS Release 10: Assigns the destination to the stacked volume, based
on the required locations of the exported logical volumes.

EDGRPTD volume movement report
- VTS basic support: Creates a movement report of the logical volumes
which have not been exported; also creates a movement report of the
stacked volumes which contain exported logical volumes.
- DFSMS Release 10: Does not create a movement report of the logical
volumes; only creates a report of the stacked volumes which contain
exported logical volumes.

EDGJRPT and EDGJVLTM volume movement reports
- VTS basic support: Create a movement report of the logical volumes by
default; must be tailored to create a report of the stacked volumes which
contain the exported logical volumes.
- DFSMS Release 10: Create a report of the stacked volumes which contain
exported logical volumes by default; do not create a movement report of
the logical volumes.

RYO volume movement report
- VTS basic support: Can create a movement report of the stacked
volumes, if you code logic to check the in-container field.
- DFSMS Release 10: Can create a movement report of exported stacked
volumes without coding any logic to check the in-container field.

4.1.3 Export/import processing scenarios
In this section, we describe practical export/import processing scenarios
using the VTS basic support under DFSMS/MVS V1.4.0 and DFSMS/MVS
V1.5.0, and using the VTS support enhancement in DFSMS Release 10.

Note that the VTS basic support is not a Release 10 DFSMSrmm
enhancement. However, we introduce a sample export/import procedure
using the VTS basic support here, as no other documentation was available.

For more information about export processing in a DFSMSrmm environment,
refer to Section 4.3.4, “DFSMSrmm Support for Export Processing”, in the
DFSMSrmm V1R5 Implementation and Planning Guide, SC26-4932-06.

We assume the following scenario:

• A data set USER.DATA.SET is created on a logical volume LGV001 in a
VTS whose location name is LIBVTS.
• The volume containing the data set is moved to vault location VAULT for 10
days. That is, the VRS is defined by issuing the following command:
RMM ADDVRS DSNAME('USER.DATA.SET') DAYS COUNT(10) LOCATION(VAULT)
• We use the export function to eject LGV001 from the VTS.
• After 10 days, the volume is returned to LIBVTS, and LGV001 is returned
to scratch status.
• We use the import function to insert LGV001 into the VTS.

4.1.3.1 Export procedure with the VTS basic support


Perform the instructions below to move LGV001 from location LIBVTS to
VAULT:
1. Run VRSEL and DSTORE processing.
During VRSEL processing, the data set USER.DATA.SET matches the
VRS definition, and VAULT is assigned as the required location for
LGV001.
On the next DSTORE processing, VAULT is assigned as the destination for
LGV001 according to its required location.
2. Determine the logical volumes to be exported.
Using any of the reporting tools described in 4.1.1.4, “Creating volume
movement reports” on page 137, create a movement report of the logical
volumes to determine which logical volumes should be exported, and
where to move them. The following screen is an example of a report
created by EDGRPTD:

REMOVABLE MEDIA MANAGER


VOLUMES TO BE MOVED FROM LOCATION LIBVTS TO LOCATION VAULT PAGE 1
------- -- -- ----- ---- -------- ----- -- -------- ------ DATE 2000/08/04
5647-A01 (C) Copyright IBM Corp. 1993,2000

RACK VOLUME BIN OWNER MEDIANAME T


------ ------ ------ -------- --------- -
LGV001 LGV001 1 USER1 Y
LGV002 LGV002 1 USER1 Y
LGV003 LGV003 1 USER1 Y
LGV004 LGV004 1 USER1 Y

TOTAL NUMBER OF ENTRIES LISTED = 4

Alternatively, you can use the following command:


RMM SEARCHVOLUME LOCATION(LIBVTS) DESTINATION(VAULT) OWNER(*) LIMIT(*)
Since the volumes appearing in this report are logical volumes, this also
means that the volumes should be exported.
3. Create an export list volume file according to the volume list in step 2.

EXPORT LIST 01
LGV001,VAULT

You can create this file by using the RMM SEARCHVOLUME command
described in the previous step, with the CLIST option added.
4. Run the export function.
Using the export list volume file created in step 3, run the export function.
The export processing moves LGV001 to an empty stacked volume which
the VTS controller selects. In our scenario, assume that the VTS picks the
volume STV001 as a target stacked volume for exporting.
5. Create a movement report.
Next, create a movement report for stacked volumes which contain
exported logical volumes. Only EDGRPTD can create a movement report
for the stacked volumes, as we described in “EDGRPTD, enhanced for
stacked volumes, is somewhat ambiguous” on page 142.

6. Eject STV001 from the 3494.
After export processing and the creation of the movement report have
completed, eject STV001 from the 3494 tape library, using the Manage
Export-Hold Volumes function on the LM console pull-down menu.
7. Move STV001 from LIBVTS to location VAULT.
Based on the movement report created in step 5, move STV001 from
location LIBVTS to VAULT.
8. Confirm the volume movement to DFSMSrmm.
When STV001 has been moved to location VAULT, confirm the movement
to DFSMSrmm by issuing the following command:
RMM CHANGEVOLUME LGV001 CONFIRMMOVE
Note that you should issue the CHANGEVOLUME command for the exported
logical volume LGV001, and not for STV001, as STV001 does not have
any volume record.

4.1.3.2 Import procedure with the VTS basic support


After 10 days have passed, perform the instructions below to move LGV001
from location VAULT to LIBVTS:
1. Run VRSEL and DSTORE processing.
By running VRSEL processing, DFSMSrmm recognizes that the data set
USER.DATA.SET has expired, and that LGV001 should be returned to its
home location. Therefore, LIBVTS is assigned as the required location for
LGV001.
On the next DSTORE processing, LIBVTS is assigned as the destination
for LGV001 according to its required location.
2. Create a movement report.
You should create a volume movement report for stacked volumes, so that
you can identify the stacked volumes to be moved for importing.
Only EDGRPTD, which is supplied with DFSMSrmm, can create a
movement report for stacked volumes. An RYO tool could be used if it can
create a report for stacked volumes by recognizing the in-container field of
the logical volumes. In our scenario, EDGRPTD would show that STV001
should be moved.
3. Move STV001 from VAULT to LIBVTS.
According to the report created in step 2, move STV001 from location
VAULT to LIBVTS.

4. Insert STV001 into the VTS.
Using the LM console panel, insert STV001 with the import category.
5. Determine the logical volumes to be imported.
To see the contents of STV001 and which logical volumes are to be
imported, issue the following command:
RMM SEARCHVOLUME CONTAINER(STV001) OWNER(*) LIMIT(*)
6. Create an import list volume file according to the list made in step 5.

IMPORT LIST 01
STV001,LGV001

7. Run the import function.
Using the import list volume file created in step 6, run the import function.
The import processing copies the logical volumes specified in the import
list volume file into LIBVTS and automatically confirms the movement of
LGV001.
8. Run expiration processing.
After import processing has completed, run expiration processing to return
LGV001 to scratch status.
9. Change the volume category code of STV001.
If you do not need the contents of STV001 any more, you can change the
volume category code of STV001, so that it can be reused as an empty
stacked volume or ejected from the 3494. Use the LM console panel and
the Manage Import Volume pull-down menu.

4.1.3.3 Export procedure with the VTS enhanced support


Perform the instructions below to move LGV001 from location LIBVTS to
VAULT:
1. Run VRSEL processing.
During VRSEL processing, data set USER.DATA.SET matches the VRS
defined above, and VAULT is assigned as required location for LGV001.
2. Determine the logical volumes to be exported.
To confirm which logical volumes are to be exported to VAULT location,
issue the following command:
RMM SEARCHVOLUME LOCATION(LIBVTS) REQUIRED(VAULT) TYPE(LOGICAL) OWNER(*)
LIMIT(*)

Unlike the situation with the VTS basic support, you cannot use any of the
reporting tools described in 4.1.1.4, “Creating volume movement reports”
on page 137 to create a movement report in this step, because
destinations have not yet been assigned to the logical volumes; only the
required location is assigned at this time.
3. Create an export list volume file according to the volume list in step 2.

EXPORT LIST 01
LGV001,VAULT

You can create this file by using the RMM SEARCHVOLUME command
described in step 2, with the CLIST option added.
4. Run the export function.
Using the export list volume file created in step 3, run the export function.
The export processing moves LGV001 to an empty stacked volume which
the VTS controller selects. In this scenario, assume that the VTS picks the
volume STV001 as a target stacked volume for exporting.
5. Run DSTORE processing.
In this scenario, DSTORE processing assigns VAULT as the destination
for STV001.
6. Create a movement report.
Create a movement report for the stacked volume. Now that the stacked
volume record exists and the destination is set on the stacked volume,
you can use any of the reporting tools described in 4.1.1.4, “Creating
volume movement reports” on page 137.
7. Eject STV001 from the 3494.
After export processing and the creation of the movement report have
completed, eject STV001 from the 3494 tape library, using the Manage
Export-Hold Volumes function on the LM console pull-down menu.
8. Move STV001 from LIBVTS to location VAULT.
According to the report created in step 6, move STV001 from location
LIBVTS to VAULT.
9. Confirm the volume movement to DFSMSrmm.
When STV001 has been moved to location VAULT, confirm the movement
to DFSMSrmm by issuing the following command:
RMM CHANGEVOLUME STV001 CONFIRMMOVE

Note that you should issue this command for the stacked volume STV001,
and not for LGV001.

4.1.3.4 Import procedure with the VTS enhanced support


After 10 days have passed, perform the instructions below to move LGV001
from location VAULT to LIBVTS:
1. Run VRSEL and DSTORE processing.
By running VRSEL, DFSMSrmm recognizes that the data set
USER.DATA.SET has expired and that LGV001 should be returned to its
home location. DFSMSrmm assigns LIBVTS as the required location to
both LGV001 and STV001.
On the next DSTORE processing, DFSMSrmm assigns LIBVTS as the
destination for STV001 according to its required location. A destination is
not assigned for LGV001.
2. Create a movement report.
You should create a volume movement report for the stacked volumes
which contain the logical volumes to be imported. Unlike the situation with
the VTS basic support, volume records now exist for the stacked volumes
and contain the destination, so you can use any of the reporting tools
described in 4.1.1.4, “Creating volume movement reports” on page 137. In
this scenario, the report of your choice would show that stacked volume
STV001 should be moved to LIBVTS.
3. Move STV001 from VAULT to LIBVTS.
Based on the report created in step 2, move STV001 from location
VAULT to LIBVTS.
4. Insert STV001 into the VTS.
Using the LM console panel, insert STV001 with the import category.
5. Confirm the movement of STV001.
To confirm the movement, you can use the following command:
RMM CHANGEVOLUME STV001 CONFIRMMOVE
Alternatively, you can use DSTORE; DSTORE processing can check
whether the moving stacked volumes are now library resident.
Note that this step is optional, as you can import logical volumes from a
stacked volume even if it is not shown as library resident by DFSMSrmm.

6. Create an import list volume file.
If you want to import all of the logical volumes in the stacked volume,
create an import list volume file as follows.

IMPORT LIST 01
STV001

Or, if you want to import logical volumes selectively, issue the following
command:
RMM SEARCHVOLUME CONTAINER(STV001) TYPE(LOGICAL) OWNER(*) LIMIT(*) CLIST
You do this to make a list of the logical volumes which are supposed to be
in the stacked volume STV001. Then, you edit the file as follows:

IMPORT LIST 01
STV001,LGV001

7. Run the import function.
Using the import list volume file created in step 6, run the import function.
The import processing copies the logical volumes specified in the import
list volume file into LIBVTS and automatically confirms the movement of
LGV001.
8. Run EXPROC processing.
After importing the volume, you run EXPROC processing and the volume
LGV001 returns to scratch status, based on the policy we assumed.
9. Change the volume category code of STV001.
If you no longer need the contents of STV001, you can change the volume
category code of STV001 to reuse it as an empty (scratch) stacked
volume. Use the LM console panel and the Manage Import Volume
pull-down menu.

4.1.4 How to migrate to the VTS enhanced support environment

In order to use the VTS enhanced support, all of the operating systems
sharing the DFSMSrmm CDS must be at OS/390 Version 2 Release 10. If
your installation has met this requirement, and you want to use this
enhancement, you need to take the following actions:

1. Check the current stacked volume support status.
To see the stacked volume status, issue the following command:
RMM LISTCONTROL
You will see a status of either NONE or DISABLED. NONE indicates that
export processing has never been executed before. DISABLED indicates
that export processing has been executed at least once; when the status
is DISABLED, DFSMSrmm runs at the VTS basic support level.
2. Back up your CDS.
3. Enable the VTS support enhancement by running the following job:

//ENABLE JOB MSGCLASS=X,NOTIFY=&SYSUID


//CREATE EXEC PGM=EDGUTIL,PARM='UPDATE'
//SYSPRINT DD SYSOUT=*
//MASTER DD DISP=SHR,DSN=RMM.CONTROL.DSET
//SYSIN DD *
CONTROL STACKEDVOLUME(YES)
/*

After the job has run, the stacked volume support status changes to
ENABLED if it was NONE in step 1, or to MIXED if it was DISABLED in
step 1.
4. Verify the current stacked volume support status.
To see the stacked volume status, issue the following command:
RMM LISTCONTROL
You will see a status of either ENABLED or MIXED. If the status is
ENABLED, DFSMSrmm is ready to exploit the VTS support
enhancement. If the status is MIXED, run the following job:

//ENABLE JOB MSGCLASS=X,NOTIFY=&SYSUID


//MEND EXEC PGM=EDGUTIL,PARM='MEND'
//SYSPRINT DD SYSOUT=*
//MASTER DD DISP=SHR,DSN=RMM.CONTROL.DSET
//SYSIN DD DUMMY

MIXED status means that the new function is only partially supported,
because not all stacked volume records might have been created. If you
leave the status as MIXED, the inventory management job will fail with the
error message EDG2315E, so you should not leave the status as MIXED.

Chapter 4. DFSMSrmm enhancements 151


EDGUTIL MEND will fully enable the stacked volume support by creating
stacked volume records based on the in-container field in the logical
volume records.
Note that MEND can be run only against an inactive DFSMSrmm CDS, so
you need to stop the DFSMSrmm address space or run this job against
the backup copy of the CDS.
We describe the MEND function in detail in 4.3, “Using 3-way audit
support” on page 158.
5. If the MEND job reports any error status, correct the errors manually, and
run the MEND job again, if necessary.
6. Repeat step 5 until RMM LISTCONTROL shows the stacked volume support
status as ENABLED.

4.1.5 Considerations
In this section, we describe some considerations on using the VTS enhanced
support.
• Do not enable the stacked volume support until all systems sharing the
DFSMSrmm CDS are Release 10.
• Though stacked volume status can be changed from NONE to DISABLED,
DISABLED to MIXED, MIXED to ENABLED, and NONE to ENABLED,
there is no way to fall back.
• While you can define the VRSs for a stacked volume, housekeeping jobs
will ignore them. The stacked volume record remains in MASTER status,
and the location of the stacked volume is determined by the required
location of the logical volumes in it.
• Stacked volume records are still not created automatically when a
stacked volume is inserted into the 3494, because the 3494 does not
notify the connected host systems about the insertion of stacked volumes.
• You may delete the stacked volume record when it is no longer used.
However, this is not mandatory. When export processing selects an
empty stacked volume whose volume record already exists in the CDS,
DFSMSrmm reuses the record.

4.2 Volume set management support


In this section, we describe volume set management support.



4.2.1 Background of this enhancement
Currently, DFSMSrmm manages tape volumes on an individual volume basis.
However, when volumes are dependent on each other, we might need to
manage the volumes on a volume group basis. For example, volumes
containing a multi-volume data set should be treated as a volume group.

In this book, we use the term volume set for a volume group. We describe the
practical cases in which this function is effective.

4.2.1.1 Converting from CA-1 to DFSMSrmm


CA-1 is a tape management software product provided by Computer
Associates. Many customers are converting from CA-1 to DFSMSrmm.

In a CA-1 environment, multiple spanned volumes were always retained and
moved based on the management policy of the first data set on the first
volume.

When we convert from the CA-1 environment to the DFSMSrmm
environment, only the VRS for the first file is created. If we do not define
additional VRSs for the following data sets manually, none of the volumes
other than the first volume are managed by the VRSs (Figure 73).

Figure 73. Converting from CA-1 to DFSMSrmm

4.2.1.2 Spilled data set in a specific mount request


If a volume mount request is specific (that is, the VOL=SER= parameter is
specified on the DD statement), the requested volumes are used.

If a data set requires more volumes than those specified (that is, it spills),
OS/390 automatically issues a non-specific volume mount request, to enable
the data set to be created successfully.

If you are using volume VRSs to manage these resources, the volumes
selected by a non-specific mount will not be managed by the VRSs (Figure 74),
even though the creation of the data set was successful.



Figure 74. Spilled data set in a specific mount request

4.2.1.3 Different management policies for multi-volume data sets


If you have multiple data sets on multiple volumes, and you are using the data
set VRSs to manage these resources, different management policies might
apply to a single volume.

In this case, the retention period of each volume is set to be the same as the
longest retention period of all the data sets in this volume. The location of
each volume is assigned according to the location priority number.

Because of this location management method, the volumes can be moved to
different locations, even if they contain the same data set (see Figure 75).

Figure 75. Different management policies for multi-volume data sets

4.2.2 How does DFSMSrmm Release 10 improve this function?


DFSMSrmm R10 now provides options to manage volumes as a set. Before
explaining the new management option, let us review how a volume set is
created.

A volume set (see Figure 76) is created when:


• A multi-volume data set is created:
DFSMSrmm automatically chains the volumes by recording the previous
and next volume for each volume. The multi-volume set may contain a
single data set or multiple data sets.



• You issue the RMM CHANGEVOLUME PREVVOL command:
For example, if you issue:
RMM CHANGEVOLUME DEF PREVVOL(ABC)
DFSMSrmm chains volume DEF after volume ABC.
You can check the volume chaining status by issuing the command:
RMM LISTVOLUME volser STATS
Then check the Previous volume and Next volume fields. Both of these
fields are blank for a single (non-chained) volume.

Vol1: Prev: (none)  Next: Vol2
Vol2: Prev: Vol1    Next: Vol3
Vol3: Prev: Vol2    Next: Vol4
Vol4: Prev: Vol3    Next: (none)

Figure 76. A Volume set

DFSMSrmm created volume sets in this way even before Release 10.


DFSMSrmm Release 10 has introduced the following new OPTION
parameters in the EDGRMMxx PARMLIB to select the management policy for
volume retention and volume movement:
• RETAINBY(VOLUME|SET)
• MOVEBY(VOLUME|SET)

RETAINBY(SET) specifies that you retain volumes by set. When you retain by
set, if any volume in a set is retained by a vital record specification, all
volumes in the set are retained as vital records. DFSMSrmm uses the highest
retention date of all volumes in the set as the retention date for all volumes
retained as vital records in a set. If no volume in a set is retained by a vital
record specification, DFSMSrmm performs expiration processing by set.
DFSMSrmm does not expire volumes in a set if at least one volume in a set is
still not ready to expire because it has not reached its expiration date and you
have not specified that you want the expiration date ignored.

For volumes retained for this reason, DFSMSrmm sets the new set
retained flag to YES. You can check this flag by using the RMM
LISTVOLUME command.

MOVEBY(SET) specifies that you move volumes by set. When you move by set,
all of the volumes in a set are moved to, and kept in, the same location,
selected by the VRS specification or location priority for the volume.



You can choose a different management policy for volume movement and
volume retention. For example, volumes can be retained as a set, but moved
individually.
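
As a minimal sketch, the EDGRMMxx OPTION statement fragment below
(your other OPTION operands remain as they are) retains chained volumes
as a set but moves each volume individually:

OPTION RETAINBY(SET) MOVEBY(VOLUME)

The VOLUME settings are the defaults, so omitting both operands keeps the
pre-Release 10 behavior.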

Specify SET if you want to manage the chained volumes as a set (see
Figure 77).

Vol1 - Vol2 - Vol3 - Vol4 chained as in Figure 76; the whole chain is
managed as a single set and kept for 8 days at REMOTE.
Figure 77. Manage as a set

Specify VOLUME if you want to manage the volumes individually,
regardless of the chaining status (see Figure 78). Before DFSMSrmm
Release 10, DFSMSrmm performed volume retention and movement on an
individual volume basis, so specifying VOLUME works the same way as before
DFSMSrmm Release 10.

Vol1 - Vol2 - Vol3 - Vol4 remain chained, but each volume is managed
individually: Vol1 for 3 days at SHELF, Vol2 for 5 days at REMOTE,
Vol3 for 8 days at SHELF, and Vol4 for 3 days at LOCAL.
Figure 78. Manage as a volume

4.2.3 Considerations
In this section, we describe some considerations on using volume set
management.

4.2.3.1 Automatic maintenance of volume chaining status


The chaining status of the volumes in a set is maintained by DFSMSrmm
automatically. Consider the following examples.



• Case 1
Assume that RETAINBY(VOLUME) is specified in the OPTION statement, in the
EDGRMMxx PARMLIB member, and volumes are chained as shown in
Figure 76 on page 155. If Vol3 returns to a scratch status, the volume
chaining status becomes as shown in Figure 79.

Vol1: Prev: (none)  Next: Vol2
Vol2: Prev: Vol1    Next: Vol4
Vol4: Prev: Vol2    Next: (none)
Vol3: Prev: (none)  Next: (none)   (returned to scratch)
Figure 79. Automatic chain status maintenance of Case 1

As you can see, Vol1, Vol2, and Vol4 will remain as a set, and Vol3 will
become an independent volume.
• Case 2
The volumes are chained as shown in Figure 76 on page 155, and a user
creates a data set using the following DD card:
//OUT1 DD DSN=DS.NAME,UNIT=TAPE,DISP=NEW,VOL=SER=(VOL1,VOL2)
If the data set is too large and requires more than the two volumes
specified, a non-specific volume mount request is issued. In this case, the
volume chaining status becomes as shown in Figure 80.

Vol1: Prev: (none)  Next: Vol2    (user request)
Vol2: Prev: Vol1    Next: Vol5    (user request)
Vol5: Prev: Vol2    Next: (none)  (selected by scratch request)
Vol3: Prev: (none)  Next: Vol4
Vol4: Prev: Vol3    Next: (none)

Figure 80. Automatic chain status maintenance of Case 2

This diagram assumes that Vol5 is used to satisfy the non-specific volume
mount request. As you can see, the volume set has been broken into two
volume sets, and DFSMSrmm chains Vol5 after Vol2.

4.2.3.2 Coexistence support with supported DFSMS/MVS releases


Only DFSMSrmm Release 10 systems can use this function. If you are
sharing the DFSMSrmm CDS with lower level systems, and you want to
manage volumes as a set, your inventory management job must be run on
the Release 10 system.



4.2.3.3 Other considerations
Below, we describe some other considerations:
• When you use the RMM CHANGEVOLUME PREVVOL command to create a volume
set, you cannot chain a volume that is in scratch status.
• Do not assign a required location to a volume in a set.
You can assign a required location to a volume manually by the RMM
CHANGEVOLUME volser LOCATION(location_name) command. But the required
location, which is assigned manually, is not used for the volume set
management, because the volume set management is performed in
VRSEL processing, and this processing assigns the same required
location to all of the volumes. The DSTORE processing still assigns the
destination to each volume according to the required location of the
volume.
Note: There is one case in which VRSEL uses the manually assigned
locations. If you have a VRS with LOCATION(CURRENT), the VRSEL
processing function can use the manually assigned location, and can
therefore move the complete set of volumes based on the manual setting of
the location of a single volume, depending on location priority numbers and
the VRS.
• Do not chain different types of volumes.
When you use the RMM CHANGEVOLUME PREVVOL command, all types of volumes
(logical, physical, and stacked) can be chained to each other.
However, we do not recommend that you mix these types of volumes in the
same volume chain, because it is not a realistic operation from the point of
view of either location management or retention management.

4.3 Using 3-way audit support


In this section, we describe enhancements to the 3-way audit support.



4.3.1 Background of this enhancement
In a system-managed tape library environment, tape volume information is
recorded in the following three places:
• Library manager database (LM DB):
The library manager (LM) is a control workstation of the IBM 3494 or 3495
automated tape library or 3495-M10 manual tape library. LM DB records
information about all of the volumes in the tape library. A four-digit
hexadecimal category code, which indicates the media type and volume status, is
assigned to all of the tape volumes in the tape library. In an OS/390
environment, the meaning of each category code is defined in the
DEVSUPxx PARMLIB member.
• Tape configuration database (TCDB):
The TCDB is an ICF catalog marked as a volume catalog (VOLCAT). All
volumes, which are in the connected tape library, are recorded in the
TCDB, unless rejected by the tape management subsystem. The TCDB
records the residence (library name or shelf location) and use-attribute
(private or scratch) of each volume. In addition, media type, recording
technology (number of tracks), storage group, and other information is
recorded in a TCDB.
To check the information recorded in the TCDB, issue the IDCAMS
command:
LISTCAT VOLENT(Vvolser) CAT(volcat name) ALL
A TCDB is always required when you control tape libraries by DFSMS,
regardless of the existence of DFSMSrmm.
Another way to control tape libraries is to use basic tape library support
(BTLS), which is a priced feature to control IBM tape libraries. In a
BTLS environment, the TCDB is not used.
• DFSMSrmm control data set (CDS):
This is required for DFSMSrmm use. All volumes, not only
tape-library-resident volumes but also tape volumes outside tape libraries,
are recorded in the DFSMSrmm CDS, with more detailed information than
in the TCDB. Tape data set information is also recorded.
To browse the volume information recorded in DFSMSrmm CDS, issue the
command:
RMM LISTVOLUME volser ALL

All of the tape volume information in the LM DB, TCDB and DFSMSrmm CDS
should be consistent (see Figure 81).



RMM CDS: VOL001 - Status: MASTER, Location: LIB1, Media Type: HPCT,
         Storage Group: SGLIB1, ...
TCDB:    VOL001 - Use Attribute: PRIVATE, Library: LIB1,
         Media Type: MEDIA3, Storage Group: SGLIB1, ...
LM DB:   VOL001 in LIB1 - Category Code: 000F
Figure 81. Tape volume information in an SMS managed tape library

Their consistency may be monitored by running regular batch jobs, or you
may need to verify them if you suspect that errors exist. If any errors are
found, you should correct them.

DFSMSrmm provides the EDGUTIL utility for CDS maintenance purposes.
Also, you can use the ISMF AUDIT function to compare the TCDB with the
LM DB.

4.3.1.1 Current CDS maintenance method and the limitations


Before DFSMSrmm Release 10, EDGUTIL could perform verification and
error correction, using the parameters described below:
• PARM=’VERIFY’
Used to verify either the active or inactive DFSMSrmm CDS. It only checks
the consistency between the records in the DFSMSrmm CDS. For
example, if a volume record has an owner name, the existence of the
owner name is verified.
Volume information in the TCDB or LM DB is not checked.
• PARM=’MEND’
The same verification as PARM=’VERIFY’ is performed and any error
status found is corrected. Only the inconsistencies between the records in
the DFSMSrmm CDS are corrected. A MEND job can be run only against
an inactive CDS, which is normally a copy of the production CDS. It cannot
be run against an active CDS. Always back up the DFSMSrmm CDS
before running the MEND job.



Volume information in the TCDB or LM DB is not checked.
• PARM=’VERIFY(VOLCAT)’
Performs verification between the DFSMSrmm CDS and TCDB. If any
inconsistencies are found, you can:
- Issue the IDCAMS CREATE/ALTER/DELETE VOLENT command to correct the
TCDB information.
- Issue the RMM ADDVOLUME/CHANGEVOLUME/DELETEVOLUME command to
correct the DFSMSrmm CDS information.
Volume information in the LM DB is not checked.

EDGUTIL did not audit against the LM DB. Also, there was no way to
cross-check the LM DB, TCDB, and DFSMSrmm CDS to see whether they
were inconsistent.

4.3.2 How does DFSMSrmm improve this function?


In DFSMSrmm Release 10, the EDGUTIL VERIFY and MEND processing
has been enhanced to perform a 3-way audit of the DFSMSrmm CDS, TCDB,
and LM DB.

The following new parameters are provided for EDGUTIL:


• VERIFY(SMSTAPE)
DFSMSrmm checks the synchronization of the DFSMSrmm CDS with the
TCDB, and additionally checks the LM DB if needed.
DFSMSrmm scans both the DFSMSrmm CDS and the TCDB sequentially
to check that the volume records exist in both. If any SMS-managed
volume is found either in DFSMSrmm CDS or TCDB, DFSMSrmm also
checks against the LM DB.
DFSMSrmm also checks stacked volumes against the LM DB. The
information regarding stacked volumes is not in the TCDB, as these are not
system-managed volumes. However, DFSMSrmm knows they should be in
the LM DB.
• MEND(SMSTAPE)
The same verification as for PARM=’VERIFY(SMSTAPE)’ is performed,
and any error status found is corrected, if possible.
This processing assumes that the volume status and TCDB related
information known to DFSMSrmm is correct. The DFSMSrmm CDS is
used as the master, for status and storage group information, to correct
the TCDB and LM DB.



Unlike the MEND job, which must be run against an inactive CDS, you can
run MEND(SMSTAPE) against either the active or the inactive CDS.
• MEND
MEND corrects the information in the DFSMSrmm CDS, but it has also
been enhanced to perform a 3-way audit so that it can correct the
DFSMSrmm CDS information from TCDB or LM DB.
Optionally, MEND processing creates a stacked volume record using the
in-container volume serial number of the logical volume record, in order to
enable the VTS enhanced support, as we described in 4.1, “Virtual Tape
Server (VTS) support enhancement” on page 135. MEND also performs a
3-way audit to check the existence, residency, volume type, and other
information about stacked and logical volumes while creating the stacked
volume records.
A MEND job must be run against an inactive CDS, which is normally a
copy of the production CDS. You cannot use MEND against an active
CDS.
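
A minimal sketch of a 3-way audit job follows, modeled on the EDGUTIL
jobs shown earlier in this chapter and assuming the same CDS name
(RMM.CONTROL.DSET):

//VERIFY   JOB MSGCLASS=X,NOTIFY=&SYSUID
//* Cross-check the DFSMSrmm CDS, the TCDB and the LM DB
//AUDIT    EXEC PGM=EDGUTIL,PARM='VERIFY(SMSTAPE)'
//SYSPRINT DD SYSOUT=*
//MASTER   DD DISP=SHR,DSN=RMM.CONTROL.DSET
//SYSIN    DD DUMMY

Any inconsistencies found are reported by messages; nothing is changed
until you run MEND or MEND(SMSTAPE).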

4.3.3 CDS maintenance scenario


In this section, we describe a maintenance scenario under an OS/390 Version
2 Release 10 environment.

4.3.3.1 CDS health check for regular processing


In DFSMSrmm Release 10, we have the following three verification methods:
• EDGUTIL,PARM='VERIFY'
• EDGUTIL,PARM='VERIFY(VOLCAT)'
• EDGUTIL,PARM='VERIFY(SMSTAPE)'

We recommend that you perform both VERIFY and VERIFY(SMSTAPE)
regularly, so that you can detect as many error conditions or inconsistencies
as possible. Since these two methods verify the CDS in different ways, it is
possible that VERIFY may not detect an error while VERIFY(SMSTAPE)
would, or vice versa.

You do not have to run VERIFY(VOLCAT), as all of the inspection done by
this option is included in VERIFY(SMSTAPE).

Any error conditions found should be corrected, as documented in the
following sections.



4.3.3.2 Recovery of the errors found by VERIFY
If EDGUTIL with VERIFY detects errors, there may be inconsistencies
between records in the DFSMSrmm CDS. These kinds of errors should be
rare, as long as you use DFSMSrmm correctly. Possible situations which
could cause this kind of problem would be:
• The conversion process from another vendor tape management
subsystem
• A system failure

Because these kinds of errors are rare, we recommend that you obtain
guidance from your IBM service representative if such errors are found, and
you are unsure how to fix them.

In the case of simple errors, you can correct these by yourself. To correct the
error status, follow the instructions below:
1. Check the JOBLOG of the VERIFY job carefully. It will tell you what kind of
inconsistencies exist.
2. Determine if a DFSMSrmm command can correct the errors. If possible,
issue the command.
For example, if an owner of a volume is set, but the owner record does not
exist, you can fix it by issuing the command:
RMM ADDOWNER owner_name DEPARTMENT(dept_name)
One good method is to run a MEND job against a copy of the production
CDS. The JOBLOG will give you some useful information as to how the
CDS can be mended. You can use this information to determine which
DFSMSrmm command to issue manually.
3. Run the VERIFY job again to check that the error status has been
corrected. If the error still exists, repeat the procedure.

Though MEND can be used to detect and correct these kinds of errors
automatically, we recommend that you correct them manually. This is
because, although the MEND job will automatically correct the CDS, its
decision may be incorrect. The decision as to how to correct the error status
should be made manually by a user or an administrator.

Also, you should determine why the discrepancy occurred, so that it can be
prevented from occurring in the future.

You should use the MEND function only when enabling stacked volume
support or making container information consistent. Otherwise, if unexpected
errors are found, we recommend that you obtain guidance from your IBM
service representative.

4.3.3.3 Recovery of the errors found by VERIFY(SMSTAPE)


If errors are found by EDGUTIL with the VERIFY(SMSTAPE) parameter, then
an inconsistency exists among the volume records in the DFSMSrmm CDS,
TCDB and LM DB. The first thing you need to do is to make the DFSMSrmm
CDS reliable, so that you can use MEND(SMSTAPE) later.

To correct the error status, perform the instructions below:


1. Inspect the JOBLOG of the VERIFY(SMSTAPE) job carefully.
The JOBLOG will tell you what kind of inconsistencies exist.
2. Determine the location of information to be corrected.
If the DFSMSrmm CDS information is correct, do nothing at this time. If
the DFSMSrmm CDS information is incorrect, use RMM commands to fix it.
3. Run the VERIFY(SMSTAPE) job again.
Check whether all of the errors in the DFSMSrmm CDS have been fixed. If
errors remain, go back to step 1.

At this point, the DFSMSrmm CDS information should be reliable, and you
are ready to use MEND(SMSTAPE), which fixes the TCDB and LM DB
automatically, based on the DFSMSrmm CDS information.

We recommend that you run MEND(SMSTAPE) against the active
DFSMSrmm CDS. This is because the TCDB and LM DB are active, and their
information is being updated in parallel with the DFSMSrmm CDS, regardless
of whether the DFSMSrmm CDS being used is active or not. Furthermore,
while we can back up and restore the DFSMSrmm CDS and TCDB, we cannot
back up or restore the LM DB, so any changes made to the LM DB cannot be
backed out.

4.3.4 How MEND/MEND(SMSTAPE) works


If you find any inconsistencies between your DFSMSrmm CDS, TCDB, or LM
DB, we recommend that you check the error status and correct them
manually. If you decide to use MEND(SMSTAPE), its actions should be
understood; these are detailed in this section.

Table 14 shows the possible error conditions and the correction processing of
MEND(SMSTAPE) for each error. In this table “,” is used as an AND operator,
and the “/” is used as an OR operator.



Table 14. MEND/MEND(SMSTAPE) processing

 # | RMM CDS                             | TCDB                       | LM DB                  | Action                              | MEND
 1 | not found                           | found                      | not checked            | EDG6834I - missing from RMM         | no mend
 2 | found, loc=lib                      | found                      | found                  | EDG6516I - missing from TCDB        | no mend
 3 | found, loc=lib                      | not found                  | not found              | EDG6828I - missing from LM          | no mend
 4 | master/user/init                    | not checked                | not checked            | EDG6823I - status mismatch          | CUA to private
 5 | scratch                             | not checked                | not checked            | EDG6823I - status mismatch          | CUA to scratch
 6 | master/user/init                    | not private                | not private            | EDG6823I - status mismatch          | CUA to private
 7 | scratch                             | any not scratch, not error | not scratch, not error | EDG6823I - status mismatch          | CUA to scratch
 8 | any                                 | error                      | error                  | EDG6824I - volume in error category | CUA to rmm status
 9 | loctype=atl/mtl, not intransit      | not found                  | not found              | EDG6511I - lib name inconsistent    | EDG6829I - set intransit
10 | loc not lib                         | found in lib               | found in lib           | EDG6511I - lib name inconsistent    | EDG6830I - set loc=lib
11 | loc not lib                         | not found in lib           | not found in lib       | EDG6828I - lib name inconsistent    | no mend
12 | VTS, type not logical               | logical                    | logical                | EDG6807I - type not consistent      | EDG6808I - set logical
13 | VTS, type not stacked               | stacked                    | stacked                | EDG6807I - type not consistent      | EDG6808I - set stacked
14 | any                                 | stacked                    | stacked                | EDG6831I - stacked has TCDB entry   | no mend
15 | not VTS, type not physical          | physical                   | physical               | EDG6807I - type not consistent      | EDG6808I - set physical
16 | atl/mtl, not intransit, not stacked | found/not found            | found/not found        | EDG6516I - missing from TCDB        | EDG6829I - set intransit (if not found in LM)
17 | wrong media type                    | media type                 | media type             | EDG6822I - media type mismatch      | EDG6820I - set media type
18 | wrong media type                    | media type                 | media type             | EDG6822I - media type mismatch      | EDG6821I - set media type
19 | wrong media type                    | not found                  | not found              | EDG6822I - media type mismatch      | EDG6820I - set media type
20 | wrong media type                    | wrong media type           | wrong media type       | EDG6822I - media type mismatch      | no mend
21 | private, wrong SG                   | not checked                | not checked            | EDG6827I - SG mismatch              | EDG6825I - CUA to rmm SG / EDG6826I - set rmm SG
22 | ATL/MTL                             | found in lib               | found in lib           | EDG6511I - LM and TCDB inconsistent | no mend



Refer to the manual OS/390 MVS System Messages, Vol. 2, GC28-1785, for
the full descriptions of the messages appearing in the table.

4.3.5 Considerations
In this section, we describe some considerations on maintaining the
DFSMSrmm CDS.
• In a multi-system environment, always run EDGUTIL, to verify your CDS,
on the system with the highest level of software available. This ensures
that EDGUTIL uses the latest control data set record format information to
verify the contents of the CDS.
• Note that VERIFY(SMSTAPE) is meant to be the replacement for
VERIFY(VOLCAT). We recommend that you use VERIFY(SMSTAPE)
instead of VERIFY(VOLCAT).

4.4 Pre-ACS interface/ACS support


In this section, we describe the enhancement made to support the pre-ACS
interface and ACS routine invocation for volume pooling and VRS
management.

4.4.1 Background of this enhancement


Before we describe the background of this enhancement, let us first review
the way we currently manage scratch pooling and VRS, so that you can better
understand the enhancement.

4.4.1.1 Understanding scratch pooling and VRS management


In an OS/390 environment, regardless of the use of DFSMSrmm, all tape
volumes are categorized into two types:
• System-managed tape volumes:
These are tape volumes that reside in IBM 3494 or 3495 automated tape
libraries or 3495-M10 manual tape libraries, which are managed by
DFSMS.
• Non-system-managed tape volumes:
These are tape volumes other than system-managed tape volumes, such
as tape volumes used on non-tape-library devices, tape volumes in
BTLS-managed tape libraries, and tape volumes in non-IBM tape libraries.



In a DFSMSrmm environment, the management of these two types of tape
volumes is totally different, so we will first provide an overview of each
method of management.

4.4.1.2 Managing pooling and VRS for system-managed tapes


Regardless of the use of DFSMSrmm, system-managed tape libraries are
grouped by storage group (SG). One SG consists of one or more tape
libraries. One tape library can be a member of one or more SGs. VTSs and
native tape libraries cannot be in the same SG.

For a system-managed tape library, each connected host system can have a
scratch pool for each media type.

When a data set is newly created on a system-managed tape volume, a data
class (DC), storage class (SC), management class (MC) and storage group
(SG) are assigned through SMS automatic class selection (ACS) routines.

The SG is used to select a group of tape libraries, as described above. A tape
library and a tape drive are selected within the SG assigned by the system. If
the request was a non-specific mount request, which means no VOL=SER=
parameter was specified in the DD statement, the tape volume is selected by
the library manager, according to the media type requested. The media type
can be specified in the DC.

In the case of a system-managed DASD data set, the DC, SC, MC and SG
are stored in the data set catalog entry. But in the case of the data set created
on a system-managed tape volume, these class names are not recorded in
the data set catalog entry. This is because the data set catalog entry for a
tape data set is the same format as a non-system-managed DASD data set
entry. Only a SG name is recorded in TCDB by OAM.

In a DFSMSrmm environment, DFSMSrmm keeps track of the MC name
assigned to each tape data set, in the DFSMSrmm CDS. The MC names can
be used by a housekeeping job to assign a VRS. For example, if you want to
retain a data set whose MC is MCATL1 for 10 days, you can define the VRS
by issuing the command:
RMM ADDVRS DSNAME('MCATL1') DAYS COUNT(10)

You can check the assigned MC name of each data set by issuing the
command:
RMM LISTDATASET dsname VOLUME(volser)



In this way, the SG is used to select a group of tape libraries, and the MC is
used to assign a VRS to the data set (Figure 82).

Figure 82. System-managed tape environment

4.4.1.3 Managing pooling and VRS for non-system-managed tapes


For non-system-managed tape volumes, we can have full support for multiple
scratch pools, because the selection of the tape volume is done manually by
an operator. Operators should select the correct tape pool by checking the
job name, data set name, or any other information available.

In the DFSMSrmm environment, we can create this non-system-managed
multiple scratch pool environment by using the EDGUX100 user exit.

You can define multiple pools by the VLPOOL statement in the EDGRMMxx
PARMLIB member.

If a user issues a non-specific volume mount request, EDGUX100 gets
control and selects a pool, before the write-to-operator (WTO) mount message
is issued. Then DFSMSrmm modifies the WTO mount message and the
display on the allocated drive, based on the MNTMSG statements in the
EDGRMMxx PARMLIB member.

MNTMSG is used to modify the WTO mount message and drive display to
include the selected pool name, pool prefix or rack number so that the
operators can easily recognize the pool to select.

After the volume is mounted, DFSMSrmm checks that the volume was
selected from the correct pool.
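
A minimal EDGRMMxx sketch of this setup follows; the pool prefix, pool
name, and message offset are examples only, so check them against your
own naming conventions and mount messages:

VLPOOL PREFIX(S*) TYPE(S) NAME(SCRTCH00) DESCRIPTION('SCRATCH POOL')
MNTMSG ID(IEF233A) RACKPOS(25)

The VLPOOL statement defines the pool; the MNTMSG statement tells
DFSMSrmm where to place the pool information in the IEF233A mount
message, so the operator can see which pool to pick from.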



When a data set is created on a non-SMS managed tape volume, only a data
class (DC) is assigned to the data set during ACS processing. The DC name
is not recorded in the data set catalog entry, but DFSMSrmm records it.

EDGUX100 is called at OPEN, CLOSE, and EOV time, and can assign a
VRS management value to the data set, which the housekeeping job will use
to assign a VRS.

For example, if you want to retain a data set for 10 days, code EDGUX100 to
assign the MCATL1 VRS management value to the data set, and define the
VRS by issuing the command:
RMM ADDVRS DSNAME('MCATL1') DAYS COUNT(10)

You can check the VRS management value assigned to a tape data set by
issuing the command:
RMM LISTDATASET dsname VOLUME(volser)

In this way, EDGUX100 can be used to modify the WTO mount message and
drive display so that the operator can select the correct pool easily. Also, it is
used to assign the VRS management value to each tape data set (Figure 83).

Figure 83. Non-system-managed tape environment

4.4.1.4 What is the problem?


Now that we have given you an overview of tape management, in this
section we describe why the enhancements were introduced.



Information through the ACS variables may not be sufficient
When you have system-managed volumes along with non-system-managed
volumes, you need to carefully design SC ACS routines, based on your
system administration policies. However, the environmental information
available through ACS read-only variables may not provide enough
information to implement the policy into ACS routines. The following APARs
have addressed these problems:
• OW36342 and OW36351 for DFSMS/MVS V1R4.
• OW36343 and OW36352 for DFSMS/MVS V1R5.

The intention of the enhancement is to provide more flexibility for your system
administration policy, by including factors not currently available in the ACS
variables. The enhancement has introduced a new exit, the
IGDACSXT exit, which is also known as the pre-ACS exit, as it gets control
before the ACS routines. By using this exit, you can set your own values in
the new &MSDEST, &MSPARM, &MSPOLICY and &MSPOOL ACS read-only
variables. Your ACS routines can then refer to these variables.

However, if you want to use this exit and set values for these new variables,
this could be very complex to achieve, depending on your tape management
policy.

Non-system-managed pooling policy through EDGUX100


For a non-SMS managed tape volume, the scratch pooling decisions and
policy assignment rules must be coded in the EDGUX100 user exit, and you
need to write it in assembler language. This exit could complicate the
implementation of your tape management policy, and make its maintenance
more complex.

VRS assignment for non-system-managed volumes needs EDGUX100
If you want to assign a VRS policy automatically to a tape data set on a
non-system-managed volume, you need to code EDGUX100, while tape
data sets on system-managed volumes can get this through the MC
ACS routine. When you have both system-managed tape volumes and
non-system-managed volumes, and want to consolidate the policy
assignment in one place, the EDGUX100 exit is the only means available.

4.4.2 How does DFSMSrmm Release 10 improve this function?


For a data set created on a non-system-managed tape volume, DFSMSrmm
Release 10 now allows you to use the SMS ACS routines, where you can use
any of the existing ACS input variables as a base for assignment. In this
invocation of the ACS routines, the MC name is used as a VRS management
value, and the SG name is used as a pool name.



DFSMSrmm uses the following techniques to enable this function.

4.4.2.1 EDGUX100 to set &MSPOOL/&MSPOLICY


In a DFSMSrmm Release 10 environment, you can use EDGUX100 to set
values to &MSPOLICY and/or &MSPOOL, instead of having IGDACSXT set
them.

EDGUX100 gets control right after IGDACSXT returns to the system (if it
exists), but before the ACS routines for new data set allocation are called.
Note that the values supplied through EDGUX100 are used only if IGDACSXT
does not set them.

MC and SG ACS invocation for pool selection


DFSMSrmm Release 10 now sets &ACSENVIR to “RMMPOOL” and calls the
MC and SG ACS routines before issuing the mount message, when a
non-system-managed tape drive is allocated for a non-specific volume mount
request.

When &STORGRP has a non-null SG after the invocation of the MC and SG
ACS routines, DFSMSrmm uses the name of the SG as the pool name, and
DFSMSrmm does NOT call EDGUX100.

When &STORGRP has a null SG, DFSMSrmm calls EDGUX100 to get a pool
name assigned.

DFSMSrmm also calls the MC and SG ACS routines during open processing,
so that it can validate non-specific volume requests with the name of the SG.

MC ACS invocation for VRS management value assignment


DFSMSrmm Release 10 now sets &ACSENVIR to “RMMVRS” and calls MC
ACS routines during open processing for VRS assignment.

When &MGMTCLAS has a non-null MC after the invocation of the MC ACS
routine, DFSMSrmm stores the name of the MC and does NOT call
EDGUX100. Figure 84 shows the non-SMS managed tape environment of
Release 10 systems.



Figure 84. Non-system-managed environment of Release 10

4.4.3 Migrating from EDGUX100 management to ACS management


In this section, we describe how to migrate from EDGUX100 management to
ACS management. The migration scenario can be considered as the
following three stages:
• Stage 1. Transparent migration
• Stage 2. Exploiting the new function
• Stage 3. Disconnecting the old method (optional)

We describe each step below:

4.4.3.1 Stage 1. Transparent migration


The first stage describes how to use the existing EDGUX100 routine when
you are upgrading to OS/390 Version 2 Release 10. To do so, follow the
instructions below:
1. Update your existing ACS routines.



Before you upgrade to OS/390 Version 2 Release 10, you MUST put the
following ACS statements right above the existing MC and SG ACS logic in
your installation, to make sure that your current system administration
policy implemented through the ACS routines is not affected by this new
support:

WHEN (&ACSENVIR = 'RMMPOOL' | &ACSENVIR = 'RMMVRS')
  DO
    EXIT
  END

Otherwise, you could get unexpected errors.


For example, assume your existing SG ACS routine assigns an SG
unconditionally, without referring to any ACS variables, and you also have
EDGUX100 assign a pool to a tape data set. This works as expected on
pre-Release 10 systems, but will not work under OS/390 Version 2
Release 10. The reason is that DFSMSrmm calls the MC and SG routines
and gets a non-null SG value, which is not intended for pool
management use, so it does not call EDGUX100.
2. Upgrade your operating system to OS/390 Version 2 Release 10.
Since the above logic prevents the ACS routines from assigning a non-null
value when DFSMSrmm calls them, DFSMSrmm calls the existing
EDGUX100 to assign a pool name and VRS management value.

4.4.3.2 Stage 2. Exploiting the new function


The second stage describes how to exploit the new function. In this stage,
you design ACS routines for pool selection and VRS assignment.
1. Define pools by using the VLPOOL statement with the NAME parameter in
the EDGRMMxx PARMLIB member. In this support, the SG name is used
as a pool name.
2. Define pseudo SGs through ISMF.
You are going to use an SG as a pool name. So you need to define pseudo
SGs which have the same names as the pool name you defined in step 1.
In order to achieve this, you need to define a TAPE type storage group,
and define a non-existent tape library in it. This does not have to be a real
tape library.
3. Define VRSs by using the RMM ADDVRS DSNAME(mc_name) command.
You use the name of MC as a VRS assignment value. In this support, the
MC name is used as a VRS name.



4. Define pseudo MCs through ISMF.
You need to define MCs using the same name as the VRS name you
defined in step 3. You do not have to define any parameters appearing in
the ISMF definition panels, as these are for system-managed DASD data
sets only. Remember that you use the name of MCs as VRS assignment
values.
5. Modify the MC and SG ACS routines for RMMPOOL.
When the MC ACS routine gets control under RMMPOOL, &MGMTCLAS is
not used to assign a VRS assignment value. Therefore, you can set the
name of any existing MC in the SMS configuration, and then refer to
&MGMTCLAS in the SG ACS routine.
6. Modify the MC ACS routine for RMMVRS (a combined sketch follows this
list).
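
The following is a minimal sketch of the logic described in steps 5 and 6;
the pool name SCRTCH00, the VRS management value MCATL1, and the
PROD.** filter are examples only. In the SG ACS routine:

WHEN (&ACSENVIR = 'RMMPOOL')
  DO
    SET &STORGRP = 'SCRTCH00'   /* SG name doubles as the pool name */
    EXIT
  END

And in the MC ACS routine:

WHEN (&ACSENVIR = 'RMMVRS')
  DO
    IF &DSN = 'PROD.**'
      THEN SET &MGMTCLAS = 'MCATL1'  /* MC name is the VRS value */
    EXIT
  END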

After you activate the SMS configuration, including the modifications made
through these steps, the MC and SG are assigned when the ACS routines are
called. The existing EDGUX100 is not called for pool selection or VRS
management value assignment.

Note that the decision to use ACS or EDGUX100 can be taken at the data set
level. Therefore, you can partially implement this function and stage it in
gradually, if necessary.

4.4.3.3 Stage 3. Disconnecting the old method (optional)


The third stage is not mandatory. It describes the optional migration
procedure to disconnect the old method.

At this stage, you would be fully exploiting the ACS support. Remember that
EDGUX100 will not get control if the ACS routines set a non-null value to the
MC/SG during RMMPOOL or RMMVRS processing, so you no longer need
to maintain the EDGUX100 logic for pool or VRS assignment.

If you use EDGUX100 for pool or VRS assignment only, you can delete
EDGUX100 from your installation.

You might not want to remove the EDGUX100 itself, because the EDGUX100
can be used for other purposes, such as:
• Clearing “special EXPDT” from JFCB if it is used
• Permitting the use of a volume which is not registered to the DFSMSrmm
CDS
• Permitting the use of a duplicate volume serial which is registered to the
DFSMSrmm CDS



• Creating sticky labels, which have been supported since DFSMS/MVS V1R5

4.4.4 Considerations
In this section, we describe some considerations on coexistence with
supported DFSMS/MVS releases.

4.4.4.1 Coexistence with supported DFSMS/MVS releases


This function is only available on OS/390 Version 2 Release 10 systems. If
you use the same EDGUX100 on OS/390 Version 2 Release 10 and
pre-OS/390 Version 2 Release 10 systems, you should not exploit this
function until all systems are at the OS/390 Version 2 Release 10 level, as
you would need to maintain both the EDGUX100 and the ACS logic.

Since this enhancement does not change any DFSMSrmm CDS or SMS CDS
format, you can share them with any supported lower level systems, as long
as you maintain the CDS from the highest level system. No coexistence PTFs
regarding this support exist.

4.5 Providing OPC batch loader sample JCL


The Tivoli Operations Planning and Control (OPC) licensed program is IBM's
foundation for enterprise workload management. It provides a comprehensive
set of services for managing and automating the workload.

In this section, we describe the enhancement to DFSMSrmm Release 10
which provides sample JCL for OPC.

4.5.1 Background of this enhancement


In a DFSMSrmm environment, several inventory management jobs are
executed regularly. See 4.1.1, “Background of this enhancement” on page
135 for more details.
OPC is a system management product that functions as a batch job
scheduler. The definitions required for OPC to schedule jobs could be difficult
for storage administrators who are not familiar with OPC.

OPC has a batch configuration interface, known as the batch loader.

DFSMSrmm R10 now provides sample batch loader JCL to configure the
typical DFSMSrmm batch job flow to OPC. Sample jobs to be scheduled by
OPC are also provided.



This is not a functional enhancement; what DFSMSrmm Release 10 provides
is the simplified usability of OPC customization. Of course, if you wish, you
can modify the sample JCL provided by DFSMSrmm.

4.5.2 Understanding OPC


You will need a basic understanding of OPC to use this sample job, so
some basic concepts of OPC are described in this section.

For more details, see the OPC manual, TME 10 OPC Planning and
Scheduling the Workload, SH19-4376.

4.5.2.1 OPC configuration


An OPC configuration is a group of systems which have work scheduled from
the same systems management policy. Within the OPC configuration there
are two major functions which are performed by one controller and one or
more trackers.

The controller controls the job scheduling across the OPC configuration and
can be considered to be a server. The tracker acts as a client, passing
information about the status of jobs to the controller, and there must be one
on each system in the OPC configuration.

On the system where the controller runs, the controller and tracker can be in
the same address space.

Figure 85 shows an example of the OPC configuration.

Figure 85. OPC components



4.5.2.2 How OPC schedules jobs
OPC uses the following resources to schedule jobs:
• Calendars
The calendar data defines the operation department's work time in terms
of work days and free days.
• Workstations
We describe this in 4.5.2.4, “Understanding workstations” on page 179.
• Applications
We describe this in 4.5.2.5, “Understanding applications” on page 179.
• Special resources (Optional)
Represent any type of limited resource, such as data sets, tape drives, or
communication lines. For example, a data set can be defined as a special
resource with exclusive usage to serialize the access to it.

The DFSMSrmm Release 10 sample batch loader JCL helps you to define
applications and special resources. Calendars and workstations must be
defined by using the OPC dialog prior to the execution of the batch loader.

After all of the resources have been defined, you run an OPC batch job to
create a long-term plan (LTP). The LTP is a high-level plan of system activity
that covers a long period of time. The applications in the LTP are not
executed immediately, but are scheduled and executed when they are
reflected in a current plan (CP) by an OPC batch job.

OPC uses some additional resources, such as event-triggered tracking (see
4.5.2.3, “Event-triggered tracking” on page 178 for more details), to create a
current plan. You must define the required resources before creating a
current plan.

A CP is a more detailed plan of system activity, which is used by OPC to
submit jobs and control operations; typically, it covers one or two days. The
CP is initially created by an OPC batch job and can be updated by
job-tracking events, OPC dialog users, the program interface, the automatic
recovery function, and the event-triggered tracking function. When an
application is scheduled in a CP, the operations in that application are
executed when all conditions are met. For example:
• There are no prerequisite jobs in the CP, or all prerequisite jobs have
completed successfully.
• All resources, such as workstations or special resources, are available.



Figure 86 shows the overview of OPC job scheduling.

Figure 86. Overview of OPC job scheduling

4.5.2.3 Event-triggered tracking


As well as scheduling jobs on a planned basis by using the CP, OPC can also
schedule them dynamically using event-triggered scheduling.

Event-triggered tracking (ETT) gives you a method of controlling and tracking
workload that cannot be planned in advance in your production environment.
It can add applications to the CP based on an event (the trigger). You can use
ETT to track work that has been submitted outside OPC control, or simply to
respond to an on-demand request for processing.

Figure 87 shows a diagram of event-triggered tracking.



Figure 87. OPC event-triggered tracking

4.5.2.4 Understanding workstations


A workstation is a logical place where work occurs. The user can specify the
number of jobs which can be run concurrently. The activity that occurs at
each workstation is called an operation.

There are three types of workstations:


• Computer workstations:
The majority of OPC operations are batch jobs and started tasks. These
operations are specified to run on computer workstations where they are
automatically started by OPC when the specified prerequisites are
complete, or at a particular time of day, and when all required resources
are available. Batch jobs and started tasks are automatically tracked to
completion by OPC. To automate jobs and started tasks, create at least
one job computer workstation and one started-task workstation.
• Printer workstations:
With a printer workstation, you can track (but not control) the production of
print output. When a tracked output group stops printing, OPC is notified
by an event record, and the corresponding operation is set to completed
status. If the print operation completes successfully, any successor
operations can be started.
• General workstations:
A general workstation lets you control operations that are normally not
controlled automatically.

4.5.2.5 Understanding applications


An application is the unit of production work which OPC schedules; it consists
of one or more operations. Each operation has a job name, workstation ID,
and an operation number.



The job name specifies the JCL member name to be executed; the
workstation ID specifies the logical place where the job is to be executed.
The operation number uniquely identifies the operation within the
application. Operation numbers can be used to define the sequence in which
the operations in the application are executed. All operations in the same
application are executed one after another, according to the pre-defined
sequence.

Additionally, you can define dependencies for each operation. These specify a
relationship between two operations, and mean that the first operation must
finish successfully before the second operation can begin.

Applications can be grouped if they have the same run cycle. For example,
we can define daily, weekly, and monthly application groups.

Figure 88 shows the relationship of applications, operations, workstations,
and dependencies. In this case, workstation WS01 is configured as a single
task workstation, so JOBA and JOBD are not executed concurrently.

Figure 88. OPC resources and example of job scheduling

4.5.3 Batch job flow DFSMSrmm provides


The sample batch loader JCL is provided in SYS1.SAMPLIB(EDGJLOPC). If
you run this sample JCL with no modification, the job flow shown in Figure 89
is created.



The flow groups the applications as GRMMDAY (run work-daily), GRMMWK
(weekly) and GRMMMTH (monthly), and runs their operations on the
workstations STC1, CPU1, PRT1 and TLIB.
Figure 89. Default sample job flow

In this job flow, the input arrival time of all applications is 6:00am and the
deadline is 8:00am. That is, applications are scheduled and executed in the
CP at 6:00am and should be completed by 8:00am. If the scheduled
applications are not complete by the deadline, a deadline miss is reported in
the OPC reports. You can additionally specify that a WTO message be
issued when an application misses its deadline.

Applications are scheduled as follows:
• On each work day except Monday, RMMBKP, RMMHKPD, RMMPOST and
RMMEXP are scheduled.
• On Monday, RMMBKP, RMMWK, RMMPOST, RMMMOVE and RMMEXP
are scheduled.
• On Friday in week 1, RMMMTH, RMMBKP, RMMHKPD, RMMPOST and
RMMEXP are scheduled.
• Additionally, if the journal threshold is reached, RMMBKP is dynamically
added to the CP by the OPC event-triggered tracking function; RMMEXP is
dynamically added if the scratch threshold is reached.
• RMMVRSVER is dynamically added to the current plan if RMMHKPD or
RMMWK finishes with return code 8.



More than one RMMBKP might be scheduled at the same time, either as a
scheduled task or as an event triggered task. If this happens, they will not run
concurrently, because OPC defines the data sets, which are required to
perform housekeeping jobs, as an exclusive special resource. This is also
true for the RMMEXP application.

See Table 15 for a detailed explanation of each application.


Table 15. Applications configured

RMMMTH
  EDGJVFY  - Performs verification of the RMM control data set. An
             EDGUTIL job with the VERIFY parameter is run.
RMMBKP
  EDGBETT  - Performs nothing (IEFBR14). If the journal reaches its
             threshold, this EDGBETT job is invoked by BACKUPPROC of
             EDGRMMxx, and RMMBKP is added to the OPC current plan by
             the event-triggered tracking function.
  EDGJBKP1 - Backs up the CDS and journal, and clears the journal
             after a successful backup.
RMMHKPD
  EDGJDHKP - Executes inventory management VRSEL, EXPROC and RPTEXT
             processing.
RMMWK
  EDGJWHKP - Executes inventory management VRSEL, DSTORE, EXPROC and
             RPTEXT processing.
RMMPOST
  EDGJBKP2 - Backs up the CDS and journal, and clears the journal
             after a successful backup.
  EDGJINER - Initializes the initialize-pending volumes and erases the
             erase-pending volumes. Six 3480 tape volumes are
             processed by default.
RMMEXP
  EDGSETT  - Performs nothing (IEFBR14). If the number of scratch
             volumes in the SMS-managed tape library reaches the
             scratch threshold, this EDGSETT job is invoked by
             SCRATCHPROC of EDGRMMxx, and RMMEXP is added to the OPC
             current plan by the event-triggered tracking function.
  EDGJEXP  - Executes inventory management EXPROC processing. In
             addition, if any global moves have been confirmed in
             EDGJCMOV of RMMMOVE, the volume moves are marked
             complete.
  EDGJSCRL - Creates the latest CDS extract file and generates the
             latest scratch lists.
RMMMOVE
  EDGJEJC  - Ejects volumes from a system-managed library. By default,
             the library name is ROBBIE and volumes are ejected to the
             bulk I/O station.
  EDGJMOVE - Creates the latest CDS extract file and produces movement
             reports.
  EDGJMOVE - Prints the movement reports using the printer
             workstation.
  EDGJCMOV - Manually verifies the completeness of movements using the
             manual workstation.
  EDGJCMOV - Issues a global confirm move.
RMMVRSVER
  EDGJVRSV - Executes inventory management VRSEL and VERIFY
             processing. Dynamically added to the current plan if the
             daily or weekly main inventory management job fails with
             RC=8.



4.5.4 How to use this function
If your OPC environment is a new one, you must define the following resources:
• A calendar: At least one calendar must be defined to OPC to schedule
applications.
• Workstations: The sample job flow uses these workstations:
- STC1: Computer workstation for running started tasks.
- CPU1: Computer workstation for running batch jobs.
- PRT1: Printer workstation for printing movement reports.
- TLIB: General workstation for manual use by the tape librarian or
operator to mark movement of volumes completed.

If you are adding the DFSMSrmm job schedule to an existing OPC
environment, these calendars or workstations might already be defined, so
make sure that they are correctly defined in your OPC environment.

To create the DFSMSrmm-supplied sample batch job flow, follow the
instructions below:
1. Pre-allocate data sets which DFSMSrmm uses, for regular processing, as
generation data groups (GDGs).
The EDGHSKP job uses some of the MESSAGE, REPORT, BACKUP,
JRNLBKUP or ACTIVITY DD statements, depending on what kind of
housekeeping job you run. Therefore you must define these data sets if
the housekeeping job is to be run.
In this sample, you have to define these data sets as GDG, because that is
what each DFSMSrmm job scheduled by OPC expects.
Sample JCL is provided by DFSMSrmm in SAMPLIB member EDGJHKPA
to create these data sets. You can customize and run this JCL to define
these data sets as GDG bases and create the first generations, as
necessary (a minimal DEFINE sketch follows these steps).
2. Change the OPTION statement in the EDGRMMxx PARMLIB member.
Normally, OPTION BACKUPPROC is used to automate the CDS and
journal backup and journal clearance. SCRATCHPROC is used to perform
expiration processing.
In this sample environment, these backup and expiration procedures are
used as event triggered tracking. Specify BACKUPPROC(EDGBETT) and
SCRATCHPROC(EDGSETT).



3. Customize the DFSMSrmm supplied jobs and procedures, and make them
available to OPC on your running systems.
The sample DFSMSrmm jobs scheduled by OPC are provided in
SAMPLIB. You must copy them to a production PROCLIB, and modify the
JCL, as necessary, for execution in your environment.
4. Customize the sample job EDGJLOPC.
For a more detailed explanation of each statement of the batch loader
JCL, see Chapter 8, “Defining applications in batch” in the manual, TME
10 OPC V2R1 Planning and Scheduling the Workload, SH19-4376-01.
5. Run EDGJLOPC
This is the batch loader job; it creates a job flow.
6. Add the event-triggered tracking entries to OPC using the OPC dialog.
These are as follows:
• Define EDGBETT. This job is executed by DFSMSrmm if the journal
threshold is reached, and triggers the dynamic addition of the RMMBKP
application to the current plan.
• Define EDGSETT. This job is executed by DFSMSrmm if the scratch
threshold is reached, and triggers the dynamic addition of the RMMEXP
application to the current plan.
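
For step 1, a minimal sketch of one GDG base definition follows
(EDGJHKPA is the supplied sample; the data set name and limit here are
examples only):

//DEFGDG   JOB MSGCLASS=X,NOTIFY=&SYSUID
//* Define a GDG base for one of the housekeeping data sets
//DEFINE   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE GENERATIONDATAGROUP -
         (NAME(RMM.HSKP.REPORT) -
          LIMIT(7) -
          SCRATCH)
/*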

After all of these resources are defined, you must add them to your
long-term plan, by running an OPC batch job.

When all of the changes are reflected in the LTP, create or modify the CP, by
running an OPC batch job.

4.5.5 When manual interventions are required


The job flow is mostly automated, but there are occasions when manual
intervention is required:
• Confirmation of tape volume movement requires someone to go to the
OPC terminal, at the workstation TLIB, and mark the moves completed,
enabling the following jobs to run.
• If the main housekeeping jobs, RMMHKPD or RMMWK, complete with
return code 8, RMMVRSVER is dynamically added to the CP and
executed.
None of the applications scheduled after that point, such as
RMMPOST, RMMMOVE, or RMMEXP, is executed until the main job is
restarted and completes with return code 4 or less.



Depending on the reason for the failure, restart and recovery of the main
job may require manual checking. Other recovery jobs or actions, such as
altering the VRS definitions or adding additional empty bin numbers, may
be required.
When recovery is marked complete, the main job restarts from the
beginning.
• If the main job completes with a return code higher than 8, or abends,
recovery action needs to be taken by the support programmer.

4.5.6 Considerations
In this section, we describe some considerations on using these samples:
• DFSMSrmm provides only the sample batch loader JCL and the sample
DFSMSrmm jobs to be scheduled by OPC.
You are required to modify the supplied JCL to suit your environment.
• These jobs can also be used on lower-level DFSMSrmm systems.

4.6 Miscellaneous enhancements


In this section, we describe miscellaneous enhancements to DFSMSrmm
Release 10.

4.6.1 Fast tape positioning support
DFSMSrmm now records the block ID of each data set when it is created; the
block IDs of both the beginning and the end of the data set are recorded.
At OPEN time, DFSMSrmm supplies these values to DFSMSdfp so that fast
tape positioning can be used.

Refer to 2.5, “High speed tape positioning” on page 70 for more information.

4.6.2 Large tape block size support
DFSMSdfp now allows QSAM/BSAM to use block sizes greater than 32,760
bytes for tape data sets. DFSMSrmm is also enhanced to record these large
tape block sizes.

Refer to 2.2, “Large tape block sizes” on page 36 for more information.
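
As an illustration only, the following JCL sketch requests a tape block size
greater than 32,760 bytes. The data set name, volume, and block size match
the example shown in Figure 90; the unit name is an assumption for your
environment:

//SEQOUT   DD DSN=KOHJI.TEST.SEQ1,DISP=(NEW,KEEP),
//            UNIT=3590-1,VOL=SER=HGT001,LABEL=(1,SL),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=60000)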



4.7 Sample LISTDATASET and LISTVOLUME output
Figure 90 shows the output of DISPLAY DATA SET INFORMATION in the
ISPF DFSMSrmm panel. This information is the same as the LISTDATASET
output.

Data set name . . : 'KOHJI.TEST.SEQ1'


Volume serial . . : HGT001 Physical file sequence number . . : 1
Owner . . . . . . : Data set sequence number . . . . : 1
Job name . . . . : KOHJIDG
Step name . . . . : STEP1 Record format . . . . : FB
Program name . . : QSAMPUT Block size . . . . . : 60000
DD name . . . . . : SEQOUT Logical record length : 80
Create date . . . : 2000/209 YYYY/DDD Block count . . . . . : 1
Create time . . . : 17:12:16 Total block count . . : 1
System id . . . . : SC63 Percent of volume . . : 0
Device number . . . . : 0B36
Last job name . . : KOHJIDG Last DD name . . . . : SEQOUT
Last step name . : STEP1 Last device number . : 0B36
Last program name : QSAMPUT
Date last read . : 2000/209 VRS management value :
Date last written : 2000/209 Management class . . : MCDB22
Data class . . . . . :
Retention date . : Storage class . . . . :
VRS retained . . : NO Storage group . . . . :
Security name . . :
Classification . :
Primary VRS details:
VRS name . . . : *
Job name . . . : VRS type . . . . . : SMSMC
Subchain name : Subchain start date :
Secondary VRS details:
Value or class :
Job name . . . :
Subchain name : Subchain start date :
Catalog status . : NO
Abend while open : NO

Figure 90. Sample LISTDATASET output
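
The same information can be requested with the DFSMSrmm TSO
subcommand; as a sketch, using the data set, volume, and file sequence
shown in Figure 90:

RMM LISTDATASET 'KOHJI.TEST.SEQ1' VOLUME(HGT001) SEQ(1)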

Figure 91 shows the output of DISPLAY VOLUME INFORMATION in the
ISPF DFSMSrmm panel. This information is the same as the LISTVOLUME
output.



Volume . . . . . . : TST106 Rack number . . . . . . . : TST106
Media name . . . . : 3590 Status . . . . . . . . . . : USER
Volume type . . . : PHYSICAL Stacked count . . . . . . : 0
Retention date . . : PERMANENT Expiration date . . . . . : 2000/210
Set retained . . . : NO Original expiration date . :
Description . . . :
Data set name . . : 'BYRNEF.LBI.LBIGENER'
Media type . . . . : HPCT Release actions:
Label . . . . . . : SL Return to SCRATCH pool . : YES
Current version : Replace volume . . . . . : NO
Required version : Return to owner . . . . : NO
Density . . . . . : IDRC Initialize volume . . . : NO
Recording format . : 128TRACK Erase volume . . . . . . : NO
Compaction . . . . : YES Notify owner . . . . . . : NO
Attributes . . . . : NONE Expiry date ignore . . . : NO
Availability . . . : VITAL RECORD Scratch immediate . . . : NO
Owner . . . . . . : BYRNEF Owner access . . . . . . . : ALTER
Assigned date . . : 2000/200 Assigned time . . . . . . : 19:02:00
Security name . . :
Classification . . :
Account number . . : 999,POK
MVS use . . . . . . . . . : YES
Jobname . . . . . : BYRNEFG1 VM use . . . . . . . . . . : NO
Loan location . . : Last changed by . . . . . : *OCE
Previous volume . : Next volume . . . . . . . :
Volume access list : Access . . . . . . . . . . : NONE
User . . . . . . : User . . . . . . . . . . :
User . . . . . . : User . . . . . . . . . . :
User . . . . . . : User . . . . . . . . . . :
User . . . . . . : User . . . . . . . . . . :
User . . . . . . : User . . . . . . . . . . :
User . . . . . . : User . . . . . . . . . . :
Volume use count . : 43 Volume usage (Kb) . . . . : 320
Capacity (Mb) . . : 9536 Percent full . . . . . . . : 0
Create date . . . : 2000/196 Create time . . . . . . . : 18:34:23
Date last read . . : 2000/210 Date last written . . . . : 2000/210
Drive last used . : 0B90
Volume sequence . : 1 Number of data sets . . . : 1
Data set recording . . . . : ON
Errors:
Temporary read . : 0 Temporary write . . . . . : 0
Permanent read . : 0 Permanent write . . . . . : 0
Actions pending:
Return to SCRATCH pool . : NO Initialize volume . . . . : NO
Replace volume . . . . . : NO Erase volume . . . . . . . : NO
Return to owner . . . . : NO Notify owner . . . . . . . : NO
Location . . . . . : LIB1 Destination . . . . . . . :
Location type . . : AUTO In transit . . . . . . . . : NO
In container . . . :
Storage group . . : SGLIB1 Home location . . . . . . : LIB1
Required location . . . . :
Move mode . . . . : AUTO Movement tracking date . . :
Bin number . . . . : Media name . . . . . . . . :
Old bin number . . : Media name . . . . . . . . :
Product details:
Product number . :
Level . . . . . :
Feature code . . :

Figure 91. Sample LISTVOLUME output
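
Similarly, the volume details can be requested with the DFSMSrmm TSO
subcommand, for example:

RMM LISTVOLUME TST106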

Appendix A. Special notices

This publication is intended to help storage administrators understand new
enhancements made to DFSMS Release 10 so that they can plan to install
DFSMS Release 10. The information in this publication is not intended as the
specification of any programming interfaces that are provided by OS/390
Version 2 Release 10. See the PUBLICATIONS section of the IBM
Programming Announcement for OS/390 Version 2 Release 10 for more
information about what publications are considered to be product
documentation.

References in this publication to IBM products, programs or services do not
imply that IBM intends to make these available in all countries in which IBM
operates. Any reference to an IBM product, program, or service is not
intended to state or imply that only IBM's product, program, or service may be
used. Any functionally equivalent program that does not infringe any of IBM's
intellectual property rights may be used instead of the IBM product, program
or service.

Information in this book was developed in conjunction with use of the
equipment specified, and is limited in application to those specific hardware
and software products and levels.

IBM may have patents or pending patent applications covering subject matter
in this document. The furnishing of this document does not give you any
license to these patents. You can send license inquiries, in writing, to the IBM
Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY
10504-1785.

Licensees of this program who wish to have information about it for the
purpose of enabling: (i) the exchange of information between independently
created programs and other programs (including this one) and (ii) the mutual
use of the information which has been exchanged, should contact IBM
Corporation, Dept. 600A, Mail Drop 1329, Somers, NY 10589 USA.

Such information may be available, subject to appropriate terms and
conditions, including in some cases, payment of a fee.

The information contained in this document has not been submitted to any
formal IBM test and is distributed AS IS. The use of this information or the
implementation of any of these techniques is a customer responsibility and
depends on the customer's ability to evaluate and integrate them into the
customer's operational environment. While each item may have been
reviewed by IBM for accuracy in a specific situation, there is no guarantee

© Copyright IBM Corp. 2000 189


that the same or similar results will be obtained elsewhere. Customers
attempting to adapt these techniques to their own environments do so at their
own risk.

Any pointers in this publication to external Web sites are provided for
convenience only and do not in any manner serve as an endorsement of
these Web sites.

The following terms are trademarks of the International Business Machines
Corporation in the United States and/or other countries:
AFP AIX
AIX/ESA AS/400
AT BookManager
CICS CICS/ESA
CICS/MVS CUA
DATABASE 2 DB2
DFSMS DFSMSdfp
DFSMSdss DFSMShsm
DFSMSrmm DFSORT
ES/9000 ESA/390
ESCON GDDM
Hiperspace IBM
IMS IMS/ESA
MVS MVS/DFP
MVS/ESA MVS/SP
OS/2 OS/390
OS/400 Parallel Sysplex
Print Services Facility QMF
RACF RAMAC
Redbooks Redbooks Logo
RETAIN RISC System/6000
RMF RS/6000
S/370 S/390
System/36 System/38
System/370 System/390
VM/ESA VSE/ESA

The following terms are trademarks of other companies:

Tivoli, Manage. Anything. Anywhere., The Power To Manage., Anything.
Anywhere., TME, NetView, Cross-Site, Tivoli Ready, Tivoli Certified, Planet
Tivoli, and Tivoli Enterprise are trademarks or registered trademarks of Tivoli
Systems Inc., an IBM company, in the United States, other countries, or both.
In Denmark, Tivoli is a trademark licensed from Kjøbenhavns Sommer - Tivoli
A/S.



CA-1 is a registered trademark of Computer Associates International, Inc. in
the United States and/or other countries.

C-bus is a trademark of Corollary, Inc. in the United States and/or other
countries.

Java and all Java-based trademarks and logos are trademarks or registered
trademarks of Sun Microsystems, Inc. in the United States and/or other
countries.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of
Microsoft Corporation in the United States and/or other countries.

PC Direct is a trademark of Ziff Communications Company in the United
States and/or other countries and is used by IBM Corporation under license.

ActionMedia, LANDesk, MMX, Pentium and ProShare are trademarks of Intel
Corporation in the United States and/or other countries.

UNIX is a registered trademark in the United States and other countries
licensed exclusively through The Open Group.

SET, SET Secure Electronic Transaction, and the SET Logo are trademarks
owned by SET Secure Electronic Transaction LLC.

Other company, product, and service names may be trademarks or service
marks of others.

Appendix B. Related publications

The publications listed in this section are considered particularly suitable for a
more detailed discussion of the topics covered in this redbook.

B.1 IBM Redbooks


For information on ordering these publications see “How to get IBM
Redbooks” on page 195.
• Enhanced Catalog Sharing and Management, SG24-5594
• IBM Magstar Virtual Tape Server: Planning, Implementing, and
Monitoring, SG24-2229

B.2 IBM Redbooks collections


Redbooks are also available on the following CD-ROMs. Click the CD-ROMs
button at ibm.com/redbooks for information about all the CD-ROMs offered,
updates and formats.
CD-ROM Title Collection Kit
Number
IBM System/390 Redbooks Collection SK2T-2177
IBM Networking Redbooks Collection SK2T-6022
IBM Transaction Processing and Data Management Redbooks Collection SK2T-8038
IBM Lotus Redbooks Collection SK2T-8039
Tivoli Redbooks Collection SK2T-8044
IBM AS/400 Redbooks Collection SK2T-2849
IBM Netfinity Hardware and Software Redbooks Collection SK2T-8046
IBM RS/6000 Redbooks Collection SK2T-8043
IBM Application Development Redbooks Collection SK2T-8037
IBM Enterprise Storage and Systems Management Solutions SK3T-3694

B.3 Other resources


These publications are also relevant as further information sources:
• OS/390 Planning for Installation, GC28-1726
• OS/390 Summary of Message Changes, GC28-1499
This manual is not orderable. You can get the softcopy by visiting the
following URL:
http://www.s390.ibm.com/os390/bkserv/r10pdf/os390_sys.html
• OS/390 MVS Planning: Workload Management, GC28-1761
• OS/390 MVS System Management Facilities, GC28-1783
• OS/390 MVS System Messages, Vol. 1, GC28-1784
• OS/390 MVS System Messages, Vol. 2, GC28-1785
• OS/390 MVS System Messages, Vol. 3, GC28-1786
• OS/390 MVS System Messages, Vol. 4, GC28-1787
• OS/390 MVS System Messages, Vol. 5, GC28-1788
• OS/390 DFSMS Introduction, SC26-7344
• OS/390 DFSMS Migration, SC26-7329
• OS/390 DFSMS Using Data Sets, SC26-7339
• OS/390 DFSMS Using Magnetic Tapes, SC26-7341
• OS/390 DFSMSdfp Storage Administration Reference, SC26-7331
• OS/390 DFSMSdfp Advanced Services, SC26-7330
• OS/390 DFSMSdfp Utilities, SC26-7343
• OS/390 DFSMShsm Storage Administration Guide, SC35-0388
• OS/390 DFSMShsm Diagnosis Reference Guide, LY35-0112
• OS/390 DFSMSrmm Implementation and Customization Guide, SC26-7334
• OS/390 DFSMSrmm Reporting, SC26-7335
• DFSMSrmm V1R5 Implementation and Planning Guide, SC26-4932-06
• COBOL for OS/390 and VM Programming Guide Version 2 Release 2, SC26-9049-05
• TME 10 OPC Planning and Scheduling the Workload, SH19-4376

B.4 Referenced Web sites


These Web sites are also relevant as further information sources:
• http://s390.ibm.com/ IBM S/390 Web site.
• http://www.storage.ibm.com/ IBM Storage Web site.



How to get IBM Redbooks

This section explains how both customers and IBM employees can find out about IBM Redbooks,
redpieces, and CD-ROMs. A form for ordering books and CD-ROMs by fax or e-mail is also provided.
• Redbooks Web Site ibm.com/redbooks
Search for, view, download, or order hardcopy/CD-ROM Redbooks from the Redbooks Web site.
Also read redpieces and download additional materials (code samples or diskette/CD-ROM images)
from this Redbooks site.
Redpieces are Redbooks in progress; not all Redbooks become redpieces and sometimes just a few
chapters will be published this way. The intent is to get the information out much quicker than the
formal publishing process allows.
• E-mail Orders
Send orders by e-mail including information from the IBM Redbooks fax order form to:
In United States or Canada: pubscan@us.ibm.com
Outside North America: Contact information is in the “How to Order” section at this site:
http://www.elink.ibmlink.ibm.com/pbl/pbl
• Telephone Orders
United States (toll free): 1-800-879-2755
Canada (toll free): 1-800-IBM-4YOU
Outside North America: Country coordinator phone number is in the “How to Order” section at this site:
http://www.elink.ibmlink.ibm.com/pbl/pbl
• Fax Orders
United States (toll free): 1-800-445-9269
Canada: 1-403-267-4455
Outside North America: Fax phone number is in the “How to Order” section at this site:
http://www.elink.ibmlink.ibm.com/pbl/pbl

This information was current at the time of publication, but is continually subject to change. The latest
information may be found at the Redbooks Web site.

IBM Intranet for Employees


IBM employees may register for information on workshops, residencies, and Redbooks by accessing
the IBM Intranet Web site at http://w3.itso.ibm.com/ and clicking the ITSO Mailing List button.
Look in the Materials repository for workshops, presentations, papers, and Web pages developed
and written by the ITSO technical professionals; click the Additional Materials button. Employees may
access MyNews at http://w3.ibm.com/ for redbook, residency, and workshop announcements.



IBM Redbooks fax order form

Please send me the following:


Title Order Number Quantity

First name Last name

Company

Address

City Postal code Country

Telephone number Telefax number VAT number

Invoice to customer number

Credit card number

Credit card expiration date Card issued to Signature

We accept American Express, Diners, Eurocard, Master Card, and Visa. Payment by credit card not
available in all countries. Signature mandatory for credit card payment.



Glossary

A ADATA. Associated data.

access method services. A multifunction aggregate backup. The process of copying an


service program that manages VSAM and aggregate group and recovery instructions so
non-VSAM data sets, as well as integrated that a collection of data sets can be recovered
catalog facility (ICF). later as a group. aggregate group. A collection
of related data sets and control information that
ACDS. See Active control data set. have been pooled to meet a defined backup or
ACS. Automatic class selection. recovery strategy.

activate. To load the contents of a source alternate index. In VSAM, a collection of


control data set (SCDS) into Storage index entries related to a given base cluster and
Management Subsystem address space storage organized by an alternate key, that is, a key
and into an active control data set (ACDS), or to other than the prime key of the associated base
load the contents of an existing ACDS into cluster data records; it gives an alternate
subsystem address space storage. This directory for finding records in the data
establishes a new storage management policy component of a base cluster.
for the subsystem complex. active configuration. American National Standards Institute
The most recently activated SCDS, which now (ANSI). An organization that establishes
controls storage management for the Storage voluntary industry standards for information
Management Subsystem complex. processing, particularly for control characters
active control data set (ACDS). A VSAM and magnetic tape labels.
linear data set that contains an SCDS that has AMODE. Addressing mode.
been activated to control the storage
management policy for the installation. When ANSI. See American National Standards
activating an SCDS, you determine which ACDS Institute.
will hold the active configuration (if you have APAR. Authorized Program Analysis Report.
defined more than one ACDS). The ACDS is
shared by each system that is using the same APF. Authorized program facility.
SMS configuration to manage storage. See also API. See Application programming
source control data set and communications interface.
data set.
application programming interface (API). A
active data. (1) Data that can be accessed functional interface supplied by the operating
without any special action by the user, such as system or by a separately orderable licensed
data on primary storage or migrated data. Active program that allows an application program
data also can be stored on tape volumes. (2) written in a high-level language to use specific
For tape mount management, application data data or functions of the operating system or the
that is frequently referenced, small in size, and licensed program.
managed better on DASD than on tape.
ATLDS. Automated Tape Library Dataserver.
Contrast with inactive data .
automated tape library. A device consisting of
actual UCB. The UCB used for all I/O robotic components, cartridge storage areas,
operations. It has an address that is consistent tape subsystems, and controlling hardware and
in any address space. The actual UCB can software, together with the set of tape volumes
reside in common storage either above or below that reside in the library and can be mounted on
16 MB. the library tape drives. See also tape library.
Contrast with manual tape library.

© Copyright IBM Corp. 2000 197


automatic backup. (1) In DFSMShsm, the backup-while-open (BWO). This makes a
process of automatically copying data sets from backup copy of a data set while the data set is
primary storage volumes or migration volumes to open for update. The backup copy can contain
backup volumes. (2) In OAM, the process of partial updates. base configuration. The part of
automatically copying objects from DASD, an SMS configuration that contains general
optical, or tape volumes contained in an object storage management attributes, such as the
storage group, to backup volumes contained in default management class, default unit, and
an object backup storage group. default device geometry. It also identifies the
systems or system groups that an SMS
automatic class selection (ACS) routine. A
configuration manages.
procedural set of ACS language statements.
Based on a set of input variables, the ACS BCDS. See Backup control data set .
language statements generate the name of a
BCS. Basic catalog structure.
predefined SMS class, or a list of names of
predefined storage groups, for a data set. BDAM. Basic direct access method.
automatic dump. In DFSMShsm, the process binder. The DFSMS program that processes
of using DFSMSdss automatically to do a the output of language translators and compilers
full-volume dump of all allocated space on a into an executable program (load module or
primary storage volume to designated tape dump program object). It replaces the linkage editor
volumes. and batch loader in OS/390.
automatic primary space management. In block count. The number of data blocks on a
DFSMShsm, the process of deleting expired data magnetic tape volume.
sets, deleting temporary data sets, releasing BLP. Bypass label processing.
unused space, and migrating data sets from
primary storage volumes automatically. BPAM. Basic partitioned access method.

automatic secondary space management. In BSAM. Basic sequential access method.


DFSMShsm, the process of automatically BTLS. Basic Tape Library Support.
deleting expired migrated data sets, deleting
expired records from the migration control data buffer. A routine or storage used to
sets, and migrating eligible data sets from compensate for a difference in rate of flow of
migration level 1 volumes to migration level 2 data, or time of occurrence of events, when
volumes. transferring data from one device to another.

automatic volume space management. In C


DFSMShsm, the process that includes automatic cache fast write. A storage control capability in
primary space management and interval which the data is written directly to cache without
migration. availability. For a storage subsystem, using nonvolatile storage. Cache fast write is
the degree to which a data set or object can be useful for temporary data or data that is readily
accessed when requested by a user. recreated, such as the sort work files created by
B DFSORT. Contrast with DASD fast write.

backup. The process of creating a copy of a cache set. A parameter on storage class and
data set or object to be used in case of accidental defined in the base configuration information that
loss. backup control data set (BCDS). In maps a logical name to a set of CF cache
DFSMShsm, a VSAM key-sequenced data set structure names. capacity planning. The process
that contains information about backup versions of forecasting and calculating the appropriate
of data sets, backup volumes, dump volumes, amount of physical computing resources required
and volumes under control of the backup and to accommodate an expected workload.
dump functions of DFSMShsm.

198 DFSMS Release 10 Technical Update


captured UCB. A virtual window into the actual these resources is a client.A machine can run
UCB which resides in private storage below 16 client and server processes at the same time.
MB. All the virtual windows on the actual UCB
cluster. In VSAM, a named structure consisting
see the same data at the same time. Only actual
of a group of related components. For example,
UCBs above the 16 MB line are captured.
when the data is key-sequenced, the cluster
Cartridge System Tape. The base tape contains both the data and the index
cartridge media used with 3480 or 3490 Magnetic components.
Tape Subsystems. Contrast with Enhanced
Coded Character Set Identifier (CCSID) . A
Capacity Cartridge System Tape.
16-bit number that identifies a specific encoding
Catalog Search Interface. An application scheme identifier, character set identifiers, code
programming interface (API) to the catalog page identifiers, and additional coding required
accessible from assembler and high-level information. The CCSID uniquely identifies the
languages. As an alternative to LISTCAT, it coded graphic character representation used.
allows tailoring of output, provides additional
COMMDS. See Communications data set.
information not provided by LISTCAT, while
requiring less I/O than LISTCAT, because of communications data set (COMMDS). The
using generic locates. primary means of communication among systems
governed by a single SMS configuration. The
CCSID. See Coded Character Set Identifier.
COMMDS is a VSAM linear data set that contains
CDS. See Control data set. the name of the ACDS and current utilization
statistics for each system-managed volume,
CF. See Coupling facility.
which helps balance space among systems
CFRM. Coupling facility resource manager. running SMS. See also active control data set
Character Data Representation Architecture and source control data set.
(CDRA) API. A set of identifiers, services, compatibility mode. For DFSMS, it is the mode
supporting resources, and conventions for of running SMS in which no more than eight
consistent representation, processing, and names—representing systems, system groups, or
interchange of character data. both—are supported in the SMS configuration.
CI. Control interval. When running in this mode, the DFSMS system
can share SCDSs, ACDSs and COMMDSs with
CICS. Customer Information Control System. other systems running OS/390 or DFSMS
CICSVR. CICS VSAM Recovery. releases prior to DFSMS/MVSV1R3, and with
other DFSMS systems running in compatibility
class transition. An event that brings about mode.
change to an object’s service-level criteria,
causing OAM to invoke ACS routines to assign a compress. (1) To reduce the amount of storage
new storage class or management class to the required for a given data set by having the
object. system replace identical words or phrases with a
shorter token associated with the word or phrase.
client. (1) A function that requests services (2) To reclaim the unused and unavailable space
from a server, and makes them available to the in a partitioned data set that results from deleting
user. (2) An address space in OS/390 that is or modifying members by moving all unused
using TCP/IP services. (3) A term used in an space to the end of the data set.
environment to identify a machine that uses the
resources of the network. See also source. concurrent copy. A function to increase the
accessibility of data by enabling you to make a
client-server relationship. Any process that consistent backup or copy of data concurrent with
provides resources to other processes on a the usual application program processing.
network is a server. Any process that employs

199
configuration (Storage Management in the storage control until the data is completely
Subsystem) . A base configuration, definitions written to the DASD, providing data integrity
of Storage Management Subsystem classes and equivalent to writing directly to the DASD. Use of
storage groups, and automatic class selection DASD fast write for system-managed data sets is
routines that DFSMS uses to manage storage. controlled by storage class attributes to improve
performance. See also dynamic cache
connectivity. (1) The considerations regarding
management. Contrast with cache fast write.
how storage controls are joined to DASD and
processors to achieve adequate data paths (and DASD volume. A DASD space identified by a
alternative data paths) to meet data availability common label and accessed by a set of related
needs. (2) In a system-managed storage addresses. See also volume, primary storage,
environment, the system status of volumes and migration level 1, migration level 2.
storage groups. construct. One of the following:
data class. A collection of allocation and space
data class, storage class, management class,
attributes, defined by the storage administrator,
storage group, aggregate group, base
that are used to create a data set.
configuration.
Data Facility Sort. An IBM licensed program
control data set (CDS). With respect to the
that is a high-speed data processing utility.
Storage Management Subsystem, a VSAM linear
DFSORT provides an efficient and flexible way to
data set containing configurational, operational,
handle sorting, merging, and copying operations,
or communication information. The Storage
as well as providing versatile data manipulation
Management Subsystem introduces three types
at the record, field, and bit level.
of control data sets that guide the execution of
the Storage Management Subsystem: the source data set. In DFSMS, the major unit of data
control data set, the active control data set, and storage and retrieval, consisting of a collection of
the communications data set. data in one of several prescribed arrangements
and described by control information to which the
control interval (CI). A fixed-length area of
system has access. In OS/390 non-UNIX
auxiliary storage space in which VSAM stores
environments, the terms data set and file are
records. It is the unit of information (an integer
generally equivalent and sometimes are used
multiple of block size) transmitted to or from
interchangeably. See also file. In OS/390 UNIX
auxiliary storage by VSAM.
environments, the terms data set and file have
CUA. Common user access. quite distinct meanings.
coupling facility (CF). The hardware that data set collection. A group of data sets which
provides high-speed caching, list processing, and are intended to be allocated on the same tape
locking functions in a Parallel Sysplex. volume or set of tape volumes as a result of data
set stacking. data set stacking. The function used
coupling facility (CF) lock structure. The CF
to place several data sets on the same tape
hardware that supports sysplex-wide locking.
volume or set of tape volumes. It increases the
D efficiency of tape media usage and reduces the
DADSM. Direct access device space overall number of tape volumes needed by
management. allocation. It also allows an installation to group
related data sets together on a minimum number
DASD. Direct access storage device. of tape volumes, which is useful when sending
DASD fast write. An extended function of some data offsite.
models of the IBM 3990 Storage Control in which DB2. Data Base 2.
data is written concurrently to cache and
nonvolatile storage and automatically scheduled DDM. Distributed Data Management.
for destaging to DASD. Both copies are retained

200 DFSMS Release 10 Technical Update


default device geometry. Part of the SMS base DFSMShsm control data set. In DFSMShsm,
configuration, it identifies the number of bytes per one of three VSAM key-sequenced data sets that
track and the number of tracks per cylinder for contain records used in DFSMShsm processing.
converting space requests made in tracks or See also backup control data set, migration
cylinders into bytes, when no unit name has been control data set, and offline control data set.
specified.
DFSMShsm-managed volume. (1) A primary
default management class. Part of the SMS storage volume, which is defined to DFSMShsm
base configuration, it identifies the management but which does not belong to a storage group. (2)
class that should be used for system-managed A volume in a storage group, which is using
data sets that do not have a management class DFSMShsm automatic dump, migration, or
assigned. backup services. Contrast with system-managed
volume and DFSMSrmm-managed volume.
default unit. Part of the SMS base
configuration, it identifies an esoteric (such as DFSMShsm-owned volume . A storage volume
SYSDA) or generic (such as 3390) device name. on which DFSMShsm stores backup versions,
If a user omits the UNIT parameter on the JCL or dump copies, or migrated data sets.
the dynamic allocation equivalent, SMS applies
DFSMS Network File System. See OS/390
the default unit if the data set has a disposition of
Network File System.
MOD or NEW and is not system-managed.
DFSMS Optimizer Feature. A DFSMS feature
DES. Data Encryption Standard.
that provides an analysis and reporting capability
DFM. Distributed FileManager. for SMS and non-SMS environments.
device category. A storage device DFSMSrmm. A DFSMS functional component
classification used by SMS. The device or base element of OS/390, that manages
categories are as follows SMS-managed DASD, removable media.
SMS-managed tape, non-SMS-managed DASD
DFSMSrmm-managed volume. A tape volume
non-SMS-managed tape.
that is defined to DFSMSrmm. Contrast with
device management. The task of defining input system-managed volume and
and output devices to the operating system, and DFSMShsm-managed volume .
then controlling the operation of these devices.
DFSORT. Data Facility Sort.
Device Support Facilities (ICKDSF). A
dictionary. A table that associates words,
program used for initialization of DASD volumes
phrases, or data patterns to shorter tokens. The
and track recovery.
tokens replace the associated words, phrases, or
DFSMSdfp. A DFSMS functional component or data patterns when a data set is compressed.
base element of OS/390, that provides functions
direct access device space management
for storage management, data management,
(DADSM). A collection of subroutines that
program management, device management, and
manages space on disk volumes. The
distributed data access.
subroutines are: Create, Scratch, Extend, and
DFSMSdss. A DFSMS functional component or Partial Release.
base element of OS/390, used to copy, move,
disaster recovery. A procedure for copying and
dump, and restore data sets and volumes.
storing an installation’s essential business data in
DFSMShsm. A DFSMS functional component a secure location, and for recovering that data in
or base element of OS/390, used for backing up the event of a catastrophic problem. Compare
and recovering data, and managing space on with vital records.
volumes in the storage hierarchy.
Distributed Data Management (DDM). (1) A
data protocol architecture for data management

201
services across distributed systems in an SNA EPLPA. Extended pageable link pack area.
environment. DDM provides a common data
erase-on-scratch. physical erasure of data on
management language for data interchange
a DASD data set when the data set is deleted
among different IBM system platforms. (2) (1)
(scratched).
The term used to describe the SAA architectures
and programming support that provide distributed ESA. Enterprise Systems Architecture.
file access capabilities between SAA systems. (2) ESCON. Enterprise System Connection.
The DFSMS component that implements the
DDM target server. ESD. External symbol dictionary.

DIV. Data in Virtual. ESDS. Entry-sequenced data set.

DSCB. Data set control block. EXCP. Execute channel program.

DSORG. Data set organization. expiration. The process by which data sets or
objects are identified for deletion because their
DTL. Data tag language. expiration date or retention period has passed.
dual copy. A high availability function made On DASD, data sets and objects are deleted. On
possible by nonvolatile storage in some models tape, when all data sets have reached their
of the IBM 3990 Storage Control. Dual copy expiration date, the tape volume is available for
maintains two functionally identical copies of reuse.
designated DASD volumes in the logical 3990 extended addressability. The ability to create
subsystem, and automatically updates both and access a VSAM data set that is greater than
copies every time a write operation is issued to 4 GB in size. Extended addressability data sets
the dual copy logical volume. must be allocated with DSNTYPE=EXT and
dump class. A set of characteristics that EXTENDED ADDRESSABILITY=Y.
describes how volume dumps are managed by extended format. The format of a data set that
DFSMShsm. has a data set name type (DSNTYPE) of
duplexing. The process of writing two sets of EXTENDED. The data set is structured logically
identical records in order to create a second copy the same as a data set that is not in extended
of data. format but the physical format is different. See
also striped data set and compressed format .
dynamic cache management. A function that
extended link pack area (ELPA). The
automatically determines which data sets will be
extension of the link pack area that resides above
cached based on the 3990 subsystem load, the
16 MB in virtual storage. See also link pack
characteristics of the data set, and the
area .
performance requirements defined by the storage
administrator. extended pageable link pack area
(EPLPA). The extension of the pageable link
E
pack area that resides above 16 MB in virtual
EC. Extended control. storage. See also pageable link pack area.
ELPA. Extended link pack area. extended remote copy. Extended Remote
Copy (XRC) is a technique involving both the
Enhanced Capacity Cartridge System
DFSMS host and the I/O Subsystem that keeps a
Tape. Cartridge system tape with increased
“real time” copy of designated data at another
capacity that can only be used with 3490E
location. Updates to the primary center are
Magnetic Tape Subsystems. Contrast with
replicated at the secondary center
Cartridge System Tape.
asynchronously.
EOV. End-of-volume.
F

202 DFSMS Release 10 Technical Update


FCB. Forms control buffer. GSR. Global shared resources. GUI. Graphical
user interface.
file. A collection of information treated as a unit.
In non-OS/390 UNIX environments, the terms H
data set and file are generally equivalent and are
hardware configuration definition (HCD). An
sometimes used interchangeably. See also data
interactive interface in OS/390 that enables an
set.
installation to define hardware configurations
file system. In the OS/390 UNIX HFS from a single point of control.
environment, the collection of files and file
HCD. See Hardware configuration definition .
management structures on a physical or logical
mass storage device, such as a diskette or HIDAM. Hierarchic indexed direct access
minidisk. See also HFS data set. method.
filtering. The process of selecting data sets hierarchical file system (HFS) data set. A
based on specified criteria. These criteria consist data set that contains a POSIX-compliant file
of fully or partially-qualified data set names or of system, which is a collection of files and
certain data set characteristics. directories organized in a hierarchical structure,
that can be accessed using OS/390 UNIX System
FIPS. Federal Information Processing Standard.
Services. See also file system .
FLPA. Fixed link pack area.
Hiperspace ®. A high performance space
G backed by either expanded storage or auxiliary
storage, which provides high performance
GB. Gigabyte.
storage and retrieval of data.
GDG. Generation data group.
HSM complex (HSMplex). One or more
GDS. Generation data stream. OS/390 images running DFSMShsm that share a
giga (G). The information-industry meaning common set of control data sets (MCDS, BCDS,
depends upon the context: 1. G = OCDS, and Journal).
1,073,741,824(230 ) for real and virtual storage 2. I
G = 1,000,000,000 for disk storage capacity 3. G
ICF. Integrated catalog facility.
= 1,000,000,000 for transmission rates.
ICKDSF. Device Support Facilities.
global resource serialization (GRS). A
component of OS/390 used for serializing use of IDR. Identification record.
system resources and for converting hardware
improved data recording capability (IDRC). A
reserves on DASD volumes to data set
recording mode that can increase the effective
enqueues.
cartridge data capacity and the effective data rate
GRS complex (GRSplex). One or more OS/390 when enabled and used. IDRC is always enabled
images that share a common global resource on the 3490E Magnetic Tape Subsystem.
serialization policy in either a ring or star
IMS. Information Management System.
configuration.
inactive configuration. A configuration
group. (1) With respect to partitioned data sets,
contained in an SCDS. A configuration that is not
a member and the member’s aliases that exist in
currently being used by the Storage Management
a PDS or PDSE, or in an unloaded PDSE. (2) A
Subsystem.
collection of users who can share access
authorities for protected resources. inactive data. (1) A copy of active data, such as
vital records or a backup copy of a data set.
GSAM. Generalized sequential access method.
Inactive data is never changed, but can be
deleted or superseded by another copy. (2) In

203
tape mount management, data that is written ISPF. Interactive System Productivity Facility.
once and never used again. The majority of this
J
data is point-in-time backups. (3) Objects
infrequently accessed by users and eligible to be JCL. Job control language.
moved to the optical library or shelf. Contrast with JES. Job entry subsystem.
active data .
JES3. An OS/390 subsystem that receives jobs
indexed VTOC. A volume table of contents with into the system, converts them to internal format,
an index that contains a list of data set names selects them for execution, processes their
and free space information, which allows data output, and purges them from the system. In
sets to be located more efficiently. complexes that have several loosely coupled
in-place conversion. The process of bringing a processing units, the JES3 program manages
volume and the data sets it contains under the processors so that the global processor exercises
control of SMS without data movement, using centralized control over the local processors and
DFSMSdss. integrated catalog facility catalog. A distributes jobs to them via a common job
catalog that is composed of a basic catalog enqueue.
structure (BCS) and its related volume tables of K
contents (VTOCs) and VSAM volume data sets
(VVDSs). See also basic catalog structure and KB. Kilobyte.
VSAM volume data set. kilo (K). The information-industry meaning
integrated catalog facility. The name of the depends upon the context: 1. K = 1024(210 ) for
catalog in DFSMSdfp that is a functional real and virtual storage 2. K = 1000 for disk
replacement for OS CVOLs and VSAM catalogs. storage capacity 3. K = 1000 for transmission
rates.
Interactive Storage Management Facility
(ISMF). The interactive interface of DFSMS that key-sequenced data set (KSDS). A VSAM
allows users and storage administrators access data set whose records are loaded in ascending
to the storage management functions. key sequence and controlled by an index.
interval migration. In DFSMShsm, automatic KSDS. Key-sequenced data set.
migration that occurs when a threshold level of L
occupancy is reached or exceeded on a
DFSMShsm-managed volume, during a specified LDS. See Linear data set. linear data set (LDS).
time interval. Data sets are moved from the A VSAM data set that contains data but contains
volume, largest eligible data set first, until the low no control information. A linear data set can be
threshold of occupancy is reached. accessed as a byte-addressable string in virtual
storage.
I/O. Input/output.
link pack area (LPA). In OS/390, an area of
IPL. Initial program load. virtual storage that contains reenterable routines
ISAM. Indexed sequential access method. that are loaded at IPL time and can be used
concurrently by all tasks in the system. load
ISMF. See Interactive Storage Management module. An executable program stored in a
Facility. partitioned data set program library. See also
ISO. International Organization for program object .
Standardization. logical storage. With respect to data, the
ISO/ANSI. When referring to magnetic tape attributes that describe the data and its usage, as
labels and file structure, any tape that conforms opposed to the physical location of the data.
to certain standards established by the ISO and LPA. See Link pack area.
ANSI.

204 DFSMS Release 10 Technical Update


LSR. Local shared resources. migration level 1. DFSMShsm-owned DASD
volumes that contain data sets migrated from
M
primary storage volumes. The data can be
MB. Megabyte. compressed. See also storage hierarchy.
mega (M). The information-industry meaning Contrast with primary storage and migration
depends upon the context: 1. M = 1,048,576(2 20 ) level 2 .
for real and virtual storage 2. M = 1,000,000 for migration level 2. DFSMShsm-owned tape or
disk storage capacity 3. M = 1,000,000 for DASD volumes that contain data sets migrated
transmission rates. from primary storage volumes or from migration
management class. A collection of level 1 volumes. The data can be compressed.
management attributes, defined by the storage See also storage hierarchy. Contrast with
administrator, used to control the release of primary storage and migration level 1 .
allocated but unused space; to control the ML1. See Migration level 1 .
retention, migration, and backup of data sets; to
ML2. See Migration level 2 .
control the retention and backup of aggregate
groups, and to control the retention, backup, and MLPA. See modified link pack area.
class transition of objects.
modified link pack area (MLPA). An area of
manual tape library. A manual tape library is virtual storage containing reenterable routines
an installation-defined set of tape drives and the from the SYS1.LINKLIB, SYS1.SVCLIB, or
set of volumes that can be mounted on the SYS1.LPALIB system data sets that are to be
drives. The IBM implementation includes one or part of the pageable extension of the link pack
more 3490 subsystems, each connected by a area during the current IPL. See also link pack
Library Attachment Facility to a processor area .
running the Library Manager application, and a
MVS. Multiple Virtual Storage.
set of volumes, defined by the installation as part
of the library, which resides in shelf storage MVS/ESA. Multiple Virtual Storage/Enterprise
located near the 3490 subsystems. Systems Architecture. An OS/390 operating
system environment that supports ESA/390.
MEDIA2. Enhanced Capacity Cartridge System
Tape. MVS/ESA SP. An IBM licensed program used to
control the OS/390 operating system. MVS/ESA
MEDIA3. High Performance Cartridge Tape.
SP together with DFSMS compose the base
MEDIA4. Extended High Performance Cartridge MVS/ESA operating environment. See also
OS/390.
Tape migration.
N
The process of moving unused data to lower cost
storage in order to make space for NaviQuest. A component of DFSMSdfp for
high-availability data. If you wish to use the data implementing, verifying, and maintaining your
set, it must be recalled. See also migration level DFSMS SMS environment in batch mode. It
1and migration level 2. provides batch testing and reporting capabilities
that can be used to automatically create test
migration control data set (MCDS). In
cases in bulk, run many other storage
DFSMShsm, a VSAM key-sequenced data set
management tasks in batch mode, and use
that contains statistics records, control records,
supplied ACS code fragments as models when
user records, records for data sets that have
creating your own ACS routines.
migrated, and records for volumes under
migration control of DFSMShsm. NFS. OS/390 Network File System.
NSR. Non-shared resources.

205
nonvolatile storage (NVS) . Additional random optical library. A storage device that houses
access electronic storage with a backup battery optical drives and optical cartridges, and contains
power source, available with an IBM Cache a mechanism for moving optical disks between a
Storage Control, used to retain data during a cartridge storage area and optical disk drives.
power outage. Nonvolatile storage, accessible
optical volume. Storage space on an optical
from all storage directors, stores data during
disk, identified by a volume label. See also
DASD fast write and dual copy operations.
volume.
O
OSAM. Overflow sequential access method.
OAM. Object Access Method. OAM-managed
OS/390. OS/390 is a network computing-ready,
volumes. Optical or tape volumes controlled by
integrated operating system consisting of more
the object access method (OAM).
than 50 base elements and integrated optional
object. A named byte stream having no specific features delivered as a configured, tested
format or record orientation. system.
object access method (OAM). An access OS/390 UNIX System Services (OS/390
method that provides storage, retrieval, and UNIX). The set of functions provided by the
storage hierarchy management for objects and SHELL and UTILITIES, kernel, debugger, file
provides storage and retrieval management for system, C/C++ Run-Time Library, Language
tape volumes contained in system-managed Environment, and other elements of the OS/390
libraries. operating system that allow users to write and run
application programs that conform to UNIX
object backup storage group. A type of
standards.
storage group that contains optical or tape
volumes used for backup copies of objects. See P
also storage group.
pageable link pack area (PLPA). An area of
object storage group. A type of storage group virtual storage containing SVC routines, access
that contains objects on DASD, tape, or optical methods, and other read-only system and user
volumes. See also storage group. programs that can be shared among users of the
system. See also link pack area .
object storage hierarchy. A hierarchy
consisting of objects stored in DB2 table spaces partitioned data set (PDS). A data set on direct
on DASD, on optical or tape volumes that reside access storage that is divided into partitions,
in a library, and on optical or tape volumes that called members, each of which can contain a
reside on a shelf. See also storage hierarchy. program, part of a program, or data.
OCDS. Offline control data set. partitioned data set extended (PDSE). A
system-managed data set that contains an
offline control data set (OCDS). In
indexed directory and members that are similar to
DFSMShsm, a VSAM key-sequenced set that
the directory and members of partitioned data
contains information about tape backup volumes
sets. A PDSE can be used instead of a
and tape migration level 2 volumes.
partitioned data set.
OLTP. Online transaction processing.
PDS. See partitioned data set .
optical disk drive. The mechanism used to
PDSE. See partitioned data set extended .
seek, read, and write data on an optical disk. An
optical disk drive can be operator-accessible, performance. (1) A measurement of the
such as the 3995 Optical Library Dataserver, or amount of work a product can produce with a
stand-alone, such as the 9346 or 9347 optical given amount of resources. (2) In a
disk drives. system-managed storage environment, a
measurement of effective data processing speed

206 DFSMS Release 10 Technical Update


with respect to objectives set by the storage PTF. Program Temporary Fix.
administrator. Performance is largely determined
Q
by throughput, response time, and system
availability. QSAM. Queued sequential access method.
permanent data set. A user-named data set R
that is normally retained for longer than the RACF. See Resource Access Control Facility.
duration of a job or interactive session. Contrast
with temporary data set . RBA. Relative byte address.

physical storage. With respect to data, the recovery. The process of rebuilding data after it
actual space on a storage device that is to has been damaged or destroyed, often by using a
contain data. backup copy of the data or by reapplying
transactions recorded in a log.
PLPA. Pageable link pack area.
Redundant Array of Independent Disks
pool storage group. A type of storage group (RAID). A disk subsystem architecture that
that contains system-managed DASD volumes. combines two or more physical disk storage
Pool storage groups allow groups of volumes to devices into a single logical device to achieve
be managed as a single entity. See also storage data redundancy.
group.
relative byte address (RBA). In VSAM, the
PPRC. Peer-to-peer remote copy. displacement of a data record or a control interval
primary data set. When referring to an entire from the beginning of the data set to which it
data set collection, the primary data set is the first belongs independent of the manner in which the
data set allocated. For individual data sets being data set is stored.
stacked, the primary data set is the one in the relative-record data set (RRDS). A VSAM data
data set collection that precedes the data set set whose records are loaded into fixed-length
being stacked and is allocated closest to it. slots.
primary storage. A DASD volume available to removable media library. The volumes that are
users for data allocation. The volumes in primary available for immediate use, and the shelves
storage are called primary volumes. See also where they could reside.
storage hierarchy. Contrast with migration
level 1 and migration level 2 . residence mode (RMODE). The attribute of a
load module or program object.
program management. The task of preparing
programs for execution, storing the programs, Resource Access Control Facility (RACF). An
load modules, or program objects in program IBM licensed program that is included in OS/390
libraries, and executing them on the operating Security Server and is also available as a
system. separate program for the OS/390 and VM
environments. RACF provides access control by
program object. All or part of a computer identifying and verifying the users to the system,
program in a form suitable for loading into virtual authorizing access to protected resources,
storage for execution. Program objects are stored logging detected unauthorized attempts to enter
in PDSE program libraries and have fewer the system, and logging detected accesses to
restrictions than load modules. Program objects protected resources.
are produced by the binder.
Resource Measurement Facility (RMF). An
PSCB. Protected step control block. IBM licensed program or optional element of
PSF. PSF for OS/390. OS/390, that measures selected areas of system
activity and presents the data collected in the
PSP. Program Services Period.
format of printed reports, system management

207
resource profile. A profile that provides RACF protection for one or more resources. User, group, and connect profiles are not resource profiles. The information in a resource profile can include the data set profile name, profile owner, universal access authority, access list, and other data. Resource profiles can be discrete profiles or generic profiles.

RLS. Record-level sharing.

RMF. See Resource Measurement Facility.

RMODE. Residence mode.

RRDS. Relative-record data set.

RSECT. Read-only control section.

S

SCDS. See source control data set.

SDSP. Small-data-set packing.

service level (Storage Management Subsystem). A set of logical characteristics of storage required by a Storage Management Subsystem-managed data set (for example, performance, security, availability).

service-level agreement. (1) An agreement between the storage administration group and a user group, defining what service levels the former will provide to ensure that users receive the space, availability, performance, and security they need. (2) An agreement between the storage administration group and operations, defining what service levels operations will provide to ensure that storage management jobs required by the storage administration group are completed.

sharing control data set. A VSAM linear data set that contains information DFSMSdfp needs to ensure the integrity of the data sharing environment.

SHCDS. Sharing control data set.

shelf. A place for storing removable media, such as tape and optical volumes, when they are not being written to or read.

shelf location. A single space on a shelf for storage of removable media.

small-data-set packing (SDSP). In DFSMShsm, the process used to migrate data sets that contain no more than a specified amount of actual data. The data sets are written as one or more records into a VSAM data set on a migration level 1 volume.

SMF. See system management facilities.

SMS. See Storage Management Subsystem or system-managed storage.

SMS complex. A collection of systems or system groups that share a common configuration. All systems in an SMS complex share a common active control data set (ACDS) and a communications data set (COMMDS). The systems or system groups that share the configuration are defined to SMS in the SMS base configuration.

SMS control data set. A VSAM linear data set containing configurational, operational, or communications information that guides the execution of the Storage Management Subsystem. See also source control data set, active control data set, and communications data set.

source control data set (SCDS). A VSAM linear data set containing an SMS configuration. The SMS configuration in an SCDS can be changed and validated using ISMF. See also active control data set and communications data set.

storage administration group. A centralized group within the data processing center that is responsible for managing the storage resources within an installation.

storage administrator. A person in the data processing center who is responsible for defining, implementing, and maintaining storage management policies.

storage class. A collection of storage attributes that identify performance goals and availability requirements, defined by the storage administrator, used to select a device that can meet those goals and requirements.

storage control. The component in a storage subsystem that handles interaction between processor channel and storage devices, runs channel commands, and controls storage devices.

storage group. A collection of storage volumes and attributes, defined by the storage administrator. The collections can be a group of DASD volumes or tape volumes, or a group of DASD, optical, or tape volumes treated as a single object storage hierarchy. See also VIO storage group, pool storage group, tape storage group, object storage group, object backup storage group, and dummy storage group.

storage hierarchy. An arrangement of storage devices with different speeds and capacities. The levels of the storage hierarchy include main storage (memory, DASD cache), primary storage (DASD containing uncompressed data), migration level 1 (DASD containing data in a space-saving format), and migration level 2 (tape cartridges containing data in a space-saving format). See also primary storage, migration level 1, migration level 2, and object storage hierarchy.

storage location. A location physically separate from the removable media library where volumes are stored for disaster recovery, backup, and vital records management.

storage management. The activities of data set allocation, placement, monitoring, migration, backup, recall, recovery, and deletion. These can be done either manually or by using automated processes. The Storage Management Subsystem automates these processes for you, while optimizing storage resources. See also Storage Management Subsystem.

Storage Management Subsystem (SMS). A DFSMS facility used to automate and centralize the management of storage. Using SMS, a storage administrator describes data allocation characteristics, performance and availability goals, backup and retention requirements, and storage requirements to the system through data class, storage class, management class, storage group, and ACS routine definitions.

storage subsystem. A storage control and its attached storage devices. See also tape subsystem.

stripe. In DFSMS, the portion of a striped data set that resides on one volume. The records in that portion are not always logically consecutive. The system distributes records among the stripes such that the volumes can be read from or written to simultaneously to gain better performance. Whether a data set is striped is not apparent to the application program.

striped data set. In DFSMS, an extended-format data set consisting of two or more stripes. SMS determines the number of stripes to use based on the value of the SUSTAINED DATA RATE attribute in the storage class. Striped data sets can take advantage of the sequential data striping access technique. See also striping and stripe.

striping. A software implementation of a disk array that distributes a data set across multiple volumes to improve performance.

system data. The data sets required by OS/390 or its subsystems for initialization and control.

system group. All systems that are part of the same Parallel Sysplex and are running the Storage Management Subsystem with the same configuration, minus any systems in the Parallel Sysplex that are explicitly defined in the SMS configuration.

system-managed buffering for VSAM. A facility available for system-managed extended-format VSAM data sets in which DFSMSdfp determines the type of buffer management technique, along with the number of buffers to use, based on data set and application specifications.

system-managed data set. A data set that has been assigned a storage class.

system-managed storage. Storage managed by the Storage Management Subsystem. SMS attempts to deliver required services for availability, performance, and space to applications. See also system-managed storage environment.

system-managed storage environment. An environment that helps automate and centralize the management of storage. This is achieved through a combination of hardware, software, and policies. In the system-managed storage environment for OS/390, the function is provided by DFSORT, RACF, and the combination of DFSMS and OS/390.

system-managed tape library. A collection of tape volumes and tape devices, defined in the tape configuration database. A system-managed tape library can be automated or manual. See also tape library.

system-managed volume. A DASD, optical, or tape volume that belongs to a storage group. Contrast with DFSMShsm-managed volume and DFSMSrmm-managed volume.

system management facilities (SMF). A component of OS/390 that collects input/output (I/O) statistics, provided at the data set and storage class levels, which helps you monitor the performance of the direct access storage subsystem.

system programmer. A programmer who plans, generates, maintains, extends, and controls the use of an operating system and applications with the aim of improving the overall productivity of an installation.

T

tape configuration database. One or more volume catalogs used to maintain records of system-managed tape libraries and tape volumes.

tape librarian. The person who manages the tape library.

tape library. A set of equipment and facilities that support an installation's tape environment. This can include tape storage racks, a set of tape drives, and a set of related tape volumes mounted on those drives. See also system-managed tape library and automated tape library.

Tape Library Dataserver. A hardware device that maintains the tape inventory associated with a set of tape drives. An automated tape library dataserver also manages the mounting, removal, and storage of tapes.

tape mount management. The methodology used to optimize tape subsystem operation and use, consisting of hardware and software facilities used to manage tape data efficiently.

tape storage group. A type of storage group that contains system-managed private tape volumes. The tape storage group definition specifies the system-managed tape libraries that can contain tape volumes. See also storage group.

tape subsystem. A magnetic tape subsystem consisting of a controller and devices, which allows for the storage of user data on tape cartridges. Examples of tape subsystems include the IBM 3490 and 3490E Magnetic Tape Subsystems.

tape volume. The recording space on a single tape cartridge or reel. See also volume.

TB. Terabyte.

temporary data set. An uncataloged data set whose name begins with & or &&, that is normally used only for the duration of a job or interactive session. Contrast with permanent data set.

tera (T). The information-industry meaning depends upon the context: (1) T = 1,099,511,627,776 (2^40) for real and virtual storage; (2) T = 1,000,000,000,000 (10^12) for disk storage capacity; (3) T = 1,000,000,000,000 (10^12) for transmission rates.

threshold. A storage group attribute that controls the space usage on DASD volumes, as a percentage of occupied tracks versus total tracks. The low migration threshold is used during primary space management and interval migration to determine when to stop processing data. The high allocation threshold is used to determine candidate volumes for new data set allocations. Volumes with occupancy lower than the high threshold are selected over volumes that meet or exceed the high threshold value.

TMM. See tape mount management.

TMP. Terminal monitor program.

TSO. Time sharing option.

U

UCB. See unit control block.

UIM. Unit information module.

unit affinity. A request that the system allocate different data sets residing on different removable volumes to the same device during execution of the step, to reduce the total number of tape drives required to execute the step. Explicit unit affinity is specified by coding the UNIT=AFF JCL keyword on a DD statement. Implicit unit affinity exists when a DD statement requests more volumes than devices.

unit control block (UCB). A control block in storage that describes the characteristics of a particular I/O device on the operating system.

user group. A group of users in an installation who represent a single department or function within the organization.

V

validate. To check the completeness and consistency of an individual ACS routine or an entire SMS configuration.

VIO. Virtual I/O.

virtual input/output (VIO) storage group. A type of storage group that allocates data sets to paging storage, which simulates a DASD volume. VIO storage groups do not contain any actual DASD volumes. See also storage group.

vital records. A data set or volume maintained for meeting an externally imposed retention requirement, such as a legal requirement. Compare with disaster recovery.

vital record specification. Policies defined to manage the retention and movement of data sets and volumes for disaster recovery and vital records purposes.

volume. The storage space on DASD, tape, or optical devices, which is identified by a volume label. See also DASD volume, optical volume, and tape volume.

volume mount analyzer. A program that helps you analyze your current tape environment. With tape mount management, you can identify data sets that can be redirected to the DASD buffer for management using SMS facilities.

volume status. In the Storage Management Subsystem, indicates whether the volume is fully available for system management.

VRRDS. Variable-length relative-record data set.

VRS. Vital record specification.

VSAM. Virtual Storage Access Method.

VSAM record-level sharing (VSAM RLS). An extension to VSAM that provides direct record-level sharing of VSAM data sets from multiple address spaces across multiple systems. Record-level sharing uses the System/390 Coupling Facility to provide cross-system locking, local buffer invalidation, and cross-system data caching.

VSAM sphere. The base cluster of a VSAM data set and its associated alternate indexes.

VSAM volume data set (VVDS). A data set that describes the characteristics of VSAM and system-managed data sets residing on a given DASD volume; part of an integrated catalog facility catalog. See also basic catalog structure and integrated catalog facility catalog.

VTOC. Volume table of contents.

VTS. Virtual tape server.

VVDS. See VSAM volume data set.

W

WTO. Write-to-operator.

X

XRC. Extended remote copy.
Index
Numerics
3-way audit 158

A
ABARS 131
ACS read-only variable
   &ACSENVIR 171
   &BLKSIZE 42
   &MGMTCLAS 171
   &MSPOLICY 171
   &MSPOOL 171
   &STORGRP 171
   &UNIT 61
ACS routines
   modifying for VSAM data striping 15
Allocation
   No secondary space 19
   Secondary space 17
ARA 54
ARCHBACK macro 106
ARCHMIG macro
   FORCEML1=YES 100
ARCINBAK program 106
ARCMDEXT installation exit 102
ARCTPEXT sample program 86
AUX host 82

B
BACKDS command 106
BACKUP processing 135
BACKVOL command
   DUMP 101
BDW
   Extended format 50
   Non-extended format 49
BLKSIZE 38
BLKSZLIM 38
BSAM access method 37
BUFL 51
BUFNO 51
BUILD macro 50
BUILDRCD macro 50

C
CAMLST macro 69
Candidate volumes 24
CC keyword
   for data set backup function 125
Concurrent Copy 124
Control Area
   size calculation 26
Control Interval
   ensuring adequate size 27
COPYSDB 40
Coupling Facility 77
CPOOL macro 51

D
DADSM 64
data set stacking 63
DCBEBLKSI 51
DEFINE command
   SWITCHTAPES AUTOBACKUPEND 121
   SWITCHTAPES PARTIALTAPE 124
DEVSERV command
   QDASD 12
DEVTYPE
   INFO=AMCAP 52
DFSMShsm startup parameter
   CDSQ=YES 82
   CDSSHR=RLS 82
   CDSSHR=YES 82
   HOST= 83
   HOSTMODE= 80
   PRIMARY=YES 83
DFSORT 47
   using large tape block size 47
DPRTY parameter 86
DS1CHA
   data set changed bit 101
DSTORE processing 135

E
ECS
   Enhanced Catalog Sharing 76
EDGHSKP utility 135
EDGJRPT sample job 138
EDGJVLTM sample job 138
EDGRMMxx OPTION parameter
   MOVEBY 155
   RETAINBY 155
EDGRPTD utility 137
EDGUTIL utility
   MEND parameter 160
   VERIFY parameter 160
   VERIFY(VOLCAT) parameter 161
EDGUX100 installation exit 169
Enhanced catalog sharing 76
Event-triggered tracking 178
exporting logical tape volumes 139
EXPROC processing 135

F
Fast subsequent migration 94
Forward Space File command 71
FREEPOOL 50

G
GETMAIN macro 51
GETPOOL macro 50
Guaranteed space
   allocating primary space 17

H
HBACKDS command 106
High Allocated RBA 30
High speed search 73
High Used RBA 30
HOLD command
   ABACKUP 85
   ARECOVER 85

I
ICEGENER utility 47
ICEMAC option 47
IDCAMS utility 48
IEBGENER
   PARM 45
IEBGENER utility 45
IFHSTATR utility 47
IGDACSXT installation exit 170
IHAARA 54
importing logical tape volumes 140
inventory management 135
IOBLENRD 52
ISPF utility 65

L
Large block interface 36
Large tape block size 36
LBI 36
library manager database 159
LISTCAT 29
Locate Block ID 73
location priority number 136

M
MCDS 101
MCVT control block 86
MDR record 45
MEND 162
MEND(SMSTAPE) 161
MIGRATE command
   CONVERT 100
migration volumes
   ML1 95
   ML1 OVERFLOW 108
   ML2 95
Multiple DFSMShsm hosts 79

O
OBR record 45
OPEN for UPDAT 49
OPTCD=H 49

P
pre-ACS exit 170

Q
QSAM access method 37
QUERY command
   IMAGE 85

R
RACF
   STGADMIN.ADR.DUMP.CNCURRNT 125
   STGADMIN.DPDSRN 65
RDJFCB macro 54
Read Block ID command 73
Rebuild of CF structure 77
reconnection 98
Recover takeaway 127
RECYCLE command 97
RELEASE command
   ABACKUP 85
   ARECOVER 85
RENAME macro 69
REPORT command
   DAILY 104
RMODE31=BUFF 51
RPTEXT processing 135

S
SDB=
   INPUT 45
   LARGE 45
   SMALL 45
   YES 46
SETSYS command
   ABARS 85
   CSALIMITS 85
   DASDSELECTIONSIZE 116
   DEMOUNTDELAY 117
   DSBACKUP TASKS 110
   MIGRATIONCLEANUPDAYS 101
   TAPEMIGRATION RECONNECT 98
   USERDATASETSERIALIZATION 98
SMF records
   Type 14 and 15 43
   Type 21 43
   Type 30 44
Space amount calculation 26
STORAGE macro 51
Sustained Data Rate 12
system determined block size
   for LBI 40

T
tape configuration database 159
Tape labels
   supported for large tape block size 37
TAPEBLKSZLIM 39
TARGET keyword
   for data set backup function 112
TMM 61

U
UCB
   extension 40
Undefined record 51
UNIT=AFF 59

V
Variable Blocked records 49
VERIFY(SMSTAPE) 161
Virtual Tape Server
   Advanced Function 139
   Overview 138
VMA 48
VOL=REF 62
Volume set 154
VRSEL processing 135
VSAM data striping 9
   Layering 28
   Structure 25
   supported organizations 10

W
Work load manager 87
workstation 179
IBM Redbooks review
Your feedback is valued by the Redbook authors. In particular, we are interested in situations where a Redbook "made the difference" in a task or problem you encountered. Using one of the following methods, please review this Redbook, addressing its value, subject matter, structure, depth, and quality as appropriate.
• Use the online Contact us review redbook form found at ibm.com/redbooks
• Fax this form to: USA International Access Code + 1 914 432 8264
• Send your comments in an Internet note to redbook@us.ibm.com

Document Number: SG24-6120-00
Redbook Title: DFSMS Release 10 Technical Update

Review:

What other subjects would you like to see IBM Redbooks address?

Please rate your overall satisfaction:
O Very Good   O Good   O Average   O Poor

Please identify yourself as belonging to one of the following groups:
O Customer   O Business Partner   O Solution Developer   O IBM, Lotus or Tivoli Employee   O None of the above

Your email address:

The data you provide here may be used to provide you with information from IBM or our business partners about our products, services or activities.
O Please do not use the information collected here for future marketing or promotional contacts or other communications beyond the scope of this transaction.

Questions about IBM's privacy policy? The following link explains how we protect your personal information: ibm.com/privacy/yourprivacy/


DFSMS Release 10
Technical Update
DFSMS, formerly known as DFSMS/MVS, continues to add enhancements to performance, availability, system throughput, and usability for data access and storage management.

DFSMS Release 10 is the first release of DFSMS that is available solely with OS/390. DFSMS Release 10 is packaged and shipped with OS/390 Version 2 Release 10, and offers the ease of installation, integration, and maintenance inherent in the OS/390 product.

This IBM Redbook provides an in-depth description of all the new enhancements made to DFSMS Release 10. This book is designed to help storage administrators plan, install, and migrate to DFSMS Release 10.

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information: ibm.com/redbooks

SG24-6120-00   ISBN 0-7384-1824-2