Technical Update
One-stop guide to know all of the
enhancements to DFSMS!
Hyeong-Ge Park
Frank Byrne
Kohji Otsutomo
ibm.com/redbooks
SG24-6120-00
November 2000
Take Note!
Before using this information and the product it supports, be sure to read the general information in
Appendix A, “Special notices” on page 189.
This edition applies to DFSMS Release 10 for use with OS/390 Version 2 Release 10, Program Number
5657-A01.
When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the
information in any way it believes appropriate without incurring any obligation to you.
DFSMS Release 10 is the first release of DFSMS that is available solely with
OS/390. DFSMS Release 10 is packaged and shipped with OS/390 Version 2
Release 10 and offers the ease of installation, integration, and maintenance
inherent in the OS/390 product.
Thanks to the following people for their invaluable contributions to this project:
Chapter 1. Introduction to DFSMS Release 10
1.1.1 DFSMSdfp
DFSMSdfp provides a foundation for storage management, data
management, and program management.
The DFSMS binder and loader are the foundation for program management. They
provide functions to create, load, modify, list, read, transport, and copy
executable programs.
1.1.2 DFSMSdss
DFSMSdss provides comprehensive DASD data manipulation functions. You
can use DFSMSdss for data movement and replication, eliminating DASD
free-space fragmentation, and data backup and recovery at either the data
set or volume level, as well as for data set and volume conversion to
system-managed storage.
1.1.4 DFSMSrmm
DFSMSrmm helps you manage your removable media, such as tape
cartridges, reels, and optical volumes. DFSMSrmm provides a central on-line
inventory of the resources in your removable media library and in storage
locations outside your removable media library. For example, DFSMSrmm
can keep track of the usage of tape cartridges at both the volume and data
set levels by interacting with DFSMSdfp and/or automatic tape libraries, and
create movement reports based on the administration policy you defined.
New with DFSMS Release 10, the ACS routines can now determine whether
a referenced DD resides on SMS DASD, SMS tape, or non-SMS devices
when data sets are not stacked. This permits direct allocation using the ACS
routines and prevents job failures. Without requiring JCL changes, this
allows you to direct allocations to either disk or tape based on their
characteristics, rather than merely knowing that UNIT=AFF was specified.
Up to 39 HSM hosts can share a common set of control data sets. These
hosts can be all on a single MVS system image, or spread over several
systems.
ABARS is not impacted by this change. The main host will still manage up to
64 ABARS secondary address spaces per MVS image.
Table 1 shows the available feature code for OS/390 Version 2 Release 10
base.
Table 1. OS/390 Version 2 Release 10 base feature code
[Flattened table: feature codes 6112 and 6113 against components 5713 and 5976]
Note that you now have only two options; prior to DFSMS Release 10, you could
select other combinations of these components than those appearing in the
table.
For example, sharing data sets between OS/390 Version 2 Release 10 and
OS/390 Version 2 Release 4 with DFSMS/MVS Version 1 Release 4 is
beyond the scope of “N-3”. However, we provide coexistence PTFs for
DFSMS/MVS Version 1 Release 4 to prevent your data sets and catalogs
from being corrupted. If data set sharing does not involve any sysplex
functions, you can share data sets beyond “N-3”. However, you need
to be careful about certain DFSMS functions that involve OS/390’s sysplex
services, such as VSAM RLS sharing; your applications or DBMS may also
exploit these services. If any of these situations could apply to your
installation, it is very important that you keep your installations within four
consecutive OS/390 releases, so you can be confident that your installations
are fully supported.
However, to make sure that you have the latest and most complete
information, we recommend that you refer to the latest version of the
following manuals:
• OS/390 DFSMS Migration, SC26-7329
• OS/390 Planning for Installation, GC28-1726
Also, you need to ask your IBM service representative to get the latest
maintenance information. Preventive service package (PSP) bucket
information is available under the OS390R10 entry.
[Figure: striping writes consecutive data units D1 through D4 across separate volumes in parallel]
Figure 2. Example of data class definition requesting extended format data set
DEVSERV QDASD,6600
IEE459I 17.09.38 DEVSERV QDASD 631
UNIT VOLSER SCUTYPE DEVTYPE CYL SSID SCU-SERIAL DEV-SERIAL EF-CHK
6600 SS6600 2105E20 2105 3339 8906 0113-xxxxx-12089 **OK**
**** 1 DEVICE(S) MET THE SELECTION CRITERIA
**** 0 DEVICE(S) FAILED EXTENDED FUNCTION CHECKING
If the EF-CHK field shows **OK**, the volume is eligible for extended format
data sets.
SDR is a numeric value which represents the data transfer rate required to
process the data set. The system uses SDR to derive the number of stripes.
The value is divided by four if the DASD volumes are 3390 track format and
by three if they are 3380 track format, and the result is the number of stripe
volumes. Since Figure 4 specifies SDR=16, the system tries to allocate a
data set with this storage class across four 3390 volumes or six 3380
volumes. For more detailed information, refer to the manual OS/390
DFSMSdfp Storage Administration Reference, SC26-7331.
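As a back-of-envelope check, the stripe-count derivation described above can be sketched in Python (the function name and parameters are ours, purely for illustration; this is not DFSMS code):

```python
import math

def stripe_count(sdr, track_format):
    """Derive the number of stripe volumes from the sustained data rate (SDR).

    Per the rule above: divide SDR by 4 for 3390 track format and by 3 for
    3380 track format, rounding up to a whole number of volumes.
    """
    divisor = 4 if track_format == "3390" else 3
    return math.ceil(sdr / divisor)

# SDR=16 yields four 3390 volumes or six 3380 volumes, as in the text.
print(stripe_count(16, "3390"))  # 4
print(stripe_count(16, "3380"))  # 6
```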
Guaranteed Space . . . . . . . . . Y (Y or N)
Guaranteed Synchronous Write . . . (Y or N)
CF Cache Set Name . . . . . . . . (up to 8 chars or blank)
CF Direct Weight . . . . . . . . . (1 to 11 or blank)
CF Sequential Weight . . . . . . . (1 to 11 or blank)
The system derives the number of stripes from the maximum of: the number
of volume serial numbers specified, including asterisks (*); the unit count on
the JCL; and the unit count in the data class. For example, if you define a
VSAM data set through the IDCAMS DEFINE CLUSTER command with VOLUMES(* * * * * *)
and it has a Guaranteed Space attribute and a non-zero SDR, the system will
try to allocate the data set across six volumes. Or, if you allocate a VSAM
data set through a JCL DD card with the UNIT=(xxxx,5) keyword, the system will
try to allocate the data set across five volumes.
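The "maximum of the counts" rule above can be sketched as follows (the helper name and keyword parameters are hypothetical, chosen only to mirror the three sources the text names):

```python
def derived_stripes(volsers=0, jcl_unit_count=0, dataclass_unit_count=0):
    # The system uses the largest of: the volume serial count (asterisks
    # included), the JCL unit count, and the data class unit count.
    return max(volsers, jcl_unit_count, dataclass_unit_count)

# VOLUMES(* * * * * *) on DEFINE CLUSTER -> six stripes
print(derived_stripes(volsers=6))          # 6
# UNIT=(xxxx,5) on a DD card -> five stripes
print(derived_stripes(jcl_unit_count=5))   # 5
```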
The many anomalies related to the use of guaranteed space with specific
volume specifications apply to VSAM, consistent with the current selection
implementation for non-VSAM. We strongly recommend that you avoid the
use of specific volume specifications with a Guaranteed Space attribute.
PROC STORCLAS
:
:
FILTLIST STRIPE INCLUDE(*.CRIT.** ) /* @02 */
:
:
p = (P ÷ N), where P is the primary quantity and N is the stripe count
[Figure: Guaranteed Space=N, stripe count 4; CYL(120) is divided into 30 cylinders per stripe]
The system will divide the primary quantity by four, since SDR=16 implies a
stripe count of four. Assuming there are at least four volumes available to
satisfy this request and the system does not reduce the number of stripes, it
will allocate 30 cylinders on each volume.
Note: If you actually allocate a VSAM striped data set with this example, the
system will adjust the amount you specified, CYL(120), and it will allocate
120 cylinders and eight tracks in total, that is, 30 cylinders and two tracks on
each volume. We describe in detail how the system adjusts allocation for VSAM
striped data sets in “Space amount calculation for VSAM striped data sets” on
page 26.
[Figure: Guaranteed Space=Y; the full 120 cylinders are allocated on each 3390 volume]
s = (S ÷ N), where S is the secondary quantity and N is the stripe count
Note: VSAM striped data sets always use the secondary amount you
specified to perform a secondary allocation. Other system-managed VSAM
data sets can use the primary amount for secondary allocation if the
corresponding data class has the Add’l Volume Amount=P attribute, but this
attribute is ignored for VSAM striped data sets.
[Figure: each stripe on volumes A, B, C, and D has a primary (P) extent and, on extend, takes a secondary (S) extent on its own volume]
Figure 9. Secondary space allocation when all of the stripes have enough space
Figure 10 shows an example of what occurs when one of the stripes does not
have enough space.
[Figure: volume B is occupied by other data sets, so on extend its stripe takes the secondary (S) extent on a new volume, E]
Figure 10. Secondary space allocation when there is insufficient space for one stripe
Unlike non-VSAM striped data sets, VSAM striped data sets can extend to
other volumes.
When the system finds that volume B does not have enough space to
extend, it extends to another volume, E. This example assumes that the data
set has a candidate volume in the catalog, and the system selects volume E to
satisfy the allocation request. If the data set does not have any candidate
volumes in its catalog entries, the extension will fail.
Note that the system will not share a stripe when extending. This is why the
data set in the figure extends to volume E, and not to A, C, or D.
[Figure: a non-striped data set with a 120-cylinder primary extends volume by volume, from A to B to C, taking 120 cylinders on each]
Figure 11. Non-striped data set uses the primary amount to extend to another volume
When this data set extends to another volume, the primary quantity of 120
cylinders is allocated on volume B, and then on volume C. If it is a
guaranteed space data set, it will have 120 cylinders of space on each
volume at primary allocation.
Since a striped data set is already on multiple volumes, a zero secondary
specification is interpreted differently. In this case, the requirement is
assumed to be to extend each stripe by an amount up to the value of the
primary allocation. As with normal secondary allocation, the new extents can
be on the existing volumes, or on new ones, as long as the data set has
candidate volumes.
[Figure 12: a 3-striped data set on volumes A, B, and C starts with a 40-cylinder extent per stripe and, on each extend, adds another 40-cylinder extent to every stripe on its own volume]
Since this is a non-guaranteed space data set, each volume has 40 cylinders
of space at primary allocation. When the data set extends, it uses the primary
amount of 40 cylinders. Each stripe can extend up to 120 cylinders, up to
360 cylinders in total. If VSAM cannot allocate space on the existing volumes,
it will not extend the data set, as the data set has no candidate volumes.
[Figure: each stripe again extends in 40-cylinder increments, but a stripe can spill over to a candidate volume when its own volume is full]
Figure 13. Striped VSAM data set also takes candidate volumes into account
Since this is a non-guaranteed space data set, each volume has 40 cylinders
of space at primary allocation. This is the same as in Figure 12 on page 21.
When this data set extends, it uses the primary amount of 40 cylinders. Each
stripe can extend up to 160 cylinders, up to 480 cylinders in total, as the
system takes candidate volumes into account. If the system cannot allocate
space on the existing volume, it will extend to another volume, as the data
set has a candidate volume.
[Figure: guaranteed space; each stripe gets the full 120-cylinder primary on its own volume, and then extends in 40-cylinder increments (primary ÷ stripe count)]
Figure 14. Guaranteed space VSAM striped data set should have the same amount
Since this is a guaranteed space data set, it has 120 cylinders of space on
each volume at primary allocation. It can extend up to 160 cylinders per
stripe, up to 480 cylinders in total. The amount used for secondary
allocation is the primary amount divided by the stripe count; therefore,
VSAM uses 40 cylinders in this example. It may extend to another volume, as
long as the data set has a candidate volume.
As you can see from these two examples, there is no difference in the
maximum amount of space available between guaranteed space and
non-guaranteed space.
[Figure: allocation outcomes for a multi-volume request on eight 3390 volumes:
• Non-extended format, Guaranteed Space=N: one primary volume plus seven candidate volumes
• Extended format, Guaranteed Space=Y, SDR=0: non-striped multi-volume data set (no candidate volumes)
• Extended format, Guaranteed Space=Y, SDR<>0: 8-striped data set (no candidate volumes)
• Extended format, Guaranteed Space=N, SDR=32: 8-striped data set (no candidate volumes)]
[Figure: control intervals are assigned round-robin across the stripes: CI 0, CI 3, CI 6, ... on stripe 1; CI 1, CI 4, CI 7, ... on stripe 2; CI 2, CI 5, CI 8, ... on stripe 3. Each control area (CA) spans one track on every stripe]
Figure 16. Physical layout of a VSAM striped data set
A control interval (CI) is the minimal unit of I/O when the system makes I/O
requests. It contains logical records and control information, such as the
remaining free space in the CI. A CI consists of one or more equal-length
physical blocks. You can specify the CI size manually, or the
system can determine it based on your request. Once the CI size has been
determined, the system also decides the size of each physical block and the
number of physical blocks per CI.
A control area (CA) is a fixed-length contiguous area in which CIs are grouped
together. The system allocates space in multiples of the CA size. When you
use a KSDS, an index CI at the lowest level (referred to as the sequence set)
controls all CIs in a data CA.
As the figure shows, data is striped by CI for VSAM striped data sets,
and a CA encompasses all of the stripes.
The CA size calculation is different from that of non-striped VSAM data sets.
A CA is spread equally across all of the stripe volumes. Therefore, the
maximum CA size is changed to 16 tracks, as a striped VSAM data set can
have up to 16 stripes, and the minimum CA size equals the stripe count in
tracks. The CA size calculation is based on the primary quantity and
secondary quantity you specify, and the number of stripes. You can use the
following rules along with those values to figure out the CA size that the
system would derive:
• If the stripe count is greater than 8, the stripe count will be the CA size.
• Otherwise:
- If the stripe count is equal to or greater than the minimum of the primary
quantity in tracks and the secondary quantity, the stripe count will be the
CA size.
- Else, the value derived by the following formula will be the CA size, if it
is equal to or smaller than 16. If it is greater than 16, then the value
minus the stripe count will be the CA size.
For example, if you define a non-striped KSDS with TRACKS(1 1),
the size of the data CA will be one track. Assume that four data CIs fit
into a track, and that a sequence set CI can hold up to four index entries.
On this assumption, all data CIs in a data CA can be used (see Figure 17).
[Figure 17: non-striped case, DEFINE CLUSTER(...TRACKS(1 1)); each one-track data CA on VOLA holds four data CIs, and a sequence set CI holds an index entry for each of them]
However, what if this is a 3-striped KSDS with the same definition? In this
case, a sequence set (lowest-level index) CI must hold 12 index entries
to control all CIs in a CA, as the CA size is three tracks. If we make the
same assumption about the index CI size, it can hold four index
entries; therefore, eight out of the 12 data CIs in a data CA cannot be used
(see Figure 18).
[Figure: the three-track data CA holds data CIs 12 through 23, but the sequence set CI has room for only four index entries (CIs 12 through 15), leaving the remaining data CIs unaddressable]
Figure 18. An index CI that is too small to hold all index entries for a data CA
You need to ensure that the index CI size is big enough to hold index entries
for all data CIs in a CA, in order to avoid wasting space.
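The arithmetic behind Figure 17 and Figure 18 can be sketched as follows (the helper name and parameters are ours, purely for illustration):

```python
def unusable_data_cis(ca_tracks, data_cis_per_track, index_entries_per_ci):
    # A data CA holds ca_tracks * data_cis_per_track CIs, but the
    # sequence-set index CI can only address index_entries_per_ci of them;
    # any CIs beyond that are wasted.
    total = ca_tracks * data_cis_per_track
    return max(0, total - index_entries_per_ci)

# Non-striped KSDS, one-track CA: all four data CIs are usable.
print(unusable_data_cis(1, 4, 4))   # 0
# 3-striped KSDS, three-track CA: 8 of the 12 data CIs are wasted.
print(unusable_data_cis(3, 4, 4))   # 8
```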
For example, we defined an ESDS with 2500 cylinders of primary space and 500
cylinders of secondary space. The storage class has an SDR value of 28.
An output of IDCAMS LISTCAT of the data set, before any data was loaded,
showed the following fields:
ATTRIBUTES
STRIPE-COUNT-----------7
ALLOCATION
SPACE-TYPE---------TRACK HI-A-RBA------1920307200
SPACE-PRI----------37506 HI-U-RBA---------------0
SPACE-SEC-----------7504
EXTENTS----------------7
The cylinder allocation values have been converted to tracks and rounded up
to the next highest multiple of 14, because the stripe count is 7 and the CA
size is 14 (as discussed in 2.1.5.5, “VSAM structure and space calculation”
on page 25).
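The rounding that LISTCAT revealed can be reproduced with a short sketch (the function name is ours; we assume 15 tracks per 3390 cylinder):

```python
import math

TRACKS_PER_CYL_3390 = 15

def tracks_rounded_to_ca(cylinders, ca_tracks):
    # Convert the cylinder request to tracks, then round up to the next
    # multiple of the CA size, as described in the text.
    tracks = cylinders * TRACKS_PER_CYL_3390
    return math.ceil(tracks / ca_tracks) * ca_tracks

# CYL(2500 500), stripe count 7, CA size 14 tracks:
print(tracks_rounded_to_ca(2500, 14))  # 37506 (matches SPACE-PRI)
print(tracks_rounded_to_ca(500, 14))   # 7504  (matches SPACE-SEC)
```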
As you can see, the high allocated RBA is that of the whole cluster and not of
the individual stripe.
The number of tracks per CA is shown as 2, as the stripe count is 7. The true
number of tracks per CA is 14 (as discussed in “CA size calculation for VSAM
striped data sets” on page 26); this value is not listed anywhere.
For stripe 1:
ALLOCATION
HI-A-RBA------2304512000 EXTENT-NUMBER----------2
HI-U-RBA------2211840000 EXTENT-TYPE--------X'00'
LOW-RBA----------------0 TRACKS----------5358
HIGH-RBA------1920307199
LOW-RBA-------1920307200 TRACKS----------1072
HIGH-RBA------2304511999
For stripes 2 - 7:
HI-A-RBA------2304512000 EXTENT-NUMBER----------2
HI-U-RBA---------------0 EXTENT-TYPE--------X'00'
LOW-RBA---------------0 TRACKS----------5358
HIGH-RBA------1920307199
LOW-RBA-------1920307200 TRACKS----------1072
HIGH-RBA------2304511999
As you can see, only the volume record for stripe 1 contains a value for
the high used RBA.
[Chart: elapsed time in seconds, on a scale of 0 to 1,000]
Note that the purpose of the performance measurement is only to give you a
general idea of how useful this new function is. The test we made was not
formal, and we do not guarantee that you would get the same results as in this
figure, since many factors affect performance measurement, such as I/O
configuration, software configuration, workload distribution, and so on.
However, this does not mean that migrating non-striped VSAM data sets to
striped VSAM data sets is not worth doing, because you generally do not have
any data sets that are only accessed randomly during their life cycle. In almost all
cases, sequential processing is also involved, such as loading data into VSAM
data sets, making backup copies of VSAM data sets, or passing data
sequentially to another program like DFSORT. Batch processing is invariably
sequential in nature. Since VSAM striping will improve this sequential
processing, the overall performance of your applications should improve.
In addition, VSAM striping will help improve performance even for direct
processing in the following cases:
• When the CA split process is involved.
When your application processes a KSDS, CA splits may occur. During
the split process, the system moves half of the data in a CA to a new CA.
Since the system performs this series of I/Os sequentially, the process for
a striped data set can be completed faster than that of a non-striped data
set, as a CA is striped across multiple volumes.
• When single volume data sets are converted to striped data sets.
When you use a single volume data set and an application makes
concurrent I/O requests against the data set, volume-level I/O contention
can be observed unless you use an ESS with its PAV/MA features. Since
VSAM striping spreads data across multiple volumes, the chance of
volume-level I/O contention is lower than with a single volume data set.
The effect is the same as for a non-striped multi-volume data set, if you
have chosen to allocate non-striped multi-volume data sets for this
purpose.
So why is VSAM striping used? The reason involves channel speed versus
physical device speed. When an application transfers data to or from a
device, it uses only one channel path; therefore, the maximum data bandwidth
per volume is 17 MB per second when an ESCON channel is used.
On the other hand, the ESS’s lower interface has 40 MB per second of
bandwidth per direction, and each device adapter has two paths for read and
two paths for write. As you can see, this is much faster than the bandwidth of
a single ESCON channel; therefore, striping from the host side is worth
doing.
When you use VSAM striping, the volumes in the storage group should be
spread across DASD subsystems as much as possible. Otherwise, they will
be competing for the same channel and subsystem resources.
For example, if an active data set with a stripe count of five has to be backed
up when Concurrent Copy is not usable, then perform the following steps:
1. Create a temporary data set, which also has a stripe count of five.
2. Stop access to the active data set.
3. Copy the active data set to the temporary data set.
4. Allow access to the active data set.
5. Copy the temporary data set to tape.
6. Delete the temporary data set.
The purpose of these PTFs is the same — that is, to prevent the pre-release
system from corrupting the VSAM striped data sets. However, these PTFs
work a bit differently (see Figure 21), so this information may help you plan.
Figure 21. Down-level system cannot open VSAM striped data sets
Otherwise, the system will not allow you to open the data set. Even if all of
the above criteria are met, there will be no support for extension. The system
will fail the request with the error message IEC070I, as you can see from
Figure 21, if it needs to extend (even within the same volume).
There would be little increase in the track capacity of the widely used 3380
and 3390 device types. In addition to this, the major reason is that there are a
large number of programs, both supplied by vendors and written in-house,
which browse/edit disk data sets, all of which would need to be changed.
The existing fields describing block size in the HDR2/EOV2/EOF2 labels are
5 bytes long and hold the value in EBCDIC. When large blocks are written,
these fields will be zero (all X'F0's), and offsets 71 to 80 will contain the
block size in EBCDIC.
Note: We do not recommend that you have large block tape data sets until all
of your systems in a MAS have migrated to OS/390 Version 2 Release 10. We
also do not recommend that you specify the BLKSIZE parameter, as this may
create a dependency on certain devices.
If a data set is being opened for output and there is no block size specified, or
if BLKSIZE=0 on return from the DCB OPEN exit and the installation OPEN
exit, the system will calculate the optimum value to be used. This is
described, in detail, in the section on BLKSIZE in the manual, OS/390
DFSMS Using Data Sets, SC26-7339. Here we describe the fundamental
principles.
Each device type has a block size which gives the best compromise between
space utilization and performance. For example, the maximum track capacity
of a 3390 is 56,664 bytes, but the standard access methods support a
maximum block size of only 32,760. If 32,760 is used, only one block
can be written per track, and approximately 24 KB would be wasted. The
optimum size in this case is half-track blocking, that is, a block size of
approximately 28 KB, which allows two blocks to be written per track
with minimal wastage.
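The half-track arithmetic above can be illustrated with a simple sketch (this back-of-envelope calculation ignores inter-block gap overhead, so the real half-track block size on a 3390 is somewhat smaller than the raw division suggests):

```python
TRACK_CAPACITY_3390 = 56664   # bytes per 3390 track, per the text
MAX_BLKSIZE = 32760           # access-method limit without LBI

# One 32,760-byte block per track wastes roughly 24 KB:
wasted = TRACK_CAPACITY_3390 - MAX_BLKSIZE
print(wasted)                 # 23904 (~24 KB)

# Half-track blocking fits two blocks per track with minimal waste:
half_track = TRACK_CAPACITY_3390 // 2
print(half_track)             # 28332 (~28 KB, ignoring gap overhead)
```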
For tapes, there is no penalty on the larger block size, and the system will use
a block size as close to 32,760 as possible in a pre-OS/390 Version 2
Release 10 system. In DFSMS Release 10, for programs which use LBI,
SMS may derive a large tape block size. We describe some factors that
would affect SDB, and how SMS determines SDB for LBI programs.
The system sets these values in the data facilities area (DFA), which is an
area that programs can use to find information about DFP and DFSMS.
Note that you cannot change these values dynamically. If you need to change
them, you must re-IPL the system after modifying the DEVSUPxx member of
PARMLIB.
Tape devices can now report the block sizes they support
Two new fields have been added to the UCB extension for tape devices, which
contain the maximum and optimum block sizes supported by the device. The
system obtains these values from the tape hardware and sets them in the UCB
extension when a tape device is brought online, if possible. You cannot
control these values.
Then, how does the system select SDB for an LBI program?
The system picks the first non-zero number in the following order of
preference, as the block size limit value:
1. BLKSZLIM on the DD statement, if coded
2. Block size limit in a data class, if such a data class is assigned to the
data set
Then the system compares the block size limit and the optimum block size
available in the UCB extension and selects the smaller value as the limit.
In summary, the system never selects a block size which would go beyond
the physical tape devices’ capabilities.
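The selection order above can be sketched as follows (the function name and parameters are our own shorthand for the DD BLKSZLIM, the data class limit, and the UCB optimum block size; this is only an illustration of the rule, not system code):

```python
def block_size_limit(dd_blkszlim=0, dataclass_limit=0, ucb_optimum=None):
    # Take the first non-zero value in order of preference: the BLKSZLIM on
    # the DD statement, then the block size limit in the data class.
    limit = dd_blkszlim or dataclass_limit
    # Then cap the limit at the optimum block size reported in the UCB
    # extension, so the result never exceeds the device's capability.
    if ucb_optimum:
        limit = min(limit, ucb_optimum) if limit else ucb_optimum
    return limit

# The DD asks for 256 KB, but the device reports a 64 KB optimum:
print(block_size_limit(dd_blkszlim=262144, ucb_optimum=65536))  # 65536
```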
What if BLKSIZE is a large block and the program cannot use LBI?
Remember that the program cannot support LBI unless you code the DCBE
macro with the BLKSIZE parameter.
If you specify a large tape block size in the DCB macro and no DCBE macro
with the BLKSIZE parameter, the result will be unpredictable, as the macro
stores only the right-most halfword. For example, if you specify BLKSIZE=69632
(X'11000'), the block size stored in the DCB will be 4096 (X'1000'), and the
program may work successfully with a block size that you did not expect, or it
may get the error message IEC141I 013-20 or 013-68.
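The halfword truncation can be demonstrated with two lines of Python (purely illustrative; the masking mirrors what keeping only the right-most 16 bits does):

```python
# The DCB BLKSIZE field is a halfword, so without DCBE BLKSIZE only the
# right-most 16 bits of the coded value survive:
blksize = 69632               # X'11000'
stored = blksize & 0xFFFF     # what actually lands in the DCB
print(hex(blksize), stored)   # 0x11000 4096
```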
Or, if you specify BLKSIZE on a DD statement that is larger than 32,760 and
your program does not have the DCBE macro with the BLKSIZE parameter,
the block size in DCB is left as zero, and SDB is used. Since the program
does not support LBI, the system selects SDB within 32,760.
Figure 24. Data class has new Block Size Limit parameter
The Block Size Limit parameter has the same meaning as the JCL BLKSZLIM
parameter. If a data class with this attribute is assigned to a data set, the system
takes the value from the data class when the corresponding DD statement does
not have a BLKSZLIM parameter.
Now that DFSMS Release 10 has large tape block support, you need to make
sure that jobs which intend to make a data set with large tape blocks are not
redirected to DASDs. For example, if an LBI program with a 256K BLKSIZE
specification in DCBE tries to open a data set which is directed to DASD by
ACS routines, it will get the error message IEC141I 013-68.
In order to avoid such a situation, your ACS routine can find out the BLKSIZE
value on a DD statement, through the &BLKSIZE ACS read-only variable.
PROC STORCLAS
:
:
IF &BLKSIZE GT 32760 THEN DO /* IF BLOCKSIZE > 32760 @03 */
SET &STORCLAS = 'SCTAPE' /* DIRECT ALLOCATION TO @03 */
EXIT /* TAPE LIBRARY @03 */
END /* @03 */
:
:
:
END
Figure 25. New allocation requesting that large block should not go to DASD
We recommend that you take the following steps to retrieve the block size
information:
1. Check that SMF21LBS is valid.
Test SMF21FL1 with SMF21LB(X’20’). If it is on, SMF21LBS will be valid.
2. If it is not valid, use SMF21BSZ.
3. Check that the new 4-byte STARTIO count field is valid.
Test SMF21FL1 with SMF21LS(X’20’). If it is on, the field will be valid.
4. If it is not valid, use SMF21SIO.
Note that block size information is not always available in the record. The
system takes block size information from the DCBE, but this may not be
available, depending on how a volume is demounted. For example, consider
the case where the system unallocates a tape device as part of job step
termination, and a volume mounted on the device needs to be demounted. In
this case, the job step (program) that used the volume has already finished
and control has returned to the system; therefore, the DCBE no longer exists
in virtual storage.
A new 8-byte field, SMF30XBS, has been added to hold the large tape block
size value. SMF30XBS should always contain a valid block size; therefore,
retrieving only this new field should be sufficient. Like SMF14LBS, the
high-order 4 bytes of SMF30XBS should be zero, so retrieving the low-order 4
bytes of SMF30XBS should be sufficient.
MDR record
The MDR record now has a new 4-byte block size field at offsets 34 through 37
for IBM 3590 tape devices. For non-3590 tape devices, the block size field
remains unchanged; offsets 36 through 37 hold a 2-byte block size.
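The two layouts can be sketched with a small parser (the function name is ours and the record buffer is synthetic; only the offsets and field widths come from the text above):

```python
import struct

def mdr_block_size(record: bytes, is_3590: bool) -> int:
    # 3590 devices: 4-byte block size at offsets 34-37.
    # Other tape devices: 2-byte block size at offsets 36-37.
    # Big-endian, as on S/390.
    if is_3590:
        return struct.unpack_from(">I", record, 34)[0]
    return struct.unpack_from(">H", record, 36)[0]

# Synthetic 3590 record carrying a 256 KB large block size:
rec = bytearray(64)
struct.pack_into(">I", rec, 34, 262144)
print(mdr_block_size(bytes(rec), True))  # 262144
```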
2.2.4.1 IEBGENER
IEBGENER has been changed to process large tape block sizes. Here we
describe major changes made to IEBGENER and some considerations on
using IEBGENER.
If you need a specific block size for the output data set, you must code the
BLKSIZE parameter explicitly, rather than letting the system choose. If you
need the maximum block size that is supported by the device, then you must
code BLKSIZE=0 along with BLKSZLIM=2G on the output DD statement.
2.2.4.3 IFHSTATR
IFHSTATR can be used to print the SMF type 21 records. As we described in
2.2.3.4, “SMF record changes” on page 43, the format of SMF type 21 record
is changed to have a 4-byte block size field and a 4-byte STARTIO count field.
When the BLOCK SIZE field or the USAGE SIO field in the type 21 record
exceeds 99,999, IFHSTATR scales the value in the output. For example, if the
field is greater than 99,999 but less than 1,000,000, it is scaled to
multiples of 1,000 and the letter ”K” is appended. If the field is greater
than 999,999, it is scaled to multiples of 1,000,000 and the letter “M” is
appended.
Note that in this case the meanings of “K” and “M” differ from the meanings of
“K” and “M” in the BLKSIZE value on the DD statement. On the DD
Statement, they mean multiples of 1,024 and 1,048,576 respectively.
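The IFHSTATR-style scaling can be sketched as follows (the function name is ours; note the decimal multiples, unlike the binary K and M of the JCL BLKSIZE parameter):

```python
def scale_field(value: int) -> str:
    # K and M here mean multiples of 1,000 and 1,000,000, respectively,
    # not 1,024 and 1,048,576 as on a DD statement.
    if value > 999_999:
        return f"{value // 1_000_000}M"
    if value > 99_999:
        return f"{value // 1_000}K"
    return str(value)

print(scale_field(250_000))    # 250K
print(scale_field(2_500_000))  # 2M
```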
2.2.4.5 IDCAMS
IDCAMS REPRO command does not support large tape block sizes.
If your LBI program tries to open a data set with this option, the system will
issue the error message IEC141I 013-FE.
2.2.5.4 OPTCD=H
OPTCD=H is used to bypass VSE embedded checkpoint records. If your LBI
program tries to open a data set with this option, the system will issue error
message IEC141I 013-FD. Since the current tapes you have do not contain
any tape data sets with large tape block sizes along with VSE checkpoint
records, this should not be a problem.
Figure 27 shows the BDW format that does not support blocks longer than
32,760 bytes.
[Figure: bit 0 contains ‘1’; bits 1 through 31 contain the length of the block]
Figure 28. Extended BDW format
As you can see, bit 0 indicates whether the BDW has the new format.
When bit 0 is 1, the BDW has the new format, which we refer to as the
extended BDW format. When bit 0 is 0, the BDW has the traditional format,
which we refer to as the non-extended BDW format.
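The two BDW layouts can be sketched as follows (the function names are ours; we model the 4-byte BDW as a 32-bit big-endian integer, with the traditional format carrying the length in the first halfword and zeros in the second):

```python
def make_bdw(block_length: int) -> int:
    # Extended format: bit 0 set, bits 1-31 hold the block length.
    # Used here only when the block exceeds 32,760, per the recommendation.
    if block_length > 32760:
        return 0x8000_0000 | block_length
    # Non-extended format: halfword length followed by a zero halfword.
    return block_length << 16

def bdw_length(bdw: int) -> int:
    if bdw & 0x8000_0000:               # extended BDW format
        return bdw & 0x7FFF_FFFF
    return (bdw >> 16) & 0xFFFF         # traditional BDW format

print(bdw_length(make_bdw(69632)))  # 69632
print(bdw_length(make_bdw(8000)))   # 8000
```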
When you use QSAM to create variable-length blocked records, you do not
have to manage the BDW format, as QSAM takes care of it for you.
However, when you use BSAM, you are responsible for maintaining this
format so that LBI application programs using QSAM can process the
variable blocked records correctly.
QSAM can process the extended BDW format correctly even when the block
length in the field is equal to or less than 32,760. However, we recommend
that you use the extended BDW format only when the block length is greater
than 32,760, unless you are sure that all of your OS/390 application
programs support LBI and recognize the extended BDW format.
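The two layouts can be illustrated with a minimal sketch, assuming big-endian 4-byte words as on S/390; the helper names are hypothetical, not DFSMS services.

```python
import struct

def build_bdw(block_length: int, extended: bool) -> bytes:
    """Build a 4-byte block descriptor word (illustrative sketch)."""
    if extended:
        # Extended format: bit 0 on, block length in the remaining 31 bits
        return struct.pack(">I", 0x80000000 | block_length)
    if block_length > 32760:
        raise ValueError("non-extended BDW is limited to 32,760")
    # Traditional format: length in the first half word, second half word zero
    return struct.pack(">HH", block_length, 0)

def read_bdw(bdw: bytes) -> int:
    """Return the block length from either BDW format."""
    word = struct.unpack(">I", bdw)[0]
    if word & 0x80000000:            # bit 0 on: extended BDW
        return word & 0x7FFFFFFF
    return word >> 16                # length is in the first half word
```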
The following fragments show how to obtain the length of the block just
read (IOBLENRD). With QSAM, the IOB address is at offset +44 in the DCB
(DCBIOBA), and IOBLENRD is at offset -4 from the IOB. With BSAM, the IOB
address is at offset +16 in the DECB, and IOBLENRD is at offset -12 from
the IOB.

QSAM:
         GET   INDCB,BUFFER
         ...
         L     WORKREG1,DCB+44        DCBIOBA: ADDRESS OF IOB
         S     WORKREG1,CONST4        IOBLENRD IS AT IOB-4
         L     WORKREG2,0(WORKREG1)   LENGTH OF BLOCK READ
         ...

BSAM:
         READ  DECB1,SF,BUFFER,'S'
         CHECK DECB1
         L     WORKREG1,DECB1+16      ADDRESS OF IOB
         S     WORKREG1,CONST12       IOBLENRD IS AT IOB-12
         L     WORKREG2,0(WORKREG1)   LENGTH OF BLOCK READ
If you use BSAM, you need to issue the CHECK macro to make sure that the
corresponding read operation has completed. After that, you can test as
shown in the above example.
Note that this method works only when you do not perform chained
scheduling. Chained scheduling is an I/O technique which issues multiple
read or write channel command words (CCWs) in a single channel program. It
is also known as command chaining. For QSAM, specifying BUFNO=1 should
be sufficient to ensure that you do not use chained scheduling. For BSAM,
specifying an NCP value less than 2, or issuing each READ or WRITE macro
with its CHECK macro as a pair, has the same effect.
0(0)  1  Flags.
      1... ....  BSAM, QSAM, and (if DASD) BPAM support the large
                 block interface; the block size limit is in the next
                 doubleword.
Table 5 shows the values that would be returned as optimum and maximum
block sizes in response to DEVTYPE INFO=AMCAP.
Table 5. Optimum and maximum block size by device type
Device type   Optimum   Maximum
DUMMY         16        5,000,000
For example, you can issue this macro against a DD that you are going to
open. If the device allocated for the DD is a 3590, the macro returns a
value of 256 KB. You would then set up an appropriate DCBE parameter and
open the data set.
Note that DEVTYPE INFO=AMCAP returns binary zero values when the
program runs under a pre-OS/390 Version 2 Release 10 system.
2.2.5.10 RDJFCB
The block size value in JFCB (JFCBLKSZ) has a half-word length; therefore,
it cannot hold information when the JCL specifies a BLKSIZE value greater
than 32,760 on the respective DD statement. It also does not have
information about the BLKSZLIM or TAPEBLKSZLIM values.
For this reason, the response to an RDJFCB allocation retrieval request
has been modified to allow a program to determine the BLKSIZE and
BLKSZLIM values (information that JFCBLKSZ cannot hold).
The allocation retrieval area (ARA) header, mapped by the IHAARA macro,
has been modified. Figure 30 shows the header part of the expansion of
IHAARA macro.
ARA DSECT
ARALEN DS H Length of ARA info
ARAFLG DS B ARA flags
ARAXINF EQU X'80' ARA Extended Information Segment present
ARAXINOF DS B Offset in double words to Extended Info Segment
:
As you can see, the system sets bit 0 of ARAFLG on when the respective DD
statement has a BLKSIZE value greater than 32,760, or when a value is
present for BLKSZLIM/TAPEBLKSZLIM. You can retrieve these values from the
ARA extended information segment, which the IHAARA macro also maps:
ARAXINFO DSECT
ARAXINLN DS H Length of Extended Info Segment
DS 6B Reserved
ARAXBLKS DS DL8 Blksize
ARABKSLM DS DL8 Blksize limit for DD
LBRDJFCB CSECT
LBRDJFCB AMODE 24
LBRDJFCB RMODE 24
BAKR 14,0 SAVE REGISTERS to LINKAGE STACK
BASR 12,0 USE GR12 AS BASE REGISTER
USING *,12
PSTART SR 7,7
SR 8,8
         RDJFCB LBTEST             READ THE JFCB
         ICM   7,B'1111',ARLAREA   OBTAIN THE ADDRESS OF ARA
         USING ARA,7               ESTABLISH ADDRESSABILITY TO ARA
         TM    ARAFLG,ARAXINF      EXTENDED INFO SEGMENT PRESENT?
         BNO   FLAGOFF             BRANCH IF NO
         ICM   8,B'0001',ARAXINOF  INSERT THE OFFSET IN DWORDS
         SLL   8,3                 MULTIPLY BY 8
         AR    8,7                 POSITION TO EXTENDED INFO SEGMENT
         USING ARAXINFO,8          ESTABLISH ADDRESSABILITY TO XINFO
* :
* PROCESS AS REQUIRED
* :
PR RETURN TO CONTROL PROGRAM
FLAGOFF DS 0H IF INFO SEGMENT NOT PRESENT..
* :
* PROCESS AS REQUIRED
* :
RETURN PR RETURN TO CONTROL PROGRAM
LBTEST DCB DDNAME=LBTEST,MACRF=(GM),EXLST=READAXA
READAXA DS 0F
DC X'13' REQUESTS ARA
DC AL3(ARLAREA)
DC X'80000000' END OF EXLST
ARLAREA IHAARL DSECT=NO
IHAARA MAPPING MACRO FOR ARA
END LBRDJFCB
We ran a job that wrote internally generated records to a 3590 tape. The
system was lightly loaded and there was no other tape activity.
The job was run three times writing 32K, 64K, and 256K blocks with tape unit
compression turned off, and then three times with compression turned on.
The results were as shown in Table 6.
Table 6. Large tape block size performance comparison
Block size   Compression   Blocks written   Elapsed time
32K          no            104,800          6 m, 54 s
64K          no            55,250           6 m, 48 s
256K         no            13,334           6 m, 52 s
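As a rough cross-check of Table 6, the three runs wrote approximately the same amount of data, which is what makes their elapsed times comparable:

```python
# Block counts from Table 6, keyed by block size in KB
runs = {32: 104_800, 64: 55_250, 256: 13_334}

# Approximate data written per run, in KB; each run wrote on the order
# of 3.3 to 3.6 million KB, so the totals agree within about 6 percent.
totals_kb = {size: size * count for size, count in runs.items()}
assert max(totals_kb.values()) / min(totals_kb.values()) < 1.1
```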
The results for the tests without compression show no gain for the increase in
tape block size; the tests with compression show substantial gains. The
reasons for this difference are as follows.
When data is transferred from the host to the 3590, it is buffered in the control
unit and then written in a standard block size of 384K on the tape. These
two operations have different bandwidths: the 3590 can receive a maximum
of 17 MB per second from the host channel and transfer a maximum of 9 MB
per second to the tape.
Because we were able to drive the channel at its maximum rate, the limiting
factor for the uncompressed data was the control unit to tape bandwidth.
The data we were generating was highly compressible. When we allowed the
control unit to perform compression, the limiting factor was the channel
speed, and the advantage of using larger blocks became clear.
In real-life cases, low-priority batch jobs should still benefit from an
increase in block size, because each I/O operation transfers more data.
Coexistence PTFs
Table 7 shows a list of coexistence APARs and their corresponding PTFs
regarding large tape block sizes.
Table 7. Coexistence PTFs for large tape block size
Assume that the program TAPEIO requests tape devices from an esoteric
group CART. In this example, the system allocates a device for each DD in
this job step, so three devices are allocated in total. However, if TAPEIO
does not process those DD resources concurrently, allocating only one
device is sufficient for the program. You can use UNIT=AFF as in the
following code sample:
//STEP1 EXEC PGM=TAPEIO,...
//DD1 DD DSN=A,UNIT=CART,VOL=SER=A,...
//DD2 DD DSN=B,UNIT=AFF=DD1,VOL=SER=B,...
//DD3 DD DSN=C,UNIT=AFF=DD2,VOL=SER=C,...
//SYSPRINT DD SYSOUT=*,...
In this example, the system allocates a tape device for DD1 and the same
device for DD2 and DD3. Therefore, the job step does not have to allocate
more than one device.
This technique, known as unit affinity, has been very common in
installations. To simplify further discussion, we refer to a DD resource
that is referenced by other DDs through UNIT=AFF as a referenced DD, and
to a DD resource that references another DD through UNIT=AFF as a
referencing DD. In the previous JCL example, DD1 is a referenced DD, DD2
is both a referencing DD and a referenced DD, and DD3 is a referencing DD.
However, this method does not work for referencing DDs, as they have the
&UNIT variable as “AFF=”. If you had system-managed tape libraries only,
this would not be a problem, as having the referencing DDs redirected to
system-managed libraries unconditionally would be sufficient. However, you
This will enable ACS routines to have an accurate view of the allocation
This would make sure that a unit affinity to an SMS tape would be assigned to
storage group ATLSG, which would include the tape library.
If ACS routines get control via JCL as shown in this example, &UNIT will
have a null value. Therefore, if you test the &UNIT variable first and
then make an additional decision, you will not get the result you expect.
To avoid this situation, test &ALLVOL/&ANYVOL first to see whether a DD
has VOL=REF processing, and then test the other ACS variables.
You also need to consider the value "STK=." for data set stacking. Since
DFSMS/MVS V1.3.0, &UNIT can have the value "STK=." if the system detects a
data set stacking condition. For this reason, the system may invoke the
ACS routines up to three times. If a DD has UNIT=AFF, the system will set
the new &UNIT value introduced in DFSMS Release 10 on the first call, and
will then set &UNIT to "STK=." on subsequent calls for data set stacking
conditions. Refer to the manual OS/390 DFSMSdfp Storage Administration
Reference, SC26-7331, for more information about data set stacking.
The system considers a data set to be "in use" by checking whether the
data set name is ENQ'd. Since the ENQ resources do not contain any volume
serial number information, the system cannot determine which data set is
actually in use and ENQ'd when multiple data sets have the same name. To
be on the safe side, the system has to reject such requests, to avoid the
severe errors that renaming a data set that is actually in use would
cause.
For this reason, you cannot rename SYS1.LINKLIB on the copied volume, even
though you know that neither LLA nor XCFAS uses the copied SYS1.LINKLIB.
The only solution is to stop the LLA address space and tell XCFAS to
unallocate the LNKLST data sets through the MODIFY XCFAS,UNALLOCATE
LNKLST command, but you might not want to perform these operations, as
they would degrade module fetch performance on the production system.
ISPF Library:
Project . .
Group . . .
Type . . . .
2. Type “R” in the option field, specify the data set name you want to rename
and the volume serial number where it resides, and then press the Enter
key.
You have specified a volume serial for the data set you want renamed. The
data set is also cataloged on that volume. In addition to renaming the data
set, you should select the "Catalog the new data set name" selection field
if you want the data set cataloged.
The system detected that a data set with the above name is in use
(possibly on another system) but it cannot determine whether it is the
data set you wish to rename. If it is the same data set and any program
has it open, renaming it could cause serious system and data integrity
problems.
You have the extra security authority to rename the data set even though
its name is in use. Refer to the DFSMS documentation on the RENAME macro
for further information.
Instructions:
Press ENTER to override data set name protection and rename the data
set.
Enter CANCEL or EXIT to cancel the rename request.
The ISPF rename operation tries to rename the data set as usual, but it
gets an error from the system stating that it could not rename the data
set because the data set name is ENQ'd. The system also tells ISPF that
the requestor has the STGADMIN.DPDSRN authority, so ISPF asks the user
whether to try the rename again.
If you press Enter here, ISPF issues the rename request to the system
again, but this time it tells the system "You can rename it, even though
the name is ENQ'd." When the system gets the request, it checks the RACF
profile again to verify that the requestor is authorized. After the system
has verified the authorization, it renames the data set.
Remember that the user assigned to the program must have the required
authority, as we explained in 2.4.3.1, “Common requirements” on page 65.
Otherwise, the program will get an error code. In the case of this sample
program, it will issue a user abend with completion code 111.
An SL tape has a volume label (VOL1) to identify the volume, a group of
header labels (HDR1, HDR2), the user records, and then a group of trailer
labels (EOF1, EOF2). Header and trailer labels contain the information
required to identify the data set and its characteristics. As you can see
in this figure, a tape mark (TM) separates label groups and user records.
A TM is a special record for a tape device; the system uses a TM as a
delimiter of data blocks.
In order to achieve this, the system uses the forward space file (FSF) channel
command. FSF tells the tape device to orient to the next tape mark from the
current position. As we described earlier, tape marks separate the label group
from the data, so if the system issues an FSF command when the tape
position is at the beginning of the first user record, the tape device will orient
to the beginning of the trailer label group.
This technique was adequate for the early tape units, with limited capacity,
and where the physical recording of the data matched the logical sequence.
That is, the second data set would always be further along the tape than the
first, and so on.
[Figure: logical view of data sets 1 through 5 contrasted with their
physical placement along the tape]
Figure 35. Logical and physical views of tape data sets in a volume
To position to the fifth data set from the beginning of the tape, using FSF, the
tape would change direction four times. This is clearly not very efficient, as
this means that the tape heads have passed the start of the fifth data set
three times. Also, the tape needs to be stopped at each TM in order to
verify label groups, and then restarted to find the next TM. This
start/stop operation is significant, because the tape has a maximum speed
of approximately 18 kilometers per hour (11 MPH), which takes some time to
reach and more time to stop.
Each block has an identification associated with it, which you can obtain
from the subsystem using the Read Block ID channel command or, from BSAM,
using the NOTE TYPE=ABS macro. The Block ID can be used with the Locate
Block ID CCW to position the tape to a specific point directly, rather
than sequentially. This function is referred to as high speed search.
The contents of the Block ID enable the 3590 to identify which track and
approximately how far along that track the block will be. The length of a 3590
extended cartridge is 600 meters; positioning to a data set with Locate Block
will always have a tape movement of less than this amount. Using FSF, the
tape could move up to 9,600 meters, on a Model E. As FSF tape movement is
5 meters per second, the saving of elapsed time could be considerable.
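A rough worked example using the figures above (FSF movement at 5 meters per second, up to 9,600 meters on a Model E). This ignores the start/stop and label verification time discussed earlier, so the real saving can be even larger:

```python
FSF_SPEED_M_PER_S = 5        # FSF tape movement speed, from the text
MAX_FSF_DISTANCE_M = 9_600   # worst-case FSF movement on a Model E
MAX_LOCATE_DISTANCE_M = 600  # physical length of an extended cartridge

# In the worst case, FSF alone spends 9,600 / 5 = 1,920 seconds
# (32 minutes) moving tape, while Locate Block ID never moves the
# tape more than the 600-meter physical length of the cartridge.
worst_case_fsf_s = MAX_FSF_DISTANCE_M / FSF_SPEED_M_PER_S
```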
In the example in Figure 35 on page 73, if Locate Block ID had been used,
the processing would have been as follows:
1. Read the VOL1 to make sure that this is the correct tape.
2. Issue the Locate Block CCW.
3. Read the EOF for data set 4.
4. Read the HDR1 for data set 5.
The information supplied to OPEN, to position the volume, will depend on the
disposition of the data set:
• For a new data set: Provide the Block ID of the end of the volume.
• For a data set that is to be extended (DISP=MOD): Provide the last Block
ID of the data set.
• For a data set that is to be read: Provide the first Block ID of the data set.
Except for the first data set on a volume, Locate Block ID is always faster
than FSF, and the gain is more significant when there are a large number of
data sets on the volume.
Alternatively, you can build your own tape management system that
interacts with the system's OPEN/CLOSE/EOV interface. Refer to the manual
OS/390 DFSMS Installation Exits, SC26-7392.
For the second test, we wrote four data sets on a cartridge. These were
larger data sets than in the first test, and the last one was written on the
second track. We made a comparison when DFSMSrmm was active and
when DFSMSrmm was not active. Figure 36 shows the test results.
[Figure 36: bar chart of elapsed times, in seconds, for each test]
In the first pair of tests, the tape heads had to move the same distance
along the tape. There was a difference in elapsed time of 59 seconds,
which is approximately 3 seconds per data set.
In the second pair of tests, the tape heads had less distance to move using
Locate Block. The difference in elapsed time was approximately 38 seconds.
When the structure has been rebuilt successfully, the catalog restores the
status that was in effect before the rebuild, so no operator intervention
is required to activate ECS mode again.
For this to be successful, the normal criteria for CF placement and free space
availability should have been followed. This will ensure that there will be
sufficient space available in another CF for the structure to be allocated.
In the unlikely event of the rebuild failing, catalog sharing will revert to VVDS
mode until a structure is made available.
2.6.3 Considerations
In this section, we describe considerations for the ECS enhancement.
Since the SYSZTIOT ENQ is held at the address space level, running
multiple DFSMShsm address spaces lets DFSMShsm process more data. Before
DFSMShsm Release 10, you could not run multiple DFSMShsm address spaces
within one OS/390 system; therefore, you needed to configure an HSMplex: a
configuration in which DFSMShsm resources are shared among multiple OS/390
systems, each OS/390 has a DFSMShsm address space, and all of the
DFSMShsm address spaces share the same control data sets (CDSs), journal,
and storage pools.
As shown in Figure 37, multiple OS/390 images were needed to run multiple
DFSMShsm before Release 10.
[Figure: two OS/390 images, each running a DFSMShsm address space with its
control data sets (MCDS, BCDS, OCDS) and journal]
Figure 37. Multiple OS/390 images to run multiple DFSMShsm before Release 10
Another inconvenience can occur if a tape device fails while data sets are
being recalled. DFSMShsm may hang because of the tape device failure, and
you may need to cancel the DFSMShsm address space while other DFSMShsm
tasks, which have nothing to do with the recall, are still working.
HOSTMODE={MAIN|AUX}
You can have only one MAIN host per OS/390 system, and you can have
multiple AUX hosts per OS/390 system. When you configure an HSMplex
across several OS/390 systems, you can have multiple MAIN hosts and AUX
hosts, as long as you have a MAIN host per OS/390 and the total number of
DFSMShsm hosts does not exceed 39.
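The host-count rules can be summarized in a small validation sketch; `valid_hsmplex` is a hypothetical helper, not part of DFSMShsm.

```python
def valid_hsmplex(hosts_per_system):
    """Check the HSMplex host-count rules described in the text.

    hosts_per_system maps an OS/390 system name to the list of
    HOSTMODE values ("MAIN" or "AUX") of the hosts it runs. The rules:
    at most one MAIN host per OS/390 system, and at most 39 DFSMShsm
    hosts in the HSMplex overall.
    """
    total = sum(len(modes) for modes in hosts_per_system.values())
    if total > 39:
        return False
    # No more than one MAIN host on any single OS/390 system
    return all(modes.count("MAIN") <= 1
               for modes in hosts_per_system.values())
```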
[Figure: two OS/390 images, each running a MAIN host and several AUX
hosts, all sharing the DFSMShsm control data sets (MCDS, BCDS, OCDS) and
journal]
Figure 38. HSMplex: multiple DFSMShsm address spaces across two OS/390s
3.1.3 Considerations
In this section, we describe several considerations on using multiple
DFSMShsm hosts.
* The number here applies only if there are no other DFSMShsm hosts on
other OS/390 systems that make up the HSMplex.
You can have a DFSMShsm host perform this level function by specifying
HOST='nY', where n is the host ID and the second character is Y, or by
specifying PRIMARY=YES, which is a new parameter in DFSMShsm Release 10.
For example, if you would like to use this procedure to start a DFSMShsm
host as a MAIN host with no primary function, and another host as an AUX
host with primary function, you issue the following two startup commands:
S DFSMSHSM.HSM1,HOST=1,PRIMARY=NO,HOSTMODE=MAIN
S DFSMSHSM.HSM2,HOST=2,PRIMARY=YES,HOSTMODE=AUX
[Figure: one OS/390 image running two DFSMShsm hosts: a MAIN host (ID=1,
PRIMARY=NO, logs HSM.LOGX1/HSM.LOGY1) and an AUX host (ID=2, PRIMARY=YES,
logs HSM.LOGX2/HSM.LOGY2), each with its own PDA data sets, sharing the
control data sets HSM.MCDS, HSM.BCDS, HSM.OCDS, and the journal HSM.JRNL]
The AUX host will issue error messages and ignore these commands. If you
would like to put them into ARCCMDxx and you do not want to see error
messages regarding these commands, you need to use the ONLYIF
command so that these commands are directed to the MAIN host only.
[Figure: an OS/390 V2R9 system running DFSMShsm V1.5.0 alongside an
OS/390 V2R10 system running three DFSMShsm R10 hosts, all sharing the
DFSMShsm control data sets (MCDS, BCDS, OCDS) and journal]
You can achieve this by using either workload manager (WLM) compatible
mode or goal mode. Note that the DPRTY parameter on the EXEC statement
no longer works with OS/390. You can still code it, but the system simply
ignores it without issuing a message. Also note that IBM intends to
discontinue the support for WLM compatible mode. Therefore, we
recommend that you migrate your system policies to WLM goal mode, if your
installation has not already implemented it.
For this reason, our worked example is based on WLM goal mode, and we
describe how to set it up. Your installation should also be in goal mode
in order to follow the worked example.
The group of definitions is called a policy, and you define the service
requirements within a policy.
A classification rule defines how the service classes you defined are
assigned to units of work. The simplest way is to use the job name or
startup procedure name; for this reason, we separated the startup
procedures.
The service definition is where you define policies, service classes, and
so on. We selected option 3 to create a new definition.
Defining a policy
We defined the definition name as HSMTEST, as shown in Figure 41.
Once a service definition has been created or selected, all service level
definition tasks are performed in reference to this definition. The panel
shown in Figure 41 is the root menu for each task discussed below.
From this panel, we selected 1 to define a policy; this panel is where you
define your desired service policy.
The policy is just a name that represents the total level of service
requirements. Therefore, at this phase, you can name it whatever you want,
just like the new definition name. We named it HSMPOLCY.
Defining a workload
Then, we went back to the definition top panel (Figure 41 on page 88), and
chose 2 to define a workload.
---Period--- ---------------------Goal---------------------
Action # Duration Imp. Description
I_
******************************* Bottom of data ********************************
As you can see, there are several kinds of goals you can choose as a service
level. We chose 3 to define the service level based on CPU usage.
Then we specified 40 as the target execution velocity. The bigger the
value you specify, the higher the priority a unit of work gets. Since this
is an AUX host, it does not need to run at top priority.
------Class-------
Action Type Description Service Report
__ ASCH Use Modify to enter YOUR rules
__ CB Use Modify to enter YOUR rules
__ CICS Use Modify to enter YOUR rules
__ DB2 Use Modify to enter YOUR rules
__ DDF Use Modify to enter YOUR rules
__ IMS Use Modify to enter YOUR rules
__ IWEB Use Modify to enter YOUR rules
__ JES Use Modify to enter YOUR rules
__ LSFM Use Modify to enter YOUR rules
__ MQ Use Modify to enter YOUR rules
__ OMVS Use Modify to enter YOUR rules
__ SOM Use Modify to enter YOUR rules
3_ STC Use Modify to enter YOUR rules
__ TSO Use Modify to enter YOUR rules
******************************* Bottom of data ********************************
Since we did not define any other rules, the startup procedure HSM1 for
the MAIN host gets the SYSSTC service class, which has a much higher
priority than HSMAUX.
As you can see, HSM2 has the HSMWORK service class, and it has a lower
dispatching priority than HSM1. HSM2's dispatching priority may vary,
depending on the system workload.
[Figure: elapsed time comparison, in seconds]
Both tests were performed under the same hardware and software
configurations. Please remember that the purpose of this figure is only
to give you a general idea of how this function can help improve DFSMShsm
performance. We do not guarantee that you will get the same results, since
many factors affect performance measurements, such as I/O configuration,
software configuration, and workload distribution.
[Figure: ITSO.DSET migrates from level 1 to ML2 volume HSMC00 and is
recalled; on a later migration, a new ML2 copy is written and the control
records are updated]
Figure 45. DFSMShsm creates new migration copy and updates control records
This means that the data already written on ML2 is invalid, and the space
remains unused until the tape is processed by RECYCLE. It also means that
the migration process has to perform data movement, even though a copy
already exists that could be used instead.
When a data set that has been migrated to ML2 tape is being recalled,
DFSMShsm checks whether both SETSYS USERDATASETSERIALIZATION and SETSYS
TAPEMIGRATION(RECONNECT(ALL|ML2DIRECTEDONLY)) are in effect. If they are,
DFSMShsm sets a bit in the respective catalog record indicating that this
data set is a candidate for reconnection (see Figure 46).
[Figure 46: during recall with USERDATASETSERIALIZATION and
RECONNECT(ALL or ML2DIRECTEDONLY) in effect, DFSMShsm recalls ITSO.DSET
from ML2 volume HSMC00 to PRIM00 and marks it in the catalog as
reconnectible, leaving the migration control record in place]
When the data set has aged again and become eligible for migration,
DFSMShsm sees if the data set is a candidate for reconnection by checking
the bit. If it is eligible, DFSMShsm will see if all of the following are true:
• SETSYS USERDATASETSERIALIZATION takes effect.
• SETSYS TAPEMIGRATION(RECONNECT(ALL|ML2DIRECTEDONLY)) takes effect.
• When SETSYS TAPEMIGRATION(RECONNECT(ML2DIRECTEDONLY)) takes effect, and
the data set is being migrated through volume migration, the data set
should be migrated to ML2 tape directly.
For example, SETSYS TAPEMIGRATION(DIRECT) should take effect if the data
set is not system-managed, or it should have a management class with
Level 1 Days Non-Usage = 0 attribute if it is system-managed.
When the data set is being migrated through command data set migration,
DFSMShsm tries to reconnect whether ALL or ML2DIRECTEDONLY has been
specified.
• Migration control records for the data set still exist.
• The creation date of the data set is the same as the creation date stored in
the migration control record.
If all of the above conditions are met, DFSMShsm will reconnect the data set.
That is, it will update the migration control records/catalogs so that it can use
the existing ML2 copy again. DFSMShsm does not have to make a new ML2
tape copy when it can reconnect (see Figure 47).
[Figure 47: during migration with USERDATASETSERIALIZATION and
RECONNECT(ALL or ML2DIRECTEDONLY) in effect, DFSMShsm updates the catalog
and control records to point at the existing ML2 copy of ITSO.DSET on
HSMC00, provided the data set has not been modified]
Figure 47. DFSMShsm does not make new ML2 tape copy if it can reconnect
If any of the above conditions are not met, DFSMShsm tries to migrate the
data set as usual. That is, it will make a new ML2 copy.
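The eligibility checks above can be paraphrased in a sketch; the function and parameter names are illustrative, not DFSMShsm internals.

```python
def can_reconnect(userdatasetserialization,   # SETSYS USERDATASETSERIALIZATION
                  reconnect_setting,          # "NONE", "ALL", "ML2DIRECTEDONLY"
                  volume_migration,           # volume vs. command migration
                  migrating_direct_to_ml2,    # migration goes straight to ML2
                  control_record_exists,      # migration control records remain
                  creation_dates_match):      # creation date matches the record
    """Paraphrase of the reconnection eligibility checks in the text."""
    if not userdatasetserialization:
        return False
    if reconnect_setting not in ("ALL", "ML2DIRECTEDONLY"):
        return False
    # With ML2DIRECTEDONLY, a volume migration must go directly to ML2
    if (reconnect_setting == "ML2DIRECTEDONLY" and volume_migration
            and not migrating_direct_to_ml2):
        return False
    return control_record_exists and creation_dates_match
```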
Even though DFSMShsm is the only software that touches the changed bit,
you need to make sure that you do not use a dump class which has the
RESET attribute. When you take a volume dump using the BACKVOL DUMP
command or automatic dump processing with such a dump class,
DFSMShsm resets the changed bit, while it leaves the catalog records as is.
Therefore, DFSMShsm might reconnect the old migration copy, which is no
longer valid.
Note: The major purpose of retaining these records for a certain period is
to avoid unnecessary CI splits when a certain range of data sets
repeatedly migrates and is recalled.
You can control the retention period for these records through the first
parameter of SETSYS MIGRATIONCLEANUPDAYS, and DFSMShsm deletes
them based on your specification during migration cleanup processes.
DFSMShsm uses the third parameter to retain migration control records for
reconnection candidate data sets. For example, a data set recalled from an
ML2 tape volume can be a candidate for reconnection. DFSMShsm keeps the
migration control record of a reconnection candidate data set for the
predicted migration period plus reconnectdays. DFSMShsm calculates the
predicted migration period as the migration date minus the last reference
date. For example, if you referenced a data set on a certain day and it
migrated two weeks later, the predicted migration period is 14 days.
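The retention arithmetic works out as in the following sketch; `retention_days` is a hypothetical helper, not a DFSMShsm interface.

```python
from datetime import date

def retention_days(last_ref, migrated, reconnectdays):
    """Days to keep a reconnection candidate's migration control record.

    DFSMShsm keeps the record for the predicted migration period
    (migration date minus last reference date) plus reconnectdays.
    """
    predicted = (migrated - last_ref).days
    return predicted + reconnectdays

# A data set last referenced on Aug 1 and migrated on Aug 15 has a
# predicted migration period of 14 days.
```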
Migration Attributes
Primary Days Non-usage . . . . 0 (0 to 9999 or blank)
Level 1 Days Non-usage . . . . 0 (0 to 9999, NOLIMIT or blank)
Command or Auto Migrate . . . . BOTH (BOTH, COMMAND or NONE)
Use ENTER to Perform Verification; Use UP/DOWN Command to View other Panels;
Use HELP Command for Help; Use END Command to Save and Exit; CANCEL to Exit.
As you can see, the management class specifies that data sets should go to
ML2 directly. We then had DFSMShsm perform primary space management, and
it migrated a total of 1,248 data sets to ML2 tape volumes. We recalled
all of these data sets to primary storage, modified some of them, and then
had DFSMShsm migrate them again.
ARC0522I SPACE MANAGEMENT STARTING ON VOLUME HG6600(SMS) AT 14:56:01 ON 2000/08/23, SYSTEM SC63
ARC0522I SPACE MANAGEMENT STARTING ON VOLUME HG6700(SMS) AT 14:54:01 ON 2000/08/23, SYSTEM SC63
ARC0522I SPACE MANAGEMENT STARTING ON VOLUME HG6601(SMS) AT 14:56:01 ON 2000/08/23, SYSTEM SC63
ARC0522I SPACE MANAGEMENT STARTING ON VOLUME HG6701(SMS) AT 14:54:01 ON 2000/08/23, SYSTEM SC63
:
ARC0734I ACTION=MIG-RCN FRVOL=HG6600 TOVOL=TST108 TRACKS= 150 RC= 0, REASON= 0, AGE= 0, DSN=HGPARK.S0000
ARC0734I ACTION=MIG-RCN FRVOL=HG6600 TOVOL=TST108 TRACKS= 150 RC= 0, REASON= 0, AGE= 0, DSN=HGPARK.S0001
:
:
ARC0734I ACTION=MIG-RCN FRVOL=HG6706 TOVOL=TST110 TRACKS= 150 RC= 0, REASON= 0, AGE= 0, DSN=PARKHG.T6630
ARC0734I ACTION=MIG-RCN FRVOL=HG6706 TOVOL=TST110 TRACKS= 150 RC= 0, REASON= 0, AGE= 0, DSN=PARKHG.T6631
ARC0734I ACTION=MIGRATE FRVOL=HG6707 TOVOL=TST104 TRACKS= 150 RC= 0, REASON= 0, AGE= 0, DSN=HGPARK.T6702
ARC0734I ACTION=MIG-RCN FRVOL=HG6706 TOVOL=TST110 TRACKS= 150 RC= 0, REASON= 0, AGE= 0, DSN=PARKHG.T6632
ARC0734I ACTION=MIG-RCN FRVOL=HG6706 TOVOL=TST110 TRACKS= 150 RC= 0, REASON= 0, AGE= 0, DSN=PARKHG.T6633
ARC0734I ACTION=MIG-RCN FRVOL=HG6706 TOVOL=TST110 TRACKS= 150 RC= 0, REASON= 0, AGE= 0, DSN=PARKHG.T6634
ARC0734I ACTION=MIGRATE FRVOL=HG6605 TOVOL=TST101 TRACKS= 150 RC= 0, REASON= 0, AGE= 0, DSN=PARKHG.S6532
ARC0734I ACTION=MIGRATE FRVOL=HG6607 TOVOL=TST105 TRACKS= 150 RC= 0, REASON= 0, AGE= 0, DSN=HGPARK.S6718
ARC0734I ACTION=MIG-RCN FRVOL=HG6706 TOVOL=TST110 TRACKS= 150 RC= 0, REASON= 0, AGE= 0, DSN=PARKHG.T6635
ARC0734I ACTION=MIG-RCN FRVOL=HG6706 TOVOL=TST110 TRACKS= 150 RC= 0, REASON= 0, AGE= 0, DSN=PARKHG.T6638
ARC0521I PRIMARY SPACE MANAGEMENT ENDED SUCCESSFULLY
As you can see in this figure, MIG-RCN in the ARC0734I message is a new
keyword. It indicates that the data set was reconnected rather than
migrated with the normal technique.
STARTUPS=003, SHUTDOWNS=002, ABENDS=000, WORK ELEMENTS PROCESSED=008555, BKUP VOL RECYCLED=00000, MIG VOL RECYCLED=00000
DATA SET MIGRATIONS BY VOLUME REQUEST= 0005227, DATA SET MIGRATIONS BY DATA SET REQUEST= 00000, BACKUP REQUESTS= 0000000
EXTENT REDUCTIONS= 0000000 RECALL MOUNTS AVOIDED= 01329 RECOVER MOUNTS AVOIDED= 00000
FULL VOLUME DUMPS= 000000 REQUESTED, 00000 FAILED; DUMP COPIES= 000000 REQUESTED, 00000 FAILED
FULL VOLUME RESTORES= 000000 REQUESTED, 00000 FAILED; DATASET RESTORES= 000000 REQUESTED, 00000 FAILED
ABACKUPS= 00000 REQUESTED,00000 FAILED; EXTRA ABACKUP MOUNTS=00000
DATA SET MIGRATIONS BY RECONNECTION = 001184, NUMBER OF TRACKS RECONNECTED TO TAPE = 00177600
MIGRATION
PRIMARY - LEVEL 1 0002731 00409650 019264690K 00010924 000391678K 002731 00000 00000 00000 0000 00000 00003 00003
SUBSEQUENT MIGS 0000000 00000000 000000000K 00000000 000000000K 000000 00000 00000 00000 0000 00000 00000 00000
PRIMARY - LEVEL 2 0002496 00374400 017606990K 00000000 009278448K 002496 00000 00000 00000 0000 00000 00001 00001
RECALL
LEVEL 1 - PRIMARY 0003276 00013104 000469852K 00491400 023109170K 000000 07279 04003 00000 0115 00000 00002 00117
LEVEL 2 - PRIMARY 0001335 00000000 009441088K 00200250 009417189K 000000 01340 00005 00000 0508 00000 00002 00510
DELETE
MIGRATE DATA SETS 0000000 00000000 000000000K 00000000 000000000K 000000 00000 00000 00000 0000 00000 00000 00000
PRIMARY DATA SETS 0000000 00000000 000000000K 00000000 000000000K 000000 00000 00000 00000 0000 00000 00000 00000
BACKUP
DAILY BACKUP 0000000 00000000 000000000K 00000000 000000000K 000000 00000 00000 00000 0000 00000 00000 00000
SUBSEQUENT BACKUP 0000000 00000000 000000000K 00000000 000000000K 000000 00000 00000 00000 0000 00000 00000 00000
DELETE BACKUPS 0000000 00000000 000000000K 00000000 000000000K 000000 00000 00000 00000 0000 00000 00000 00000
RECOVER
BACKUP - PRIMARY 0000000 00000000 000000000K 00000000 000000000K 000000 00000 00000 00000 0000 00000 00000 00000
RECYCLE
BACKUP - SPILL 0000000 00000000 00000000 000000 00000 00000 00000 0000 00000 00000 00000
MIG L2 - MIG L2 0000000 00000000 00000000 000000 00000 00000 00000 0000 00000 00000 00000
[Figure: elapsed time in seconds (0 to 900) for primary space management with RECONNECT(NONE) versus RECONNECT(ALL)]
Both tests were made under a single DFSMShsm address space. Please note
that the elapsed time includes tape mounting/demounting overhead.
The purpose of showing this figure is to demonstrate that this function has
the potential to improve migration performance, since DFSMShsm does not
physically read or write user data sets when it can reconnect them to existing
ML2 tape copies. We designed our test so that DFSMShsm would reconnect
more than 90 percent of the data sets. In reality, however, your reconnection
ratio could differ considerably from that in our test.
Below, we describe the enhancements made to the data set backup function.
However, if you have many data sets that need to be backed up in the batch
job window, you may have given up on this idea after discovering that
DFSMShsm could back up only one data set at a time. This is because,
prior to DFSMShsm Release 10, command data set backup requests were
processed under a single task (see Figure 51).
Figure 51. Data set backup is a single task under DFSMShsm pre-Release 10
The original intention of this design was to return control to the requestor as
soon as possible, and to allow DFSMShsm to move the backup copies to an
appropriate device category at a later time. This design makes sense if the
number of data sets backed up by command is small enough, even though it
involves double data movement.
Figure 52. Moving backup versions from ML1 impacts primary volume processing
[Figures: command backups of ITSO.PRODDS fill up ML1 space alongside migration copies; ITSO.HUGE.PRODDS is too large to fit into ML1; as command backup requests increase, more ML1 OVERFLOW volumes are needed to hold backup versions]
The sum of mm and nn must be equal to or less than 64 (see Figure 54).
Figure 54. DFSMShsm Release 10 has up to 64 data set backup tasks
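For example, the following SETSYS command (a sketch only; the task counts are illustrative, so adjust them to your configuration) allocates eight DASD data set backup tasks and four tape data set backup tasks, for a total well under the limit of 64:

SETSYS DSBACKUP(DASD(TASKS(8)) TAPE(TASKS(4)))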
For this reason, you may see only one tape command data set backup task
running, even if you have specified TAPE(TASKS(4)) and four backup to tape
requests are queued. This is different from other DFSMShsm tasks. For
example, if you specify SETSYS MAXMIGRATIONTASKS(4) and there are four
volumes to be processed during primary space management, DFSMShsm
will run four volume migration tasks.
We recommend that you specify nn as the number of tape devices you can
reserve for command data set backup tasks. If nn is bigger than the number
of devices available, the additional task may not be able to allocate a tape
device (see Figure 55).
SETSYS DSBACKUP(TAPE(TASKS(3)))
Figure 55. Third task cannot allocate tape device when only 2 devices available
[Figure: automatic backup and the FREEVOL ML1BACKUPVERSIONS command move backup versions from ML1 to backup volumes]
If there is a partial backup volume that is not marked as full, DFSMShsm will
select it first. If there are no partial backup volumes, DFSMShsm selects a
volume differently, depending on how you have specified the SETSYS
SELECTVOLUME parameter.
If no volumes in its inventory meet these criteria, or if you have specified
SETSYS SELECTVOLUME(SCRATCH), a data set backup task will make a
non-specific volume request when the task has not yet used a tape volume.
Otherwise, let DFSMShsm choose the best place for the backup
The following command example lets DFSMShsm back up the data set
ITSO.WHEREVER.DATASET on whatever device DFSMShsm considers the best
(see Figure 58):
HBACKDS ITSO.WHEREVER.DATASET
Figure 58. DFSMShsm decides the best device when no TARGET parameter
Figure 59. DFSMShsm selects target device based on the size of data sets
If DFSMShsm actually dispatches three tape data set backup tasks, one of
the three tasks will be detached after it has finished working, and its backup
tape volume will be demounted. The remaining two tasks will keep their
volumes mounted.
Since the tape drives are still allocated and the tapes are still mounted, the
idle tasks can process future backup requests without waiting for a tape to be
mounted.
Now, what is the MINUTES parameter used for? This parameter specifies how
long you would like to keep the idle tape command backup tasks alive. This
value applies only to tasks which have no work, but remain alive because of
the MAXIDLETASKS specification.
[Figure: after 30+ minutes of idle time, tape data set backup task 3 is detached while task 2 remains]
Each task sets its own timer when it gets into idle status. This figure shows
that DFSMShsm detaches Task 3, as it has been in idle status for more than
30 minutes. Task 2 is still in idle status, as 30 minutes has not passed yet
since it got into idle status. If another tape backup request comes along,
Task 2 will reset the timer, process the request, and set the timer after it gets
into idle status again.
If you specify 1440 as the MINUTES value, idle tasks stay alive indefinitely,
unless you shut down DFSMShsm, the command data set backup tasks are
held, or a SWITCHTAPES event occurs. If you specify a non-zero MAXIDLETASKS
but do not specify MINUTES, DFSMShsm uses MINUTES(60) as the default.
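Putting these parameters together, a complete specification could look like the following sketch (the values shown are purely illustrative):

SETSYS DSBACKUP(TAPE(TASKS(3) DEMOUNTDELAY(MAXIDLETASKS(2) MINUTES(30))))

This would allow up to three tape data set backup tasks, keep at most two of them idle with their tapes mounted, and detach an idle task after 30 minutes.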
If you specify 0, which is the default when you do not specify the TIME
parameter, DFSMShsm does not demount the tape volumes or free the
tape devices. Therefore, idle tasks stay alive and tape volumes stay
mounted unless you shut down DFSMShsm, command data set backup
tasks are held, or the time specified by the MINUTES value of SETSYS
DSBACKUP has passed.
• DEFINE SWITCHTAPES(DSBACKUP(AUTOBACKUPEND))
If you specify AUTOBACKUPEND, DFSMShsm will demount tape volumes when
automatic backup processing ends.
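For example, to have DFSMShsm demount the data set backup tapes at 06:00 every day, you could define something like the following (the time value is only illustrative):

DEFINE SWITCHTAPES(DSBACKUP(TIME(0600)))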
You might wonder why the automatic backup function has something to do
with demounting volumes. Let us explain this briefly. When you schedule
automatic backup, DFSMShsm performs volume-level backup processing.
That is, volume backup tasks process DFSMShsm-managed volumes
during the window you specified through the SETSYS AUTOBACKUPSTART
command.
[Figures: when automatic backup processing ends (ARC0720I AUTOMATIC BACKUP STARTING ... ARC0721I AUTOMATIC BACKUP ENDING), the SWITCHTAPES process begins (ARC0253I SWITCHTAPES PROCESS BEGINNING), queued backup requests are retried by the tape data set backup tasks, and the process ends (ARC0254I SWITCHTAPES PROCESS HAS ENDED)]
DFSMShsm Release 10 allows you to specify whether or not you want to use
the Concurrent Copy technique for data set backup commands. Before
DFSMShsm Release 10, DFSMShsm could use Concurrent Copy only for
system-managed data sets which have an appropriate Backup Copy
Technique attribute in their management class. So there was no way to have
DFSMShsm back up non-system-managed data sets using Concurrent Copy,
or to override the Backup Copy Technique attribute of system-managed data
sets.
DFSMShsm Release 10 provides the new CC parameter for data set backup
commands to allow you to take a backup using the Concurrent Copy
technique. The CC parameter has the following format:
CC(STANDARD|PREFERRED|REQUIRED LOGICALEND|PHYSICALEND)
- STANDARD specifies that you want to use standard backup methods.
DFSMShsm backs up your data sets without using concurrent copy.
- PREFERRED specifies that concurrent copy is the preferred backup
method that you want to use for backup, if it is available. If concurrent
copy is not available or the user has no authorization to use the CC
parameter on the command, DFSMShsm ignores the PREFERRED
parameter and backs up the data set by using standard backup
methods.
- REQUIRED specifies that concurrent copy must be used as the backup
method, and the data set backup fails if concurrent copy is not
available or if the user has no authorization to use the CC parameter.
Note: DFSMShsm determines if you have authorization to use
Concurrent Copy by checking the RACF profile,
STGADMIN.ADR.DUMP.CNCURRNT.
- PHYSICALEND specifies that control returns to applications or users only
after the backup physically completes.
- LOGICALEND specifies that control returns to the application or user when
concurrent copy initialization completes.
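As an illustration (the data set name here is fictitious), the following command requests a Concurrent Copy backup if it is available, returning control at logical end:

HBACKDS ITSO.PROD.DATASET CC(PREFERRED LOGICALEND)

Because PREFERRED rather than REQUIRED is specified, this backup falls back to the standard method if Concurrent Copy is not available.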
*1: DFSMShsm will take a backup, but will not use Concurrent Copy. The requestor will be
notified when the backup has physically completed.
*2: DFSMShsm will use Concurrent Copy. The requestor will be notified when Concurrent Copy
initialization completes.
*3: DFSMShsm will use Concurrent Copy. The requestor will be notified when the backup has
physically completed.
[Figure: HBACKDS ITSO.DSET CC(REQUIRED LE) — DFSMShsm begins the backup, Concurrent Copy is initialized in the storage control (3990-6/3, RVA 9393, or ESS 2105), ARC1000I BACKDS PROCESSING END is issued to the requestor, and backing up continues in the background]
The actual point-in-time copy is maintained by the system software and storage control.
This drawing is not intended to describe the actual implementation of Concurrent Copy,
but is provided for explanatory purposes only.
In this figure, the user issues the command HBACKDS CC(REQUIRED LE) to
take a backup of ITSO.DSET. DFSMShsm accepts the command, serializes
the data set, and has DFSMSdss execute Concurrent Copy. After DFSMSdss
has finished initializing the Concurrent Copy session, it notifies DFSMShsm.
DFSMShsm then releases the data set, notifies the requester that backup
processing has ended, and continues to take the backup in the background.
Figure 65. Recover takeaway uses GRS to communicate with other DFSMShsm
3.3.3 Considerations
In this section, we describe some considerations on using the data set
backup functions.
A DFSMShsm AUX host can process command data set backup requests
only through the MODIFY operator command interface.
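For example, assuming an AUX host started task named DFHSM2 (the procedure name is site-specific and invented here), the operator could enter:

F DFHSM2,BACKDS ITSO.PROD.DATASET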
Coexistence APAR
Table 10 shows an APAR/PTF list for this function.
Table 10. Coexistence APAR/PTF list for command data set backup
After applying the respective PTF for this APAR, a down-level DFSMShsm will
issue a warning message if you request a command data set backup with the
CC and/or TARGET keywords, and take the backup onto ML1 DASD (see Figure
66).
[Figure: down-level DFSMShsm (1.2.0 - 1.5.0) with APAR OW41864 issues ARC1070I PARAMETERS NOT SUPPORTED ON THIS RELEASE, TARGET AND/OR CC IGNORED; TARGET and CC are not supported until OS/390 R10]
[Figure: elapsed time in seconds (0 to 500) for the three data set backup test cases]
All of these tests were made under a single DFSMShsm address space in
DFSMS Release 10.
The left bar shows performance when issuing four backup requests at a time,
using the CC(STANDARD PE) TARGET(DASD) parameter for each request.
Before issuing those requests, we specified the maximum number of DASD
data set backup tasks as 1. Therefore, the performance would be similar to
DFSMShsm pre-Release 10, as down-level DFSMShsm has a single data set
backup command task, and it takes backups to ML1 DASD.
The middle bar shows performance when issuing four backup requests at a
time, using the CC(STANDARD PE) TARGET(DASD) parameter for each
request. Before issuing those requests, we specified the maximum number of
DASD data set backup tasks as 4. We confirmed that all of the data sets were
being backed up concurrently.
The right bar shows performance when issuing four backup requests at a
time, using the CC(REQUIRED LE) TARGET(DASD) parameter for each
request. Before issuing those requests, we specified the maximum number of
DASD data set backup tasks as 4. We confirmed that all of the data sets were
being backed up concurrently by checking the RMF Monitor II device activity
report at that time. Note that the measured elapsed time ends when the
program ends. Therefore, in this case, DFSMShsm was still performing
backup operations after the program had ended.
The purpose of showing this figure is to demonstrate the benefit of data set
backup multi-tasking. We do not guarantee that you would get the same
results as in this figure, since there are many factors that affect performance
measurement, such as I/O configuration, software configuration, workload
distributions, and so on.
ABARS makes tape backup volumes of the aggregate group, based on the
definition you specified.
Since DFSMS Release 10 supports large tape block size (block size greater
than 32,760 bytes), you may want your application to use this capability. You
may also want to include such tape data sets in an aggregate group and have
ABARS back them up (see Figure 68).
Figure 68. ABARS creates a set of backups from primary and migration volumes
Refer to 2.2, “Large tape block sizes” on page 36, for more information about
large tape block size support.
3.4.3 Considerations
In this section, we describe some considerations on ABARS support for large
tape block sizes.
After applying the respective fix for the APAR, a down-level DFSMShsm will
fail an ABACKUP request with an error message if it finds a tape data set in
the aggregate group that has a large tape block size (see Figure 69).
Figure 69. Down-level system fails ABACKUP if data set has large tape block
Down-level DFSMShsm will also fail an ARECOVER request for a data set if
the data set had a large tape block size, and will issue an error message
(see Figure 70).
[Figure: ARECOVER of an aggregate group containing data sets ITSO.aa and ITSO.bb with block sizes greater than 32 K fails on a down-level DFSMShsm (1.2.0 - 1.5.0, APAR OW41865) with message ARC6172E]
Figure 70. Down-level system cannot recover data set with large tape block size
You can specify multiple keywords by separating them with a comma (,). For
more information, refer to Chapter 13, “Performing Inventory Management”,
in the DFSMSrmm V1R5 Implementation and Planning Guide,
SC26-4932-06.
REMOTE 100
DISTANT 200
LOCAL 300
SHELF 5000
Lower numbers have higher priority. In this scheme, the larger the location
priority number is, the closer the location is to on-site, and DFSMSrmm
prefers more distant locations (lower numbers). Assume a tape volume in
SHELF contains two data sets. When one data set is supposed to move to
LOCAL, and the other is supposed to move to REMOTE, DFSMSrmm picks
REMOTE as the location for the move.
In addition to these default locations, you can define any named on-site or
vault location, with a corresponding location priority number, by using the
LOCDEF statement in the EDGRMMxx PARMLIB member.
For more details about the location priority number, see Section 6.2, “Defining
Storage Locations: LOCDEF” in the OS/390 DFSMSrmm Implementation and
Customization Guide, SC26-7334.
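As a sketch (the location name and priority number here are invented; check the exact LOCDEF syntax for your release in the guide referenced above), an EDGRMMxx entry could look like this:

LOCDEF LOCATION(VAULT2) TYPE(STORAGE) PRIORITY(150)

A priority of 150 would make VAULT2 preferred over DISTANT (200), but not over REMOTE (100).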
The VTS has a number of 3590 tape drives (B or E models) physically writing
to 3590 type J cartridges. These physical resources are referred to as
physical drives and volumes.
When a host system writes a data set on a logical volume, the data is written
to the tape volume cache (TVC), which is a disk cache in the VTS unit, and
then the VTS controller automatically stacks the logical volumes onto a
physical volume. So a physical volume is also referred to as a stacked volume.
Note: This is not a copy operation; therefore, there are no duplicate logical
volume images in the VTS.
Import processing copies logical volumes from a stacked volume into a VTS.
After the import processing has completed, host systems can use the
imported logical volumes. Note that stacked volumes still have valid images of
the logical volumes, which have been imported.
For more details about the VTS itself, or about its advanced functions, such
as software and hardware requirements, detailed operational procedures,
and considerations, refer to the IBM Redbook IBM Magstar Virtual Tape
Server: Planning, Implementing, and Monitoring, SG24-2229.
You issue this command against the logical volumes you want to investigate,
and then check the in-container field in the command output.
4.1.1.8 What was the problem with the VTS basic support?
Here, we will discuss some inconveniences in the VTS basic support.
However, if you want to use a reporting tool other than EDGRPTD, to create a
movement report for container volumes, you need to modify it in order to take
the in-container field into account, just like EDGRPTD.
If the required location of each logical volume in the same stacked volume
differs, DSTORE uses the location priority number to determine the
destination for that stacked volume.
EXPORT LIST 01
LGV001,VAULT
You can create the file by using the RMM SEARCHVOLUME command, as we
described in the above step, with the CLIST option.
4. Run the export function.
Using the export list volume file created in step 3, run the export function.
The export processing moves LGV001 to an empty stacked volume which
the VTS controller selects. In our scenario, assume that the VTS picks the
volume STV001 as a target stacked volume for exporting.
5. Create a movement report.
Next, create a movement report for stacked volumes which contain
exported logical volumes. Only EDGRPTD can create a movement report
for the stacked volumes, as we described in “EDGRPTD, enhanced for
stacked volumes, is somewhat ambiguous” on page 142.
IMPORT LIST 01
STV001,LGV001
EXPORT LIST 01
LGV001,VAULT
You can create the file by using the RMM SEARCHVOLUME command, as we
described in step 2, with the CLIST option.
4. Run the export function.
Using the export list volume file created in step 3, run the export function.
The export processing moves LGV001 to an empty stacked volume which
the VTS controller selects. In this scenario, assume that the VTS picks the
volume STV001 as a target stacked volume for exporting.
5. Run DSTORE processing.
DSTORE processing assigns VAULT as destination for STV001, in this
scenario.
6. Create a movement report.
Create a movement report for the stacked volume by using any of the
reporting tools described in 4.1.1.4, “Creating volume movement
reports” on page 137. The stacked volume records now exist, and the
destination is set in the stacked volume records.
7. Eject STV001 from the 3494.
After export processing and the creation of the movement report have
completed, eject STV001 from the 3494 tape library by using the LM
console pull-down menu ‘Manage Export-Hold Volumes’ function.
8. Move STV001 from LIBVTS to location VAULT.
According to the report created in step 6, move STV001 from location
LIBVTS to VAULT.
9. Confirm the volume movement to DFSMSrmm.
When STV001 has been moved to location VAULT, confirm the movement
to DFSMSrmm by issuing the following command:
RMM CHANGEVOLUME STV001 CONFIRMMOVE
IMPORT LIST 01
STV001
Or, if you want to import the logical volume selectively, issue the following
command:
RMM SEARCHVOLUME CONTAINER(STV001) TYPE(LOGICAL) OWNER(*) LIMIT(*) CLIST
You do this to make a list of the logical volumes which are supposed to be
in the stacked volume STV001. Then, you edit the file as follows:
IMPORT LIST 01
STV001,LGV001
After the job has run, the stacked volume support status changes to
ENABLED if it was NONE in step 1, or changes to MIXED if it was
DISABLED in step 1.
4. Verify the current stacked volume support status.
To see the Stacked Volume status, you need to issue the following
command:
RMM LISTCONTROL
You could see a status of either ENABLED or MIXED. If the status is
ENABLED, DFSMSrmm will be ready for exploiting the VTS support
enhancement. If the status is MIXED, run the following job:
MIXED status means that this new function is partially supported, because
not all stacked volume records might have been created. If you leave the
status as MIXED, the inventory management job will fail with the error
message EDG2315E. So, you should not leave the status as MIXED.
4.1.5 Considerations
In this section, we describe some considerations on using the VTS enhanced
support.
• Do not enable the stacked volume support until all systems sharing the
DFSMSrmm CDS are at Release 10.
• Though stacked volume status can be changed from NONE to DISABLED,
DISABLED to MIXED, MIXED to ENABLED, and NONE to ENABLED,
there is no way to fall back.
• While you can define the VRSs for a stacked volume, housekeeping jobs
will ignore them. The stacked volume record remains in MASTER status,
and the location of the stacked volume is determined by the required
location of the logical volumes in it.
• Stacked volume records are still not created automatically when you
insert a stacked volume into the 3494, because the 3494 does not
notify the connected host systems about the insertion of stacked volumes.
• You may delete the stacked volume record when it is no longer used.
However, this is not a mandatory operation. When export processing
selects an empty stacked volume, and its volume record already exists
in the CDS, DFSMSrmm will reuse it.
In this book, we use the term volume set for a volume group. We describe the
practical cases in which this function is effective.
If a data set requires more volumes than those specified (that is, it spills),
OS/390 automatically issues a non-specific volume mount request, to enable
the data set to be created successfully.
If you are using the volume VRSs to manage these resources, the volumes
selected by a non-specific mount will not be managed by the VRSs (Figure 74),
even though the creation of the data set was successful.
In this case, the retention period of each volume is set to be the same as the
longest retention period of all the data sets in this volume. The location of
each volume is assigned according to the location priority number.
RETAINBY(SET) specifies that you retain volumes by set. When you retain by
set, if any volume in a set is retained by a vital record specification, all
volumes in the set are retained as vital records. DFSMSrmm uses the highest
retention date of all volumes in the set as the retention date for all volumes
retained as vital records in a set. If no volume in a set is retained by a vital
record specification, DFSMSrmm performs expiration processing by set.
DFSMSrmm does not expire volumes in a set if at least one volume in a set is
still not ready to expire because it has not reached its expiration date and you
have not specified that you want the expiration date ignored.
For volumes which should be retained for this reason, DFSMSrmm sets a
new set retained flag to YES. You can check this flag by using the RMM
LISTVOLUME command.
MOVEBY(SET) specifies that you move volumes by set. When you move by set,
all of the volumes in a set are moved to the same location, selected by the
VRS specification or location priority for the volume.
Specify SET if you want to manage the chained volumes as a set (see
Figure 77).
4.2.3 Considerations
In this section, we describe some considerations on using volume set
management.
As you can see, Vol1, Vol2, and Vol4 will remain as a set, and Vol3 will
become an independent volume.
• Case 2
The volumes are chained as shown in Figure 76 on page 155, and a user
creates a data set using the following DD card:
//OUT1 DD DSN=DS.NAME,UNIT=TAPE,DISP=NEW,VOL=SER=(VOL1,VOL2)
If the data set is too large and requires more than the two volumes
specified, a non-specific volume mount request is issued. In this case, the
volume chaining status becomes as shown in Figure 80.
This diagram assumes that Vol5 is used to satisfy the non-specific volume
mount request. As you can see, the volume set has been broken into two
volume sets, and DFSMSrmm chains Vol5 after Vol2.
All of the tape volume information in the LM DB, TCDB and DFSMSrmm CDS
should be consistent (see Figure 81).
Previously, EDGUTIL did not audit against the LM DB, and there was no way
to cross-check the LM DB, TCDB, and DFSMSrmm CDS for inconsistencies.
Because these kinds of errors are rare, we recommend that you obtain
guidance from your IBM service representative if such errors are found, and
you are unsure how to fix them.
In the case of simple errors, you can correct these by yourself. To correct the
error status, follow the instructions below:
1. Check the JOBLOG of the VERIFY job carefully. It will tell you what kind of
inconsistencies exist.
2. Determine if a DFSMSrmm command can correct the errors. If possible,
issue the command.
For example, if an owner of a volume is set, but the owner record does not
exist, you can fix it by issuing the command:
RMM ADDOWNER owner_name DEPARTMENT(dept_name)
One good method is to run a MEND job against a copy of the production
CDS. The JOBLOG will give you some useful information as to how the
CDS can be mended. You can use this information to determine which
DFSMSrmm command to issue manually.
3. Run the VERIFY job again to check that the error status has been
corrected. If the error still exists, repeat the procedure.
Though MEND can be used to detect and correct these kinds of errors
automatically, we recommend that you correct them manually. This is
because, although the MEND job will automatically correct the CDS, its
decision may be incorrect. The decision as to how to correct the error status
should be made manually by a user or an administrator.
Also, you should determine why the discrepancy occurred, so that it can be
prevented from occurring in the future.
You should use the MEND function only when enabling stacked volume
support or making container information consistent. Otherwise, if errors are
found, correct them manually as described above.
Table 14 shows the possible error conditions and the correction processing of
MEND(SMSTAPE) for each error. In this table, “,” is used as an AND operator,
and “/” is used as an OR operator.
3 | found, loc=lib | not found | not found | EDG6828I - missing from LM | no mend
4 | master/user/init | not checked | not checked | EDG6823I - status mismatch | CUA to private
5 | scratch | not checked | not checked | EDG6823I - status mismatch | CUA to scratch
6 | master/user/init | not private | not private | EDG6823I - status mismatch | CUA to private
7 | scratch | any not scratch, not error | not scratch, not error | EDG6823I - status mismatch | CUA to scratch
9 | loctype=atl/mtl, not intransit | not found | not found | EDG6511I - lib name inconsistent | EDG6829I - set intransit
10 | loc not lib | found in lib | found in lib | EDG6511I - lib name inconsistent | EDG6830I - set loc=lib
11 | loc not lib | not found in lib | not found in lib | EDG6828I - lib name inconsistent | no mend
12 | VTS, type not logical | logical | logical | EDG6807I - type not consistent | EDG6808I - set logical
15 | not VTS, type not physical | physical | physical | EDG6807I - type not consistent | EDG6808I - set physical
16 | atl/mtl, intransit, not stacked | not found/not found | found/not found | EDG6516I - missing from TCDB (if not found in LM) | EDG6829I - set intransit
17 | wrong media type | media type | media type | EDG6822I - media type mismatch | EDG6820I - set media type
18 | wrong media type | media type | media type | EDG6822I - media type mismatch | EDG6821I - set media type
19 | wrong media type | not found | not found | EDG6822I - media type mismatch | EDG6820I - set media type
20 | wrong media type | wrong media type | wrong media type | EDG6822I - media type mismatch | no mend
21 | private, wrong SG | not checked | not checked | EDG6827I - SG mismatch | EDG6825I - CUA to rmm SG / EDG6826I - set rmm SG
4.3.5 Considerations
In this section, we describe some considerations on maintaining the
DFSMSrmm CDS.
• In a multi-system environment, always run EDGUTIL, to verify your CDS,
on the system with the highest level of software available. This ensures
that EDGUTIL uses the latest control data set record format information, to
verify the contents of the CDS.
• Note that VERIFY(SMSTAPE) is meant to be the replacement for
VERIFY(VOLCAT). We recommend that you use VERIFY(SMSTAPE)
instead of VERIFY(VOLCAT).
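As a sketch, a VERIFY job could look like the following (the CDS data set name is an assumption for this example; EDGUTIL reads the CDS through the MASTER DD):

//VERIFY   EXEC PGM=EDGUTIL,PARM='VERIFY(SMSTAPE)'
//MASTER   DD DSN=RMM.PROD.MASTER,DISP=SHR
//SYSPRINT DD SYSOUT=*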
For a system-managed tape library, each connected host system can have a
scratch pool for each media type.
In the case of a system-managed DASD data set, the DC, SC, MC and SG
are stored in the data set catalog entry. But in the case of a data set created
on a system-managed tape volume, these class names are not recorded in
the data set catalog entry. This is because the data set catalog entry for a
tape data set has the same format as a non-system-managed DASD data set
entry. Only the SG name is recorded in the TCDB by OAM.
You can check the assigned MC name of each data set by issuing the
command:
RMM LISTDATASET dsname VOLUME(volser)
[Figure: for a new allocation, the SG ACS routine selects a library group, and the volume is selected by the LM]
You can define multiple pools by the VLPOOL statement in the EDGRMMxx
PARMLIB member.
MNTMSG is used to modify the WTO mount message and drive display to
include the selected pool name, pool prefix or rack number so that the
operators can easily recognize the pool to select.
EDGUX100 gets called at OPEN, CLOSE and EOV time, and can assign a
VRS management value to the data set, which the housekeeping job will use
to assign a VRS.
For example, if you want to retain a data set for 10 days, code EDGUX100 to
assign the MCATL1 VRS management value to the data set, and define the VRS
by issuing the command:
RMM ADDVRS DSNAME(‘MCATL1’) DAYS COUNT(10)
You can check the VRS management value assigned to a tape data set by
issuing the command:
RMM LISTDATASET dsname VOLUME(volser)
In this way, EDGUX100 can be used to modify the WTO mount message and
drive display so that the operator can select the correct pool easily. Also, it is
used to assign the VRS management value to each tape data set (Figure 83).
[Figure: at new allocation, the DC and SC ACS routines run (SC=NULL), DFSMSrmm gives EDGUX100 control at OCE or mount request time, and the assigned management values are matched by VRSs such as ADDVRS DSNAME('MCATL1') DAYS COUNT(50) LOCATION(LOCAL) and ADDVRS DSNAME('MCATL2') WHILECATALOG]
However, if you want to use this exit and set values for these new variables,
this could be very complex to achieve, depending on your tape management
policy.
EDGUX100 gets control right after IGDACSXT (if it exists) returns to the
system, but before the ACS routines for new data set allocation are called.
Note that the values supplied through EDGUX100 are used only if
IGDACSXT has not already set them.
When &STORGRP has a null SG, DFSMSrmm calls EDGUX100 to get a pool
name assigned.
(Figure: IGDACSXT sets &MSPOLICY for all new requests and &MSPOOL
for scratch requests; EDGUX100 is called only if these values are not set.)
Note that the decision to use ACS or EDGUX100 can be taken at the data set
level. Therefore, you can partially implement this function and stage it in
gradually, if necessary.
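As a sketch of this staged approach, a storage group ACS routine can test
the &ACSENVIR read-only variable for the RMMPOOL environment and
assign a pool name for selected data sets only; the filter mask and pool
name below are hypothetical:

PROC STORGRP
  IF &ACSENVIR = 'RMMPOOL' THEN
    SELECT
      WHEN (&DSN = 'PROD.PAYROLL.**')  /* hypothetical mask      */
        SET &STORGRP = 'PAYPOOL'       /* hypothetical pool name */
    END
END

Data sets that do not match the mask leave &STORGRP null, so DFSMSrmm
calls EDGUX100 for them as before.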
At this stage, you would be fully exploiting the ACS support. Remember that
EDGUX100 does not get control if the ACS routines set a non-null MC or SG
value during RMMPOOL or RMMVRS processing, so you should stop using
EDGUX100 logic for pool or VRS assignment.
If you use EDGUX100 for pool or VRS assignment only, you can then remove
it from your installation. However, you might want to keep EDGUX100,
because it can be used for other purposes, such as:
• Clearing "special EXPDT" from the JFCB if it is used
• Permitting the use of a volume which is not registered to the DFSMSrmm
CDS
• Permitting the use of a duplicate volume serial which is registered to the
DFSMSrmm CDS
4.4.4 Considerations
In this section, we describe some considerations on coexistence with
supported DFSMS/MVS releases.
Since this enhancement does not change the DFSMSrmm CDS or SMS CDS
formats, you can share them with any supported lower-level systems, as long
as you maintain the CDS from the highest-level system. No coexistence
PTFs are required for this support.
DFSMSrmm R10 now provides sample batch loader JCL to define the typical
DFSMSrmm batch job flow to OPC. Sample jobs to be scheduled by OPC are
also provided.
For more details, see the OPC manual, TME 10 OPC Planning and
Scheduling the Workload, SH19-4376.
The controller controls the job scheduling across the OPC configuration and
can be considered to be a server. The tracker acts as a client, passing
information about the status of jobs to the controller, and there must be one
on each system in the OPC configuration.
On the system where the controller runs, the controller and tracker can be in
the same address space.
The DFSMSrmm Release 10 sample batch loader JCL helps you to define
applications and special resources. Calendars and workstations must be
defined by using the OPC dialog prior to the execution of the batch loader.
After all of the resources have been defined, you run an OPC batch job to
create a long-term plan (LTP). The LTP is a high-level plan of system activity
that covers a long period of time. Applications in the LTP are not executed
immediately; they are scheduled and executed when they are reflected in a
current plan (CP) by an OPC batch job.
(Figure: the EDGJLOPC sample batch loader defines the applications; the
long-term planning batch process builds the long-term plan; the daily
planning batch process adds applications such as APPLB to the current
plan; OPC then submits jobs such as JOBA to JES and tracks them.)
Additionally, you can define dependencies between operations. A
dependency specifies a relationship between two operations: the first
operation must finish successfully before the second operation can begin.
Applications can be grouped if they have the same run cycle. For example,
you can define daily, weekly, and monthly application groups.
(Figure: sample job flow — under workstation ID RMMWK, operations such
as EDGJWHKP, EDGJMOVE, EDGJCMOV, EDGSETT, and EDGJEXP are
assigned operation numbers and workstations (for example, 10 STC1,
20 CPU1, 50 PRT1, 55 TLIB, 60 CPU1), with run cycles such as Monday or
every workday, and conditional logic such as IF RC=8 for RMMEXP.)
In this job flow, the input arrival time of all applications is 6:00 a.m. and the
deadline is 8:00 a.m. That is, applications are scheduled and executed in the
CP at 6:00 a.m. and should be completed by 8:00 a.m. If the scheduled
applications are not complete by the deadline, a deadline miss is reported on
the OPC reports. You can additionally specify that a WTO message be
issued when an application misses its deadline.
RMMMTH
  EDGJVFY   Performs verification of the RMM control data set. The EDGUTIL
            job is run with the VERIFY parameter.
RMMBKP
  EDGBETT   Performs nothing (IEFBR14). If the journal reaches its threshold,
            the EDGBETT job is invoked by BACKUPPROC of EDGRMMxx, and
            RMMBKP is added to the OPC current plan by the event trigger
            tracking function.
  EDGJBKP1  Backs up the CDS and journal, and clears the journal after a
            successful backup.
RMMPOST
  EDGJBKP2  Backs up the CDS and journal, and clears the journal after a
            successful backup.
  EDGJINER  Initializes initialize-pending volumes and erases erase-pending
            volumes. Six 3480 tape volumes are processed by default.
RMMEXP
  EDGSETT   Performs nothing (IEFBR14). If the number of scratch volumes in
            the SMS-managed tape library reaches the scratch threshold, the
            EDGSETT job is invoked by SCRATCHPROC of EDGRMMxx, and RMMEXP
            is added to the OPC current plan by the event trigger tracking
            function.
  EDGJSCRL  Creates the latest CDS extract file and generates the latest
            scratch lists.
RMMMOVE
  EDGJEJC   Ejects volumes from a system-managed library. By default, the
            library name is ROBBIE and volumes are ejected to the bulk I/O
            station.
  EDGJMOVE  Creates the latest CDS extract file and produces movement
            reports.
After all of these resources are defined, add them to your long-term plan by
running an OPC batch job. When all of the changes are reflected in the LTP,
create or modify the CP by running another OPC batch job.
4.5.6 Considerations
In this section, we describe some considerations on manual intervention:
• DFSMSrmm provides only the sample batch loader JCL and the sample
DFSMSrmm jobs to be scheduled by OPC.
You must modify the supplied JCL to suit your environment.
• This JCL can also be used on lower-level DFSMSrmm systems.
Refer to 2.5, “High speed tape positioning” on page 70 for more information.
Refer to 2.2, “Large tape block sizes” on page 36 for more information.
IBM may have patents or pending patent applications covering subject matter
in this document. The furnishing of this document does not give you any
license to these patents. You can send license inquiries, in writing, to the IBM
Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY
10504-1785.
Licensees of this program who wish to have information about it for the
purpose of enabling: (i) the exchange of information between independently
created programs and other programs (including this one) and (ii) the mutual
use of the information which has been exchanged, should contact IBM
Corporation, Dept. 600A, Mail Drop 1329, Somers, NY 10589 USA.
The information contained in this document has not been submitted to any
formal IBM test and is distributed AS IS. The use of this information or the
implementation of any of these techniques is a customer responsibility and
depends on the customer's ability to evaluate and integrate them into the
customer's operational environment. While each item may have been
reviewed by IBM for accuracy in a specific situation, there is no guarantee
that the same or similar results will be obtained elsewhere.
Any pointers in this publication to external Web sites are provided for
convenience only and do not in any manner serve as an endorsement of
these Web sites.
Java and all Java-based trademarks and logos are trademarks or registered
trademarks of Sun Microsystems, Inc. in the United States and/or other
countries.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of
Microsoft Corporation in the United States and/or other countries.
SET, SET Secure Electronic Transaction, and the SET Logo are trademarks
owned by SET Secure Electronic Transaction LLC.
The publications listed in this section are considered particularly suitable for a
more detailed discussion of the topics covered in this redbook.
This section explains how both customers and IBM employees can find out about IBM Redbooks,
redpieces, and CD-ROMs. A form for ordering books and CD-ROMs by fax or e-mail is also provided.
• Redbooks Web Site ibm.com/redbooks
Search for, view, download, or order hardcopy/CD-ROM Redbooks from the Redbooks Web site.
Also read redpieces and download additional materials (code samples or diskette/CD-ROM images)
from this Redbooks site.
Redpieces are Redbooks in progress; not all Redbooks become redpieces and sometimes just a few
chapters will be published this way. The intent is to get the information out much quicker than the
formal publishing process allows.
• E-mail Orders
Send orders by e-mail including information from the IBM Redbooks fax order form to:
e-mail address
In United States or Canada pubscan@us.ibm.com
Outside North America Contact information is in the “How to Order” section at this site:
http://www.elink.ibmlink.ibm.com/pbl/pbl
• Telephone Orders
United States (toll free) 1-800-879-2755
Canada (toll free) 1-800-IBM-4YOU
Outside North America Country coordinator phone number is in the “How to Order”
section at this site:
http://www.elink.ibmlink.ibm.com/pbl/pbl
• Fax Orders
United States (toll free) 1-800-445-9269
Canada 1-403-267-4455
Outside North America Fax phone number is in the “How to Order” section at this site:
http://www.elink.ibmlink.ibm.com/pbl/pbl
This information was current at the time of publication, but is continually subject to change. The latest
information may be found at the Redbooks Web site.
backup. The process of creating a copy of a data set or object to be used in
case of accidental loss.

backup control data set (BCDS). In DFSMShsm, a VSAM key-sequenced
data set that contains information about backup versions of data sets,
backup volumes, dump volumes, and volumes under control of the backup
and dump functions of DFSMShsm.

cache set. A parameter on storage class and defined in the base
configuration information that maps a logical name to a set of CF cache
structure names.

capacity planning. The process of forecasting and calculating the
appropriate amount of physical computing resources required to
accommodate an expected workload.
configuration (Storage Management Subsystem). A base configuration,
definitions of Storage Management Subsystem classes and storage groups,
and automatic class selection routines that DFSMS uses to manage storage.

connectivity. (1) The considerations regarding how storage controls are
joined to DASD and processors to achieve adequate data paths (and
alternative data paths) to meet data availability needs. (2) In a
system-managed storage environment, the system status of volumes and
storage groups.

construct. One of the following: data class, storage class, management
class, storage group, aggregate group, base configuration.

control data set (CDS). With respect to the Storage Management
Subsystem, a VSAM linear data set containing configurational, operational,
or communication information. The Storage Management Subsystem
introduces three types of control data sets that guide the execution of the
Storage Management Subsystem: the source control data set, the active
control data set, and the communications data set.

control interval (CI). A fixed-length area of auxiliary storage space in which
VSAM stores records. It is the unit of information (an integer multiple of block
size) transmitted to or from auxiliary storage by VSAM.

CUA. Common user access.

coupling facility (CF). The hardware that provides high-speed caching, list
processing, and locking functions in a Parallel Sysplex.

coupling facility (CF) lock structure. The CF hardware that supports
sysplex-wide locking.

D

DADSM. Direct access device space management.

DASD. Direct access storage device.

DASD fast write. An extended function of some models of the IBM 3990
Storage Control in which data is written concurrently to cache and
nonvolatile storage and automatically scheduled for destaging to DASD.
Both copies are retained in the storage control until the data is completely
written to the DASD, providing data integrity equivalent to writing directly to
the DASD. Use of DASD fast write for system-managed data sets is
controlled by storage class attributes to improve performance. See also
dynamic cache management. Contrast with cache fast write.

DASD volume. A DASD space identified by a common label and accessed
by a set of related addresses. See also volume, primary storage, migration
level 1, migration level 2.

data class. A collection of allocation and space attributes, defined by the
storage administrator, that are used to create a data set.

Data Facility Sort. An IBM licensed program that is a high-speed data
processing utility. DFSORT provides an efficient and flexible way to handle
sorting, merging, and copying operations, as well as providing versatile data
manipulation at the record, field, and bit level.

data set. In DFSMS, the major unit of data storage and retrieval, consisting
of a collection of data in one of several prescribed arrangements and
described by control information to which the system has access. In OS/390
non-UNIX environments, the terms data set and file are generally equivalent
and sometimes are used interchangeably. See also file. In OS/390 UNIX
environments, the terms data set and file have quite distinct meanings.

data set collection. A group of data sets which are intended to be allocated
on the same tape volume or set of tape volumes as a result of data set
stacking.

data set stacking. The function used to place several data sets on the same
tape volume or set of tape volumes. It increases the efficiency of tape media
usage and reduces the overall number of tape volumes needed by allocation.
It also allows an installation to group related data sets together on a
minimum number of tape volumes, which is useful when sending data offsite.

DB2. Data Base 2.

DDM. Distributed Data Management.
…services across distributed systems in an SNA environment. DDM provides
a common data management language for data interchange among different
IBM system platforms. (2) The term used to describe the SAA architectures
and programming support that provide distributed file access capabilities
between SAA systems. (3) The DFSMS component that implements the
DDM target server.

DSORG. Data set organization.

DTL. Data tag language.

dual copy. A high availability function made possible by nonvolatile storage
in some models of the IBM 3990 Storage Control. Dual copy maintains two
functionally identical copies of designated DASD volumes in the logical 3990
subsystem, and automatically updates both copies every time a write
operation is issued to the dual copy logical volume.

dump class. A set of characteristics that describes how volume dumps are
managed by DFSMShsm.

duplexing. The process of writing two sets of identical records in order to
create a second copy of data.

dynamic cache management. A function that automatically determines
which data sets will be cached based on the 3990 subsystem load, the
characteristics of the data set, and the performance requirements defined by
the storage administrator.

E

EC. Extended control.

ELPA. Extended link pack area.

Enhanced Capacity Cartridge System Tape. Cartridge system tape with
increased capacity that can only be used with 3490E Magnetic Tape
Subsystems. Contrast with Cartridge System Tape.

EOV. End-of-volume.

EPLPA. Extended pageable link pack area.

erase-on-scratch. Physical erasure of data on a DASD data set when the
data set is deleted (scratched).

ESA. Enterprise Systems Architecture.

ESCON. Enterprise System Connection.

ESD. External symbol dictionary.

expiration. The process by which data sets or objects are identified for
deletion because their expiration date or retention period has passed. On
DASD, data sets and objects are deleted. On tape, when all data sets have
reached their expiration date, the tape volume is available for reuse.

extended addressability. The ability to create and access a VSAM data set
that is greater than 4 GB in size. Extended addressability data sets must be
allocated with DSNTYPE=EXT and EXTENDED ADDRESSABILITY=Y.

extended format. The format of a data set that has a data set name type
(DSNTYPE) of EXTENDED. The data set is structured logically the same as
a data set that is not in extended format but the physical format is different.
See also striped data set and compressed format.

extended link pack area (ELPA). The extension of the link pack area that
resides above 16 MB in virtual storage. See also link pack area.

extended pageable link pack area (EPLPA). The extension of the
pageable link pack area that resides above 16 MB in virtual storage. See
also pageable link pack area.

extended remote copy. Extended Remote Copy (XRC) is a technique
involving both the DFSMS host and the I/O Subsystem that keeps a "real
time" copy of designated data at another location. Updates to the primary
center are replicated at the secondary center asynchronously.

F
…tape mount management, data that is written once and never used again.
The majority of this data is point-in-time backups. (3) Objects infrequently
accessed by users and eligible to be moved to the optical library or shelf.
Contrast with active data.

indexed VTOC. A volume table of contents with an index that contains a list
of data set names and free space information, which allows data sets to be
located more efficiently.

in-place conversion. The process of bringing a volume and the data sets it
contains under the control of SMS without data movement, using
DFSMSdss.

integrated catalog facility catalog. A catalog that is composed of a basic
catalog structure (BCS) and its related volume tables of contents (VTOCs)
and VSAM volume data sets (VVDSs). See also basic catalog structure and
VSAM volume data set.

integrated catalog facility. The name of the catalog in DFSMSdfp that is a
functional replacement for OS CVOLs and VSAM catalogs.

Interactive Storage Management Facility (ISMF). The interactive interface
of DFSMS that allows users and storage administrators access to the
storage management functions.

interval migration. In DFSMShsm, automatic migration that occurs when a
threshold level of occupancy is reached or exceeded on a
DFSMShsm-managed volume, during a specified time interval. Data sets are
moved from the volume, largest eligible data set first, until the low threshold
of occupancy is reached.

I/O. Input/output.

IPL. Initial program load.

ISAM. Indexed sequential access method.

ISMF. See Interactive Storage Management Facility.

ISO. International Organization for Standardization.

ISO/ANSI. When referring to magnetic tape labels and file structure, any
tape that conforms to certain standards established by the ISO and ANSI.

ISPF. Interactive System Productivity Facility.

J

JCL. Job control language.

JES. Job entry subsystem.

JES3. An OS/390 subsystem that receives jobs into the system, converts
them to internal format, selects them for execution, processes their output,
and purges them from the system. In complexes that have several loosely
coupled processing units, the JES3 program manages processors so that the
global processor exercises centralized control over the local processors and
distributes jobs to them via a common job enqueue.

K

KB. Kilobyte.

kilo (K). The information-industry meaning depends upon the context:
1. K = 1024 (2^10) for real and virtual storage; 2. K = 1000 for disk storage
capacity; 3. K = 1000 for transmission rates.

key-sequenced data set (KSDS). A VSAM data set whose records are
loaded in ascending key sequence and controlled by an index.

KSDS. Key-sequenced data set.

L

LDS. See linear data set.

linear data set (LDS). A VSAM data set that contains data but contains no
control information. A linear data set can be accessed as a
byte-addressable string in virtual storage.

link pack area (LPA). In OS/390, an area of virtual storage that contains
reenterable routines that are loaded at IPL time and can be used
concurrently by all tasks in the system.

load module. An executable program stored in a partitioned data set
program library. See also program object.

logical storage. With respect to data, the attributes that describe the data
and its usage, as opposed to the physical location of the data.

LPA. See link pack area.
nonvolatile storage (NVS). Additional random access electronic storage
with a backup battery power source, available with an IBM Cache Storage
Control, used to retain data during a power outage. Nonvolatile storage,
accessible from all storage directors, stores data during DASD fast write and
dual copy operations.

O

OAM. Object Access Method.

OAM-managed volumes. Optical or tape volumes controlled by the object
access method (OAM).

object. A named byte stream having no specific format or record
orientation.

object access method (OAM). An access method that provides storage,
retrieval, and storage hierarchy management for objects and provides
storage and retrieval management for tape volumes contained in
system-managed libraries.

object backup storage group. A type of storage group that contains optical
or tape volumes used for backup copies of objects. See also storage group.

object storage group. A type of storage group that contains objects on
DASD, tape, or optical volumes. See also storage group.

object storage hierarchy. A hierarchy consisting of objects stored in DB2
table spaces on DASD, on optical or tape volumes that reside in a library,
and on optical or tape volumes that reside on a shelf. See also storage
hierarchy.

OCDS. Offline control data set.

offline control data set (OCDS). In DFSMShsm, a VSAM key-sequenced
set that contains information about tape backup volumes and tape migration
level 2 volumes.

OLTP. Online transaction processing.

optical disk drive. The mechanism used to seek, read, and write data on an
optical disk. An optical disk drive can be operator-accessible, such as the
3995 Optical Library Dataserver, or stand-alone, such as the 9346 or 9347
optical disk drives.

optical library. A storage device that houses optical drives and optical
cartridges, and contains a mechanism for moving optical disks between a
cartridge storage area and optical disk drives.

optical volume. Storage space on an optical disk, identified by a volume
label. See also volume.

OSAM. Overflow sequential access method.

OS/390. OS/390 is a network computing-ready, integrated operating system
consisting of more than 50 base elements and integrated optional features
delivered as a configured, tested system.

OS/390 UNIX System Services (OS/390 UNIX). The set of functions
provided by the SHELL and UTILITIES, kernel, debugger, file system, C/C++
Run-Time Library, Language Environment, and other elements of the OS/390
operating system that allow users to write and run application programs that
conform to UNIX standards.

P

pageable link pack area (PLPA). An area of virtual storage containing SVC
routines, access methods, and other read-only system and user programs
that can be shared among users of the system. See also link pack area.

partitioned data set (PDS). A data set on direct access storage that is
divided into partitions, called members, each of which can contain a
program, part of a program, or data.

partitioned data set extended (PDSE). A system-managed data set that
contains an indexed directory and members that are similar to the directory
and members of partitioned data sets. A PDSE can be used instead of a
partitioned data set.

PDS. See partitioned data set.

PDSE. See partitioned data set extended.

performance. (1) A measurement of the amount of work a product can
produce with a given amount of resources. (2) In a system-managed storage
environment, a measurement of effective data processing speed…
physical storage. With respect to data, the actual space on a storage
device that is to contain data.

PLPA. Pageable link pack area.

pool storage group. A type of storage group that contains system-managed
DASD volumes. Pool storage groups allow groups of volumes to be managed
as a single entity. See also storage group.

PPRC. Peer-to-peer remote copy.

primary data set. When referring to an entire data set collection, the
primary data set is the first data set allocated. For individual data sets being
stacked, the primary data set is the one in the data set collection that
precedes the data set being stacked and is allocated closest to it.

primary storage. A DASD volume available to users for data allocation.
The volumes in primary storage are called primary volumes. See also
storage hierarchy. Contrast with migration level 1 and migration level 2.

program management. The task of preparing programs for execution,
storing the programs, load modules, or program objects in program libraries,
and executing them on the operating system.

program object. All or part of a computer program in a form suitable for
loading into virtual storage for execution. Program objects are stored in
PDSE program libraries and have fewer restrictions than load modules.
Program objects are produced by the binder.

PSCB. Protected step control block.

PSF. PSF for OS/390.

PSP. Program Services Period.

recovery. The process of rebuilding data after it has been damaged or
destroyed, often by using a backup copy of the data or by reapplying
transactions recorded in a log.

Redundant Array of Independent Disks (RAID). A disk subsystem
architecture that combines two or more physical disk storage devices into a
single logical device to achieve data redundancy.

relative byte address (RBA). In VSAM, the displacement of a data record
or a control interval from the beginning of the data set to which it belongs,
independent of the manner in which the data set is stored.

relative-record data set (RRDS). A VSAM data set whose records are
loaded into fixed-length slots.

removable media library. The volumes that are available for immediate
use, and the shelves where they could reside.

residence mode (RMODE). The attribute of a load module or program
object.

Resource Access Control Facility (RACF). An IBM licensed program that
is included in OS/390 Security Server and is also available as a separate
program for the OS/390 and VM environments. RACF provides access
control by identifying and verifying the users to the system, authorizing
access to protected resources, logging detected unauthorized attempts to
enter the system, and logging detected accesses to protected resources.

Resource Measurement Facility (RMF). An IBM licensed program or
optional element of OS/390, that measures selected areas of system activity
and presents the data collected in the format of printed reports, system management
facilities (SMF) records, or display reports. Use RMF to evaluate system
performance and identify reasons for performance problems.

resource profile. A profile that provides RACF protection for one or more
resources. User, group, and connect profiles are not resource profiles. The
information in a resource profile can include the data set profile name, profile
owner, universal access authority, access list, and other data. Resource
profiles can be discrete profiles or generic profiles.

RLS. Record-level sharing.

RMF. See Resource Measurement Facility.

RMODE. Residence mode.

RRDS. Relative-record data set.

RSECT. Read-only control section.

S

SCDS. See source control data set.

SDSP. Small data set packing.

service level (Storage Management Subsystem). A set of logical
characteristics of storage required by a Storage Management
Subsystem-managed data set (for example, performance, security,
availability).

service-level agreement. (1) An agreement between the storage
administration group and a user group defining what service-levels the
former will provide to ensure that users receive the space, availability,
performance, and security they need. (2) An agreement between the storage
administration group and operations defining what service-level operations
will provide to ensure that storage management jobs required by the storage
administration group are completed.

sharing control data set. A VSAM linear data set that contains information
DFSMSdfp needs to ensure the integrity of the data sharing environment.

SHCDS. Sharing control data set.

shelf. A place for storing removable media, such as tape and optical
volumes, when they are not being written to or read.

shelf location. A single space on a shelf for storage of removable media.

small-data-set packing (SDSP). In DFSMShsm, the process used to
migrate data sets that contain equal to or less than a specified amount of
actual data. The data sets are written as one or more records into a VSAM
data set on a migration level 1 volume.

SMF. See system management facility.

SMS. See Storage Management Subsystem or System Managed Storage.

SMS complex. A collection of systems or system groups that share a
common configuration. All systems in an SMS complex share a common
active control data set (ACDS) and a communications data set (COMMDS).
The systems or system groups that share the configuration are defined to
SMS in the SMS base configuration.

SMS control data set. A VSAM linear data set containing configurational,
operational, or communications information that guides the execution of the
Storage Management Subsystem. See also source control data set, active
control data set, and communications data set.

source control data set (SCDS). A VSAM linear data set containing an
SMS configuration. The SMS configuration in an SCDS can be changed and
validated using ISMF. See also active control data set and communications
data set.

storage administration group. A centralized group within the data
processing center that is responsible for managing the storage resources
within an installation.

storage administrator. A person in the data processing center who is
responsible for defining, implementing, and maintaining storage
management policies.

storage class. A collection of storage attributes that identify performance
goals and availability requirements, defined by the storage…
applications. See also system-managed storage tape library. A set of equipment and facilities
environment. that support an installation’s tape environment.
This can include tape storage racks, a set of tape
system-managed storage environment. An
drives, and a set of related tape volumes
environment that helps automate and centralize
mounted on those drives. See also
the management of storage. This is achieved
system-managed tape library and automated
through a combination of hardware, software,
tape library.
and policies. In the system-managed storage
environment for OS/390, the function is provided Tape Library Dataserver. A hardware device
by DFSORT, RACF, and the combination of that maintains the tape inventory associated with
DFSMS and OS/390. a set of tape drives. An automated tape library
dataserver also manages the mounting, removal,
system-managed tape library. A collection of
and storage of tapes.
tape volumes and tape devices, defined in the
tape configuration database. A system-managed tape mount management. The methodology
tape library can be automated or manual. See used to optimize tape subsystem operation and
also tape library. use, consisting of hardware and software
facilities used to manage tape data efficiently.
system-managed volume. A DASD, optical, or
tape volume that belongs to a storage group. tape storage group. A type of storage group
Contrast with DFSMShsm-managed volume and that contains system-managed private tape
DFSMSrmm-managed volume. volumes. The tape storage group definition
specifies the system-managed tape libraries that
system management facilities (SMF) . A
can contain tape volumes. See also storage
component of OS/390 that collects input/output
group.
(I/O) statistics, provided at the data set and
storage class levels, which helps you monitor the tape subsystem. A magnetic tape subsystem
performance of the direct access storage consisting of a controller and devices, which
subsystem. allows for the storage of user data on tape
cartridges. Examples of tape subsystems include
system programmer. A programmer who
the IBM 3490 and 3490E Magnetic Tape
plans, generates, maintains, extends, and
Subsystems.
controls the use of an operating system and
applications with the aim of improving overall tape volume. A tape volume is the recording
productivity of an installation. space on a single tape cartridge or reel. See also
volume.
T
temporary data set. An uncataloged data set
TB. Terabyte.
whose name begins with & or &&, that is normally
tera (T). The information-industry meaning used only for the duration of a job or interactive
depends upon the context: 1. T = session. Contrast with permanent data set.
1,099,511,627,776(2 40 ) for real and virtual
threshold. A storage group attribute that
storage 2. T = 1,000,000,000,000 for disk storage
controls the space usage on DASD volumes, as a
capacity 3. T = 1,000,000,000,000 for
percentage of occupied tracks versus total tracks.
transmission rates.
The low migration threshold is used during
tape configuration database. One or more primary space management and interval
volume catalogs used to maintain records of migration to determine when to stop processing
system-managed tape libraries and tape data. The high allocation threshold is used to
volumes. determine candidate volumes for new data set
tape librarian. The person who manages the allocations. Volumes with occupancy lower than
tape library. the high threshold are selected over volumes that
meet or exceed the high threshold value.
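The high-allocation-threshold selection rule described in this entry can be sketched as a small model. This is purely illustrative: the function and field names below are invented for the example and are not DFSMS interfaces.

```python
# Illustrative model of the "threshold" volume-selection rule from the
# glossary entry above. All names are invented for the example; they are
# not DFSMS interfaces.

def occupancy(occupied_tracks, total_tracks):
    """Space usage as a percentage of occupied tracks versus total tracks."""
    return 100.0 * occupied_tracks / total_tracks

def preferred_volumes(volumes, high_threshold):
    """Order candidate volumes for a new allocation: volumes with occupancy
    below the high allocation threshold are preferred over volumes that
    meet or exceed it."""
    below = [v for v in volumes
             if occupancy(v["occupied"], v["total"]) < high_threshold]
    at_or_above = [v for v in volumes if v not in below]
    return below + at_or_above

# Example: with a high threshold of 85%, VOL002 (50% occupied) is
# preferred over VOL001 (90% occupied).
vols = [{"volser": "VOL001", "occupied": 900, "total": 1000},
        {"volser": "VOL002", "occupied": 500, "total": 1000}]
order = preferred_volumes(vols, 85)
```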
V

vital records. A data set or volume maintained for meeting an externally-imposed retention requirement, such as a legal requirement. Compare with disaster recovery.

vital record specification. Policies defined to manage the retention and movement of data sets and volumes for disaster recovery and vital records purposes.

volume. The storage space on DASD, tape, or optical devices, which is identified by a volume label.

VTS. Virtual tape server.

VVDS. See VSAM volume data set.

W

WTO. Write-to-operator.

X

XRC. Extended remote copy.
Index

Numerics
3-way audit 158

A
ABARS 131
ACS read-only variable
  &ACSENVIR 171
  &BLKSIZE 42
  &MGMTCLAS 171
  &MSPOLICY 171
  &MSPOOL 171
  &STORGRP 171
  &UNIT 61
ACS routines
  modifying for VSAM data striping 15
Allocation
  No secondary space 19
  Secondary space 17
ARA 54
ARCHBACK macro 106
ARCHMIG macro
  FORCEML1=YES 100
ARCINBAK program 106
ARCMDEXT installation exit 102
ARCTPEXT sample program 86
AUX host 82

B
BACKDS command 106
BACKUP processing 135
BACKVOL command
  DUMP 101
BDW
  Extended format 50
  Non-extended format 49
BLKSIZE 38
BLKSZLIM 38
BSAM access method 37
BUFL 51
BUFNO 51
BUILD macro 50
BUILDRCD macro 50

C
CAMLST macro 69
Candidate volumes 24
CC keyword
  for data set backup function 125
Concurrent Copy 124
Control Area
  size calculation 26
Control Interval
  ensuring adequate size 27
COPYSDB 40
Coupling Facility 77
CPOOL macro 51

D
DADSM 64
data set stacking 63
DCBEBLKSI 51
DEFINE command
  SWITCHTAPES AUTOBACKUPEND 121
  SWITCHTAPES PARTIALTAPE 124
DEVSERV command
  QDASD 12
DEVTYPE
  INFO=AMCAP 52
DFSMShsm startup parameter
  CDSQ=YES 82
  CDSSHR=RLS 82
  CDSSHR=YES 82
  HOST= 83
  HOSTMODE= 80
  PRIMARY=YES 83
DFSORT 47
  using large tape block size 47
DPRTY parameter 86
DS1CHA
  data set changed bit 101
DSTORE processing 135

E
ECS
  Enhanced Catalog Sharing 76
EDGHSKP utility 135
EDGJRPT sample job 138
EDGJVLTM sample job 138
EDGRMMxx OPTION parameter

T
tape configuration database 159
Tape labels
  supported for large tape block size 37
TAPEBLKSZLIM 39
TARGET keyword
  for data set backup function 112
TMM 61

U
UCB
  extension 40
IBM Redbooks review

Your feedback is valued by the Redbook authors. In particular, we are interested in situations where a Redbook "made the difference" in a task or problem you encountered. Using one of the following methods, please review the Redbook, addressing value, subject matter, structure, depth, and quality as appropriate.

• Use the online Contact us review redbook form found at ibm.com/redbooks
• Fax this form to: USA International Access Code + 1 914 432 8264
• Send your comments in an Internet note to redbook@us.ibm.com

Questions about IBM's privacy policy? The following link explains how we protect your personal information: ibm.com/privacy/yourprivacy/
®

DFSMS Release 10 Technical Update

One-stop guide to know all of the enhancements to DFSMS!

MUST-HAVE information for installation planning!

Many worked examples!

DFSMS, formerly known as DFSMS/MVS, continues to add enhancements to performance, availability, system throughput, and usability for data access and storage management.

DFSMS Release 10 is the first release of DFSMS that is available solely with OS/390. DFSMS Release 10 is packaged and shipped with OS/390 Version 2 Release 10 and offers the ease of installation, integration, and maintenance inherent in the OS/390 product.

This IBM Redbook provides an in-depth description of all the new enhancements made to DFSMS Release 10. This book is designed to help storage administrators plan, install, and migrate to DFSMS Release 10.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.