5 Quick Ways to Speed up Your z/OS Batch Without Application Changes

By David Stephens / October 28, 2021

z/OS has several features that can speed up batch without needing any application changes.
z/OS users understand that batch is still important. However, with pressure to reduce downtime and offer 24/7 online services, it seems we always need to shrink our batch window. In other words, make our batch jobs run faster.

The good news is that z/OS has a lot of features that may speed up batch, without needing any
application changes. Let’s look at some of the features that are easier to implement, and see what
they can do.

1. Less Time Waiting for HSM


If you’re using DFSMShsm (HSM) to archive unused datasets, then your batch jobs may spend
time waiting for these datasets to be recalled. An easy way to resolve this is to modify DFSMS
management class settings so datasets aren’t archived, but not everyone has this luxury.

A quick way to reduce HSM delays sounds trivial: make sure you actually need the archived datasets. HSM recalls every migrated dataset referenced by a JCL DD statement, even if it is never opened.

If you’re deleting an archived dataset, you don’t need to wait for it to be recalled. Most sites will
have set the IEFBR14_DELMIGDS parameter of the z/OS ALLOCxx parmlib member to
NORECALL. So, if you have a DD card that looks like this:
//DD1 DD DSN=MY.DSET.TO.DELETE,DISP=(MOD,DELETE),SPACE=(TRK,1)

Then the dataset will be deleted immediately by HSM: no recall. The trick is that this only works if the step runs IEFBR14. It won’t work if the job step calls another program, or if you use something like IDCAMS to delete the dataset.

By default, HSM recalls migrated datasets one at a time as they are allocated. So, if you have a job step that recalls five datasets, it would be nice to recall them all at the same time. Setting the BATCH_RCLMIGDS parameter of the z/OS ALLOCxx parmlib member to PARALLEL does exactly that.
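
As a rough sketch, the two ALLOCxx settings mentioned above are keywords on the SYSTEM statement and might look like this (check your installation’s ALLOCxx member and the MVS Initialization and Tuning Reference before changing anything):
SYSTEM IEFBR14_DELMIGDS(NORECALL)
       BATCH_RCLMIGDS(PARALLEL)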

Those with more than one z/OS system in a sysplex will want to consider HSM common recall
queues. Enabling this feature allows other z/OS systems to recall your dataset if the local HSM is
too busy.
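
Common recall queues are set up with DFSMShsm commands. A minimal sketch, assuming a base name of RQ1 for the coupling facility list structure (your name and setup will differ), issued on each participating system:
SETSYS COMMONQUEUE(RECALL(CONNECT(RQ1)))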

2. VIO
Virtual I/O (VIO) has been a favorite way of speeding up batch for decades, and it remains an
easy option to speed up temporary, non-VSAM datasets. 

Many use VIO datasets in a single job step. However, VIO datasets remain active for the entire
job. So, you could put data in a VIO dataset in one step, and then use that VIO dataset in
another. 

If you have a permanent dataset that is read by multiple batch steps, you may improve
performance by first copying that dataset into VIO:
//INTOVIO EXEC PGM=ICEGENER
//SYSUT1 DD DISP=SHR,DSN=MY.PERM.DSET
//SYSUT2 DD DISP=(NEW,PASS),DSN=&&TEMP1,UNIT=VIO
//SYSPRINT DD SYSOUT=*
//SYSIN DD DUMMY

This could then be accessed (faster) in later steps:


//DD1 DD DSN=*.INTOVIO.SYSUT2,DISP=(OLD,PASS)

You may have noticed that this example is using a program called ICEGENER to copy our data.
I’ll explain why shortly.

If you’re using z/OS UNIX files, a similar option to VIO is the /tmp directory. Most systems will have mounted /tmp as a temporary file system (TFS). In other words, it stays in memory.
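
As an illustration only, a BPXPRMxx MOUNT statement for a memory-backed /tmp might look like the sketch below; the file system name and the 500 MB size on the PARM keyword are assumptions, so check how /tmp is actually defined on your systems:
MOUNT FILESYSTEM('/TMP')
      TYPE(TFS)
      MOUNTPOINT('/tmp')
      PARM('-s 500')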

3. Fewer ENQ Waits


If another job (or user) has exclusive access to a dataset, your job must wait: an ENQ wait. So,
what can you do about this?

An easy solution is to speed up the job (or user) that has exclusive access to the dataset. The ERV parameter of the z/OS IEAOPTxx parmlib member can be set so that tasks holding ENQs needed by other tasks briefly get a higher priority.
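
The setting itself is a single IEAOPTxx keyword. The value below (in CPU service units) is purely illustrative, not a recommendation; work with your performance team before changing it:
ERV=50000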

Suppose you have a job that creates a dataset:


//STEP1 EXEC PGM=PGM1
//SYSUT1 DD DSN=MY.DSET,DISP=(,CATLG),UNIT=SYSDA,SPACE=(CYL,(10,10))

Normally, the entire job will hold an exclusive ENQ on this dataset, even if all later steps only read the dataset or don’t use it at all. We could speed things up by splitting the job into two: the first job includes the step that creates the dataset and then ends, releasing the exclusive ENQ. The second job has the steps that read it, allowing other jobs and users to read the dataset at the same time.
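
A hedged sketch of the split, with assumed job names and a made-up reader program (PGM2); the second job must be scheduled to run after the first completes, for example via your job scheduler:
//CREATE JOB (ACCT),CLASS=A
//STEP1 EXEC PGM=PGM1
//SYSUT1 DD DSN=MY.DSET,DISP=(,CATLG),UNIT=SYSDA,SPACE=(CYL,(10,10))

//READER JOB (ACCT),CLASS=A
//STEP1 EXEC PGM=PGM2
//INPUT DD DSN=MY.DSET,DISP=SHR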

An alternative is to use the z/OS feature that allows this ENQ to be “downgraded” to SHR if subsequent steps don’t allocate the dataset as exclusive. This can be enabled using the DSENQSHR parameter of the JOB statement. For example:
//JOB1 JOB (ACCT),CLASS=1,MSGCLASS=X,DSENQSHR=ALLOW

Sites can also enable this for JES job classes by setting DSENQSHR on the JOBCLASS
definition statement in the JES2 parameters.
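
A hedged sketch of the JES2 side, assuming job class A (pick the classes your batch actually uses):
JOBCLASS(A) DSENQSHR=ALLOW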

4. Faster Dataset Copies


Batch jobs often copy datasets to create a backup, or a second copy that they can work on or
process. If you’re using IEBGENER or IDCAMS REPRO to copy datasets, please stop.

For many years, DFSORT and SYNCSORT MFX have offered much faster IEBGENER replacements that use the same I/O techniques they use when sorting: ICEGENER and BETRGENR respectively. Many sites have gone as far as creating an IEBGENER alias pointing to these better options, so you may be using them without knowing.

ICEGENER and BETRGENR can also copy VSAM datasets—usually faster than most other
options, including IDCAMS REPRO.

An even faster option is DFSMSdss. It can copy compressed or encrypted datasets without
decompressing/decrypting them first, and can use DASD subsystem features to create copies
almost instantly using concurrent copy.
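
A minimal DFSMSdss sketch with made-up dataset names; the CONCURRENT keyword requests concurrent copy where the DASD subsystem supports it:
//COPYDS EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
  COPY DATASET(INCLUDE(MY.PERM.DSET)) -
       RENAMEU((MY.PERM.DSET,MY.WORK.DSET)) -
       CONCURRENT
/*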

5. Faster Tape
Batch jobs often use tape. Many sites don’t use the z/OS Large Block Interface (LBI) that can
increase the maximum blocksize from the 32kByte maximum for disk to values as high as
256kBytes. Many utilities now support LBI, including DFSMSdss and IDCAMS.
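
For example, a tape DD coding a large block size might look like the sketch below; the dataset name is made up, and the program writing it must support LBI for the larger blocks to be used:
//BACKUP DD DSN=MY.TAPE.BACKUP,DISP=(NEW,CATLG),UNIT=TAPE,
// DCB=(RECFM=FB,LRECL=80,BLKSIZE=262144)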

Tape mount delays can also slow down a batch job. There is a limit to the number of tape drives
that can be used, so reducing the number of drives used may help. Suppose we are searching
through three tape volumes. If we code:
//DD1 DD DISP=SHR,DSN=TAPE1.DSET
// DD DISP=SHR,DSN=TAPE2.DSET
// DD DISP=SHR,DSN=TAPE3.DSET

Then z/OS will mount all three tapes on separate drives before the step continues. We use three tape drives (waiting if three are not available), and hold them for the entire job. A better way would be to code:
//DD1 DD DISP=SHR,DSN=TAPE1.DSET
// DD DISP=SHR,DSN=TAPE2.DSET,UNIT=AFF=DD1
// DD DISP=SHR,DSN=TAPE3.DSET,UNIT=AFF=DD1

Now, z/OS will only mount the first tape when the job step starts. It will mount the second tape on the same drive once the first is no longer needed, and similarly for the third. The big benefit is that we only use one tape drive: we only need to wait for one drive, and the others remain available for other batch jobs.

Ways to Faster Batch


These are a few of the easier features that z/OS provides to speed up your batch: none require application changes.

However, simply using these features may not necessarily improve your batch performance. If you’re looking at speeding up batch, a good first step is to analyze its performance and find out what makes up the total elapsed time (e.g. time waiting for an initiator, waiting for dataset recall, or other reasons). Then you can start tackling those bottlenecks, possibly using some of the features mentioned here.
About the author


As the Lead Systems Programmer for Longpela Expertise, David Stephens helps firms around
the world with their z/OS related problems.
