The BMS macro generation utility, DFHBMSUP, is used to recreate BMS macro statements from a mapset load module.

DFHBMSUP can recreate the original BMS macros that were assembled to produce a mapset load module when the macro statements are no longer available. However, it is not possible to recover the original field names; field names are generated by the utility and can be edited later. DFHBMSUP sets a return code indicating success or failure.

DFHBMSUP requires the following inputs:
• Input mapset - name supplied in the PARM field of the EXEC JCL statement.
• Input mapset library - name defined in the DFHRPL DD statement.

DFHBMSUP provides the following outputs:
• Output map - name defined in the BMSOUT DD statement.
• Output map library - name defined in the BMSOUT DD statement.

DD statements for DFHBMSUP

The following are the DD statements for the input and output data sets used by DFHBMSUP:

STEPLIB DD
Defines a partitioned data set (DSORG=PO) containing DFHBMSUP. If DFHBMSUP is in the link list, this statement is not required.

DFHRPL DD
Defines a partitioned data set (DSORG=PO) containing the mapset load module to be processed. The member name is supplied in the PARM field of the EXEC statement.

BMSOUT DD
Defines a sequential data set or a member of a partitioned data set (DSORG=PO) to contain the BMS macro statements generated by the utility.

Return codes from DFHBMSUP

DFHBMSUP sets one of the following return codes:
0 - Utility executed successfully.
4 - Input mapset could not be located.
8 - Output mapset could not be opened.

Example of using DFHBMSUP

The following statements are required to process a BMS mapset load module, BMSET01, which is in the FPRST.A414030.LOAD library (or FPRST.PROD.CICS in the case of a production region). Macro statements are generated and written to the MAPOUT member of the FPRST.OUTPUT.MACLIB library.

//*
//RUNPROG  EXEC PGM=DFHBMSUP,PARM='BMSET01',REGION=2M
//STEPLIB  DD DSN=CICS.TS220.SDFHLOAD,DISP=SHR
//BMSOUT   DD DSN=FPRST.OUTPUT.MACLIB(MAPOUT),
//            DISP=SHR
//DFHRPL   DD DSN=FPRST.A414030.LOAD,DISP=SHR
//SYSUDUMP DD SYSOUT=*
//*

*********************************************************************** *
How to recover an older version of a data set in MVS

If you have modified a data set and you do not have a back-up of it, but now you want the previous version, try this:

1. Find the previous versions of the data set by issuing this TSO command (MVS takes a back-up of a data set every time it is changed):

HLIST BCDS DSN('Dataset Name')

By executing this command you will get the back-up date and time, generation (GEN), version (VER) and so on. Note down the generation number of the version you want to recover.

2. Now recover that previous version of the data set by issuing this TSO command:

HRECOVER ('Dataset Name') GEN(generation number) NEWNAME

By executing this command, MVS will ask for a 'new dataset name' into which that generation (version) is copied. Give the new data set name, and you have your previous version recovered.

Note: Generally we are familiar with the HRECOVER command for recovering deleted data sets.
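As a sketch of the two steps above, assuming a data set named FPRST.A414030.TESTFILE whose generation 2 is the version to be restored (the data set name, generation number and new name are illustrative, not taken from a real system):

```
HLIST BCDS DSN('FPRST.A414030.TESTFILE')
HRECOVER ('FPRST.A414030.TESTFILE') GEN(2) NEWNAME('FPRST.A414030.TESTOLD')
```

The first command lists the back-up generations so the generation number can be noted; the second restores that generation into the new data set name.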

*********************************************************************** *
The IEBEDIT utility can be used to select the steps of a job to be executed. It can also be used to copy jobs from one input data set to another, selectively copying job steps from each job by including a job step from one job or excluding a job step from another. Please find examples in the attached text documents. Joba.txt contains two jobs, A414030A and A414030B.

joba.txt

iebedit1.txt

A414030A contains one step, STEP100, and A414030B contains three steps: STEP300, STEP400 and STEP500.
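Since the attachments are not included here, a minimal IEBEDIT job for this scenario might look like the following sketch (the data set names are illustrative; the EDIT statement selects STEP300 and STEP500 from job A414030B):

```
//EDITJOB  EXEC PGM=IEBEDIT
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=FPRST.A414030.JOBA,DISP=SHR
//SYSUT2   DD DSN=FPRST.A414030.JOBB,DISP=(,CATLG),
//            SPACE=(TRK,(1,1)),DCB=(LRECL=80,RECFM=FB)
//SYSIN    DD *
  EDIT START=A414030B,TYPE=INCLUDE,STEPNAME=(STEP300,STEP500)
/*
```

TYPE=EXCLUDE could be used instead to copy a job while dropping the named steps.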

*********************************************************************** *
How to remove duplicate records from any two files and get the unique ones from each of the two files.

sortjno9.txt

sortjno8.txt

sortjno7.txt

sortjno3.txt

sortjno2.txt

sortjno1.txt

joinjcl3.txt

joinjcl2.txt

joinjcl1.txt

joinjcl4.txt

JOINKEYS is a new control card used with SYNCSORT to remove duplicates and get the unique rows from two files.

JOINJCL1: This job joins both the matched and unmatched rows from the two input files into FPRST.A414030.SORTJNO3.

JOINJCL2: This job writes only those rows from File 1 that are not available in File 2 to FPRST.A414030.SORTJNO1.

JOINJCL3: This job writes only those rows from File 2 that are not available in File 1 to FPRST.A414030.SORTJNO2.

JOINJCL4: This job is similar to JOINJCL1, the only difference being the OUTFIL keyword used to route the records to different output files as per the conditions specified.
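As the attached jobs are not included here, a minimal sketch of the sort control cards for the JOINJCL2 case (rows of File 1 not present in File 2) might look like this, assuming the join key is in columns 1-10 of 80-byte records and the inputs are on the SORTJNF1 and SORTJNF2 DD statements:

```
  JOINKEYS FILES=F1,FIELDS=(1,10,A)
  JOINKEYS FILES=F2,FIELDS=(1,10,A)
  JOIN UNPAIRED,F1,ONLY
  REFORMAT FIELDS=(F1:1,80)
  SORT FIELDS=COPY
```

For the JOINJCL1 case, JOIN UNPAIRED,F1,F2 would keep both matched and unmatched rows from both files instead.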

*********************************************************************** *
How to have lines inserted to enter data using the TE command

Power Typing.doc

*********************************************************************** *
SDSF action characters like C, XC and others as used in the below JCL

It is possible to execute SDSF in batch and therefore get access to the spool. Here is an example using SDSF to retrieve information about the current job:

FPRST.A414030.DUK.JCL(SPOOL1)

//A414030S JOB (2891,FPRS,EDT),'RADHIKA',NOTIFY=&SYSUID,CLASS=R,
//         MSGCLASS=U
//SDSF     EXEC PGM=SDSF
//SYSOUT   DD SPACE=(TRK,(1,1),RLSE),
//         DCB=(LRECL=80,BLKSIZE=800,RECFM=FB),
//         DSN=&&ISFIN,DISP=(,PASS)
//MSGFILE  DD SPACE=(TRK,(1,1),RLSE),
//         DSN=FPRST.A414030.SPOOL.OP1,DISP=(,CATLG),
//         DCB=(LRECL=80,BLKSIZE=800,RECFM=FB)
//ISFOUT   DD DUMMY
//ISFIN    DD DSN=FPRST.A414030.ACNTL(SDSFCNTL),DISP=SHR

FPRST.A414030.ACNTL(SDSFCNTL)

DA
PRINT FILE MSGFILE
FIND A414030S
++?
FIND JESJCL
++XC

SDSF Action Characters.doc

*********************************************************************** *
While browsing the Internet, we have an option called address auto-complete: we enter some letters in the address bar and the browser auto-completes it. The mainframe also has an "Auto complete" feature! The steps to use it are as follows.
1. Go to ISPF 3.4.
2. Enter KEYS on the command line; the KEYS window will pop up.
3. Set any key to AUTOTYPE, save and exit.
4. Now type any data set name partially and press the assigned PF key.

*********************************************************************** *
How to compare and list data sets using the ISPF 3.13 option

Options and Utilities of 3.13.doc

*********************************************************************** *
How to change big strings in ISPF, and how to find a semicolon using the FIND command in ISPF

sem colon in find.doc

Changing Big Strings in ISPF.doc

*********************************************************************** *
All, I missed out this link in the mail sent yesterday. COMMAREA has had a theoretical limitation of 32K, and in practice development teams experienced problems when it crossed 24K. With the increased web-enablement of CICS applications, this proved to be a bottleneck. It has been avoided with the use of channels and containers. You may view containers and channels like DFHCOMMAREA in the treatment of their lifetime; however, you can have as many containers and as many channels as you want without any storage limit (the maximum is the system storage!).
http://publib.boulder.ibm.com/infocenter/cicsts/v3r1/index.jsp?topic=/com.ibm.cics.ts31.doc/dfhp4/commands/dfhp4_getcontainerchannel.htm
In the days to come, you might come across a lot of tasks to convert applications from DFHCOMMAREA to channels! See FBCA510 in A425394.OMNI.ORDERMF.PDSSORC.

*********************************************************************** *
DCCMA030 – Multi-row fetch program in CICS
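The program itself is not included here; as a hedged sketch of the technique the name refers to (the cursor, table and host-variable names below are illustrative, not taken from DCCMA030), a DB2 multi-row fetch in COBOL looks roughly like this:

```
* Declare a cursor that supports rowset positioning
     EXEC SQL
         DECLARE C1 CURSOR WITH ROWSET POSITIONING FOR
         SELECT ACCT_NO, ACCT_NAME FROM ACCOUNTS
     END-EXEC.

     EXEC SQL OPEN C1 END-EXEC.

* Fetch up to 10 rows in one call into host-variable arrays
     EXEC SQL
         FETCH NEXT ROWSET FROM C1 FOR 10 ROWS
         INTO :WS-ACCT-NO-ARR, :WS-ACCT-NAME-ARR
     END-EXEC.
```

Fetching a rowset per call reduces the number of SQL calls compared with fetching one row at a time.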

*********************************************************************** * TSO Commands for editing
The following commands will be useful especially when we extract SPUFI output to Windows. These commands help in deleting unwanted lines and thereby save a lot of time in editing.

1. X ALL (excludes all source lines from view)
2. F ALL '--- ' (finds all the source lines containing '--- ')
3. DEL ALL X (deletes all excluded lines)
4. DEL ALL NX (deletes all non-excluded lines)

To delete all lines containing '-----' we can issue the following series of commands:
X ALL ; F ALL '--- ' ; DEL ALL NX
The screenshots are given in the doc below.

screenshots.doc (293 KB)

*********************************************************************** *
CHANNELS and CONTAINERS: DFHCOMMAREA has a theoretical limitation of 32K. To overcome this limitation, containers and channels were introduced.

Containers are named blocks of data for passing information between programs. Any number of containers can be passed between programs, and containers are grouped together in named channels.

Channels can be used as a standard mechanism for exchanging data between programs. A channel can be passed on EXEC CICS LINK, START, XCTL and RETURN commands. This provides a more flexible and more structured method of passing data between programs: variation in the size and number of containers can conveniently be accommodated, allowing easier evolution of the interfaces between programs.

The size of a container is limited only by the amount of storage available, and there is no limit to the number of containers that can be added to a channel. This also removes the need for a program to know the exact size of the data returned. When containers go out of scope they are automatically destroyed, so the programmer is relieved of storage management concerns. Channels can be used by applications written in any of the programming languages supported by CICS.
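As a sketch of the commands described above (the program, channel, container and data-area names are illustrative), a caller might pass data to a linked program like this:

```
* Caller: put data into a container and link, passing the channel
     EXEC CICS PUT CONTAINER('CUSTDATA') CHANNEL('ORDERCHN')
               FROM(WS-CUSTOMER-REC)
               FLENGTH(LENGTH OF WS-CUSTOMER-REC)
     END-EXEC.
     EXEC CICS LINK PROGRAM('ORDPROG') CHANNEL('ORDERCHN')
     END-EXEC.
* Caller: retrieve the reply container created by the linked program
     EXEC CICS GET CONTAINER('REPLY') CHANNEL('ORDERCHN')
               INTO(WS-REPLY-REC) FLENGTH(WS-REPLY-LEN)
     END-EXEC.
```

The linked program would issue its own GET CONTAINER to read CUSTDATA and a PUT CONTAINER to create REPLY on the same channel.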
