GC18-9186-02
IBM DB2 Information Integrator
Before using this information and the product it supports, be sure to read the general information under “Notices” on page 65.
This document contains proprietary information of IBM. It is provided under a license agreement and is
protected by copyright law. The information contained in this publication does not include any product
warranties, and any statements provided in this manual should not be interpreted as such.
You can order IBM publications online or through your local IBM representative:
v To order publications online, go to the IBM Publications Center at www.ibm.com/shop/publications/order
v To find your local IBM representative, go to the IBM Directory of Worldwide Contacts at
www.ibm.com/planetwide
When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any
way it believes appropriate without incurring any obligation to you.
© Copyright International Business Machines Corporation 2003, 2004. All rights reserved.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
© CrossAccess Corporation 1993, 2003
Contents
Chapter 1. Overview of setting up and starting Classic Event Publisher  1
  Introduction to Classic Event Publisher  1
  Overview of configuring to capture information  2
  Overview of configuring to publish information  3
  Overview of starting Classic Event Publisher  4

Chapter 2. Preparing data and configuring change-capture agents for CA-IDMS  5
  Setup procedures for CA-IDMS sources  5
  Enabling CA-IDMS change capture  5
    Punching the schema and subschema  6
    Mapping the CA-IDMS schema and subschema  6
    Loading the data server metadata catalog  9
  Activating change capture in a CA-IDMS Central Version  9
    Modifying the Central Version JCL  11
    Modifying automatic journaling  11
    Configuring a named server environment  11
    Relinking the CA-IDMS database I/O module IDMSDBIO  12
    Relinking the Presspack support module  12
  Setting up a server to access a CA-IDMS Central Version  13

Chapter 3. Preparing data and configuring change-capture agents for IMS  15
  Supported environments and program types  15
  Enabling IMS change capture  15
    Mapping the sample IMS DBD and copybooks  15
    Loading the metadata catalogs  22
  Installing the IMS active change-capture agent  23
    Advantages and disadvantages of the IMS logger exit installation options  24
  Adding the IMS logger exit to an existing exit  25
  Augmenting a DBD to generate IMS data capture log records  25

Chapter 4. Preparing data and configuring change-capture agents for VSAM  27
  Prerequisites for VSAM monitoring  27
  Setup procedures for CICS monitoring for VSAM changes  27
    Configuring CICS resource definitions  28
    VTAM resource definitions  28
    CICS resource definitions  29
  Mapping VSAM data  30
    Mapping the sample VSAM copybook  30
  Loading the metadata catalogs  34
  Configuring change-capture agents for VSAM  35

Chapter 5. Configuring correlation services, publication services, and publications  37
  Copying the correlation service JCL  37
  Configuring the correlation service and publication service  37
  Configuring the maximum size of messages  40
  Configuring Cross Memory services  41
  Creating publications  41
  Creating the Classic Event Publisher recovery data sets  43

Chapter 6. Starting the processes of capturing and publishing  45
  Starting the process of publishing  45
  Activating change capture for CA-IDMS  45
    Setting up the IDMSJNL2 exit  45
    Before starting a change-capture agent  46
    Starting an active change-capture agent  46
  Activating change capture for an IMS database/segment  46
  Activating change capture for VSAM  47
  Monitoring correlation services and publication services  47

Chapter 7. Recovering from errors  49
  Introduction to recovery mode  49
  Starting a recovery change-capture agent for CA-IDMS  50
    Parameter example  51
    Execution JCL  52
    Journal files in execution JCL  52
  Preparing for recovery mode when using IMS change-capture agents  52
  Recovering from errors when using IMS change-capture agents  53
  Starting recovery change-capture agents for VSAM  54
  Stopping recovery change-capture agents for VSAM  54

DB2 Information Integrator documentation  55
  Accessing DB2 Information Integrator documentation  55
  Documentation about replication function on z/OS  57
  Documentation about event publishing function for DB2 Universal Database on z/OS  58
  Documentation about event publishing function for IMS and VSAM on z/OS  58
  Documentation about event publishing and replication function on Linux, UNIX, and Windows  59
  Documentation about federated function on z/OS  60
  Documentation about federated function on Linux, UNIX, and Windows  60
Each message contains changes from a single type of data source (for example,
only CA-IDMS changes, IMS changes, or VSAM changes). Each message can
contain an entire transaction or only a row-level change.
You can control which fields within which CA-IDMS files, IMS segments, or VSAM
files will be monitored for changes by using a metadata catalog to identify the
specific data items to be captured and published. This metadata catalog also
defines how the individual data items are to be reformatted into relational data
types. This relational mapping results in "logical" CA-IDMS, IMS, and VSAM
tables.
You can use Classic Event Publisher to push data changes to a variety of tools. The
most common consumers of changed data will be information brokers, data
warehousing tools, workflow systems and enterprise application integration (EAI)
solutions. Consider a scenario in which changing prices and inventory are
published to potential buyers. For example, a food wholesaler procures perishable
food products such as bananas from world markets in bulk and sells them to
grocery food retailers and distributors.
The value of bananas decreases the longer that they are in the warehouse. The
wholesaler wants to inform its potential buyers of the changing price and
inventory data and can set up event publishing to do that. Each time the price
changes, an XML message can be sent to potential buyers, informing them of the
"price change event."
The correlation service collects information from the change-capture agents and
segregates the log data by unit-of-work identifiers. If a ROLLBACK occurs, the
correlation service discards all of the data collected for that unit of work. When a
COMMIT occurs, the correlation service processes all of the log data for that unit
of work.
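The unit-of-work handling described above can be sketched in a few lines. This is an illustrative model only, not the product's API: log records are buffered per unit-of-work identifier until a COMMIT releases them or a ROLLBACK discards them. All names here are invented.

```python
from collections import defaultdict

class Correlator:
    """Hypothetical sketch of the correlation service's unit-of-work logic."""

    def __init__(self):
        self.pending = defaultdict(list)   # unit-of-work ID -> buffered log records
        self.published = []                # records released by a COMMIT

    def on_log_record(self, uow_id, record):
        # Segregate incoming log data by unit-of-work identifier.
        self.pending[uow_id].append(record)

    def on_commit(self, uow_id):
        # COMMIT: process (here, release) all log data for this unit of work.
        self.published.extend(self.pending.pop(uow_id, []))

    def on_rollback(self, uow_id):
        # ROLLBACK: discard all data collected for this unit of work.
        self.pending.pop(uow_id, None)
```

The essential property is that nothing reaches downstream consumers until its unit of work commits.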
The correlation service’s COMMIT-related processing reformats the data in the log
records into a relational format represented by one or more SQL Descriptor Areas
(SQLDAs). This reformatting ensures that all captured data changes are
consistently formatted before they are packaged for delivery. The SQLDAs are then
passed to the publication service that will handle the transformation into XML and
the delivery to WebSphere MQ.
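As a rough illustration of the final transformation step, the sketch below packages one committed row change as an XML message, loosely in the spirit of what the publication service produces before delivery to WebSphere MQ. The element and attribute names are invented for this example and are not the product's actual message schema.

```python
import xml.etree.ElementTree as ET

def to_xml(table, operation, columns):
    # Build a small XML document for one row-level change.
    # "rowChange" and "column" are illustrative names only.
    msg = ET.Element("rowChange", table=table, operation=operation)
    for name, value in columns.items():
        col = ET.SubElement(msg, "column", name=name)
        col.text = str(value)
    return ET.tostring(msg, encoding="unicode")
```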
The key to all of this processing is the metadata that is stored in the Classic Event
Publisher’s metadata catalog. You use Classic Event Publisher’s GUI administration
tool, Data Mapper, to define the metadata that tells Classic Event Publisher which
IMS or VSAM data is to be monitored for changes, as well as how to reformat the
IMS segment and VSAM record data into "logical relational table" format (that is,
SQLDAs). The metadata defined in the Data Mapper is then exported to the
z/OS® platform as USE grammar that is used as input to a metadata utility. The
metadata utility creates or updates the metadata stored in the Classic Event
Publisher’s metadata catalog.
The following steps provide an overview of how you start capturing and
publishing data with Classic Event Publisher.
1. Be sure that WebSphere MQ is running.
2. Start the correlation service, the publication service, and the publications.
For the steps required to start publishing, see Starting the process of
publishing.
3. Start the change-capture agents.
For the steps required to start change-capture agents, see the following
topics:
v Activating change capture in a CA-IDMS Central Version
v Activating change capture for an IMS database/segment
v Activating change capture for VSAM
Note: It is assumed that DB2 II Classic Event Publisher has been installed on the
mainframe and the Data Mapper is installed on a workstation.
Note: In all the jobs that follow, you need to customize the JCL for your site. This
customization includes concatenating CA-IDMS-specific libraries provided
by the vendor. Templates for these libraries are included in the JCL. You
need to uncomment the libraries and provide the appropriate high-level
qualifiers.
For more detailed information on data mapping, see the IBM DB2 Information
Integrator Data Mapper Guide for Classic Federation and Classic Event Publishing.
After generation is complete, you can view the USE grammar in Windows
Notepad or click Yes when the Data Catalog USE Generation Results window
appears. Your completed USE grammar will look similar to the following example.
The ALTER TABLE statement at the end notifies DB2 II Classic Event Publisher to
capture changes for a logical table definition.
DROP TABLE CAC.EMPLOYEE;
USE TABLE CAC.EMPLOYEE DBTYPE IDMS EMPSCHM
SUBSCHEMA IS EMPSS01 VERSION IS 100
DBNAME IS EMPDEMO
PATH IS ( EMPLOYEE )
(
/* COBOL Name EMP-ID-0415 */
EMP_ID_0415 SOURCE DEFINITION ENTRY
EMPLOYEE EMP-ID-0415
USE AS CHAR(4),
/* COBOL Name EMP-FIRST-NAME-0415 */
EMP_FIRST_NAME_0415 SOURCE DEFINITION ENTRY
EMPLOYEE EMP-FIRST-NAME-0415
USE AS CHAR(10),
/* COBOL Name EMP-LAST-NAME-0415 */
EMP_LAST_NAME_0415 SOURCE DEFINITION ENTRY
EMPLOYEE EMP-LAST-NAME-0415
USE AS CHAR(15),
/* COBOL Name EMP-STREET-0415 */
The installation of the IDMS exit is a link-edit job that needs to be done only
once, during installation. If you created a special loadlib to contain the
IDMSDBIO module with the exit activated, activating change capture in each new
Central Version requires updating the STEPLIB DD for the Central Version to
point to the special loadlib. For detailed instructions, see the section
"Relinking the CA-IDMS database I/O module IDMSDBIO."
Once IDMSDBIO has been relinked, the CA-IDMS Central Versions must be
restarted to pick up the new IDMSDBIO module.
Because the active agent running in CA-IDMS cannot notify the correlation service
at shutdown, a jobstep must be added to the CA-IDMS Central Version JCL to
notify the correlation service that CA-IDMS has terminated. The sample
SCACSAMP member CACIDTRM must be added to the end of the Central Version
JCL.
The AGENT parameter must be set correctly based on the Central Version number
associated with the CA-IDMS Central Version. For example, if the Central Version
number is 55, the parm must be specified as AGENT='IDMS_055'. When an IDMS
Central Version completes initialization, it issues CA-IDMS message DC201001,
which identifies the Central Version number for that system.
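The naming rule above (the agent name is IDMS_ followed by the three-digit, zero-padded Central Version number) can be sketched as a tiny helper. This function is hypothetical and for illustration only:

```python
def agent_parm(central_version_number):
    # Format the AGENT parm for a given CA-IDMS Central Version number,
    # zero-padded to three digits, e.g. 55 -> AGENT='IDMS_055'.
    return "AGENT='IDMS_%03d'" % central_version_number
```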
After making the necessary changes to the Central Version JCL, you can verify the
installation by starting the Central Version and looking for the operator message
CACH001I XSYNC AGENT 'IDMS_nnn' INSTALLED FOR SERVER 'nnnnnnnn'
in the Central Version JES messages. This message appears only when the first
journaled event takes place within the Central Version itself. To ensure that a
journaled event has occurred, you can update a record in the CA-IDMS Central
Version by using the data server product or any existing update application that
communicates with the Central Version.
Once the active agent is installed, starting the Central Version without a correlation
service will cause the message
XSYNC SERVER '(servername)' NOT FOUND, REPLY 'R' OR 'A'
RECOVERY/ACTIVE FOR AGENT 'agentname'
to appear on the operator console. This message indicates that database changes
are taking place and there is no correlation service available to receive the changes.
Though this message requires operator action, the CA-IDMS Central Version itself
will not be halted to wait for the reply. In most cases, the operator should reply 'R'
to this message to force the agent into recovery mode so any changes made to the
database since it was started can be processed by the recovery agent.
To verify that the termination message is working correctly, the CA-IDMS Central
Version must be running in active mode and communicating successfully with the
correlation service. Once that has been verified, stopping the CA-IDMS Central
Version results in a 0 return code from the CACIDTRM jobstep, and the correlation
service issues the message:
CACG114I SHUTDOWN RECEIVED FROM ACTIVE AGENT 'IDMS_nnn'
You must also add a jobstep to the Central Version JCL to inform the correlation
service that the Central Version has been shut down. This allows the correlation
service to be stopped without forcing the CA-IDMS agent into recovery mode.
Member CACIDTRM contains a sample of the JCL that you need to add to the
IDMS JCL.
To modify your automatic archiving procedures, you can run the recovery agent as
part of your archiving procedure. The recovery agent counts the number of full
CA-IDMS online journals. You can use the returned value to prevent journal
archiving from taking place unless there are a specified number of full
(unarchived) journals available for recovery purposes.
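The gating decision above can be sketched as follows, under the assumption that the recovery agent's returned value is the count of full online journals. The threshold and names are illustrative, not part of the product:

```python
def should_archive(full_journal_count, keep_for_recovery=2):
    # Archive only when more full (unarchived) journals exist than the
    # number we want to keep available for recovery purposes.
    return full_journal_count > keep_for_recovery
```

Your archiving procedure would test this condition (in practice, a return-code test in JCL) before running the journal archive step.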
Because this modification will reduce the number of archived journals available,
you may want to increase the number of online journal files that the CA-IDMS
Central Version uses to prevent CA-IDMS from halting due to the unavailability of
an archived journal.
You might also need to change your end-of-day procedures to make sure all full
journal files are archived.
Stacking the exit requires renaming your exit CSECT from IDMSJNL2 to
IDM2JNL2 as part of the link process. If IDM2JNL2 is resolved by the DB2 II
Classic Event Publisher exit, it will automatically call your exit whenever it
receives control from IDMS.
Note: These are only general instructions for stacking the exit. The actual steps
involved in completing this process depend on how familiar you are with the
linkage editor and whether or not your exit source can be changed and
rebuilt.
After relinking the IDMSDBIO module, you must stop and restart IDMS before the
exit is activated. Once activated, the exit remains essentially dormant until a DB2 II
Classic Event Publisher correlation service is started with IDMS tables mapped and
ALTERed for change capture.
Relinking the Presspack support module
If the tables that you are monitoring use the Presspack support module to
compress data, relink the Presspack support module CACPPK to include the IDMS
interface module so that the correlation service can decompress the data that is
stored in the Central Version journals. Sample JCL for the relink can be found in
the SCACSAMP member CACIDLPP.
The following list shows the return codes from IDMS R14.0:
00—Decompression successful
>100—Error during decompression (most likely, the wrong DCT was specified);
PRESSTO return code = return code minus 100.
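The return-code convention above can be expressed as a small interpreter. This sketch is illustrative only; the status labels are invented:

```python
def interpret_rc(rc):
    # Interpret the IDMS R14.0 decompression return codes described above:
    # 0 means success; values above 100 carry the underlying PRESSTO
    # return code offset by 100.
    if rc == 0:
        return ("OK", None)
    if rc > 100:
        return ("DECOMPRESSION_ERROR", rc - 100)   # PRESSTO return code
    return ("UNKNOWN", None)
```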
Setting up a server to access a CA-IDMS Central Version
The following JCL changes are required to access an IDMS Central Version:
1. Add the IDMS.LOADLIB to the STEPLIB concatenation.
2. Add a SYSCTL DD statement and allocate the SYSCTL file used by the Central
Version that you need to access.
To run DB2 Information Integrator Classic Event Publisher, you must create
a separate authorized copy of the CA-IDMS.LOADLIB. If you are doing
change capture on CA-IDMS records using Presspack compression, you also
must authorize the library containing DCTABLE modules and include it in
the server STEPLIB concatenation.
You can also capture updates made by CICS applications, DB2 Information
Integrator Classic Federation for z/OS, or ODBA clients using DRA.
Data capture is only supported when the batch job allocates a non-dummy
IEFRDER DD statement.
Note: In all the jobs that follow, you will need to customize the JCL as appropriate
for your site. This includes concatenating IMS-specific libraries provided by
IBM®. Templates for these libraries are included in the JCL. You will need to
uncomment them and provide the appropriate high-level qualifiers.
For more detailed information on data mapping, see the DB2 Information Integrator
Data Mapper Guide.
1. FTP the CACIMPAR, CACIMROT, and CACIMSTO members from the
SCACSAMP data set to a directory of your choice on the workstation where
Data Mapper is installed. As you FTP these members, rename them with the
following extensions:
v cacimpar.dbd
v cacimrot.fd
v cacimsto.fd
2. From the Windows® Start menu, select DB2 Information Integrator Data
Mapper.
3. From the File menu, select Open Repository and select the Sample.mdb
repository under the xadata directory.
4. From the Edit menu, select Create a new Data Catalog. The following screen
appears:
8. Select the DBD you obtained by FTP from the mainframe (cacimpar.dbd) and
click OK.
The DL/I DBD window appears.
Since this is a new Data Catalog, the list of tables will be empty.
The following information creates a logical table that includes the IMS root
segment PARTROOT as defined by the DBD.
You do not need to fill in the Name field, as it is automatically populated
from the Leaf Seg field.
a. Select CAC from the Owner drop down list.
b. Select PARTROOT from the Index Root drop down list.
c. Select PARTROOT from the Leaf Seg drop down list.
PARTROOT is referred to as the leaf segment because it is the lowest-level
segment included in this logical table.
For Classic Event Publisher you do not need to specify IMSID, PSB name
or PCB prefix information.
d. Click OK.
You are now ready to import the definitions from the CACIMROT copybook
you obtained by FTP from the SCACSAMP data set.
11. From the File menu, select Import External File and select the CACIMROT
copybook that you stored on the workstation.
12. Make sure the correct segment is selected in the Seg. Name drop-down list:
PARTROOT, because it is the segment for which you are loading the
copybook.
13. Click Import. This imports the COBOL definitions from the CACIMROT
copybook into the table CAC.PARTROOT. The columns for the table are
created:
You have completed creating the logical table mapping for the PARTROOT
segment. The following steps walk you through creating the logical table for the
STOKSTAT segment.
1. Click on the window titled "IMS Tables for Data Catalog Parts Sample for IMS"
to regain focus.
The following information creates a logical table that includes the IMS
STOKSTAT segment as defined by the DBD.
You do not need to fill in the Name field, as it is automatically populated from
the Leaf Seg field.
3. Select CAC from the Owner drop down list.
4. Select PARTROOT from the Index Root drop down list.
5. Select STOKSTAT from the Leaf Seg drop down list.
STOKSTAT is referred to as the leaf segment because it is the lowest-level segment
included in this logical table.
For Classic Event Publisher you do not need to specify IMSID, PSB name or
PCB prefix information.
6. Click OK.
You are now ready to import the definitions from the CACIMROT copybook you
obtained by FTP from the SCACSAMP data set. Follow the instructions outlined
above in steps 11, 12, and 13. When you have completed these steps, the window
should look as follows:
Make sure the correct segment is selected in the Seg. Name drop-down list:
STOKSTAT, because it is the segment for which you are loading the copybook.
Also, make sure the Append to Existing Columns check box is checked.
3. Click Import. This concatenates the COBOL definitions from the CACIMSTO
copybook into the table CAC.STOKSTAT after the CACIMROT definitions. You
have now defined a logical table which includes a root and child segment.
After generation is complete, you can view the USE grammar in Windows
Notepad or click Yes when the Data Catalog USE Generation Results dialog box
appears. Your completed USE grammar will look similar to the following
example:
DROP TABLE CAC.PARTROOT;
USE TABLE CAC.PARTROOT DBTYPE IMS
DI21PART INDEXROOT PARTROOT PARTROOT
SCHEDULEPSB DFSSAM03
(
/* COBOL Name PARTCOD */
PARTCOD SOURCE DEFINITION ENTRY PARTROOT
DATAMAP OFFSET 0 LENGTH 2 DATATYPE C
USE AS CHAR(2),
/* COBOL Name PARTNO */
PARTNO SOURCE DEFINITION ENTRY PARTROOT
DATAMAP OFFSET 2 LENGTH 15 DATATYPE C
USE AS CHAR(15),
/* COBOL Name DESCRIPT */
DESCRIPT SOURCE DEFINITION ENTRY PARTROOT
DATAMAP OFFSET 26 LENGTH 20 DATATYPE C
USE AS CHAR(20)
);
ALTER TABLE CAC.PARTROOT DATA CAPTURE CHANGES;
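The DATAMAP clauses above describe how each column is carved out of the segment data: OFFSET gives the starting byte, LENGTH the field width, and DATATYPE C marks character data. The sketch below applies that layout to a raw segment; the sample record content is invented, and EBCDIC (cp037) decoding is assumed for illustration:

```python
# Column layout taken from the USE grammar above: name -> (offset, length).
LAYOUT = {
    "PARTCOD":  (0, 2),
    "PARTNO":   (2, 15),
    "DESCRIPT": (26, 20),
}

def extract(segment_bytes):
    # Slice each column out of the raw segment at its OFFSET for LENGTH
    # bytes, decode from EBCDIC, and strip trailing blanks.
    return {col: segment_bytes[off:off + length].decode("cp037").rstrip()
            for col, (off, length) in LAYOUT.items()}
```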
Note: If the catalogs have already been allocated, you can skip this
step.
In the SCACSAMP data set, there is a member called CACCATLG. This
member contains JCL to allocate the metadata catalogs that are used by the
data server.
a. Customize the JCL to run in your environment and submit.
b. After this job completes, ensure that the data server procedure in the
PROCLIB points to the newly created catalogs using the CACCAT and
CACINDX DD statements.
One of the easiest ways to install the IMS active change-capture agent is to copy
module DFSFLGX0 from the Classic Event Publisher distribution libraries into the
IMS RESLIB. Another method is to concatenate the Classic Event Publisher load
library into your IMS batch jobs and started task procedures for the online DB/DC
or DBCTL regions.
Another modification to the IMS region JCL provides for recovery information.
When an IMS region is started without a correlation service running, the
change-capture agent running in the region records a restart point to a data set. To
enable Classic Event Publisher to record this information, an 80-byte lrecl data set
must be allocated and referenced by a CACRCV DD statement. The CACRCV DD
statement must be added to the DB/DC or DBCTL started task JCL or into IMS
batch job JCL. A unique data set name must be created for each IMS job that a
change-capture agent will be active in.
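As a sketch of the CACRCV DD statement described above, the JCL fragment below first preallocates an 80-byte lrecl data set and then references it. The data set name, block size, and space values are illustrative only; use a unique name for each IMS job:

```
//* Preallocate an 80-byte lrecl recovery data set (one per IMS job):
//ALLOC    EXEC PGM=IEFBR14
//CACRCV   DD DSN=YOURHLQ.IMSPROD.CACRCV,DISP=(NEW,CATLG),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=8000),
//            SPACE=(TRK,(1,1)),UNIT=SYSDA
//* Then reference it in the DB/DC or DBCTL started task or batch JCL:
//CACRCV   DD DSN=YOURHLQ.IMSPROD.CACRCV,DISP=SHR
```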
After making the necessary changes to the IMS region JCL, you can verify the
installation by starting the DB/DC or DBCTL region and looking for the following
operator message in the IMS region JES messages:
CACH001I EVENT PUBLISHER AGENT ’IMS_xxxx’ INSTALLED FOR SERVER ’(noname)’
After the active agent is installed, starting the DB/DC or DBCTL region without a
correlation service will cause this message to appear on the operator console:
CACH002A EVENT PUBLISHER SERVER ’(noname)’ NOT FOUND BY AGENT ’IMS_xxxx’, REPLY
’R’ OR ’A’ RECOVERY/ACTIVE
This message indicates that database changes are taking place and there is no
correlation service available to receive the changes. Though this message requires
operator action, the region itself will not be halted to wait for the reply. In most
cases, the operator should reply ’R’ to this message to force the agent into recovery
mode so any changes made to the database since it was started can be processed
by the recovery agent.
To verify that the termination message is working correctly, the DB/DC or DBCTL
region must be running in active mode and communicating successfully with the
correlation service. After that has been verified, stop the IMS region. The
correlation service should issue the message:
CACG114I SHUTDOWN RECEIVED FROM ACTIVE AGENT ’IMS_xxxx’
Again, this will only occur if the change-capture agent is in active mode.
Otherwise, all active messages (including the shutdown message) are disabled, as
recovery is necessary.
If you are implementing a large-scale deployment, then placing the IMS Logger
Exit in the IMS RESLIB is the easiest installation method. You have a large-scale
deployment when either or both of the following are true:
v You are planning to augment the majority of your IMS databases for change
capture.
v You are augmenting an IMS database for change capture that is updated by the
majority of your IMS applications.
In a large-scale deployment, you need to update each IMS batch job and DB/DC
or DBCTL subsystems’ started task JCL to include a recovery data set and
(optionally) install IMS Log File Tracking. In a small-scale implementation, the
number of IMS batch jobs and started task procedures that need to be updated is
reduced. However, if you forget to update one of your IMS applications that
updates a monitored database, these changes are lost and the correlation service
has no knowledge that this has occurred.
If you install the IMS active change-capture agent in the IMS RESLIB and are only
performing a small-scale implementation, then the correlation service still tracks all
IMS control regions that are referencing the IMS RESLIB where the IMS active
change-capture agent is installed, even though many of these IMS applications do
not update databases that are being monitored by Classic Event Publisher.
Likewise, if these IMS active change-capture agents go into recovery mode, you
have to recover these failed agents, even though no IMS changes are being
captured, making more work for you.
The SCACSAMP member CACIMLEX is a sample relink job that will create a
backup of your Logger Exit, and then relink our version of the exit with yours.
Your version of the IMS Logger Exit must be named DFSFLGX0 for the call to
succeed.
To have IMS generate IMS Data Capture records, the DBD that specifies the
information to be captured must be augmented. These DBD modifications only
affect the actual DBD definitions (stored in the DBD/ACB library) and do not
affect the physical database.
You use the EXIT= keyword to specify IMS Data Capture information. The EXIT
keyword is supported for the DBD control statement and the SEGM control
statement. Supplying an EXIT keyword on the DBD statement defines default
values for all segments in the DBD. Specifying an EXIT keyword on the SEGM
statement allows you to override the default values. This gives you great flexibility
about the types and amounts of information that is captured.
Classic Event Publisher for IMS does not use data capture exits, but
coexists with them if your site is using DPropNR or if you have
implemented your own exits. If you do not have any data capture
exits, set this parameter to *.
The recommended EXIT options for root segments are NOKEY, DATA, and NOPATH. The
recommended EXIT options for child segments are KEY, DATA, and NOPATH. Also,
NOCASCADE is recommended as an option. If possible, design the application that is
processing the changes to parent segments to handle the implied deletion of any
"child" information.
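Applying the recommendations above, an augmented DBD might look like the following sketch. The DBD name and segment names echo the earlier DI21PART example, but the ACCESS and BYTES values are illustrative, the column alignment of the assembler continuations is simplified, and * is used as the exit name because no data capture exit is installed:

```
         DBD   NAME=DI21PART,ACCESS=(HISAM,VSAM)
         SEGM  NAME=PARTROOT,PARENT=0,BYTES=50,                        X
               EXIT=(*,NOKEY,DATA,NOPATH,NOCASCADE)
         SEGM  NAME=STOKSTAT,PARENT=PARTROOT,BYTES=160,                X
               EXIT=(*,KEY,DATA,NOPATH,NOCASCADE)
```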
In addition to specifying EXIT information in the DBD, you can also supply
VERSION information on the DBD control statement. Unless you have a specific
reason to do otherwise, allow IMS to generate the default DBD version identifier,
which is the date/time the DBD was assembled.
During commit processing, when the correlation service processes an IMS Data
Capture record, it compares the version information contained in the record
against the version information in the DBD load module. If the version
information does
not match, the correlation service logs an error message and terminates. By using
the date/time stamp of DBD assembly you are sure that the DBDs that IMS is
accessing are the same ones that the correlation service is using for reference
purposes.
If you have not enabled logging for CICS, see the CICS Transaction Server for z/OS
V2.3: Installation Guide for instructions on enabling it.
If you do not yet have an application that interacts with a VSAM file through
CICS, you can use the sample COBOL application (FILEA) that comes with CICS.
See the CICS Transaction Server for z/OS V2.3: Installation Guide for information
about this sample application.
Note: It is assumed that Classic Event Publisher has been installed on the
mainframe and Data Mapper is installed on a workstation.
Sample member CACCAPPL in the SCACSAMP data set contains two sample VTAM
APPL definitions. CACCICS1 is not required for Classic Event Publisher for VSAM,
and can be removed. CACCICS2 is used by the metadata utility. The following is the
sample member:
*
* SAMPLE APPL ID DEFINITIONS FOR CICS INTERFACE
*
CACCAPPL VBUILD TYPE=APPL
CACCICS1 APPL ACBNAME=CACCICS1,
APPC=YES,
AUTOSES=1,
MODETAB=CACCMODE,
DLOGMOD=MTLU62,
AUTH=(ACQ),
EAS=100,PARSESS=YES,
SONSCIP=YES,
DMINWNL=0,
DMINWNR=1,
DSESLIM=100
CACCICS2 APPL ACBNAME=CACCICS2,
APPC=YES,
AUTOSES=1,
MODETAB=CACCMODE,
DLOGMOD=MTLU62,
AUTH=(ACQ),
EAS=1,PARSESS=YES,
SONSCIP=YES,
DMINWNL=0,
DMINWNR=1,
DSESLIM=1
Create a Logon Mode Table entry. The member CACCMODE in the SCACSAMP data set
contains the macro definitions to define it. Assemble and catalog this member in
VTAM's VTAMLIB. The following is the member's content:
CACCMODE MODETAB
MTLU62 MODEENT LOGMODE=MTLU62,
TYPE=0,
FMPROF=X’13’,
TSPROF=X’07’,
PRIPROT=X’B0’,
SECPROT=X’B0’,
COMPROT=X’D0B1’,
RUSIZES=X’8989’,
PSERVIC=X’060200000000000000000300’
MODEEND
END
Copy the load module CACCICAT from the load library to the CICS user load
library.
The file CACCDEF in the SCACSAMP data set contains a sample job that adds the
CICS transaction, program, connection, session, and file definitions required for
Classic Event Publisher for VSAM. For Classic Event Publisher for VSAM to
capture the before and after images of a file, the RECOVERY setting must be set to
ALL in the file definition for the file, and the definition must specify in
FWDRECOVLOG the journal to which the after images for forward recovery are
written.
To run the job:
1. After replacing the sample values CICSUID, CICSAPPL, and DFHJ01 with
site-specific values, run the following JCL, which defines a logstream into the
MVS™ LOGGER subsystem:
//STEP1 EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
DATA TYPE(LOGR) REPORT(YES)
DEFINE LOGSTREAM NAME(CICSUID.CICSAPPL.DFHJ01)
HLQ(xxxxxxx) MODEL(NO)
STG_DATACLAS(xxxxxxxx)
LOWOFFLOAD(0) HIGHOFFLOAD(80)
RETPD(n) AUTODELETE(YES)
DASDONLY(YES) DIAG(NO)
MAXBUFSIZE(65532)
/*
2. Update the job card for your site specifications.
3. Update the STEPLIB for the correct CICS library.
4. Update the DFHCSD DD for the correct CSD file.
5. Add the following user Journalmodel definition at the end:
DEFINE JOURNALMODEL (DFHJ01)
GROUP(CACVSAM)
DESCRIPTION (USER LOG STREAM)
JOURNALNAME(DFHJ01)
TYPE(MVS)
STREAMNAME (&USERID..&APPLID..&JNAME)
Note: The entries &USERID, &APPLID, and &JNAME can be modified or left
as they are.
6. Remove the program definition for CACCIVS, the EXV1 transaction, the EXC1
connection, and the EXS1 session.
7. If you are using SMS-managed storage for the VSAM file, run the following
job to alter the LOG and LOGSTREAMID parameters in the VSAM file.
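A minimal IDCAMS sketch of such a job follows. This is an assumption, not the manual's sample: the cluster name is invented, and the logstream name mirrors the CICSUID.CICSAPPL.DFHJ01 logstream defined earlier in this procedure:

```
//ALTERLOG EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  ALTER YOURHLQ.EMPLOYEE.KSDS -
        LOG(ALL) -
        LOGSTREAMID(CICSUID.CICSAPPL.DFHJ01)
/*
```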
Note: In all the jobs that follow, you will need to customize the JCL as appropriate
for your site.
For more detailed information on data mapping, see the DB2 Information Integrator
Data Mapper Guide.
To map the sample VSAM copybook:
1. FTP the CACEMPFD member from the SCACSAMP data set to the workstation
where the Data Mapper is installed. Name the file on the workstation
cacemp.fd.
2. Start DB2 Information Integrator Data Mapper.
3. From the File menu, select Open Repository.
4. Select the Sample.mdb repository under the xadata directory.
5. From the Edit menu, select Create a new Data Catalog.
14. Close the Columns for VSAM Table EMPLCICS dialog box.
15. Close the VSAM Tables for Data Catalog Employee Sample - CICS VSAM
dialog box.
At this point, you should be back at the List of Data Catalogs dialog box
for Sample.mdb.
16. Ensure the Data Catalog Employee Sample -- VSAM is highlighted and select
Generate USE Statements from the File menu.
17. Select a file name for the generated statements to be stored on the
workstation, such as empcics.use, and click OK.
After generation is complete, you can view the metadata grammar (USE grammar)
in Windows Notepad, or click Yes when the Data Catalog USE Generation
Results prompt appears. The following is an example of what your completed USE
grammar looks like.
Note: Make sure that the CACCAT and CACINDX DD statements are
uncommented in the JCL.
3. Load the catalogs.
In the SCACSAMP data set, there is a member called CACMETAU. This member
contains JCL to load the metadata catalogs using the metadata grammar as
input.
4. Customize the JCL to run in your environment and submit it.
a. Make sure that the GRAMMAR= symbolic points to the appropriate metadata
grammar member (GRAMMAR=EMPLmetadata).
b. Ensure the CACCAT and CACINDX DDs refer to the catalogs created using the
CACCATLG JCL.
After this job has been run successfully, the catalogs have been loaded with the
logical tables created in the Data Mapper.
A return code of 4 is expected: the DROP TABLE fails because the table does not exist
yet.
The following is a sample SERVICE INFO ENTRY for the VSAM change-capture
agent:
SERVICE INFO ENTRY = CACECA1V VSAMECA 2 1 1 1 4 5M 5S \
APPLID=CICSUID STARTUP CICSUID.CICSAPPL.DFHLOG \
CICSUID.CICSAPPL.DFHJ01 CICSUID.CICSAPPL.DFHJ02 CICSUID.CICSVR.DFHLGLOG
To create a VSAM change-capture agent:
1. Uncomment the SIE for the change-capture agent (VSAMECA).
2. Specify the APPLID, the STARTUP time, and the CICS system, user, and
log-of-logs log streams in the SIE.
The SIE parameter consists of ten sub-parameters, each delimited by at least one
space. The format of the first nine of these subfields is consistent across all
services. The format for the tenth subfield is service-dependent.
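The ten-subfield layout can be illustrated with a short parsing sketch. This helper is hypothetical and not part of the product; only the positional rule (nine common subfields followed by one service-specific subfield, with a backslash continuing the value on the next line) comes from the description above.

```python
def parse_sie(entry: str):
    """Split a SERVICE INFO ENTRY value into its ten subfields.

    The first nine subfields are positional and whitespace-delimited;
    everything after the ninth is the service-specific tenth subfield.
    A trailing backslash continues the value on the next line.
    """
    # Join continuation lines into a single string.
    value = " ".join(line.rstrip("\\").strip() for line in entry.splitlines())
    parts = value.split(None, 9)        # at most 10 pieces; runs of blanks collapse
    if len(parts) != 10:
        raise ValueError("expected ten subfields, got %d" % len(parts))
    return parts[:9], parts[9]          # (common subfields, service-specific data)
```

Applied to the sample VSAM change-capture agent SIE above, the first subfield is the load module CACECA1V, the second is the service name VSAMECA, and the tenth subfield carries the APPLID, STARTUP value, and log stream names.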
The following table shows sample SIEs for the correlation service and the
publication service.
Table 3. Sample Service Info Entries for the correlation service and the publication service

Correlation service:
SERVICE INFO ENTRY = CACECA2 XM1/XSYN/XSYN/16 2 1 1 16 4 10MS 30S \
TCP/111.111.111.111/SOCKET#,CSA=1K,CSARLSE=3,INT=1,WARMSTART
The order of SIEs for the correlation service and the publication service matters.
The entry for the correlation service must come before the entry for the publication
service. This order is particularly important on shutdown, when services are
stopped in LIFO (reverse) order. The publication service must stop first so that it
can send the proper quiesce command to the correlation service. If the publication
service does not stop first, the correlation service might go into recovery mode on
an otherwise normal shutdown. For this reason, if the publication service is
configured to start before its corresponding correlation service, the publication
service will fail on startup when it fails to detect that the correlation service exists.
Important: You must define a unique data space/queue name for each
correlation service that will be running at any one time.
Examples:
TCP/192.123.456.11/5555
TCP/OS390/5555
queue manager is the name of the local queue manager that manages the
message queue. queue name is the name of the local message queue to use
as the restart queue. If your correlation service is running remotely from
your publication service, you can follow the name with a comma-delimited
communication string to describe how the publication service is to
communicate with the correlation service.
Additional service information for correlation services that you can specify:
CSA=nK -- The number of kilobytes that each server is to allocate for CSA space.
In most cases, 1K should be enough to manage change capture on at least 50
tables.
NAME -- Names the correlation service. If you leave this option out, the correlation
service is started unnamed. See the DB2 Information Integrator Planning Guide
for Classic Event Publishing for more information about named servers.
If you set the SIE to perform a coldstart, make sure that you reset the SIE after you
cold start the server so that you do not inadvertently cold start the server in the
future.
Use the MAX TRANSPORT MESSAGE SIZE parameter in your configuration file
to specify in bytes the largest size that a message can be before the publication
service writes it to a message queue. For example, consider the following entry in
a configuration file:
When the publication service constructs a message for a large transaction in the
message pool and finds that the size of the message reaches 256 KB, it writes the
current message to the appropriate message queue and starts building another
message to contain the subsequent DML of the transaction. If this next message
reaches 256 KB before the end of the transaction, the publication service writes it
to the message queue and begins constructing another message. This process
continues until the publication service reaches the end of the transaction.
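The message-construction behavior described above amounts to a simple chunking loop. The following sketch is a hypothetical illustration, not the product's implementation; it assumes each DML change is available as a byte string and flushes the current message when adding the next change would push it past the configured maximum.

```python
def chunk_transaction(changes, max_message_size):
    """Group a transaction's DML changes into messages of at most
    max_message_size bytes, flushing and starting a new message
    whenever the cap would be exceeded (illustrative sketch only)."""
    messages, current, size = [], [], 0
    for change in changes:
        if current and size + len(change) > max_message_size:
            messages.append(b"".join(current))   # write full message to the queue
            current, size = [], 0                # start building the next message
        current.append(change)
        size += len(change)
    if current:
        messages.append(b"".join(current))       # final message at end of transaction
    return messages
```

With a 250-byte cap, three 100-byte changes would yield two messages: one holding the first two changes and one holding the third.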
If you use the same combination of data space and queue name for more than one
correlation service definition, then change-capture agents will send captured
changes to the least-busy server, which might not be the correct server. Names are
intentionally shared in a DB2 Information Integrator Classic Federation enterprise
server environment for load balancing and because serialization is not an issue.
But in a Classic Event Publisher environment, serialization is essential.
Unless you have a specific reason for sharing data spaces between multiple
correlation services, use a unique data space name for each server. If you choose to
share data spaces between servers by using a common data space name, make sure
that the queue name is unique for each server.
If you configure more than one correlation service in a single data space, then the
first correlation service that is started will set the size of the data space.
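The uniqueness requirement above can be checked mechanically when reviewing a configuration. The helper below is purely illustrative (the product exposes no such API) and assumes the data space and queue names have already been extracted from the SIEs as pairs.

```python
from collections import Counter

def check_dataspace_queues(definitions):
    """Verify that no two correlation service definitions share the same
    (data space, queue name) pair, per the uniqueness requirement.
    Returns the list of duplicated pairs (empty when the configuration
    is valid)."""
    counts = Counter(definitions)
    return [pair for pair, n in counts.items() if n > 1]
```

Sharing a data space name is acceptable only when each server's queue name differs, so only exact (data space, queue) duplicates are flagged.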
Creating publications
After you configure your correlation service and your publication service, you
must configure publications to indicate where changes to mapped tables will be
published and how. You do so in the same configuration file in which you
configured your correlation service and your publication service. (If your
correlation service and publication service are remote from each other and
therefore use two different configuration files, configure your publications in the
publication service’s configuration file.)
IMS example
VSAM example
If the correlation service and publication service are configured in the same file,
you either issue a console command to start the correlation service JCL procedure,
or submit a batch job. The console command to start the correlation service is:
S procname
where procname is the 1-8 character proclib member name to be started. When you
issue commands from the SDSF product, prefix all operator commands with the
forward slash ( / ) character.
If the correlation service and publication service are configured in separate files,
you can issue a console command to start the correlation service JCL procedure
and another console command to start the publication service JCL procedure. The
console command is described above. You can also choose to submit a batch job for
each.
Because IDMSJNL2 is a general-purpose exit, you might be using your own
version of the exit and might want to incorporate your exit along with the DB2
Information Integrator Classic Event Publisher version of the exit. For these cases,
DB2 Information Integrator Classic Event Publisher supports stacking the exits.
Although IMS performs its recovery processing based on the normal contents of
the IMS log files, Classic Event Publisher does not use the “raw” log records that
IMS uses to capture changes. Classic Event Publisher does use the same log
records, along with some additional IMS sync-point log records, to track the
state of an in-flight Unit of Recovery (UOR), but it does not use the type 50
(undo/redo) and other low-level change notification records that IMS uses for
recovery purposes.
Instead, Classic Event Publisher uses type 99 Data Capture log records to identify
changes to a monitored IMS database because these records contain more
information and are easier to deal with than the “raw” recovery records used by
IMS.
Data Capture log records are generated at the database or segment level and
require augmentation of your DBD definitions. This augmentation does not affect
the physical database definition; it adds additional information to the DBD and
ACB load module control blocks.
You can then put the updated DBD and PSB members into your production ACB
libraries. If you perform this augmentation using the IMS Online Change facility,
either Classic Event Publisher will go into recovery mode or you will need to
recycle the correlation service to pick up changes to an existing monitored DBD or
to add a new DBD to be monitored. As part of Classic Event Publisher installation
and customization, you update the correlation service’s JCL and add a DBDLIB
statement that references your DBD library or a copy of the DBD load modules
that are being monitored for changes.
This is followed by a message that indicates the time that processing began:
CACH106I START PROCESSING AT mm/dd/yyyy hh:mm:ss
These failures might not be recoverable if they are driven by change data that will
produce the same error if recovery is attempted. For example, if the data captured
and forwarded to the publication service caused the publication service to reject
the message, then the publication service will reject that message every time that
the message is resent.
The correlation service is responsible for detecting failure and returning messages
stating that the system entered recovery mode. At this point, the recovery
change-capture agent should start or be started. Depending upon the
configuration, recovery might start automatically, or might require you to run a job
or otherwise start the recovery agent manually.
When the recovery of data catches up with the active server, with some databases
you need to restart the database to move back into active mode with no changes
lost. However, often, it is not practical to stop the monitored database to complete
the recovery process. For these situations, Classic Event Publisher provides
You can use the THROTTLE parameter to keep the recovery change-capture agent
from overtaking the correlation service. For more information about this parameter,
see the chapter of the DB2 Information Integrator Operations Guide for Classic Event
Publishing for your database.
Unlike the active agent, which captures changes as they are written to the journal,
the recovery agent can process historical changes that were lost by the active
agent due to the lack of an active correlation service or some other system failure.
This agent can also use throttling to control the rate at which changes are sent to
the correlation service so that other active and recovery agents can continue to
operate normally without risk of overrunning the message queue.
The z/OS PARM parameter controls the processing of the recovery agent. The format of
the parameter is:
PARM=’CV,Optkeyword=vvvv,...’
’LOCAL,Optkeyword=vvvv,...’
’ARCHIVE,Optkeyword=vvvv,...’
’REPORT,Optkeyword=vvvv,...’
’MONITOR,Optkeyword=vvvv,...’
’COUNT’
Where:
v CV defines recovery from one or more CA-IDMS disk journal files written by a
CA-IDMS Central Version. In CV mode, the Central Version can either be
running or stopped.
v LOCAL defines recovery from a single tape or disk journal file written by a local
mode CA-IDMS application.
v ARCHIVE defines recovery from an archived Central Version journal file. Do
not use the AGENT optional keyword with PARM=’ARCHIVE’.
v REPORT requests a report of the current recovery sequence and timestamp for
the requested agent. The AGENT keyword is required with the REPORT option.
v MONITOR indicates whether the recovery agent performs a single check of the
recovery state or runs as a continuous monitor for automatic recovery.
v COUNT counts the number of full recovery logs and skips automatic
archiving if the minimum number of full journals is not available.
v Optkeyword=vvvv is one or more optional keyword parameters that can be
included in the parameter string to control recovery processing.
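The difference between a single recovery-state check and a continuous monitor can be pictured with a small polling sketch. All names and the polling interval here are hypothetical; only the single-check-versus-continuous-loop distinction comes from the MONITOR description above.

```python
import time

def run_recovery_check(needs_recovery, recover_once, continuous=False,
                       poll_seconds=30, max_polls=None):
    """Perform one recovery-state check, or keep polling when running as
    a continuous monitor. needs_recovery() and recover_once() are
    hypothetical callbacks standing in for the agent's real logic.
    Returns the number of checks performed."""
    polls = 0
    while True:
        if needs_recovery():
            recover_once()               # drive recovery for this check
        polls += 1
        if not continuous:
            return polls                 # single check: done after one pass
        if max_polls is not None and polls >= max_polls:
            return polls                 # bound the loop for illustration
        time.sleep(poll_seconds)
```

In continuous mode the loop keeps checking at the polling interval; in single-check mode it returns after one pass, which mirrors the two MONITOR behaviors.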
The following are the optional keywords for the CV, LOCAL, and REPORT parameters.
v ACTIVATE={Y | N}. Specifies whether or not to enable the active agent on
successful completion of processing. Specifying ‘Y’ informs the correlation
Parameter example
An example parameter string to recover from Central Version disk files in
continuous mode is:
EXEC PGM=CACEC1DR,PARM=’CV,RESTARTWAIT=2S’
When allocating multiple files for Central Version mode, the order of data set
allocations to JnJRNL must match the processing order as defined in the CREATE
DISK JOURNAL statements in the DMCL.
Normally, IMS log files have a certain lifetime associated with them. Generally,
IMS log files are defined as generation data sets with a fixed number of retained
generations. In these situations, supply a MAXLOGS value that matches the
number of generations being retained.
If you specify that dual logging is used for this IMS active change-capture
agent, the IMS Log File Tracking Utility automatically doubles the MAXLOGS
value that you supply, because it assumes that the same number of secondary
IMS log files is retained.
PARM=’DUALLOGS=N,ECHO=N,MAXFILES=5’
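The doubling rule for dual logging reduces to a one-line computation. This helper is purely illustrative; the IMS Log File Tracking Utility applies the rule internally.

```python
def effective_maxlogs(maxlogs: int, duallogs: bool) -> int:
    """Number of log files tracked: with dual logging, the supplied
    MAXLOGS value is doubled because the same number of secondary
    IMS log files is assumed to be retained."""
    return maxlogs * 2 if duallogs else maxlogs
```

For example, with MAXFILES=5 and dual logging enabled, ten log files (five primary and five secondary) would be tracked.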
For more information about recovery mode for when you are capturing data from
IMS databases, see IBM DB2 Information Integrator Planning Guide for Classic Event
Publishing.
A starting point is written in the correlation service’s restart data store, so the
recovery change-capture agent can locate a starting position in the log stream files.
After switching to recovery mode, the change-capture agent queries the correlation
service for the restart point and starts reading the log streams at that point. After
the end of the log streams is reached, the change-capture agent sets itself to active
mode.
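The restart flow described above (query the restart point, replay the log streams from that point, then return to active mode) can be sketched as follows. Both callbacks are hypothetical stand-ins for the real correlation service and log stream interfaces.

```python
def run_recovery_agent(query_restart_point, read_log_from):
    """Sketch of the recovery flow: ask the correlation service for the
    restart point, replay log records from there, and switch to active
    mode once the end of the log streams is reached."""
    restart = query_restart_point()                  # from the restart data store
    replayed = list(read_log_from(restart))          # read to end of log streams
    mode = "active"                                  # end reached: back to active
    return mode, replayed
```

Here the restart point acts as a cursor into the log streams, so only records at or after that position are replayed.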
Set the retention period and AUTODELETE specifications of the system, user, and
log-of-logs log streams so that the data remains in the log stream for the longest
period of recovery that you want. If the retention period and AUTODELETE
specifications are met, CICS purges completed units of work on the system log
when CICS terminates.
To access the latest DB2 Information Integrator product documentation, from the
DB2 Information Integrator Support Web site, click on the Product Information
link, as shown in Figure 1 on page 56.
You can access the latest DB2 Information Integrator documentation, in all
supported languages, from the Product Information link:
v DB2 Information Integrator product documentation in PDF files
v Fix pack product documentation, including release notes
v Instructions for downloading and installing the DB2 Information Center for
Linux, UNIX, and Windows
v Links to the DB2 Information Center online
Scroll through the list to find the product documentation for the version of DB2
Information Integrator that you are using.
You can also view and print the DB2 Information Integrator PDF books from the
DB2 PDF Documentation CD.
To view the installation requirements and release notes that are on the product CD:
v On Windows operating systems, enter:
x:\doc\%L
x is the Windows CD drive letter and %L is the locale of the documentation that
you want to use, for example, en_US.
v On UNIX operating systems, enter:
/cdrom/doc/%L/
cdrom refers to the UNIX mount point of the CD and %L is the locale of the
documentation that you want to use, for example, en_US.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you
any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country/region or send inquiries, in
writing, to:
IBM World Trade Asia Corporation
Licensing
2-31 Roppongi 3-chome, Minato-ku
Tokyo 106-0032, Japan
The following paragraph does not apply to the United Kingdom or any other
country/region where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions; therefore, this statement may not apply
to you.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product, and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement, or any equivalent agreement
between us.
All statements regarding IBM’s future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious, and any similarity to the names and addresses used by an actual
business enterprise is entirely coincidental.
COPYRIGHT LICENSE:
© (your company name) (year). Portions of this code are derived from IBM Corp.
Sample Programs. © Copyright IBM Corp. _enter the year or years_. All rights
reserved.
Trademarks
The following terms are trademarks of International Business Machines
Corporation in the United States, other countries, or both:
IBM
CICS
DB2
IMS
Language Environment
MVS
VTAM
WebSphere
z/OS
Java and all Java-based trademarks and logos are trademarks or registered
trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of
Microsoft Corporation in the United States, other countries, or both.
Intel, Intel Inside (logos), MMX and Pentium are trademarks of Intel Corporation
in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
To learn about available service options, call one of the following numbers:
v In the United States: 1-888-426-4343
v In Canada: 1-800-465-9600
To locate an IBM office in your country or region, see the IBM Directory of
Worldwide Contacts on the Web at www.ibm.com/planetwide.
Product information
Information about DB2 Information Integrator is available by telephone or on the
Web.
If you live in the United States, you can call one of the following numbers:
v To order products or to obtain general information: 1-800-IBM-CALL
(1-800-426-2255)
v To order publications: 1-800-879-2755
Printed in USA
GC18-9186-02