IBM WebSphere DataStage Pack for SAP R/3
Version 6.0.1
Integration Guide for IBM WebSphere DataStage Pack for SAP R/3
for WebSphere DataStage, Version 8.1
SC18-9954-02
Note
Before using this information and the product that it supports, read the information in “Notices” on page 99.
Terminology
The terms used in the IBM WebSphere DataStage Pack for SAP R/3 to describe all
the stages are defined below:
Term Description
ABAP Advanced Business Application Programming. The language developed by
SAP for application development purposes. All R/3 applications are
written in ABAP.
BAPI Business Application Programming Interface. A precisely defined interface
providing access to processes and data in business application systems.
BAPIs are defined as API methods of SAP objects. These objects and their
methods are stored in the Business Objects Repository.
BOR Business Object Repository, which is the object-oriented Repository in the
SAP BW system. It contains, among other objects, SAP Business Objects
and their methods.
Business Object
The representation of a business entity, such as an employee or a sales
order, in the SAP Enterprise system.
Control record
A special administrative record within an IDoc, one for each IDoc. The
control record contains a standard set of fields that describe the IDoc as a
whole.
ERP Enterprise Resource Planning business management software.
IDoc Intermediate document. An IDoc is a report, that is, a hierarchical package of
related records, generated by R/3 Enterprise in an SAP proprietary format.
An IDoc, whose transmission is initiated by the source database, exchanges
data between applications.
IDoc type
Named metadata describing the structure of an IDoc that is shared across
databases. It consists of a hierarchy of segment record types.
MATMAS01
An example of an IDoc type.
The ABAP Extract stage lets your company maximize its existing investments in
large ERP systems by complementing SAP standard software and consulting
services. It does this by automatically generating the ABAP program to extract data
from the R/3 Repository. The ABAP Extract stage lets users of all levels efficiently
build an extraction object, then generate an extraction program written in the SAP
proprietary ABAP programming language. Or, you can use an SQL query to
generate the ABAP program.
See the Terminology section for a list of the terms used in this section.
Functionality
The ABAP Extract stage has the following functionality and enhancements:
v Lets you choose and define SAP connections using the GUI. You can select an
IBM WebSphere DataStage connection to SAP instead of entering the information
into the stage interface.
v Uses the ABAP Program page for all ABAP-related functionality.
v Lets you view the development status of the ABAP program.
v Lets you view referenced and referring tables.
v Lets you use the SQL Query Builder or an extraction object to generate the
ABAP program.
v Lets you overwrite the ABAP program as you save it to R/3.
v Uses the Data Transfer Method page for the functionality of the data transfer
methods.
v Uses the functionality of the former Access page for RFC-connection failures as
you exit the stage editor.
v Lets you synchronize and validate R/3 columns and WebSphere DataStage
columns.
v Supports NLS (National Language Support). For information, see WebSphere
DataStage Server Job Developer Guide.
v Supports the SAP R/3 remote function call (RFC) data transfer method on
Unicode systems.
The ABAP Extract stage automatically generates the ABAP program that extracts
data from the SAP R/3 repository, as follows:
1. Link 1. The GUI client component logs on to the SAP R/3 Enterprise
application server and retrieves metadata information.
2. Link 2. The GUI client component reads and writes job properties (for example,
extraction object, ABAP program, logon information, and so on) between the
GUI client component and the WebSphere DataStage repository.
3. Link 3. The runtime server component reads job properties from the
WebSphere DataStage repository.
4. Link 4. The runtime server component makes a remote function call (RFC) to
run the ABAP program and processes the generated data set.
Installation prerequisites
In addition to the software requirements for the Pack, the following are required
for installing the ABAP Extract stage:
v FTP server on the machine where the temporary extraction files from the R/3
Enterprise are stored
v Database account with read access to the R/3 Enterprise data dictionary tables
The RFC source file (ZDSRFC.TXT for non-Unicode and ZDSRFCU.txt for Unicode
SAP systems) required to install the RFC is on the same CD as the GUI client
component. The text file is in the RFC directory.
The R/3 Administrator must install these two components by using the Transport
Management System (Transaction STMS), which is the recommended method
described in the next section. You can also create them manually by following
step-by-step procedures (see the following instructions and those in Creating an
authorization profile manually).
You can then import them within R/3 Enterprise by doing one of the following:
v Using transaction STMS to access the Transport Management System
v Using the tp import transport control program at the operating system level
The cofiles and data files must be copied from the client installation CD to the
appropriate directory on the R/3 Enterprise machine before importing them. To
copy these files:
1. Log on to the R/3 Enterprise machine as <SID>adm where <SID> is the system
ID, for example, C46.
2. For non-Unicode SAP systems, copy data files R900323.P46 (for RFC) and
R900337.P46 (for authorization profile) and cofiles K900323.P46 and
K900337.P46 (for Unicode, copy data files R900018.U47 and R900028.U47 and
cofiles K900018.U47 and K900028.U47) from \trans\cofiles and \trans\data
directories on the client CD to /usr/sap/trans/cofiles and /usr/sap/trans/data
directories respectively on the R/3 Enterprise system by doing one of the
following, depending on your network configuration:
a. Accessible file system. If the file system for your R/3 Enterprise system is
visible (for example, by NFS) on your WebSphere DataStage client system,
you can copy the K9nnnnn.xxx cofiles using a DOS command or Windows®
Explorer. Copy them from trans\cofiles on the client CD to
/usr/sap/trans/cofiles/ on the R/3 Enterprise system.
b. Unmapped network drive. If the R/3 Enterprise system is accessible by the
network, but it is not mapped as a network drive on the WebSphere
DataStage client system, you can use FTP in binary mode to transfer the
files using the directories specified in step a.
c. Inaccessible file system. If the R/3 Enterprise system is not accessible by
the network from the WebSphere DataStage client system, you can mount
the WebSphere DataStage client installation CD directly on your R/3
Enterprise system and copy the files specified in step a. Because of
differences in CD file system formats on UNIX® platforms, the path names
may vary as you mount the CD as follows:
LINUX Platforms. The file names are in lower case. Copy the cofiles to their
destination in upper case.
HP-UX Platform. The file names and directory names are in upper case, and
the file names have ';1' appended to them. Copy the cofiles to their
destination without this ';1' appendage.
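For option b above, a transfer might look like the following FTP session. This is a sketch only: the host name r3host is a placeholder for your R/3 Enterprise host, and the file names are the non-Unicode set from step 2; log on as the <SID>adm user.

```shell
# Sketch of an FTP transfer in binary mode (option b). r3host is a
# placeholder host name; commands after "ftp" are typed at the ftp prompt.
ftp r3host
binary
cd /usr/sap/trans/cofiles
put K900323.P46
put K900337.P46
cd /usr/sap/trans/data
put R900323.P46
put R900337.P46
quit
```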
To import the transport request, the R/3 Enterprise Administrator can use either
the Transport Management System (transaction STMS) or the tp transport control
program at the operating system level.
If you are importing at the operating system level, you can do the following
(instead of the previous steps 3-7):
1. Log on to the target system and client as the R/3 Enterprise Administrator.
2. Create a development class named ZETL that is assigned to a transport layer
according to the system landscape for your company. You can use Transaction
SE80 to create this class.
3. Add the transport request to the buffer using the following syntax:
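For example, assuming the non-Unicode transport files named earlier: SAP request numbers follow the convention <source SID>K<number>, so cofile K900323.P46 corresponds to request P46K900323. The commands below are a sketch; replace the request number, target <SID>, and client number with your own values:

```shell
# Sketch only: add the transport request to the buffer, then import it.
# P46K900323 is derived from cofile K900323.P46; <SID> is the target system.
tp addtobuffer P46K900323 <SID>
tp import P46K900323 <SID> client=<client-number>
```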
Creating a job
To create and build your job:
1. Create a WebSphere DataStage job using the IBM WebSphere DataStage and
QualityStage Designer Client.
2. Create a new ABAP Extract stage.
3. Define the properties for the ABAP Extract stage.
4. Define the R/3 Enterprise connection information.
5. Select the data transfer method.
6. Specify the ABAP program parameters.
7. Define the data to be extracted.
8. Compile the job.
You can change the default character set map defined for the project or the job by
selecting a map name from the list. This tab also has the following components:
v Show all maps. Lists all the maps supplied with WebSphere DataStage. Maps
cannot be used unless they are loaded using the WebSphere DataStage
Administrator Client.
v Loaded maps only. Displays the maps that are loaded and ready to use.
v Use Job Parameter. Lets you specify a character set map as a parameter to the
job containing the stage. If the parameter has not yet been defined, you are
prompted to define it from the Job Properties dialog.
v Allow per-column mapping. Lets you specify character set maps for individual
columns within the table definition. If per-column mapping is selected, an extra
property, NLS Map, appears in the grid in the Columns tab.
For more information about NLS or job parameters, see WebSphere DataStage
documentation.
Continue by creating an ABAP extraction stage that uses the RFC data transfer
method.
b. On the Output page, use the ABAP Program tab to load the program to
SAP R/3, as described in “Output ABAP program page” on page 14.
c. On the Output page, use the ABAP Program tab to validate the
program, as described in “Output ABAP program page” on page 14.
d. On the Output page, use the General tab to validate the stage, as described
in “Validating runtime jobs” on page 27.
4. Use WebSphere DataStage to save, compile, and run the job.
For best performance, define two MPP system nodes for each processor on the SAP
R/3 system.
See the description of configuring the parallel engine in IBM Information Server
Planning, Installation, and Configuration Guide for details.
7. Test the remote shell configuration by using the rsh command. For example:
rsh dsserv1 uptime
rsh dsserv2 uptime
8. Ensure that the file /etc/hosts has an entry for each node.
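For example, the /etc/hosts entries for the two nodes used in this section might look like this; the addresses are placeholders, not values from any particular installation:

```shell
# Example /etc/hosts entries; replace the addresses with your own.
192.0.2.11   dsserv1
192.0.2.12   dsserv2
```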
9. Review the description of the parallel engine configuration file in IBM
WebSphere DataStage and QualityStage Parallel Job Developer Guide.
10. Enable ABAP parallel job execution on a non-conductor node in an MPP
system by creating an empty file named slave in the project directory on the
non-conductor node. For example, if the project is named sapr3 and the
conductor node is dsserv1, create the empty slave file on node dsserv2 as
follows:
/opt/IBM/InformationServer/Server/Projects/sapr3/slave
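For example, the administrator can create the marker file on dsserv2 with a single command; the project path is the default used in this guide, so adjust it for your installation:

```shell
# Run on the non-conductor node dsserv2; the file must exist but stays empty.
touch /opt/IBM/InformationServer/Server/Projects/sapr3/slave
```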
See IBM Information Server Planning, Installation, and Configuration Guide for
information about configuring MPP on Microsoft® Windows systems.
This page works similarly to the General tab of the BW Load Input page for the
BW Load stage. You can use the WebSphere DataStage server to access the list of
connections to SAP, which is stored in a file.
By default, a newly created stage uses the connection most recently selected by the
current user for other stages of this type. Since the number of R/3 systems
accessed by a given WebSphere DataStage installation is likely to be small, the
default connection is generally correct.
Password. A password for the specified user name.
Client Number. The number of the client system used to connect to SAP.
Language. The language used to connect to SAP.
v Description. Enter text to describe the purpose of the stage.
v Validate Stage (formerly Job Validation). Click to open the Validate Stage
dialog. The validation process begins automatically when the dialog opens.
Extraction Object, Review Code, and Launch SAP GUI (formerly on the General
page) are now on the ABAP Program page.
The selected connection provides all needed connection and default logon details
to communicate with the corresponding R/3 Enterprise system. You can add new
entries here and modify or delete existing entries from the list of connections.
Although SAP logon details for the ABAP Extract stage are not stored on the
server, the stage can share SAP connections created by other R/3 stages you create,
such as the BAPI, IDoc Load, and IDoc Extract stages.
New, Properties, and Remove are administrative connection operations. They let
you manage the list of connections that is maintained on the WebSphere DataStage
server machine from this dialog:
v New. Click to open the Connection Properties dialog, which lets you define the
properties for the new connection. It is added to the list of connections. Here
you can specify SAP connection details and default logon details for the new
connection. The Connection and Logon Details page opens by default (see the
following section).
v Properties. Click to open the Connection and Logon Details page of the
Connection Properties dialog, showing the properties of the selected connection.
This is the same dialog that is opened when you click New, but in this context
the connection name is read-only (see Defining connection properties).
v Remove. Click to delete the selected connection after your confirmation.
Use the dialog to create a new SAP connection, or to view the properties of the
currently selected connection, depending on whether you open it using New or
Properties on the Select WebSphere DataStage Connection to SAP dialog
respectively.
Load balancing: Select Use Load balancing on the Connection and Logon Details
page of the Connection Properties dialog (see Connection and Logon details page)
to balance loads for SQL queries when connecting to the SAP R/3 system. Load
balancing at design time works as follows:
v R/3 lets you make logon connections through a message server. The message
server uses an algorithm that considers server workload and availability to
choose an appropriate application server to handle the logon.
v When connections are configured, you can choose a load balancing connection to
a message server rather than a specific R/3 instance to retrieve and validate
IDoc types and metadata, for example.
Use the Data Transfer tab on the Output page to transfer data. It is similar to the
Runtime page for the previous version of the ABAP Extract stage. It shows only
controls relevant for the selected transfer method.
This page displays the selected Data Transfer Method, which controls how the
data set is processed. Choose one of the following data transfer methods:
v CPI-C Logon. Default. Transfers data from the SAP application server directly to
the WebSphere DataStage server, without using FTP or other utilities.
The Local File option has no effect, that is, no flat files are generated on the
SAP application server. The SAP user name must be the CPI-C user type.
v FTP. Uses the FTP service in the SAP application server to get the data set from
the SAP application. (The buttons for ABAP program loading in the former
version are now on the ABAP Program page.)
Choosing FTP enables the FTP Logon fields, which are required. In Path of
Remote File, enter the path name of the data file on the remote system to use
at run time. This path name is used to access the data file. The data file
contains the data extracted from R/3 and is transferred back to the WebSphere
DataStage server system.
Alias for Remote Path. Optional. Use this field when you cannot access the
data file by using the path name in the Path of Remote File field, for
example, if the system administrator has restricted access to a directory in
the path. In this case, use a relative path name to access the file. If the
WebSphere DataStage server cannot transfer the data file using the path
specified in the Path of Remote File field, it uses the path in the Alias for
Remote Path field to access the file.
Alternate Host Name. Specify an alternate host name for FTP to use as an alias
to access files on the application server’s machine (where the ABAP program
runs). This is necessary if, for security reasons, FTP cannot access files on the
application server’s machine using the application server name.
Run the program as a background process. Specifies whether to run ABAP
programs in the background (the default is cleared). For details, see Running
programs in the background.
Use SAP Logon Details. If selected for a CPI-C data transfer method, User
Name and Password are read-only and display the corresponding values from
the General page.
v Local File. Use this option if the file containing the data set cannot be accessed
using FTP. The resultant data set file can be placed on the WebSphere DataStage
server machine by the Administrator, and WebSphere DataStage can then access
it from there using the Local File option.
Choosing this option enables the Local Data File field and Browse. Type a
name for the data source file, or click Browse to search for a name. R/3
Enterprise needs write access to this file.
v RFC. Use this option to use the SAP R/3 remote function call (RFC) data
transfer method. Transactional RFC (tRFC) data transfer uses asynchronous
communication and supports parallel data processing. Queued RFC (qRFC) uses
serialized communication and supports sequential data processing.
Gateway Host
Specifies the host name of the system where the SAP gateway service is
running.
Gateway Service
Specifies the name of the SAP R/3 server gateway to be used by the
RFC destination. The standard Gateway Service is sapgw plus the system
number; for example, sapgw00. To display the gateway services, use the
SAP R/3 gateway monitor. See your SAP R/3 documentation.
Program ID
Specifies the program ID of the server program registered with SAP R/3.
The recommended program ID is the fully qualified job name.
RFC Destination
Specifies the name of the RFC destination created with SAP R/3
transaction SM59.
Use qRFC
Specifies serialized data processing using queued RFC (qRFC).
– Select the check box to use sequential qRFC processing.
– Clear the check box to use asynchronous transactional RFC (tRFC)
processing.
After changing this option, regenerate the ABAP extraction program and
recompile the job to use the new data transfer method.
qRFC queue name
Specifies the name of the RFC queue. The ABAP stage creates a queue
with this name automatically when you run the ABAP extraction
program. Data sent from SAP R/3 is collected in this queue in the order
in which it is sent. Use SAP R/3 transaction SMQS and click QRFC
The ABAP Program page includes program generation (formerly done using the
Extraction Object dialog on the Access General page), editing, saving, and
validation.
This page has the following components:
v ABAP Program ID. Enter the program ID as you sequentially define required
properties on the tabs on the Output page, from left to right. Otherwise, you see
Program needs ID in ABAP Program Development Status.
v Generation Method includes three ways to create the ABAP program:
– Build SQL query. This option (the default) lets you define the data to be
extracted by constructing an SQL query. (It uses the same SQL Builder
interface as that for other stages.) It does this by causing Build to open the
Build Open SQL Query dialog, described in the Building SQL queries section.
– Build extraction object. When you select this option, Build opens the Build
Extraction Object dialog, which disables Options. This method gives you
backward compatibility (the dialog is similar to the Extraction Object dialog
in the former version). This dialog includes OK and Cancel (instead of just
Close as in the previous version). Generate ABAP/4 and Connect no longer
exist. See Building Extraction objects for more information.
– Enter program as text. If selected, Build, Options, and Generate Program are
disabled. In this case, you can open ABAP Editor (using Edit Program) to
create the program by hand. If you edit a generated program, Generation
Method changes to Enter program as text.
v Build. Opens a dialog to build an extraction object or SQL query, depending on
the selected generation method.
Click to open the Build Extraction Object dialog if Build extraction object is
selected as the Generation Method. (Options is disabled.)
The Build Extraction Object dialog is similar to the Extraction Object dialog in
the former version, except it uses OK and Cancel (instead of merely Close), and
Generate ABAP/4 and Connect no longer exist.
Opens the Build Fully Generated Query dialog if Build SQL Query is selected
as the Generation Method.
v Options. Lets you further customize the generated ABAP by opening the
Program Generation Options dialog. This option is only available if Build SQL
Query is selected.
v Load Method. Specifies how to load the ABAP program. The options depend on
the data transfer method:
– WebSphere DataStage job loads the program. (Program ready for runtime
load appears as status after program generation.)
– Current user loads the program from this dialog box.
– SAP administrator loads the program manually.
v Delete the program from R/3 after it runs.
v Run Method. Specifies how to run the ABAP program. Options depend on the
data transfer and the load methods.
– WebSphere DataStage job runs the program. Selected by default. If the data
transfer method is CPI-C, only this option appears. The same limitation
applies if you select WebSphere DataStage job runs the program as the load
method.
– SAP administrator runs the program manually. Appears if WebSphere
DataStage job loads the program is not selected.
v ABAP Program Development Status. Displays the following read-only status for
the ABAP program you are developing so you know the next step. For example:
– SQL query needs to be built. Appears when you specify an ID, but the
selected Generation Method is Build SQL query, and you need to build the
query.
Note: When you save the ABAP program to R/3, you see a warning if a
program with the same name already exists on the R/3 system (you can
overwrite the program).
You can also click Load Program to R/3, Save Program as File, Clear program,
Edit program, Editor options, or Validate program if appropriate.
Building SQL queries
You can define the data to be extracted by constructing an SQL query.
The SQL Builder interface is the same as for other stages, for example, the Oracle
Call Interface stage.
If you select Build SQL Query as the Generation Method on the Output ABAP
Program page and click Build, the Build Open SQL Query dialog opens with the
Tables page on top.
The Build Open SQL Query dialog contains the Tables (the default), Select,
Where, Having, Order By, and SQL pages.
The generated SQL is customized to match the specific SQL syntax of the source
database and conforms to the SAP proprietary Open SQL syntax used by ABAP
programs.
The first five pages of the Build Open SQL Query dialog correspond to a clause in
the generated SQL. The SQL page shows the complete SQL statement as
determined by the settings in the preceding pages.
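For illustration, a simple query against a single table might produce an Open SQL statement like the following on the SQL page. The table and columns (MARA, MATNR, MTART) and the comparison value are examples only, not part of any shipped job:

```abap
* Illustrative generated Open SQL, shown without the INTO clause,
* as it appears on the SQL page. Table MARA and its columns are examples.
SELECT MATNR MTART
  FROM MARA
  WHERE MTART EQ 'FERT'
  ORDER BY MATNR.
```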
Click > and the selected table appears in the Selected tables list.
Join Type determines the type of join (see Join to -> buttons) between two or more
specified tables as follows. The tree representation of join nesting is described later.
v Inner >. Adds the table using an inner join.
v Left >. Adds the table using a left outer join.
v Right >. Adds the table using a right outer join.
Show Related Tables displays tables related to the selected table through foreign
or primary keys. The related tables are listed in two folders that appear in a tree
structure below the selected table. These folders are labelled Referenced Tables
and Referring Tables. When related tables are added to the join tree (Selected
Tables), default join conditions are created automatically based on the key
relationships.
The join condition and join type of the selected join are shown in controls below
the join tree. Through these controls, you can modify the properties of the join.
The All Tables page includes the Available Tables control, which shows the R/3
tables organized according to the SAP application hierarchy. This hierarchy groups
the tables into folders and sub-folders based on the business category of the data
that is contained in the tables. (The organization of these folders is the same as that
in the SAP R/3 Module Tree dialog in the former version of this stage.)
This page lets you perform a simple query to search for the appropriate table. The
tables resulting from such a search can be added one by one to the join tree for the
query (shown in Selected Tables).
The fields in Available columns include the Short Text for each field to help you
identify the ones you want.
Click > and the selected columns appear in the Selected columns list.
The Agg menu includes these options: SUM, AVE, MIN, MAX, and COUNT. The
menu lets you add new columns that apply one of these functions to the selected
available column.
Columns in the Selected columns list can be reordered using the MOVE UP and
MOVE DOWN buttons.
Columns in the Selected columns list can be removed using the < (remove
selected) and << (remove all) buttons.
Click Find to open a dialog that lets you search for a field in Available columns or
Selected columns, depending on where the focus is when you click Find.
Use the Key column check box to specify which columns should be used as key
columns for the link.
When you click OK, the entire query is validated. The stage validates the syntax of
the Where and Having tabs and verifies the presence of at least one table and at
least one selected column. Errors are reported as warnings, but you can exit the
dialog without fixing the errors.
You can use WebSphere DataStage job parameters on the Where tab of the SQL
Builder by typing the parameter reference into one of the grid boxes. In this case,
the parameter references are in the standard format that is used elsewhere in
WebSphere DataStage (#PARAM#, where PARAM is the name of the job
parameter).
For example, the #LANG_LOW# and #LANG_HIGH# parameters specify the range
of values for the Where clause. You can click the SQL page to see the conversion of
parameter references to the format required by ABAP (see Build Open SQL Query
SQL page). This conversion is done automatically when the SQL is generated.
See Using job parameters for information about using WebSphere DataStage job
parameters with extraction objects.
When you click OK, the entire query is validated. The stage validates the syntax of
the Where and Having tabs and verifies the presence of at least one table and at
least one selected column. Errors are reported as warnings, but you can exit the
dialog without fixing the errors.
You can use WebSphere DataStage job parameters on the Having tab of the SQL
Builder by typing the parameter reference into one of the grid boxes. In this case,
the parameter references are in the standard format that is used elsewhere in
WebSphere DataStage (#PARAM#, where PARAM is the name of the job
parameter). You can click the SQL page to see the conversion of parameter
references to the format required by ABAP (see Build Open SQL Query SQL page).
This conversion is done automatically when the SQL is generated.
See Using job parameters for more information about using WebSphere DataStage
job parameters with extraction objects.
The Available columns are the ones selected on the Select page. The columns
listed higher in the Order by columns list have a higher precedence in the
resulting sort order. You can modify the precedence by using the MOVE UP and
MOVE DOWN buttons.
To change Ascending to Descending (or vice versa), click Ascending in the list
control. A combo box appears that allows changes.
The metadata is obtained directly from the R/3 system from which the data is
extracted.
The Open SQL statement is like the statement that appears in the generated ABAP
program, but without the INTO clause.
The SQL page displays the conversion of parameter references to the format
required by ABAP. This conversion is done automatically when the SQL is
generated.
Click OK to accept the SQL statement or Cancel to discard all the input.
Use any of the following methods to search for an SAP table to add to the
extraction object. Click the appropriate button, choose the appropriate command
from the Extraction Object Editor shortcut menu, or use the shortcut keys.
v Add tables from Search (Alt+s)
v Add tables from Module Tree (Alt+m)
v Add tables from Logical Database (Alt+l)
Before learning how to add a table, read the next section about navigation.
Navigation:
When you navigate the SAP R/3 Module Tree or the SAP R/3 Logical Database
Tree, double-click an entry to expand it. Because R/3 Enterprise is such a large
system, load metadata only when you want to interact with the system.
In the Extraction Object Tree, the root is the highest level and the field is the
lowest level. Because WebSphere DataStage processes input as a rowset, the
second level must have only one table.
The following navigation guidelines pertain to the Build Extraction Object dialog.
Common operations have corresponding buttons and shortcut menu commands.
v You cannot delete the root.
v You can relocate a table by selecting and dragging it to the desired location.
v A table can become a subtree of another table tree.
v You can delete tables by clicking Delete or choosing Delete from the shortcut
menu.
v You can move a column within one table, but you cannot move it outside that
table to another table.
v You cannot move columns to other columns.
v You can delete all or selected columns associated with a table by clicking Delete,
choosing Delete from the shortcut menu, or pressing Alt+e.
Adding a table:
1. Click Add Tables → From Search or choose Add Tables from the Extraction
Object Editor shortcut menu. The R/3 Table Searching & Adding dialog box
opens.
2. To find a table, do one of the following:
v Type a full or partial table name using the * or % wildcard in the Table
Name field. For example, you can type t00, t00*, or t00%. You can use * or %
interchangeably in one table name.
v Type a description in the Table Description field. You can type a full or
partial name using a wildcard (* or %). This name is case-sensitive.
v Create a combination of table name and table description values. Use the
AND or OR option buttons to specify the Boolean relationships among the
selection criteria.
Click Search. The Search Results for Table grid is filled in. The dialog box
name is replaced by “Searching result: n entries found.”
3. Do one of the following:
v Highlight a table, then click Table Definition, or choose Table Definition
from the Table List Operations shortcut menu. The Table definition dialog
opens.
v Highlight one or more table columns, then click Add to EO Tree or choose
Add to Extraction Object from the Table Definition Operations shortcut
menu. The columns are added to the Build Extraction Object dialog. Close
the Table definition dialog.
v Click Print, or choose Print from the Table List Operations shortcut menu to
send the search results to the printer.
v Click Save As, or choose Save As from the Table List Operations shortcut
menu to save the search results to a flat file.
4. To view the table contents, click Table Content, or choose Table Content from
the Table List Operations shortcut menu. The Table Contents dialog box
opens.
The table content list on the left under Select Field for Selection Criteria
displays the associated column details. You can search for a string in the field
name or a description using the Find What field, the Direction list, and Find.
Using Find What is especially helpful for searching large tables.
You must specify selection criteria in order to view the table contents.
a. To add a single value, select the column for which you are specifying the
condition. Select a comparison operator from the Operator list under Single
Value Operation.
Type the comparison value in the Value field.
Click Add Single Value. The condition is added to the SQL Selection
Criteria box.
b. You can add a range of values in the same manner. Select the column for
which you are specifying the condition. Select INCLUDE or EXCLUDE.
Note: If you make an error in specifying the condition, you can remove
the condition from the SQL Selection Criteria box by clicking Clear
Condition. This removes the entire selection criteria definition from the
box. You can also manually edit the SQL condition.
c. If you do not specify any fields, this stage by default loads every field back
into the Fields for Selection list. If you use RFC to connect to the SAP
application server and the table is large (that is, the record length is large),
the load may return nothing. In this case, make sure that the record you
want to view is less than 512 bytes long. After you specify the selection
criteria, click View Contents. A Table Content dialog showing the table
contents opens. The rows are displayed in batches of 500.
Select one of the following using the buttons or commands on the Table
Content Operations shortcut menu:
d. Click Save As to save the table contents to a flat file using the Save As
dialog box.
e. Click Print to send the table contents to the printer.
Close this dialog and the Table Content dialog to go to the R/3 Table
Searching & Adding dialog box.
5. After you verify that you have the correct table, select the table from the grid in
your R/3 Table Searching & Adding dialog box, and click Add to Extraction.
Select one of the following using the buttons or the shortcut menu on the R/3
Table Searching & Adding dialog box:
v Select Save As to save the table columns to a flat file using a standard Save
As dialog box.
v Select Print to send the table columns to the printer.
6. Close the R/3 Table Searching & Adding dialog box. The Build Extraction
Object dialog now shows the added table.
3. The lowest level of any node in a module tree is a table. Right-click a table to
see its table definition or table content, or add it to the Build Extraction Object
dialog.
You can select multiple fields or multiple tables at the same time, but you
cannot select both fields and tables at the same time.
Note: If you make an error in specifying the condition, you can remove the
condition from the ABAP SQL Condition box by clicking Clear Condition.
This removes the entire selection criteria definition from the box. You can also
manually edit the SQL condition. You can then review your SQL statement by
clicking View SQL.
9. Highlight any entry, and click Property on the Build Extraction Object dialog to
look at its properties and values in the Properties window.
Using job parameters
To use WebSphere DataStage job parameters from within the ABAP code that is
generated from the specified extraction object, you need to perform certain tasks
when designing your job.
1. Define the necessary job parameters in the WebSphere DataStage Designer
Client by choosing Edit > Job Properties. Enter job parameters on the
Parameters page.
In this example, LANG_LOW and LANG_HIGH are defined as job parameters
for this extraction job.
2. You can use job parameters in the SQL Condition Builder dialog by using the
syntax: DS_JOB_PARAM@job_parameter. In the following example, LANG_LOW
and LANG_HIGH specify a range of values in the SQL condition. Likewise,
you can use them in a single value or a join operation.
Click Add Range to add the range condition to the ABAP SQL Condition box.
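Such a range condition might look like the following in the ABAP SQL Condition box (the table and field, MAKT-SPRAS, are illustrative; substitute the field you are actually constraining):

```
( MAKT-SPRAS BETWEEN DS_JOB_PARAM@LANG_LOW AND DS_JOB_PARAM@LANG_HIGH )
```

At run time, the DS_JOB_PARAM@ references are replaced with the values you supply for the LANG_LOW and LANG_HIGH job parameters.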
3. Generate the ABAP code. Save and compile the job when your job design is
complete. When you run this job, WebSphere DataStage prompts for values for
each of the job parameters that are defined for this job. At run time, these
values are passed to the ABAP program that performs the data extraction from
R/3 Enterprise.
This lets you regenerate the program with all your custom insertions after you
modify the SQL query.
The list of columns is automatically synchronized with the SQL query (or the
extraction object) whenever you change the set of extracted tables and fields that is
specified in the SQL query (or extraction object). However, because you can also
modify the columns directly in the WebSphere DataStage Designer Client, the
Synchronize Columns command is useful for bringing them back into agreement.
Completing the job
Complete the definition of the other stages in your job according to normal
WebSphere DataStage procedures. Compile and run the job.
R/3 Enterprise
data type  Description                     Examples

ACCP       Posting period YYYYMM           Type the value in single quotation
                                           marks, as a year and month in the
                                           format YYYYMM, for example, '199906'.

CHAR       Character strings               Type the value in single quotation
                                           marks, for example, 'xyz'.

CLNT       Client                          Type the value in single quotation
                                           marks, for example, '800' (three
                                           digits).

CUKY       Currency key, referenced        Type the value in single quotation
           by CURR fields                  marks, for example, 'USD' for US
                                           dollars, or 'EUROS' for Euros.

CURR       Currency field, stored as       Type a numeric value, for example,
           DEC                             500.00.

DATS       Date field (YYYYMMDD),          Type the value in single quotation
           stored as char(8)               marks, as year, month, and day in the
                                           format YYYYMMDD, for example,
                                           '19990623'.

DEC        Counter or amount field         Type a numeric value without quotation
           with comma and sign             marks, for example, 8.0 or -8.0.

FLTP       Floating-point number,          Type a numeric value without quotation
           accurate to 8 bytes             marks, for example, 8.0 or -8.0.

INT1       1-byte integer, decimal         Type a numeric value without quotation
           number <= 254                   marks, for example, 1 through 254.
R/3 Enterprise data type   WebSphere DataStage data type
FLTP(n)                    FLOAT(n)
INT1(n)                    CHAR(n)
INT2(n)                    CHAR(n)
INT4(n)                    CHAR(n)
LANG(n)                    CHAR(n)
LCHR(n)                    VARCHAR(n)
LRAW(n)                    VARCHAR(n)
Validation is based on the data transfer method selected on the Data Transfer
Method tab of the Output page. (The system automatically supplies this data
transfer method information for the Runtime Validations dialog.)
FTP transfers:
CPI-C transfers:
You can also perform validations for CPI-C data transfers. For these transfers, the
following operations are considered:
v SAP Connection/Logon. SAP connection and logon are validated using the
details specified on the General tab of the Output page. However, if you specify
CPI-C User Name and Password in the CPI-C logon details section, these values
are used instead. After a successful logon, the system also determines whether
the user type is CPI-C and indicates this with a message.
v Check installed RFC. This component is shipped with the Pack and must be
installed on the R/3 Enterprise system. The validation routine checks whether
the correct version of RFC (Z_RFC_DS_SERVICE) is installed and activated on
the R/3 Enterprise system.
v ABAP Options. Based on the method selected to load the ABAP program, this
section requires one of the following validation sequences:
– Design Time Load and Manual Load. The validation routine only checks
whether the ABAP program exists in the DataStage Repository.
– Runtime Load. The validation routine also checks whether the ABAP
program is syntactically correct.
RFC transfers:
When you validate RFC data transfers, the following operations are considered:
v SAP Connection/Logon. Validates the SAP connection and logon using the
details specified on the General tab of the Output page. After a successful logon,
the system also determines whether the data transfer type is RFC and indicates
this with a message.
v Check installed RFC. This component is shipped with the Pack and must be
installed on the R/3 Enterprise system. The validation routine checks whether
the correct version of RFC (Z_RFC_DS_SERVICE) is installed and activated on
the R/3 Enterprise system.
v ABAP Options. Based on the method selected to load the ABAP program, this
section requires one of the following validation sequences:
– Design Time Load and Manual Load. The validation routine checks whether
the ABAP program exists in the DataStage Repository.
– Runtime Load. The validation routine also checks whether the ABAP
program is syntactically correct.
The IBM WebSphere DataStage IDoc Extract stage lets WebSphere DataStage
capture IDocs from R/3 source systems to be used as source data for WebSphere
DataStage job data streams. It lets you browse R/3 Enterprise IDoc metadata,
select IDoc types to process, and extract the data from IDocs.
The IDoc Extract stage includes a set of tools and a custom GUI to passively
retrieve and process IDocs generated by R/3 Enterprise. It complements the
WebSphere DataStage ABAP Extract stage, which is described in Part 1 of this guide.
Only standard SAP interfaces are used to access IDoc metadata and content. No
ABAP code is uploaded to R/3 source systems for IDocs.
NLS (National Language Support) is supported for the IDoc Extract stage.
Functionality
The IDoc Extract stage has the following functionality:
v Retrieval of IDocs generated by R/3 Enterprise as source data for IBM
WebSphere DataStage job data streams.
v Simultaneous connections to multiple R/3 Enterprise instances from WebSphere
DataStage.
v Ability to define a unique directory for each IDoc type and R/3 source combination.
v IDoc metadata browser capability.
v Automatic and manual job processing modes.
v Automatic re-connections to R/3 Enterprise.
v A separate client utility, the WebSphere DataStage Administrator for SAP,
which manages and configures R/3 connection and IDoc type properties.
v A persistent staging area (PSA) on the WebSphere DataStage server for storage
of IDocs retrieved from R/3 source systems.
v Performance features for a high volume of data.
v A mechanism for coordinating the processing of IDocs from the PSA.
The source data is delivered to WebSphere DataStage rather than being actively
requested from the source as in the traditional data access stages. Consequently, a
background listener receives the IDocs as an R/3 source system delivers them to
WebSphere DataStage. The listener stores each received IDoc in a persistent staging
area (PSA), which conceptually serves as the data source for the IDoc Extract stage.
This stage is a source stage, which means that it has no input links. It reads the
IDoc from the PSA, parses its content, and forwards this content as relational rows
down the output links of the stage for processing by downstream stages.
The custom GUI, which interfaces directly with an R/3 system, helps you design
jobs that use the IDoc Extract stage. You can select from a list of available IDoc
types and choose the IDoc segments you want. IDoc segment metadata is
automatically retrieved and used to populate a column grid for each output link
associated with a segment.
The following diagram illustrates the architecture of the system. The runtime
components include the listener subsystem and the IDoc Extract stage. These two
components provide the IDoc retrieval and processing functions for the system, as
described in the following sections.
[Figure: system architecture. The IDoc custom GUI and the IDoc metadata
repository reside in DataStage; the IDoc plug-in receives IDocs from R/3 source
systems 1 through n.]
Configuration requirements
WebSphere DataStage Systems. You need to install the following components: two
on the WebSphere DataStage server system and two on the client.
v WebSphere DataStage server system:
– Listener subsystem, which installs and registers the listener manager as a
service executable. See Listener subsystem for details.
– IDoc Extract stage.
v WebSphere DataStage client system:
– IDoc Extract client GUI.
– WebSphere DataStage Administrator for SAP utility.
As with the ABAP Extract stage, the installation is facilitated by a prior installation
of the SAP GUI.
R/3 Source Systems. Some configuration on all R/3 source systems is required to
identify WebSphere DataStage as a target system. Otherwise, the listeners cannot
connect to RFC ports in order to receive IDocs.
For WebSphere DataStage to acknowledge receipt of IDocs from R/3, it must send
a message of type STATUS back to the sending R/3 system. The logical system that
corresponds to the WebSphere DataStage RFC server can participate in various
distribution model views. In these cases, those distribution model views must be
configured to allow WebSphere DataStage to send such messages.
Note: You must use a user name and non-blank password for WebSphere
DataStage logon credentials to use the IDoc stages. This means you must clear
Omit on the Attach to Project dialog in WebSphere DataStage; otherwise,
unexpected results occur.
Runtime components
The IDoc Extract includes the listener subsystem and the IBM WebSphere
DataStage stage components. The following sections describe the listener
subsystem.
For information about the stage and the administrative functions, see IDoc extract
for SAP R/3 Enterprise stage and Administrator for SAP Utility.
Listener subsystem
The listener subsystem includes the following components:
v Listener manager
v One or more RFC listener servers
This architecture is similar to that of the IBM WebSphere DataStage Pack for SAP
BW. It lets multiple R/3 sources deliver IDocs to WebSphere DataStage. It runs as a
daemon on UNIX and as a service on Microsoft Windows NT®.
Listener manager
The listener manager detects changes in RFC servers by doing the following:
1. At startup, the manager reads a configuration file that contains parameters
describing R/3 connections, which are defined by the IBM WebSphere
DataStage job designer.
2. For each configured R/3 connection, the listener manager starts an RFC listener
server in a separate background process. The manager ensures that all its
associated listener servers remain active and connected to R/3.
3. If it finds any irregularities, it tries to restart the server in question.
4. Additionally, if connection parameters change while a server is connected, the
manager stops the server and uses the updated connection parameters to
restart it.
5. The listener subsystem sends any error messages or messages reported by R/3
to a log file, one for each listener. The log file contains error messages
encountered by the listener subsystem as well as any messages reported by R/3.
You can view the contents of the log file manually, or use the WebSphere
DataStage Administrator for SAP utility.
Listener server
Listener servers run in the background, listening on R/3 transactional RFC (tRFC)
ports, waiting for IDocs to be delivered by R/3. An R/3 administrator must
configure tRFC ports, identifying IBM WebSphere DataStage as a target system.
When a server receives IDocs, it saves them as flat files to a PSA on disk and sends
receipt confirmation to R/3.
The stage runtime component parses the IDoc content and forwards the content as
relational rows down the output links for the stage for processing by downstream
stages.
If you need to stop or start the RFC Listener Manager on Windows NT, we
recommend that you use the Windows NT Service Manager dialog as follows:
1. Choose Start > Settings > Control Panel to display the Windows NT Control
Panel.
2. Double-click Services to display the Services dialog.
3. Select the WebSphere DataStage (IDoc Manager) service.
4. Click the start/stop button to start or stop the RFC Listener Manager.
5. Click Close to exit the Services dialog.
If you shut down the RFC Listener Manager, the RFC Listener Manager shuts
down all individual RFC Listener Servers.
For details about using the Windows NT Services dialog, see the Windows NT
documentation and online help.
The WebSphere DataStage RFC Listener Manager is a UNIX daemon that starts up
automatically by default when the operating system is started.
If you need to stop or start the RFC Listener Manager on UNIX, we recommend
that you run the dsidocd.rc script for your platform with the stop or start command
options. The location of the script for all UNIX platforms is given below:
Platform
Script
HP-UX 11i (Itanium® 64 bit) v2
$DSSAPHOME/DSSAPbin/dsidocd.rc
Linux $DSSAPHOME/DSSAPbin/dsidocd.rc
Example: $DSSAPHOME/DSSAPbin/dsidocd.rc stop
If you shut down the RFC Listener Manager, the RFC Listener Manager shuts
down all individual RFC Servers.
The following sections describe the execution modes available on a per-IDoc basis.
Batch processing mode: The server can launch an IBM WebSphere DataStage job
after a listener receives a specified number of IDocs of a specified type. For
example, you can configure the listener subsystem to launch WebSphere DataStage
jobs that process IDoc type MATMAS01 after 100 instances of this IDoc are
delivered.
You can configure the batch size for IDoc extraction jobs that are to be run in
manual mode.
Use WebSphere DataStage Director or WebSphere DataStage Designer to run the
IDoc extraction job manually.
Related tasks
“Extracting IDocs by IDoc number”
You can configure the IDoc numbers that an IDoc extraction job processes.
The PSA is a configurable property of an IDoc type and the R/3 connection from
which it arrives.
v Type. You can define a separate directory for each IDoc type that the WebSphere
DataStage external system can receive. That is, you can specify a directory of
your choice for each IDoc type that is represented by some job.
v Connection. Since WebSphere DataStage jobs must also specify an R/3
connection, IDocs can be segregated by connection as well as by type.
Example. The following diagram represents four RFC listener servers associated
with a single WebSphere DataStage server instance. Each RFC listener server listens
on one of four R/3 instances: development, QA, and two production instances.
Assume six WebSphere DataStage jobs are on this server.
[Figure: four RFC listener servers, one for each of the R/3 instances Dev, QA,
Prod P1, and Prod P2, writing IDoc types A, B, and C to PSA directories, which
feed the WebSphere DataStage jobs J1 through J6.]
The relationship between WebSphere DataStage jobs and the IDoc types they
process is as follows:
v J1 processes IDoc type A from the R/3 development instance.
v J2 processes IDoc type B from the R/3 development and QA instance.
v J3 processes IDoc type A from the R/3 QA instance.
v J4 processes IDoc type A from the R/3 production instance P1.
v J5 processes IDoc type B from the R/3 production instance P1.
v J6 processes IDoc type C from the R/3 production instance P2.
PSA maintenance: Because of potential limitations in available disk space, the file
system may not have the capacity to store every IDoc that IBM WebSphere
DataStage receives. To minimize the possibility of file systems becoming full due to
an excess of IDocs in the PSA, the IDoc Extract includes the WebSphere DataStage
Administrator for SAP utility for cleaning up or archiving IDocs that have been
processed by WebSphere DataStage jobs and are no longer needed. During
installation, a separate executable is scheduled to run periodically (weekly, every
Saturday, by default) by the operating system. Use the utility to modify the default
scheduling of the executable or to run the cleanup/archive manually. A bookmark
file determines when processed IDocs are deleted from persistent storage.
When the process runs, it scans the PSA, identifying all IDocs that have been
successfully processed by all jobs having an interest in the particular IDoc. When
these IDocs are identified, they are deleted from the file system or archived to
another location. If they are archived, it is the responsibility of the user to maintain
the archive.
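The cleanup rule described above, under which an IDoc may be removed only after every job with an interest in it has processed it, can be sketched as follows. This is a hypothetical illustration; the function name, the arrival-number scheme, and the data structures are assumptions, not the Pack's actual implementation.

```python
# Hypothetical sketch of the PSA cleanup rule: an IDoc is safe to delete or
# archive only when every interested job's bookmark has moved past it.

def removable_idocs(psa_arrival_numbers, job_bookmarks):
    """psa_arrival_numbers: arrival numbers of IDocs still in the PSA.
    job_bookmarks: per-job last-processed arrival number.
    Returns the set of IDocs that all interested jobs have already processed."""
    if not job_bookmarks:
        return set()                       # no interested jobs: keep everything
    slowest = min(job_bookmarks.values())  # the least-advanced bookmark
    return {n for n in psa_arrival_numbers if n <= slowest}

# Job J1 has processed through IDoc 5 but J2 only through 3,
# so only IDocs 1-3 may be cleaned up.
print(sorted(removable_idocs({1, 2, 3, 4, 5}, {"J1": 5, "J2": 3})))  # → [1, 2, 3]
```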
Inactive jobs
Inactive jobs are those that have not recently run. If a job stops being run, IDocs
that are to be processed by the job accumulate. They are not cleaned up because
the cleanup process detects that the inactive job has not yet processed the IDocs
and may never process them. If the job never runs again, the IDocs will
accumulate indefinitely.
To resolve this issue, a job attains inactive status with respect to the listener
subsystem when it has not been run for 40 days. Forty days is the default value,
which you can modify by using the WebSphere DataStage Administrator for SAP
utility. When a job becomes inactive, its
unprocessed IDocs are not saved or archived by the cleanup process. For more
information, see Administrator for SAP Utility.
Configuration files
Because R/3 connection parameters and IDoc type properties are outside the scope
of individual IBM WebSphere DataStage jobs, they are not stored in job definition
files in the WebSphere DataStage repository. External configuration files store these
parameters and properties. We recommend that you use the IDoc Extract GUI to
modify settings for these configuration files. This section describes the various
configuration files that are used by the server to manage R/3 connection
parameters and IDoc type properties:
v DSSAPConnections.config
v IDocTypes.config
v <IDocType>.config
Note: On UNIX platforms, the dsenv file in the WebSphere DataStage server
directory must contain a umask setting so that users have read and write
permissions on these configuration files. (Windows NT platforms handle this
automatically.)
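For example, an entry such as the following in dsenv grants group read and write permissions on files the server creates (002 is an illustrative value; choose a mask appropriate for your site's security policy):

```
umask 002
```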
v SAPMESSERVER. The message server host that handles the connection request
when using load balancing.
v SAPSYSID. The R/3 system name that handles the connection request when
using load balancing.
v SAPGROUP. The group name of the application servers when using load
balancing.
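Based on the field descriptions above and the <BEGIN>/<END> block format shown for IDocTypes.config later in this section, a load-balancing connection entry might look like the following. This is an illustrative sketch only: the outer DSSAPCONNECTIONS key and the field values are assumptions, and the actual file contains additional fields.

```
DSSAPCONNECTIONS=<BEGIN>
<BEGIN>
NAME=PA45VERSION3
SAPMESSERVER=sapmsprd.example.com
SAPSYSID=PRD
SAPGROUP=PUBLIC
<END>
<END>
```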
IDocTypes.config file: When the listener manager starts, it starts a listener server
for each connection described in this file. The previous example contains a single
connection, PA45VERSION3.
When an IDoc type is first associated with a connection during job design, the
client GUI creates a directory in the IBM WebSphere DataStage server directory
called DSSAPConnections/<ConnectionName>/IDocTypes/<IDocTypeName>.
v ConnectionName specifies one of the connections defined by the NAME=
parameter in the DSSAPConnections.config file.
v IDocTypeName is a directory that specifies the name of an IDoc type.
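For example, using the connection PA45VERSION3 and IDoc type MATMAS01 that appear elsewhere in this section, the client GUI would create a directory such as:

```
DSSAPConnections/PA45VERSION3/IDocTypes/MATMAS01
```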
The IDocTypes.config file specifies the directory locations for writing each type of
IDoc that has been configured for that connection. It has the same format as the
DSSAPConnections.config file, but it contains different fields. For example:
DSIDOCTYPES=<BEGIN>
<BEGIN>
USE_DEFAULT_PATH=FALSE
IDOC_FILES_PATH=/u1/IDocs/MATMAS01
NAME=MATMAS01
<END>
<BEGIN>
USE_DEFAULT_PATH=TRUE
IDOC_FILES_PATH=
NAME=CREMAS01
<END>
<END>
IDocType is the type name of the IDoc. This configuration file contains all the
information that is specific to that IDoc directory location.
For more information about the management of these configuration files, see the
Administrator for SAP Utility section.
The IDoc Extract stage is a passive stage that has output links but no input links.
Each output link from the IDoc Extract stage is mapped to a single IDoc data
segment. You should create links for each IDoc segment whose data is of interest
when you design a job.
If you process only segments E1MARAM and E1MAKTM, for example, the data
contained in other segments is ignored. Segments are records within an IDoc that
can be child or parent segments. They include administrative fields such as the
IDoc number, the segment number, and the number of the parent segment. Child
segments have a many-to-one relationship to parent segments.
The selection of E1MARAM, which has child segments, does not imply that all its
child segments are included. Only the segment fields of E1MARAM are processed.
The segment fields for E1MARAM correlate with the column definitions for the
output link with which E1MARAM is associated. The segment fields for E1MAKTM
become the column definitions for a second output link. Use the custom GUI to
define the column metadata definitions. When a segment is chosen for a link, the
GUI displays the available segment fields that you can select for output. The GUI
then automatically translates the R/3 metadata for the selected segment fields to
equivalent WebSphere DataStage metadata definitions and populates the column
grid.
Examples
Note: WebSphere DataStage uses the segment definition name, for example,
E2MAKTM001, rather than the segment type name. The distinction between the
two is beyond the scope of this document. For more information, consult your SAP
documentation.
When the job is run, the stage reads unprocessed IDocs of the specified type from
the appropriate location in the PSA. Unprocessed IDocs are identified by a
bookmark file that is maintained for each IDoc type. (See the next section and PSA
maintenance for details about bookmarks.)
When subsequent jobs are run, only those IDocs that arrived following the current
bookmark are processed. This prevents a job from repeatedly processing the same
set of IDocs.
If a job fails, you can use the reset functionality for the job to restore the bookmark
to its original state.
Test Mode. To prevent the bookmark for the job from being updated, you can
process IDocs in Test Mode (set on the General tab of the Stage page). For more
information, see About the Stage General page.
The bookmark is updated when a job begins. If a job aborts, a job reset causes the
bookmark to be reset to its state before the aborted run. Therefore, the same batch
of IDocs can be reprocessed without data loss.
This scheme lets more than one job process IDocs of the same type from the same
location in the PSA. Since the bookmark coordinates the access for each job to
IDocs of the same type in a given PSA, it is not possible for one job to exhaust the
IDocs before a second job has the opportunity to process them.
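The per-job bookmark scheme described above can be sketched as follows. This is a hypothetical illustration: the class, the arrival-number scheme, and the in-memory PSA are assumptions for clarity, not the Pack's actual bookmark file format.

```python
# Hypothetical sketch of per-job bookmarks over a shared PSA. Each job tracks
# the last IDoc it processed; a reset rolls the bookmark back after an abort.

class Bookmark:
    """Tracks, per job, the last IDoc arrival number already processed."""
    def __init__(self):
        self.positions = {}   # job name -> last processed arrival number
        self.previous = {}    # job name -> position before the last run

    def unprocessed(self, job, psa):
        """Return the (arrival_no, payload) pairs that arrived after the bookmark."""
        last = self.positions.get(job, 0)
        return [(no, doc) for no, doc in sorted(psa.items()) if no > last]

    def advance(self, job, psa):
        """Move the bookmark to the newest IDoc; remember the old position."""
        self.previous[job] = self.positions.get(job, 0)
        if psa:
            self.positions[job] = max(psa)

    def reset(self, job):
        """Restore the bookmark after an aborted run so IDocs are reprocessed."""
        self.positions[job] = self.previous.get(job, 0)

# Two jobs read the same PSA without exhausting each other's IDocs.
psa = {1: "MATMAS01#1", 2: "MATMAS01#2"}
bm = Bookmark()
batch_j1 = bm.unprocessed("J1", psa)   # J1 sees both IDocs
bm.advance("J1", psa)
batch_j2 = bm.unprocessed("J2", psa)   # J2 still sees both IDocs
bm.advance("J2", psa)
bm.reset("J2")                         # J2 aborted: its bookmark rolls back
```

Because each job keeps its own position, J1 advancing its bookmark does not hide IDocs from J2, which matches the coordination behavior described above.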
Note: On UNIX platforms, the WebSphere DataStage server must be started so that
users who want to run a job have write permission on the bookmark file. This also
applies to the saprfc.ini file, which the job maintains in the WebSphere DataStage
project directory.
For information about using stages in WebSphere DataStage jobs, see WebSphere
DataStage Server Job Developer Guide.
When you start the stage GUI from the WebSphere DataStage Designer Client to
edit an IDoc Extract stage, the General tab of the Stage page opens.
1. DataStage Connection to SAP. The WebSphere DataStage connection to the
R/3 Enterprise system that is defined on the WebSphere DataStage server
machine and shared by all WebSphere DataStage users connected to that
machine. The fields in this area are read-only and are obtained from the
connection that you selected.
Name The name of the selected connection to the SAP R/3 system that
generates the IDocs to be extracted by this stage.
Select...
Click the button to choose a WebSphere DataStage connection to the
SAP R/3 system. The selected connection provides all needed
connection and default logon details that are needed to communicate
with the corresponding SAP R/3 system.
Description
Additional information about the selected connection.
Application Server
The name of the host system running SAP R/3. If the selected
connection uses load balancing, Message Server is used instead.
System Number
The number assigned to the SAP R/3 system used to connect to SAP
R/3. If the selected connection uses load balancing, System ID is used
instead.
2. SAP Logon Details. The fields in this area are read-only unless you clear the
Use connection defaults box.
a. User Name. The user name that is used to connect to SAP.
b. Password. A password for the specified user name.
c. Client Number. The number of the client system used to connect to SAP.
d. Language. The language used to connect to SAP.
e. Use connection defaults. Clear this box to remove the default SAP Logon
Details settings so you can use different logon information. If selected, the
displayed logon details are obtained from the selected connection and are
disabled.
3. Enter an optional description to describe the purpose of the stage in the
Description field.
4. Test Mode. The check box that specifies whether the job runtime component
updates the bookmark file for this stage. If you select this check box, the job
runtime does not update the bookmark file for this stage. Therefore, each time
the job is run, this stage reads all the IDocs that it read when the job was
previously run.
Use Test Mode as you develop a job and need to test it repeatedly with the
same set of input data without causing the bookmark to be updated.
Selecting Test Mode also prevents the job from being run automatically as new
IDocs of this type arrive. The listener server does not launch a job in test mode
even if its IDoc type is designated for automatic execution.
The Administrator can archive processed IDocs rather than permanently
remove them by moving them to a subdirectory of the PSA for the IDoc.
Currently, the only way to reprocess IDocs is to reset an aborted job or set the
job to Test Mode. You cannot reprocess archived IDocs.
For a newly created stage, the connection for the stage defaults to the last one you
selected for another stage of this type. If only one WebSphere DataStage connection
When you click Select on the General tab of the Stage page, the Select WebSphere
DataStage Connection to SAP dialog opens. This dialog lets you choose a
WebSphere DataStage connection to the R/3 Enterprise system. The selected
connection provides all needed connection and default logon details that are
needed to communicate with the corresponding R/3 Enterprise system.
The IDoc Listener Settings options function in the same way as the corresponding
options in the IDoc Listener Settings of the WebSphere DataStage Administrator for
SAP utility. In addition, the Number of servers for this connection field selects the
number of listening processes for the IDoc connection in the WebSphere DataStage
IDOC Manager. This property improves the performance during IDoc extraction.
Use SAP R/3 transaction SMSQS to configure multiple IDoc connections.
Add, Properties, Import into, Export and Remove are administrative connection
operations. They let you manage the list of connections that is maintained on the
WebSphere DataStage server machine from the Select DataStage Connection to SAP
dialog. They function in the same way as the corresponding buttons on the
DataStage Connections to SAP page of the WebSphere DataStage Administrator for
SAP utility.
Use the IDoc Extract Stage Options page to specify design-time connection details.
Name The name of the selected connection to the SAP R/3 system that generates
the IDocs to be extracted by this stage.
Select...
Click the button to choose a WebSphere DataStage connection to the SAP
R/3 system. The selected connection provides all needed connection and
default logon details that are needed to communicate with the
corresponding SAP R/3 system.
Description
Additional information about the selected connection.
Application Server
The name of the host system running SAP R/3. If the selected connection
uses load balancing, Message Server is used instead.
System Number
The number assigned to the SAP R/3 system used to connect to SAP R/3.
If the selected connection uses load balancing, System ID is used instead.
Use this connection for design time
Specifies a separate connection to SAP R/3 that is to be used only at
design time for extracting metadata from the SAP R/3 system. This
connection is not used at runtime.
Separate connections for design time and runtime are required to support
SAP Exchange Infrastructure (SAP XI). The DataStage connection to SAP
defined on the Stage General page is used to connect to SAP XI at job
runtime.
44 Integration Guide for IBM WebSphere DataStage Pack for SAP R/3
IDoc Data Directory Path
The path that is used to generate and store IDoc data or to read IDoc data
files when the SAP R/3 system is not available. This path is set in the IDoc
Type properties in the field Directory containing temporary IDoc files for
this type and connection.
Enable offline IDoc processing
Select to specify that IDocs are to be read from and stored to the path in
IDoc Data Directory Path when the SAP R/3 system is not available at job
runtime.
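The offline fallback described above can be sketched conceptually in Python. The connector factory, the `.idoc` file extension, and all function names here are assumptions for illustration only; they are not part of the product API.

```python
import glob
import os

def read_idocs(connect_sap, data_dir, offline_enabled):
    """Extract IDocs through the SAP connection, or fall back to reading
    previously stored IDoc data files from the configured directory when
    the SAP system is unreachable and offline processing is enabled."""
    try:
        conn = connect_sap()              # hypothetical connector factory
        return conn.extract_idocs()
    except ConnectionError:
        if not offline_enabled:
            raise
        # Offline mode: read each stored IDoc data file from the path
        # configured as the IDoc Data Directory Path.
        idocs = []
        for path in sorted(glob.glob(os.path.join(data_dir, "*.idoc"))):
            with open(path, "rb") as f:
                idocs.append(f.read())
        return idocs
```

When Enable offline IDoc processing is cleared, the connection failure propagates instead of being masked, which mirrors the behavior the option controls.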
Configure an IDoc type as follows. The subsequent sections describe these steps in
detail.
1. Click Select to open the Select IDoc Type dialog for selecting an IDoc type.
2. Select a type, and click OK.
3. If the IDoc type has not been configured, a prompt appears. Click Yes to configure
the IDoc type for use with WebSphere DataStage.
4. Define the IDoc type configuration properties on the IDoc Type Properties
dialog.
5. The selected IDoc type and its component segment types are now visible on the
IDoc Type tab of the Stage page.
The Select IDoc Type dialog displays all the released IDoc types defined on the
R/3 Enterprise system. It has the following components:
v Connection. The connection name for the R/3 system whose IDoc types are
being shown is indicated at the top of the dialog.
v Description. A description of the R/3 system.
v IDoc Types. A list of all the released IDoc types defined on the R/3 system.
v Find. Click to open a Find IDoc Type dialog. It lets you search for IDoc
Types that contain user-specified substrings in their name or description.
v Properties. Click to view and change the WebSphere DataStage configuration
for the selected IDoc type.
v OK. When you select an IDoc type and click OK, the system checks whether
the IDoc type has been configured for the current connection on the WebSphere
DataStage system. If not, you see the following message:
This IDoc type has not been configured for use with WebSphere
DataStage. Would you like to set these options now?
Click Yes to set the options to default values. The IDoc Type Properties
dialog opens (see the following section).
Related tasks
“Extracting IDocs by IDoc number” on page 35
You can configure the IDoc numbers that an IDoc extraction job processes.
Note: See “Setting batch size for manual IDoc extraction” on page 34 for
additional information.
7. The DataStage Logon Details for Running the Jobs area specifies the
WebSphere DataStage logon user name and password, and whether to use the
defaults for the connection.
8. Specify a specific job to run automatically in the Assign job section:
v Select Run specific job to specify the project name and job name that is to
run automatically when the stage receives the number of IDocs in the Run
jobs that extract IDocs of this type after receiving field. In the Project
Name field, choose the name of the project that is to run automatically. The
Project Name field includes up to two project names:
– The most recent, saved project associated with this IDoc type
– The current project
v Clear Run specific job to automatically run all jobs that extract this IDoc
type when the stage receives the specified number of IDocs.
9. After you select an IDoc type and optionally define IDoc Type configuration
properties, the Select IDoc Type dialog closes, and you return to the IDoc Type
tab of the Stage page, with the properties of the selected IDoc type now visible.
The IDoc Components area shows the control record and all the segments
defined for the IDoc type with their descriptions. (The control record, one for
each IDoc, is an administrative record that contains a standard set of fields
describing the IDoc as a whole.) This area contains the following information:
v Name. Shows the hierarchical relationship among the segments using a tree
structure with their descriptions.
(A segment type can appear only once within an IDoc type. The names for
the segments are the segment definition names, not the segment type names.
You can infer the segment type name from the segment definition name.)
v Assigned Output Link. After particular segments in the IDoc Type are
assigned to the output links of the stage, the IDoc Components control
shows the names of the links in the Assigned Output Link column of the
control. This gives you an overview of which segments are being extracted
by the stage.
The General tab of the Output page shows the segment type or control record for
each output link as described in subsequent sections.
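The IDoc type structure described above (one control record per IDoc plus a hierarchy of segment types, each assignable to an output link) can be modeled roughly as follows. The class and field names are illustrative only, not part of the product; the segment names in the usage example follow the MATMAS convention but are placeholders.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Segment:
    definition_name: str                 # segment definition name, not type name
    description: str = ""
    children: List["Segment"] = field(default_factory=list)
    assigned_output_link: str = ""       # link name once assigned, else empty

@dataclass
class IDocType:
    name: str                            # e.g. "MATMAS01"
    control_record: Dict[str, str] = field(default_factory=dict)
    root_segments: List[Segment] = field(default_factory=list)

    def assigned_segments(self) -> List[str]:
        """Walk the segment tree and list segments assigned to output links,
        giving the same overview as the Assigned Output Link column."""
        out, stack = [], list(self.root_segments)
        while stack:
            seg = stack.pop()
            if seg.assigned_output_link:
                out.append(seg.definition_name)
            stack.extend(seg.children)
        return out
```

Because a segment type appears only once within an IDoc type, a simple tree walk is enough to report which segments the stage extracts.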
To prevent third party applications that receive IDocs from the R/3 system from
having to change to accommodate the new segment definitions, you can specify
that IDocs of a particular IDoc type that are sent through a particular port use the
segment definitions from an earlier version of the R/3 system.
If this is already done in your SAP application, set R/3 Version on the IDoc Type
Properties dialog to the same value that is set for the IDoc type in the R/3 system
itself (See Connection and Logon Details page).
R/3 Version lists versions that are the same as or earlier than the version of the
R/3 system that is specified in the connection. It defaults to the current version of
the R/3 system.
Version 46E and Later. By default, the IDoc Extract product only knows about R/3
46D or earlier. If you have an R/3 system that is later than 46D, the program reads
the version from the R/3 system and automatically adds it to the list.
When you click OK, the system checks whether the specified R/3 Version is
known. If it is not, the following prompt appears:
R/3 version "46E" is not among the versions currently
known to DataStage. Would you like "46E" to be added to
the list of known versions?
If you click Cancel, the IDoc Type Properties dialog stays open.
If you enter an incorrectly formatted version and click OK, the following error
appears:
"xx" is not a valid R/3 version. A correctly formatted version must consist of two digits
followed by an upper case letter.
If you enter a version number that is later than that of the R/3 system specified in
the connection, you see the following error:
R/3 Version "xxx" is greater than "xxx", the version of the connection. The R/3 version for
the IDoc type must not be greater than the version of the connection.
If you change the port version of the connection for the stage or the R/3 version of
the IDoc type for the stage, the stage automatically refreshes itself using the
appropriate IDoc metadata. If either of these versions is changed before you
open the stage editor, the act of opening the stage editor produces the following
warning:
Either the port version has changed for connection "xxx", or the R/3 version in the
configuration for IDoc type "xxx" has changed. The IDoc metadata for the stage will now
be fixed.
After you choose the output link from the Output name box, the following steps
summarize how output links function.
5. Columns for missing fields can be added by clicking Add Columns on the
Columns tab of the Output page.
Click Select on the Output General page to open the Select IDoc Component to
Extract dialog, which contains an IDoc Components control identical to the one on
the IDoc Type tab of the Stage page.
1. Select the control record or a segment, and click OK.
2. The dialog closes, and you return to the General tab of the Output page with
the IDoc Component to Extract, Description, and Fields in Component
information.
If you selected a segment, the first seven fields in the Fields in Component list
are administrative fields common to all segments. These fields are followed by
the data fields specific to this segment type.
Adding columns
When you choose a segment or the control record on the General tab on the
Output page, a corresponding list of columns is automatically generated.
You can view and modify these columns using the Columns tab on the Output
page.
The Save IDoc Segment Definition dialog contains the following fields:
v Data source type. The type is always IDoc.
v IDoc type. The name of the data source.
v Segment name. The name of the table or file.
v Short description. The brief description of the IDoc segment.
v Long description. The detailed description of the IDoc segment.
The IBM WebSphere DataStage Load stage is a passive stage that has input links
but no output links. Use this stage to generate IDocs from source stages to load
data into R/3 Enterprise.
The user interface for the IDoc Load stage is almost identical to that of the IDoc
Extract stage.
v Ability to allow the WebSphere DataStage job designer to map relevant segment
data from multiple sources. Each IDoc Load stage in a WebSphere DataStage job
can send IDocs of one chosen type to R/3 Enterprise.
v A mapping mechanism that allows relational row data to be converted to SAP’s
proprietary IDoc data format. IDoc structural metadata determines the link
processing order. Join keys, which are sort keys, are identified by the job
designer. The processing order and join keys are used by the stage to assemble
IDocs.
v Support for message types, the scheme for electronically transmitted data used
for one specific business transaction.
v Support for primary key (P-key), and foreign key (F-key) handling, which differs
from that in the IDoc Extract stage. Foreign keys for segments at root level of the
IDoc hierarchy separate data into separate IDocs, allowing one stage to send
several IDocs in one transmission to SAP R/3. Foreign keys in non-root level
segments relate child segment records to their parent segment record.
v NLS (National Language Support). For information, see WebSphere DataStage
Administrator Client Guide.
Configuration requirements
WebSphere DataStage Systems. You need to install the following components on
the WebSphere DataStage server and client systems:
v WebSphere DataStage server system:
– IDoc Load for SAP R/3 stage.
v WebSphere DataStage client system:
– IDoc Load for SAP R/3 client GUI.
– Administrator for SAP utility.
As with the ABAP Extract for SAP R/3, the installation is facilitated by a prior
installation of the SAP GUI.
R/3 Source Systems. Some configuration on all R/3 source systems is required to
identify WebSphere DataStage as a target system.
After installing the stage GUI, start the stage editor from the WebSphere DataStage
Designer Client by doing one of the following:
v Double-click the stage in the Diagram window
v Select the stage and choose Properties from the shortcut menu.
v Select the stage and choose Edit > Properties from the WebSphere DataStage
Designer Client window.
Note: You must use a user name and a non-blank password for the WebSphere
DataStage logon credentials to use the IDoc stages. This means that you must clear
Omit on the Attach to Project dialog in WebSphere DataStage; otherwise,
unexpected results occur.
For a root-level segment type, the foreign key value identifies the specific
generated IDoc into which the segment records will be incorporated.
Since the column lists for an IDoc Load for SAP R/3 stage are generated
automatically, Transformer stages map values from source data columns to the
columns generated from the fields of the segment types. In each Transformer stage,
you must also map source data values to values that can be used as primary and
foreign keys for the segments. Values for the key columns are not actually loaded
as data into the IDocs; they are used only to correlate records flowing into
separate links.
You can design jobs in any way that provides effective key values. The foreign key
values provided for a link representing a child segment type must exactly match
the primary key values provided for the link that represents its parent segment type.
For root-level segment types, the foreign key identifies the IDoc for the segments.
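A minimal sketch of this key correlation, assuming simplified row dictionaries with generated `P_KEY` and `F_KEY` columns; it models the documented key semantics only, not the stage's actual implementation.

```python
from collections import defaultdict

def assemble_idocs(parent_rows, child_rows):
    """Group flat relational rows into IDocs using generated key columns.

    parent_rows: rows for a root-level segment. The foreign key (F_KEY)
    identifies which generated IDoc the record belongs to; the primary
    key (P_KEY) identifies the record itself.
    child_rows: rows for a child segment. Each F_KEY must exactly match
    the P_KEY of one parent record.
    """
    idocs = defaultdict(list)
    parents = {}
    for row in parent_rows:
        record = {"fields": row["data"], "children": []}
        parents[row["P_KEY"]] = record
        idocs[row["F_KEY"]].append(record)   # root F_KEY partitions IDocs
    for row in child_rows:
        # Relate each child segment record to its parent segment record.
        parents[row["F_KEY"]]["children"].append(row["data"])
    return dict(idocs)
```

Because the root-level foreign key partitions the data, one stage can assemble several IDocs for a single transmission, as described above.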
When you start the stage GUI from the WebSphere DataStage Designer Client to
edit an IDoc Load for SAP R/3 stage, the General tab of the Stage page opens by
default.
The main phases in defining an IDoc Load for SAP R/3 stage from the Stage
dialog box are as follows. See the following sections for details.
1. Select a connection to SAP.
2. Select an IDoc type.
3. Optionally define a character set map.
4. Define the data on the input links.
Click OK to close this dialog box. Changes are saved when you save the job
design.
Stage General page
The parameters that are needed by WebSphere DataStage to connect to an SAP
database are defined on the General tab on the Stage page. You select a logical
WebSphere DataStage connection to SAP from the list of available connections.
This list is maintained on the server. Each connection contains all the information
needed to connect to a particular R/3 system. Logical connections are shared with
the IDoc Extract for SAP R/3 component.
Use the following components on this tab to connect to an SAP R/3 system:
v WebSphere DataStage Connection to SAP. The WebSphere DataStage
connection to the SAP R/3 system that is defined on the server machine and
shared by all users connected to that machine. The fields in this area are
read-only and are obtained from the connection that you selected. For more
information, see Selecting the WebSphere DataStage Connection to SAP.
– Name. The name of the selected connection to the SAP R/3 system that
generates the IDocs to be loaded by this stage. The logical connection
includes RFC client logon details.
– Select. Click to choose a WebSphere DataStage connection to an existing SAP
R/3 system or create a new one. The Select WebSphere DataStage
Connection to SAP dialog opens. The selected connection provides all needed
connection and default logon details that are needed to communicate with the
corresponding SAP R/3 system. The default SAP Logon Details are used if
Use connection defaults is selected.
– Description. Additional information about the selected connection.
– Application Server. The name of the host system running R/3. If the selected
connection uses load balancing, Message Server is used instead.
– System Number. The number assigned to the SAP R/3 system used to
connect to R/3. If the selected connection uses load balancing, System ID is
used instead.
v SAP Logon Details. The fields in this area are read-only unless you clear the
Use connection defaults box.
– User Name. The user name that is used to connect to SAP.
– Password. A password for the specified user name.
– Client Number. The number of the client system used to connect to SAP.
– Language. The language used to connect to SAP.
– Use connection defaults. Clear to remove the default SAP Logon Details
settings so you can use different logon information for only this stage in this
job. If selected, the displayed logon details are obtained from the selected
connection and are disabled.
v Enter an optional description to describe the purpose of the stage in the
Description field.
v Validate All. Select to check the stage and link properties and the column lists
for consistency and completeness.
For a newly created stage, the connection for the stage defaults to the last one you
selected for another stage of this type. If only one WebSphere DataStage connection
to an SAP R/3 system is defined on the server machine, a new stage defaults to
that connection regardless of whether you previously created an IDoc Load for
SAP R/3 stage.
This dialog lets you choose a WebSphere DataStage connection to the SAP R/3
system. The selected connection provides all needed connection and default logon
details that are needed to communicate with the corresponding SAP R/3 system.
New, Properties, and Remove are administrative connection operations. They let
you manage the list of connections that is maintained on the WebSphere DataStage
server machine from the Select WebSphere DataStage Connection to SAP dialog:
v New. Click to open the Connection Properties dialog, which lets you define the
properties for the new connection. Here you can specify SAP connection details
and default logon details for the new connection. The Connection and Logon
Details page opens by default.
v Properties. Click to open the Connection Properties dialog, showing the
properties of the selected connection. This is the same dialog that is opened
when you click New, but in this context the connection name is read-only.
v Remove. Click to delete the selected connection after your confirmation.
Use Properties to open the Connection and Logon Details page of the Connection
Properties dialog, showing the properties of the selected connection. This is the
same dialog that is opened when you click New, but in this context the connection
name is read-only.
Click IDoc Load Settings on the Connection Properties dialog to complete the
configuration for loading IDocs.
Selecting IDoc types
After you provide a WebSphere DataStage connection to SAP on the General tab of
the Stage page, specify the type of the IDocs to be loaded on the IDoc Type tab of
the Stage page.
0 - All IDocs are sent in one packet as part of a single SAP IDoc Load
transaction. If the transaction fails, no IDocs are loaded into the SAP system.
1 - The default IDoc packet size, which maintains the original IDoc Load stage
behavior. Each IDoc is loaded as part of a separate SAP IDoc Load transaction.
n - n IDocs are part of a single SAP IDoc Load transaction, where n is a number
greater than 1. If n is set to 10 and there are 50 IDocs to be loaded, then 5
transactions are used, each sending 10 IDocs in the packet. If there is a problem
in the fourth transaction, 30 IDocs will have been committed in three separate
transactions, no IDocs from the fourth transaction are loaded, and the fifth
transaction does not occur.
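The packet-size arithmetic above can be illustrated with a small helper that plans how many IDocs go into each transaction. This is a conceptual model of the described behavior, not product code.

```python
def plan_transactions(num_idocs, packet_size):
    """Split a load of IDocs into SAP IDoc Load transactions.

    packet_size 0 sends everything in one transaction; otherwise each
    transaction carries packet_size IDocs (the last may carry fewer).
    Returns the number of IDocs in each planned transaction.
    """
    if packet_size == 0:
        return [num_idocs] if num_idocs else []
    full, rest = divmod(num_idocs, packet_size)
    return [packet_size] * full + ([rest] if rest else [])
```

For example, `plan_transactions(50, 10)` yields five transactions of 10 IDocs each; a failure in the fourth leaves the 30 IDocs from the first three transactions committed, matching the scenario described above.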
For more information about NLS or job parameters, see WebSphere DataStage
documentation.
Begin the selection process from the General tab on the Input page:
The Input page has an Input name field, the General and Columns pages, and the
Columns button. The Input link pages are almost identical to the Output link
pages of the IDoc Extract for SAP R/3 stage. They display the fields to load.
v Input name. The name of the input link. Choose the link you want to edit from
the Input name list box.
v Click the Columns button to display a brief list of the columns designated on
the input link. As you enter detailed metadata in the Columns page, you can
leave this list displayed.
After you choose the input link from the Input name box, the following steps
summarize how input links function. The subsequent sections describe these steps
in detail.
1. Click Select beside IDoc Component to Load on the General tab of the Input
page to open the Select IDoc Component to Load dialog. The IDoc Component
to Load, Description, and Fields in Component information is displayed after
you select a segment for the link.
2. The selected segment type and its fields to load now appear on the General tab
of the Input page.
3. Columns are automatically generated for each link when the segment type is
selected.
4. You can delete unnecessary columns.
5. Columns for missing fields can be added by clicking Add Columns on the
Columns tab of the Input page.
Selecting the IDoc component to load
When you click Select on the General tab of the Input page, the Select IDoc
Component to Load dialog appears. This dialog contains an IDoc Components
control identical to the one on the IDoc Type tab of the Stage page.
You select the segment definition for each link from the Select IDoc Component to
Load dialog.
The most recently released version of the IDoc type and its segments is used.
1. Select a segment type and click OK.
2. The dialog closes, and you return to the General tab of the Input page with the
IDoc Component to Load, Description, and Fields in Component information.
Modifying columns
When you choose a segment from the Select IDoc Component to Load dialog, a
corresponding list of columns for the link including key columns is automatically
generated.
P-keys (primary keys) are named after the segment type. F-keys (foreign keys)
relate records for input links to parent records. As a result, metadata is
synchronized throughout the links when changes are made to it. F-keys and P-keys
are automatically updated, giving users flexibility in using data from different
columns in the source data.
You can view and modify these generated columns using the Columns tab on the
Input page.
The Columns tab of the Input page has the following components:
v The Corresponding Load Field for Column "xx" area is the single-row grid near
the bottom of the tab showing the load field that corresponds to the selected
column. (The value changes depending on the selected column.) It includes
Name, Description, Internal Type, and Length information. If the selected
column is a key column with no corresponding segment field, no values are
displayed.
v Click Add Columns to add columns for any fields that are not currently
represented in the columns list. The Add Columns dialog appears where you
can select from the available load fields for columns.
You can also delete any unneeded columns.
v Click Validate Columns to check the column list for consistency with the
segment fields.
The first column in the list on the Columns tab of the Input page
(IDOC_MATMAS03_P_KEY in the previous example) represents the primary key
for the IDocs.
Any generated value can be used as the value for the column, but for each value
of the key, there must be exactly one record that is received through the link.
Select a corresponding segment type for the rest of the input links the same way as
previously described. As each segment type is being selected using the Select IDoc
Component to Load dialog, segment types that have already been assigned a link
are indicated in the Assigned Input Link column.
Generally, you should assign links to parent segment types before assigning links
to the child segment types.
Use Validate All on the General tab of the Stage page to detect when a link is
assigned to a segment but links are not assigned to its ancestors. If you try
to assign a segment type whose parent segment type has not been assigned to
another link, you are warned, but you can continue.
Synchronizing columns
It is sometimes more convenient to use more than one column for a particular key.
Use Key Columns to open the Key Columns for Link dialog. This lets you change
the number of columns that are used to represent the primary and foreign key for
the link. If you change the primary key to two, for example, and click OK, the
column list is refreshed, generating as many columns for each key as requested.
Changing the number of columns that are used for a primary key also causes the
column lists for links that include a corresponding foreign key to be updated.
Likewise, changing the number of columns used for a foreign key causes the
corresponding primary key columns to be updated in the link representing the
parent segment type.
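The key-column naming shown earlier (for example, IDOC_MATMAS03_P_KEY) suggests the following sketch of how a multi-column key might be generated when Key Columns is changed. The numbered suffix is an assumption for illustration; the product's actual naming for multi-column keys may differ.

```python
def key_columns(segment_type, count):
    """Generate primary-key column names for a segment type.

    A parent's primary-key columns and its children's foreign-key columns
    stay synchronized because both are derived from the same count.
    """
    if count == 1:
        return [f"IDOC_{segment_type}_P_KEY"]
    # Hypothetical numbered suffixes for a composite key.
    return [f"IDOC_{segment_type}_P_KEY_{i}" for i in range(1, count + 1)]
```

Refreshing the column lists for both the parent link and its child links from this single function mirrors the synchronization behavior described above.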
Loading and extracting SAP R/3 business objects using BAPI
interfaces
SAP created the Business Framework, which is an open, component-based product
architecture. It allows the technical integration and exchange of business data
among R/3 SAP components and between SAP and non-SAP components.
Business Objects and their BAPIs are important components of the Business
Framework, which are used by the stage to move data across SAP and non-SAP
components in a standard way.
BAPIs are standard SAP interfaces that are used by the stage to integrate with SAP
R/3 systems. They are technically implemented using function modules that are
RFC-enabled (Remote Function Call) inside SAP systems.
You can use the BAPI stage to interface with the mySAP Business Suite, which is a
family of solutions and integrated application platforms, such as CRM, SCM, and
so forth that are supplied by SAP. Use the BAPI stage within IBM WebSphere
DataStage as a passive stage, for example, to load data into and extract data from
SAP R/3 Enterprise as the target system. Do this by using the library of Business
Application Programming Interfaces (BAPIs), which is provided and maintained by
SAP. The databases always remain in a consistent state after executing the BAPIs
(methods).
The BAPI stage lets you use these BAPIs to design load or extract jobs. You can do
this because the stage captures the metadata for each BAPI and dynamically builds
the complex RFM call to execute these BAPIs.
You can use the BAPI stage GUI to select a Business Object and its BAPI from the
SAP R/3 system and display the corresponding interface which you can use to
load or extract data. The runtime component for the stage loads or extracts data
into or from the SAP application server.
Functionality
The BAPI stage has the following functionality:
v Lets you choose and define SAP connections using the GUI.
v Lets you explore the SAP BOR, dynamically choose any BAPI, and store its
metadata in the job.
v Lets you view the BAPI interface using a function module (RFC) with its import,
export, and table parameters, and optionally decide which parameters to use
when executing the BAPI.
v Works with the mySAP Business Suite products.
v Loads and extracts data into and from SAP R/3 using BAPIs with appropriate
log information.
v Supports NLS (National Language Support). For information, see IBM WebSphere
DataStage Server Job Developer Guide.
(Figure: BAPI stage data flow through the SAP application layer; numbered steps omitted.)
In summary, the stage can dynamically call any BAPI and process the returned
data appropriately, based on whether it is a load or an extract BAPI.
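The parameter-activation model can be sketched as follows, assuming BAPI parameter metadata captured from the BOR. The dictionary layout and the BAPI name in the test are hypothetical; only active parameters are included, mirroring the green/red activation icons in the stage editor.

```python
def build_bapi_call(bapi_name, parameters, values):
    """Assemble the argument set for a dynamically built BAPI (RFC) call.

    parameters: captured metadata, e.g.
        {"MATERIAL": {"kind": "import", "active": True}, ...}
    values: the data to pass for each parameter name.
    Inactive parameters (red icons) are skipped when calling the BAPI.
    """
    call_args = {}
    for name, meta in parameters.items():
        if not meta["active"]:
            continue                  # inactive: unused when calling BAPIs
        call_args[name] = values.get(name)
    return {"function": bapi_name, "args": call_args}
```

At run time the stage would pass such an argument set to an RFC-enabled function module; this sketch only models which parameters participate in the call.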
Building a job
The BAPI stage can be a data source and a data destination. You can add links into
the stage and out from the stage. Multiple links are allowed in both directions that
let this stage call multiple BAPIs in the same job to load and extract data at the
same time.
If the job uses multiple input or output links, each link must define a unique BAPI,
that is, a link cannot contain a BAPI that is already used in another link in the
same job.
To build a job:
1. Create a WebSphere DataStage job using the WebSphere DataStage Designer
Client.
2. Create the BAPI stage, adding the stages and links you need to load and extract
data. Double-click the BAPI stage icon to open the stage editor dialog (GUI).
3. Define the stage properties.
4. Define the SAP R/3 connection details and the SAP logon information.
5. Specify the properties for input links for loading data to SAP R/3.
6. Specify the properties for output links for extracting data from SAP R/3.
7. Compile the job.
You can change the default character set map defined for the project or the job by
selecting a map name from the list.
For more information about NLS or job parameters, see WebSphere DataStage
documentation.
Since the information is stored on the WebSphere DataStage server, you can share
the connections that you created across several WebSphere DataStage clients. The
connection operations function in the same way as the corresponding buttons on
the DataStage Connections to SAP page of the WebSphere DataStage Administrator
for SAP utility.
For more information about creating or defining new connections and deleting
existing connections, see the following section and Administrator for SAP Utility.
The Application Server and the System Number controls are replaced by Message
Server, System ID, and Group controls so that connection details specific to load
balancing can be entered.
The Input page has an Input name field, the General, BAPI, Logs, and Columns
pages, and the Columns button.
v Input name. The name of the input link. Choose the link you want to edit from
the Input name list.
v Click the Columns button to briefly list the columns designated on the input
link. As you enter detailed metadata in the Columns page, you can leave this list
displayed.
After you select a BAPI on the BAPI Explorer dialog, the General tab
automatically displays the selected Business Object, BAPI, and short description of
the selected BAPI.
This interim dialog opens only for release 4.6 and later of SAP. Otherwise, the
BAPI Explorer window opens.
After you select a component ID, this dialog contains only those Business objects
that belong to the selected application component. The list of available application
components to choose from is dynamically built, depending on the accessed
system.
v BAPIs to Display. You can show only released BAPIs (the default) or all BAPIs
in the BAPI Explorer window. Click Show All BAPIs to see BAPIs that are
unreleased; if you use them, you must ensure that they are complete and ready
to use, or a warning appears.
v OK. Click to open the BAPI Explorer dialog.
v Cancel. Click to return to the Input or Output General page.
You can view the parameters or attributes for the selected BAPI on the Import (the
default), Tables, and Export tabs. Fields that are required for import and table
parameters appear on the Columns page.
When a BAPI is first selected, only required parameters are active on the BAPI
Import tab of the Input page. For input links, fields that are required for import
parameters appear on the Columns page.
You can tell whether a parameter is active by the following indications:
v Green icons beside the parameter names indicate that the parameters are active,
that is, used to dynamically build BAPI calls at run time. (Red icons indicate
that parameters are inactive, unused when calling BAPIs.)
v Green icons display I or E to indicate whether table parameters are activated for
Import or Export.
When you move the cursor over a parameter, its shape changes, indicating that
you can activate or deactivate parameters.
Click the Tables tab on the Input BAPI page to view the table parameters of a
BAPI.
Fields that are required for tables parameters appear on the Columns page.
Click the Export tab on the Input BAPI page to view the export parameters of a
BAPI. By default, export parameters are optional and initially inactive.
Fields that are required for export parameters appear in a grid on the Input Logs
page, not on the Input Columns page, because they contain return values from the
BAPI call.
The reference structure being used for each parameter appears with a short text
description of the parameter. This helps you decide whether to activate a particular
parameter.
All fields in the grid except the first (BAPISeqNo) are derived from reference
structures for the import and tables parameters that get activated on the Input
BAPI page.
As for other stages, you can use the following components for the column
definitions:
v Save. Click to open the Save Table Definition dialog to save column definitions
to the WebSphere DataStage repository.
v Load. Click to open the Table definitions dialog to load the column definitions
from the WebSphere DataStage repository into other stages in your job design.
This resizable window looks exactly like the Input General page, except that the
selected BAPI extracts data from SAP.
The Output page has an Output name field, the General (the default), BAPI, Read
Logs, and Columns pages, and the Columns button.
v Output name. The name of the output link. Choose the link you want to edit
from the Output name list.
v Columns. Click to display a brief list of the columns designated on the output link. As
you enter detailed metadata in the Columns page, you can leave this list
displayed.
After you select a BAPI on the BAPI Explorer dialog (see BAPI Explorer dialog),
the General tab of the Output page automatically displays the selected Business
Object, BAPI, and short description of the selected BAPI.
This page displays the attributes for the selected BAPI. As for the Input BAPI page,
you can design and determine the runtime behavior of the selected BAPI.
When you first select a BAPI, only required parameters are active.
The color of the icon beside each parameter name indicates whether the
parameter is active:
v Green. Indicates an active parameter that is used when the BAPI call is
dynamically built at run time.
v Red. Indicates an inactive parameter that is not used when calling the BAPI.
You can activate or deactivate table parameters for Import or Export. When values
need to be passed to SAP using table parameters, you can activate them for Import
(or Input). When values need to be extracted from SAP, you can activate table
parameters for Export.
The reference structure that is used for each parameter appears with a short
text description of the parameter to help you decide whether to activate a
particular parameter.
For output links, the following parameters appear on the Columns page:
v Parameters activated on the Export tab
v Tables parameters that are activated for extracting values
The input file for import parameters passes input parameters to SAP R/3 for BAPI
extraction jobs.
The BAPI input file for import parameters passes data to the BAPI as
comma-separated values (CSV). The file has the same encoding as the SAP R/3
platform (therefore, on Unicode platforms, the file encoding is Unicode). You
specify the full path and file name of the input file in the Input File for Import
Parameters field of the Output Read Logs tab.
The input parameters can be structures or tables. The input-file format matches the
columns that are displayed by the BAPI Output Read Logs page.
v Each column displayed by the Read Logs page is represented in the input file as
a comma-delimited field.
v Each line in the file represents a call to the BAPI. Each call takes the input data
from one line. (To pass multiple lines to BAPI tables, use the Load design.)
v Use SAP R/3 to display the data in a table or structure.
Sample
The following example input file makes five BAPI calls. Each call takes the input
data from one line. (Each line in this example is truncated to fit the page.)
4500000153,,,,,,abc,,,,,,12,,,,,org,,,,345,f,,,,,,,,,,,,,,,,,,,,,,,,,IS,22,1,,,
4500000153,12345,d,text0,,,,,,,,,,,,,,,1234567890.123,pou,uis,opu,opi,45678,,,,
4500000153,,,,,,abc,,,,,,12,,,,,org,,,,345,f,,,,,,,,,,,,,,,,,,,,,,,,,IS,22,1,,,
4500000153,22347,d,text1,,,,,,,,,,,,,,,2234567890.123,pou,uis,opu,opi,45678,,,,
4500000153,,,,,,abc,,,,,,12,,,,,org,,,,345,f,,,,,,,,,,,,,,,,,,,,,,,,,IS,22,1,,,
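The line-per-call format above can be parsed mechanically. The following Python sketch splits such an input file into one list of field values per BAPI call; the helper name and the shortened sample data are illustrative, not part of the product:

```python
# Illustrative sketch: parse a BAPI input file for import parameters.
# One line represents one BAPI call; fields are comma-delimited, and
# empty fields (consecutive commas) stay as empty strings.
def read_bapi_input(text):
    """Return one list of field values per BAPI call."""
    return [line.split(",") for line in text.splitlines() if line]

# Two truncated lines in the style of the sample above.
sample = (
    "4500000153,,,,,,abc,,,,,,12\n"
    "4500000153,12345,d,text0\n"
)
calls = read_bapi_input(sample)
```

Each element of `calls` supplies the input data for one BAPI call, matching the columns displayed on the Read Logs page.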
The grid displays those fields that correspond to the parameters selected on the
Import tab on the Output BAPI page and the fields that correspond to parameters
activated on the Tables tab for Input on the Output BAPI page.
You specify the locations for import parameters and other log files on this
window. By default, a directory whose name corresponds to the selected BAPI is
created in the DataStage\DSSAPConnections directory.
For example, if the selected BAPI for the stage is PurchaseOrder.GetDetails and the
connection name is C46, the default directory for import values is:
DSHOME\DSSAPConnections\C46\BAPIs\Extracts\PurchaseOrder.GetDetails
You can specify another directory to store these import values by clearing Use
defaults and browsing for existing directories or by entering a value in the edit
control.
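As a sketch of how the default location is composed, the values below come from the PurchaseOrder.GetDetails example above; the helper name is hypothetical and the exact layout may differ between releases:

```python
import ntpath  # Windows-style separators, as in the paths shown in this guide

def default_import_dir(dshome, connection, bapi):
    # Layout inferred from the PurchaseOrder.GetDetails example;
    # treat it as illustrative rather than authoritative.
    return ntpath.join(dshome, "DSSAPConnections", connection,
                       "BAPIs", "Extracts", bapi)

path = default_import_dir("DSHOME", "C46", "PurchaseOrder.GetDetails")
```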
Click Browse beside Location for Log Files to open the Browse directories dialog
to select a directory.
You can also click Browse beside Input File for Import Parameters to open a
dialog to select a file.
Runtime component
The BAPI stage runtime component is a modified sequential file stage that
supports multiple links in both directions. It dynamically builds a call to the
selected BAPI and runs that call.
Input links. The return values from the BAPI call are stored in the appropriate
directory that you specify on the Input Logs page.
The name of the log files corresponds to the parameter names that are activated for
the BAPI, for example:
v Export parameters. If an export parameter named PURCHASEORDER is
activated for the BAPI, a log file named PurchaseOrder.txt is created in the
Export directory in the directory specified on the Input Logs page.
v Tables parameters. You can use table parameters to import or export values.
Two log files are created for each parameter, which contain the following:
– The parameter name with the .TXT suffix
– The parameter name with the __RETURN.TXT suffix
For example, for the table parameter named DESCRIPTION, log files named
DESCRIPTION.TXT and DESCRIPTION_RETURN.TXT are created.
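The naming rule for table-parameter log files can be sketched as follows. Note that the text above writes the return suffix both with one underscore (in the DESCRIPTION example) and with two; this sketch follows the example, so treat the exact spelling as an assumption:

```python
def table_param_log_files(param):
    """Names of the two log files created for an active table parameter.

    Follows the DESCRIPTION.TXT / DESCRIPTION_RETURN.TXT example; the
    underscore count in the return suffix varies in the guide, so this
    is illustrative only.
    """
    return [param + ".TXT", param + "_RETURN.TXT"]

files = table_param_log_files("DESCRIPTION")
```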
Output links. The job expects import values to be available in the Import directory
that gets created under the pathname specified on the Output Read Logs page. The
name of the log files corresponds to the parameter names that are activated for the
BAPI, for example:
v Import parameter. If an import parameter named PURCHASEORDER is
activated for the BAPI, a log file named PurchaseOrder.txt is expected to be in
the Import directory in the path name specified on the Output Read Logs page.
v Tables parameters. You can use table parameters to import or export values.
Two log files are created for each active parameter, which contain the following:
– The parameter name with the .TXT suffix
– The parameter name with the __RETURN.TXT suffix
The Pack includes a stand-alone utility called WebSphere DataStage Administrator
for SAP that lets you manage the configurations of the various R/3 connection and
IDoc type objects that are beyond the scope of a WebSphere DataStage job.
You can invoke this utility from the WebSphere DataStage program group on a
WebSphere DataStage client machine to do the following:
v Define the connection to the SAP database.
v Monitor the activity of the IDoc listener.
v Migrate connection and IDoc type configuration settings from one WebSphere
DataStage installation to another.
v Modify connection and IDoc type configuration settings.
v Schedule cleanup of temporary IDoc files.
The following steps let you define a WebSphere DataStage connection to SAP. The
subsequent sections describe these steps in detail.
1. Run the WebSphere DataStage Administrator for SAP on a WebSphere
DataStage client machine.
2. Click Add > New to define the properties for the new connection.
3. Enter a connection name, a description, and SAP connection and logon details.
4. Optionally, use load balancing.
5. Enter an IDoc listener program ID (the same ID must be defined for the tRFC
port in the SAP database).
6. Define the properties for running jobs automatically as IDocs arrive on the
DataStage Job Options for IDocs page.
7. Click Add to test the connection and logon properties and add the connection.
An IDoc listener server starts automatically after the new connection is added.
8. Configure the SAP system to access WebSphere DataStage by doing the
following:
a. Log on to the SAP R/3 system.
b. Create a tRFC port, and assign it the IDoc listener program ID that was set
for the connection.
c. Create a logical system to represent WebSphere DataStage.
d. Attach the tRFC port to the logical system.
9. Click IDoc Types to see the IDoc types for the selected connection. It opens the
IDoc Types dialog where you can set the properties of IDoc types.
The WebSphere DataStage Connections to R/3 page has the following components:
v Add. Click to open a popup menu with the New and Import items. Do the
following:
1. Click New to open the Connection Properties dialog, which lets you define
the properties for the new connection. Here you can specify Connection
Name, Description, and SAP connection and default logon details for the
new connection (See Defining WebSphere DataStage connections to SAP
R/3).
2. Click Import to open a standard Open File dialog that lets you select the
export file to be imported. This lets you import a new connection (see
Importing new connections).
v Properties. Click to open the Connection Properties dialog, showing the
properties of the selected connection. This is the same dialog that is opened
when you click New on the Select DataStage Connection to SAP dialog, but in
this context the connection Name is read-only, and the Add button is replaced
by an OK button.
v Import into. Click to import IDoc type configurations into the selected
existing connection. A standard Open File dialog appears so you can select a
file to be imported (see Importing IDoc type configurations).
v Export. Click to save the configuration information for the selected connection
and all its associated IDoc types into a file (see Exporting connections).
v Remove. Click to delete the selected connection after your confirmation.
v IDoc Types. Click to see the IDoc types for the selected connection. It opens the
IDoc Types dialog (see IDoc Types dialog). This button does not appear on the
Select WebSphere DataStage Connection to SAP dialog.
v IDoc Log. Click to open the IDoc Log dialog. This dialog displays log messages
reported by the IDoc listener for the connection (see the section IDoc Log
dialog). This button does not appear on the Select WebSphere DataStage
Connection to SAP dialog.
If Listen for IDocs received through this connection is cleared on the IDoc
Listener Settings page, the Import Into, IDoc Types, and IDoc Log buttons on
this page are disabled.
The WebSphere DataStage Connections to R/3 page lets the RFC Server make a
non-load balancing connection even if load balancing is specified in the connection
configuration and provides an option that prevents the Listener from sending
status updates back to the R/3 system when IDocs are received.
The Connection and Logon Details page opens by default. Group appears for this
window when Use load balancing is set.
Load balancing
Select Use Load balancing on the Connection and Logon Details page of the
Connection Properties dialog to balance loads when connecting to the R/3 system.
Load balancing works as follows:
1. R/3 lets you make logon connections through a message server. The message
server uses an algorithm that considers server workload and availability to
choose an appropriate application server to handle the logon.
When connections are configured, you can choose a load balancing connection
to a message server rather than a specific R/3 instance to retrieve and validate
IDoc types and metadata.
2. If load balancing is selected for this connection, the listener server uses load
balancing when returning status updates to R/3.
3. If load balancing is selected, the stage runtime component uses load balancing
when connecting to R/3 to validate IDoc type metadata at job execution time.
The listener server does NOT use this connection to listen for IDocs arriving on an
RFC port since this is an RFC server connection. Load balancing is only a client
connection feature.
This page lets you make an additional load balancing test connection as follows:
1. Select Listen for IDocs received through this connection so that a listener
server runs continuously on the WebSphere DataStage server machine. This
check box is selected by default. If selected, this option indicates that the
connection is enabled for use with IDoc Extract or IDoc Load.
If Listen for IDocs received through this connection is cleared, the other
controls on the IDoc Listener Settings page and DataStage Job Options for
IDocs page are disabled (including the labels for the controls). Also if cleared,
the Import Into, IDoc Types, and IDoc Log buttons on the DataStage
Connections for R/3 page of the WebSphere DataStage Administrator for SAP
utility are disabled when you select that connection.
2. Specify IDoc Listener Program ID. The listener registers this program ID with
the R/3 system. Use the same program ID for the tRFC port on the SAP R/3
system to be invoked when the R/3 system sends an IDoc to WebSphere
DataStage.
3. Clear Return status update upon successful receipt of IDoc to prevent the
listener from sending status messages to the R/3 system when IDocs are
received. (It is set by default.) Do this to reduce the load incurred by the R/3
system when it sends IDocs to WebSphere DataStage.
4. If you select Use load balancing on the Connection and Logon Details page,
and the load balancing test connection succeeds, an additional test connection
is made using the IDoc Listener SAP Connection Details that you specify.
If this test connection fails, the following warning appears:
Currently unable to connect to SAP using the IDoc Listener
connection information specified for this connection. Do
you want to save your changes anyway?
Separate connection details must be provided for the listener, since the Listener
cannot make a load-balancing connection. If load balancing is indicated on the
Connection and Logon Details page, you can modify the Application Server,
System Number, and Router String controls shown on the IDoc Listener Settings
page.
The Application Server, System Number, and Router String information is used
by the listener when it connects to the R/3 system. If you clear Use load balancing
on the Connection and Logon Details page, the listener uses the connection details
specified on that page. In that case, the Application Server, System Number, and
Router String controls on the IDoc Listener Settings page display values copied
from the Connection and Logon Details page, but these values are read-only
(disabled).
Click WebSphere DataStage Job Options for IDocs to open the page:
configurations are to be imported. This includes only those IDoc types that are
not already configured for the connection.
2. A standard Open File dialog opens that lets you select the export file to be
imported. Normally, the export file is created by exporting a connection from
another WebSphere DataStage installation.
3. The Add Connection dialog opens, with property values that default to those of
the connection being imported. If the name of the connection is the same as
that of an existing connection, you are asked if you want to overwrite the
existing connection.
4. The Import IDoc Type Configurations dialog opens, which shows the name and
description of the newly imported connection, and lists the IDoc types whose
configurations are imported with the connection (see the following section).
Exporting connections
To export connections by saving configuration information into a file:
1. Click Export from the WebSphere DataStage Connections to SAP page to
open the Export Connection dialog. This dialog lets you save the following
configuration information for the selected connection and all its associated IDoc
types into a file:
To set the properties of the IDoc types for the selected connection:
1. Open the IDoc Types dialog by clicking on IDoc Types on the WebSphere
DataStage Connections to SAP page.
This dialog is similar to the Select IDoc Type dialog, except OK is replaced by
Close, and there is no Cancel.
The Connection field specifies the connection whose IDoc types are displayed
in the list with descriptive text in the Description field.
2. Click Find to open a Find IDoc Type dialog to search for IDoc types that
contain user-specified substrings in their name or description as in the Select
IDoc Type dialog.
3. Click Properties to examine and change the WebSphere DataStage
configuration for the selected IDoc type using the IDoc Type Properties dialog.
(You can also do this by double-clicking an IDoc type in the list.)
SAP Authorization Requirements for ABAP
This section describes the SAP authorization requirements to run Version 3.0.1r1 of
the ABAP Extract for SAP R/3 stage. It documents how to manually create a
Z_DS_PROFILE authorization profile and the Z_RFC_DS_SERVICE RFC function
module in SAP R/3.
You already have these two components installed on your R/3 system if you
imported transport requests. However, if this import is unsuccessful, you can
manually create these components in your R/3 systems (see Creating an
authorization profile manually and Installing the RFC function code manually).
The section also describes the configuration of the SAP Dispatch/Gateway service.
It further includes information about packaged extraction jobs that include
common extraction scenarios which you can customize for your environment.
The IBM WebSphere DataStage ABAP Extract for R/3 stage addresses this concern
and provides a comprehensive and flexible way to communicate with R/3 systems
without compromising the security aspects that already exist.
Depending on the level of security desired and the type of system (DEV, QAS, or
PRD) from which to extract data, you can assign the standard SAP S_A.DEVELOP
profile or Z_DS_PROFILE that is shipped as a transport request on the Pack client
CD to ABAP Extract stage users. The Z_DS_PROFILE, which you can install by
importing the transport request or create manually, contains the minimum
authorizations for designing and running WebSphere DataStage jobs.
These limited authorizations ensure that ABAP Extract stage users can run only
those transactions to which they have permissions and are denied access to other
transactions. For more details about each authorization, see Creating an
authorization profile manually.
The SAP Administrator needs to create a profile in the R/3 system. This profile
contains the authorization objects referenced in the following text with their
respective authorizations. The Administrator must then assign this profile to
WebSphere DataStage extract users.
Authorization Object   Text                                        Authorization
S_RFC                  Authorization check for RFC access          Z:DS_RFC
S_DATASET              Authorization for file access               Z:DS_DATASET
S_CPIC                 Calls from ABAP programs                    Z:DS_CPIC
S_TABU_DIS             Table maintenance (using standard tools,    Z:DS_TABU
                       such as SM30)
S_BTCH_JOB             Background Processing: Operations on        Z:DS_BJOB
                       Background Jobs
S_XMI_PROD             Authorization for external management       Z:DS_XMI
                       interfaces (XMI)
v Object. S_DATASET - Authorization for file access
To upload or delete ABAP code, SAP users (CPIC and Dialog) must be
registered within SAP as developers. To do this, log on to the Online Support
System (OSS), and select the appropriate SAP installation number. Go to
Register > Register Developer, and type the SAP user name. OSS provides a 20
character access key. Record this access key, and create a dummy ABAP
program. When SAP prompts you for an access key, type this OSS key, and the
system will confirm it.
For security purposes, the Program name and Company name are required and
are case sensitive.
4. The Attributes page of the Function Builder: Change Z_RFC_DS_SERVICE
dialog opens.
Specify the following values on the Attributes page.
Use the defaults for the remaining entries:
v Application: Z
v Short text: WebSphere DataStage General Service Utility
v Processing type: Remote-enabled module
v Update module: Start immed.
If you need help determining which function group to use, see step 2 in
Troubleshooting for RFC installation for SAP R/3.
5. Click the Import page to enter the import parameters. New service types exist
in Z_RFC_DS_SERVICE so that job parameters can be passed to ABAP
programs that run in the background. For details about running these
programs in the background, see Running programs in the background.
The Import page opens.
Type the following values shown on this sample Import page for the
Parameter name, Type, Reference type, Default value, Optional, and Pass
value fields:
Parameter name    Type   Reference type   Default value   Optional   Pass value
I_SERVE_TYPE      LIKE   SY-TFILL                                    Yes
I_TABLE_NAME      LIKE   DD02T-TABNAME                    Yes        Yes
I_REPORT_NAME     LIKE   D010SINF-PROG                    Yes        Yes
I_DELIMITER       LIKE   SONV-FLAG        SPACE           Yes        Yes
I_NO_DATA         LIKE   SONV-FLAG        SPACE           Yes        Yes
I_ROWSKIPS        LIKE   SOID-ACCNT       0               Yes        Yes
I_ROWCOUNT        LIKE   SOID-ACCNT       0               Yes        Yes
I_CURR_VARIANT    LIKE   RSVAR-VARIANT                    Yes        Yes
I_VARI_DESC       LIKE   VARID                            Yes        Yes
To use the WebSphere DataStage Pack for SAP R/3 correctly, you must add entries
for the SAP Dispatch/Gateway service to the services file for your DataStage client
and server systems. If SAP R/3 is already configured on the WebSphere DataStage
client and server systems, these entries might already be present.
Unix: /etc/services
# SAP Port
#
sapdp00 3200/tcp
sapdp01 3201/tcp
sapdp02 3202/tcp
sapdp03 3203/tcp
sapdp04 3204/tcp
sapdp05 3205/tcp
sapdp06 3206/tcp
sapdp07 3207/tcp
sapdp08 3208/tcp
sapdp09 3209/tcp
sapdp10 3210/tcp
sapdp11 3211/tcp
sapdp12 3212/tcp
sapdp13 3213/tcp
sapdp14 3214/tcp
sapdp15 3215/tcp
sapdp16 3216/tcp
sapdp17 3217/tcp
sapdp18 3218/tcp
sapdp19 3219/tcp
sapdp20 3220/tcp
sapdp21 3221/tcp
sapdp22 3222/tcp
sapdp23 3223/tcp
sapdp24 3224/tcp
sapdp25 3225/tcp
sapdp26 3226/tcp
sapdp27 3227/tcp
sapdp28 3228/tcp
sapdp29 3229/tcp
sapdp30 3230/tcp
sapdp31 3231/tcp
sapdp32 3232/tcp
sapdp33 3233/tcp
sapdp34 3234/tcp
sapdp35 3235/tcp
sapdp36 3236/tcp
sapdp37 3237/tcp
sapdp38 3238/tcp
sapdp39 3239/tcp
sapdp40 3240/tcp
sapdp41 3241/tcp
sapdp42 3242/tcp
sapdp43 3243/tcp
sapdp44 3244/tcp
sapdp45 3245/tcp
sapdp46 3246/tcp
sapdp47 3247/tcp
sapdp48 3248/tcp
sapdp49 3249/tcp
sapdp50 3250/tcp
sapdp51 3251/tcp
sapdp52 3252/tcp
sapdp53 3253/tcp
sapgw10 3310/tcp
sapgw11 3311/tcp
sapgw12 3312/tcp
sapgw13 3313/tcp
sapgw14 3314/tcp
sapgw15 3315/tcp
sapgw16 3316/tcp
sapgw17 3317/tcp
sapgw18 3318/tcp
sapgw19 3319/tcp
sapgw20 3320/tcp
sapgw21 3321/tcp
sapgw22 3322/tcp
sapgw23 3323/tcp
sapgw24 3324/tcp
sapgw25 3325/tcp
sapgw26 3326/tcp
sapgw27 3327/tcp
sapgw28 3328/tcp
sapgw29 3329/tcp
sapgw30 3330/tcp
sapgw31 3331/tcp
sapgw32 3332/tcp
sapgw33 3333/tcp
sapgw34 3334/tcp
sapgw35 3335/tcp
sapgw36 3336/tcp
sapgw37 3337/tcp
sapgw38 3338/tcp
sapgw39 3339/tcp
sapgw40 3340/tcp
sapgw41 3341/tcp
sapgw42 3342/tcp
sapgw43 3343/tcp
sapgw44 3344/tcp
sapgw45 3345/tcp
sapgw46 3346/tcp
sapgw47 3347/tcp
sapgw48 3348/tcp
sapgw49 3349/tcp
sapgw50 3350/tcp
sapgw51 3351/tcp
sapgw52 3352/tcp
sapgw53 3353/tcp
sapgw54 3354/tcp
sapgw55 3355/tcp
sapgw56 3356/tcp
sapgw57 3357/tcp
sapgw58 3358/tcp
sapgw59 3359/tcp
sapgw60 3360/tcp
sapgw61 3361/tcp
sapgw62 3362/tcp
sapgw63 3363/tcp
sapgw64 3364/tcp
sapgw65 3365/tcp
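The listing follows a simple arithmetic pattern: sapdpNN maps to port 3200 + NN and sapgwNN to 3300 + NN. The following sketch (a hypothetical helper, not part of the Pack) generates entries in the same format, which can be useful when extending the services file:

```python
def sap_service_entries(kind, start, count):
    """Generate services-file lines for SAP dispatch (sapdp) or gateway
    (sapgw) services. Base ports 3200/3300 follow the listing above."""
    base = {"sapdp": 3200, "sapgw": 3300}[kind]
    return ["%s%02d %d/tcp" % (kind, n, base + n)
            for n in range(start, start + count)]

lines = sap_service_entries("sapgw", 10, 3)
```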
This troubleshooting information applies to all versions of SAP R/3. If the RFC
utility does not work, do the following:
1. Verify that the settings are correct on the Attributes page for transaction SE37.
2. If you do not know which function group to use, start transaction SE84 to see a
list.
a. Click Programming > Function Builder, then double-click Function
groups. The R/3 Repository Information System: Function groups dialog
opens, where you specify the selection criteria.
b. Type z* in the Func. group name field, and run it to display a list of
available function groups. The Function groups dialog opens.
c. If you are not sure which group to use, start a new transaction SE80. The
Object Navigator dialog opens.
d. Several functions are displayed under the function modules branch. Display
the different function groups (as listed in SE84 in step 2) until you find the
one you want. The Import page appears.
3. Verify that the parameters are correct for the Import page according to the
following guidelines:
a. The LIKE entry for Type is case-sensitive.
b. The Optional value must be clear for the I_SERVE_TYPE parameter. The
other parameters must have Optional selected.
c. The RFC function cannot pass a value by reference. Pass value must be
selected.
4. Verify that the parameters for the Export, Tables, and Exceptions tabs are
correct.
5. Click Source code. The function module header appears at the top of the code.
The IMPORTING and EXPORTING values must be passed by VALUE, not
REFERENCE. Instead of uploading the code into the Editor, do the following:
a. Copy the code in the ZDSRFC.TXT file, from TABLES through the last
ENDIF, to the clipboard.
b. Paste the code to the Editor in the function module.
c. Check that the syntax for this function procedure displays No syntax error
found.
6. Activate Z_RFC_DS_SERVICE by selecting Function module > Activate (or
pressing Ctrl+F3) so that the code can run properly.
7. As part of the installation process, test that the function works in your
environment by testing the version number or the table content:
SAP and RFC code version number. Do the following:
a. Start Transaction SE37, and type Z_RFC_DS_SERVICE.
b. Press F8. The Test Function Module: Initial window opens.
c. Click Execute. The Test Function Module: Result window appears. The
SAP version number (for example, 46B) and the RFC utility version number
are displayed.
Table content. Do the following:
d. Start Transaction SE37, and type Z_RFC_DS_SERVICE in the Function module
field.
e. Press F8. The Test Function Module: Initial window opens.
f. Type 5 in the I_SERVE_TYPE field.
g. Type T000 for I_TABLE_NAME.
h. Click Execute. The Test Function Module: Result window opens. If
T_FIELDS and T_DATA contain records, the RFC utility works properly.
Packaged extractions
The ABAP Extract for SAP R/3 stage includes packaged extraction jobs that help
you understand common extraction scenarios.
You can customize these jobs for your ETL initiatives by providing parameters
specific to your environment and by using the extraction object and corresponding
ABAP that already exist in the job. The ABAP Extract stage also includes jobs that
extract long text fields from R/3. The following list describes these jobs and the
SAP tables from which they extract data:
v LongTextCPIC. This job uses the CPIC data transfer method to extract long
text from VarChar fields. It decompresses them by using the READ_TEXT
function module.
Note: The Extraction Object and the corresponding ABAP program cannot be
regenerated because regenerating them overwrites the code that calls the
function module.
v LongTextFTP. This job uses the FTP data transfer method to extract long text
from VarChar fields. It decompresses them using the READ_TEXT function
module.
Note: The Extraction Object and the corresponding ABAP program cannot be
regenerated as this results in overwriting the code to call the function module. If
Path of Remote File in the Output >Runtime tab is changed, you must
manually search for the old path in the ABAP program and replace it with the
new one.
v MARAextract. Extracts a range of materials from MARA and their
corresponding descriptions from the MAKT table.
v MaterialExtract. Extracts data from the primary material tables, namely, MARA,
MARM, MBEW, MVKE, MARC, MARD, MCHB, MKOL, MLGN, and MLGT.
v SalesExtract. Extracts data from the VBAK and VBAP tables.
v SalesCustomerMasterExtract. Extracts data from the KNA1, KNB1, and KNVV
tables.
When a file is created, its ownership is set to the user ID of the process that
created the file, and its group ownership is set to the primary group of that
user. The file's permissions are then determined by the file creation mode with
the umask bits masked out. Child processes inherit the umask value and user ID
of the parent process.
For example, if the default file creation mode of a file is -rw-rw-rw- (666), and the
umask for the process is 002, the file created has permissions of -rw-rw-r-- (664).
This means the user and the user’s group have read and write permissions but
everyone else only has read permissions.
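The computation is a bitwise mask rather than a subtraction. A minimal Python illustration of the 666/002 example above:

```python
def effective_mode(create_mode, umask):
    # A file's permissions are the creation mode with the umask bits
    # masked out (create_mode & ~umask), matching the 666/002 -> 664
    # example in the text.
    return create_mode & ~umask

mode = effective_mode(0o666, 0o002)  # octal 664: rw-rw-r--
```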
Files created by the IDoc Extract Pack are subject to two sets of permissions,
depending on how they are created:
v The first set belongs to those files that are created by child processes of the
dsidocmgr executable, such as the dsIDocsvr executable.
v The second set belongs to those that are created by IBM WebSphere DataStage
client connections, such as the IDoc stage editor and the WebSphere DataStage
Administrator for SAP.
These files inherit the user ID and group ID of the user logged into the
WebSphere DataStage client and the umask setting of the WebSphere DataStage
server process owner. The umask is not that of the user logged in to the client.
Several other files with the .config suffix reside in this directory. (The client creates
these for internal purposes.)
v The IDoc.SegmentAdminFields.3.config and IDoc.SegmentAdminFields.2.config files
are created the first time a user accesses metadata for an IDoc from a particular
ALE port version.
v The IDocCleanup.config file is created the first time a user accesses the IDoc
Cleanup and Archiving page in the DataStage Administrator for SAP.
The user who first logs into WebSphere DataStage to perform these activities owns
these files, which are subject to the umask setting that the server was started with.
When a connection to SAP is created, a directory with the same name given to the
connection is created containing a subdirectory called IDocTypes. These directories
are owned by the user who creates the connection, with permissions subject to the
umask setting that the server was started with.
Note: Users who want to create a connection to SAP either using the stage editor
or the WebSphere DataStage Administrator for SAP must have write permissions
to the DSSAPConnections.config file and the DSSAPConnections directory.
When the first IDoc type is configured, a file called IDocTypes.config is created in
the IDocTypes directory, so the user who configures this first IDoc type must have
write permissions to the IDocTypes directory. Remember that this directory was
created by the user who defined the connection and may be different than the user
defining the first IDoc type.
When a user configures subsequent IDoc types, that user must have write
permissions to the IDocTypes.config file and permissions to create the directory
location (PSA) to be configured for that type. This directory can be configured
anywhere on the file system, but is created by default in the IDocTypes directory.
In the configured PSA, another file with the .config suffix is created that contains
parameters for this IDoc type. The user who logs into WebSphere DataStage to
configure this IDoc type owns these files, which are subject to the umask setting
that the WebSphere DataStage server was started with.
Any user who later modifies the configuration for an IDoc type must have write
permissions to:
v The IDocTypes.config file
v The directory the IDoc type is to be written to
v The IDoc type config file in that directory
If the modification is to specify a new directory location, the user must have
permission to write to the new directory location.
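The checklist above can be verified from a shell before attempting the change. The sketch below builds a throwaway directory tree that mimics the layout this section describes; the /tmp/demo path, the connection name, and the per-type config file name are hypothetical stand-ins, not product paths.

```shell
# Hypothetical stand-in for the DSSAPConnections layout described above.
base=/tmp/demo/DSSAPConnections/MyConnection/IDocTypes
mkdir -p "$base/MATMAS01"                 # per-type directory (default PSA)
touch "$base/IDocTypes.config"            # created with the first IDoc type
touch "$base/MATMAS01/MATMAS01.config"    # hypothetical per-type config name

# A user may modify the configuration only if all three paths are writable.
for p in "$base/IDocTypes.config" "$base/MATMAS01" \
         "$base/MATMAS01/MATMAS01.config"; do
    [ -w "$p" ] && echo "writable: $p" || echo "NOT writable: $p"
done
```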
Running WebSphere DataStage jobs: An IBM WebSphere DataStage job runs under the
user ID of the user who is logged in to WebSphere DataStage, no matter how the
job is started. If the RFC Server runs the job, it runs under the WebSphere
DataStage user ID configured for that connection and IDoc type, not under the
user ID that owns the RFC Server process (the RFC Server runs as root).
The first user who runs a job outside test mode that reads IDocs of a specified
type from a specified directory location must have permission to create a
bookmark file. The bookmark file is created in the directory from which the
IDocs for that type are read. The file is named for the IDoc type, with the .bmk
suffix, and is owned by the user who creates it, with permissions subject to the
umask setting with which the WebSphere DataStage server was started.
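As a concrete sketch of the naming rule: for IDocs of type MATMAS01 read from a PSA directory, the first non-test run creates MATMAS01.bmk in that directory. The PSA path below is a hypothetical example.

```shell
psa=/tmp/demo_psa          # hypothetical PSA (the directory the IDocs are read from)
idoc_type=MATMAS01         # example IDoc type used throughout this guide
mkdir -p "$psa"

umask 022                  # the file's mode follows the server's umask setting
touch "$psa/$idoc_type.bmk"
ls -l "$psa/$idoc_type.bmk"   # bookmark named for the IDoc type, .bmk suffix
```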
In addition, a file named saprfc.ini exists in all project directories after the first
WebSphere DataStage job is run. This is the case with any WebSphere DataStage
Pack for SAP R/3, such as the ABAP Extract and the Load Pack for SAP BW. The
user who runs the job owns the saprfc.ini file, with permissions subject to the
umask setting with which the WebSphere DataStage server is started. Each
subsequent job run removes and recreates this file, so all the users who want to
run jobs in this project must be able to write to this file. For example, if user A
runs a Load Pack for SAP BW job, and user B subsequently runs an IDoc Extract
job, the job fails unless user B has permission to remove and recreate this file.
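The failure mode in the example follows from how UNIX handles file removal: deleting and recreating saprfc.ini requires write permission on the project directory itself, not only on the file. A minimal sketch, using a hypothetical stand-in for a project directory:

```shell
proj=/tmp/demo_project      # stand-in for a WebSphere DataStage project directory
mkdir -p "$proj"
touch "$proj/saprfc.ini"

# User B can remove and recreate the file only if the directory permits it.
chmod 775 "$proj"           # rwxrwxr-x: group members may remove and recreate
rm -f "$proj/saprfc.ini"    # each job run does the equivalent of this...
touch "$proj/saprfc.ini"    # ...followed by this, so both must succeed
ls -ld "$proj"
```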
Unconfigured IDoc types: When the RFC Server receives an IDoc that it does not
recognize, it creates a directory in the default location for that IDoc type. Since the
RFC Server runs as root, it can override the umask setting. It does this by creating
this directory with read, write, and execute permissions for user, group, and other
so that another user can later configure that location. All IDoc files that the RFC
Server receives and writes to the PSA are owned by root and subject to the umask
settings with which the RFC Server Manager is started.
Configuring IDoc cleanup and archiving: The cleanup scheduling mechanism uses
the crontab command. Consequently, user permissions to modify the cleanup
schedule are subject to that user’s permissions to run the crontab command.
Since you cannot use the crontab command to manipulate the crontab files of other
users, the information in the IDoc Cleanup and Archiving page of the IBM
WebSphere DataStage Administrator for SAP applies only to the user logging in to
the tool.
For example, if user A schedules a cleanup for 12:00 every Saturday, and user B
wants to change the cleanup schedule, user B can add this entry, but it will not
remove the entry for user A. Likewise, when user B logs into the tool, the cleanup
settings do not reflect the settings user A established. WebSphere DataStage has the
same limitation when you log in to the WebSphere DataStage Director Client.
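This per-user behavior follows from how cron stores schedules: each user has a private crontab, and crontab -l lists only the invoking user's entries. User A's 12:00-every-Saturday cleanup would correspond to an entry like the one below; the cleanup script path is a hypothetical placeholder.

```shell
# A crontab line: minute hour day-of-month month day-of-week command.
# Saturday is day-of-week 6; the script path is a hypothetical placeholder.
entry='0 12 * * 6 /opt/IBM/dsadm/idoc_cleanup.sh'

set -f                      # disable globbing so the * fields stay literal
set -- $entry               # split the entry into its fields
echo "minute=$1 hour=$2 day-of-week=$5"
# prints: minute=0 hour=12 day-of-week=6
```

Running `crontab -l` as user B would show only user B's entries, which is why the Administrator for SAP page cannot display user A's schedule.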
Recommendations:
v The umask setting should apply a consistent set of permissions to all users who
log in to IBM WebSphere DataStage for the purposes of administering the IDoc
Extract functionality. In addition, the umask setting should be consistent for all
users running WebSphere DataStage jobs for all SAP R/3 products.
For example, if only one user will ever administer WebSphere DataStage
connections to SAP or run WebSphere DataStage jobs using SAP R/3 stages, the
umask should be 022. If only users from the same group will perform these
functions, the umask should be 002. If you want users to administer WebSphere
DataStage connections to SAP or run WebSphere DataStage jobs using SAP R/3
stages regardless of user ID or group, the umask must be explicitly set to 000.
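The three recommended values can be checked directly in a shell; each line below creates a file and the comment shows which users receive write permission.

```shell
dir=/tmp/umask_demo
mkdir -p "$dir"

umask 022; touch "$dir/owner_only"   # rw-r--r--  only the owner can write
umask 002; touch "$dir/same_group"   # rw-rw-r--  owner and group can write
umask 000; touch "$dir/any_user"     # rw-rw-rw-  every user can write

ls -l "$dir"
```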
v For consistent permissions, the dsidocmgr executable should be invoked with
the same umask as the WebSphere DataStage server. The dsidocd.rc script used to
start the dsidocmgr references the dsenv file, which contains the environment
settings for the WebSphere DataStage server. Therefore, an explicit umask
setting should be placed in this file so that the dsidocmgr executable uses it.
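Because dsenv is a shell script that is sourced before the server (and, via dsidocd.rc, the dsidocmgr) starts, a single umask line in it is enough. The /tmp/demo_dsenv file below is a stand-in for the real dsenv file, whose location depends on your installation.

```shell
# Stand-in for the dsenv file referenced by dsidocd.rc (path is hypothetical).
cat > /tmp/demo_dsenv <<'EOF'
# ... existing WebSphere DataStage environment settings ...
umask 002
EOF

. /tmp/demo_dsenv    # dsidocd.rc sources dsenv like this before starting dsidocmgr
umask                # the explicit setting is now in effect for child processes
```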
v Only one user should change the cleanup settings on the system. This lets the
WebSphere DataStage Administrator for SAP reflect the state of the system
accurately.
Accessing product documentation

A subset of the product documentation is also available online from the product
documentation library at publib.boulder.ibm.com/infocenter/iisinfsv/v8r1/
index.jsp.
PDF file books are available through the IBM Information Server software installer
and the distribution media. A subset of the information center is also available
online and periodically refreshed at www.ibm.com/support/docview.wss?rs=14
&uid=swg27008803.
You can also order IBM publications in hardcopy format online or through your
local IBM representative.
You can send your comments about documentation in the following ways:
v Online reader comment form: www.ibm.com/software/data/rcf/
v E-mail: comments@us.ibm.com
Contacting IBM
You can contact IBM for customer support, software services, product information,
and general information. You can also provide feedback on products and
documentation.
Customer support
For customer support for IBM products and for product download information, go
to the support and downloads site at www.ibm.com/support/us/.
You can open a support request by going to the software support service request
site at www.ibm.com/software/support/probsub.html.
My IBM
You can manage links to IBM Web sites and information that meet your specific
technical support needs by creating an account on the My IBM site at
www.ibm.com/account/us/.
For information about software, IT, and business consulting services, go to the
solutions site at www.ibm.com/businesssolutions/us/en.
General information
Product feedback
You can provide general product feedback through the Consumability Survey at
www.ibm.com/software/data/info/consumability-survey.
Documentation feedback
You can click the feedback link in any topic in the information center to comment
on the information center.
You can also send your comments about PDF file books, the information center, or
any other documentation in the following ways:
v Online reader comment form: www.ibm.com/software/data/rcf/
v E-mail: comments@us.ibm.com
How to read syntax diagrams
The following rules apply to the syntax diagrams that are used in this information:
v Read the syntax diagrams from left to right, from top to bottom, following the
path of the line. The following conventions are used:
– The >>--- symbol indicates the beginning of a syntax diagram.
– The ---> symbol indicates that the syntax diagram is continued on the next
line.
– The >--- symbol indicates that a syntax diagram is continued from the
previous line.
– The --->< symbol indicates the end of a syntax diagram.
v Required items appear on the horizontal line (the main path).

   >>-required_item-----------------------------------------------><

v Optional items appear below the main path.

   >>-required_item--+---------------+----------------------------><
                     '-optional_item-'

  If an optional item appears above the main path, that item has no effect on
  the execution of the syntax element and is used only for readability.

                     .-optional_item-.
   >>-required_item--+---------------+----------------------------><
v If you can choose from two or more items, they appear vertically, in a stack.
  If you must choose one of the items, one item of the stack appears on the
  main path.

   >>-required_item--+-required_choice1-+-------------------------><
                     '-required_choice2-'

  If choosing one of the items is optional, the entire stack appears below the
  main path.

   >>-required_item--+------------------+------------------------><
                     +-optional_choice1-+
                     '-optional_choice2-'

  If one of the items is the default, it appears above the main path, and the
  remaining choices are shown below.

                     .-default_choice---.
   >>-required_item--+------------------+------------------------><
                     +-optional_choice1-+
                     '-optional_choice2-'
v An arrow returning to the left, above the main line, indicates an item that
  can be repeated. If the repeat arrow contains a comma, you must separate
  repeated items with a comma.

                        .-,-------------.
                        V               |
   >>-required_item----repeatable_item-+--------------------------><

  A repeat arrow above a stack indicates that you can repeat the items in the
  stack.
v Sometimes a diagram must be split into fragments. The syntax fragment is
  shown separately from the main syntax diagram, but the contents of the
  fragment should be read as if they are on the main path of the diagram.

   >>-required_item--| fragment-name |----------------------------><

  Fragment-name:

   |--required_item--+---------------+-----------------------------|
                     '-optional_item-'
Product accessibility
You can get information about the accessibility status of IBM products.
The IBM Information Server product modules and user interfaces are not fully
accessible. The installation program installs the following product modules and
components:
v IBM Information Server Business Glossary Anywhere
v IBM Information Server FastTrack
v IBM Metadata Workbench
v IBM WebSphere Business Glossary
v IBM WebSphere DataStage and QualityStage
v IBM WebSphere Information Analyzer
v IBM WebSphere Information Services Director
Accessible documentation
Notices

IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user’s responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not grant you
any license to these patents. You can send license inquiries, in writing, to:
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions; therefore, this statement may not
apply to you.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
IBM Corporation
J46A/G4
555 Bailey Avenue
San Jose, CA 95141-1003 U.S.A.
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement or any equivalent agreement
between us.
All statements regarding IBM’s future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.
This information is for planning purposes only. The information herein is subject to
change before the products described become available.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
Each copy or any portion of these sample programs or any derivative work, must
include a copyright notice as follows:
© (your company name) (year). Portions of this code are derived from IBM Corp.
Sample Programs. © Copyright IBM Corp. _enter the year or years_. All rights
reserved.
If you are viewing this information softcopy, the photographs and color
illustrations may not appear.
Trademarks
IBM trademarks and certain non-IBM trademarks are marked on their first
occurrence in this information with the appropriate symbol.
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of
International Business Machines Corporation in the United States, other countries,
or both. If these and other IBM trademarked terms are marked on their first
occurrence in this information with a trademark symbol (® or ™), these symbols
indicate U.S. registered or common law trademarks owned by IBM at the time this
information was published. Such trademarks may also be registered or common
law trademarks in other countries. A current list of IBM trademarks is available on
the Web at "Copyright and trademark information" at www.ibm.com/legal/
copytrade.shtml.
Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered
trademarks or trademarks of Adobe Systems Incorporated in the United States,
and/or other countries.
Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo,
Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or
registered trademarks of Intel Corporation or its subsidiaries in the United States
and other countries.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of
Microsoft Corporation in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the
United States, other countries, or both.
The United States Postal Service owns the following trademarks: CASS, CASS
Certified, DPV, LACSLink, ZIP, ZIP + 4, ZIP Code, Post Office, Postal Service,
USPS and United States Postal Service. IBM Corporation is a non-exclusive DPV
and LACSLink licensee of the United States Postal Service.