Table of Contents
1. Extraction
1.1 Introduction
1.2 Step-by-step control flow for a successful data extraction with SAP BW
2. Data Extraction from SAP Source Systems
2.1 Introduction
2.1.1 Process
2.1.2 Plug-in for R/3 Systems
2.2 Transfer Method - PSA and IDoc
2.2.1 Introduction
2.2.2 Persistent Staging Area (PSA)
2.2.2.1 Definition
2.2.2.2 Use
2.2.3 IDocs
2.2.3.1 Definition
2.2.3.2 Example
2.2.4 Two Methods to transfer data
2.2.4.1 Differences and advantages
2.2.4.1.1 PSA
2.2.4.2 ALE (data IDoc)
2.3 DataSource
2.3.1 Assigning DataSources to InfoSources and Fields to InfoObjects
2.3.2 Maintaining DataSources
2.3.3 Transferring Business Content DataSources into Active Version
2.3.4 Extraction Structure
2.3.5 Transfer Structure
2.3.6 Replication of DataSources
2.3.6.1 Replication of the Entire Metadata
6. XML Integration
6.1 Introduction
6.2 Benefits of XML Integration
6.2.1 End-to-End Web Business Processes
6.2.2 Open Business Document Exchange over the Internet
6.2.3 XML Solutions for SAP services
6.3 Business Integration with XML
6.3.1 Incorporating XML Standards
6.3.2 SAP's Internet Business Framework
6.3.3 SAP applications with XML
6.3.4 Factors leading to emergence of XML-enabled SAP solutions
6.3.4.1 Changing Business Standards and their adoption
6.3.4.2 Internet Security Standards
6.4 Web-based business solutions
6.4.1 Components of Business Connector
6.5 How to Customize Business Connector (BC)
6.5.1 Add New Users to BC
6.5.2 Add SAP Systems
6.5.3 Add Router Tables
6.5.4 Access functionality in the Business Connector
7. Data Mart Interface
7.1 Introduction
7.2 Special Features
7.3 Data Mart Interface in the Myself System
7.4 Data Mart Interface between Several Systems
7.4.1 Architectures
7.4.1.1 Replicating Architecture
7.4.1.2 Aggregating Architecture
7.4.2 Process Flow
7.4.2.1 In the Source BI
1. Extraction
1.1 Introduction
Extraction programs that read data from extract structures and send it, in
the required format, to the Business Information Warehouse belong to the data staging
mechanisms of the SAP R/3 system as well as of the SAP Strategic Initiative products such as
APO, CRM and SEM. The IDoc structures, or the TRFC data record structures (if the user
chooses to use the PSA - Persistent Staging Area), that are generated from the transfer
structures for the Business Information Warehouse on the source system side are used for
this. These extraction tools are implemented on the source system side during
implementation and support various releases. For non-SAP applications, similar extraction
programs can be implemented with the help of third-party providers. These then collect the
requested data and send it in the required transfer format to the SAP Business
Information Warehouse using BAPIs.
All application data must be described in SAP BW using metadata. The
InfoObjects used for this cover not just transaction and master data but also relationship
sets such as attributes or hierarchies for master data.
The SAP BW extractors carry out a number of functions in the SAP
OLTP system in order to guarantee smooth communication between the SAP OLTP and SAP BW.
In non-SAP systems, these tasks must be carried out by external software that is set up
against the BAPI interfaces of SAP BW. Various businesses have already been certified as
official third-party providers in this field; a complete list can be found on the SAPnet BW
homepage.
Virtually any source of data can be extracted for use in the Business
Information Warehouse.
Benefits of TRFC: an API to access the data stored in the ODS (read and update).
1.2 Step-by-step control flow for a successful data extraction with SAP
BW:
1. An InfoPackage is scheduled for execution at a specific point of time or for a certain
system- or user-defined event.
2. Once the defined point of time is reached, the SAP BW system starts a batch job that
sends a request IDoc to the SAP source system.
3. The request IDoc arrives in the source system and is processed by the IDoc dispatcher,
which calls the BI Service API to process the request.
4. The BI Service API checks the request for technical consistency. Possible error
conditions include specification of DataSources unavailable in the source system and
changes in the DataSource setup or the extraction process that have not yet been
replicated to the SAP BW system.
5. The BI Service API calls the extractor in initialization mode to allow for
extractor-specific initializations before actually starting the extraction process. The
generic extractor, for example, opens an SQL cursor based on the specified DataSource and
selection criteria.
6. The BI Service API calls the extractor in extraction mode. One data package per call is
returned to the BI Service API, and customer exits are called for possible
enhancements. The extractor takes care of splitting the complete result set into data
packages according to the IDoc control parameters. The BI Service API continues to
call the extractor until no more data can be fetched.
7. Finally, the BI Service API sends a status IDoc notifying the target system that
request processing has finished (successfully, or with the errors specified in the status
IDoc).
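The control flow above can be sketched as a small simulation. This is plain Python, not SAP code; the class and function names are illustrative assumptions that only mirror steps 4 to 7 of the list.

```python
# Conceptual sketch of the BW extraction control flow (steps 4-7 above).
# All class and method names are illustrative assumptions, not the SAP API.

class ToyExtractor:
    """Stands in for a source-system extractor (e.g. the generic extractor)."""

    def __init__(self, rows, known_datasources):
        self.rows = rows
        self.known = known_datasources
        self.pos = 0

    def datasource_exists(self, name):
        return name in self.known

    def init(self, selection):
        # Step 5: extractor-specific initialization, e.g. opening an SQL cursor.
        self.pos = 0

    def fetch(self, package_size):
        # Step 6: return one data package per call until the result set is exhausted.
        package = self.rows[self.pos:self.pos + package_size]
        self.pos += len(package)
        return package


def run_request(extractor, datasource, package_size):
    # Step 4: technical consistency check of the request IDoc.
    if not extractor.datasource_exists(datasource):
        return {"status": "error", "packages": 0}
    extractor.init(datasource)                      # step 5
    packages = 0
    while extractor.fetch(package_size):            # step 6: repeated calls
        packages += 1
    return {"status": "ok", "packages": packages}   # step 7: final status IDoc
```

The point of the sketch is the driving loop: the BI Service API keeps calling the extractor, one data package per call, until no more data comes back.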
extracted, and from which tables and in which structure it should read the data. This is how
it fills the different extract structures and DataSources.
The user can run generic data extraction in R/3 source system
application areas such as LIS, CO-PA, FI-SL and HR. LIS, for example, uses
generic extraction to read info structures; DataSources are generated on the basis of these
(individually) defined info structures. In this case we speak of customer-defined DataSources
with generic data extraction from applications.
Regardless of application, the user can generically extract master data
attributes or texts, or transaction data, from all transparent tables, database views or SAP
Query functional areas, or by using a function module. The user can generate user-specific
DataSources here; in this case, we speak of generic DataSources.
The DataSource data for these types is read generically and
transferred into BW. Generic extractors thus allow the extraction of data that
cannot be made available within the framework of Business Content.
The user can find further information in the implementation guide to
data extraction from SAP source systems. The user gets there in the BW Administrator
Workbench (Modeling view) by choosing, from the context menu of the source system
(right mouse click), Customizing Extractors.
2.2.2.1 Definition
The Persistent Staging Area (PSA) is the initial storage area for
requested transaction data, master data attributes, and texts from various source systems
within the Business Information Warehouse.
2.2.2.2 Use
The IDoc interface exchanges business data with an external system. The
IDoc interface consists of the definition of a data structure, along with processing logic for
this data structure. The business data is saved in IDoc format in the IDoc Interface and is
forwarded as IDocs. If an error occurs, exception handling is triggered using SAP tasks.
The agents who are responsible for these tasks and have the relevant authorizations are
defined in the IDoc Interface. An IDoc (Intermediate Document) is the standard SAP format
for electronic data interchange between systems. Different message types (for example,
delivery confirmations or purchase orders) normally represent the different specific
formats, known as IDoc types. Multiple message types with related content can be assigned
to one IDoc type.
2.2.3.2 Example
The IDoc type ORDERS01 transfers the logical message types ORDERS
(purchase order) and ORDRSP (order confirmation). Among other areas, IDocs are used both
in Electronic Data Interchange (EDI) and for data distribution in a system group (ALE).
2.2.4 Two Methods to transfer data
With the IDoc method, IDoc interface technology is used to pack the data
into IDoc containers.
With the PSA transfer method, IDoc containers are not used to send the
data. Instead, the data is transferred directly in the form of a transfer structure.
In both methods, control information (not data) is sent from the source system through the
IDoc interface in the form of info IDocs. This information can be, for example, the number
of data records extracted, or information for the monitor.
2.2.4.1 Differences and advantages:
2.2.4.1.1 PSA
as regards the data transfer into BW. During a metadata upload, the properties of the
DataSource relevant to BW are replicated in BW.
There are four types of DataSource: DataSources for transaction data, for master data
attributes, for master data texts, and for hierarchies.
relevant source system in the source system tree of the BW Administrator Workbench
(Modeling view). Alternatively, the user can go directly to the DataSource maintenance
screen by choosing Maintaining DataSource in Source System from the context menu of the
source system's DataSource overview.
2.3.3 Transferring Business Content DataSources into Active Version
Here the user can add additional fields from the communication
structures available to the extract structure.
In the DataSource maintenance screen, the user can customize the
DataSource by using the following fields: field name, short text, selection, hide field,
inversion (cancellation field or reverse posting), and field only known in customer exit.
This section discusses the delta update mode being used and how the user
controls the data load based on the volume of data. The LO Cockpit supports four
update modes (the delta modes we have already discussed): Serialized V3 Update,
Direct Delta, Queued Delta, and Unserialized V3 Update.
2.5.5 Setup Tables
Direct access to the application tables is not permitted, so setup tables
exist to collect the required data from the application tables. When a load fails, the user
can re-run the load to pull the data from the setup tables, where the data is still
available. Setup tables are used to initialize delta loads and for full loads; they are
part of the LO extraction scenario. With this option the user avoids pulling from R/3
directly, since field values need to be brought together from multiple tables. The user can
view the data in the setup tables. The setup table name is the extract structure name
followed by SETUP: names start with 'MC', followed by the application component ('01',
'02', etc.) and the remaining characters of the extract structure name, and end with SETUP.
Equivalently, it is the communication structure name (on the R/3 side; the user can check
it in LBWE) followed by 'SETUP'.
The setup tables are the base tables for the DataSource used for full
upload, so a full update is possible in LO extractors. In a full update, whatever data is
present in the setup tables (from the last setup run) is sent to BW. But the setup tables
do not receive the delta data from the deltas posted after the init. So if a user's full
update should get ALL data from the source system, the setup tables will need to be deleted
and re-filled.
2.5.6 Serialized V3
With the queued delta update mode, the extraction data (for the relevant
application) is written to an extraction queue (instead of to the update data, as in V3) and
can be transferred to the BW delta queues by an update collective run, as previously
executed during the V3 update. After activating this method, up to 10,000 document
deltas/changes are cumulated into one LUW per DataSource in the BW delta queues.
If the user uses this method, it is necessary to schedule a job to regularly transfer the
data to the delta queues. As always, the simplest way to perform scheduling is via the "Job
control" function in LBWE. SAP recommends scheduling this job hourly during normal
operation after a successful delta initialization, but there is no fixed rule: it depends on
the peculiarities of every specific situation (business volume, reporting needs and so on).
2.5.8 Direct Delta (2nd delta update method in our list)
With this update mode, which we can consider the serialized V3's brother,
the extraction data continues to be written to the update tables using a V3 update module
and is then read and processed by a collective update run (through LBWE). But, as the
name of this method suggests, the unserialized V3 delta disowns the main characteristic
of its brother: data is read in the update collective run without taking the sequence into
account and is then transferred to the BW delta queues. Issues: this mode is only suitable
for data target designs for which the correct sequence of changes is not important (e.g.
material movements), and the V2 update has to be successful.
5. Click on DataSource.
6. Scroll down to reach the appended structure.
7. Double-click on the appended structure ZBIW_KNA1_S1 (or the one the user chose) and
add the fields the user wishes to add to the DataSource.
8. Check, save and activate - this will append it to the extract structure of the DataSource.
9. Go back.
11. Now click on the extract structure BIW_KNA1_S (in my case) and check out the
appended field below the append structure.
1. After appending the structure, we need to fill this append with data. For this:
a. Log on to the source system (R/3).
b. Go to transaction code CMOD.
c. Choose the project the user is using for master data enhancement -
project ZSDBWNEW in my case. (The user needs to create a project if doing an
enhancement on this BW system for the first time.)
d. Click on Enhancement Assignments.
e. Click on Change.
2. Press Continue in case it prompts for a confirmation such as "Enhancement
project already active".
1. Now the final step: we need to regenerate the DataSource and make sure the newly added
attributes (starting with ZZ) are not set to HIDE.
2. Log on to the BW system, go to the Administrator Workbench (RSA1), go to Source
Systems, choose the source system (R/3 in our case), right-click and go to
Customizing for Extractors.
3. Choose Edit DataSources and Application Component Hierarchy.
4. Go to the 0CUSTOMER_ATTR (Customer Number) DataSource, click on CHANGE and
scroll down.
Generic extractors of extraction method type 'F2' with delta process AIE (After Image
via Extractor) use a pull delta model. 'F2': the data is extracted by means of a
function module that, in contrast to 'F1', uses a simplified interface (see the
documentation for data element ROFNAME_S).
Whenever we request delta data from BW, the data is pulled via the delta queue; the delta
LUWs are saved in the repeat delta table, and only the repeat delta LUWs are visible in
RSA7. For normal F1-type extractors, both the delta and the repeat delta LUWs are
visible in RSA7.
In the screenshot below, Total = 2 refers to the number of LUWs in the repeat delta table.
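The repeat-delta behaviour described above can be mimicked with a toy queue. This is a conceptual Python sketch, not the real RSA7 data model; the class and its methods are illustrative assumptions.

```python
# Toy model of the delta/repeat-delta mechanism: a delta request moves the
# pending LUWs into a repeat table, so a failed load in BW can be
# re-requested ("repeat delta") without losing data.

class DeltaQueue:
    def __init__(self):
        self.pending = []   # newly posted delta LUWs
        self.repeat = []    # fallback copy of the last served request

    def post(self, luw):
        """Application side: a document change posts a LUW to the queue."""
        self.pending.append(luw)

    def read_delta(self):
        """Serve the pending LUWs and keep them in the repeat table."""
        luws = self.repeat = self.pending
        self.pending = []
        return luws

    def read_repeat(self):
        """Re-deliver the previous request after a failed update in BW."""
        return self.repeat
```

This also illustrates why the LUW count shown for a queue does not equal the number of changed source records: one served request survives as a single fallback unit.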
We need to first log on to the R/3 system and call transaction SBIW
(Display IMG). Here, navigate as per the screenshot below and delete the data from the
setup tables. There are two setup tables in R/3 which exist and get filled with the
logistics data before it is extracted into BW.
Click on the clock symbol to delete any existing contents from the setup tables, so that
they are free and we can go ahead and configure our logistics DataSources. Select
the application component and execute the deletion of data. Click YES.
In the above screenshot, click on the Update Overview text to reach the following
screen. This will take the user to SM13 for any related table updates; then execute.
This will take the user to transaction RSA7 to view the delta queues, if any.
Click on Run and it will prompt for confirmation of the entries in the extract structure.
Assign a request so that it generates the extract structure successfully.
Now on the main LBWE screen, the user can see a RED status in front of the DataSource.
Assign a request to get to the DataSource screen, where properties related to the fields
can be modified.
After assigning this and coming back, the user will see the status color change to
YELLOW.
Now go to the BW system and replicate the related DataSource from the exact source
system.
Now go back to the R/3 system, click on the ACTIVE parameter under Job
Control, and assign a request.
Now the user will see the status color turn GREEN, and then the user can
assign the update mode as well.
Now, in the BW system, create transformations from the DataSource to the InfoProvider.
Create an InfoPackage and a DTP to load the data.
0ARTICLE_ATTR
0ARTICLE_TEXT
0CUSTOMER_ATTR
0CUSTOMER_TEXT
0PLANT_ATTR
0PLANT_TEXT
0MATL_GROUP_TEXT
0MAT_VEND_ATTR
0VENDOR_ATTR
0VENDOR_TEXT
0SALESORG_ATTR
0SALESORG_TEXT
0SALES_DIST_TEXT
0SALES_GRP_TEXT
0SALES_OFF_TEXT
2.8.2 TRANSACTIONAL DATA
2LIS_02_SCL
2LIS_03_BF
2LIS_03_BX
2LIS_03_UM
2LIS_13_VDITM
2LIS_40_REVAL
In the screen Installation of DataSource from Business Content, the user must mark the
DataSource to be activated and select Activate DataSource.
2.8.3 Delta Process
The delta process is a feature of the extractor and specifies how data is to be transferred.
As a DataSource attribute, it specifies how the DataSource data is passed on to the data
target. From this the user can derive, for example, which data a DataSource is suited for,
and how the update and serialization are to be carried out.
Forming deltas with after, before and reverse images that are updated directly in the delta
queue: an after image shows the status after the change, a before image shows the status
before the change with a negative sign, and the reverse image likewise carries a negative
sign while flagging the record for deletion. This serializes the delta packets. The delta
process controls whether adding or overwriting is permitted; in this case, both adding and
overwriting are permitted. This process supports an update into an ODS object as well as
into an InfoCube. (Technical name of the delta process in the system: ABR.)
The extractor delivers additive deltas that are serialized by request. This serialization is
necessary since the extractor delivers each key once within a request, and otherwise
changes in the non-key fields would not be copied over correctly. It only supports the
addition (aggregation) of fields. It supports an update into an ODS object as well as in an
InfoCube. This delta process is used by LIS DataSources. (Technical name of the delta
process in the system: ADD.)
Forming deltas with after images, which are updated directly in the delta queue. This
serializes data by packet, since the same key can be copied more than once within a
request. It does not support the direct update of data into an InfoCube; an ODS object must
always be interposed when the user updates data into an InfoCube. For numeric key
figures, for example, this process only supports overwriting and not adding, as otherwise
incorrect results would come about. It is used in FI-AP/AR for transferring line items,
while the variant of the process in which the extractor can also send records with a
deletion flag is used in BBP. (Technical name of the delta process in the system:
AIM/AIMD.)
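The three delta processes can be illustrated with toy delta-queue records. This is a conceptual Python sketch; the dictionary layout and the recordmode letters are simplifications for illustration, not the exact queue format.

```python
# Toy records for the three delta processes described above (ABR, ADD, AIM).
# Record layout and recordmode letters are simplified for illustration.

def abr_delta(key, before, after):
    """ABR: a negated before image plus an after image per change."""
    return [
        {"key": key, "recordmode": "X", "amount": -before},  # before image
        {"key": key, "recordmode": "", "amount": after},     # after image
    ]

def add_delta(key, before, after):
    """ADD: one additive record carrying only the difference."""
    return [{"key": key, "amount": after - before}]

def aim_delta(key, after):
    """AIM: after image only - the target must overwrite (ODS), never add."""
    return [{"key": key, "recordmode": "", "amount": after}]

def net_change(records):
    """Sum the amounts, as an additive target would."""
    return sum(r["amount"] for r in records)
```

Note that summing an ABR before/after pair yields the same net change that the single ADD record carries, which is why ABR targets may either add or overwrite, while an AIM after image is only safe to overwrite.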
In contrast with other Business Content and generic DataSources, the
LO DataSources use the concept of setup tables to carry out the initial data extraction
process. The data extractors for HR, FI etc. extract data by directly accessing the
application tables, but LO extractors do not access the application tables directly.
The restructuring/setup tables prevent the BI extractors from directly accessing the
frequently updated, large logistics application tables, and they are only used for the
initialization of data to BI. To load data into the BI system for the first time, the setup
tables have to be filled. The restructuring/setup tables are cluster tables that hold the
respective application data; the BI system extracts the data as a one-time activity for the
initial data load, and the data can be deleted from the setup tables after successful data
extraction into BI to avoid redundant storage.
The setup tables in SAP follow the naming convention <Extraction
structure>SETUP, and the compressed data from the application tables stored there can be
viewed through SE11. Thus the DataSource 2LIS_11_VAITM, having extract structure
MC11VA0ITM, has the setup table MC11VA0ITMSETUP. A job is executed to fill the setup
tables, and the init InfoPackage extracts the initial data into BI.
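The naming convention can be captured in a small helper. This is hypothetical Python for illustration only; SAP does not ship such a function.

```python
# Derive LO setup table names per the convention described above:
# <extract structure> + "SETUP", where the structure name itself starts
# with "MC" followed by the two-digit application component.

def setup_table_name(extract_structure):
    """e.g. MC11VA0ITM -> MC11VA0ITMSETUP (DataSource 2LIS_11_VAITM)."""
    return extract_structure + "SETUP"

def application_component(extract_structure):
    """Read the two-digit application number out of the structure name."""
    if not extract_structure.startswith("MC"):
        raise ValueError("not an LO extract structure name")
    return extract_structure[2:4]
```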
2.9.2 Delta Extraction
The type of delta provided by the LO DataSources is a push delta, i.e. the delta data
records from the respective application are pushed to the delta queue before they are
extracted to BI as part of the delta update. Whether a delta is generated for a document
change is determined by the LO application. This is a very important aspect of the
logistics DataSources, as the very program that updates the application tables for a
transaction also triggers/pushes the data for the information systems, by means of an
update type, which can be a V1 or a V2 update.
2.9.3 Update Modes
WAIT, which waits until the update work process outputs the status of the update. The
program then responds to errors separately.
The V1 updates are processed sequentially in a single update work process, and they
belong to the same database LUW. These updates are executed under the SAP locks of the
transaction that creates the update, thereby ensuring consistency of the data and preventing
simultaneous updates. The most important aspect is that V1 synchronous updates can
never be processed a second time. During the creation of an order, the V1 update writes data
into the application tables and the order gets processed. The V1 updates are carried out as
a priority, in contrast to V2 updates, though the V2 updates are usually also processed
straight away.
2.9.3.2 V2 Update
b. Queued Delta
c. Un-serialized V3 Delta
The following section discusses in detail the concept behind the three different delta update
modes, the scenarios in which they are used, their advantages and disadvantages. The
figure below shows the update modes for delta mechanism from the LO Cockpit.
Writing to the delta queue within the V1 posting process ensures serialization
by document.
Recommended for customers with fewer documents.
Extraction is independent of V2 updating.
No additional monitoring of update data or extraction queue required.
The data from documents posted is completely lost if documents are posted during the reinitialization process.
2.10.2 Queued delta (V1 + V3 updates)
In the queued delta update mode the logistic application pushes the
data from the concerned transaction into an extraction queue by means of the V1 update.
The data is collected in the extraction queue and a scheduled background job transfers the
data in the extraction queue to the delta queue, in a similar manner to the V3 update, with
an update collection run. Depending on the concerned application, up to 10,000 delta
extractions of documents can be aggregated in an LUW in the delta queue for a datasource.
The data pushed by the logistic application can be viewed in the logistics queue overview
function in the SAP ECC 6.0 system (transaction LBWQ). SAP recommends the queued
delta process for customers with a high amount of documents with the collection job for
extraction from extraction queue to be scheduled on an hourly basis.
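The queued delta flow just described (V1 posting into the extraction queue, then a scheduled collection job moving the data into the delta queue) can be sketched in plain Python. The 10,000-record LUW cap mirrors the description above; everything else, including the names, is an illustrative assumption rather than SAP code.

```python
# Conceptual sketch of the queued delta flow: the V1 update appends each
# document change to an extraction queue (what LBWQ would show), and a
# scheduled collection job later bundles the queued records into LUWs in
# the BW delta queue (what RSA7 would show).
from collections import deque

extraction_queue = deque()
delta_queue = []

def v1_post(document_change):
    """V1 update: push the change into the extraction queue (serialized)."""
    extraction_queue.append(document_change)

def collection_run(max_per_luw=10000):
    """Collection job: move queued changes into LUWs in the delta queue."""
    while extraction_queue:
        luw = []
        while extraction_queue and len(luw) < max_per_luw:
            luw.append(extraction_queue.popleft())
        delta_queue.append(luw)
```

The sketch shows why the collection job must be scheduled regularly: nothing reaches the delta queue until `collection_run` executes.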
2.10.2.1 Benefits
When the user needs to perform a delta initialization in the OLTP, thanks to the logic
of this method the document postings (relevant for the involved application) can be
opened again as soon as the execution of the recompilation run (or runs, if several
are running in parallel) ends, that is, when the setup tables are filled and a delta init
request is posted in BW, because the system is able to collect new document data
during the delta init upload too (with one strong recommendation: remember
to avoid the update collective run before all delta init requests have been successfully
updated in the user's BW!).
By writing to the extraction queue within the V1 update process (which is more
burdened than the V3 update), serialization is ensured by using the enqueue
concept, but the collective run clearly performs better than the serialized V3; in
particular, the slow-down due to documents posted in multiple languages does not
apply in this method.
In contrast to direct delta, this process is especially recommended for
customers with a high occurrence of documents (more than 10,000 document
changes - creation, change or deletion - performed each day for the application in
question).
Extraction is independent of V2 update.
In contrast to the V3 collective run, event handling is possible here, because a
definite end for the collective run is identifiable: when the collective run for
an application ends, an event (&MCEX_nn, where nn is the number of the
application) is automatically triggered and can thus be used to start a subsequent
job.
2.10.2.2 Limits
The job uses the report RMBWV311 for collection run and the function module will have
the naming convention MCEX_UPDATE_<Application>, MCEX_UPDATE_11 for sales
orders. In the initialization process, the collection of new document data during the delta
initialization request can reduce the downtime on the restructuring run. The entire
extraction process is independent of the V2 update process.
2.10.3 Un-serialized V3 Update (V1/V2 + V3 Updates)
As the name suggests, this update is un-serialized, i.e. this mode of update does not ensure
serialization of the documents posted to the delta queue. This means that the entries in the
delta queue need not correspond to the actual sequence of updates that happened in the
logistics application. This is important if the data from the DataSource is further updated
to a DSO in overwrite mode, as the last entry would overwrite the previous entries,
resulting in erroneous data. An un-serialized delta update, when used, should therefore
always update data either to an InfoCube or to a DSO with key figures in summation mode.
It is also advisable to avoid the un-serialized V3 update for documents subject to a large
number of changes when it is necessary to track those changes.
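Why ordering matters for overwrite targets, but not for additive ones, can be seen in a tiny sketch (illustrative Python, not SAP code):

```python
# Why un-serialized deltas are unsafe for overwrite (DSO) targets but safe
# for additive ones: apply the same two changes in the wrong order.

def apply_overwrite(target, records):
    """DSO in overwrite mode: the last record for a key wins."""
    for rec in records:
        target[rec["key"]] = rec["value"]
    return target

def apply_additive(target, records):
    """InfoCube / DSO in summation mode: deltas are simply summed."""
    for rec in records:
        target[rec["key"]] = target.get(rec["key"], 0) + rec["delta"]
    return target

# Actual change sequence in the application: the value went 10 -> 40.
in_order = [{"key": "doc1", "value": 10}, {"key": "doc1", "value": 40}]
reordered = list(reversed(in_order))  # what an un-serialized queue may deliver
```

With overwrite semantics the reordered queue ends at the stale value, while additive application gives the same total in any order, which is exactly the recommendation in the paragraph above.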
1. Under transaction SBIW: this step gives the user the option of creating and
maintaining generic DataSources for transaction data, master data attributes or
texts from any kind of transparent tables, database views or SAP Query functional
areas, or via a function module, regardless of application. This enables the user to
use generic extraction of data.
3.
a) Choose an application component to which the DataSource is to be assigned.
b) Enter the descriptive texts. The user can choose these freely.
c) Choose Generic Delta.
4. Specify the delta-specific field and the type of this field, and maintain the
settings for the generic delta: specify a safety interval. The safety interval should be set
so that no document is missed, even if it was not yet stored in the DB table when the
extraction took place.
5. Select the delta type: New Status for Changed Records (i.e. after-image, AIE), which
can be used with an ODS data target, or Additive Delta (i.e. aggregated data records,
ADD). Then choose Save.
6. After step 4, the screen of step 3 comes back. Now choose Save again.
This generates the DataSource. After generating the DataSource, the user will
see the Delta Update flag selected. In systems as of basis release 4.0B, the user can
display the current value of the delta-relevant field in the delta queue.
Delta attributes can be monitored in the delta queue (RSA7). Also note that the LUW count
does not equal the number of changed records in the source table; most of the time it will be
zero. The delta is enabled by data selection logic.
The LUW count can also have the value 1: whenever a delta is extracted, the extracted data is
stored in the delta queue tables to serve as a fallback in case an error occurs during
the update of the BW system. The user will then see a '1' in this field (the extract counts
as one LUW), and the records can even be displayed in a detail screen.
Master data attributes are fields that are used to provide a more
detailed description of master data elements. These attributes are used to display additional
information so results can be better understood. An attribute table can be used by several
InfoCubes. This ensures a higher level of transparency for the user and a more
comprehensive consistency. An example of a master data attribute is the country of the
supplier that goes with the supplier number.
2.12.1.3. Hierarchies
If the user creates time-dependent texts, the text for the key date is
always displayed in the query.
2.12.2.3 Time-dependent Texts and Attributes
If the texts and the attributes are time-dependent, the time intervals do
not have to agree.
2.12.2.4 Language-dependent Texts
In the DataSource maintenance in BI, the user determines which fields from the
DataSource are actually transferred. Data is transferred in the input layer of BI, the
Persistent Staging Area (PSA). In the transformation, the user determines what the
assignment of fields from the DataSource to InfoObjects from BI should look like. Data
transfer processes facilitate the further distribution of the data from the PSA to other
targets. The rules that the user set in the transformation apply here.
2.13.1 Extraction Structure
The user can edit DataSources in the source system, using transaction
SBIW.
2.13.3 Replication of DataSources
In the first step, the D versions are replicated. Here, only the
DataSource header tables of BI Content DataSources are saved in BI as the D version.
Replicating the header tables is a prerequisite for collecting and activating BI Content
In the second step, the A versions are replicated. DataSources (R3TR RSDS) are saved in
the M version in BI with all relevant metadata. In this way, the user avoids generating too
many DDIC objects unnecessarily as long as the DataSource is not yet being used, that is,
as long as a transformation does not yet exist for the DataSource. 3.x DataSources (R3TR
ISFS) are saved in BI in the A version with all the relevant metadata.
13. Assign the DataSource. Map the fields manually; since ZZBOM is an enhancement, it
should be mapped to the InfoObject created by us.
14. Create ODS.
15. Assign the objects corresponding to the Key fields and Data fields.
17. Create InfoSource and InfoPackage. The data is extracted from SAP R/3 when we
schedule the load from the InfoPackage. The data is then loaded into the
ODS, which is the data target for the InfoPackage.
19. The data is loaded into the ODS, where it can be monitored. The data load is
successful.
20. Create InfoCube.
21. Assign the corresponding characteristics, Time Characteristics and Key figures.
22. Create relevant Dimensions to the cube.
23. Activate the InfoCube
and create update rule in order to load the data from the
ODS.
24. Update the ODS; the InfoCube is then successfully loaded. Manage the InfoCube to view
the available data.
26. As per the requirement the General Ledger InfoCube is created and ready for
Reporting. Create a new query using a query designer.
27. Query designer for 0_FI_GL_4.
28. General Ledger Balance Sheet for the period 1995, 1996 and 1997.
The SAP Business Information Warehouse allows the user to analyze data from operative
SAP applications as well as all other business applications and external data sources such
as databases, online services and the Internet.
The SAP Business Information Warehouse enables Online Analytical Processing (OLAP),
which processes information from large amounts of operative and historical data. OLAP
technology enables multi-dimensional analyses from various business perspectives. The
Business Information Warehouse Server for core areas and processes, pre-configured with
Business Content, enables the user to look at information within the entire enterprise. In
selected roles in a company, Business Content offers the information that employees need
to carry out their tasks. As well as roles, Business Content contains other pre-configured
objects such as InfoCubes, queries, key figures, and characteristics, which make BW
implementation easier.
If planning data is maintained in, for example,
Microsoft Excel, this planning data can be loaded into BI so that a plan-actual comparison
can be performed. The data for the flat file can be transferred to BI from a workstation or
from an application server.
1. The user defines a file source system.
2. The user creates a DataSource in BI, defining the metadata for the user file in BI.
3. The user creates an InfoPackage that includes the parameters for data transfer to the
PSA.
Enter the APPLICATION COMPONENT and TABLE NAME and give descriptions. Then click the
SAVE button.
Select the required fields, as with master data; otherwise click Hide. For transaction data,
KEY FIGURES and REFERENCE fields are compulsory, so select some key figures along with
their reference values and click the SAVE button. And then:
-> Go to the BW side and replicate the DataSource by selecting the application
component, which we already assigned on the R/3 side, in the source system tab.
-> Click on Assign InfoSource in the context menu of the replicated DataSource.
-> Assign the transfer rules by selecting the necessary InfoObject related to each extracted
field (press F4 for each InfoObject). Then click Activate.
-> After creating the InfoCube, create update rules for that InfoCube.
-> Go to the InfoSource, select the DataSource and create an InfoPackage.
Source/Input: Create a Flat File of type CSV, with the following structure:
Customer  Language  Land  City     Name
260       EN        US    SEATTLE  JOHN
261       DE        DE    HAMBURG  SAM
262       DE        DE    WALDORF  MICHEAL
Right click on the InfoProvider column header. Select Create InfoArea
In the Attribute tab, enter the fields (non-key fields) that need to be maintained
within the Master Data table.
Ex: for the primary key field Customer Id, the attributes are:
Land
City
On entry of each field, hit [Enter] and in the pop-up box, select the 1st option Create
Attribute as Characteristic.
Choose the Source System File <Flat File source system name>
In the right Window, the user can view the Data Source properties.
General Info tab provides information on the Data Source created
Extraction tab provides details of file to be extracted.
In a DataSource, we need to mention 2 details:
Name of the flat file (ASCII or CSV) to be uploaded
Structure (layout) of the input file, mentioning the fields.
Therefore, make the following changes:
Select the name of the file using F4, searching for the file on the user's local desktop.
Header rows to be ignored = 1 (if the file has a header row with column titles)
= 0 (if the file has no header row)
Select the Data Format as Separated with Separator (for example, CSV).
Enter the Data Separator as ','.
Proposal tab provides a quick view of the data in the flat file to be uploaded
Click on the Load Example tab. The details can be viewed in the below pane.
Fields tab provides details of all fields used to create a data structure, to map with
the fields from Source file.
In the Template Info field, enter the field names (characteristics/attributes).
[Enter] or [Save]
[Copy]: all properties of the characteristics (e.g. data types, length, etc.) are
copied here.
Note: the fields 0LANGU and 0TXTSH are the standard SAP defined characteristics
InfoObjects.
[Save]
[Activate]
Preview tab gives a preview of data on flat file loaded onto a structure.
Click on Read Preview Data
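The Extraction tab settings described above (header rows to be ignored, data separator) amount to the following parsing behaviour, sketched here in Python with the sample customer file (this is an illustration of the settings, not SAP code):

```python
# Minimal sketch: skip the configured number of header rows and split each
# line on the configured separator, as the flat-file DataSource does.
import csv
import io

flat_file = """Customer,Language,Land,City,Name
260,EN,US,SEATTLE,JOHN
261,DE,DE,HAMBURG,SAM
262,DE,DE,WALDORF,MICHEAL
"""

header_rows_to_ignore = 1   # the file has a header row with column titles
separator = ","             # "Separated with Separator" data format

reader = csv.reader(io.StringIO(flat_file), delimiter=separator)
rows = list(reader)[header_rows_to_ignore:]
print(rows[0])   # ['260', 'EN', 'US', 'SEATTLE', 'JOHN']
```

With Header rows to be ignored = 0, the column-title line would be loaded as if it were a data record, which is why the setting must match the file.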
Info Package: An InfoPackage helps to transport data from the Data Source structures into
the PSA tables. (This is similar to transfer of data from Work areas to Internal Tables in
R/3 ABAP).
Right-click on the DataSource and choose Create InfoPackage.
In the next screen, we can view the status of the data loading to PSA.
Click on the [PSA Maintenance] or [Ctrl + F8] to maintain the PSA table with the flat file
values.
In the pop-up, defining the number of records per request, click OK.
This leads to the PSA display screen, with the table and the respective data in them.
Click on [Back] and come back to the DataSource screen.
Transformation: This is a rule that is defined, for mapping the source fields in the
DataSource to the final Target (basically, the Info Providers from where BI extracts the
reports).
In this case, our target is the InfoObject (Master data table), which has been already
created.
Note: Normally, the Transformation can be created on the Data Source or the Target. Here,
we are creating a Transformation on the DataSource.
Right-click on the DataSource -> Create Transformation.
In the pop-up,
For the Target of Transformation,
Enter Object Type: InfoObject (this is our target in this scenario)
Enter subtype of object: Attributes (as we are considering master data only,
not transactional data)
Enter the name of the InfoObject we created.
For the Source of Transformation,
Enter Object Type: DataSource (this is our source structure)
Enter name of Data Source used
Enter the Source System used.
Click on OK
Note:
While uploading Master Data, select Sub-type of Object : Attributes (IP1 for DS1)
While uploading Text desc. For this Master Data, select Subtype of Object : Texts (IP2
for DS2)
In the next screen pop-up, we get the no. of proposals (rules) generated to map source
fields with target fields. Click on Ok.
On the next window we can see a graphical representation of the mapping of Source fields
to Target fields.
Save and Activate, if the mapping is done correctly.
In the pop-up,
The DTP name is described as : < DataSrc/SourceSys -> InfoObject >
DTP type : standard
Target of DTP : Object Type : InfoObject (in this case)
Subtype of Object : Attribute
Name : < name of InfoObject >
Source of DTP: Object Type : Data Source
DataSource : <name>
Source System: Flat file
Click on OK.
Extraction tab,
Select Extraction Mode as Full.
[Save]
[Activate]
Observations:
In the above master data table display,
The fields Customer Name, Customer Land, and Customer City have been created and
loaded with data as per the file.
These 3 fields have been created as a result of checking With Master Data for the
characteristic. The contents of the master data are stored in the newly created transparent
table /BIC/P<char. name>.
The fields Language and Description have been created and loaded with data as per the file.
These 2 fields have been created by default, as a result of checking With Text Data
for the characteristic. The contents of the text data are stored in the newly created
transparent table /BIC/T<char. name>.
4. DB Connect
4.1 Introduction
The DB Connect enhancements to the database interface allow the user
to transfer data straight into BI from the database tables or views of external
applications. The user can use tables and views in database management systems
that are supported by SAP to transfer data. The user uses DataSources to make the
data known to BI. The data is processed in BI in the same way as data from all other
sources.
SAP DB Connect only supports certain database management systems (DBMS).
There are 2 types of classification: the BI DBMS and the source DBMS. The
main point is that both of these DBMS are supported on their respective operating system
versions only if SAP has released a DBSL. If not, they don't meet the requirements and
hence can't perform DB Connect. In this process we use a DataSource to make the data
available to BI and transfer the data to the respective InfoProviders defined in the BI system.
Further, using the usual data acquisition process, we transfer data from the DBs to the BI
system. Using this, SAP provides options for extracting data from external systems; in
addition to extracting data using the standard connection, the user can extract data from
tables/views in database management systems (DBMS).
Now, Under DB Connect, we can see the name of our Source System (MS SQL DB
Connect)
The logical DB Connect name is MSSQL
The below screen shot describes how to perform extraction or loading using a
Table/View. As the standard adapter is Database Table (by default), we can
specify the Table/View here
ID
Description
Since the DataSource has to be activated before it is loaded, we ACTIVATE it
once.
After activation, the data records (4) are displayed: Eastern, Western, Northern and
Southern.
Now, we need to create an Info Area to create an Info provider (like Info Cube)
After creating the info cube we check for the data in the PSA by Manage the PSA
This can be also done using the Key controls (Ctrl + Shift + F6)
Status
Data Packet
Data records
REGION ID
REGION Description
The table/view CUSTOMERS is now chosen for extraction. In the next tab we
have PROPOSAL, which describes all the database fields; here we have to specify
the DataSource fields, types and lengths.
Now, go to RSA1 -> InfoObjects -> InfoObject (Test) -> Create InfoObject Catalog.
We now create 2 InfoObjects and pass the Region ID and Region Description to the 2
objects.
Now, these are the 2 InfoObjects created under the InfoObject catalog test2:
1. Region (REGION2)
2. Region IDs (REG_ID)
We create Characteristic as Info Provider for the Master Data loading in the Info
Provider section Insert Characteristic as Info Provider
We now choose the Source System after this MSSQL MS SQL DB CONNECT
After checking the Transformation mappings on the Region ID, we now perform a
DTP Creation on the same Region ID (Attribute)
We choose the Target system (default) as Info object Region ID REG_ID &
the Source Type as Data source with the Source System MSSQL
After this step, we proceed with creating an Info package IP_DS_TEDDY which
has the source system as MSSQL. Further we start the scheduling of the Info
package. Once the info package has been triggered we can go to Maintain PSA &
monitor the status of data in PSA
Further, we EXECUTE the DTP, and we can monitor the transfer of data from the PSA to the
InfoCube.
In the source system tree in Data Warehousing Workbench, choose Create in the
Context menu for the UD Connect folder.
Select the required RFC Destination for the J2EE Engine.
Specify a logical system name.
a) Check the mapping and change the proposed mapping as required. Assign the
non-assigned source object elements to free DataSource fields. The user cannot
map elements to fields if the types are incompatible. If this happens, the system
displays an error message.
b) Choose Copy to Field List to select the fields that the user wants to transfer to
the field list for the DataSource. All fields are selected by default.
c) Under Transfer, specify the decision-relevant DataSource fields that the user
wants to be available for extraction and transferred to BI.
d) If required, change the values for the key fields of the source. These fields are
generated as a secondary index in the PSA. This is important in ensuring good
performance for data transfer process selections, in particular with semantic
grouping.
e) If required, change the data type for a field.
f) Specify whether the source provides the data in the internal or external format.
If the user chooses an external format, ensure that the output length of the
field (external length) is correct. Change the entries if required.
g) If required, specify a conversion routine that converts data from an external
format to an internal format.
h) Select the fields that the user wants to be able to set selection criteria for when
scheduling a data request using an InfoPackage. Data for this type of field is
transferred in accordance with the selection criteria specified in the
InfoPackage.
i) Choose the selection options (such as EQ, BT) that the user wants to be
available for selection in the InfoPackage.
Under Field Type, specify whether the data to be selected is language-dependent or
time-dependent, as required.
If the user did not transfer the field list from a proposal, the user can define the fields of the
DataSource directly. Choose Insert Row and enter a field name. The user can specify
InfoObjects in order to define the DataSource fields. Under Template InfoObject, specify
InfoObjects for the fields of the DataSource. This allows the user to transfer the technical
properties of the InfoObjects to the DataSource field.
Entering InfoObjects here does not equate to assigning them to DataSource fields.
Assignments are made in the transformation. When the user defines the transformation, the
system proposes the InfoObjects entered here as InfoObjects that the user might
want to assign to a field.
In order to keep the data mass that is generated during UD Connect access to a JDBC data
source as small as possible, each select statement generated by the JDBC adapter receives
a group by clause that uses all recognized characteristics. The recognized key figures are
aggregated. What is recognized as a key figure or characteristic and which methods are
used for aggregation depends on the properties of the associated InfoObjects modeled in
SAP BW for this access.
The amount of extracted data is not restricted. To prevent exceeding the storage limitations
of the J2EE server, packages with around 6,000 records are transferred to the calling ABAP
module.
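The two behaviours just described, the generated GROUP BY over all recognized characteristics and the transfer in packages of around 6,000 records, can be sketched as follows (table name, field names and record counts are illustrative, not taken from SAP):

```python
# Illustrative sketch: the JDBC adapter's generated SELECT groups by all
# recognized characteristics and aggregates the key figures; results are
# handed to the calling ABAP module in packages of around 6,000 records.

def build_select(table, characteristics, key_figures):
    cols = ", ".join(characteristics + [f"SUM({kf}) AS {kf}" for kf in key_figures])
    return f"SELECT {cols} FROM {table} GROUP BY {', '.join(characteristics)}"

def packages(rows, size=6000):
    # Chunk the result set so no single transfer exceeds the package size.
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

sql = build_select("SALES", ["REGION", "CUSTOMER"], ["REVENUE"])
print(sql)
# SELECT REGION, CUSTOMER, SUM(REVENUE) AS REVENUE FROM SALES GROUP BY REGION, CUSTOMER

chunks = list(packages(list(range(13000)), size=6000))
print([len(c) for c in chunks])   # [6000, 6000, 1000]
```

Which columns count as characteristics and which as key figures follows from the properties of the associated InfoObjects, as stated above.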
5.5.2 Use of Multiple Database Objects as UD Connect Source Object
Currently only one database object (table, view) can be used for a UD Connect source. The
JDBC scenario does not support joins. However, if multiple objects are to be used in the
form of a join, a database view should be created that provides this join, and this view is
then used as the UD Connect source object. The view offers further benefits:
The database user selected from SAP BW for access is only permitted to access
these objects.
Using the view, the user can run type conversions that cannot be made by the
adapter (generation of the ABAP data type DATS, TIMS etc.)
The BI JDBC Connector is based on the J2EE Connector Architecture (JCA).
The user can also use the BI JDBC Connector to make these data sources available in SAP
BI systems using UD Connect. The user can also create systems in the portal that are based
on this connector.
The connector adds the following functionality to existing JDBC drivers:
Standardized connection management that is integrated into user management in the
portal
A standardized metadata service, provided by the implementation of JMI capabilities
based on CWM
A query model independent of the SQL dialect in the underlying data source
The JDBC Connector implements the BI Java SDK's IBIRelational interface.
5.6.1 Deploy the user's data source's JDBC driver to the server:
Generation requires the customer-specific installation number, and the key can be generated
online. The system administrator knows this procedure and should be included in the
procurement. The key has to be procured and entered exactly once per user and system.
Because the generated objects were created in the SAP namespace, an object key is required.
Like the developer key, this is customer-specific and can also be procured online. The key
is to be entered exactly once per object and system. Afterwards, the object is released for
further changes as well. No further effort is required if there are repeated changes to the
field list or similar.
6. XML Integration
6.1 Introduction
SAP's Business Connector (SAP BC) is a business integration
tool that allows SAP customers to communicate with other SAP customers or SAP
marketplaces. The Business Connector allows integration with R/3 via open and
non-proprietary technology. This middleware component uses the Internet as communication
platform and XML/HTML as data format, thus seamlessly integrating different IT
architectures with R/3.
SAP BC integrates the RFC server and client and provides an
XML layer over R/3 functionality. That is, it comes with an XML automation that converts
SAP's RFC format into XML and supports both synchronous and asynchronous RFC
protocols, so that no SAP R/3 automation is required at the receiving end.
Also, SAP BC has built-in integration support for SAP's specification of the IDoc-XML and
RFC-XML standards. Hence, whenever it deals with messages conforming to these
standards, it supports the integration.
Figure 1
3. If the user wants to transmit data from an SAP system to the BC, the same user needs to
exist in both systems. For creating users in the BC, click Security > Users and
Groups > Add and Remove Users.
4. Enter the desired SAP user, assign the corresponding password and click Create
Users. This creates a user. Mark the just-created user in the Groups box section and
make sure that the Select Group = "Administrators". Now, add the user to the
Administrators group by clicking (below the right selection box). Click Save Changes to
save the settings.
6.5.2 Add SAP Systems
5. All the proposed SAP system(s) should be added within the Business Connector. To
achieve this, click Adapters > SAP which opens a new window. In the new window click
SAP > SAP Servers > Add SAP Server.
6. Enter the necessary information for the SAP server (System, Login Defaults, Server
Logon, and Load Balancing. "Save" (as illustrated in screen 1).
7. All incoming calls to the Business Connector are scrutinized, and routing information
about the sender, receiver and message type is extracted. Using the rules, it finds the
recipient and the format that should be used to send the call to this particular recipient.
Clicking Adapters > Routing opens a new window. Enter information like Sender,
Receiver and MsgType. With "Add Rule" the rule is created; other details like
"Transport" and "Transport Parameters" must be provided.
8. After entering all the details, click Save and enable the rule by clicking No under the
"Enabled?" column.
6.5.4 Access functionality in the Business Connector
9. Post a document containing the XML format of the Idoc or BAPI/RFC-call to the
Business Connector service.
For example: Use the following statements to post a document to the /sap/InboundIdoc
service of the Business Connector.
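The statements themselves are not reproduced in this copy. As an illustration only, posting an IDoc-XML document to such a service could look like the following sketch; host, port, user, password and the payload are placeholders, not values from the original:

```python
# Hedged sketch of an HTTP POST of an IDoc-XML document to a Business
# Connector service. All endpoint details and credentials are invented.
import base64
import urllib.request

idoc_xml = b"<?xml version='1.0'?><ORDERS05><IDOC BEGIN='1'>...</IDOC></ORDERS05>"

req = urllib.request.Request(
    "http://bc-host:5555/sap/InboundIdoc",   # placeholder BC host and service
    data=idoc_xml,
    headers={
        "Content-Type": "text/xml",
        # The BC expects the same user that exists in the SAP system (step 3/4).
        "Authorization": "Basic " + base64.b64encode(b"SAPUSER:secret").decode(),
    },
    method="POST",
)
# response = urllib.request.urlopen(req)   # commented out: requires a live BC
```

The request is deliberately not sent here; the sketch only shows the shape of such a post (XML body, text/xml content type, basic authentication against the BC user).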
The user cannot use the SAP Business Connector in a web application (e-Commerce,
Procurement, etc.), but can use it to facilitate a business-to-business transaction in an
EDI-like manner. For example: the user can send an XML document to the vendor, and
the vendor sends an XML packet back.
A BI system defines itself as the source system for another BI system by:
Providing metadata
Providing transaction data and master data
An export DataSource is needed to transfer data from a source BI into a target BI. Export
DataSources for InfoCubes and DataStore objects contain all the characteristics and key
figures of the InfoProvider. Export DataSources for master data contain the metadata for all
attributes, texts, and hierarchies for an InfoObject.
If the user selects this architecture, the data for a BI server is available
as source data and can be updated in further target BI Systems.
If required, define the source system. This is only necessary if the source system
has not yet been created in the target BI.
Create the InfoProvider into which the data is to be loaded.
Replicate the metadata of the export DataSource from the source BI into
the target BI. Using the source system tree, the user can replicate all the
metadata of a source system, or only the metadata of the DataSource.
Activate the DataSource.
Create an InfoPackage at the DataSource using the context menu. The
source BI system is specified as the source by default.
Using the context menu for the DataSource, create a data transfer process with
the InfoProvider into which the data is to be loaded as the target. A default
transformation is created at the same time.
The complete data flow is displayed in the InfoProvider tree under the
InfoProvider.
Schedule the InfoPackage and the data transfer process. We recommend that the
user always use process chains for loading processes.
7.4.3 Generating Export DataSources for InfoProviders
1. Select the InfoObject tree in the Data Warehousing Workbench in the source BI
system.
2. Generate the export DataSource from the context menu of the InfoObject. To
do this, choose Additional Functions -> Generate Export DataSource.
The technical name of the export DataSource is
8******M for attributes (M stands for master data)
8******T for texts
8******H for hierarchies.
(The asterisks (*) stand for the source InfoObject).
When the user creates an InfoObject or a master data InfoSource in the source BI system,
the user must therefore make sure that the length of the technical name of each object is no
longer than 28 characters.
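The naming convention and the resulting 28-character limit can be sketched as follows (the 30-character DataSource name length assumed here is what explains the limit: '8' plus a one-character suffix leaves 28 characters for the InfoObject name; the InfoObject names used are examples):

```python
# Sketch of the export DataSource naming rule quoted above:
# '8' + <source InfoObject name> + suffix (M = attributes, T = texts,
# H = hierarchies).

SUFFIX = {"attributes": "M", "texts": "T", "hierarchies": "H"}

def export_datasource_name(infoobject, kind):
    if len(infoobject) > 28:
        # 1 ('8') + 28 (InfoObject) + 1 (suffix) = 30-character DataSource name
        raise ValueError("technical name longer than 28 characters")
    return "8" + infoobject + SUFFIX[kind]

print(export_datasource_name("0COSTCENTER", "attributes"))   # 80COSTCENTERM
print(export_datasource_name("0MATERIAL", "texts"))          # 80MATERIALT
```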
The data transfer is the same as the data transfer in an SAP system.
The system reads the data, taking into account the specified dimension-specific selections,
from the fact tables of the delivering BI system.
7.4.5.1 Delta Process
Using the data mart interface, the user can transfer data by full upload as well as by delta
requests. A distinction is made between InfoCubes and DataStore objects.
The InfoCube that is used as an export DataSource is first initialized, meaning that
the current status is transferred into the target BI system. When the next upload is
performed, only those requests are transferred that have come in since initialization.
Different target systems can also be filled like this. Only those requests are
transferred that have been rolled up successfully in the aggregates. If no aggregates
are used, only those requests are transferred that are set to Qualitative OK in the
InfoCube administration.
For DataStore objects, the requests in the change log of the DataStore object are
used as the basis for determining the delta. Only the change log requests that have
arisen from reactivating the DataStore object data are transferred.
7.4.5.2 Restriction
Only one selection can be made for each target system for the
delta. Suppose the user first makes a selection using cost center 1 and loads deltas for this
selection. Later on, the user also decides to load a delta for cost center 2 in parallel to the
cost center 1 delta. The delta can then only be requested in full for both cost centers,
meaning that it is impossible to execute deltas separately for the different selections.
7.4.6 Transferring Texts and Hierarchies for the Data Mart Interface
7. Virtual InfoCubes
7.1 Introduction
Virtual InfoCubes only store data virtually, not physically, and are made available
to reporting as InfoProviders. A virtual remote cube can be used to 'act like an InfoCube,
feel like an InfoCube, but is not an InfoCube'. The user can create this virtual InfoProvider
with the same structure as an InfoCube, with dimensions, characteristics and key
figures. But the actual generation of data for this cube depends on the user's implementation.
The user can choose to create a virtual provider with 'direct access' to
R/3. This means that upon viewing the data in this virtual provider, a remote function call
is made directly to R/3 to get the values on the fly. The user can also choose to get the
values from a function module of the user's choosing, so the implementation details are
really up to the user.
SAP RemoteCube
RemoteCube
Virtual InfoCube with Services
7.3.1 SAP RemoteCube
An SAP RemoteCube is a RemoteCube that allows the definition of queries with direct
access to transaction data in other SAP systems.
Use SAP RemoteCubes if:
The user needs very up-to-date data from an SAP source system
The user only accesses a small amount of data from time to time
Only a few users execute queries simultaneously on the database.
The SAP RemoteCube can be used in reporting in cases where it does not differ from other
InfoProviders.
1. Choose the InfoSource tree or InfoProvider tree in the Administrator Workbench
under Modeling.
2. In the context menu, choose Create InfoCube.
3. Select SAP RemoteCube as InfoCube type and enter the InfoSource.
4. The user can define whether a unique source system is assigned to the InfoCube using
an indicator. Otherwise the user must select the source system in the relevant query. In this
case the characteristic (0LOGSYS) is added to the InfoCube definition.
5. In the next screen the user can check the defined SAP RemoteCube and adjust it
if necessary before activation.
6. The user can then assign source systems to the SAP RemoteCube. To do this,
choose Assign Source System from the context menu of the SAP RemoteCube. In the
dialog box that follows, choose one or several source systems and select Save
Assignments. The source system assignments are local to the system and are not
transported.
7.3.1.2 Structure
BW Service API functions (contained in the SAP R/3 plug-in) are installed.
The Release status of the source system is at least 4.0B
In BW, a source system ID has been created for the source system
DataSources from the source system that are released for direct access are assigned
to the InfoSource of the SAP RemoteCube. There are active transfer rules for these
combinations.
7.3.2.1 Structure
When reporting using a RemoteCube, the Data Manager, instead of using a Basic Cube
filled with data, calls the RemoteCube BAPI and transfers the parameters.
Selection
Characteristics
Key figures
As a result, the external system transfers the requested data to the OLAP Processor.
7.3.2.2 Integration
To report using a RemoteCube, the user has to carry out the following steps:
1. In BW, create a source system for the external system that the user wants to use.
2. Define the required InfoObjects.
3. Load the master data:
Create a master data InfoSource for each characteristic
Load texts and attributes
4. Define the RemoteCube.
5. Define the queries based on the RemoteCube.
7.3.3.1 Structure
When the user creates an InfoCube the user can specify the type. If the
user chooses Virtual InfoCube with Services as the type for the user InfoCube, an extra
Detail pushbutton appears on the interface. This pushbutton opens an additional dialog box,
in which the user defines the services.
1. Enter the name of the function module that the user wants to use as the data
source for the virtual InfoCube. There are different default variants for the
interface of this function module. One method for defining the correct variant,
together with the description of the interfaces, is given at the end of this
documentation.
2. The next step is to select options for converting or simplifying the selection
conditions. The user does this by selecting the Convert Restrictions option. These
conversions only change the transfer table passed to the user-defined function
module. The result of the query is not changed, because any restrictions that are
not processed by the function module are checked later in the OLAP processor.
Options:
No restrictions: if this option is selected, no restrictions are passed to the
InfoCube.
3. Pack RFC: This option packs the parameter tables in BAPI format before the
function module is called, and unpacks the data table returned by the function
module after the call. Since this option is only useful in conjunction with a
remote function call, the user has to define a logical system that determines the
target system for the remote function call if this option is selected.
4. SID support: If the data source of the function module can process SIDs, the
user should select this option.
If this is not possible, the characteristic values are read from the data source and
the data manager determines the SIDs dynamically. In this case, wherever
possible, restrictions that are applied to SID values are converted automatically
into the corresponding restrictions for the characteristic values.
5. With navigation attributes: If this option is selected, navigation attributes and
restrictions applied to navigation attributes are passed to the function module.
If this option is not selected, the navigation attributes are read in the data
manager once the user-defined function module has been executed. In this case, the
user needs to have selected, in the query, the characteristics that correspond to
these attributes. Restrictions applied to the navigation attributes are not passed
to the function module in this case.
6. Internal format (key figures): In SAP systems, a separate internal format is
often used to store currency key figures. The value in this internal format differs
from the correct value in that the decimal places are shifted. The currency tables
are used to determine the correct value for this internal representation.
If this option is selected, the OLAP processor incorporates this conversion during
the calculation.
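As a concrete illustration of the shifted-decimal internal format: amounts are stored as if every currency had two decimal places, and the currency customizing supplies the real number of decimals. The following sketch assumes a simplified, hand-picked decimals table rather than the actual SAP currency tables:

```python
# Sketch of the internal-to-external currency conversion described above.
# currency_decimals is an assumed, simplified stand-in for the SAP
# currency customizing that records decimal places per currency.
currency_decimals = {"EUR": 2, "JPY": 0, "KWD": 3}

def internal_to_external(amount, currency):
    """Shift the decimal point to obtain the correct external value."""
    decimals = currency_decimals.get(currency, 2)
    # Internal storage assumes 2 decimals, so shift by the difference.
    return amount * 10 ** (2 - decimals)

# A Japanese yen amount stored internally as 100.00 represents 10000 JPY,
# because JPY has no decimal places.
external = internal_to_external(100.00, "JPY")  # 10000.0
```

This is the conversion the OLAP processor incorporates into the calculation when the Internal format option is selected.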
7.3.3.2 Dependencies
If the user uses a remote function call, SID support must be switched
off and the hierarchy restrictions must be expanded.
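A minimal consistency check for this dependency might look like the following sketch; the function and parameter names are invented for illustration and do not correspond to any SAP API:

```python
# Hypothetical validation of the dependency described above: with a
# remote function call (Pack RFC), SID support must be switched off
# and hierarchy restrictions must be expanded.
def check_options(pack_rfc, sid_support, expand_hierarchy_restrictions):
    """Return a list of configuration errors (empty means consistent)."""
    errors = []
    if pack_rfc and sid_support:
        errors.append("SID support must be switched off for remote calls")
    if pack_rfc and not expand_hierarchy_restrictions:
        errors.append("hierarchy restrictions must be expanded for remote calls")
    return errors

# An inconsistent combination produces two errors.
errors = check_options(pack_rfc=True, sid_support=True,
                       expand_hierarchy_restrictions=False)
```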
Variant 1:
Variant 2:
Variant 3:
SAP advises against using this interface. The interface is intended for internal
use only, and only half of it is given here.
Note that SAP may change the structures used in the interface.
7.3.3.3 Method for determining the correct variant for the interface
The following list describes the procedure for determining the correct interface
for the user-defined function module. Go through the list from top to bottom.
The first appropriate case is the variant that the user should use:
If Pack RFC is activated: Variant 1
If SID Support is deactivated: Variant 2