Framework For Parallel Processing For Customer Reports
Dietmar-Hopp-Allee 16
D-69190 Walldorf
Status: customer published
Date: 11.11.2008
Version: 2.1
Best Practice
Framework for Parallel Processing for Customer Reports
Table of Contents
1 Management Summary
1.1 Goal of Using This Service
1.2 Staff and Skills Requirements
1.3 System Requirements
1.4 Duration and Timing
1.5 Examples
2 Best Practice – Implementation Guide
2.1 Design of Framework for Parallel Processing
2.1.1 Architectural Context
2.1.2 Events
2.2 Step-by-Step Instructions
2.2.1 Application Type and Object Type
2.2.2 Logical Definitions
2.2.2.1 Package Formation Category
2.2.2.2 Restart Procedure
2.2.3 Global Application Parameter
2.2.4 Events and Callback Modules
2.2.4.1 Event 0205 – Create Package Templates
2.2.4.2 Event 1000 – Initialize Package
2.2.4.3 Event 1100 – Selection per Range
2.2.4.4 Event 1200 – Selection for Known Object List
2.2.4.5 Event 1300 – Edit Objects
2.2.5 Start Report
2.2.6 Application Log Handling
2.2.7 Job Log Message Handling
2.2.8 Collection of Statistical Information at the End of a Mass Run
2.3 Configuration (Customizing)
2.4 Enhancement Options
2.5 Complex Scenarios
2.5.1 Multi-Step Processing
2.5.2 Special Parameters for Package Formation
2.5.3 Object Locks – Dependencies Between Application Types
2.5.3.1 Set Locks – BANK_MAP_PP_LOCKS_SET
2.5.3.2 Read Locks – BANK_MAP_PP_LOCKS_GET_MLT
2.5.3.3 Delete Locks – BANK_MAP_PP_LOCKS_RELEASE
2.6 Job Distribution
2.6.1 Degree of Parallelization
2.6.2 Distribution
2.7 Monitoring of FPP Enabled Reports
2.7.1 MassMan in Satellite System and Central System
2.7.2 MassMan – Start Screen
2.7.3 MassMan – Displayed Information
3 Further Information
3.1 Documentation of Central Interfaces
3.2 Debugging
1 Management Summary
1.1 Goal of Using This Service
In the Banking Services environment, the Framework for Parallel Processing (FPP) enables applications to process mass data efficiently. Customer-developed reports often scale poorly and allocate memory inefficiently, which has a significant impact on the general performance of the report and, in some cases, on project success.
This document describes how to set up parallelization and package sizes for performance-critical customer-developed mass processes as part of the Banking Services solution, by using the existing infrastructure and
logic of FPP, which has already proven its robustness in the SAP standard processes. The tool enables
parallel execution (several application servers) of runtime-intensive mass processing. The application
allocates the data to be processed to packages that are transferred to the tool. The tool administers these
packages and controls their processing in parallel background jobs. The execution of these jobs is controlled
by basis job scheduling functionality.
The performance improves when data is processed in many batch processes with a specified package size.
The split into packaged processing increases the scalability of the customer reports as well because it
prevents huge internal tables and memory overflow by design.
Enabling customer reports for mass data processing on FPP in a banking environment improves scalability and makes optimal use of CPU resources, which leads to better maintainability and hence a lower TCO.
The document provides guidance on how IT departments can take advantage of FPP for a banking solution. Implementing Run SAP recommendations means applying SAP Best Practices for running a Banking solution more efficiently. This paper focuses on best practices for processing mass volumes and can be used as a developer handbook for enabling customer reports to run with optimal performance.
Performance improves by processing the data in several processes and packages. The scalability of the
mass processing is ensured by the package processing of FPP, which reduces the likelihood of memory
dumps and huge internal tables. For each package, the number of objects defined by the package size is picked up from the worklist and processed in a uniform manner. The framework provides all control functions, but the application must provide the business logic.
This document describes in detail the design of FPP and explains the adjustments that are necessary to
incorporate a customer program into the framework. Using FPP for customer reports gives you the advantage of the monitoring capabilities of SAP Solution Manager and FPP, which are already in place for monitoring SAP standard reports.
1.2 Staff and Skills Requirements
To implement FPP in customer reports, you need one experienced consultant with developer skills.
1.3 System Requirements
FPP is part of the SAP application basis SAP_ABA as of SAP NetWeaver release 620 or higher and can
therefore also be used for banking and non-banking applications.
The parallel processing tool is already incorporated in all mass processing reports of the Banking Services
environment.
1.4 Duration and Timing
Duration and timing depend on the experience of the developer. For the first implementation, the effort will be somewhat higher. Once the first implementation has been completed successfully, you can implement FPP in less than ten days per report, depending on the complexity of the report.
1.5 Examples
A demo application is delivered with FPP. It displays an example of an FPP implementation and the functions
of the callback modules. It is a fictitious application that shows the integration of FPP. It consists of the
following elements:
Function group BANK_API_PP_DEMO contains the callback modules for the FPP events
RBANK_PP_DEMO_START – Report for starting the demo application
RBANK_PP_DEMO_RESTART – Report for restarting a canceled run of the demo application
RBANK_PP_DEMO_GENERATE_DATA – Report for generating test data
RBANK_PP_DEMO_CREATE_PACKMAN – Report for creating a package administrator
2 Best Practice – Implementation Guide
2.1 Design of Framework for Parallel Processing
This chapter provides a description of how to integrate and use the tool.
The basic principle of Framework for Parallel Processing is to split up processing into individual processing
steps or events in which business or application-specific logic is run. The application prepares this logic in
function modules (“callback modules”).
From a technical point of view, each event is optional. The absence of an event does not lead to an error.
Section 2.1.2 describes a suggested combination of events.
The prerequisite for the implementation of FPP is that the data to be processed is stored in the database.
This does not apply for application processes that create data.
2.1.1 Architectural Context
The application is identified by FPP using a unique application type, which must be entered in Customizing
and transferred when the framework is started. You have to enter the relevant callback modules for the
application type in Customizing.
To start FPP from the application, call up the function module BANK_MAP_PP_START.
From the point of view of the main process, the parallel process runs asynchronously in batch jobs. This
means that:
It is usually possible to transfer data to this process using database tables or persistent saves only.
The callback functions assigned to this area cannot access any data that the application stored previously
in global areas.
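To make the start call concrete, below is a minimal sketch of a customer start report. The function module name BANK_MAP_PP_START comes from this document; the parameter and exception names shown are assumptions and must be checked against the actual module interface in your system.

```abap
REPORT zrbank_pp_custom_start.

PARAMETERS p_packs TYPE i DEFAULT 10.        "number of packages (example)

DATA l_str_param TYPE zbank_str_appl_param.  "global application parameter
                                             "(hypothetical structure)
START-OF-SELECTION.
  l_str_param-cnt_packages = p_packs.

* Start FPP; the parallel processes then run asynchronously in batch
* jobs. Parameter names below are assumptions, not the verified interface.
  CALL FUNCTION 'BANK_MAP_PP_START'
    EXPORTING
      i_applk     = 'ZCUST'        "application type from Customizing
      i_str_param = l_str_param    "passed on to the callback modules
    EXCEPTIONS
      failed      = 1
      OTHERS      = 2.
  IF sy-subrc <> 0.
    WRITE / 'Start of parallel processing failed'.
  ENDIF.
```

Because the parallel processes run asynchronously, anything the callback modules need must travel either through this parameter structure or through the database, as described above.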
The following graphic shows an overview of the structure of FPP and its integration in the application.
[Figure: overview of the FPP architecture – the application's start report and Customizing, the mass run layer, the process layer with its events and basis job control, and the application's callback function modules]
Figure 1 Architecture
2.1.2 Events
The following overview shows the assignment of events to the three processing areas. Event 205 (in bold
type) is compulsory. The events in the middle section are executed in a loop for each package of a parallel
process.
Preparation: 0205 – Create Package Templates
Parallel process (jobs): 0110 – Get Appl. Parameters; 1400 – Start Parallel Process
End processing: 0300 – End of Mass Run
Figure 2 Events
As mentioned above, the only compulsory event is event 0205. However, parallel processing is of very little
use without processing (event 1300 “Process Objects”). For an implementation to serve any purpose, it
should contain at least the following events:
Event 0205 – Create Package Templates
Event 1000 – Initialize Package
Event 1100 – Selection per Range or
Event 1200 – Selection for Known Object List
Event 1300 – Edit Objects
Events 1100 or 1200 (selection) could also be completed at the start of event 1300. However, we recommend
that you execute them separately, so that the events can be maintained and monitored. If an application does
not require data selection, you can leave out those events as well.
2.2 Step-by-Step Instructions
2.2.1 Application Type and Object Type
The definition of the application type in Customizing for FPP is an organizational activity. You have to enter
the application type, as the callback functions for the applications are created under this key (as described
above).
You only need to assign the object type if you want to enable object list management (package formation) or
resubmission.
2.2.2 Logical Definitions
The definitions described below affect the design of some callback modules. They do not necessarily involve technical transactions; rather, they serve as a note to the application developer to clarify the required procedure before the corresponding modules are implemented.
2.2.2.1 Package Formation Category
The package formation category determines the procedure used to divide the objects to be processed into
packages.
Affected events:
0205 – Create package templates (poss. 206, 207)
1000 – Initialization of package
1100 – Selection per range
1200 – Selection for object list / resubmission
Specification of range limits from the database
If the objects cannot be divided into ranges by calculation, you can divide them using the objects in the database or using business criteria. In this case, the range limits are specified for each package in the callback module at event 0205, and the tool stores them for each package. The callback module at event 1000 then receives the specified ranges for each package from FPP and stores them in global
variables.
As before, the objects are retrieved from the database in the callback module at event 1100.
Calculation of range limits
In this procedure, the callback module only returns the number of packages at event 0205. The range limits for each package are defined in the callback module at event 1000, and the values are stored in the global variables of the application.
At event 1100, the relevant application module can then get the objects that belong to the range from the
database and store them in a global data area.
This procedure is used if the objects to be processed are subdivided into ranges using an algorithm.
Example: The unique key of the objects is a GUID. You can distribute the existing object keys equally across
the complete GUID area from 0000000000000000 to FFFFFFFFFFFFFFFF to calculate corresponding
ranges.
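As a sketch, the equal split of the 16-character GUID key space might look as follows. All names here are illustrative; with 16 packages, each package simply covers one leading hex digit.

```abap
* Sketch: divide the 16-character hex GUID space into 16 equal ranges.
CONSTANTS c_hexdigits TYPE c LENGTH 16 VALUE '0123456789ABCDEF'.

DATA: l_limit_low  TYPE c LENGTH 16,
      l_limit_high TYPE c LENGTH 16,
      l_off        TYPE i.

DO 16 TIMES.
  l_off = sy-index - 1.
* Package sy-index covers all keys with leading digit c_hexdigits+l_off(1),
* e.g. package 1: 0000000000000000 .. 0FFFFFFFFFFFFFFF.
  CONCATENATE c_hexdigits+l_off(1) '000000000000000' INTO l_limit_low.
  CONCATENATE c_hexdigits+l_off(1) 'FFFFFFFFFFFFFFF' INTO l_limit_high.
ENDDO.
```

For a package count that is not a power of 16, the same idea applies: compute the boundary keys arithmetically so that each of the n packages covers an equal slice of the key space.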
Object list
In this procedure, the objects are selected for each package according to business criteria. However, the callback module does not transfer a range with upper and lower limits to FPP at event 0205; rather, it transfers a list of specific objects. FPP stores these objects for every package in a database table.
Event 1200 is the selection event in this case. The relevant callback module of the application receives the
object list that is valid for the current package. It can then get the relevant data from the database and store it
in a global data area.
This procedure is used automatically for restarted runs, as the objects are from the worklist of the first run and
are already known.
Special attributes
The application may want to use attributes in addition to those described in the procedure above to construct
the worklist, i.e. to select the objects for each package.
You can do this using the export parameter E_STR_PACKPARAM in the callback module at event 0205. The
attributes are then returned to the application at event 1000.
For example, you can use a list of bank posting areas or products to further restrict a range of accounts.
2.2.2.2 Restart Procedure
The application must decide whether it is possible to restart canceled runs. To restart a canceled run, the
application must flag the relevant packages, and thus the whole run, by confirming the objects to FPP with
the status “RESTART” at event 1300. Depending on how the application processes the objects, the
application must either confirm the status of each object or flag the package by confirming a dummy object
with the status “RESTART”. To help with the decision, you can use the following rule of thumb for the
rollback:
The application itself executes the rollback if the package has errors, which means either all or no objects in a package are processed → no status needs to be confirmed for each object.
The application does not execute a rollback if the package has errors, which means that the “good” objects remain in the database → the individual status must be confirmed to enable a restart of the incorrect objects.
The application wants to restart the exact package (for example, with exactly the same range limits), but does not want to confirm individual objects → confirm a dummy object.
To manage the status of individual objects, you have to activate this option in Customizing for FPP.
No restart
Canceled runs are not restarted. A new start is made.
The application does not need to manage any status information about the processed objects and
packages.
Normal restart
If status values are saved for each object, the change to the status of the objects should be made in the same
logical unit of work as the change to the object itself. If the application does not execute its own commits, they
are executed by FPP. This is the recommended method.
An application that needs to trigger a commit within a package must first use the module BANK_MAP_PP_CONFIRM_OBJECTS to confirm to FPP the status of the objects that have already been processed.
2.2.3 Global Application Parameter
If you want the application to transfer data from your start report to your callback modules using FPP without
going via a database table, you can use a global application parameter. This is a data structure that is
transferred to FPP when you call the FPP start module BANK_MAP_PP_START. FPP then transfers the data
to the application at certain events.
Usually, the data from the selection screen of the start report is transferred in a certain structure to the
application. Data can be criteria for package creation, such as posting area, product, or number of packages.
Below is an example of a structure for the parameters of the application Account Settlement.
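A structure of this kind might look like the following sketch. All field names and types are illustrative assumptions, not the actual SAP structure for Account Settlement:

```abap
* Hypothetical global application parameter structure (illustrative only)
TYPES: BEGIN OF zbca_str_settl_param,
         posting_area TYPE c LENGTH 4,   "bank posting area
         product      TYPE c LENGTH 10,  "product restriction
         cnt_packages TYPE i,            "requested number of packages
         settl_date   TYPE d,            "settlement key date
       END OF zbca_str_settl_param.
```

The structure is filled from the selection screen of the start report and handed to BANK_MAP_PP_START; FPP then passes it back to the callback modules at the relevant events.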
2.2.4 Events and Callback Modules
The following sections describe the five most important events. The application must provide modules for these events.
2.2.4.1 Event 0205 – Create Package Templates
At this event, the application must specify the method for package creation (the “package template”).
Export parameters:
E_FLG_NO_PACKAGE (type XFELD)
E_PACKDEFCATG (type BANK_DTE_PP_PACKDEFCATG)
Implementation notes
This event is not executed for a restart.
FPP calls up the module in a loop until the export parameter e_flg_no_package is set or the exception
FAILED is triggered. An exception is made for package formation category 3 (calculation of range limits),
for which the module returns the number of packages and is called only once.
If the flag e_flg_no_package is set, no more packages exist. It replaces the exception NOT_FOUND,
which still exists in the module interface for compatibility reasons.
The exception FAILED indicates that an error occurred during formation of the package template.
You can define a package template by specifying a range, a list of correct objects, or the number of
packages (see 2.2.2.1). The relevant range limits E_LIMIT_LOW or E_LIMIT_HIGH, the object list
E_TAB_OBJKEY, or the number of packages E_CNT_PACKAGES must be returned.
Enhancements
If the application requires parameters for each package template in addition to the global parameter
I_STR_PARAM, they can be returned in the structure E_STR_PACKPARAM. The structure of this export
parameter must be exactly the same as the row structure of the table type in the Data Dictionary, which is
defined in Customizing for package-related application data. The structure must contain the package
template key, so that it is possible to use this key to access the parameters for the package during later
processing. The current values for each package are contained in parameter I_STR_PACKAGE_KEY.
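A sketch of a 0205 callback using package formation by ranges is shown below. The export parameter names E_FLG_NO_PACKAGE, E_LIMIT_LOW, and E_LIMIT_HIGH follow this document; the module name and everything else are illustrative assumptions.

```abap
FUNCTION z_cust_pp_0205_create_templ.
* Sketch: FPP calls this module in a loop; each call returns the range
* limits of one package template until e_flg_no_package is set.
  CONSTANTS c_hexdigits TYPE c LENGTH 16 VALUE '0123456789ABCDEF'.
  STATICS s_count TYPE i.
  DATA l_off TYPE i.

  s_count = s_count + 1.
  IF s_count > 16.
    e_flg_no_package = 'X'.   "tells FPP: no more package templates
    CLEAR s_count.
    RETURN.
  ENDIF.

* Each of the 16 packages covers one leading hex digit of the GUID key.
  l_off = s_count - 1.
  CONCATENATE c_hexdigits+l_off(1) '000000000000000' INTO e_limit_low.
  CONCATENATE c_hexdigits+l_off(1) 'FFFFFFFFFFFFFFF' INTO e_limit_high.
ENDFUNCTION.
```

For package formation category 3 (calculation of range limits), the module would instead return only the number of packages and be called once.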
2.2.4.2 Event 1000 – Initialize Package
This event is the first step in processing a package of work. The application gets all the parameters that are
required for the following processing steps – selecting, checking and processing application data.
Import parameter: I_STR_PACKATTR
Implementation notes
The transferred parameters need to be stored in global data areas of the application for use at later
events.
The indicator I_XRESTART means that the current package has already been processed once; object data already exists in the application tables in the database.
You can use the key I_STR_PACKAGE_KEY to access application-specific data for the package template that was stored at event 0205 (Create Package Templates).
The parameter I_STR_PACKATTR contains a reference to application-specific data about the package
template that was stored later by FPP rather than at event 0205.
At this event, you should delete the buffers of the application that contain the data for the objects of a work
package.
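The notes above can be sketched as a minimal 1000 callback. The parameter names follow this document; the exact interface and the global data names are assumptions.

```abap
FUNCTION z_cust_pp_1000_init_package.
* Sketch: store the transferred parameters in the function group's
* global data so that they are available at events 1100/1200/1300.
  g_str_param       = i_str_param.        "global application parameter
  g_str_package_key = i_str_package_key.  "key of the package template
  g_str_packattr    = i_str_packattr.     "package-related attributes
  g_xrestart        = i_xrestart.         "restart indicator

* Delete object buffers left over from the previous package processed
* in this parallel process.
  CLEAR: g_tab_objects, g_tab_results.
ENDFUNCTION.
```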
Enhancements
If there are dependencies between the objects being processed (for example, settlement of subordinate accounts for reference accounts), processing may be executed in several steps, each of which is processed in parallel. The step number I_CURRSTEPNO is required if it is to be used to restrict the selection to the current processing step.
Example: Only subordinate accounts are selected from a package template in step 1, only the reference
accounts are selected in step 2.
2.2.4.3 Event 1100 – Selection per Range
At this event, the application selects the data for the objects of the current package from the database.
Implementation Notes
The selection criteria were transferred when the work package was started (event 1000) and should be
available in the global database for the application.
The objects selected must be buffered in global data areas of the application for later processing.
The selected objects can be returned in the table E_TAB_OBJKEY. They must be converted from the
application format to the format required for parallel processing. You can use the function module
BANK_API_PPOBJ_CONV for the conversion.
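A selection callback along these lines might look as follows. The conversion module name BANK_API_PPOBJ_CONV comes from this document, but its parameter names, the worklist table ZACC_WORKLIST, and the global data names are assumptions.

```abap
FUNCTION z_cust_pp_1100_select_range.
* Sketch: select the package's objects using the range limits stored
* in global data at event 1000, buffer them for event 1300, and return
* the keys to FPP in its own format.
  DATA l_objkey TYPE bank_dte_pp_objkey.   "assumed FPP key type
  DATA l_acc    TYPE zacc_worklist-account_id.

  SELECT account_id FROM zacc_worklist
    INTO TABLE g_tab_objects
    WHERE account_id BETWEEN g_limit_low AND g_limit_high.

  LOOP AT g_tab_objects INTO l_acc.
    CALL FUNCTION 'BANK_API_PPOBJ_CONV'    "application -> FPP format
      EXPORTING
        i_objkey_appl = l_acc
      IMPORTING
        e_objkey      = l_objkey.
    APPEND l_objkey TO e_tab_objkey.
  ENDLOOP.
ENDFUNCTION.
```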
2.2.4.4 Event 1200 – Selection for Known Object List
This event is run instead of event 1100 if the objects to be processed are already known. This is the case if it
is a restart run, or if the objects were already specified in a previous selection, such as event 0205.
Implementation Notes
The objects transferred in the table I_TAB_OBJKEY are in the format valid for FPP. They must be converted into the correct format if they are to be processed by the application. You can do this using the function module BANK_API_PPOBJ_CONV.
In case of a restart, the worklist of objects to be processed may have changed. The changes must be
communicated to FPP by making entries in the two export tables. Use module BANK_API_PPOBJ_CONV to
convert the objects into FPP format. The following two changes are possible:
Some of the objects selected for the first start are no longer valid and cannot or should not be processed.
The invalid objects must be returned to FPP in table E_TAB_OBJKEY_NOT_VALID.
New objects have been added. Parallel processing control must be notified of the new objects using
export parameter E_TAB_OBJKEY_NEW.
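The first of the two changes might be handled as in the following sketch. Again, only the module name BANK_API_PPOBJ_CONV and the export table E_TAB_OBJKEY_NOT_VALID come from this document; the parameter names and the worklist table are assumptions.

```abap
FUNCTION z_cust_pp_1200_select_list.
* Sketch: convert the known object list, validate each object, buffer
* the valid ones for event 1300, and report invalid ones back to FPP.
  DATA l_objkey TYPE bank_dte_pp_objkey.   "assumed FPP key type
  DATA l_acc    TYPE zacc_worklist-account_id.

  LOOP AT i_tab_objkey INTO l_objkey.
    CALL FUNCTION 'BANK_API_PPOBJ_CONV'    "FPP -> application format
      EXPORTING
        i_objkey      = l_objkey
      IMPORTING
        e_objkey_appl = l_acc.

    SELECT SINGLE account_id FROM zacc_worklist
      INTO l_acc WHERE account_id = l_acc.
    IF sy-subrc = 0.
      APPEND l_acc TO g_tab_objects.             "buffer for event 1300
    ELSE.
      APPEND l_objkey TO e_tab_objkey_not_valid. "drop from the worklist
    ENDIF.
  ENDLOOP.
ENDFUNCTION.
```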
2.2.4.5 Event 1300 – Edit Objects
Implementation Notes
The processing status of the objects can be returned in export table E_TAB_STATUS_CHANGE. If you want
to run a restart, return is obligatory. The following table shows all possible status values and effects.
0 – Selected
During processing of this event, some applications must trigger Commit Work commands. To avoid inconsis-
tencies in the status update, you must first confirm the status of the objects processed up to the Commit
Work. To do so, call module BANK_MAP_PP_CONFIRM_OBJECTS.
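Within the event 1300 module, such a confirmation before a commit might look like the following sketch. The module name BANK_MAP_PP_CONFIRM_OBJECTS comes from this document; its parameter name is an assumption.

```abap
* Sketch: confirm the objects processed so far before triggering the
* commit, so that a restart does not reprocess them.
CALL FUNCTION 'BANK_MAP_PP_CONFIRM_OBJECTS'
  EXPORTING
    i_tab_status_change = g_tab_status_done.  "statuses of finished objects

COMMIT WORK.
CLEAR g_tab_status_done.   "collect statuses afresh for the next commit
```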
2.2.5 Start Report
From the application’s point of view, the start report is the starting point of a parallel mass run. You have the
following options on the selection screen:
Business criteria for package formation and data selection
Technical information for control of FPP
Start FPP by calling function module BANK_MAP_PP_START (for the interface, see section 3.1).
Important: The current version of FPP requires that an application log object is opened in the start report with the functions of the message class EMSG (package BMESSAGE). Error messages triggered before the start of parallel processing are output directly or written to the job log, which can lead to immediate cancellation without the application regaining control. FPP opens a log object in each of the parallel processes.
2.2.6 Application Log Handling
FPP uses the standard application log to log errors by message class. The components are found in the
function group EMSG of package BMESSAGE. The log must differentiate between the areas of preparation
and parallel processes.
The parallel processes are run in batch jobs started by FPP. FPP serves as a channel and creates log
objects at this event. When BANK_MAP_PP_START is called, the application can transfer one or more log
objects, which can be opened in the corresponding sequence. If nothing is transferred, FPP opens log objects
FS_EXC and FS_PROT. This means that there are two logs for each parallel process in the application log.
The system writes any error messages that were issued before parallel processing to the application log with the functions of the message class; this also applies to messages issued by a MESSAGE … RAISING statement. As FPP does not serve as a channel at this point, it does not open a log but expects the application to do so. If no open log exists, the messages are output directly. If processing runs in the background, the job is canceled.
Important: The application must open at least one log object using the message class (function group
EMSG) before the start module BANK_MAP_PP_START is called. Therefore, in order to save messages at
the end of the process in event 1410, you need to create one log object in event 1400 before calling module
BANK_MAP_PP_START since there is no input parameter for class CL_BANK_PP_STANDARD_LOGGER
available.
For an example of the creation of log objects, see the routine init_messages in the start report
RBANK_PP_DEMO_START, in which the module MSG_OPEN creates two logs.
From the FPP perspective, it is natural and intended to create as many logs as there are processes, so that each process can be identified and analyzed in detail.
You can write application log records with the following function modules:
APPL_LOG_WRITE_HEADER: With this function module, you write the log header data in local memory.
APPL_LOG_WRITE_LOG_PARAMETERS: With this function module, you write the name of the log
parameters and the associated values for the specified object or sub-object in local memory. If this
function module is called repeatedly for the same object or sub-object, the existing parameters are
updated accordingly. If you do not specify an object or sub-object with the call, the most recently used is
assumed.
APPL_LOG_WRITE_MESSAGES: With this function module, you write one or more messages, without
parameters, in local memory.
APPL_LOG_WRITE_SINGLE_MESSAGE: With this function module you write a single message, without
parameters, in local memory. If no header entry has yet been written for the object or sub-object, it is
created. If you do not specify an object or sub-object with the call, the most recently used is assumed.
APPL_LOG_WRITE_MESSAGE_PARAMS: With this function module you write a single message, with
parameters, in local memory. Besides this it works like APPL_LOG_WRITE_SINGLE_MESSAGE.
APPL_LOG_SET_OBJECT: With this function module, you create a new object or sub-object for writing in
local memory. With a flag you can control whether the APPL_LOG_WRITE_... messages are written in
local memory or are output on the screen.
APPL_LOG_INIT: This function module checks whether the specified object or sub-object exists, and
deletes all existing associated data in local memory.
APPL_LOG_WRITE_DB: With this function module, you write all data for the specified object or sub-
object in local memory to the database. If the log for the object or sub-object in question is new, the log
number is returned to the calling program.
2.2.7 Job Log Message Handling
For optimal performance of the mass processing reports, ensure that the same information is not written to
the job log and the application log. The following chapter describes in detail how you can suppress job log
messages if the same information is already available in the application log or if it is unnecessary.
Use function BP_SET_MSG_HANDLING to suppress writing all application log messages to the job log and avoid unnecessary performance overhead, since the data is already contained in the application logs. The function should be called twice: once before the main processing, to set the flag that suppresses job log writing, and once after the main processing, to reset it.
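The bracketing described above can be sketched as follows. The function module name BP_SET_MSG_HANDLING comes from this document; the parameter name shown is an assumption and must be checked against the module interface.

```abap
* Sketch: bracket the main processing with the suppression flag.
CALL FUNCTION 'BP_SET_MSG_HANDLING'
  EXPORTING
    suppress_msg = 'X'.      "assumed flag: stop writing to the job log

PERFORM main_processing.     "application log is still written normally

CALL FUNCTION 'BP_SET_MSG_HANDLING'
  EXPORTING
    suppress_msg = ' '.      "restore standard job log handling
```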
2.2.8 Collection of Statistical Information at the End of a Mass Run
All processes within FPP run in independent roll areas and do not share any memory. Therefore, the results
and other statistical information received from every executed process cannot be stored in the buffer. The
results of the mass run should be stored on DB level and then collected after the mass run.
Here are a few examples of the information that may need to be accumulated in the buffer/memory while executing the multiple processes:
Record count: Number of records processed within the file or process
Average balance/total balance – for the accounts within the process
Write-off amount: If an account contains a write-off amount, this value is summarized and at the end of the
program, posted to a relevant account.
The reason for excluding the account from the processing needs to be stored on a custom table or in the
application log to produce a further report.
Statistical information of the mass run like counters for number of processed objects or number of objects
processed successfully must be recorded in DB table BANK_MR_LINE_CNT with the following structure:
massrun_id: Identifier
counter_catg: Counter categories like ACCSUCC (accounts processed successfully) or ACCDUE (total
number of accounts due), etc.
addkey: Additional identifier, for instance, the job number
counter: Number of objects
Note: To avoid waiting situations in parallel processing there is an additional key ADDKEY in table
BANK_MR_LINE_CNT, which is filled with the number that is assigned to the ABAP work process. If the job
number is used as an additional key, either a new entry for the first package of that job should be inserted, or
the existing entry should be updated by adding the counter values to the previous ones. This data must be
recorded at the end of each package, so a good place to insert the counters would be the end of the 1300
function module.
Note: The counter information is also shown in the mass activity monitor (MassMan) and can be used for business process monitoring in SAP Solution Manager (for example, the number of accounts processed successfully).
Implementation
First, the external ID of the run is needed. It can be found in DB table BANK_PP_PARUNHD, using the fields PROGN, PROGNO, and PROGDATE as keys (these values are provided to the callback module at event 1000).
Then, the massrun_id that corresponds to the actual run needs to be determined. To do that, the function module BANK_OBJ_MR_START needs to be called before parallelization; the callback module for event 100 might be a good place. This function module needs three parameters:
i_application_identification: Application type
i_runid_ext: External ID that is mentioned above
i_vb_mode: Must be ' ' (space)
The module returns the massrun_id needed to record the counters. Do not store it only in the global memory of the event 100 module: that module is not part of the parallel process, and each parallel job has its own global memory (possibly on another application server), so a massrun_id stored there would not be visible in the jobs. Instead, store the massrun_id in the global parameter structure (e_str_newparam); it is then moved back to global memory at event 1000, or in any module triggered by a parallel job, so that it is available in the 1300 function module.
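The call before parallelization might be sketched like this. The importing parameter names follow this document; the name of the result parameter is an assumption.

```abap
* Sketch: determine the massrun_id once, before parallelization, and
* place it in the global parameter structure so that every parallel
* job can read it back at event 1000.
CALL FUNCTION 'BANK_OBJ_MR_START'
  EXPORTING
    i_application_identification = g_applk       "application type
    i_runid_ext                  = l_runid_ext   "from BANK_PP_PARUNHD
    i_vb_mode                    = ' '           "must be space
  IMPORTING
    e_massrun_id                 = g_str_newparam-massrun_id.
```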
To declare new counters, with the appropriate description text, transaction BANK_CUS_MR_CNTCG should
be used. The assignment of new counters to the appropriate application type will be done in transaction
BANK_CUS_MR_APCNT. New counter categories are not mandatory; the existing counters can be used.
In event 1300, the counters should be inserted in the database at the end of the package processing.
Another example is the program RBCA_BSPRPR_RUN_PP, where the results are stored at event 1300 (Edit Objects) in the table BCA_GLARCH_LOV; at event 0300 (End of the Mass Run), the data is read from this table to be printed. In this way, cancellations and restart information can also be handled.
Event 0300 - End of the Mass Run does not have an importing parameter for the package ID (type
BANK_STR_PP_PACKAGEKEY). To get the package ID, function module BANK_MAP_PP_GET_STATUS
should be called in event 0300 with import parameters:
I_PROGN (type BANK_DTE_PP_PROGN)
I_PROGDATE (type BANK_DTE_PP_PROGDATE)
I_PROGNO (type BANK_DTE_PP_PROGNO)
I_FLG_CHECK_ACTIVE_RUNS = ' ' (set to FALSE)
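The call at event 0300 might be sketched as follows. The module name and the importing parameters follow this document; the name of the returned package-key parameter is an assumption.

```abap
* Sketch: read the package ID(s) of the run in the event 0300 module.
DATA l_str_package_key TYPE bank_str_pp_packagekey.

CALL FUNCTION 'BANK_MAP_PP_GET_STATUS'
  EXPORTING
    i_progn                 = g_progn      "TYPE BANK_DTE_PP_PROGN
    i_progdate              = g_progdate   "TYPE BANK_DTE_PP_PROGDATE
    i_progno                = g_progno     "TYPE BANK_DTE_PP_PROGNO
    i_flg_check_active_runs = ' '          "FALSE: do not check active runs
  IMPORTING
    e_str_package_key       = l_str_package_key.
```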
2.3 Configuration (Customizing)
For Customizing settings, use transaction BANK_CUS_PPC, where customers can enter their own application
types and relevant methods.
“Application for BTE” – FPP enables you to use business transaction event 0BANK010 to control package
formation. You must enter the application that is entered for the business transaction event into table
TPS34.
The value entered for the “number of sequential repeat runs” defines whether additional runs are to be
made in a sequential session once the “number of repeat runs” has been reached. If so, a batch job is
created for processing.
Customers can make settings for their own applications as well as for SAP applications. If both SAP and customer settings are made for a value, FPP decides which setting has precedence as follows:
- “All objects persistent” – Set indicator has precedence
- “% postponed” – Minimum has precedence
- “Number of repeat runs” – Maximum has precedence
- “Application for BTE” – Only customer setting is relevant
- “Number of sequential repeat runs” – Maximum has precedence
Lock owner settings are used to define the lock procedure. You can enter the following values for
each application:
Lock by application only – the owner of the locked object is the application (type), other runs of the same
application can process the objects. This is the recommended procedure.
Lock by mass run only – the owner of the locked objects is the relevant run, only this run can process the
objects
No locks
The first two points are described in detail in the previous section. Note the following when entering relevant
locks for other application types:
If nothing is entered for an application, all applications are relevant.
The setting only affects applications that call these other application types during the start checks.
Unfortunately, it is currently not possible to identify these applications in Customizing.
This transaction is also used to maintain the entries for the events and the relevant interface parameters. This
is done by the developers responsible for FPP.
2.5 Complex Scenarios
2.5.1 Multi-Step Processing
Some applications need to process their objects in two or three steps. In Account Settlement for banking, for
example, the subordinate accounts are processed in the first step. The settlement results from this first step
are then accessible for the settlement of the higher-level accounts, which is executed in a second step.
The application uses parameter I_MAX_STEPNO when calling the start module BANK_MAP_PP_START to
notify FPP of the number of steps that are to be processed.
As it is not possible to define a separate callback module for every step, multi-step processing is implemented
as a repeated call of the application, with the current step number specified in each call. The application must
then separate the processing of each step within its modules.
Processing of a new step is essentially the same as starting a new run, as new parallel processes are formed
and executed in relevant batch jobs.
Event 100 and the final processing events 300, 130 and 207 are run once per run only, rather than once per
step.
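The repeat-call pattern described above can be sketched as follows. This is a minimal Python analogue of the control flow (FPP itself is ABAP); the callback shape, event numbers in the log strings, and all names are illustrative assumptions.

```python
# Sketch (not SAP code): multi-step processing as a repeated call of the
# application per step. The application branches on the current step number
# inside its callback, as described in the text. Names are illustrative.

def run_mass_run(app_callback, packages, max_stepno):
    """Model of a run with an I_MAX_STEPNO-style number of steps."""
    log = []
    log.append("event 100: run initialized (once per run)")
    for stepno in range(1, max_stepno + 1):
        # Each new step behaves like a new run start: the packages are
        # processed again, here sequentially instead of in batch jobs.
        for package in packages:
            app_callback(stepno, package, log)
    log.append("final events 300/130/207 (once per run)")
    return log

def settlement_callback(stepno, package, log):
    # The application separates its per-step logic itself.
    if stepno == 1:
        log.append(f"step 1: settle subordinate accounts in {package}")
    else:
        log.append(f"step {stepno}: settle higher-level accounts in {package}")
```
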
In addition to normal package formation, it is also possible to use additional parameters to define special
attribute values for each package. You can save these parameters either in your own database table or using
FPP.
This topic deals with the interaction of different mass runs when errors and cancellations occur, both for
different runs of the same application type and for runs of different application types.
Object locks ensure that the processing of objects by one run is completed before the objects can be
processed by subsequent or dependent runs. To make this possible for several runs or different
application types, you must make some cross-application settings:
Which application types depend on each other?
- Which application type sets locks?
- Which application type reads locks?
Which objects are locked?
You can define the application types on which each application type depends either in Customizing or in
transaction BANK_PP_APPLREL (maintenance view V_TBANK_APPLREL).
Note: If you enter the dependencies in Customizing, the check is not saved automatically. You must save it
for each application type.
See the following scenario for an example: a banking component has three applications: mass run account
settlement, bank statement, and account closure. The master object for account settlement and bank
statement is the account. For account closure, the corresponding order is the master object, but this
application also processes accounts. Neither the bank statement nor the account closure can process an
account until account settlement has been completed successfully for it.
- Account settlement must set locks; the lock object is the account.
- Bank statement and account closure must read locks.
- Enter account settlement as relevant for statement and closure in the V_TBANK_PP_APPLREL view.
Note: Interaction only works if the application types refer to the same lock object.
In the above example, settlement sets account locks, so the account closure must read account locks even
though its master object is the order.
The locks are set and deleted in the logical unit of work in which the relevant objects are processed, in the
callback module of event 1300.
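The scenario above can be sketched as a simple dependency check. This is a Python model of the logic only, not SAP code; the dictionary stands in for the V_TBANK_PP_APPLREL-style Customizing entries, and all application-type names are the illustrative ones from the example.

```python
# Sketch (not SAP code): statement and closure depend on settlement, so both
# must read the account locks that settlement sets before processing.

RELEVANT_LOCK_SETTERS = {
    "BANK_STATEMENT": ["ACCOUNT_SETTLEMENT"],
    "ACCOUNT_CLOSURE": ["ACCOUNT_SETTLEMENT"],
}

def may_process(app_type, account, active_locks):
    """An application may process an account only if no application type
    relevant for it still holds a lock on that account.

    active_locks maps lock-object keys (here: account IDs) to the
    application type that set the lock.
    """
    relevant = RELEVANT_LOCK_SETTERS.get(app_type, [])
    owner = active_locks.get(account)
    return owner not in relevant
```

While settlement still holds the lock on an account, neither statement nor closure may touch it; once the lock is released, both may. Note that this only works because all three refer to the same lock object, the account, even though closure's master object is the order.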
INTERNAL_ERROR Exception
CURR_RUN_NOT_QUALIFIED Exception
Implementation notes
The parameter I_FLG_EXCL_CURR_RUN defines that the locks set by the application's own runs or
application types are to be ignored; they are then returned in E_TAB_OWN_LOCKS rather than in
E_TAB_LOCKED.
The parameter I_FLG_IGNORE_SUCCEEDING_LOCKS defines whether locks from other runs are to be
taken into account if they were set after the last lock set by your own run.
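The effect of these two flags can be sketched as a filter over a lock list. This is a Python model of the described semantics only; the lock-entry shape (run ID plus timestamp) and the timestamp comparison are illustrative assumptions, not the actual FPP interface.

```python
# Sketch (not SAP code): splitting a lock list into blocking locks and own
# locks, analogous to E_TAB_LOCKED / E_TAB_OWN_LOCKS, driven by the two
# flags described in the text. Lock entries are (run_id, timestamp) tuples.

def read_locks(all_locks, own_run, last_own_lock_ts,
               excl_curr_run=False, ignore_succeeding=False):
    """Return (blocking_locks, own_locks) per the two flags."""
    blocking, own = [], []
    for lock in all_locks:
        run_id, ts = lock
        if excl_curr_run and run_id == own_run:
            own.append(lock)            # reported separately, not blocking
            continue
        if ignore_succeeding and ts > last_own_lock_ts:
            continue                    # set after our last lock: ignored
        blocking.append(lock)
    return blocking, own
```
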
Implementation notes
FPP creates a process for every application type. To define the degree of parallelization for each application
type or run, you have the following options:
- Entry in Customizing or in the maintenance view (see section 2.3)
- Transfer as a parameter at the start
- Implementation of BTE 0BANK011
To transfer the desired job distribution when you call FPP, specify the parameter I_TAB_JOBDIST of type
BANK_TAB_GRP_SRV. The fields of the corresponding structure type are shown in the following table.
See the next section for a description of the fields and the distribution procedure.
2.6.2 Distribution
The distribution of the required jobs is executed in several steps. Internally, distribution always uses a
table in the format described above, irrespective of the definition method:
- Entries with a server name: the desired number of jobs is scheduled on this server.
- Entries with a server group: the corresponding number of jobs is distributed to the servers in this group.
- Entries with a number only: the jobs are distributed to the desired number of available servers.
During distribution, the system checks whether the server is active, whether batch processes exist on the
relevant server, and whether free processes exist.
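The three entry formats can be sketched as follows. This is a simplified Python model of the distribution logic only, not SAP code; the entry layout, the server-group lookup, and the availability check are illustrative assumptions.

```python
# Sketch (not SAP code): resolving job-distribution entries into a concrete
# job-to-server assignment. The availability filter stands in for the checks
# on active servers and free batch processes described in the text.

def distribute_jobs(entries, server_groups, available_servers):
    """entries: dicts with 'jobs' and optionally 'server' or 'group'."""
    assignment = []
    for entry in entries:
        n = entry["jobs"]
        if entry.get("server"):
            # Desired number of jobs scheduled on this named server.
            candidates = [entry["server"]]
        elif entry.get("group"):
            # Distribute over the servers in this group.
            candidates = server_groups[entry["group"]]
        else:
            # Number only: distribute over any available server.
            candidates = list(available_servers)
        # Keep only servers that passed the availability checks (simplified).
        candidates = [s for s in candidates if s in available_servers]
        for i in range(n):
            assignment.append(candidates[i % len(candidates)])
    return assignment
```
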
FPP cannot hand over job distribution (also known as load balancing) to SAP Job Control at the start event of
the processes; it must execute job distribution itself at an earlier stage. This is necessary for the initial
assignment of work packages to specific servers: when the packages are created, the server for each
process must already be known. As a consequence, it can happen that no free batch processes are available
on one or more of the planned servers when the processes are due to start. You should therefore check
whether the parallel processes started as planned and correct the distribution if necessary.
The MassMan monitor is the most suitable tool for monitoring all FPP-enabled reports.
In both the satellite system and the central system, MassMan is started using transaction ST13. ST13 is a
collection of SAP analysis and service tools delivered with the ST-A/PI add-on. From the list of
available tools, choose MASS_MAN_MONITORING and execute it. MassMan can either be executed on the
banking system itself or use preconfigured RFC connections to load mass runs from target systems into a
central monitoring system such as SAP Solution Manager.
The next figure shows the start screen of MassMan. The command buttons are at the top of the screen; the
selection conditions are in the lower part.
Different conditions can be set to choose the relevant mass runs. With the selection criterion Appl. Catg. on
the selection screen, customers can search specifically for customer-defined application categories, which
typically start with Z*.
Choose the Execute button on the MassMan start screen to list all mass runs that satisfy the selection
conditions.
See the following table for the meaning of the columns of the mass run detail view.
Column – Description
Hist – If selected, the mass run was read from the MassMan history table; otherwise it was read from the
FPP table.
Ext. Runid – External run ID; can be specified by the customer to identify different mass runs.
Status – Status of the mass run (R: Running, F: Finished, A: Aborted, P: Processing).
Startdate, Starttime, Endtime – Time stamps that specify when the mass run started and finished. This is
the time window in which FPP was called; it is a subset of the job start and finish times shown in SM37.
est. rem. Durat. – Estimated remaining time that the mass run still needs before it finishes.
est. Durat. – Estimated total runtime of the mass run. When the mass run is finished, this is set to 0.
Jobs – Total number of started jobs for the mass run (parent job plus child jobs).
APPL – Average CPU usage (per hour) since the mass run started.
DB – Average DB usage (per hour) since the mass run started.
3 Further Information
3.1 Documentation of Central Interfaces
BANK_MAP_PP_START
NO_OUT_OF_SYNC Exception
NO_EXPORT_ALLOWED Exception
PACKMAN_INVALID Exception
PREPARE_FAILED Exception
START_FAILED Exception
3.2 Debugging
As FPP uses batch jobs for the processing of parallel processes, debugging is possible at the start of such a
job only. If you want to analyze the processing of each process, there are two options.
Procedure:
Set the flags I_X_SYNC and I_X_USE_DIALOG_WP when you call BANK_MAP_PP_START.
FPP does not create jobs. The processes/packages are processed sequentially in dialog mode.
Possible values:
DBG_OFF
DBG_DIA
DBG_BTC
DBG_BTC_DIA
Prerequisites:
DEBUG authorization and change authorization in debugger
If DBG_BTC is set, an endless loop is run in the reports RBANK_PROC_START and RBANK_PROC_END
(start of parallel job), which makes it possible to capture each job or process in the process overview SM50.
You can then set extra stop points in the debugger.
To end the endless loop and run the report as “normal”, you must set the l_flg_exit flag to ‘X’ in the debugger.
Warning about debug mode: If the flag is set, an endless loop is run in every parallel job and in every
related end job and must be ended by setting the flag in the debugger. Therefore, you must ensure that you
do not set a high degree of parallelization in job distribution.
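The capture loop described above can be sketched as follows. This is a Python analogue of the pattern only (the real loop is ABAP in RBANK_PROC_START/RBANK_PROC_END and has no timeout); the function name and the timeout safety net are illustrative assumptions.

```python
# Sketch (not ABAP): a job spins until a developer, attached via the
# debugger, sets the exit flag; only then does the actual report logic run.
# The timeout exists only so this sketch terminates; the real loop has none.

import time

def wait_for_debugger(flag_box, poll=0.01, timeout=1.0):
    """Spin until flag_box['l_flg_exit'] is set, mimicking the endless loop
    that lets each job be captured in the process overview (SM50)."""
    waited = 0.0
    while not flag_box.get("l_flg_exit"):
        time.sleep(poll)        # visible as a running process in SM50
        waited += poll
        if waited >= timeout:   # safety net for the sketch only
            break
    return bool(flag_box.get("l_flg_exit", False))
```
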
3.3 BANK_PP_SETTINGS
Transaction BANK_PP_SETTINGS is used to set special parameters for FPP. At the moment, one parameter
is available: an indicator that specifies whether the process data is deleted automatically after the current
process.
3.4.1 Repeat of a Package
This term refers to an immediate repetition of a package in the processing loop for the packages. As soon as
the package is finished, it is reprocessed by the application in the parallel job.
One possible use is a temporary lock on one or more objects, for example by dialog locks. The assumption is
that the lock is set for a short time only and that the package can be reprocessed immediately. From the
module for event 1300, the application returns the affected objects (with status “6”) to FPP. As described
above, the application must cancel the repeat runs. The next package can be processed as soon as no object
has status “6”. The application can incorporate this function using a repeat loop in the module for event 1300.
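The repeat loop can be sketched as follows. This is a Python model of the logic only, not SAP code; the callback shape, the retry limit, and the object names are illustrative assumptions.

```python
# Sketch (not SAP code): an event-1300-style repeat loop. Objects that hit a
# temporary (e.g. dialog) lock stay pending (status "6") and the package is
# reprocessed immediately, until no object is left or the retry limit is
# reached (the application must cancel the repeat runs itself).

def process_package_with_repeats(objects, try_process, max_repeats=3):
    """Return (done, still_locked) after at most max_repeats repeat runs."""
    pending = list(objects)
    done = []
    for _ in range(max_repeats + 1):        # first run plus repeat runs
        still_locked = []
        for obj in pending:
            if try_process(obj):
                done.append(obj)
            else:
                still_locked.append(obj)    # status "6": temporarily locked
        pending = still_locked
        if not pending:
            break                           # next package can be processed
    return done, pending
```
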
3.4.2 Repeat of a Step
This term refers to the repetition of a whole step in the current mass run. The objectives are the same as
before, but note that all the object locks can be removed in the time required to stop and restart the whole
step. The application triggers the repeat by returning objects with status “4”. The exact procedure of FPP
depends on the settings made in Customizing (see section 2.3).
3.4.3 Restart
This term refers to restart of a canceled mass run. The procedure is not described here.
You can use the report RBANK_PP_GENERATE_APPL to generate the relevant callback modules for all
events. Create interfaces using the information in Developer Customizing.
3.6 Glossary
Index of Figures
Figure 1 Architecture 7
Figure 2 Events 8
Figure 3 MassMan monitor 26
Figure 4 MassMan – Start screen 27
Figure 5 MassMan – Mass run detail view 27
The information in this document is proprietary to SAP. No part of this document may be reproduced, copied, or transmitted in any form
or for any purpose without the express prior written permission of SAP AG.
This document is a preliminary version and not subject to your license agreement or any other agreement with SAP. This document
contains only intended strategies, developments, and functionalities of the SAP® product and is not intended to be binding upon SAP to
any particular course of business, product strategy, and/or development. Please note that this document is subject to change and may
be changed by SAP at any time without notice.
SAP assumes no responsibility for errors or omissions in this document. SAP does not warrant the accuracy or completeness of the
information, text, graphics, links, or other items contained within this material. This document is provided without a warranty of any kind,
either express or implied, including but not limited to the implied warranties of merchantability, fitness for a particular purpose, or
non-infringement.
SAP shall have no liability for damages of any kind including without limitation direct, special, indirect, or consequential damages that
may result from the use of these materials. This limitation shall not apply in cases of intent or gross negligence.
The statutory liability for personal injury and defective products is not affected. SAP has no control over the information that you may
access through the use of hot links contained in these materials and does not endorse your use of third-party Web pages nor provide any
warranty whatsoever relating to third-party Web pages.