RSRV Analysis: Indicates that the index of master data used as a dimension of the cube is corrupt or inconsistent.
During master data loads, sometimes a lock is set by system user ALEREMOTE. This happens when HACR (Hierarchy/Attribute Change Run) is running for some other master data at the same time the system tries to carry out HACR for this new master data. This is a scheduling issue; the error occurs because the two timings clash.
• Check the error message completely, including its long text, as it will tell you the exact master data which is locked by user ALEREMOTE.
• The lock is set because the load and the HACR timings clashed. First check RSA1 -> Tools -> HACR, which lists the InfoObjects on which HACR is currently running. Only once that has finished, go to transaction SM12. It shows a few options and a couple of default entries; listing the locks displays all locks currently set. Delete the lock for the specific entry only, otherwise a load that is still running may fail because its lock was released.
• Choose the appropriate lock which caused the failure and click Delete, so that the existing lock is released. Take care not to delete the lock of an active running job. Preferably avoid this solution.
• When HACR finishes for the other master data, trigger the Attribute Change Run for this master data.
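The "delete the specific entry only" rule above can be modeled as a small filter. This is an illustrative Python sketch, not SAP code; the lock-entry fields are simplified stand-ins for what SM12 displays.

```python
# Illustrative model of SM12 lock entries (simplified, hypothetical layout).
def locks_to_delete(lock_entries, failed_object):
    """Select only the ALEREMOTE lock on the specific master data object
    that caused the failure; leave every other lock untouched so that
    running loads do not fail because their lock was released."""
    return [
        lock for lock in lock_entries
        if lock["user"] == "ALEREMOTE" and lock["object"] == failed_object
    ]

locks = [
    {"user": "ALEREMOTE", "object": "0MATERIAL"},   # lock from the clashing HACR
    {"user": "ALEREMOTE", "object": "0CUSTOMER"},   # lock of another running load
    {"user": "JSMITH",    "object": "0MATERIAL"},   # an end user's lock
]
print(locks_to_delete(locks, "0MATERIAL"))
```

Only the one entry matching both the user and the failed object is returned; everything else stays locked.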
This error is encountered when the R/3 system cancels the extraction job. This may be due to a memory problem in the system. Sometimes it is due to other jobs running in parallel which take up all the processors, so the job gets cancelled on the R/3 side. It may or may not result from a time out; it may also happen that there are system locks or DB issues in the source system.
• First check the job status in the source system, via Environment -> Job Overview -> In the Source System. This may ask you to log in to the source system (R/3). Once logged in, it shows some pre-entered selections; check whether they are relevant, then Execute. This shows the exact status of the job; it should show "X" under Canceled.
• The job name generally starts with "BIREQU_" followed by a system-generated number.
• Once we have confirmed that this error occurred due to job cancellation, we check the status of the ODS or cube under the Manage tab. The latest request will show the QM status as red.
• We need to re-trigger the load in such cases, as the job is no longer active and has been cancelled. We re-trigger the load from BW.
• We first delete the red request from the Manage tab of the InfoProvider and then re-trigger the InfoPackage.
• Monitor the load for successful completion, and complete the further loads, if any, in the Process Chain.
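The check above — a BW extraction job name starting with "BIREQU_" and "X" under Canceled — can be sketched as a filter over a job list. This is illustrative Python; the dict layout is a hypothetical stand-in for the job overview columns.

```python
# Illustrative sketch: pick out cancelled BW extraction jobs from a job
# overview list. Job names and the "X" cancelled flag mirror the text
# above; the record structure itself is an assumption for this example.
def cancelled_extraction_jobs(jobs):
    """BW extraction jobs start with 'BIREQU_' followed by a
    system-generated number; 'X' under Canceled marks a cancelled job."""
    return [
        job["name"] for job in jobs
        if job["name"].startswith("BIREQU_") and job["cancelled"] == "X"
    ]

jobs = [
    {"name": "BIREQU_4F7Q2Z81", "cancelled": "X"},
    {"name": "BIREQU_4F7Q2Z99", "cancelled": ""},   # finished normally
    {"name": "OTHER_JOB_0001",  "cancelled": "X"},  # not an extraction job
]
print(cancelled_extraction_jobs(jobs))
```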
It may happen at times that the incoming data has an incorrect format, or a few records have incorrect entries. For example, the data was expected in upper case and arrived in lower case, or the data was expected to be numeric and an alphanumeric value was provided. The load may be a flat file load or a load from R/3; mostly it is the flat files provided by the users that have an incorrect format.
• Once the error is confirmed, go ahead and check the "Details" tab of the Job Overview to see which record, which field, and what part of the data has the error.
• Once the Details tab in the Job Overview confirms that the extraction itself completed, you can see there exactly which record and which field carry the erroneous data. Here you can also check the validity of the data against the PSA data of the previous successful load.
• When we check the data in the PSA, it shows the erroneous record with a red traffic light. In order to change data in the PSA, the request must first be deleted from the Manage tab of the InfoProvider; only then does the system allow the data in the PSA to be changed.
• Once the specific field entry in the record in the PSA has been changed, save it. After the data in the PSA is changed, reconstruct the same request from the Manage tab. Before the request can be reconstructed, its QM status needs to be "Green".
• This will update the records which are present in the request.
• Monitor the load for successful completion, and complete the further loads, if any, in the Process Chain.
NOTE: Before changing the data in the PSA, get it confirmed from the OSC/client whether you are authorised to change the data in the PSA.
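The kind of format check that makes such loads fail can be sketched as below. This is illustrative Python, not the actual transfer-rule check; the field names and rules (upper-case country, numeric amount) are hypothetical examples.

```python
# Illustrative sketch of format validation like the examples above:
# a field expected in upper case or numeric form arrives otherwise.
def find_bad_records(records):
    """Return (record index, field, reason) for each violation."""
    errors = []
    for i, rec in enumerate(records):
        if not rec["country"].isupper():
            errors.append((i, "country", "expected upper case"))
        if not rec["amount"].isdigit():
            errors.append((i, "amount", "expected numeric"))
    return errors

rows = [
    {"country": "DE", "amount": "100"},
    {"country": "de", "amount": "100"},   # lower case -> record fails
    {"country": "US", "amount": "10A"},   # alphanumeric -> record fails
]
print(find_bad_records(rows))
```

The returned record index and field name are exactly the pieces of information the Details tab gives you before you correct the value in the PSA.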
When a job fails with a "Time Out" error, it means the job was stopped for some reason while the request was still in yellow state, and as a result it ended in a Time Out error. It usually produces a short dump in the system, either in R/3 or in BW. It may also occur if there is some problem with the type of incoming data, for example data that is not in the specified format; it may happen that instead of giving an error message, the system gives a short dump every time we trigger the load.
• A "Time Out" error usually results in a short dump. To check the short dump, go to Environment -> Short Dump -> In the Data Warehouse / In the Source System.
• Alternatively, check transaction ST22 in the source system or the BW system, and choose the relevant option to check the short dump for the specific date and time. When checking the short dump, make sure to go through its complete analysis in detail before taking any action.
• In case of a Time Out error, check whether the time out occurred after the extraction or not. It may happen that the data was extracted completely and only then did a short dump occur; in that case nothing needs to be done.
• In order to check whether the extraction completed, check "Extraction" in the "Details" tab of the Job Overview, where you can conclude whether the extraction was done or not. If it is a full load from R/3, you can also check the number of records in RSA3 in R/3 and verify that the same number of records was loaded into BW.
• In the short dump you may find the runtime error "CALL_FUNCTION_SEND_ERROR", which occurred due to a time out on the R/3 side.
• In such cases the following can be done:
  • If the data was extracted completely, change the QM status from yellow to green. If a cube is being loaded, create the indexes; for an ODS, activate the request.
  • If the data was not extracted completely, change the QM status from yellow to red, re-trigger the load, and monitor it.
• Monitor the load for successful completion, and complete the further loads, if any, in the Process Chain.
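The decision rule above — salvage the request if extraction completed before the time out, otherwise mark it red and reload — can be written out as pure logic. This is an illustrative Python sketch, not SAP code; the action strings are just labels for the manual steps.

```python
# Sketch of the Time Out decision rule described above.
def timeout_action(extraction_complete, target_type):
    """Return the manual steps to take after a Time Out, based on
    whether extraction finished and what kind of target is loaded."""
    if extraction_complete:
        if target_type == "CUBE":
            return ["set QM status green", "create indexes"]
        return ["set QM status green", "activate the ODS request"]
    return ["set QM status red", "re-trigger the load", "monitor"]

print(timeout_action(True, "CUBE"))
print(timeout_action(False, "ODS"))
```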
A Transactional Remote Function Call (tRFC) error occurs whenever LUWs (Logical Units of Work) are not transferred from the source system to the BW system.
• Once this error is encountered, try a complete refresh ("F6") in RSMO and check whether the LUWs get cleared by the system.
• If the error remains after a couple of refreshes, follow the steps below quickly, as the load may otherwise fail with a short dump.
• From RSMO, go to the menu Environment -> Transact. RFC -> In the Source System. It asks you to log in to the source system.
• Once logged in, it shows a selection screen with "Date", "User Name" and TRFC options.
• On execution with "F8" it lists all stuck LUWs. The "Status Text" appears red for the stuck LUWs which are not getting processed, and the "Target System" for those LUWs should be the BW production system. Do not execute any entry whose "Target System" is unrelated.
• Select the properly identified LUWs, then right-click and "Execute" ("F6"), so that they get cleared and the load on the BW side completes successfully.
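The selection rule above can be sketched as a filter: execute only entries whose status text is red and whose target is the BW production system. Illustrative Python; the field names and system IDs are hypothetical stand-ins for the tRFC list columns.

```python
# Illustrative filter for stuck LUWs, per the rule above: both
# conditions (red status AND BW production target) must hold.
def luws_to_execute(luws, bw_system):
    return [
        luw for luw in luws
        if luw["status"] == "red" and luw["target"] == bw_system
    ]

luws = [
    {"id": 1, "status": "red",   "target": "BWP"},
    {"id": 2, "status": "green", "target": "BWP"},   # processing normally
    {"id": 3, "status": "red",   "target": "CRMP"},  # other target: leave alone
]
print(luws_to_execute(luws, "BWP"))
```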
• When IDocs are stuck, go to R/3, use transaction BD87 and expand the "IDoc in inbound Processing" tab for IDoc status 64 (IDoc ready to be transferred to application). Keep the cursor on the error message (pertaining to IDoc type RSRQST only) and click Process ("F8"). This will push the stuck IDocs on R/3.
• Monitor the load for successful completion, and complete the further loads, if any, in the Process Chain.

Whenever an ODS activation error occurs, the data may or may not be completely loaded; it is only during activation that it fails. Hence, when we look at the details of the job, we can see exactly which data package failed during activation.
• We can try once again to manually activate the ODS. Do not change the QM status here: in the Monitor it is green, but within the Data Target it is red. Once the data is activated, the QM status turns green.
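The BD87 selection rule — process only inbound IDocs in status 64 with type RSRQST — can be sketched as below. Illustrative Python; the record layout is a simplified stand-in for the real IDoc control record.

```python
# Sketch of the BD87 rule above: both conditions must hold before an
# IDoc is pushed; anything else is left for its own processing.
def idocs_to_push(idocs):
    return [
        idoc["number"] for idoc in idocs
        if idoc["status"] == "64" and idoc["type"] == "RSRQST"
    ]

idocs = [
    {"number": "0000004711", "status": "64", "type": "RSRQST"},
    {"number": "0000004712", "status": "53", "type": "RSRQST"},  # already posted
    {"number": "0000004713", "status": "64", "type": "MATMAS"},  # not a BW request
]
print(idocs_to_push(idocs))
```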
• For successful activation of the failed request, click the "Activate" button at the bottom. This opens another window listing only the request(s) which are not yet activated. Select the request, check the corresponding options at the bottom, and click "Start".
• This sets up a background job for the activation of the selected request.
• Monitor the load for successful completion, and complete the
further loads if any in the Process Chain.
• In case the above does not work out, check the size of the data package specified in the InfoPackage, via InfoPackage -> Scheduler -> DataS. Default Data Transfer. Here the size of the data package can be set; reduce the maximum size of the data package so that the activation takes place successfully.
• Once the size of the data package has been reduced, re-trigger the load and reload the complete data again.
• Before starting the manual activation, it is very important to check whether there is an existing failed "Red" request. If so, make sure you delete it before starting the manual activation.

During a data load into an ODS it may happen that the data gets extracted and loaded completely, but at the time of ODS activation it fails with an error, either for lack of resources or because of an existing failed request in the ODS (for master data an existing failed request is fine). This happens when there are rollback segment problems in the database, which give an ORA- error: when the data is activated, it is read from the new data table and then either inserted or updated in the active data table, and while doing this there are system deadlocks, or Oracle is unable to extend the extents.
• This error is encountered at first and then rectifies itself, because at that point in time the system is not able to process the activation via 4 different parallel processes (this parameter is set in transaction RSCUSTA2). Later, when the resources are free, the activation completes successfully.
• As a permanent solution, ask the DBA to increase INITRANS and MAXTRANS on the active data table.
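Reducing the maximum data package size does not reduce the data loaded; it splits the same request into more, smaller packages, so each activation step needs fewer resources at once. A minimal arithmetic sketch (the record counts are made-up examples):

```python
import math

# The same request split into data packages of a given maximum size
# (records per package): a smaller maximum yields more, smaller packages.
def package_count(total_records, max_package_size):
    return math.ceil(total_records / max_package_size)

print(package_count(1_000_000, 50_000))  # default-sized packages -> 20
print(package_count(1_000_000, 10_000))  # reduced size -> 100 packages
```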
A "Time Stamp" error occurs when the Transfer Rules / Transfer Structure (TR/TS) are internally inactive in the system. It can also occur whenever the DataSources are changed on the R/3 side or the DataMarts are changed on the BW side. In that case, the Transfer Rules may show active status when checked, but actually are not; this happens because the time stamps on the DataSource and the Transfer Rules differ.
• Whenever we get such an error, first check the Transfer Rules (TR) in the Administrator Workbench. Check each rule for whether it is inactive; if so, activate it.
• You first need to replicate the relevant DataSource, by right-clicking the source system of the DataSource -> Replicate DataSources.
• On such occasions we can execute the ABAP report program RS_TRANSTRU_ACTIVATE_ALL. It asks for the Source System Name, the InfoSource Name, and has 2 checkboxes: to activate only those TR/TS which are held by some lock, check the "LOCK" option; to activate only those TR/TS which are inactive, check the "Only Inactive" option.
• Once executed, it will activate the TR/TS again within that particular InfoSource, even if they are already active.
• Now re-trigger the InfoPackage.
• Monitor the load for successful completion, and complete the further loads, if any, in the Process Chain.
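The effect of the two checkboxes described above can be sketched as filters over the TR/TS list. This is an illustrative Python model of the selection logic, not the report's actual implementation; the entry names and layout are hypothetical.

```python
# Model of the two checkbox filters: 'Only Inactive' keeps inactive
# TR/TS, 'LOCK' keeps those held by a lock; with neither set, all are
# (re)activated within the InfoSource.
def select_transfer_structures(entries, only_inactive=False, locked=False):
    result = []
    for e in entries:
        if only_inactive and not e["inactive"]:
            continue
        if locked and not e["locked"]:
            continue
        result.append(e["name"])
    return result

entries = [
    {"name": "TS_SALES", "inactive": True,  "locked": False},
    {"name": "TS_STOCK", "inactive": False, "locked": True},
    {"name": "TS_COSTS", "inactive": False, "locked": False},
]
print(select_transfer_structures(entries, only_inactive=True))
print(select_transfer_structures(entries, locked=True))
```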
This error indicates that some master data (on which a hierarchy node is to be created) is not loaded or activated yet. Find out the node ID and the corresponding master data. Once you have the master data, load it and activate it; after activating the master data, load the hierarchy again.

This error indicates that the generated hierarchy file loaded into BW when trying to load a hierarchy was not generated completely. Run the hierarchy file generation program again and use the new file to load the hierarchy.
DATASET errors such as "DATASET can't write" or "can't read" occur when there is a problem with the file system's authorizations, or the Unix file system is full. For example: the archive program tries to move a file from the work directory to the archive directory, but there is no free space in the archive directory, so the program fails. Or a program writing a file (say, a hierarchy file) in the work directory cannot produce a complete file since there is no free space left in the work directory, resulting in partial files and, in the case of hierarchies, load failures due to those partial files.
Processes running in the APO or other source systems may terminate active extraction processes; for example, when the APO check-point runs, it terminates extractions going on in the system. In this case the data extraction from these systems should be scheduled in such a way that it does not overlap the check-point runs.
There are various master data tables (P, I, H tables). You can check all the tables related to a master data object: use program RSDMD_CHK*, specify the InfoObject, and run the analysis. Use the repair option to fix any inconsistencies.
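For reference, the master data tables follow the BW generated-table naming convention: standard InfoObjects (leading "0") live in the /BI0/ namespace with the leading "0" dropped, customer-defined objects in /BIC/ with the full name. The Python sketch below derives the table names for the suffixes named above (P, I, H); the function itself is illustrative, not an SAP API.

```python
# Derive generated master data table names from an InfoObject name,
# per the /BI0/ (standard) vs /BIC/ (customer) naming convention.
def master_data_tables(infoobject):
    if infoobject.startswith("0"):
        prefix, name = "/BI0/", infoobject[1:]   # standard: drop the leading '0'
    else:
        prefix, name = "/BIC/", infoobject       # customer-defined object
    return [prefix + suffix + name for suffix in ("P", "I", "H")]

print(master_data_tables("0MATERIAL"))
print(master_data_tables("ZCUSTNO"))
```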