Backup and Recovery for APO 3.0

Version 08
21 December 2000

Christiane Hienger (SAP AG)
Dr. Volkmar A. Söhner (SAP AG)
Werner Thesing (SAP AG)

Table of Contents

1. General Information on Data Storage in an APO System
1.1 APO Architecture
1.2 Which Data is Stored Where?
1.2.1. Stored in the APO-DB
1.2.2. Stored in the liveCache
2. Backup of the APO Database
3. liveCache Logging, Backup and Recovery
3.1 Terminology
3.2 The Checkpoint Procedure
3.2.1. Checkpoint Functionality
3.2.2. Activating the Checkpoint Procedure
3.2.3. Recreating the Data After a liveCache Crash
3.3 (Synchronous) liveCache Logging
3.3.1. Fundamentals of liveCache Logging
3.3.2. Scope of liveCache Logging
3.3.3. The Synchronous liveCache Logging Procedure
3.3.4. Effects of Synchronous Logging on APO Performance
3.3.5. Activating Synchronous liveCache Logging
3.3.6. When does it make sense to deactivate liveCache logging?
3.3.7. liveCache Recovery with Synchronous Logging
3.4 liveCache Backup
3.4.1. Saving a liveCache in APO 2.0
3.4.2. Saving a liveCache in APO 3.0
3.5 Archive Log Area
3.5.1. Fundamentals of the Archive Log Area
3.5.2. Activating the Archive Log Area
3.5.3. Recovery Using the Archive Log Area
3.6 Final Comments on liveCache Logging
3.6.1. Programs in the liveCache Logging Environment
3.6.2. OSS Notes for liveCache Logging
3.6.3. Outlook for Future liveCache Releases
3.7 High Availability Solution for the APO 3.0 System
3.7.1. System Requirements
3.7.2. Description of the Work Process
3.7.3. Activating the High Availability Mode
3.7.4. Additional Programs in the High Availability Solution Environment
4. Backup and Recovery of an APO System
4.1 Offline Backup
4.2 Online Backup
4.3 APO-External Data Consistency with R/3 Systems after a Crash
5. Action log
5.1 Log contents
5.2 Output of the log data
5.3 Deletion of the log data
6. Frequently Asked Questions
6.1 How can I copy a complete APO system?
6.2 Is it possible to install several liveCache instances on one computer?
6.3 Is a point-in-time recovery possible?
6.4 How does the program /sapapo/om_checkpoint_write work?

1. General Information on Data Storage in an APO System
1.1 APO Architecture
An APO system is based on a three-tier client-server architecture, just like a normal R/3 system. There is a database server, one or more application servers and several presentation servers, which are linked to the application servers. The ABAP application programs run on the application servers. Unlike an R/3 system, an APO system also has a liveCache server and an optimizer server. (The optimizer server is not relevant to liveCache recovery and will therefore not be dealt with in this document.)
In an APO system, data is stored in the APO database (abbreviated in this document as APO-DB) and in the liveCache. Both data storage systems contain Customizing data, master data and transaction data. Data is stored either redundantly (in both the APO-DB and the liveCache) or non-redundantly (in either the APO-DB or the liveCache).
The liveCache server is accessed via the application server; the APO database is accessed using Multi-DB-Connect: the R/3 kernel has a database interface that can be used to set up connections to several different databases. If an application program triggers a commit on the application server, this commit is transmitted first to the liveCache and then to the APO-DB. (A rollback occurs in the same way on both the liveCache and the APO-DB.) If an application program modifies data in both the APO-DB and the liveCache, a commit causes the changes in both data storage systems to be committed or rolled back immediately one after the other, almost simultaneously. This is, however, not a secure two-phase commit protocol.
1.2 Which Data is Stored Where?
1.2.1. Stored in the APO-DB
• Master data, such as resources, materials, PPMs, setup matrices, definitions of locations and transportation lanes, planning books and Customizing data.
• Information on transaction data, such as operation texts.
• Until APO 2.0, the header data for customer orders, transport orders, planned orders, production orders and purchase orders (as of APO 3.0A this order header data is only stored in the liveCache).
• Demand Planning transaction data, i.e. historical data, forecast data and product allocation data (as of APO 3.0A you can choose to have product allocation data stored in the liveCache).
1.2.2. Stored in the liveCache
• Master data. Not all attributes are stored in the liveCache, however; only those that are relevant to the planning routines (COM routines) within the liveCache. (Texts, such as resource names or material texts, are only needed for display purposes, not for planning functions. For this reason, they are only stored in the APO-DB and not in the liveCache.)
• Transaction data. For performance reasons, transaction data is stored predominantly in the liveCache. Examples of transaction data include the sales orders, transport orders and purchase orders together with all their items, schedule lines and stocks. Planned orders and production orders include nearly all the data that is generated upon expanding the bill of material and the routing (PPM or PPU). The components (with material, deadline and quantity), schedule lines, items and an order header for planned and/or production orders are saved in the liveCache, along with the work processes, transaction segments or phases with the corresponding capacity requirements, resources and relationships (constraints).


Data in an APO system is therefore stored either separately in the APO-DB or the liveCache, or redundantly in both. The important point here is that data must have a consistent status in both the APO-DB and the liveCache. This is especially important for the data that is stored redundantly in both. For example: a resource that is stored in the APO-DB must also exist in the liveCache, and vice versa. The non-redundant data stored in the APO-DB and the liveCache must also be logically consistent, however. (When a production order is stored in the liveCache, the corresponding texts for the order's work processes must also exist in the APO-DB.)
Consistency between the APO-DB and the liveCache will be referred to as (APO-)internal data consistency from this point onwards.
One or more OLTP systems can be linked to an APO system. In this case, the data stored in the APO system must also be consistent with the data in these OLTP systems. (An OLTP system can be an R/3 system, an R/2 system or a non-SAP system.) This type of data consistency is called (APO-)external data consistency.
Both internal and external data consistency must be guaranteed not only during normal system operation but also after a recovery. This documentation describes how to back up an APO-DB and the corresponding liveCache, so that both types of data consistency can be restored via a recovery procedure after either the APO-DB or the liveCache has crashed.
Section 2 "Backup of the APO Database" covers data backup for the APO-DB.
Section 3 "liveCache Logging, Backup and Recovery" describes recovery and backup for the liveCache.
Section 4 "Backup and Recovery of an APO System" deals with the whole scenario involving backup and recovery for a complete APO system.


2. Backup of the APO Database

The APO-DB is backed up using procedures that are commonly used, for example, in the R/3 environment. Details and manufacturers' specifications on backup and recovery of the APO-DB can be found, for example, in R/3 note 23070 and in the section "Database Administration in CCMS" of the Application Help.


3. liveCache Logging, Backup and Recovery
3.1 Terminology
Several mechanisms can be used to recover a liveCache after a crash. These are described below.
Checkpoint
It is possible to copy all data in the liveCache data cache onto a permanent storage medium (hard drive). Copying the data from the data cache (main memory) onto the hard drive (to the so-called liveCache "data devspaces") is triggered by a checkpoint. Specific liveCache tasks (called "datawriter tasks") ensure that the data of a liveCache instance is copied ("flushed") from the data cache to the local drives of the liveCache computer. After a checkpoint has completed, the contents of the data devspaces are logically consistent with the liveCache data (for further information see section 3.2 "The Checkpoint Procedure").
liveCache Logging
To be able to carry out a liveCache recovery, it is necessary to log all changes to liveCache data on a permanent medium. This process is called liveCache logging.
liveCache Backup
A liveCache backup saves the liveCache data onto an external medium (e.g. onto tape or onto the hard drive of a different computer). liveCache data can only be saved if a checkpoint has already been successfully written. Only the data belonging to the last checkpoint is copied; changes made after the checkpoint are not saved.
liveCache Recovery
liveCache recovery means restoring the consistent liveCache data as it was at the moment of the crash.
3.2 The Checkpoint Procedure
The checkpoint procedure is one of two different procedures for recovering a liveCache after a crash. (The second procedure is described in section 3.3 "(Synchronous) liveCache Logging".)
3.2.1. Checkpoint Functionality
As of release APO 1.1, copying of liveCache data to the data devspaces is supported by a checkpoint. A checkpoint can be created in either of two ways:
1. Stop the liveCache via transaction LC10 (pushbutton "Stop liveCache" in the "liveCache Administration" menu), or
2. Call the report /sapapo/om_checkpoint_write.
From a technical point of view, writing a checkpoint is carried out as follows:
After a checkpoint is requested at execution point t0, the liveCache management system (lCMS) waits until all running transactions (TA1 and TA2 in Figure 1 "Checkpoint") are completed (t1 being the point at which the last current transaction is completed). All transactions (such as TA3) that are started after execution point t0 are placed in a queue (they are not processed immediately by the lCMS).


[Figure 1 "Checkpoint": time axis with points t0, t1 and t2; running transactions TA1 and TA2 complete by t1, while TA3, started after t0, waits in the queue until t2]
From execution point t1 there are no longer any open transactions in the liveCache. The lCMS now starts to mark all the storage areas (8-kilobyte pages) that have been changed since the last checkpoint. Only a few milliseconds are needed to select the changed pages. After all the changed pages have been marked (point t2), the lCMS starts to process the transactions in the queue (in Figure 1 "Checkpoint" this would be TA3). In parallel, the lCMS starts to copy the marked pages to the hard drive. When all the marked pages have been copied to the hard drive, the checkpoint is completed. There is then a new version of consistent data (i.e. a new checkpoint) on the data devspaces. The checkpoint contains the data status at point t1.
As of APO 3.0 the APO (with the exception of the Demand Planning application (and therefore possibly Supply Network Planning)) is client-enabled. A checkpoint saves all data stored in the liveCache (i.e. for all clients). A checkpoint for individual clients is not possible.
You can use transaction LC10 as follows to display the directories (on the liveCache server) containing the devspaces:
APO 2.0:
After calling LC10, enter "LCA" in the entry field and then choose the "liveCache performance" pushbutton, followed by the "Detail analysis" pushbutton. In the following screen, choose the "Configuration" pushbutton. At the bottom of the next screen you can view the names and directories of the devspaces.
APO 3.0:
After calling LC10, enter "LCA" in the entry field and then choose the "liveCache configuration" pushbutton.
(You can reach transaction LC10 via the APO menu as follows: Tools -> APO Administration -> liveCache / COM-Routines -> Configure Monitor.) Transaction LC10 should only be used by the system administrator (as of APO 3.0 this transaction requires an authorization).
Note: What happens if a selected page is changed during a checkpoint?
If a selected page that has not yet been copied to the hard disk is changed by a new transaction (such as TA3 in Figure 1 "Checkpoint"), the lCMS copies this page before the change is carried out. The new transaction can then carry out the change on the original version of this page. The copied page is retained until the lCMS has written it to the hard disk; the copy is then deleted. The changed original is only copied to the devspaces at the next checkpoint.
For further information on checkpoints, see the liveCache documentation.
Advantages of the checkpoint procedure:
• The checkpoint can be carried out while the liveCache is running (online)
• A consistent status is copied
• Only the pages that have changed since the last checkpoint are copied to the hard disk
• The performance of the APO system is not significantly affected
• Checkpoints are written in parallel with the processing of current transactions

Disadvantages of the checkpoint procedure:
• Data changes between two checkpoints are not logged. After a system crash, it is only possible to return to the system status as it was at the last checkpoint. Changes made after the last checkpoint are lost.
• Transactions that were started before the checkpoint start (such as TA1 in Figure 1 "Checkpoint") delay the writing of the checkpoint. This in turn delays the processing of new transactions (such as TA3). This can lead to the APO experiencing related performance problems when a checkpoint is requested.
• The delay described in the previous point can lead to problems especially if a user opens a debug session before the start of the checkpoint. Because the debug session does not set any commits, the corresponding liveCache transaction remains open until the user ends the debug session. Correspondingly, the lCMS will delay the new transactions for this period of time. This causes all transactions started after the checkpoint was requested to wait until the checkpoint is completed. In a production system, this can be expected to cause all online and batch processes to "hang", since their requested tasks cannot be processed in the liveCache (they remain in the queue); they must wait until the checkpoint has selected all the data pages. This problem can only be avoided in a productive APO system if debugging is only permitted when no checkpoint needs to be written.
The program /sapapo/om_checkpoint_write therefore uses a dialog box to require all debug users to end their debug sessions immediately. If the 4.6D R/3 kernel (compatible with 4.6C) is being used in the APO system, report /sapapo/om_checkpoint_write ends all debug sessions 20 seconds after issuing the above-mentioned dialog box; the debug users thus have 20 seconds to end the debug session themselves. (The 4.6C kernel does not have the C routine required to end the debug sessions.)
3.2.2. Activating the Checkpoint Procedure
SAP recommends scheduling a checkpoint every 4 to 6 hours via background processing (transaction SM36). To do this, schedule program /sapapo/om_checkpoint_write via transaction SM36 as follows:
1. The specified target server must be an application server of the APO system.
2. The start date can be used to define the execution time and the frequency.
3. Date / Time: current date and time.
4. Select "Execute Job Periodically".
5. Use the period values to define a repetition interval of 4 to 6 hours.
Save all entries.
In the following screen, via "Steps", enter and save the ABAP program name /sapapo/om_checkpoint_write. The checkpoint procedure is now activated.
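As an alternative to the SM36 dialog, the same periodic job can be created programmatically with the standard batch job function modules JOB_OPEN and JOB_CLOSE. The following is a minimal sketch under stated assumptions: the job name Z_LC_CHECKPOINT is freely chosen, the 4-hour period follows the recommendation above, and exception handling is omitted.

* Minimal sketch: create a periodic job for the checkpoint report.
* Assumptions: job name is freely chosen; period of 4 hours as
* recommended above; exception handling omitted for brevity.
data: lv_jobname  type tbtcjob-jobname value 'Z_LC_CHECKPOINT',
      lv_jobcount type tbtcjob-jobcount.

call function 'JOB_OPEN'
  exporting
    jobname  = lv_jobname
  importing
    jobcount = lv_jobcount.

* Add the checkpoint report as the single job step.
submit /sapapo/om_checkpoint_write
  via job lv_jobname number lv_jobcount
  and return.

* Release the job: start immediately, then repeat every 4 hours.
call function 'JOB_CLOSE'
  exporting
    jobcount  = lv_jobcount
    jobname   = lv_jobname
    strtimmed = 'X'
    prdhours  = 4.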
3.2.3. Recreating the Data After a liveCache Crash
After a liveCache crash (e.g. due to a power failure), you can only restore the system back to the last checkpoint. All changes that have been made since the last checkpoint were not logged and are therefore lost. In general, this means that after a recovery, both the internal and the external data consistency of the APO are compromised. This disadvantage would not be removed even if checkpoints were set more often (e.g. every minute, or every second). Only if a checkpoint were written after every transaction would it be possible to recover the last committed status after a restart, thereby ensuring both internal and external consistency. This would drastically reduce performance, however, and the liveCache would have to run in single-user mode, since after every transaction, all new transactions would have to wait until all changed pages had been selected. SAP strongly discourages setting checkpoints every minute, let alone every second.


After a liveCache crash you have the following two options (provided you are using the checkpoint procedure and not the logging procedure described in section 3.3 "(Synchronous) liveCache Logging") for recovering the liveCache (usually with some data loss):
3.2.3.1. liveCache Restart
After the crash, the liveCache is started via transaction LC10 (not initialized). The liveCache starts on the basis of the last checkpoint. All changes that have been made since the last checkpoint are no longer stored in the liveCache and are therefore lost.
Internal data consistency (between the APO-DB and the liveCache) can be restored with the help of transaction /sapapo/om17 (menu path: Tools -> APO Administration -> liveCache / COM-Routines -> Consistency check). (You need to select all the checkboxes and then choose the "Immediate Check" and/or "Check (Batch)" pushbuttons.)

Figure 2. liveCache data consistency check


This program compares the APO-DB and the liveCache. After the liveCache restart, orders and stocks that have been created or deleted since the last checkpoint will only exist in either the APO-DB or the liveCache, because their creation/deletion is not recorded in the checkpoint. Transaction /sapapo/om17 deletes these orders. This results in a loss of data, but the APO-internal data consistency is restored. If orders or stocks were merely changed rather than created or deleted (since the last checkpoint), these objects are returned to their previous status (at the time of the last checkpoint) after the restart. Such changes are not corrected via OM17. The restoration of APO-internal data consistency is therefore only possible with certain restrictions.
As of APO 2.0A, Service Pack 5, APO-external data consistency can be checked and restored with the help of the program /sapapo/cif_deltareport. This program compares the selected R/3 system with the APO. Only transaction data is checked, however. The check only ensures that objects exist, not whether all data for all objects is identical in the R/3 system and in the APO system. If an object only exists in the R/3 system and not in the APO system, then this object can be requested from the R/3 system and posted back ("refreshed") in the APO system. The check does not display whether an object only exists in the APO system (and not in the R/3 system). (There is always the possibility, for example, that a planned order was created in the APO system but had not yet been transmitted to the R/3 system.)
In a later Service Pack (for APO 3.0) it should also be possible to selectively send orders of this type to the R/3 system. If an order has different dates in the APO system and the R/3 system, this inconsistency is neither noticed nor corrected by the program /sapapo/cif_deltareport. The restoration of external consistency is therefore only possible to a limited extent. Furthermore, the program runtime increases with the number of objects to be checked; a high data volume increases the runtime accordingly.
3.2.3.2. liveCache Initialization
The second option for restarting the liveCache after a crash (provided you are using the checkpoint procedure) is liveCache initialization with subsequent initial data provision from the OLTP system. To do this, go to transaction LC10 and choose the "liveCache Administration" pushbutton followed by the "Initialize liveCache" pushbutton. The following procedure ensues:
1. The liveCache is stopped (without writing a checkpoint).
2. The liveCache is started, i.e. the system tables are loaded and the COM routines are registered. The liveCache then contains no APO application data.
3. ABAP program /sapapo/delete_lc_anchors is called (as of 3.0A this is done for all clients defined in the APO system and for which a suitable RFC link has been created (as described in note 305634)). This is absolutely necessary. This program carries out the following steps:
3.1. Deletion of the order anchor tables /sapapo/ordkey and /sapapo/ordmap in the APO-DB. Table /sapapo/ordkey contains a data record for every single order stored in the liveCache. Table /sapapo/ordmap contains a record for every order in the liveCache that has been received from or sent to a linked OLTP system; it contains the conversion "APO-internal key to external order key" (and vice versa).
These two tables must be deleted after a liveCache initialization, because after the initialization, the liveCache will contain no orders. (An initialization therefore removes all the transaction data in the liveCache.) If the anchor tables were not deleted, then there would be an inconsistency between the liveCache and the APO-DB, because the liveCache would be empty and yet the anchor tables would still contain entries.
As of APO 3.0A both anchor tables are contained in the liveCache. This means that neither table exists after a liveCache initialization and both must therefore be recreated (as empty tables). They are no longer deleted via explicit SQL commands, but via the liveCache initialization itself.
3.2. Deletion of the contents of the remaining tables in the APO-DB that contain references to orders in the liveCache. For example, all data referring to work processes in production orders is deleted.
3.3. Loading of master data. The master data is fully stored in the APO-DB and can therefore be posted to the liveCache after initialization. Master data includes resources, product-location combinations and setup matrices. Restoring resources is especially runtime-intensive. The resources are therefore posted to the liveCache via several parallel batch jobs. Nevertheless, this process can be very time-consuming if a large number (several thousand) of resources have been created in the APO system.
3.4. After the liveCache initialization, the liveCache contains all master data again. The transaction data (for all orders and stocks) is lost, however. The transaction data therefore needs to be transferred from the linked OLTP systems to the APO. (If the OLTP system is an R/3 system, then the corresponding integration model needs to be activated.)


liveCache initialization is generally safer than the procedure described in section 3.2.3.1 "liveCache Restart".
3.3 (Synchronous) liveCache Logging
As of APO 2.0A (Service Pack 2) it is possible to use a logging procedure that allows permanent logging of all data manipulation in the liveCache.
3.3.1. Fundamentals of liveCache Logging
Synchronous liveCache logging (available for APO 2.0 as of Service Pack 2, for APO 3.0 as of Service Pack 7) makes it possible to recover a consistent system (as it was last committed just before a liveCache system crash). Recovery based on synchronous logging guarantees both internal and external data consistency (with certain restrictions: see section 3.3.2 "Scope of liveCache Logging").
3.3.2. Scope of liveCache Logging
Logging covers data from the following APO applications:
• Vehicle Scheduling (VS)
• Available-to-Promise (ATP)
• Production Planning (PP)
• Detailed Scheduling (DS)
• Capable-to-Match (CTM)
• Supply Network Planning (SNP)
Data from the application Demand Planning (DP) is not logged (as of APO 3.0, Demand Planning data can also be contained in the liveCache). If the product allocation data is stored in the liveCache (this is optional), then this data is not logged either. During a recovery, however, the product allocation requirements are read from the corresponding documents in the APO-DB and rebuilt in the liveCache. Only product allocation quotations that have been changed since the last checkpoint are lost after a liveCache system crash.
It makes limited sense to carry out logging for mass data from Demand Planning, because this data can be regenerated after a liveCache crash via the corresponding forecasting techniques and/or planning runs. Furthermore, all liveCache data is stored on the hard drive with each checkpoint. In other words, for Demand Planning only the changes carried out since the last checkpoint are lost.
Special features of synchronous logging in APO 2.0:
• In APO release 2.0A, the data from all plan versions is logged (if synchronous logging is activated).
Special features of synchronous logging as of APO 3.0:
• As of 3.0A only the data of the active plan version 000 is logged.
• Data for inactive plan versions is recorded with checkpoints. For the inactive plan versions, this means that after a liveCache system crash, all changes made after the last checkpoint will be lost. This procedure was introduced because the inactive plan versions often contain test or simulation data, which have a much lower priority than productive data in terms of data security. Furthermore, this procedure does not affect external data consistency, because the APO only exchanges data from the active plan version with the linked OLTP systems. The inactive plan versions cannot, therefore, lead to external data inconsistency. A liveCache recovery can lead to APO-internal data inconsistencies via the inactive plan versions; these inconsistencies can then be removed with transaction /sapapo/om17 (see section 3.2.3.1 "liveCache Restart").
3.3.3. The Synchronous liveCache Logging Procedure
All programs that modify (create, delete or change) transaction data (i.e. stocks, purchase orders, sales orders, production orders or transport orders) read the changed orders and/or stocks from the liveCache and copy the entire orders and/or stocks synchronously (that is, still in the same transaction) to the (liveCache) log area. When master data is changed, (as of APO 3.0A) only the references (i.e. the APO-internal keys) to the changed master data are logged in the log area.
A log area consists of the following four tables in the APO database:
• /sapapo/lc_logha (for order headers)
• /sapapo/lc_logca (cluster table for order data)
• /sapapo/lc_logfa (for fixed pegging relationships)
• /sapapo/lc_logsa (for the keys of the changed master data)
Table /sapapo/lc_logha contains a single copy of every order and/or stock that has been changed since the last checkpoint. The entries in this table contain a link to the cluster table /sapapo/lc_logca; the referenced cluster contains the full order and/or stock data. Table /sapapo/lc_logfa contains the fixed pegging relationships that have been changed since the last checkpoint. Table /sapapo/lc_logsa (only as of 3.0A) contains the APO-internal keys of the master data that was changed since the last checkpoint.
There are two log areas (each composed of 4 tables):
• Log area A: /sapapo/lc_logha, /sapapo/lc_logca, /sapapo/lc_logfa, /sapapo/lc_logsa
• Log area B: /sapapo/lc_loghb, /sapapo/lc_logcb, /sapapo/lc_logfb, /sapapo/lc_logsb
At any given time, only one log area is active (A or B). The modified transaction data and/or the keys of changed master data are always copied to the active log area. Every time a checkpoint is written, the active log area is switched: if, for example, log area A is active, then the system activates log area B (and deactivates log area A) just before the checkpoint. After the checkpoint has been recorded successfully, the program deletes the deactivated log area (the corresponding tables are "dropped"). This ensures that the log areas do not grow too big. For this reason, it is very important to periodically schedule program /sapapo/om_checkpoint_write for recording checkpoints.
Recovering the last committed status of the liveCache after a crash is only possible within the restrictions mentioned in section 3.3.2 "Scope of liveCache Logging"; in other words, certain data (changes) cannot be recovered. These restrictions do not, however, prevent the liveCache recovery of all the data relevant to the productive running of an APO system.
3.3.4. Effects of Synchronous Logging on APO Performance
The advantage of synchronous liveCache logging, compared to the checkpoint procedure, is that no data is lost after a liveCache crash. The disadvantage is the reduced performance of transactions that change data in the liveCache.
A transaction lasts longer because it additionally reads the changed transaction data (orders and stocks) and copies this data to the APO-DB. The extent to which performance is reduced depends on several factors, including the speed of the hard drives used by the APO-DB and the percentage of changing transactions (as opposed to those that just read the data); synchronous liveCache logging reduces performance only for write accesses to the liveCache, not for read accesses. The loss of performance also depends heavily on the (average) order size: it tends to grow with the number of schedule lines and items (contained in purchase orders, sales orders and transport orders) as well as with the number of components and work processes contained in production orders.
As far as performance is concerned, it should also be noted that the transactions from the areas PP and DS generally work on simulated transactions (without the user actually noticing this). The main point here is that all changes that are made during the PP/DS transactions are only posted permanently to the liveCache after they have been saved at the end of the transaction. Logging is then only carried out if the user saves the changes at the end of the transaction. This means, for example, that the performance of a planning table is only adversely affected by synchronous liveCache logging when the user saves (not during the separate user interactions in the planning table).
Where small orders are involved (an average of 5 schedule lines, 1 - 3 work processes, 5 components), the performance reduction in an SAP-tested benchmark averaged 10% to 20% (these values refer only to the liveCache accesses). As mentioned above, these values depend on several factors, so it is impossible to draw any definitive conclusions for a particular APO system.
The transactions of the application Demand Planning are not influenced by the logging. The optimizers for Detailed Scheduling (DS), Capable-to-Match (CTM) and Supply Network Planning (SNP) are not affected during their runtime, but only when the optimization results are posted to the liveCache. This means that synchronous logging only causes reductions in performance for an APO system during transactions from the areas of Production Planning (PP), DS and CTM. The "Available-to-Promise" application is only negatively influenced when planned orders are created within the framework of a multi-level ATP check; the standard availability check is not influenced by synchronous logging.
3.3.5. Activating Synchronous liveCache Logging
Before liveCache logging is activated, report /sapapo/om_checkpoint_write must be periodically scheduled as described in section 3.2.2 "Activating the Checkpoint Procedure". The time interval between two checkpoints should be set so that the number of log records in tables /sapapo/lc_logha and/or /sapapo/lc_loghb does not exceed 100,000 to 150,000. (With this number of log records, recovery generally does not take longer than 30 minutes.) The administrator must therefore make a (rough) estimate of how long the productive system takes, on average, to produce around 100,000 to 150,000 modified (created, changed or deleted) orders and stocks. However, the time interval should not be longer than 6 hours.
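As a worked example with hypothetical numbers: if a productive system modifies roughly 30,000 orders and stocks per hour, a checkpoint interval of 4 hours yields around 120,000 log records, which stays within the recommended range. The current fill level of the log areas can be checked with a small report; the following minimal sketch simply counts the records in the two header tables named above (the report name is a hypothetical example; which area is currently active can be determined with /sapapo/om_lc_logarea_check, see section 3.6.1):

* Hypothetical monitoring report: counts the log records in both header
* tables. The count of the currently active log area should stay below
* the recommended ceiling of 100,000 to 150,000 records.
report z_lc_log_fill.

data: lv_count_a type i,
      lv_count_b type i.

select count( * ) from /sapapo/lc_logha into lv_count_a.  " log area A
select count( * ) from /sapapo/lc_loghb into lv_count_b.  " log area B

write: / 'Records in /sapapo/lc_logha:', lv_count_a,
       / 'Records in /sapapo/lc_loghb:', lv_count_b.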
3.3.5.1. Procedure in APO 2.0
In APO 2.0, synchronous liveCache logging is deactivated by default. It can be activated with the following modification of an ABAP Include. No users are permitted to work in the APO system while the changes are being carried out.
In Include /sapapo/om_livecache_log, use transaction SE38 to replace the following line
gc_liveCache_log type int4 value 0.
with
gc_liveCache_log type int4 value 1.
and activate the Include.
Note: This is a modification of the Include. You should therefore make sure that the installation of subsequent Support Packages does not overwrite it; this would cause liveCache logging to be deactivated again.
Synchronous liveCache logging can be deactivated again by setting the variable gc_liveCache_log to 0. (This also requires that no users are working in the APO system.)
3.3.5.2. Procedure as of APO 3.0
In APO 3.0A, logging can be activated and deactivated via transaction /sapapo/om06 (which calls function module /sapapo/om_lc_logging_set).
Menu path:
Tools -> APO Administration -> liveCache / COM-Routines -> Logging-Level
After calling this transaction, a screen appears in which synchronous liveCache logging can be activated or deactivated using radio buttons. After the user has saved the setting, liveCache logging is activated/deactivated only for the client in which transaction /sapapo/om06 was called.


3.3.6. When does it make sense to deactivate liveCache logging?


If you are copying all APO-relevant data from a linked OLTP system into the APO system (initial data transfer), it is sufficient to write a checkpoint after this initial data provision and only then to reactivate liveCache logging. This procedure provides the best performance when posting the typically large data quantities from the OLTP system.
Logging can also be activated/deactivated via function module /sapapo/om_lc_logging_set (for the corresponding logon client).
When calling function module /sapapo/om_lc_logging_set, set the import parameter IV_NEW_LOGGING_LEVEL to 1 in order to deactivate liveCache logging; setting IV_NEW_LOGGING_LEVEL to 2 activates synchronous logging.
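A minimal sketch of this call for the scenario described above, using the parameter values named in the text (the function module acts on the logon client; any exceptions it may raise are ignored here):

* Deactivate synchronous logging before a mass initial data transfer ...
call function '/SAPAPO/OM_LC_LOGGING_SET'
  exporting
    iv_new_logging_level = 1.   " 1 = deactivate liveCache logging

* ... transfer the data from the OLTP system and write a checkpoint ...

* ... then reactivate synchronous logging.
call function '/SAPAPO/OM_LC_LOGGING_SET'
  exporting
    iv_new_logging_level = 2.   " 2 = activate synchronous logging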
3.3.7. liveCache Recovery with Synchronous Logging
liveCache recovery requires the following:
1. The liveCache has actually crashed ("red light" in transaction LC10).
2. Synchronous logging was activated before the crash and checkpoints were written at regular intervals.
3. Any hardware or connection errors have been rectified.
liveCache recovery is started via program /sapapo/om_lc_recovery. This program carries out the following steps (partly in the background):
1. Deallocation of the periodically scheduled batch report (scheduled by the administrator) that writes checkpoints via program /sapapo/om_checkpoint_write.
2. Restart of the liveCache (based on the last successfully written checkpoint).
3. Transfer of data from the currently active log area into the liveCache. To do this, the program first reads the keys of the changed master data from the log area and then collects the corresponding data from the relevant master data tables in the APO-DB. It then posts the master data to the liveCache. This is possible for master data because the master data segments relevant to the liveCache are stored in both the APO-DB and the liveCache. It is not possible for transaction data, because transaction data is mainly stored only in the liveCache. For this reason, the transaction data must be fully logged, while the APO-internal key is sufficient for master data. After posting the master data, all changes to transaction data (orders, stocks and fixed pegging relationships) are read from the last active log area and copied to the liveCache. In the last phase, master data that was deleted since the last checkpoint and before the liveCache crash is deleted in the liveCache.
4. Activation of the CIF output queues in the linked R/3 systems. (CIF stands for Core Interface. This designates the software that guarantees the real-time integration of R/3 and APO.) After the liveCache crash, the output queues in the R/3 system receive an error message that causes these queues to be frozen. For this reason, the CIF queues are reactivated after successful recovery.
5. Activation of the CIF output queues in the APO system. A liveCache crash normally does not cause the output queues in the APO system to be frozen. As an added precaution, the recovery program activates these queues too.
6. Program /sapapo/om_lc_recovery sends the user a high-priority email as soon as liveCache recovery is completed.
7. In addition, the system administrator needs to:
• check the job log for possible errors,
• check the application log for errors via transaction /sapapo/om09, and
• carry out general tests (calling up some of the APO transactions) to ensure that the system is functioning correctly.


8. After the administrator is certain that recovery has been successfully completed, he must reschedule report /sapapo/om_checkpoint_write to run at regular intervals, as described in section 3.2.2 "Activating the Checkpoint Procedure".
At the next checkpoint, the data from the active log area is deleted and is then no longer available for recovery!
Internal data consistency for non-active plan versions can be recreated after a liveCache recovery using transaction /sapapo/om17.
If the liveCache is started via LC10 ("Start liveCache" pushbutton) after a crash, the liveCache restarts without triggering a recovery. This runs the risk of data being read and changed by application transactions before a recovery is triggered. The user can avoid this by maintaining the liveCache LCA connection in LC10 and entering report /sapapo/om_lc_restart_prep as a preparation report for the liveCache start. To do this:
1. Call transaction LC10.
2. Enter the literal "LCA" in the "Logical connection name" field.
3. Choose the "liveCache" pushbutton.
4. Choose "liveCache: create/delete/change" (on the far left).
5. Confirm your choice in the "Start liveCache" dialog box.
6. Enter report name /sapapo/om_lc_restart_prep in the "Preparation" entry field.
This causes report /sapapo/om_lc_restart_prep to be called every time before the liveCache is started. This report checks whether the liveCache had previously experienced an abnormal termination. If this is the case, the report issues a dialog box in which the user must decide whether a liveCache recovery should be carried out before starting the liveCache.
3.4 liveCache Backup
To create a backup of a liveCache, the liveCache data devspaces are saved onto an external
storage medium.
3.4.1. Saving a liveCache in APO 2.0
Up to APO 2.0A, the liveCache devspaces (i.e. the last written checkpoint) can only be saved if the liveCache is in "offline" mode (red light in LC10). When the liveCache is offline, all data, log and system devspaces are saved using operating system tools.
3.4.2. Saving a liveCache in APO 3.0
As of APO 3.0A, the data written by the checkpoint to the devspaces can be saved online. This can only be carried out after a successful checkpoint, however. For this purpose, report /sapapo/om_checkpoint_write provides the parameters "lcbackup" and "media". If lcbackup is set to "X" and parameter "media" specifies the medium to be used for the save, then the report saves the liveCache data devspaces (after a successful checkpoint) to the medium specified. The medium must be set up beforehand in the liveCache administration tool "Database Manager".
Report /sapapo/om_checkpoint_write is called for this purpose as follows:
submit /sapapo/om_checkpoint_write
with lcbackup eq 'X' sign 'I'
with media eq '<media-name>' sign 'I'
and return.
The two parameters can also be supplied via a suitable report variant if you want to schedule the report as a batch program.
Report /sapapo/om_checkpoint_write provides a Business Add-In (a type of customer exit). The APO user can use this Business Add-In to program user-specific logic that will run after a successful checkpoint. To do this, implement the DEV_SPACE_BACKUP method of the Business Add-In as follows:


1. In transaction se19 enter the so-called “implementation name”. This name is freely defin-
able.
2. A dialog box appears, in which you enter the corresponding “definition name”. In our
case, this is “/SAPAPO/OM_BADI_01”. (This definition name is defined by SAP.)
3. In the next dialog box, enter the required development class, if necessary.
4. The next step takes you to the “Badi-Builder”. You can view the characteristics and the
interface of the Badi.
5. Under the “Interface” header, you can now double-click on the method
“DEV_SPACE_BACKUP” to get to the editor for this method. This is where you can carry
out various ABAP coding tasks such as calling the function module /sapapo/om_lc_save
(to trigger saving of the data).
6. Save and activate the changed methods.
(Transaction SE19 contains detailed documentation on Business Add-Ins.)
The coding implemented in the Business Add-In is called by report
/sapapo/om_checkpoint_write after the checkpoint has been written successfully. If the busi-
ness add-in is filled with coding and if report /sapapo/om_checkpoint_write is also called with
parameter lcbackup = “X”, then both the Business Add-In is run, as well as the saving of the
data devspaces! If the checkpoint could not be successfully copied, then the Business Add-In
is not executed.
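A minimal sketch of such an implementation is shown below. The interface name is an assumption derived from the definition name /SAPAPO/OM_BADI_01 (verify the actual name in the BAdI Builder), and any parameters of /sapapo/om_lc_save are omitted; this is a sketch, not the definitive implementation:

* Hedged sketch of the DEV_SPACE_BACKUP method. The interface name
* /SAPAPO/IF_EX_OM_BADI_01 is an assumption; check it in transaction SE19.
method /sapapo/if_ex_om_badi_01~dev_space_backup.
  " Runs after a successful checkpoint; trigger the devspace backup.
  call function '/SAPAPO/OM_LC_SAVE'.
endmethod.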
At the beginning of program /sapapo/om_checkpoint_write, an enqueue lock is set and only removed at the end of the program. This prevents two transactions from requesting checkpoints or saving the data devspaces at the same time.
When scheduling report /sapapo/om_checkpoint_write periodically, you should make sure that the period of time between two successive runs is long enough for the checkpoint to be written and for the liveCache data to be saved. (SAP recommends, however, that checkpoints be kept at 4 to 6 hour intervals; this period should be sufficient in any event.)
Section 4 "Backup and Recovery of an APO System" describes how to create a backup for the entire APO system.
3.5 Archive Log Area
3.5.1. Fundamentals of the Archive Log Area
If liveCache logging is activated, then the contents of the last active log area are deleted after each checkpoint. As of APO 2.0A, Service Pack 6, it is possible to copy the data from the last active log area to the so-called archive log area before deletion.
This procedure could be used in the following example scenario:
Let us assume that a checkpoint is written every 6 hours (i.e. at 0000 hours, 0600 hours, etc.) and that every night at 0000 hours the liveCache data is saved. If the liveCache crashes at 0800 hours, it is possible to recover the liveCache from the checkpoint at 0600 hours and the last active log area. When using the archive log area, it is also possible to recover the liveCache with the checkpoint from 0000 hours, along with the archive log area and the last active log area. If, for whatever reason (such as a defective hard drive), the checkpoint from 0600 hours is no longer available, this is the only way to recover the liveCache. The archive log area therefore provides an additional level of protection against data loss.
The archive log area is composed of the following 4 tables in the APO-DB:
• /sapapo/lc_loghz (for order headers)
• /sapapo/lc_logcz (cluster table for order data)
• /sapapo/lc_logfz (for fixed pegging relationships)
• /sapapo/lc_logsz (for the keys of changed master data)
Please note that when the archive log area is used, all data from log area A or B is copied to the archive log area at each checkpoint. Depending on the data volume, this can increase the runtime of report /sapapo/om_checkpoint_write accordingly. It should also be noted that the tables in the archive log area can take up a great deal of space in the APO-DB, depending on data volume. This is especially true if the archive log area is not deleted often enough.
The archive log area should therefore be deleted regularly - at least every 24 hours (more often if required). Report /sapapo/om_checkpoint_write provides parameter "archdel" (archive log area delete) for this purpose. If archdel = "X", then report /sapapo/om_checkpoint_write deletes the archive log area after the checkpoint has been successfully written and after the saving (if any) of the liveCache data devspaces. Report /sapapo/om_checkpoint_write needs to be scheduled for this with the appropriate variant. Alternatively, it can be called as follows (only in APO 3.0):
submit /sapapo/om_checkpoint_write
with archdel eq 'X' sign 'I'
and return.
If, after a successful checkpoint, you want to save the liveCache data and then delete the archive log area, the report must be called as follows (compare with section 3.4.2 "Saving a liveCache in APO 3.0") (only in APO 3.0):
submit /sapapo/om_checkpoint_write
with archdel eq 'X' sign 'I'
with lcbackup eq 'X' sign 'I'
with media eq '<media-name>' sign 'I'
and return.
The archive log area can also be deleted by calling report /sapapo/om_archive_logarea_del. This has the drawback that the deletion is not carried out immediately after a checkpoint and the saving of the liveCache data. Only by deleting the archive log area via report /sapapo/om_checkpoint_write are the writing of a checkpoint, the saving of the data devspaces and the deletion of the archive log area combined in a single process that is protected by the enqueue lock. If the writing of the checkpoint or the saving of the liveCache data is unsuccessful, report /sapapo/om_checkpoint_write does NOT delete the archive log area, as deleting it would then be extremely undesirable. Users who program their own deletion of the archive log area via report /sapapo/om_archive_logarea_del should take such eventualities into consideration. SAP therefore recommends that report /sapapo/om_archive_logarea_del be used only as required, at the user's own risk.
The tables in the archive log area should be suitably configured in the DDIC. Via "Technical Settings" in the DDIC, it is possible to configure the size of the tables in accordance with the corresponding application scenario.
3.5.2. Activating the Archive Log Area
In APO 2.0A, the system uses the archive log area if you change the coding line (mentioned in section 3.3.5 "Activating Synchronous liveCache Logging") of Include /sapapo/om_livecache_log as follows:
gc_liveCache_log type int4 value 2.
As of APO 3.0A it is possible to activate use of the archive log area via transaction /sapapo/om06: select the option "archive log data".
Usage of the archive log area can be changed dynamically, just like logging itself (see section 3.3.5 "Activating Synchronous liveCache Logging"), using function module /sapapo/om_lc_logging_set. The optional parameter IV_USE_ARCHIVE_LOGAREA should be set as follows:
• IV_USE_ARCHIVE_LOGAREA = 0: The current configuration for archive log area usage is not changed.
• IV_USE_ARCHIVE_LOGAREA = 1: Usage of the archive log area is started immediately.
• IV_USE_ARCHIVE_LOGAREA = 2: Usage of the archive log area is stopped immediately.
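A minimal sketch combining both settings in one call (parameter names and values as listed above; any exceptions of the function module are ignored here):

* Activate synchronous logging and start using the archive log area.
call function '/SAPAPO/OM_LC_LOGGING_SET'
  exporting
    iv_new_logging_level   = 2    " activate synchronous logging
    iv_use_archive_logarea = 1.   " start using the archive log area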


3.5.3. Recovery Using the Archive Log Area


If the archive log area is used, report /sapapo/om_lc_recovery issues a dialog box in which the user decides whether to include the data from the archive log area in the recovery. If report /sapapo/om_lc_recovery is called as a batch job, this dialog box does not appear and the data from the archive log area is not included in the recovery. The data from the archive log area is therefore only used if the user explicitly requests this. Note that the runtime of the recovery program can be considerably longer if the archive log area is used for the recovery, because generally much more data has to be posted to the liveCache.
3.6 Final Comments on liveCache Logging
3.6.1. Programs in the liveCache Logging Environment
As mentioned in section 3.3.5 “Activating Synchronous liveCache Logging”, transaction
/sapapo/om06 (report /SAPAPO/OM_LC_LOGGING_SET) allows you to dynamically acti-
vate or deactivate liveCache logging (as of APO 3.0A). Usage of the archive log area can
also be activated and deactivated.
Program /sapapo/om_lc_logarea_check checks the log areas. A small log is created here
too, containing data on which log area is active, and which log area contains data. Informa-
tion on the clients for which logging is activated and which log area is active can be found in
the APO-DB and in the liveCache. If this data is inconsistent for any reason, report
/sapapo/om_lc_logarea_check tries to remove this inconsistency by copying data from the
APO-DB to the liveCache. (A message is issued if this occurs.)
The report also makes it possible to delete the inactive log area and the archive log area.
Program /sapapo/om_checkpoint_unplan (as of APO 3.0) unplans the periodically sched-
uled batch job that requests checkpoints. This program could be especially interesting for
customer-specific solutions. After this program has been run, however, it will be necessary to
ensure that report /sapapo/om_checkpoint_write is correctly scheduled again.
If, as described in section 3.2.3.2 “liveCache Initialization”, the liveCache is initialized via
transaction LC10, the program deletes log areas A and B, as well as the archive log area
after the initialization of the liveCache. The last checkpoint is also deleted during liveCache
initialization. After liveCache initialization, all log data and the checkpoint are lost. A recovery
is therefore no longer possible!
3.6.2. OSS Notes for liveCache Logging
Note 185220 describes synchronous liveCache logging for APO 2.0. The present document replaces this note for release APO 3.0.
3.6.3. Outlook for Future liveCache Releases
liveCache logging is a form of logging that is carried out by the application programs, unlike in traditional database systems, where logging is carried out by the data storage system (which in this case would be the liveCache itself). To be more precise, the logging process is implemented in the function modules of the development class /sapapo/om. This is the APO development class at the deepest level of the software hierarchy. (The higher-level development classes contain the APO applications such as PP, DS, CTM, ATP, etc.) The function modules in development class /sapapo/om call the COM routines in the liveCache. The COM routines report which orders have been changed back to the function modules in class /sapapo/om. The relevant /sapapo/om module then reads the changed orders from the liveCache and writes them to the currently active log area.
In a later release of the liveCache it will be possible to configure the liveCache so that it carries out logging itself, just like a conventional database. The recovery procedure will be different for this variant, and the log areas mentioned above will then no longer be used. SAP currently (as of November 2000) plans to deliver this variant in the middle of 2001 as part of APO release 3.0.
3.7 High Availability Solution for the APO 3.0 System
3.7.1. System Requirements
• The liveCache disks must be mirrored.
• Under NT: a Microsoft cluster, i.e. two liveCaches, of which only one is active while the
  other is offline. (A cluster can also contain more than two liveCache instances; as a rule,
  only two instances are defined in the cluster.) All liveCache instances of the cluster run
  under the same name (i.e. the same IP address) and with exactly the same configuration.
• APO 3.0, Service Pack 7 or higher.
3.7.2. Description of the Work Process
It is assumed that the cluster includes the two liveCache instances 1 and 2. Instance 1 is
active, the liveCache for instance 2 is not active. In other words, the computer is switched on,
but the liveCache is not activated. Both liveCache instances access the same hard drives. In
other words, the data devspaces can be accessed by all the cluster liveCache computers.
The cluster software recognizes when either the active liveCache 1 or the computer on which
it is installed is no longer available. If this happens, the cluster software switches to the
hitherto inactive liveCache 2 and activates it. Because instance 2 has the same external IP
address as instance 1, the applications do not notice that another liveCache has been
started. For a short period during the switch to the inactive instance, the applications cannot
access the liveCache cluster.
liveCache 2 is started on the basis of the checkpoint that was last written successfully by
instance 1. It then switches to a special mode (“write transaction disabled”) that prevents
change transactions from being processed. This mode is maintained until the recovery in
liveCache 2 has been completed.
A CCMS-controlled program checks every 5 minutes to determine whether the liveCache is
in the “write transaction disabled” mode. If this is the case, the CCMS creates the R/3 event
SAP_APO_LIVECACHE_DISABLED. The APO system administrator must use transaction
SM36 to schedule report /sapapo/om_lc_restart_ccms so that this report is executed as soon
as the CCMS creates this event. Report /sapapo/om_lc_restart_ccms then calls report
/sapapo/om_lc_recovery, which actually carries out the recovery (described above). Before
the recovery is started, the program checks whether the liveCache actually crashed or was
simply stopped in the normal fashion (via transaction LC10); if the liveCache was stopped via
LC10, it has recorded this shutdown information. If the liveCache was terminated abnormally,
the recovery program continues with its operations; otherwise it stops and issues a
corresponding message.
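For illustration, the following sketch models the control flow just described. It is an
assumption, not SAP code: livecache_mode(), raise_r3_event(), was_clean_shutdown() and
run_recovery() are hypothetical stand-ins for the CCMS check, the R/3 event and the APO
reports:

    import time

    def ccms_watchdog(livecache_mode, raise_r3_event, interval_s=300):
        # Poll the liveCache every 5 minutes and raise the R/3 event if
        # it is running in the "write transaction disabled" mode.
        while True:
            if livecache_mode() == "write transaction disabled":
                raise_r3_event("SAP_APO_LIVECACHE_DISABLED")
            time.sleep(interval_s)

    def on_livecache_disabled(was_clean_shutdown, run_recovery, log):
        # Logic of /sapapo/om_lc_restart_ccms: only recover after a crash.
        if was_clean_shutdown():   # liveCache was stopped normally via LC10
            log("liveCache was stopped normally; no recovery required")
            return
        run_recovery()             # calls /sapapo/om_lc_recovery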
3.7.3. Activating the High Availability Mode
To activate the high availability solution, you must carry out the following steps in the APO system:
1. Activate synchronous liveCache logging (see above).
2. Via transaction SE38, execute reports /SAPAPO/OM_RSDS_LC_HEARTBEAT and
   /SAPAPO/OM_RSDS_LC_STATUS once each. These reports ensure that the CCMS
   checks every 5 minutes whether the liveCache is running in the “write transaction
   disabled” mode and, if so, triggers the corresponding event.
3. Via transaction SM36 (menu path: System -> Services -> Jobs -> Job Definition), sched-
   ule job SAP_APO_LIVECACHE_RESTART with reference to event
   SAP_APO_LIVECACHE_DISABLED. To do this, enter
   SAP_APO_LIVECACHE_RESTART in the “Job name” field, “A” in the “Job class” field,
   and a target server. Under “Start conditions”, choose the “By event” pushbutton and
   then enter the event name SAP_APO_LIVECACHE_DISABLED in the “Event” field. Fi-
   nally, save the data. (A sketch of the resulting job definition follows this list.)
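For illustration only, the job definition from step 3 can be summarized as the following data
structure (the field names are hypothetical, not actual SM36 parameters):

    # Hypothetical summary of the event-triggered job from step 3.
    restart_job = {
        "job_name": "SAP_APO_LIVECACHE_RESTART",
        "job_class": "A",
        "target_server": "<target server>",  # to be filled in for your system
        "start_condition": {"by_event": "SAP_APO_LIVECACHE_DISABLED"},
        "step_report": "/sapapo/om_lc_restart_ccms",
    }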
Once these steps have been carried out, a liveCache crash automatically causes the system
to switch to the second liveCache instance of the cluster and then triggers a liveCache re-
covery.
If the liveCache of your APO system is running on a UNIX operating system, you need to set
up a UNIX cluster instead. Under UNIX, the cluster triggers services via a shell script, which
is supplied by your hardware partner; the liveCache is therefore started automatically when
the cluster calls this script during a switch. The following commands are required in order to
start the liveCache during a cluster switch under UNIX:
x_server start
dbmcli -d <LC name> -u control,control db_warm
3.7.4. Additional Programs in the High Availability Solution Environment
Program /SAPAPO/OM_LC_RESTART_ENABLE switches the liveCache from the “write
transaction disabled” mode to the “write transaction enabled” mode. In the enabled mode,
change transactions are processed again. You would only need to call this program if (due to
some error) the liveCache did not automatically change to the “write transaction enabled”
mode when started.
4. Backup and Recovery of an APO System
Section 4 “Backup and Recovery of an APO System” describes how to carry out a backup of
an APO system. A distinction is drawn between an offline backup and an online backup.
4.1 Offline Backup
To carry out an offline backup, you should first stop the CIF queues and then all application
servers of the APO system. You then need to create a backup of the APO-DB and the live-
Cache. To carry out the backup of the APO-DB, refer to the documentation for the correspond-
ing database system. To create a backup of the liveCache, first stop the liveCache via trans-
action LC10. This causes a checkpoint to be written. You can then save the liveCache data
to a storage medium.
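The offline backup sequence can be summarized in the following sketch; each function is a
hypothetical stand-in for the corresponding administrative action described above, not SAP
code:

    def offline_backup(stop_cif_queues, stop_app_servers, backup_apo_db,
                       stop_livecache_via_lc10, save_livecache_data):
        # Sketch of the offline backup sequence; the order of the steps
        # is what matters.
        stop_cif_queues()           # no more data arrives from the R/3 systems
        stop_app_servers()          # no open transactions remain in APO
        backup_apo_db()             # per the database vendor's documentation
        stop_livecache_via_lc10()   # stopping via LC10 writes a checkpoint
        save_livecache_data()       # copy the liveCache data to a medium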
An offline backup guarantees that the saved APO-DB and liveCache are consistent: in both
data storage systems, no transactions were open at the time of saving, and no change trans-
actions were carried out while the two data stocks were being saved. An offline backup
therefore guarantees APO-internal data consistency. It is also possible to load the saved
data onto another APO system.
4.2 Online Backup
An online backup saves the APO-DB and the liveCache without having to stop the CIF
queue, the application server, the APO-DB or the liveCache.
To create an online backup of an APO system, you must first save the APO-DB. You then
call report /sapapo/om_checkpoint_write, which writes a checkpoint and triggers the saving
of the liveCache data (see section 3.4.2 “Saving a liveCache in APO 3.0”). The saved ver-
sions of the APO-DB and the liveCache created in this way are not consistent. (Any number
of changes to the APO-DB and the liveCache could take place between execution time t1, to
which the saving of the APO-DB refers, and execution time t2, to which the checkpoint refers.) For
this reason, an online backup does not guarantee APO-internal data consistency. Therefore,
an online backup cannot be used to copy data from one APO system to another. An online
backup does, however, allow disaster recovery. In other words, it can be used after an ex-
treme system termination of the APO-DB or of the liveCache (a disk crash or similar).
If the APO-DB needs to be recovered after a disk crash, then the online backup may be used
– in accordance with the documentation from the database manufacturer – to recover it. It
has to be recovered to the last committed system status. A recovery to an earlier point in
time would cause APO-internal data inconsistency, because it is not possible to reset the
liveCache.
If disaster recovery needs to be carried out for the liveCache (if, for example, a disk crash
causes the liveCache instance for the last checkpoint to be lost), then the online save of the
liveCache must first be loaded into the (repaired) liveCache instance. A liveCache recovery
using the archive log area must then be carried out (see section 3.5 “Archive Log Area”).
The following is an example scenario for liveCache disaster recovery.
1. Online save of the APO-DB at 2330 hours.
2. Online save of the liveCache data by calling report /sapapo/om_checkpoint_write after
successfully saving the APO-DB (e.g. at 0000 hours). Report
/sapapo/om_checkpoint_write needs to be called so that the liveCache data is saved and
the archive log area is selected (see section 3.5.1 “Fundamentals of the Archive Log
Area”).
3. A checkpoint is written every 6 hours. The next checkpoint after the online save is therefore
   written at 0600 hours (without saving the liveCache data and without deleting the archive
   log area).
4. If the disk drives of the liveCache crash at 0800 hours, they first need to be replaced.
   Afterwards, the online save from 0000 hours can be loaded onto the liveCache instance
   and report /sapapo/om_lc_recovery can be called. This report starts the liveCache on the
   basis of the checkpoint from 0000 hours and then carries out a recovery using the
   archive log area, which brings the liveCache to its status at 0600 hours. Finally, report
   /sapapo/om_lc_recovery starts a recovery on the basis of the active log area, thereby
   bringing the liveCache up to the last committed status before the crash at 0800 hours.
Disaster recovery therefore requires the use of the archive log area. It makes sense to delete
the archive log area after a successful online save of the APO system. (See section 3.5.1
“Fundamentals of the Archive Log Area”).
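The recovery part of this scenario can be summarized in the following sketch; all functions
are hypothetical stand-ins for the steps described above, not SAP code:

    def livecache_disaster_recovery(load_online_save, start_from_checkpoint,
                                    replay_archive_log, replay_active_log):
        # Sketch of the scenario: restore the online save, then roll
        # forward through both log areas.
        load_online_save()        # restore the devspaces saved at 0000 hours
        start_from_checkpoint()   # /sapapo/om_lc_recovery starts here
        replay_archive_log()      # brings the liveCache to the 0600 status
        replay_active_log()       # last committed status before the crash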
4.3 APO-External Data Consistency with R/3 Systems after
a Crash
There are several causes for an APO system crash. For example, the APO-DB, the live-
Cache, the application server or the data connection between the APO system and the con-
nected OLTP systems may become unavailable. If the APO system, for whatever reason,
becomes unavailable for a certain period of time, the connected OLTP systems can continue
to function. (Only ATP checks cannot be processed, because they require a link from the R/3
system to the APO system.)
If the connected system is an R/3 system, then all data that the R/3 system sends to the
APO system is placed in a CIF output queue. (The data is therefore stored temporarily in the
database of the R/3 system.) If the APO system is unavailable, then the processing of this
CIF queue is stopped. As soon as the APO system is available again, the CIF queues must
be activated. This causes all the data in the queues to be transmitted to the APO system
('delayed update'). Assuming that a recovery of the APO-DB or the liveCache after an APO
system crash restores the system to the status it had just before the crash, APO-external
data consistency is reestablished once the CIF output queue has been posted.
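As a rough illustration, the following sketch models the delayed update via the CIF output
queue; apo_available() and send() are hypothetical stand-ins, and the actual CIF
implementation differs in detail:

    from collections import deque

    def cif_delayed_update(changes, apo_available, send):
        # Changes are buffered in the R/3 database and transmitted in
        # their original order once the APO system is reachable again.
        queue = deque(changes)
        while queue and apo_available():
            send(queue.popleft())
        return queue  # unsent changes wait until the queues are reactivated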
5. Action log
5.1 Log contents
APO release 3.0 SP8 introduces action logging. The following information is logged:
• Checkpoint – general information
• Checkpoint – client-specific information
• liveCache backup
• Logging changes
• Recovery – general information
• Recovery – client-specific information
• Initialization
• Deletion of the archive log area
Start time, duration, user, client, messages, etc. are logged. The number of orders
processed during an event is also logged, provided that it can be determined without
additional overhead.
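An action log record might therefore contain fields such as the following. This is a
hypothetical illustration based on the fields listed above, not the actual table layout:

    action_log_record = {
        "event": "Checkpoint",        # or backup, recovery, initialization, ...
        "start_time": "2000-12-21 06:00:00",
        "duration_seconds": 42,
        "user": "<user>",
        "client": "001",
        "messages": [],
        "orders_processed": None,     # only filled if cheap to determine
    }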
5.2 Output of the log data
The log data can be output with transaction /SAPAPO/OM11. This transaction requires that a
date be entered. All log data created on or after this date is output.
A saved layout can be used to switch between the display of the old and new lists. The standard
functionality of the ABAP List Viewer (ALV) is available within the list view.
A visual alert will be generated for important events (such as errors).
5.3 Deletion of the log data
The log data can be deleted with transaction /SAPAPO/OM12. This transaction requires that
a date be entered. All log data created before this date will be deleted.
6. Frequently Asked Questions
6.1 How can I copy a complete APO system?
See OSS note 0210564 and section 4.1 “Offline Backup”.
6.2 Is it possible to install several liveCache instances on
one computer?
Technically, this is possible. As of liveCache Version 7.2.4, Build 5, several liveCache
instances (even of different versions) can run on one computer. However, since this makes a
meaningful distribution of system resources almost impossible, SAP strongly discourages it.
This question is often asked because users want to run not only the live APO system, but
also a test APO system and a quality APO system, and they want to use as few computers
as possible for the test and quality systems. Instead of installing the liveCache for the test
system and the liveCache for the quality system on dedicated computers, it is better to install
the complete test system and the complete quality system on one computer each. Each of
these computers then hosts the database, the application server, and the liveCache. Of
course, this only works if a low data volume is stored on these systems, and only if very few
users access them.
6.3 Is a point-in-time recovery possible?
The synchronous logging procedure makes it possible to recover the liveCache to the last
committed system status. Recovery to a particular point in the past is not possible.
6.4 How does the program /sapapo/om_checkpoint_write
work?
Program /sapapo/om_checkpoint_write completes the following tasks in the sequence speci-
fied:
1. Setting of an enqueue lock, which prevents other users from running this program at the
   same time.
2. Switching to the inactive log area. (If log area A is active, log area B is activated and A is
   deactivated. This only occurs if liveCache logging is activated for at least one client in the
   APO system.)
3. Starting of the checkpoint. The liveCache waits until all running transactions have been
   completed; newly started transactions are queued by the liveCache in the meantime.
   If the checkpoint cannot be written successfully, the program terminates with a corre-
   sponding error message. (A checkpoint may fail to be recorded if running transactions do
   not end within a certain period of time (request timeout). The request timeout is a
   configurable liveCache parameter and should be set to 60 minutes.)
4. After the checkpoint has been successfully recorded, the data from the old log area is
   copied to the archive log area, if the archive log area is activated.
5. Deletion of the old log area.
6. Calling of the Business Add-Ins (customer exit).
7. Saving of the liveCache data devspaces, if the parameter “lcbackup” is set to ‘X’ and the
   storage medium to be used is specified in the “media” parameter.
8. Deletion of the archive log area, if the parameter “archdel” is set to ‘X’ and no error has
   occurred during the call of the Business Add-Ins or the saving of the data devspaces.
9. Release of the enqueue lock set in step 1.
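For illustration, this sequence can be sketched as follows. The ops object and its methods
are hypothetical stand-ins for the actual liveCache and APO calls, not SAP code; only the
parameters lcbackup, media and archdel mirror the report's actual parameters:

    def checkpoint_write(ops, lcbackup="", media="", archdel=""):
        # Sketch (assumption) of the nine steps listed above.
        ops.set_enqueue_lock()             # step 1: only one run at a time
        ops.switch_log_area()              # step 2: A <-> B, if logging active
        if not ops.write_checkpoint():     # step 3: waits for open transactions
            ops.release_enqueue_lock()
            raise RuntimeError("checkpoint could not be written")
        if ops.archive_log_area_active():
            ops.copy_old_log_to_archive()  # step 4
        ops.delete_old_log_area()          # step 5
        ok = ops.call_business_add_ins()   # step 6: customer exit
        if lcbackup == "X" and media:
            ok = ops.save_data_devspaces(media) and ok  # step 7
        if archdel == "X" and ok:
            ops.delete_archive_log_area()  # step 8
        ops.release_enqueue_lock()         # step 9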