Professional Documents
Culture Documents
WORKSHOP
Sept. 30 to Oct. 2, 2002, Bremen, Germany
PRINT ON DEMAND
sponsored by
Learning Objectives
- Integrate your SAP liveCache into the APO system
- Start, stop and initialize your SAP liveCache
- Configure your SAP liveCache
- Take backups and restore it
- React to critical situations
- Monitor the system regarding
  - Consistent views and garbage collection
  - Memory areas
  - Task structure
  - Performance
- The workshop contains 12 units
- Each unit consists of a lecture (15 min)
- Most units also contain exercises (10 min) and solutions (5 min)
- Feel free to ask your questions during the exercises
- Breaks every 2 hours for 15 min
Agenda
(1) liveCache concepts and architecture
(2) liveCache integration into R/3 via transaction LC10
(3) Basic administration (starting / stopping / initializing)
(4) Complete data backup
(5) Data storage
(6) Advanced administration (log backup / incremental data backup / add volume)
(7) Consistent views and garbage collection
(8) Memory areas
(9) Task structure
z In this workshop you will learn the main tasks of a liveCache administrator. Moreover, the
architecture and the concepts of the liveCache are introduced, which gives you an understanding of liveCache behavior and ideas on how to analyze and overcome performance bottlenecks.
z This workshop refers to the liveCache release 7.4.
z In this unit you will learn the concepts and the architecture of the liveCache.
Disk-based approaches and data storage based on relational schemas are not suitable for high-performance processing and thus not for Advanced Planning and Optimization (APO)
z For the development of the Advanced Planning and Optimization (APO) component a
database system was needed which allows fast access to data organized in a complex network.
z Applying conventional relational database management systems as data sources for the
APO showed poor performance, since disk I/O and the inappropriate data representation in the relational schema limited the performance.
[Figure: access times in the three-tier architecture - reading from the application buffer on the application server takes ~0.1 ms, from the database buffer on the database server ~1 ms, and from the database disks ~10 ms; data is transferred in 8 KB pages. Bringing data to the application for comprehensive computation triggers huge data traffic and disk I/O, and the buffered data is still relational.]
z To read data from an application buffer which is in the same address space as the
application takes about 0.1ms. Reading data from a database takes about 1ms if the corresponding record is already in the database buffer and even 10 ms if the record must be read from a hard disk before.
z Working with an application whose buffer is too small to accommodate all required data
leads to frequent database accesses. Moreover, even when the data fit into the application buffer, they are still organized in a relational schema, which is not appropriate to describe complex networks.
z To achieve a good performance for applications which require access to a large amount of
data (e.g. APO) it is necessary to bring the application logic and the application data together in one address space. One possible solution could be to shift the application logic from the application server to the database server via stored procedures. However, this impairs the scalability of R/3. On the other hand one could shift all required data to the application server. But this requires that each server is equipped with very large main memory. Furthermore, the synchronization of the data changed on each server with the data stored in the database server is rather complicated.
[Figure: three-tier architecture with liveCache - presentation client, application server (application and application buffer), database server (database buffer and database) and a dedicated liveCache server.]
- Concurrency and transactional behavior supported
- Bring application logic and data together
- Avoid huge data traffic and disk I/O on comprehensive computation

liveCache: a dedicated hardware/software system in the server tier for the main memory-based temporary storage of volatile shared data.
- liveCache is an instance type of the relational DBMS SAP DB which was expanded by properties of an ODBMS
- liveCache is an object management system for concurrent C++ programs which run in a single address space
- liveCache provides an API to create, read, store and delete OMS objects
- liveCache provides a transaction management for objects (commit, rollback)
- liveCache ensures persistence of OMS objects including recovery
z liveCache manages objects that are created and used by
application programs (COM routines). These objects - called OMS objects - contain application data, whose meaning is unknown to the liveCache. Ideally all objects are located in the main memory - in the global data cache - of the liveCache, but they may be swapped out to disk in case of memory shortage.
z COM routines run as stored procedures in the address space of liveCache and are called
from APO ABAP programs which run on the APO application servers. Due to the fact that COM routines run in the address space of liveCache , they have direct access to OMS objects, and navigation over networks of OMS objects is very fast. Typical access time is less than 10 microseconds per object.
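The direct navigation described above can be sketched in plain C++ (an illustration only; Activity and navigate are invented names, not part of the OMS API): inside one address space, each hop from one object to the next is just a pointer dereference, which is why access times per object can stay in the microsecond range.

```cpp
#include <cassert>

// Hypothetical sketch: a network of objects navigated via direct
// pointers, as COM routines navigate OMS objects inside liveCache's
// address space -- each hop is one memory dereference, no key lookup.
struct Activity {
    int id;
    Activity* successor;   // direct reference to the next node
};

// Follow the successor chain up to `hops` times from a start node.
int navigate(const Activity* start, int hops) {
    const Activity* cur = start;
    for (int i = 0; i < hops && cur->successor; ++i)
        cur = cur->successor;   // one pointer dereference per hop
    return cur->id;
}
```

A chain a -> b -> c is then traversed without any index search; the cost per hop is constant, independent of how many objects exist.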
z liveCache provides classes and class methods to the COM routines to administer their
objects. Technically: COM routines inherit class methods from the liveCache base classes to create, read, store and delete OMS objects.
z liveCache relieves the application programs of implementing their own transaction and
lock management. The application program is able either to commit or rollback all changes made on several objects in a business transaction.
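The commit/rollback idea from this paragraph can be sketched as follows (a minimal illustration, not SAP code; ObjectStore and all names are invented): before an object is changed for the first time in a transaction, a before image is saved; rollback restores all before images, commit discards them.

```cpp
#include <cassert>
#include <map>
#include <string>

// Hedged sketch of transaction management over objects: changes to
// several objects are either all committed or all rolled back.
class ObjectStore {
    std::map<int, std::string> objects_;       // current object state
    std::map<int, std::string> beforeImages_;  // saved on first change
public:
    void put(int oid, const std::string& value) {
        if (!beforeImages_.count(oid))         // keep the before image once
            beforeImages_[oid] = objects_.count(oid) ? objects_[oid] : "";
        objects_[oid] = value;
    }
    std::string get(int oid) { return objects_[oid]; }
    void commit()   { beforeImages_.clear(); } // changes become final
    void rollback() {                          // restore all before images
        for (auto& p : beforeImages_) objects_[p.first] = p.second;
        beforeImages_.clear();
    }
};
```

The application never restores individual objects by hand; it only decides commit or rollback for the whole business transaction, as described above.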
z liveCache ensures the existence of OMS objects beyond the lifetime of COM routines.
That's why liveCache uses the term persistent OMS objects. When liveCache is stopped or when a checkpoint is requested, all objects are stored on hard disks.
- liveCache provides suitable representations of complex data structures, like networks and trees, based on object references
- liveCache is used for fast navigation in large and complex networks
- liveCache offers consistent views to isolate navigation on data structures from simultaneous changes on these data structures
- liveCache provides the complete functionality of an OLTP database which can be used in COM routines via an SQL interface
z The APO application uses a complex object-oriented application model. This model is
easier to implement with object-oriented programming than with the relational structures of a relational database. Therefore, liveCache supports object-oriented programming by providing adequate C++ methods/functions.
z liveCache provides the application with the concept of consistent views to isolate the data
read by a transaction from simultaneous changes made by other transactions. Calling a COM routine from ABAP is quite simple using EXEC SQL.
liveCache objective

- liveCache resides in main memory and therefore avoids disk I/O
- Object orientation enables efficient programming techniques
- C++ applications run in the address space of liveCache
- Objects are referenced via logical pointers (= OID)
z In a standard SAP system, typical database request times are above 1 ms. For data-
intensive applications, a new technology is required in order to achieve better response times. liveCache has been developed to reduce typical request times to below 10 microseconds. Key factors in achieving these response times are:
y Accesses to liveCache data usually do not involve any disk I/O.
y The processes accessing the data are optimized C++ routines that run in the process
context of the liveCache.
y Compared to a relational database, where many related tables may have to be accessed
to retrieve all requested information, one object contains all the relevant information, so the need to access numerous objects or tables is eliminated. In other words, the typical liveCache data structure is NOT a relational data table.
y Objects are referenced via logical pointers (OIDs), in contrast to referencing records via
keys and indexes.
z ABAP Programs and the APO optimizers use native SQL for communicating through the
standard SAP DB interface to liveCache. liveCache has an SQL interface that is used to communicate with the SAP instances. With native SQL, ABAP programs call stored procedures in the liveCache that point to Component Object Model (COM) routines written in C++. An SQL class provides SQL methods to access the SQL data through the COM routines.
z The COM routines are part of a dynamic link library that runs in the process context of the
liveCache instance. In the Windows NT implementation of liveCache, COM routines and their interface are registered in the Windows NT Registry. For the Unix implementation, a registry file is provided by liveCache. A persistent C++ class provides the COM routines with access to the corresponding Object Management System (OMS) data that is stored in the liveCache.
z COM routines in APO are delivered as DLL libraries (SAPXXX.DLL and SAPXXX.LST)
on NT or as shared libraries (SAPXXX.so and SAPXXX.LST) on UNIX. The application-specific knowledge is built into these COM routines based on the concept of object orientation.
[Figure: liveCache architecture - the OMS basis (page chains) and the SQL basis sit on a common DBMS basis; data is stored on data devices, log information on log devices.]
z liveCache is a hybrid of a relational and an object-oriented database.
z The relational part of the liveCache is available as the open-source database SAP DB
(see www.sapdb.org).
z The SQL part as well as the OMS part of the liveCache are based on the same DBMS
basis functionality, which supplies services such as transaction management, logging, device handling and caching mechanisms.
z Object and SQL data are stored on common devices.
z All liveCache data is stored in the caches as well as on disks in 8 KB blocks called pages.
z liveCache stores the OMS objects in page chains, the pages in the chain being linked by
pointers. SQL table data is stored in B* trees. SQL and OMS data reside together in the data cache and the data devices of the liveCache.
- Transaction LC10 in the SAPGUI
- Database Manager CLI (DBMCLI): command line interface
- Database Manager GUI (DBMGUI): graphical user interface, for Windows NT/2000 only
- Web Database Manager (WEB DBM)
z liveCache, similar to the standard SAP RDBMS, can be administered within the SAP
system. The SAP transaction LC10 makes it possible to monitor, configure and administer liveCache.
z The LC10 uses the Database Manager CLI (DBMCLI) to administer the liveCache.
Consequently, all administrative functions are also available without the LC10 and can be performed with the native database administration tool DBMCLI.
z In addition to the DBMCLI the graphical administration tool DBMGUI is available for
Windows. The WEB DBM requires only an internet browser and the DBM Web Server, which can be installed anywhere in the net.
z DBMCLI, DBMGUI and WEB DBM should not be used for starting or stopping the
liveCache, even though LC10 itself calls DBMCLI for starting or stopping the liveCache. They should only be used for changing liveCache parameters, defining backup media and for liveCache monitoring. That is because the LC10, in addition to starting, stopping and initializing, runs application-specific reports. Moreover, it registers the COM routines each time the liveCache is started.
z In this unit you will learn how to integrate an existing liveCache into the CCMS.
Transaction LC10
z On the initial screen of the LC10 you maintain the logical connection name (e.g.
LCA_LAPTOP), which need not be the physical name of the liveCache as it was installed on the liveCache server.
z Integration Button
y creates and modifies new liveCache connections
z Monitoring Button
y leads to the main screen of the LC10
y liveCache administration (stop, start and initialization of the liveCache)
y changing the liveCache configuration
y watching and analyzing the liveCache performance
y save and recovery of the liveCache
z Console Button
y views the status of liveCache tasks
z Alert Monitor Button
y reports error situations (liveCache-specific part of the transaction RZ20)
z Choose Integration on the initial screen of LC10 to reach the integration screen. z The integration data are required for the multi-db-connection from an R/3 system to the
liveCache via NATIVE SQL. They are stored in tables DBCON and DBCONUSR on the RDBMS.
z The Name of the database connection is used for a NATIVE SQL connection to an R/3
system.
z The liveCache name is the name of the liveCache database. It can be different from the
name of the database connection. The liveCache server name must match the output of the command hostname on a DOS prompt or UNIX shell.
z The default user/password combination for the DBM operator is control/control. The R/3
system checks the liveCache connection information. This guarantees the R/3 system connects to the correct liveCache instance.
z Decentral authorization:
y you have to authorize access to the liveCache on each APO application server
y on each application server you have to start the dbmcli command (via SM49):
z Central authorization:
y central authorization data is stored in the APO database in table DBCONUSR
y this authorization is recommended and it is the default
y new with version 46D -> APO 3.1
z Execution of application-specific functions:
y You can define ABAP reports to run automatically before or after liveCache start, stop and
initialization. In an APO system a report is executed after the liveCache was initialized. This report is responsible for the integrity of the APO and the liveCache data.
z When the alert monitor is activated, a number of performance-critical data (e.g. heap and
device usage, cache hit rates) is collected periodically and displayed in the alert monitor, which can be reached by pressing Alert monitor on the initial screen of the LC10.
z The alert monitor is activated by default if the liveCache was installed by the standard
installation procedure.
Basic Administration
z At the end of this unit you will be able to start, stop and initialize a liveCache, and you will
know where to find the corresponding log files.
liveCache status
z This is the main screen of the LC10, which can be reached by pressing liveCache
Monitoring on the initial screen of the LC10. It offers all services and information needed to administer the liveCache.
z Before this window appears, the R/3 system sends a request to the liveCache about its
status. The liveCache name and liveCache server information are stored in the table DBCON as described in previous slides. The remaining information displays the output from the status request. If the connection to a liveCache is not available, an error message is displayed.
z The left frame of the screen shows a tree which contains all information and services
needed to administer the liveCache. The tree branches with the most important information and services are opened by default.
z The right frame displays the details which belong to the activated branch of the service
tree.
z Initially the window which belongs to the Properties icon is activated.
z The DBM server version field displays the version of the database manager server which is
installed on the liveCache server.
z There are three liveCache operating modes:
y OFFLINE: No liveCache kernel processes are running, memory areas (caches) are
released.
y COLD (ADMIN): The liveCache kernel is running, but data is not synchronized between caches and
volumes. Users cannot connect to the liveCache. Only the liveCache administration user can connect and perform administrative tasks like restoring the database.
y ONLINE: The liveCache kernel is active and data and log information is
synchronized between caches and volumes. Users can connect to the liveCache.
z The liveCache is started and stopped on the liveCache Monitoring screen. There you can find three buttons with the following meanings:
y Start liveCache starts the liveCache into online mode. After the restart all data
committed before the last shutdown (or crash) are available again.
y Stop liveCache stops the liveCache instance.
y Initialize liveCache deletes the complete contents of the liveCache. (The next pages
describe the initialization in detail.)
z Although the liveCache could also be started and stopped with the DBMGUI or the
DBMCLI, it is strongly recommended to use the transaction LC10. First, LC10 calls up an APO-specific report after starting the liveCache instance. If the report does not run, accesses of work processes to the liveCache may cause an error. Second, when stopping the liveCache instance, LC10 informs all work processes, which causes them to automatically execute a reconnect when accessing the liveCache the next time. If the liveCache instance was stopped using DBMGUI or DBMCLI, a short dump occurs as soon as work processes try to access the liveCache again after a restart.
z Initializing the liveCache always formats the log volumes. If the data volumes do not
exist yet, they are created and formatted as well.
z This slide demonstrates the steps in a liveCache initialization process.
z Formatting the log volumes can take some time; it depends on the size of the log volumes.
z Loading the system tables is needed for liveCache error messages and liveCache monitoring.
z The user sapr3 is the owner of the liveCache content. This user is re-created each time the
liveCache is initialized.
z Registration of COM-Routines registers all application specific routines, e.g. sapapo.dll
for APO.
z In an APO system, the report /SAPAPO/DELETE_LC_ANCHORS is required to be
executed after each initialization.
z Each time the liveCache is started, stopped or initialized, a log file (LCINIT.LOG) is
written which can be viewed in the branch Logs->Initialization->Currently of the service tree.
z The log files of previous starts, stops or initializations are displayed in
Logs->Initialization->History.
z The tab Controlfile of the selection Problem Analysis->Logs->Initialization displays the
script LCINIT.BAT which is used to start, stop and initialize the liveCache.
z Whenever the liveCache is started, stopped or initialized successfully you can find a
message liveCache <connection name> successfully started/stopped/initialized at the end of the log file.
z The knldiag file logs messages about current liveCache activities. The actions logged
include liveCache start, user logons, writing of savepoints, errors and liveCache shutdown. Therefore, this file is one of the most important diagnostic files to analyze database problems or performance bottlenecks.
z The knldiag file is recreated at every liveCache start. The previous one is saved as
knldiag.old (Problem Analysis->Messages->Kernel->Previous), which means that the content of any knldiag file is definitely lost after two consecutive restarts. To avoid losing the information about fatal errors that happened during two consecutive startup failures, errors are also appended to the file knldiag.err.
z To prevent the knldiag file from growing without limit while the database is in the
operation mode online, the knldiag file has a fixed length, which can be set as a configuration parameter of the database. The system messages are written cyclically. Therefore, it can happen that the knldiag file does not contain all system messages after a long operation time. This is another reason why all error messages are written into the file knldiag.err.
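The cyclic writing described here can be illustrated with a small sketch (hypothetical, not the actual knldiag implementation): a fixed number of message slots is reused round-robin, so once the capacity is reached the oldest messages are overwritten.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of a fixed-length, cyclically written message area like the
// knldiag file described above: the write position wraps around, so
// old entries are eventually lost -- which is why fatal errors must
// additionally go to a separate, append-only file (knldiag.err).
class CyclicLog {
    std::vector<std::string> slots_;
    std::size_t next_ = 0;    // next slot to overwrite
    std::size_t count_ = 0;   // entries currently held
public:
    explicit CyclicLog(std::size_t capacity) : slots_(capacity) {}
    void write(const std::string& msg) {
        slots_[next_] = msg;
        next_ = (next_ + 1) % slots_.size();   // wrap around (round-robin)
        if (count_ < slots_.size()) ++count_;
    }
    std::size_t size() const { return count_; }
    bool contains(const std::string& msg) const {
        for (const auto& s : slots_)
            if (s == msg) return true;
        return false;
    }
};
```

Writing a fourth message into a three-slot log silently drops the first one; the total size never exceeds the configured capacity.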
z In contrast to the knldiag file, knldiag.err is not overwritten cyclically or reinitialized during
a restart. It logs consecutively the starting time of the database and any serious errors.
z This file is required to analyze errors if the knldiag files, which originally contained the
error messages, have already been overwritten.
liveCache directories

[Figure: directory structure - below the installation root there is a <SID> directory with the subdirectories db, misc and sap.]
z The <IndepPrograms> directory contains programs
required for the database management system, while the <IndepData> directory accommodates all configuration and message files which belong to specific liveCache instances.
z For each instance a new subdirectory is created in <IndepData>. The
<Rundirectory> defines the name of the subdirectory where to find the message files which belong to the instance currently monitored. Usually you should have only one instance on your liveCache server.
z In the <IndepPrograms> subdirectory those programs and scripts are stored which do not
depend on particular liveCache releases, like e.g. the downward compatible network server program x_server that transfers data between any liveCache instance and a remote client.
z In contrast to the <IndepPrograms> directory, all files contained in the release-specific
directory depend on the installed liveCache release.
z Sapdb/data/config: database configuration file for each installed database instance. z Sapdb/data/config/install: log file for each installation of the SAPDB database
management system.
z Sapdb/data/wrk/Lca: working directory of a liveCache. The working directory contains the
message files knldiag, knldiag.old and knldiag.err, the liveCache trace file knltrace and the dump file knldump. The dump file is created whenever the database crashes due to an error. The file contains an image of all structures stored in the memory. Together with the knldiag file this file is essential for the error analysis. The size of this file is about 10 percent larger than the size of the data cache. Make sure that there is always sufficient space on the device accommodating the working directory to host the knldump file in case of a crash.
z Sapdb/data/wrk/Lca/dbahist: detailed log files for each backup and restore of the
database
z Sapdb/data/wrk/Lca/DIAGHISTORY: All message, dump and trace files except
knldiag.err are overwritten after a restart. To avoid the loss of the message files needed for error analysis, at each restart all files from the working directory are saved in a subdirectory of DIAGHISTORY when the liveCache detects that the previous shutdown was due to an error. The subdirectories are labeled with a time stamp.
- Programs specific to the installed liveCache release
- System programs, tools
- Documentation files
- Root directory for the SAP DB Web Server
- Scripts for creation of system tables
- List of installed files
- Libraries for the precompiler
- System programs, e.g. kernel.exe
- SAP-specific liveCache utilities
- Map files of all system programs
- Release-independent programs, e.g. DBMCLI
z The SAP-specific libraries
contain the application code to run via COM in the database. Here you can also find the script LCINIT.BAT to start, stop and initialize the liveCache.
z At the end of this unit you will be able to perform a complete backup of the liveCache.
[Figure: complete data backup - the occupied pages of the data volumes (Data 1 ... Data n) are written to a backup medium (here a tape, DAT_00001); the log volumes (Log 1, Log 2) are not part of a complete data backup.]
z A complete backup saves all occupied pages of the data volumes. The backup is
consistent on the level of transactions, since the before images of running transactions are stored in the data area, i.e. they are included in the backup.
z Each backup gets a label reflecting the sequence of the backups. This label is used by
the administrative tools to distinguish the backups. A map from the logical backup media name to the backup label can be found in the file dbm.mdf in the <Rundirectory> of the liveCache.
z For each backup a log entry is written to the file dbm.knl in the <Rundirectory>.
z To perform an initial backup of the liveCache we will use the DBMGUI which can be called
by choosing Tools->Database Manager (GUI). After the selection you will be asked for the user name of the database manager and its password which are usually CONTROL/CONTROL.
z Since backup and restore procedures of a liveCache are identical to those for an OLTP
instance of the SAP DB, these functions are not directly included in the liveCache-specific transaction LC10 but can be accessed via the general administration tool DBMGUI.
z To use the DBMGUI it has to be installed on the local PC.
z Appearance of the DBMGUI:
y On the left side you can see all possible actions and information grouped into six
topics.
y On the right upper side the most important database information is displayed:
the filling levels of data and log volumes and the cache hit rates.
y In the central window new information will be shown if you click on one of the icons on
the left side, e.g. the backup media.
z At the lower border of the central window there are two icons which can be used to define
new backup media.
z You can choose nearly any name for the media name. There are only a few names reserved for
external backup tools: ADSM, NSR, BACK. If your media name begins with one of these strings, an external backup tool is expected.
z Besides the media name you have to specify a location. You have to enter the complete path of the
media. If you specify only a file name this file will be created in the <Rundirectory> of the database.
z There are four backup types:
y Complete: full backup of the data.
y Incremental: incremental backup of the data; saves all pages changed since the last complete
data backup.
y Log: interactive backup of the full log area (in units of log segments).
y AutoLog: automatic log backup; when a log segment is completed, it will be written to the
defined media.
z For a complete or incremental data backup you can choose one of the three device types: file, tape
or pipe. For a log backup you can choose file or pipe. It is not possible to save log segments directly to tape.
z After you have entered the necessary information, you have to press the button OK (green tick).
z The media definition is stored in the file dbm.mmm in the <Rundirectory> of the database.
(1) Tivoli Storage Manager
(2) Networker
(3) Tools which support the interface BackInt for Oracle

z To use one of these tools you have to choose the device type Pipe for your backup media.
Moreover, the name of the media has to start with the letters ADSM, NSR or BACK. The DBMGUI needs these letters to decide which kind of external tool it should apply.
z For Windows NT the media location must be specified as \\.\<PipeName>, where <PipeName>
stands for any name. On a UNIX platform the location can be any file name of a non-existing file.
z To create a complete data backup you have to select Backup->Complete. In the central
window you are offered all media which are available for this operation.
z After you have chosen a medium you have to confirm your choice by pressing the Next
Step button. The following window repeats your choice and asks you to confirm it. When this is done the backup process starts and you can follow the progress displayed in a progress bar.
z When the backup is finished, a status message will be displayed.
z A complete backup is consistent, i.e. it is possible to restart the recovered database
without applying log backups.
z Use the report RSLVCBACKUP to perform a liveCache backup in the background.
z The report requires the input parameters:
y liveCache connection name (usually LCA)
y backup type
z Before the report can be executed the backup media must be defined, which can be done
with the DBMGUI as described before.
Data Storage
z At the conclusion of this unit you will be able to monitor the data page usage of the
liveCache.
liveCache objects
class MyObj : public OmsKeyedObject<MyObj, unsigned char>
{
public:
    unsigned char UpdCnt;
    MyObj() { UpdCnt = 0; }
};

STDMETHODIMP TestComponent::OID_UPD_OBJ (int KeyNo)
{
    try {
        const MyObj* pMyObjKey = MyObj::omsKeyAccess(*this, KeyNo,
            OMS_DEFAULT_SCHEMA_HANDLE, CONTAINER_NO);
        if (pMyObjKey) {
            MyObj* pUpdMyObj = pMyObjKey->omsForUpdPtr(*this, DO_LOCK);
            pUpdMyObj->UpdCnt++;          // 1st update
            pUpdMyObj->omsStore(*this);
            pUpdMyObj->UpdCnt++;          // 2nd update
        }
        else
            throw DbpError(100, "Object key not found");
    }
    catch (DbpError e) {
        omsExceptionHandler(e);
    }
    return S_OK;
}
z The liveCache was designed to store instances of C++ classes which are defined within
COM routines. At runtime a COM routine generates instances of classes. These instances are called persistent objects since they survive their creators (the COM routines). They are stored in liveCache and on physical disks.
z The example above shows the definition of a class (MyObj) used to generate persistent objects.
By deriving from the template OmsKeyedObject the class inherits the ability to be stored persistently in the liveCache. The template OmsKeyedObject belongs to the API supplied by the liveCache; it offers transaction control (rollback, commit), lock mechanisms, access methods and persistent storage to all derived classes.
z SQL data is stored on SQL pages and is sorted using the B* tree algorithm. Access
occurs via a key and requires a search for the record position in the index. In contrast, object data is stored in OMS pages, which are linked to build page chains. Objects are accessed via an OID. The OID already contains the object position, therefore no further search is required.
z In the liveCache, all data is stored in data volume pages regardless of the data type.
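The difference between the two access paths can be sketched as follows (an illustration under invented names, not the real page layout): an OID already carries page number and offset, so it is resolved by direct positioning, while key access has to search an index structure.

```cpp
#include <cassert>
#include <map>
#include <vector>

// Sketch of the two access paths described above. A frame is just an
// int payload here; in liveCache it would be an object frame on an
// 8 KB OMS page.
struct OID {
    int pageNo;   // which page in the chain
    int offset;   // which frame on that page
};

using Page = std::vector<int>;

// OID access: pure positioning, no search at all.
int derefOID(const std::vector<Page>& pages, OID oid) {
    return pages[oid.pageNo][oid.offset];
}

// SQL-style key access: a tree search over the index (std::map stands
// in for the B* tree).
int lookupByKey(const std::map<int, int>& index, int key) {
    return index.at(key);
}
```

Both calls return a stored value, but derefOID is constant-time arithmetic, whereas lookupByKey costs a logarithmic index descent, which is the > 1 ms vs. microseconds difference the slides describe.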
[Figure: navigation in a relational database - Table 1 references Table 2 logically via the primary key; each record access goes through the primary index to the records in the data pages. Navigation via a logical key using SQL takes > 1 ms (data cache access).]
z Application data in APO is organized as a network of linked data records. Data records
contain application data and mostly one or more links to other records used for the navigation over the data network.
z In a traditional relational database management system, data is stored in relational
tables. Tables containing related data are logically linked through one or more fields (which may but do not have to carry the same names). Mostly the primary key of the tables is used as the link criterion.
z To retrieve data in a table, an index will be used: either the primary index containing the
primary key or a secondary index. Normally more than one access to index data is necessary to navigate to the table data in the data pages.
z Navigation over a network of data, stored in one or several tables, is performed record by
record:
y (1) The application program requests a record; the database searches the index and returns the
first record.
y (2) The application extracts the link information and requests the next record.
y (3) The database reads the next record and returns it to the application program.
y Steps (2)-(3) are repeated until all data is read.
z If most of the pages accessed are buffered in the database's RAM then no disk access
will be required, but if this is not the case the database software has to read information stored on hard disks to fulfill the data request. Physical disk access slows down the performance of the database.
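The record-by-record navigation above can be sketched like this (illustrative names, not APO structures): every record carries the key of its successor, and every hop costs one index lookup, which the counter makes visible.

```cpp
#include <cassert>
#include <map>

// Sketch of key-based navigation over a chain of linked records.
// std::map stands in for the primary index; nextKey == -1 ends the chain.
struct Record {
    int data;
    int nextKey;
};

int walkChain(const std::map<int, Record>& table, int startKey, int& lookups) {
    int sum = 0;
    int key = startKey;
    while (key != -1) {
        ++lookups;                       // one index search per record
        const Record& r = table.at(key);
        sum += r.data;
        key = r.nextKey;                 // follow the logical link
    }
    return sum;
}
```

For a chain of n records this performs n index searches, each of which may miss the buffer and hit disk; the pointer-based OMS navigation avoids exactly these per-hop searches.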
[Figure: class containers - objects of class 1 are stored in class container 1, a chain of pages (page 11, page 4, page 34); objects of class 2 are stored in class container 2 (page 5, page 20).]
z In liveCache the data (objects) are stored in class containers which consist of doubly
linked page chains. Navigation between objects is very fast because objects are referenced using a physical reference, the Object ID (OID), which contains the page number and the page offset.
z Direct accesses to the body of an object, e.g. searching for data in the body, are not
possible. The only alternative are keyed objects, where the application may define a key on the object. Features like LIKE, GT, LT etc. are not supported; only a key range iterator is supplied.
z liveCache can also store data in relational tables and access them via SQL, but
the object access is much faster.
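A key range iterator of the kind mentioned here might look as follows (a sketch using std::map as a stand-in for the key index; keyRange is an invented helper, not the liveCache API): only a contiguous key interval can be visited, with no predicates on the object body.

```cpp
#include <cassert>
#include <map>
#include <vector>

// Sketch of a key range iteration over keyed objects: objects are
// visited in key order between a lower and an upper bound. There is
// no LIKE/GT/LT on the object body -- only the key range.
std::vector<int> keyRange(const std::map<int, int>& container,
                          int low, int high) {
    std::vector<int> result;
    for (auto it = container.lower_bound(low);
         it != container.end() && it->first <= high; ++it)
        result.push_back(it->second);    // object payload, in key order
    return result;
}
```

For keys {1, 3, 5, 9} the range [2, 5] yields exactly the objects with keys 3 and 5.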
[Figure: structure of a class container - the container can be partitioned into several page chains (Chain 1 ... Chain 4); the root page of a chain holds the "first free" pointer, free frames are linked via "next free" pointers, and for keyed objects an index maps key -> OID.]
z The liveCache supplies two kinds of class containers to store objects: one for objects of
fixed length and one for objects of variable length. The class containers can be partitioned into more than one chain to avoid bottlenecks during the massively parallel insert of objects.
z The root page of each chain includes administrative data, e.g. the pointer to the first
free object frame. For keyed access an index maps keys of fixed length onto an OID, so objects of those containers can also be accessed via a key. The index is organized as one or several B* trees.
Object frame header: T pointer to next free frame T object lock state T pointer to before image
Page header/trailer: T page number T checksum T pointer to first free object frame T number of free/occupied frames T pointers to next/previous pages
z Each page contains objects instantiated from the same class, i.e. all objects on a page
are of the same length. Therefore, they are stored in an array of object frames. With this approach, there is no space fragmentation on a data page.
z The object frame consists of a 24-byte header with internal data and the data body that is visible to the COM routines. The header stores, for instance, the lock state of the object, the pointer to the next free object frame and the pointer to the before image of the object.
z The length of a data page is 8 KB. Each page has a header of 80 bytes and a trailer of 12 bytes. These parts of the page are not used for object frames but are filled with structural data such as the page number, the numbers of the previous and next pages in the page chain, a checksum to detect I/O errors, the number of occupied/free object frames on the page and the offset of the first free frame.
z The length of a fixed length object is therefore limited to slightly less than the 8 KB page size.
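From the sizes given above (8 KB page, 80-byte header, 12-byte trailer, 24-byte frame header) one can estimate how many fixed-size object frames fit on one page. A small sketch, assuming 8 KB means 8192 bytes:

```python
# Assumed page geometry, taken from the figures in the text above.
PAGE_SIZE, PAGE_HEADER, PAGE_TRAILER, FRAME_HEADER = 8192, 80, 12, 24

def frames_per_page(body_len):
    """Frames of one fixed-size class container fit a page without
    fragmentation; each frame = 24-byte header + object body."""
    usable = PAGE_SIZE - PAGE_HEADER - PAGE_TRAILER   # 8100 bytes
    return usable // (FRAME_HEADER + body_len)

print(frames_per_page(100))   # 65 frames for a 100-byte object body
```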
[Figure: a class container for objects of variable length, consisting of one primary container and several continuation containers.]
z Objects with variable length may be distributed over several pages and have a theoretical maximum length given by the largest continuation frame (~126*2^6 bytes, from the frame sizes below). liveCache supplies special class containers for objects of variable length. Each of those class containers consists of one primary container and six continuation containers. The primary container can accommodate objects smaller than 126 bytes. The i-th continuation container contains object frames which can host objects with a length of ~126*(2^i) bytes, with i=1,...,6.
z To insert an object, the liveCache chooses a free object frame from the primary container. If the object is smaller than 126 bytes, it is put into this free frame; otherwise the object is put into a frame of the continuation container with the smallest object frames that can still accommodate the object. The OID of the frame where the object is actually stored is put into the chosen frame in the primary container.
z The OID which is used by the application to identify an object is always the OID from the primary container. This guarantees that the object can always be accessed via the same OID, even if its length changed and it was moved to another continuation container.
z The construction of the page chains and the pages of the continuation files is similar to that of the fixed length class containers, except that object frames in the continuation files are only 8 bytes long.
z No index can be defined for objects of variable length.
z Accesses to objects with variable length are more expensive than accesses to ordinary objects if they are longer than 126 bytes, since each access to those objects then requires more than one page access.
z Primary containers as well as continuation containers can be partitioned too.
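The choice between the primary container and the continuation containers described above can be sketched as a small helper (illustrative only; PRIMARY_LIMIT reflects the ~126-byte boundary named in the text):

```python
PRIMARY_LIMIT = 126   # primary frames host objects smaller than ~126 bytes

def target_container(length):
    """Return 0 for the primary container, or the index i (1..6) of the
    continuation container with the smallest frames that still fit."""
    if length < PRIMARY_LIMIT:
        return 0
    for i in range(1, 7):
        if length <= PRIMARY_LIMIT * 2 ** i:
            return i
    raise ValueError("object exceeds the maximum variable-object length")

print(target_container(100))   # 0: fits into the primary container
print(target_container(300))   # 2: 126*2^2 = 504 is the smallest fitting frame
```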
z The LC10 offers detailed data about all class containers stored in the liveCache, e.g. the number of object frames.
y Schema: name of the schema a class container is assigned to. Each container must be assigned to a schema. A schema can be considered a name space which can be dropped, together with all its class containers, at once.
y Class GUID: external unique identifier of the class.
COM routines
z Objects stored in the class containers can be accessed and manipulated only via COM routines. The LC10 lists all COM objects and their methods which are currently registered at the database. For each COM routine a detailed parameter description is available when the triangle left of the routine name is pressed.
z The COM routines can be executed through stored procedure calls. For instance, the COM routine CREATE_SCHEMA from the example above can be executed by the SQL command call CREATE_SCHEMA ('MyFirstSchema').
z The registration of the COM routines is done automatically when the liveCache is
Advanced Administration
z At the conclusion of this unit you will be able to save the log, perform an incremental backup, add a data volume and configure the liveCache to save the log automatically.
z When performing the last exercise the liveCache ran into a log-full situation which caused a standstill of the liveCache. All users trying to write any entry into the log were suspended. However, users can still connect to the database, and as long as they only read they can continue to work on the database.
z The filling level of the data and log volumes can be observed via transaction LC10 or the DBMGUI. Within the LC10 the selection Current Status->Memory Areas->Devspaces displays a detailed list of the occupation of the data and log devices. However, it is more convenient to watch the bars at the upper side of the DBMGUI. By double-clicking the liveCache icon you can also get detailed information about the data and log devices in the central screen. If the log filling reaches critical values you will also find warning messages in the knldiag file.
z You can convince yourself that no database task, in particular no user task, is active in the log-full situation by choosing Check->Server. By clicking on the selection TASKS you get an overview of what each database task is currently doing.
z In case the log device is full you find the archive log writer task in the state log-full.
z User tasks which have tried to write entries into the archive log are in the state LogIOwait.
z Tasks which serve other users are not suspended and remain in the state Command wait.
z A small amount of the log is reserved and cannot be used by user tasks. This reserved part is required to guarantee that the liveCache can be shut down even in a log-full situation.
[Figure: resolving a log-full situation — (2) add a log volume, (3) back up the log, (4) continue logging on the devices (Dev 1, Dev 2).]
z At first glance one could think that a log-full situation could be overcome simply by adding another log volume. However, the liveCache/SAP DB writes the log cyclically onto the volumes as if they were a single device. This means that even if a new log volume is added, log writing has to continue after the last written entry. Therefore, a log volume cannot be used immediately after it was added; the log has to be backed up first (SAVE LOG, interactive log backup).
z Note: a prerequisite for a log backup is a data backup.
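The cyclic log behavior can be illustrated with a toy model (illustration only, not the real segment handling; segment granularity and the resume rule are simplified):

```python
# Toy model of the cyclic log: segments are written in a ring; a segment
# becomes reusable only after it has been backed up.
class CyclicLog:
    def __init__(self, n_segments):
        self.saved = [True] * n_segments   # True = free / already backed up
        self.head = 0                      # next segment to write

    def write_segment(self):
        if not self.saved[self.head]:
            return "LOG FULL"              # writers are suspended here
        self.saved[self.head] = False
        self.head = (self.head + 1) % len(self.saved)
        return "ok"

    def save_log(self):
        # SAVE LOG backs up all occupied segments; suspended writers may
        # already resume once the first segment has been saved.
        self.saved = [True] * len(self.saved)

log = CyclicLog(3)
for _ in range(3):
    log.write_segment()
print(log.write_segment())   # LOG FULL: all segments occupied
log.save_log()
print(log.write_segment())   # ok: segments are reusable after the backup
```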
[Figure: log backup — the occupied segments of the log volumes (Log 1, Log 2) are written to backup files such as LOG_00001; the data volumes are not touched.]
z Interactive log backup (SAVE LOG) backs up all occupied log segments from the log volumes; a separate backup file is written for each log segment. The version files get a number as extension (e.g. L_BackUpFile.001, L_BackUpFile.002, ...).
z The label versions are independent of the labels generated with complete data backups.
z Choosing Backup->Log in the DBMGUI activates the central window which allows you to back up all log segments (interactive log backup, SAVE LOG). After activating Backup->Log the central window displays a list of all log backup media defined so far which can be used to save the current log. If this window is empty or all defined media are already in use you must first define a new log backup medium.
z For the definition of the log backup medium you have to enter a name and a location for the medium. By pressing the green tick the input is confirmed. By following the footprint icon you can then continue the log backup. No further input is required. At the end of the backup you get a report about the save.
z You can define a log backup medium as well as a data backup medium also by choosing Configuration->Backup Media.
z The log is logically divided into a number of log segments. The size of these segments is a configuration parameter of the liveCache. After the first of these segments is saved, all tasks which were suspended due to the log-full situation are immediately resumed. That means suspended tasks already continue working during the backup of the log area if more than one log segment exists.
z To prevent the database from further standstills due to a full log device you can activate the autosave log mode (AutoLog mode). When the AutoLog mode is activated the log is automatically written to files whenever a log segment is full. Each segment is saved to a new backup file. The backup files are labeled with the name of the corresponding media file plus a three-digit number as suffix. The numbers are assigned in ascending order according to the order of the saves.
z You can switch on the AutoLog mode by selecting Backup->AutoLog on/off. There you can select a medium which stores the automatically written log files. Alternatively, you can define a new medium by pressing the Tape icon. After you have confirmed your media selection with the AutoLog icon the AutoLog mode is activated.
z By pressing the tape icon on the lower taskbar of the central window you can also create a new medium.
z After your last exercise the database is nearly full. Therefore, another data volume should
be added to prevent the liveCache from a standstill due to a database full situation.
z In the LC10 you can add a data volume by selecting Administration->Configuration-
>Devspaces. After pressing the Add Devspace button in the upper left corner a new dialog window appears where you have to specify the size and the location of the new volume.
z The new volume is immediately available after you have saved and confirmed the input
values.
z Data and log volumes can also be added using the DBMGUI (Configuration->Data
Volumes).
z Before a data volume can be added, a parameter of the liveCache configuration has to be checked. If an N-th device shall be added, this parameter must be larger than or equal to N. The parameter can be changed in the LC10 via the selection Administration->Configuration->Parameters. If you choose the DBMGUI you have to select the menu path Configuration->Parameters. Note that new values of the database configuration parameters are not valid until the database has been stopped and started again.
[Figure: data and log volumes (Log 1, Log 2); an incremental backup saves only the changed data pages.]
z In addition to a complete data backup, data pages can also be backed up with an incremental data backup, which contains all pages that changed since the last incremental or complete data backup.
z The label version is increased with each complete and incremental data backup.
z To decide whether you should rather make an incremental backup than a complete backup, check the number of pages which have been changed since the last complete backup. You can find this number on the tab Data area in the selection Current Status->Memory Areas->Data Area. An incremental backup is useful if the number of changed pages is small compared to the number of used pages.
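That decision rule can be sketched as a small helper (the 20% threshold is an invented illustration, not a liveCache recommendation):

```python
def suggest_backup(changed_pages, used_pages, threshold=0.2):
    """Suggest an incremental backup when only a small fraction of the used
    pages changed since the last complete backup. The threshold value is an
    arbitrary illustration, not a liveCache rule."""
    if used_pages == 0:
        return "complete"
    return "incremental" if changed_pages / used_pages < threshold else "complete"

print(suggest_backup(changed_pages=5_000, used_pages=200_000))    # incremental
print(suggest_backup(changed_pages=150_000, used_pages=200_000))  # complete
```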
z An incremental data backup can be performed via the DBMGUI by selecting Backup-
>Incremental. As for the complete data backup you have to choose a media for the backup. Via the icons on the lower task bar of the central window you can also create and delete media or change the properties of existing media. The Next Step button guides you through the further backup process. At its end a backup report is shown.
z In this unit the concept and the consequences of consistent views are explained.
Consistent views
All read accesses provide the image of an object that was committed at a certain time. This point in time is the same for all accesses within one transaction.
[Timeline figure, example of reading within implicit consistent views: T1 sets s=3 and commits; later T3 sets s=7 and commits. T4 performs its first read after T1's commit but before T3's commit and therefore reads s=3 for its whole transaction. T2 and T5 perform their first read after T3's commit and read s=7.]
SAP AG 2002, Title of Presentation, Speaker Name 65
z liveCache uses consistent views to isolate read accesses to objects from concurrent changes (consistent read).
z Transactions are always performed as consistent views. The point in time which decides about the appropriate before image to be read is the first access to a persistent object; the view ends with COMMIT or ROLLBACK (implicit consistent view).
z liveCache also knows the concept of named consistent views, called versions. These views do not end with commit or rollback but can span several transactions (see later) and may be active for several hours. Such named consistent views are used by APO for transactional simulations.
z Reading within consistent views makes it possible to provide only committed images without setting read locks.
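The first-read rule for implicit consistent views can be modeled as follows (a simplified MVCC sketch in Python, not liveCache internals; all names are invented):

```python
# Sketch of the consistent-view rule: a transaction sees the newest value
# that was committed before its own FIRST read, for all subsequent reads.
class Store:
    def __init__(self):
        self.versions = {}   # name -> [(commit_time, value), ...]
        self.clock = 0

    def commit(self, name, value):
        self.clock += 1
        self.versions.setdefault(name, []).append((self.clock, value))

    def snapshot_read(self, name, view_time):
        # newest version committed at or before view_time
        return max((v for v in self.versions[name] if v[0] <= view_time),
                   key=lambda v: v[0])[1]

class Tx:
    def __init__(self, store):
        self.store, self.view_time = store, None

    def read(self, name):
        if self.view_time is None:          # the first read pins the view
            self.view_time = self.store.clock
        return self.store.snapshot_read(name, self.view_time)

db = Store()
db.commit("s", 3)        # like T1 in the figure
t4 = Tx(db)
print(t4.read("s"))      # 3: view pinned before the next commit
db.commit("s", 7)        # like T3
print(t4.read("s"))      # still 3 inside t4's consistent view
print(Tx(db).read("s"))  # 7: a new transaction sees the newer commit
```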
Example of reading without consistent views:
y Transaction T1 follows a path to element B and now knows the path to C.
y Transaction T2 deletes element C, inserts element X, updates the path B->X, and commits.
y Transaction T1 continues: it still knows the old image of B and wants to read element C, but C is deleted and D is therefore unreachable.
[Figure: the object chain A->B->C->D before T2's change, and the chain with C replaced by X afterwards.]
z Consistent views are required to navigate through networks.
z The example above demonstrates one problem that can occur when reading without consistent views:
1. Transaction T1 starts to read an object chain at object A. It wants to follow the path to object D in order to update D.
2. Unfortunately the scheduler interrupts transaction T1 after it has read object B.
3. Transaction T2 is started and replaces element C by X.
4. T2 commits.
5. Transaction T1 continues and follows the link to object C. However, C is deleted and D is therefore unreachable.
z If transaction T1 uses a consistent view of the chain, it can still access the deleted element C (via its before image) and thus reach object D.
[Figure: a COM routine updates an object (pObj->value = y; pObj->omsStore(*this);) and commits. The before image x(k1) is recorded in the transaction's history file, while the new value y is written to the class container page in the data cache; an entry in the transaction list tracks the open transaction.]
z Read consistency requires that all old images of objects which were updated by a transaction T are stored not only until T has committed, but until the last consistent view which was open before T committed is closed.
z The storage of before images is realized with the help of history files. When an object is updated, the old value of the object (the before image) is copied to a history file which exists for each transaction. Then the new object is copied to the data page, and a pointer in the page points to the former object version in the history file.
z History files of open transactions are not only used for the consistent read; they also serve as undo information: if a transaction ends with a rollback, the before images are copied back to the data page and the history file is destroyed. If the transaction ends with a commit, its history file survives the transaction end and is inserted into a history file list.
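The before-image chain can be sketched as a toy model (field names are invented; real history files live on pages, not in Python objects):

```python
# Sketch: an update copies the old value into the transaction's history
# file; the new value replaces it in the data page, which keeps a pointer
# to the before image.
class Obj:
    def __init__(self, value, commit_time):
        self.value, self.commit_time = value, commit_time
        self.before = None                 # pointer into the history chain

def update(current, new_value, commit_time, history_file):
    before = Obj(current.value, current.commit_time)
    before.before = current.before         # keep the chain linked
    history_file.append(before)
    current.value, current.commit_time = new_value, commit_time
    current.before = before
    return current

def read_as_of(obj, view_time):
    # Walk the before-image chain until a version old enough is found.
    while obj is not None and obj.commit_time > view_time:
        obj = obj.before
    return obj.value if obj else None

history = []
s = Obj(15, commit_time=0)
update(s, 3, commit_time=1, history_file=history)
update(s, 7, commit_time=2, history_file=history)
print(s.value)              # 7: current image in the class container
print(read_as_of(s, 0))     # 15: a view opened before both updates
print(len(history))         # 2 before images recorded
```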
[Timeline figure: T3 sets s=3 and commits, T5 sets s=7 and commits, T6 sets s=8. Each update records the previous value in the transaction's history file (s=15 before T3, then s=3 and s=7), and these before images are linked. T4 performed its first read before T3 committed and therefore still reads s=15.]
z Several changes of an object made by different transactions are recorded in the history files. These different versions of an object are linked in the history files.
z Depending on the start time of an active consistent view (transaction or named consistent view), the appropriate before image from this chain is read.
Problem
Objects are only marked as deleted, not removed. Changes to objects (before images) are recorded in history files.
Solution
Garbage is collected by garbage collector tasks:
z Due to the consistent read, a transaction that removes an object cannot remove the object directly, since a consistent view of another transaction might still access this object or one of its before images. Therefore, objects are only marked as deleted when a transaction deletes them.
z The objects marked as deleted are actually removed by special server tasks called garbage collectors. Scanning the history pages, they remove objects when no consistent view can access them anymore.
[Figure: transactions T4 and T6 delete objects t and u and commit. The delete entries remain in the history files until no open consistent view (e.g. T5) can access the objects anymore; only then does garbage collection free the object frames in the class container.]
z The garbage collectors periodically scan the history file list for history files of transactions which cannot be accessed anymore by open consistent views. When a garbage collector finds such a file it looks for all log entries which point to deleted objects and finally removes these objects, i.e. afterwards the corresponding object frames in the class container file are free and can be reused. After all delete entries in the history file have been found and the corresponding objects removed, the complete file is dropped.
z The garbage collectors also check whether the class containers contain too many empty pages. If more than 20% of the pages of a container are empty, the GC removes all empty pages. The GC finds the empty pages by following the chain of free pages which belongs to each container.
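Both garbage-collector checks can be sketched as follows (illustrative only; the data layout and the timestamp comparison are simplified assumptions):

```python
# Sketch of the two garbage-collector checks described above.
def removable_history_files(history_files, oldest_open_view_start):
    """A history file of a committed transaction can be dropped once no
    open consistent view could still need its before images."""
    return [h for h in history_files
            if h["committed"] and h["commit_time"] < oldest_open_view_start]

def should_compact(empty_pages, total_pages, threshold=0.2):
    # >20% empty pages in a container: the GC removes all empty pages.
    return total_pages > 0 and empty_pages / total_pages > threshold

files = [{"tx": "T4", "committed": True, "commit_time": 5},
         {"tx": "T6", "committed": True, "commit_time": 12},
         {"tx": "T7", "committed": False, "commit_time": None}]
print([h["tx"] for h in removable_history_files(files, oldest_open_view_start=10)])
print(should_compact(empty_pages=30, total_pages=100))   # True: 30% > 20%
```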
The algorithm of garbage collection changes according to the filling level of the database:
y Normally, history files are removed which belong to committed transactions that are older than the oldest transaction which was open when the oldest currently active consistent view started.
y To avoid a standstill of the database due to a database-full situation, history files are removed even if their before images could still be accessed by an active consistent view. The garbage collector removes the oldest history files until either the filling level is again below 90% or there are no more history files of committed transactions.
z As long as transactions are not committed or named consistent views are not dropped, the before images of objects stored in the history files cannot be released, because they may still be accessed by the consistent views. Remember that a consistent view wants to see the liveCache as it was when the consistent view started, so before images in the history files that are younger than the consistent view may be needed to reconstruct that state. As a result the history files may grow.
z When a transaction or a named consistent view is active for a long time, this may become a problem:
y The growing history files displace other data from the data cache. When that data is accessed again (by the application or the garbage collectors) it must first be read into the data cache again. This leads to physical I/O, which has to be avoided for liveCache.
y When history files grow further, this may lead to a database-full situation. The result can be monitored with transaction LC10 -> Current Status -> Memory Areas -> Data Cache.
[Timeline figure: the same scenario as before — T3, T5 and T6 update s (to 3, 7 and 8) while T4 holds a consistent view opened before T3's commit; the linked before images s(15), s(3) and s(7) must be kept in the history files for T4.]
z If the data cache filling exceeds the limit of 95%, consistent views may become incomplete, since old object images which belong to a view are removed. The access to such a removed old image causes the error 'too old OID' or 'object history not found'.
z When the data cache filling level is above 95%, before images which are not accessed by any consistent view are removed. However, since the before images are linked in a chain, the connection to older images which might still be visible in a consistent view is lost.
z When the database filling reaches the limit of 90%, before images are removed even if they could still be accessed by an active consistent view (see the garbage collection algorithm above).
Memory Areas
z In this unit you will get to know the two main memory areas of the liveCache: the data cache and the OMS heap.
[Figure: the liveCache holds one session context per connection (session context 1..n, each with a private OMS heap) on top of the liveCache basis layer and the shared data cache.]
z A COM routine is called as a stored procedure in ABAP from the APO application server.
z Within a transaction (terminated by COMMIT or ROLLBACK), several COM routines can be called. All these routines work within the same session context in the liveCache. An important feature of a session context is that global data is copied into a private memory area (the OMS heap) and that all following operations operate on these private copies. Access to private data is much faster than accessing global data in the data cache, leading to a considerable performance gain at the cost of memory consumption. The changes on the private copies are transferred into the global memory at COMMIT and the private memory is released (one exception are versions). The released memory is not returned to the operating system but is only free to be used again for new private caches. Therefore, the OMS heap memory can never shrink.
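The commit/rollback behavior of the session-private copies can be sketched as a toy model (illustrating the OMS heap semantics, not the real implementation; all names are invented):

```python
# Sketch of the session context: COM routines work on private copies in
# the OMS heap; only COMMIT publishes the changes to the global data cache.
class Session:
    def __init__(self, data_cache):
        self.data_cache = data_cache
        self.private = {}                  # OID -> private copy (OMS heap)

    def deref(self, oid):
        if oid not in self.private:        # copy-on-first-access
            self.private[oid] = self.data_cache[oid]
        return self.private[oid]

    def store(self, oid, value):
        self.private[oid] = value          # change only the private copy

    def commit(self):
        self.data_cache.update(self.private)
        self.private.clear()               # heap memory freed for reuse

    def rollback(self):
        self.private.clear()               # global state was never changed

cache = {(13, 1): "old"}
s = Session(cache)
s.store((13, 1), "new")
print(cache[(13, 1)])    # old: global data cache unchanged before commit
s.commit()
print(cache[(13, 1)])    # new
```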
z liveCache uses two main memory areas in the physical memory of the liveCache server:
z Data cache (parameter CACHE_SIZE)
y contains object pages, history pages and SQL pages
y all these pages may be swapped to the data volumes if the data cache is too small to hold all data
z OMS heap (parameter OMS_HEAP_LIMIT)
y the liveCache heap grows when additional heap memory is requested; the maximum size is defined by OMS_HEAP_LIMIT
y no swapping mechanism for heap memory is implemented, except for inactive named consistent views
[Figure: lookup of OID 13.1 — the session context searches its private cache first; on a miss, the liveCache basis locates page 13 among the object pages in the global data cache.]
z When an object is accessed via its OID, the object is searched in the private cache of the
session first. The OIDs of the private cache are stored in a hash table.
z When the object cannot be found in the private cache, the object is read from the global
data cache. The OID contains the physical page number of the page that contains the object.
z If the page is not already in the global data cache, it will be read from the data volumes.
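The three-level access path (private cache, global data cache, data volume) can be sketched as follows (illustrative Python; the returned hit-location string is purely for demonstration):

```python
# Sketch of the access path for an OID: private cache -> global data
# cache -> data volume.
def deref(oid, private_cache, data_cache, read_page_from_disk):
    if oid in private_cache:                 # 1. private hash table hit
        return private_cache[oid], "private cache"
    page_no, offset = oid                    # the OID names the page directly
    if page_no not in data_cache:            # 3. physical read if needed
        data_cache[page_no] = read_page_from_disk(page_no)
    obj = data_cache[page_no][offset]        # 2. global data cache
    private_cache[oid] = obj                 # copy into the session context
    return obj, "data cache/disk"

disk = {13: {1: "B"}}                        # simulated data volume
private, global_cache = {}, {}
print(deref((13, 1), private, global_cache, disk.__getitem__))  # via disk
print(deref((13, 1), private, global_cache, disk.__getitem__))  # private hit
```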
[Figure: the OID 13.1 names page 13, offset 1; the liveCache basis uses the page offset to locate object B inside the page in the global data cache.]
z When the page that contains the searched object is located in the global data cache, the
page offset which is part of the OID is used to locate the object inside the page.
z The object is copied to private cache and the hash table of the private cache is updated.
[Figure: after the copy, object B (OID 13.1) is present both in the session's private cache and on page 13 in the global data cache.]
z All further accesses to the object are handled in the private cache.
z All changes to the object are made on the local copy of the object.
z The global version of the object in the data cache remains unchanged until the transaction performs a commit. If the transaction ends with a rollback, the private cache is released without changing any global version of the object.
z Subtransactions are completely handled within the private cache.
z When the object is used by a version, the object is never copied back to the global data cache.
z The selection ...->Performance->OMS Monitor lists for each COM routine the number of object accesses to the OMS heap and the data cache.
z The tab displays two kinds of columns. Columns named like 'OMS ...' describe accesses to the private OMS heap, while those named like 'Basis ...' count the various object accesses to the data cache. By comparing an OMS column with the corresponding Basis column you can find out how effectively the private object cache works. Simply speaking: the larger the ratio between 'OMS object acc.' and 'Basis object acc.', the better the OMS caching works.
z Object accesses via keys and iterators are served only by the basis layer; therefore no columns for OMS key accesses and OMS iterator accesses exist.
Data cache (parameter CACHE_SIZE)
Is static memory, allocated when the liveCache is started
Contains persistent OMS objects (OMS page chains)
Contains swapped inactive transactional simulations
Contains SQL data and keys for OMS objects (B* trees)
Contains the history files (before images)
OMS heap (parameter OMS_HEAP_LIMIT)
The liveCache heap grows dynamically until OMS_HEAP_LIMIT is reached
Contains copies of objects in consistent views:
transactions
named consistent views (versions/transactional simulations)
z Memory administration in the heap
y When local object copies are released at the end of a transaction, or when a named consistent view is dropped, the freed heap memory is not returned to the operating system. So the physically allocated heap never shrinks; it can only grow, up to OMS_HEAP_LIMIT.
y Internally the liveCache heap is organized in 64 KB blocks.
y The allocated heap memory is fully under the control of the liveCache. liveCache implements its own memory administration for OMS objects in the private cache.
y Memory is only released to the liveCache and may be reused for other liveCache objects.
z When OMS_HEAP_LIMIT is reached, liveCache copies inactive named consistent views into the data cache. If this does not free enough memory, a transaction that tries to allocate memory gets an outOfMemory error and is rolled back by the COM routine. All private data of this consistent view is freed. To handle the destruction of objects, an emergency memory area of 10 MB is allocated at liveCache start.
z Heap usage can be monitored with report /SAPAPO/OM_LC_MEM_MEMORY.
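The behavior at OMS_HEAP_LIMIT described above (roll out inactive versions first, then fail the allocation and roll back) can be sketched as a toy model (sizes and the rollout policy are simplified assumptions):

```python
# Sketch of OMS heap limit handling: on shortage, inactive versions are
# rolled out to the data cache first; if that is not enough, the
# allocation fails, the transaction is rolled back, and the cleanup can
# use the emergency reserve allocated at liveCache start.
class Heap:
    def __init__(self, limit, emergency=10):
        self.limit, self.used = limit, 0
        self.emergency = emergency          # reserved at start (MB, say)
        self.inactive_versions = []         # sizes of roll-out candidates

    def alloc(self, size):
        while self.used + size > self.limit and self.inactive_versions:
            self.used -= self.inactive_versions.pop()  # roll out a version
        if self.used + size > self.limit:
            return "outOfMemory -> rollback (cleanup uses emergency reserve)"
        self.used += size
        return "ok"

h = Heap(limit=100)
h.inactive_versions = [40]
print(h.alloc(80))    # ok
print(h.alloc(50))    # ok: the inactive version is rolled out first
print(h.alloc(50))    # outOfMemory -> rollback ...
```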
z The selection Current Status->Memory Areas->Heap usage yields information about the usage of the OMS heap.
z Available heap is the memory that was allocated for the heap from the operating system. It reflects the maximum heap size that was needed by the COM routines since the start of the liveCache.
z Total heap usage is the currently used heap. When additional memory is needed, liveCache uses the already allocated heap until Available is reached. Further memory requests result in additional memory requests to the operating system, and the value of Reserved will grow. (Available heap > Total heap usage)
z It is important to monitor the maximum heap usage. When the Available heap reaches
OMS_HEAP_LIMIT, errors in COM routines may occur due to insufficient memory. This should be avoided.
z OMS malloc usage: memory currently in use that has been allocated via explicit allocation calls. When a session runs out of memory, the emergency chunk is assigned to the corresponding session and the following memory requests are fulfilled from the emergency chunk. This ensures that the db-procedure can clean up correctly even if no more memory is available. After the db-procedure call the emergency chunk is returned to the public pool.
z Temporary emergency reserve space: memory of the emergency chunk currently in use. ('Temp. heap at memory shortage' >= 'Max. emergency reserve space used')
z The menu path Current Status->Memory Areas->Data cache leads to a screen which displays all information about the liveCache data cache, like the data cache size, the used data cache and the usage and hit ratios for the different types of liveCache data.
z In an optimally configured system
y the data cache usage should be below 100%
y the data cache hit rate should be 100%
y if the data cache usage is higher than 80%, the number of OMS data pages should be checked via Problem Analysis->Performance->Monitor->Caches.
z Compare the size of OMS data with OMS history. If the data cache usage is higher than 80% and OMS history has nearly the same size as OMS data, use the Problem Analysis->Performance->Monitor->OMS Versions screen to find out whether named consistent views (versions) have been open for a long time. The maximum age should be four hours.
[Timeline figure: a version is opened and later dropped around several transactions; reads inside the version return the values valid when the version was created (e.g. s=3, t=3), while a normal transaction reads the currently committed values (s=2, t=1) after T3's update.]
A session can run within a version, enclosed in the API commands omsCreateVersion / omsDropVersion. All transactions running in one version have the same consistent view; it was started when the version was created. Such a consistent view is called a named consistent view. All updates, creations and deletions of objects performed within a version remain in the private cache of the session. This gives complete detachment of a user from the actions of other users.
Versions can be closed temporarily and re-opened. Closed versions are called inactive.
z Some APO scenarios required the ability to keep one consistent view over more than one transaction, for instance because in an interactive planning scenario DynPro changes automatically cause commit requests. For these scenarios the liveCache provides versions.
z After creating a version within a session, all transactions in this session have the same consistent view.
z After a commit no changed data is written into the global data cache; all data resides in the private cache. Thus cached objects cannot be released from the private cache after a commit or rollback. The consequence is that versions consume more and more OMS memory the longer they exist. Moreover, the garbage collector cannot release history pages, since the version could access an old image of an object.
z Versions can be closed temporarily and reopened in any other session. This is necessary since after a commit an application may be connected to another work process and therefore to another liveCache session.
z In case the heap consumption passes certain limits, closed versions can be swapped into the global data cache, where they are stored in B* trees on temporary pages.
z Since temporary pages as well as the states of the private session caches are not recovered after a restart, versions disappear automatically after stopping and starting the liveCache.
Monitoring versions
z One reason for a large consumption of OMS heap and data cache could be a long running version which accumulates heap memory and prevents the garbage collector from releasing old object images.
z With the selection Problem Analysis->Performance->Monitor->OMS versions you can display the open versions. The columns Time and Age (hours) show the starting time of the version and the time since the start. Note: there should never be a version older than 4 hours. To avoid this situation, the report /SAPAPO/OM_REORG_DAILY must be scheduled at least once a day.
z Versions can be closed and re-opened in another session. To gain heap memory, versions can be rolled out into the global data cache, where they are stored on temporary pages. The column Rolled out displays whether the version cache was rolled out into the data cache. In the column Rolled out pages you find the number of temporary pages in the data cache which are occupied by the rolled out version cache.
z Long running transactions can cause the same memory shortage as versions.
z The rollout of versions can be controlled with the parameters OMS_VERS_THRESHOLD and OMS_HEAP_THRESHOLD. Both parameters allow limiting the heap consumption at the cost of object access time.
z OMS_VERS_THRESHOLD:
At the end of the transaction, unchanged data from versions of a session is deleted from the version cache, and the version cache is rolled out into the data cache if the version occupies more than OMS_VERS_THRESHOLD KB of memory. If a stored object is accessed again at a later stage within the version, the object must be copied again from the data cache into the heap. This can be avoided by setting OMS_VERS_THRESHOLD higher if enough memory is available.
z OMS_HEAP_THRESHOLD:
If this percentage of the available heap (defined by the parameter OMS_HEAP_LIMIT) is occupied, then objects that were read but not changed within a version are removed from the heap at the end of the transaction, and the version cache is rolled out to the data cache. The default value is 100. Where memory bottlenecks are concerned, it might be wise to choose a smaller value.
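The two rollout conditions can be combined into a small decision helper (illustrative; the parameter semantics are simplified from the text above, and the sample values are invented):

```python
def roll_out_version(version_kb, heap_used_pct,
                     vers_threshold_kb, heap_threshold_pct=100):
    """Sketch of the two rollout conditions: at the end of a transaction a
    version cache is rolled out to the data cache if it exceeds
    OMS_VERS_THRESHOLD (KB), or if the heap filling exceeds
    OMS_HEAP_THRESHOLD percent of OMS_HEAP_LIMIT."""
    return (version_kb > vers_threshold_kb
            or heap_used_pct > heap_threshold_pct)

print(roll_out_version(2048, 60, vers_threshold_kb=1024))   # True: too large
print(roll_out_version(512, 95, vers_threshold_kb=1024,
                       heap_threshold_pct=80))              # True: heap full
print(roll_out_version(512, 60, vers_threshold_kb=1024))    # False: stays
```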
Task Structure
z At the conclusion of this unit you will be able to monitor the tasks running inside your
liveCache server.
[Figure: the liveCache process consists of several UKTs — e.g. UKTs hosting user tasks, Dev 0-n and Asdev 0-n I/O threads (IOWorker 0-n), garbage collector and event tasks, a UKT with TraceWriter, DataWriter and Timer tasks, a UKT with the ALogWriter, and a UKT with the Utility task.]
SAP AG 2002, Title of Presentation, Speaker Name 87
z The operating system sees the liveCache as one single OS process. The process is divided into several OS threads (on Windows as well as on UNIX). liveCache calls these threads UKTs (user kernel threads).
z Some threads contain different specialized liveCache tasks whose dispatching is under
control of liveCache.
z Other threads contain just one single task.
z The tasks that perform the application requests are called user tasks. The user tasks are limited by the parameter MAXCPU: MAXCPU defines the number of UKTs which accommodate user tasks. Since the user tasks consume the majority of the CPU performance, MAXCPU defines approximately how many CPUs of the liveCache server are occupied by the liveCache.
Thread overview:
y Coordinator: initialization / UKT coordination
y Requestor: connect processing
y Console: diagnosis
y Timer: time monitoring
y Dev0 thread: master for I/O on volumes; Dev<i> slave threads
y Async0 thread: master for backup I/O; AsDev<i> threads
Task description:
y User: executes commands from applications and interactive components
y Server: performs I/O during backups
y ALogWriter: writes the logs to the log volumes
y DataWriter: writes dirty pages from the data cache to disk
y TraceWriter: flushes the kernel trace to the kernel trace file
y Utility: handles liveCache administration
y Timer: monitors LOCK and REQUEST TIMEOUTs
y Garbage Collector: removes outdated history files and object data
z Each UKT makes various tasks available, including:
y user tasks, i.e. tasks that users connect to in order to work with the liveCache
y tasks with specific internal functions
z The total number of tasks is determined at start-up time; the tasks are then distributed dynamically over the configured UKTs according to defined rules. Task distribution is controlled by parameters such as _TASKCLUSTER_02.
z UKTs allow a more effective synchronization of actions involving several tasks: the tasks of one UKT form a group in which only one task can be active at a time.
Task distribution
z The task distribution of the liveCache can be viewed within the LC10 through the
z This view displays the status of liveCache tasks which are currently working for an APO work process.
z In a running system, possible states are:
y Running: task is in liveCache kernel code and uses CPU
y Command Wait: user task waits for the next command to execute
y DcomObjCalled: task is in COM routine code and uses CPU
y IO Wait (R) or IO Wait (W): task waits for I/O completion
y Vbegexcl, Vsuspend: task waits for an internal lock in liveCache
y Vwait: task waits for the release of a lock held by another APO application; locks are released after commit or rollback
y No-Work: task is suspended since there is nothing to do
z If the sum of tasks in the states Running and DcomObjCalled stays higher than the number of CPUs on the liveCache server for a longer time, the liveCache likely faces a CPU bottleneck. Before the number of CPUs is increased, a detailed analysis of the COM routines may be necessary.
z The Application Pid is the process ID of the connected APO work process which can
liveCache Console
liveCache: Console
z The liveCache: Console window displays largely the same status information as the selection Current Status->Kernel threads in the liveCache: Monitoring window (see previous slide). However, while the output of the liveCache: Monitoring window is always based on SQL queries to the liveCache, the liveCache: Console obtains its results directly from the run time environment of the liveCache. That means in situations where you can no longer connect to the liveCache, you can still use the liveCache console to investigate the liveCache status.
z All data shown in the various selections of the console screen can also be obtained by calling the command x_cons <liveCache name> show all on a command line.
z The liveCache: Console screen displays a comprehensive description of all objects of the liveCache run time environment (RTE). RTE objects are tasks, disks, memory, semaphores (synchronization objects, here called regions) and waiting queues.
z In addition to the current task states, which can also be displayed as shown on the previous slides, the selection Task activities displays cumulated information about the task activities. In particular the dispatcher count is given, which counts how often a task was dispatched by the task scheduler. As long as this number stays constant, the task is inactive. Other important counters are:
y command_cnt: counts the number of application commands executed by the task
y exclusive_cnt: number of accesses to regions (synchronization objects)
y state_vwait: counts the cases where the task had to wait for objects locked by another task
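The dispatcher-count rule (a task whose count stays constant between two samples was not dispatched, i.e. was inactive) amounts to diffing two snapshots. A minimal sketch with hypothetical task ids; the snapshot dictionaries are assumed to map task id to dispatcher count as read from Task activities.

```python
def inactive_tasks(snapshot1, snapshot2):
    """Tasks whose dispatcher count did not change between two snapshots.

    snapshot1, snapshot2: {task_id: dispatcher_count}, taken some time apart.
    An unchanged count means the task was never dispatched in between.
    """
    return sorted(t for t, count in snapshot1.items()
                  if snapshot2.get(t) == count)
```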
z Among the other information which can be displayed by the liveCache console, the number of disk accesses, the accesses to critical regions (see the slides in the unit Performance Analysis) and the PSE data are the most important. Everything else is intended only for liveCache developers; therefore the displayed values may sometimes seem a little cryptic.
Recovery
z At the conclusion of this unit you will be able to restore your liveCache.
Restart
[Slide diagram: timeline of transactions T1 to T6 around the last savepoint and the crash; C = commit, R = rollback. Redo (read archive log): T4, T6. Undo (read undo file): T2, T3. No recovery for transactions T1 and T5.]
z Recovery runs automatically at restart.
z The restart performs a redo of all transactions which committed between the last savepoint and the crash. Transactions which were still open at crash time are rolled back only if they were already open at the time of the last savepoint.
z Start-point for the redo/undo is the last savepoint. All data written after the last savepoint must be reconstructed from the log.
y Transaction 1 committed before the time of the last savepoint; its modifications were already written to the data volumes. The modifications of transaction 5 are not in the data area of the last savepoint. Neither transaction needs recovery.
y Transactions 2, 3 and 4 were not completed at the time of the last savepoint. The liveCache will redo transaction 4. (REDO)
y Transactions 2 and 3 will be rolled back, beginning at the time of the last savepoint. (UNDO)
y The restart will completely redo transaction 6; its modifications are not in the data area of the last savepoint. (REDO)
Recovery process
[Slide diagram: recovery sequence: Restore DAT_00004, Restore PAG_00006, Restore LOG_00010, Restore LOG_00011, automatic restart, restart ready; shown against the data volumes Data 1 .. Data n and the archive log.]
z Recovery always starts with a RESTORE DATA in the operation mode ADMIN. After the last RESTORE DATA/PAGES the database immediately performs a restart if the log entries belonging to the savepoint still persist in the archive log. The restart reapplies the log entries.
z RESTORE LOG must be run if the savepoint belonging to the complete/incremental backup is no longer in the log.
Recover database
z To perform a recovery it is necessary to bring the database into ADMIN mode, which can be done in the DBMGUI by pressing the yellow light of the traffic light symbol in the upper left corner.
z To start the recovery you have to change to the selection Recovery->Database. In the central window you can then choose which complete backup should be the basis for the recovery of the database. You can take the last complete backup (uppermost radio button) or any other complete backup (middle radio button). With the Next Step icon you continue the recovery process.
z All previously made complete data backups are shown in this list. To continue the recovery, mark the backup which you want to use as the basis for the recovery and press the button Next Step.
Recovery strategies
z The simplest recovery strategy is shown here: in the example above, the incremental backup is restored after the complete backup. No further log backups are required since all needed log information is still on the log device.
z Instead of restoring the incremental backup you could restore the log backups. To do so, mark one of the log backups; all further needed backups are then marked automatically.
Start recovery
z To start the recovery you have to press the Start button.
z Each time a backup medium is restored, the DBMGUI asks for the next backup. If the backup medium were a tape and not a file, you would have to change the tape now. To continue the recovery, press the Start button again.
Restart
z After the recovery from the backup media is finished, the DBMGUI informs you that it is possible to restart the liveCache. During the restart the log entries from the log volumes are redone.
z When the restart is finished the liveCache is in ONLINE mode and all its data and
Configuration
z This unit introduces the key parameters of the liveCache configuration and demonstrates
z The parameter file is stored in <IndepData>/config, which is usually /sapdb/data/config. Changes to the parameter file are logged in the file <SID>.pah located in the same directory.
z The parameter file is not human-readable and must not be changed directly, since the parameters are not independent and have to fulfil certain constraints. To change the parameters you have to use one of the administration tools such as DBMCLI, DBMGUI, LC10 or WEBGUI.
z Within the LC10 the configuration parameters can be shown via the selection Current
Status->Configuration->Parameters->Currently. The history of each parameter can be accessed by pressing the triangle in front of it.
z According to their meaning for the administrator the parameters are divided into three
groups:
y General: these parameters can be changed by the liveCache administrator.
y Extended, Support: changes should be performed only in cooperation with the SAP support.
Store changes
Change parameters
>Configuration->Parameters. Here you find a column New value, which is highlighted for all parameters that you are allowed to change. The other parameters are either fixed after the initialization or determined by other parameters.
z By pressing the Check Input button you can check whether your new parameter values fulfil all required constraints. To store your updated values, press the disk icon. The file which contains the constraints, rules and descriptions of the parameters is called cserv.pcf and can be found in <InstallationPath>/env.
z Notice that the configuration parameters are read only when the liveCache is started, which means parameter value changes do not take effect until the liveCache is stopped and started again.
z In principle all parameters should have proper values after the installation and no
Performance Analysis
z At the conclusion of this unit you will be able to use the LC10 and the DBMGUI to find out
if the performance of your liveCache is limited by a bottleneck. Moreover, you will be given ideas of how to improve the performance.
z When analyzing an APO system for liveCache workload and bottlenecks, three different areas must be covered:
y Estimate the liveCache share of the total APO response time and identify the APO transactions for which it is critical.
z These three areas are covered by different SAP monitoring transactions:
y Workload analysis: transaction ST03N
y liveCache monitor: transaction LC10
y A combination of the runtime analysis transaction SE30 and the SQL trace in transaction LC10
z A performance analysis always has to include all three parts shown above.
z High rate of I/O operations
z Serialization on synchronization objects
z Insufficient CPU performance
z Algorithmic errors in the COM routines
z Algorithmic errors in the liveCache code
z There are several causes of poor liveCache performance. The most important are:
y A high rate of I/O operations performed by the user tasks.
y Serialization on liveCache synchronization objects. These objects are used to synchronize the parallel access to shared liveCache resources, such as the data cache.
y Too many user tasks running COM routines.
y Algorithmic errors: the COM routines as well as the liveCache itself can cause poor performance.
z Optimize setting of configuration parameters z Extend main memory z Increase number of CPUs z Call APO/liveCache support
z The most important measures to improve the liveCache performance are listed above; they range from optimizing the configuration parameters and extending the hardware to calling the APO/liveCache support.
z A reliable analysis of the liveCache for a productive system is only possible if a sufficient number of COM routines have already been executed. If fewer than about 50000 COM routines have been executed, the monitored data may not reflect a representative workload of a productive APO system.
z To get an impression how many commands (DB procedures / COM routines) have been executed, choose the tab SQL statistics in the selection Problem Analysis->Performance->Monitor. The tab displays for each SQL action, such as reading, inserting or deleting a record, how often it was executed. To find the number of executed COM routines, look for the row External DBPROC calls. For a liveCache this number corresponds to the number of COM routines executed.
! Should be 100% !
z Although the liveCache is designed to keep all data in the data cache while it is in ONLINE mode, the liveCache can accommodate more data than fits into the data cache. If this happens, the liveCache performance can suffer heavily from the I/O operations needed to swap pages between the data cache and the data devices.
z To detect bottlenecks due to I/O operations use the selection Current Status->Memory-
>Areas->Data cache. There you can find information about the data cache filling level as well as about the data cache accesses.
z For optimal liveCache performance (i.e. to avoid I/O-operations when accessing data
and history pages) the data cache usage should be below 100%.
z Whether the performance is significantly affected by I/O operations can be seen from the number of failed cache accesses. The average data cache hit rate should be above 99.9%; a lower rate is a hint that the data cache is too small. A situation as shown above indicates rather poor performance.
z With the SQL command monitor init you can reset the access counters to zero. Afterwards it takes some time until the hit rate shows a stable value which is relevant for an analysis.
z The main reason for a poor cache hit rate is a data cache that is configured too small. However, sometimes the hit rate is poor due to long-running versions or transactions. To keep the consistent view of the versions or transactions, the liveCache is forced to store a large number of history pages, which fill the cache and lead to a roll-out of data and history pages to the data devices.
z To find out if a bad hit rate is caused by versions or transactions, check the selections Problem Analysis->Performance->Monitor->OMS versions and Problem Analysis->Performance->Transactions. There should be no version older than four hours.
CACHE_SIZE = 0.4 * FREE_MEMORY
FREE_MEMORY = min[PHYS_MEMORY, MAX_VIRTUAL_MEMORY] - SHOW_STORAGE - MAXUSERTASK * _MAXTASK_STACK - 100 MB

y PHYS_MEMORY: physical memory of the liveCache server
y MAXUSERTASK: parameter from the liveCache configuration file
y _MAXTASK_STACK: parameter from the liveCache configuration file
y MAX_VIRTUAL_MEMORY: on NT see MAX virtual memory in the knldiag file; on UNIX call ulimit -a
y SHOW_STORAGE: upper limit of memory for the task stacks of non-user tasks + memory for COM routine DLLs + memory for the liveCache program code; result of the command dbmcli -d <liveCache_name> -u control,control show storage
z The above formula gives a suggestion for the configuration parameter CACHE_SIZE, which determines the size of the data cache. However, depending on your particular profile, the CACHE_SIZE may have to deviate from this suggestion.
z If your cache hit rate is below 100% although the CACHE_SIZE is set as shown above, check the virtual memory limit of the liveCache. On NT this limit is displayed in the knldiag file; on UNIX use the command ulimit -a.
z On Windows NT you should use the Enterprise Edition to increase the
The heap size is all right if no OutOfMemory exceptions occur. If #OutOfMemoryExceptions > 0, increase the OMS heap, if necessary at the cost of the data cache.
z The free memory available for the data cache and the OMS heap should be divided in
the ratio 40/60, where the OMS heap gets the larger part of the memory.
z In contrast to the data cache, the OMS heap is not allocated at the start of the liveCache, and thus there is no need to define OMS_HEAP_LIMIT in the configuration file. By setting the OMS heap limit to 0 you allow the liveCache to allocate as much heap memory as it can get from the operating system. However, on Windows NT and AIX the liveCache could crash if the OS cannot allocate any more memory; therefore you should set OMS_HEAP_LIMIT to the value suggested above. If OMS_HEAP_LIMIT is not zero, the liveCache stops requesting heap memory from the OS once OMS_HEAP_LIMIT is reached; instead, all COM routines requesting further memory are aborted.
z The heap memory is of sufficient size if no OutOfMemory exceptions occur. They must be avoided since they abort the affected COM routine. The occurrence of OutOfMemory exceptions can be checked by executing the SQL command select sum(OutOfMemoryExceptions) from Monitor_OMS or by checking the column OutOfMemory excpt. in the tab Transaction counter of the selection Problem Analysis->Performance->OMS monitor in the LC10.
z If you find the number of OutOfMemory exceptions to grow you should increase the
[Slide diagram: the data cache is divided into stripes protected by the regions Data1 to Data4; user tasks u1 to u6 access the cached pages, with at most one task active per region.]
z When monitoring the liveCache task activities in liveCache: Console->Active Tasks (or executing dbmcli -d <liveCache_name> -u control,control show act), you should ideally find the user tasks in the state Running or DcomObjCalled. If instead user tasks are often in the state Vbegexcl, your performance may suffer from serialized access to internal liveCache locks. The liveCache calls these internal locks regions (they correspond to latches in Oracle). Regions are used to synchronize the parallel access to shared resources. For instance, searching for a page in the data cache is protected by regions; in each region at most one task can search for a page.
z If a task requests a region which is already occupied by another task, the requesting task is suspended until it can enter the region. This situation is displayed by the status Vbegexcl in the task monitor liveCache: Console->Active Tasks.
Collision rate
z The number of collisions, i.e. situations where a task had to be suspended because it requested an occupied region, is displayed in the liveCache: Console screen for each region.
z The collision rates of frequently used regions should not exceed 10%. Otherwise the number of regions used to stripe the corresponding resource can be increased. However, since a high collision rate could be an indicator of algorithmic errors, this should be done only in collaboration with the liveCache support.
if (# CPUs of liveCache server < 8)
    MAXCPU = # CPUs of liveCache server
else
    MAXCPU = # CPUs of liveCache server - 1
MAXCPU should be set to the exact number of CPUs. But if there are 8 or more CPUs, MAXCPU should be the number of CPUs reduced by one. This reserves one CPU for the non-user tasks; in particular the garbage collector can use this processor to remove deleted objects.
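The rule above as a one-liner (the function name is illustrative):

```python
def suggested_maxcpu(n_cpus):
    """MAXCPU rule of thumb: use every CPU, but from 8 CPUs on
    reserve one for the non-user tasks such as the garbage collector."""
    return n_cpus if n_cpus < 8 else n_cpus - 1
```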
z A good choice for the number of garbage collectors (GCs) is to set _MAX_GARBAGE_COLL to twice the number of data devices. This choice has no influence on the CPU usage of the GCs, since all GCs run in one thread, but it results in good I/O performance of the GCs.
z If more user tasks are in the state Running and DcomObjCalled than the liveCache
z Even if the liveCache itself works fine, the COM routines can cause poor performance due to algorithmic errors. To analyze such problems the liveCache supplies an expert tool for investigating the performance of COM routines. It lists the runtime, memory consumption and number of object accesses for each COM routine. These data give hints which COM routine could be problematic. However, since the analysis is not simple, this monitor should be used only by the APO support.
z Tab explanation: y Runtime: total and average runtime of each COM routine. y Object Accesses: number of object accesses from the private cache and from the
Activate/deactivate tracing
Flush trace
z To analyze the internal activities of the liveCache, the liveCache can write a trace file. This file is very helpful when looking for the reasons of bad performance that may be due to algorithmic or programming errors within the liveCache. The file should be interpreted only by the liveCache support.
z The trace is not written automatically but must be activated using the DBMGUI. In the selection Check->Tracing you can choose which operations should be traced. After activation the trace is written into a main memory structure to avoid slowing down the system with trace I/O operations. To actually write the trace to a file, it must be flushed. The resulting file is not yet readable but an image of the memory structure; a readable file can be created in the tab Protocol.
Summary
(1) liveCache concepts and architecture
(2) liveCache integration into R/3 via transaction LC10
(3) Basic administration (starting / stopping / initializing)
(4) Complete data backup
(5) Data storage
(6) Advanced administration (log backup / incremental data backup / add volume)
(7) Consistent views and garbage collection
(8) Memory areas
(9) Task structure
Further Information
Public Web:
Service Marketplace:
http://service.sap.com MySAP SCM Technology
Q&A
Feedback
http://www.sap.com/teched/bremen/
Conference Activities