
TADM70

SAP System: OS and DB Migration

Contents
Course Overview
    Course Goals
    Course Objectives

Unit 1: Introduction
    Introduction

Unit 2: The Migration Project
    The Migration Project

Unit 3: System Copy Methods
    System Copy Methods

Unit 4: SAP Migration Tools
    SAP Migration Tools

Unit 5: R3SETUP/SAPINST
    R3SETUP/SAPINST

Unit 6: Technical Background Knowledge
    Data Classes (TABARTs)
    Miscellaneous Background Information

Unit 7: R3LOAD & JLOAD Files
    R3LOAD Files
    JLOAD Files

Unit 8: Advanced Migration Techniques
    Time Consuming Steps during Export / Import
    MIGMON - Migration Monitor for R3LOAD
    MIGTIME & JMIGTIME - Time Analyzer
    Table Splitting for R3LOAD
    DISTMON - Distribution Monitor for R3LOAD
    JMIGMON - Migration Monitor for JLOAD
    Table Splitting for JLOAD

Unit 9: Performing the Migration
    Performing an ABAP System Migration
    Performing a JAVA System Migration

Unit 10: Troubleshooting
    Troubleshooting

Unit 11: Special Projects
    Special Projects

Course Overview
This course offers detailed procedural and technical knowledge on homogeneous and heterogeneous system copies, which are performed using R3LOAD/JLOAD on SAP NetWeaver systems, with a focus on OS/DB migrations. The training content is mostly release independent and is based on information up to SAP NetWeaver 7.30. Previous releases, such as R/3 4.x, R/3 Enterprise 4.7 (Web AS 6.20), ERP 2004 / NetWeaver '04 (Web AS 6.40), and ECC 6.0 / NetWeaver '04S / NetWeaver 7.0x, are covered as well. Attendance of this course is a prerequisite for the OS/DB Migration certification test.

Course Duration Details
Unit 1: Introduction
Introduction 90 Minutes
Exercise 1: Introduction 5 Minutes
Unit 2: The Migration Project
The Migration Project 90 Minutes
Exercise 2: The Migration Project 5 Minutes
Unit 3: System Copy Methods
System Copy Methods 30 Minutes
Exercise 3: System Copy Methods 5 Minutes

Unit 4: SAP Migration Tools
SAP Migration Tools 45 Minutes
Exercise 4: SAP Migration Tools 10 Minutes
Unit 5: R3SETUP/SAPINST
R3SETUP/SAPINST 15 Minutes
Exercise 5: R3SETUP/SAPINST 5 Minutes
Unit 6: Technical Background Knowledge
Data Classes (TABARTs) 30 Minutes
Miscellaneous Background Information 15 Minutes
Exercise 6: Technical Background Knowledge 10 Minutes
Unit 7: R3LOAD & JLOAD Files
R3LOAD Files 90 Minutes
JLOAD Files 15 Minutes
Exercise 7: R3LOAD & JLOAD Files (Part I) 20 Minutes
Exercise 8: R3LOAD & JLOAD Files (Part II, Hands-On Exercise) 25 Minutes
Unit 8: Advanced Migration Techniques
Time Consuming Steps during Export / Import 10 Minutes
MIGMON - Migration Monitor for R3LOAD 15 Minutes
MIGTIME & JMIGTIME - Time Analyzer 10 Minutes
Table Splitting for R3LOAD 15 Minutes
DISTMON - Distribution Monitor for R3LOAD 5 Minutes
JMIGMON - Migration Monitor for JLOAD 10 Minutes
Table Splitting for JLOAD 10 Minutes
Exercise 9: Advanced Migration Techniques 15 Minutes
Unit 9: Performing the Migration
Performing an ABAP System Migration 30 Minutes
Performing a JAVA System Migration 15 Minutes
Exercise 10: Performing the Migration 10 Minutes
Unit 10: Troubleshooting
Troubleshooting 30 Minutes
Exercise 11: Troubleshooting 10 Minutes
Unit 11: Special Projects
Special Projects 15 Minutes
Exercise 12: Special Projects 5 Minutes

Unit 1
Introduction
Unit Overview
This unit explains what a homogeneous or heterogeneous system copy is, which tools are available, what the SAP OS/DB Migration Check service covers, and where to obtain information about the migration procedure.

Lesson Overview
Lesson Objectives
After completing this lesson, you will be able to:
• Distinguish between an SAP homogeneous system copy and an SAP OS/DB
Migration
• Estimate the problems involved with a system copy or migration
• Understand the functions of the SAP OS/DB Migration Check

Business Example
You want to understand which system copy and migration tools are provided by SAP, and what the difference is between a homogeneous and a heterogeneous system copy. Furthermore, you are interested in the scope of the OS/DB Migration Check service.

Figure 1: Definition of Terms

As the naming of SAP systems changes frequently, the term "SAP System" is used throughout the course material as a synonym for every SAP system type that can be copied with R3LOAD or JLOAD. Explain the difference between NetWeaver 7.00 and 7.02 (NetWeaver Enhancement Package 2).

Please note: Improved functionality was often introduced with new SAP Kernel versions. If the new SAP Kernel was backward compatible with older SAP releases, the new functionality was available for the older releases as well. Example: an SAP Web AS 6.20 running on SAP Kernel 6.40 can make use of R3LOAD 6.40 features.

Throughout the SAP documentation and SAP Notes, the terms NetWeaver '04S and NetWeaver 7.00 are used interchangeably; they mean the same thing.

The initial SAP service offering for OS/DB migrations was originally called "SAP OS/DB Migration Service", but was renamed to "SAP OS/DB Migration Check Service". Today, the term "SAP OS/DB Migration Service" is used for SAP fixed-price projects, in which SAP consultants migrate customer systems to a different database and/or operating system, mostly remotely.

Figure 2: Copying a SAP System

Databases can be duplicated by restoring a backup. In most cases, this is the fastest and easiest way to perform a homogeneous system copy. Some databases even allow a database backup to be restored on a different operating system platform (OS migration). The only supported way to perform heterogeneous system copies is the R3LOAD/JLOAD method. Exceptions are SAPDB and DB2 UDB, where OS migrations with database means are possible.

Note: 3rd party database tools and methods suitable for switching the operating system (OS migration) or even the database (DB migration) are not supported by SAP. SAP cannot be made responsible for erroneous results. The usage of unsupported tools or methods is not forbidden in general (the tool and method support must be provided by the 3rd party organization in such a case), but efforts to fix problems caused by the unsupported tool or method can and will be charged to the customer! Every unsupported system copy can lead to billed SAP support for all problems which were caused by the non-supported method. This applies particularly to production systems. Migrated SAP Systems are supported without regard to which method was used; after the system copy, the migrated SAP system is still under maintenance. Nevertheless, if a system problem is later found to be related to a non-supported migration method, the customer can be charged for the SAP fixing efforts. Make clear that the famous Oracle EXP/IMP is not supported by SAP anymore. Every EXP/IMP migration is done at the customer's own risk!

A client transport is not a true SAP System copy or migration, nor is it intended to be. The copy function cannot transport all of the system settings and data to the target system. Client transports have no meaning for JAVA-based SAP Systems. For further reference see SAP Note 96866 "DB copy by client transport not supported".

Figure 3: SAP System Copy / Migration Tools (1)

The SAP System copy tools are used for homogeneous and heterogeneous system copies. SAP System copy tools used for heterogeneous system copies are called SAP Migration Tools. In the remainder of this document, the term SAP Migration Tools will be used, if not explicitly mentioned otherwise in SAP documents or SAP Notes. SAP System copy tools can be used for system copies or migrations on any SAP supported operating system and database combination as of R/3 Release 3.0D. Since NetWeaver '04 (6.40), JAVA-based systems can also be copied or migrated to any SAP supported operating system and database combination with the SAP System copy tools. The message is: there is no tool difference between homogeneous and heterogeneous system copies, only the procedure differs! Give a short overview of the tool purposes without going into details.

Figure 4: SAP System Copy Tools / Migration Tools (2)

Figure 5: Support Tools for ABAP System Copies (1)

Explain that the Migration Support Tools are intended to improve the export/import procedure. The JAVA-based tools are release independent and can be utilized on any SAP platform which supports the required JAVA version. The PACKAGE SPLITTER is available in a JAVA and in a Perl implementation. R3SETUP uses the Perl PACKAGE SPLITTER; SAPINST provides the Perl and JAVA PACKAGE SPLITTER, or the JAVA version only (release dependent). Two TABLE SPLITTERs exist: one is database independent and is called R3TA, the other is a PL/SQL script implementation and is available for Oracle only. Table splitting is supported since R3LOAD 6.40 in combination with MIGMON. It is often seen in Unicode conversion scenarios. MIGCHECK is implemented in JAVA.

The reports SMIGR_CREATE_DDL and RS_BW_POST_MIGRATION are required since BW 3.0, and for all systems based on BW functionality (i.e. SCM/APO). They are also mandatory for NetWeaver '04 (Web AS 6.40) and later, because BW functionality is part of the ABAP Web AS 6.40 standard. Since then, every SAP System can contain non-standard objects! Special pre- and post-migration activities are required for them. The generated DDL statements of SMIGR_CREATE_DDL are used to tell R3LOAD how to create non-standard objects in the target database. The RS_BW_POST_MIGRATION program adapts the non-standard objects to the requirements of the target system.

Figure 6: Support Tools for ABAP System Copies (2)

MIGMON and MIGTIME are implemented in JAVA. Describe the purpose of MIGMON and the signal files for a parallel export/import. The Distribution Monitor can be used if the CPU load caused by R3LOAD should be distributed over several application servers. This can improve the database server performance significantly. Normally the Distribution Monitor makes sense only if more than one application server is planned to be used. JLOAD and JSIZECHECK are JAVA programs which are called by SAPINST. JLOAD is available since NetWeaver '04 (Web AS 6.40). It was developed to support system copies based on Web AS 6.40 and later; earlier versions of the JAVA Web AS (i.e. Web AS 6.20) did not store data in a database. JSIZECHECK is available since NetWeaver 04S / 7.0.
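To make the MIGMON configuration more tangible, the following is a minimal sketch of an export_monitor_cmd.properties file for a parallel export. All directory names, the job count, and the code page are invented example values, and the exact set of supported properties depends on the Migration Monitor version, so always verify the keys against the Migration Monitor documentation shipped with your release.

    # export_monitor_cmd.properties - illustrative sketch only
    exportDirs=/export/ABAP          # export dump directory
    installDir=/migmon/export        # working directory with package and log files
    orderBy=size                     # unload the largest packages first
    jobNum=6                         # number of parallel R3LOAD export processes
    ddlFile=DDLORA.TPL               # DDL template of the source database (example)
    dataCodepage=4103                # example code page for a Unicode target
    netExchangeDir=/exchange         # exchange directory: MIGMON writes a signal file
                                     # per finished package, so the import monitor can
                                     # start loading that package immediately

Seen together with the import monitor (sketched further below), this is what enables a parallel export/import.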

Figure 7: Support Tools for JAVA System Copies

JAVA Migration Support Tools are very similar to the ABAP versions, but are available from a certain NetWeaver version only. JPKGCTL (also called JSPLITTER) was developed to reduce the export/import run-time of large JAVA systems. A single JLOAD process exporting the whole database (as implemented in previous SAPINST versions) was often too slow as soon as the database exceeded a certain size, so it was necessary to provide package and table splitting for JLOAD as for R3LOAD. JMIGMON and JMIGTIME offer functionality similar to MIGMON and MIGTIME.

Figure 8: Possible Negative Consequences of a System Copy

The data of a productive system is one of the most valuable things the customer owns. While doing a system copy, everything must be done to prevent loss of data or data corruption. The slide shows the reasons why SAP developed the OS/DB migration check and the OS/DB migration certification. The goal of this training is to prevent problems such as those mentioned above by providing in-depth knowledge about each SAP System copy step and the tools which are involved. Following the SAP guidelines ensures a smooth migration project.

Figure 9: Definition: SAP Homogeneous System Copy

The point here is that R3SETUP and SAPINST will only be able to install the target system and load data into it if the operating system and the database are supported by the tools. In fact, R3LOAD/JLOAD can import into any SAP released OS/DB combination, but R3SETUP/SAPINST can do this only if they are configured for that.

that. etc. On older SAP System releases. This can happen if the target platform requires a database or operating system version that was not backward released for the SAP System version that is to be migrated. we recommend involving the hardware partner as well. the migration must be done from a certified OS/DB migration consultant or a customer employee who has a certification also! An OS/DB migration is a complex process. customers can execute a homogeneous system copy by themselves. For the target system. etc. even an upgrade might be necessary. the same operating system can also mean an SAP certified successor like Windows 2003 / Windows 2008. The major difference is who can do the system copy and the need for a SAP OS/DB Migration Service. New hardware on the target system might be supported by the latest operating system and database version only. Stress the fact that even if no productive system is involved. it might be necessary to upgrade the database or the operating system of the source system first. even an upgrade might be necessary. If a system was installed with an SAP reserved SAPSID. it might be necessary to upgrade the database or the operating system of the source system first. Depending on the method used for executing the homogeneous system copy. Figure 11: Definition: SAP Heterogeneous System Copy This slide looks very much the same as for homogeneous system copies. With or without assistance from a consultant. Figure 10: Reasons for Homogeneous System Copies Of course all the mentioned points are also valid for heterogeneous system copies but are not major reasons! The term MCOD is used for SAP installations where [M]ultiple [C]omponents are stored in [O]ne [D]atabase. To see if a change is required. On older SAP System releases. check with SAP. Consultants are strongly advised to do all they can to minimize the risk with regard to the availability and performance of a production SAP System. Depending on the method used for executing the heterogeneous system copy. a homogeneous system copy can be used to change the SAPSID. . This can happen if the target platform requires a database or operating system version that was not backward released for the SAP System version that is to be copied. All the mentioned reasons above are also applicable to heterogeneous system copies. If you plan to use a new hardware type or make major expansions to the hardware (such as changing the disk configuration).

During an OS/DB migration. but the reasons for homogeneous system copies also apply. Figure 12: Common Heterogeneous System Copy Reasons Some of the mentioned points are also valid for homogeneous system copies but are not major reasons! The above mentioned points are the primary reasons for changing an operating system or database. the old settings cannot simply be taken unchanged. The reasons also partially apply to homogeneous system copies. The above table shows which term is being used for SAP System copies. when changing the operating system. Figure 13: Frequently used SAP Terms As the term migration. The term “SAP System copy” is used in a more unspecific way. the term heterogeneous system copy implies that it is some kind of OS and/or DB migration. Generally. The decisive factors for performance in a SAP System are the parameter settings in the database. during which the availability of the migrated system is restricted. Determining the new parameter values requires an iterative process. the operating system. this table should give a clear answer which term belongs to which system copy type. For example. this is called an OS migration and is a heterogeneous system copy. . system copy. and the SAP System itself (which depends on the operating system and the database system).New hardware on the target system might be supported by the latest operating system and database version only. etc. are used in any mix throughout the SAP documentation.

Figure 14: Homogeneous or Heterogeneous System Copy?

The table tries to give an overview of which system copy types are still homogeneous even if they look heterogeneous. Explain that a system copy is homogeneous (the same database type is expected for all involved systems) if the operating system is called the same on the source and target. SAP assumes the operating system behavior will be the same regardless of the underlying platform. Note: If the hardware architecture changes in a system copy, but the operating system type stays the same, SAP treats it like a homogeneous system copy and no "SAP OS/DB Migration Check" is required, i.e. Solaris (SPARC) → Solaris (x86), Linux (x86) → Linux (IA64). Further examples are HP-UX PA-RISC to HP-UX IA64, LINUX X86 to LINUX POWER, etc.

Please stress the fact that the term "homogeneous system copy" does not automatically mean that backup restores can be used to copy the database! It only points out that it is a homogeneous system copy; this does not automatically imply the possibility of a backup/restore to copy the database (i.e. a system copy from Solaris SPARC to Solaris Intel). Homogeneous system copies using backup/restore will require the same database version on the source and target system, or the database must be upgraded after the system copy. The table above is only valid when using R3LOAD or JLOAD. Please check the database documentation for details on available system copy procedures.

Figure 15: SAP OS/DB Migration Check (1)

The SAP OS/DB Migration Check is always fee based. Depending on the customer maintenance contract, not yet used services can be converted into an OS/DB Migration Check service. The SAP migration tools are always free of charge, as the same tools are used for the homogeneous system copy or the standard installation.

The cost for the SAP OS/DB Migration Check is specific to the customer location and may differ from country to country. Every customer must order the OS/DB Migration Check service; there is no exception! Hardware vendors often bundle new hardware with a migration. In this case the customer only pays a single price for the new system and everything else is done by the hardware vendor; this hides the costs of an OS/DB migration service from the customer. The required tools for homogeneous or heterogeneous system copies (installation software) are provided by SAP to customers free of charge. The software can be downloaded from the SAP Service Marketplace. If there is a discussion regarding why a customer should pay a lot of money for a service that seems to be nothing else than a standard GoingLive service, then the answer is: the cost of the OS/DB Migration Check service is small compared to the costs caused by consultancy, testing efforts (by own employees), and the hardware / software license costs of the whole project.

Figure 16: SAP OS/DB Migration Check (2)

Point out that the SAP OS/DB Migration Check service ensures the project's compliance with the SAP heterogeneous system copy procedure. In the "Remote Project Audit", SAP checks the OS/DB migration project planning; the other sessions check the performance. The SAP OS/DB Migration Check is delivered as a remote service. SAP will not verify the system against data loss or corruption!

Figure 17: Information on the SAP OS/DB Migration

Several sources exist to obtain information about OS/DB migrations.

Demonstration: System Demo
Purpose: Logon to the SAP Service Marketplace.
1. Show the aliases "osdbmigration" and "systemcopy".
2. Explain the available items (FAQs, OS/DB Migration Service documents, and so on).

Facilitated Discussion
When explaining the different methods to copy an SAP System (slide: Copying a SAP System), the students should be asked: What is the task of a client transport, and which limitations apply compared to a system copy?

Exercise 1: Introduction
Exercise Duration: 5 Minutes

Exercise Objectives
After completing this exercise, you will be able to:
• Differentiate between homogeneous and heterogeneous system copies and know the procedural consequences for a migration project.

Business Example
In customer projects, you must know whether a system move or a database change is a homogeneous or heterogeneous system copy, and in which case it is necessary to order a SAP OS/DB Migration Check Service.

Solution 1: Introduction

Task 1: A customer plans to invest in new and more powerful hardware for his ABAP-based SAP production system (no JAVA Web AS installed). Neither the database nor the operating system type will be changed. As the operating system and database versions are not up-to-date, the latest software versions should be used on the new hardware.
Current system configuration: Oracle 10.2, AIX 6.1
Planned system configuration: Oracle 11.2, AIX 7.1

1. Is the planned system move a homogeneous system copy, a DB migration, or an OS migration? Describe your solution!
a) The system move will be a homogeneous system copy, as long as the operating system and database combinations are supported by the respective SAP System release and SAP kernel version.

2. During a system copy, will it be necessary to perform an operating system or database upgrade after the move? Describe your solution!
a) Provided that the installation software is able to install on the target operating system version and also supports the installation of the target database release directly, no additional OS/DB software upgrade will be necessary after the R3LOAD import. In the case that the new target database release is not supported by the installation software, a database upgrade will have to be done after the system copy. If the SAP System copy tool R3LOAD is used, an upgrade to a new database or operating system software version is not a problem.

Task 2: An SAP implementation project must change the database system before going into production, because of strategic customer decisions. The customer system configuration was set up as a standard three-system landscape (development, quality assurance, production). Each system is configured as ABAP Web AS with JAVA Add-In.

1. Is it necessary to order a SAP OS/DB Migration Check for the planned database change?
a) The system landscape contains a pre-production system only. In this case, no OS/DB Migration Check service is necessary, as its intention is to be used for productive systems only.

2. According to the SAP System copy rules, who must do the system copies?
a) The change of a database involves a heterogeneous system copy, which must be done by someone who is certified for OS/DB migrations. The fact that the systems are not productive does not matter here.

Unit 2
The Migration Project

Business Example
You want to set up an OS/DB migration project. You need to know which steps are required and what a reasonable timeline to finish the tasks can be.

Figure 18: Project Schedule of an OS/DB Migration (1)

Explain that some kinds of projects can only be done with SAP involvement. Customer projects with required SAP involvement can be, for example, "Introductory Phase Projects", "Pilot Projects", or a "Minimized Downtime Service" (MDS) for very large databases. An introductory phase applies to new SAP products only. In such a case, it was decided that this particular product can only be migrated under SAP's control (providing direct support from SAP development in case of problems). If mentioned in a system copy SAP Note, customers must register for the introductory phase before starting the OS/DB migration. Usually the introductory phase is limited to a few months only. Give a brief explanation of Introductory Phase Projects. The standard OS/DB migration procedure also applies to heterogeneous system copies of ABAP Systems in "Introductory Phase Projects" or "Pilot Projects". The project type specific activities can be seen as something over-and-above the standard migration procedure; an example is Minimized Downtime Service (MDS) projects.

Figure 19: Project Schedule of an OS/DB Migration (2)

Prepare for the "SAP OS/DB Migration Check Analysis Session" as soon as possible. It runs on the productive SAP System (the source system) and must be performed before the final migration. SAP will schedule the "SAP OS/DB Migration Check Analysis Session" only if the "Remote Project Audit Session" was completed successfully. The "SAP OS/DB Migration Check Analysis Session" is performed on the production migration source system, and the "SAP OS/DB Migration Check Verification Session" runs on the migrated production system after the final migration. The same project procedure applies to both an operating system migration and a database migration.

Figure 20: Time Schedule for Productive SAP Systems

You should begin planning a migration early. If you procure new hardware, there may be long delivery times. The schedule defined in the "SAP OS/DB Migration Check Project Audit questionnaire" must reflect test-runs and final migrations for all SAP Systems of the customer landscape. Test and final migrations are mandatory for productive SAP Systems only. Most other SAP Systems, like development, test, or quality assurance, are less critical; if the first test-run for those systems shows positive results, an additional migration run (final migration) is not necessary. Migration test-runs are iterative processes that are used to find the optimal configuration for the target system. In some cases one test-run suffices, but several repeated runs are required in other cases. The time necessary to do serious tests varies from system to system. Stress the need to have at least two weeks between the test and the final migration of a productive system; this is not only for testing purposes, it also prevents too much time pressure in the project. Allow at least two weeks! SAP recommends waiting 6 weeks after the final migration before performing a SAP release upgrade on a migrated productive system; this recommendation should make sure that there is enough time to stabilize the system. First get the system stable and then do the upgrade!

Figure 21: Migration Partners

The above requirements refer to the technical implementation of the migration. Application-specific tests require knowledge of the applications. ABAP Dictionary knowledge is required for system copies based on R3LOAD, as there are often situations where dictionary inconsistencies are a problem. Understand the consequences of missing objects in the database and/or in the SAP ABAP Dictionary. Explain transaction DB02: Diagnostics → Missing Tables and Indexes (or, in older versions of DB02, the ABAP DDIC/DB DDIC consistency check). Point out that there is no tool available to verify that all tables in the R3LOAD structure files can be exported without problems. A 100% check would be a comparison of all table names in the *.STR files with the database catalog, but nothing like this exists. A method to verify that all tables in the *.STR files do exist in the database would be a compare of the table names from the structure files against the ones from the database catalog; the easier way is a test export. Useful SAP Notes are:
• 9385 What to do with QCM tables (conversion tables)
• 33814 Warnings of inconsistencies between database & R/3 DDIC (DB02)

SAP is responsible for the proper functionality of the migration tools only. SAP cannot control what the migration partner and the customer are doing. Because of this, the responsibility for the whole project lies on the shoulders of the migration partner and the customer. If problems occur, the migration partner should be able to recognize and to solve them. It is also recommended to do a test migration in the office before doing it the first time at the customer site.

Figure 22: Contractual Arrangements

Database or operating system specific areas in the SAP Service Marketplace may not be visible to the customer unless the contractual agreement regarding the new configuration is finalized with SAP. The "SAP OS/DB Migration Check" is mandatory for each productive system, but not for development, quality assurance, or test systems. The services check the parameters of ABAP and JAVA-based systems. A productive system can be a stand-alone ABAP system, but it can also be an ABAP Web AS with a JAVA Add-In, or an ABAP Web AS with a JAVA Web AS, each using its own database. A heterogeneous system copy of a stand-alone JAVA system means that no ABAP system is copied in the migration project. Explain that the OS/DB Migration Check service is needed for every productive system in the migrated landscape!

Figure 23: Hardware Procurement

For safety reasons, an OS/DB migration of productive SAP Systems must always be performed on a separate system. Should serious problems occur, you can always switch back to the old system. Retaining the old system also simplifies error analysis.

When you change the database, it might not be sufficient to provide a duplicate of the current system: each database has its own specific hardware requirements. Also consider the new disk layout. Performing the migration of a productive system on separate hardware gives additional safety in case of problems. SAP cannot approve any migration of a production SAP System in which the source system is deleted after the data export in order to set up the target database. As soon as the "SAP OS/DB Migration Check" has been requested, it is essential that SAP has remote access to the migrated system. Remote access is also a prerequisite for the "SAP OS/DB Migration Check".

Figure 24: Migrating a SAP System Landscape

Each productive system must be migrated twice (test and final migration)! Development, test, and quality assurance systems are less critical and can often be migrated in a single step. In many cases, the migration of a quality assurance system is not necessary, because it can be copied from the migrated production system. From a performance point of view, different system behavior between source and target system can be easily checked, as there is always a fall-back system. Explain that there is no right or wrong order, as long as the production system is migrated twice.

Figure 25: SAP OS/DB Migration Check Project Audit

The "SAP OS/DB Migration Check Project Audit Questionnaire" is automatically sent from SAP to the customer. The migration project time schedule should be created in consultation with the migration partner. Make sure to include the dates of the test and final migration steps for every SAP System, not only for the productive systems. The migration project schedule must reflect correct estimates of the complexity of the conversion, its time schedule, and the planned effort. The migration of an SAP System is a complex undertaking that can result in unexpected problems. For this reason, SAP checks for the following:
• Is the migration partner technology consultant SAP-certified for migrations?
• Does the migration project schedule meet the migration requirements?
• Technical feasibility: Are hardware, operating system, SAP System, and database versions compatible with the migration tools, and is this combination supported for the target system?

In some cases it is advisable to upgrade the operating system, database, or SAP release first, before performing the migration. In rare cases it can even be necessary to use intermediate systems. The questionnaire also asks for technical details of the source and target systems.

Figure 26: SAP Migration Tools

The migration tools must fit the SAP release and kernel that are used. Systems which are under SAP maintenance can be migrated with the software versions (installation media) available on the SAP Service Marketplace. Explain that the content of the Migration CD is used to migrate "old" systems which are no longer under SAP maintenance. Only for those SAP installations that are running old database or operating system versions (which are no longer supported by the current installation software; 4.6D and below) may it be necessary to order the Migration CD set. Most questions regarding tool versions are answered in the SAP System copy notes and manuals. Also check the "Product Availability Matrix" (PAM) in the SAP Service Marketplace. Please open a call at the SAP Service Marketplace if in doubt about which tools to use in certain software combinations.

Figure 27: SAP OS/DB Migration Check Analysis

The "SAP OS/DB Migration Check Analysis Session" is focused on the special aspects involved in the platform or database change. It is performed on the production SAP System with regard to the target migration system environment. ABAP and JAVA-based SAP System components will be checked. The Analysis Session only looks for performance - nothing else. The results of the "SAP OS/DB Migration Check" are recorded in detail and provided to the customer through the SAP Service Marketplace; the resulting recommendations are for the target system.

Figure 28: Required Source System Information (1)

The slides show the minimum information which should be available before starting the migration. Explain the relevance of every item. The current system landscape must be known to have the big picture; there may be OS/DB related dependencies between certain systems which must be analyzed first. The number of productive systems indicates the number of test and final migrations. Which systems should be migrated in which order? What is the customer time schedule (deadlines)? The sizes of the source databases indicate how long the migration will take. When minimizing the downtime, the amount of tuning effort that is necessary increases, and much more time must be spent on it. Is it an MDMP or UNICODE system? In the case of AS/400 R/3 4.6C and below: is it an EBCDIC or ASCII based system? Updating Support Packages can be a serious problem in some customer environments, because of modifications, quality assurance, or fixed update schedules. It could be the case that a certain Support Package Stack must be installed before an OS/DB migration can take place (i.e. certain target database features can be utilized only if the Support Packages are current).

Case 1: A table exists in the database but not in the ABAP Dictionary - the table will not be exported. Case 2: A table exists in the ABAP Dictionary but not in the database - export errors are to be expected.

Figure 29: Required Source System Information (2)

The number of CPUs and information about the I/O subsystem can help in determining the best number of export processes. Next to the database size itself, the size of the largest tables will influence the export significantly. If large tables are stored in separate locations (i.e. tablespaces), should this also be retained in the target database? On some databases it can increase performance or ease database administration. For the first test migration, 10% - 15% of the source database size should be available as free space in the export file system. How should external files (spool files, logs, archives, transport system files, interfaces, etc.) be handled? Which files must be copied to the target system? The migration support tools like MIGMON and the PACKAGE SPLITTER used by SAPINST will need JAVA; the old Perl-based PACKAGE SPLITTER of R3SETUP needs Perl 5. Because of strict software policies, customers might not allow the installation of additional software on productive systems. In the case of a hosting environment, will the consultant have access to the source system (and which limitations will apply)? If the source and target system are not in the same location, which media will be available to transport the dump files?

Figure 30: Required Target System Information

Figure 31: Migration Test Run

Generating the target database:
• Make a generous sizing of the target database, or set it to an auto-extensible mode (if possible); this will prevent load errors caused by insufficient space. An analysis of disk usage cannot be performed until after the data has been loaded.
Configuring the test environment:
• RFC connections
• External interfaces
• Transport environment
• Backup
• Printer
• Archiving
• etc.

Figure 32: Final Migration

The migration of a production system is often performed under intense time pressure. A cut-over plan should be created, including an activity checklist and a time schedule. Checklists will help you to keep track of what is to be done, and when to do it. As a final migration runs under time pressure, every step should be planned in advance. Explain the necessary preparations for the final migration; this should be a short walk-through of the technical migration. Not all the tests and checks which were done during previous test runs must necessarily be done again in the final migration. Include plenty of reserve time. In most cases it makes sense to have one cut-over plan for the technical migration and a separate one for application-related tasks.

Figure 33: SAP OS/DB Migration Check Verification

The "SAP OS/DB Migration Check Verification Session" should be scheduled 4 weeks after the final migration of the productive SAP System. As this check runs 4 to 6 weeks after going live, it is not intended to verify that the migration was correct; it is again only looking for performance issues. This is because several weeks are required to collect enough data for a performance analysis. ABAP and JAVA-based SAP Systems will be checked. Until this point, the "old" production system should still be available; this is the last time the migration source system is needed, and afterwards the old production system can be switched off or deleted.

Exercise 2: The Migration Project
Exercise Duration: 5 Minutes

Exercise Objectives
After completing this exercise, you will be able to:
• Create a migration project plan and a time schedule that is compliant with SAP needs. You should also know about the tasks of each OS/DB Migration Check service component.

Business Example
To plan a system copy project, you must know about the proper timing and the required test phases.

Solution 2: The Migration Project

Task 1: The SAP heterogeneous system copy procedure for productive systems requires a test phase between the test and the final migration.

1. What is the minimal duration recommended for the test phase?
a) Two weeks is the minimum amount of time to be considered between the test and final migration of a productive system. Two weeks might be sufficient even in complex environments.

2. What should be done in the test phase, and who should perform it?
a) The test phase should be utilized to check the migrated system regarding the most important customer tasks and business processes. End users who know their daily business very well should do the major part of the testing.

3. What is the reason for the recommended time duration between the final migration and the next upgrade?
a) Every time a system has been copied to a different operating system and/or database, it takes some time to get familiar with it and to establish a smooth-running production environment. The database size will influence the expected downtime. In the case that an upgrade immediately follows the migration, the direct cause of problems may be hard to identify. Because of this, SAP recommends not performing an upgrade to the next SAP System release until at least 6 weeks after the final migration. First get the system stable and then do the upgrade!

Task 2: A customer SAP System landscape is made up of several systems. All systems have to be migrated to a different database.
System set 1 (ERP): Development, Quality Assurance, 2 x Production.
System set 2 (BW): Development, Production.

1. How many system copies are involved? (More than one answer can be right)
a) System set 1: 1 x Development, 1 x Quality Assurance, 2 x Production. Alternate: 1 x Development, 2 x Production, plus a homogeneous system copy from Production to Quality Assurance. System set 2: 1 x Development, 1 x Production.

2. How many SAP OS/DB Migration Checks must be ordered?
a) System sets 1 and 2 contain productive systems; two separate SAP OS/DB Migration Checks must be ordered.

Task 3: The following facts are known from inspecting the source system of a migration (ABAP Web AS with JAVA Add-In). Please indicate for every item what the impact on the R3LOAD/JLOAD migration will be.

1. The total size of the database is 500 GB (used space).
a) From a database size of 500 GB it can be expected that the R3LOAD / JLOAD export will need about 10% - 15% (50 GB - 75 GB) of local disk storage.

2. The sizes of the largest ABAP tables are 34 GB, 20 GB, and 18 GB.
a) The largest ABAP tables will significantly influence the amount of time necessary to export or import the database. A single R3LOAD process for each large table will improve the export and import time.

3. The sum of all table and index sizes of the JAVA schema does not exceed 2 GB.
a) Because the JAVA tables will only need a little bit of time to export, this will not be critical for the overall export time.

4. Transaction DB02 shows two tables belonging to the ABAP schema user that only exist in the database, but not in the ABAP Dictionary.
a) R3LDCTL only reads the ABAP Dictionary. Tables that exist in the database, but not in the ABAP Dictionary, are ignored. As a consequence they are not inserted into any *.STR file and will not be exported. The same happens to tables belonging to the JAVA schema that are not defined in the JAVA Dictionary.

Task 4: The SAP OS/DB Migration Check sessions have three major topics. Please explain the main tasks of each session type.

1. Project Audit Session
a) Project Audit Session: Checks for technical feasibility, a certified migration partner, and the time schedule.

2. Analysis Session
a) Analysis Session: Performance analysis on the source system. Returns configuration and parameter recommendations for the target system.

3. Verification Session
a) Verification Session: Performance verification on the target system after going live. Returns updated configuration and parameter recommendations.

Unit 3
System Copy Methods

Unit Overview
This unit gives an overview of the available SAP system copy methods.

Lesson Overview
Contents
• Database-specific and -unspecific methods for SAP homogeneous or heterogeneous system copies (OS/DB migrations)

Lesson Objectives
After completing this lesson, you will be able to:
• Evaluate the database-specific and -unspecific options for performing SAP homogeneous or heterogeneous system copies (OS/DB migrations)

Business Example
In a customer project, it must be figured out what the best method is to move a system from one platform to another. The right approach depends on the involved database and the type of operating system used.

Mention that there are supported and unsupported methods to migrate a SAP System; this chapter will discuss the supported ones. Of most importance is information about SAP products which cannot be migrated the standard way, and the R3LOAD restrictions that exist if the PREPARE of an upgrade was run or the Incremental Table Conversion (ICNV) was not finished.

Figure 34: Comment

Many of the methods the students have used, like Oracle EXP/IMP, are not supported by SAP. IBM, HP, Oracle, and others offer their own specific migration methods, for which they are responsible themselves. Make clear that the usage of any unsupported method is done at the customer's own risk! SAP support in case of problems with such system copies will be billed by SAP if a problem is clearly caused by the non-supported system copy method. Any Hotline or Remote Consulting effort that results from the use of a copy or migration procedure that has not been officially approved by SAP will be billed. Even if a system was copied with an unsupported method, SAP will not deny further system support.
Note:
1. The database specific methods might be faster for an OS migration than R3LOAD (if released by SAP).
2. For DB migrations or Unicode conversions there is no alternative to R3LOAD.

Figure 35: R3LOAD Method

The message is: everything goes. The above table shows that all SAP supported database systems can be copied to each other by using R3LOAD (DB2 for LUW = DB2 for Linux, UNIX, and Windows). This is also true for homogeneous system copies! R3LOAD might not be as fast as database-specific methods, but it is much more flexible with regard to different database versions on the source and target system. Give a brief explanation of database-specific objects (used by BW) and how they must be handled. Stress that BW 2.x systems must be upgraded first.

Figure 36: R3LOAD Restrictions (1)

Point out that the PREPARE phase is dangerous if specifically mentioned in the system copy guide and not revised in the system copy note (otherwise the upgrade must be done first).

On earlier SAP releases the PREPARE phase imports and implements ABAP Dictionary changes which cannot be unloaded consistently by R3LOAD. A complete reset of all PREPARE changes is not possible, and restarting the PREPARE phase on the migrated system will not help. The Incremental Table Conversion implements database-specific methods which cannot be unloaded consistently by R3LOAD (danger of data loss). Finish all table conversions before the export; the transaction ICNV should not show any entry.

Figure 37: R3LOAD Restrictions (2)

For BW 3.0 and 3.1 R3LOAD system copies, the appropriate Support Package level must be applied and a certain patch level for R3LOAD and R3SZCHK is required (according to SAP Note 777024). Related SAP Notes:
• 771209 "NetWeaver 04: System copy (supplementary note)"
• 777024 "BW 3.0 and BW 3.1 System copy (supplementary note)"
• 888210 "NetWeaver 7.**: System Copy (supplementary note)"

Figure 38: Database Specific System Copy Methods (ABAP)

Point out that this is a list for Web AS systems based on ABAP only. In the case of a JAVA Add-In it must be checked that backup/restore is supported for it as well. However, heterogeneous system copies by database-specific methods must be approved by SAP; the SAP OS/DB Migration Check is required anyway! Certain databases can even be migrated to other operating systems by a simple restore. In the case of Oracle, transportable tablespaces can be a solution for an operating system migration as well; if RMAN is used, the endian type can be changed, too. If in doubt, contact SAP before executing such a kind of OS migration. If it applies to your SAP release, it is mentioned in the system copy guide and/or in a corresponding SAP Note.

Notes on database specific methods for ABAP based systems (make sure that the method is also valid for JAVA Add-In installations):
1. DB2: Copy - database copy on the same host; Dump - database copy to another host
2. DB4: SAVLIB/RSTLIB method, see SAP Note 585277
3. DB6: Database director (redirected restore) or brdb6 tools
4. DB6: Cross platform restore since DB2 UDB version 8 (for AIX, HP-UX, Solaris)
5. ADA: Cross platform restore, if source and target OS are of the same endian type
6. ORA: The SAPINST Backup/Restore method is released for all products
7. ORA: Transportable Tablespace / Database, see SAP Notes 1035051 and 1367451
8. MSS: Detach/Attach database files
9. SYB: Backup/Restore
10. INF: Informix Level 0 Backup
11. HDB: Check http://help.sap.com/hana_appliance for the respective guide
Operating system endian types: see SAP Note 552464. Further SAP Notes related to the individual methods: 89698, 147243, 151603, 173970, 339912, 628156, 659509, 962019, 1003028, 1591387.
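As an illustration of one of the database-specific options listed above, a homogeneous MS SQL Server copy with detach/attach boils down to the pattern sketched below. The database name and file paths are invented examples, and the SAP-specific pre- and post-steps (schema handling, jobs, logins) are described in the corresponding SAP Note and the system copy guide.

    -- On the source host: detach the SAP database (example name PRD)
    EXEC sp_detach_db 'PRD';
    -- Copy the .mdf/.ndf/.ldf files to the target host on operating system level, then:
    CREATE DATABASE PRD
        ON (FILENAME = 'E:\PRDDATA1\PRDDATA1.mdf'),
           (FILENAME = 'E:\PRDLOG1\PRDLOG1.ldf')
        FOR ATTACH;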

Figure 39: Database Specific System Copy Methods (JAVA)

It is not enough to copy the database content! There are a lot of parameters which must be adapted on the target system. SAPINST runs an internal function called "Migration Tool Kit" ("Migration Controller") to adjust the SAP JAVA target system to the new instance name, instance number, host name, etc. When SAPINST is used, it makes sure that this is done the right way.

If the time schedule allows it, this might be a good point to insert Unit 11: Special Projects. You can also insert this unit behind Unit 9: Advanced Migration Techniques on day 2. It takes about 30 minutes.

Exercise 3: System Copy Methods

Business Example
For a SAP system move, it should be known what the available options and their specific prerequisites are.

Solution 3: System Copy Methods

Task 1: The homogeneous copy of an ABAP system performed with database specific means is in most cases much faster than using the R3LOAD method. Nevertheless, there can be good reasons to use R3LOAD anyway.

1. What could be some of the reasons for using the R3LOAD method?
a) The source and target systems use the same operating system and database type, but different versions. R3LOAD is quite flexible, but needs more time for the export/import compared to a backup/restore scenario.
b) The target disk layout is completely different from the source system, and the database specific copy method does not allow adapting to new disk layouts.
c) If the database storage unit names include the SAP SID, the installation of the target database according to the R3LOAD method will allow you to choose new names.
d) Data archiving is done in the source database, and the system copy to the target system should also be used to reduce the amount of required disk space.
e) Systems should be moved into or out of an MCOD database.

2. Which specific checks should be done before using R3LOAD to export the source system?
a) Make sure the PREPARE for the next SAP upgrade was not started (if this restriction applies to your SAP System release), and verify that the Incremental Table Conversion (ICNV) has completed.

Task 2: Some databases allow OS migrations of SAP systems using database specific means.

1. Is it necessary in this case to order an SAP OS/DB Migration Check for productive systems?

a) It does not matter which method is used to perform a heterogeneous system copy of a productive SAP ABAP System; the SAP OS/DB Migration Check is required anyway.

2. Must one be certified in order to perform an OS/DB migration?
a) Yes, an OS/DB migration certification is required to perform the system copy.

3. Is a test and a final migration required for productive systems?
a) A test and a final system migration are required.

Unit 4
SAP Migration Tools

Unit Overview
This unit describes the SAP migration tools in detail. It also describes the tasks of R3SETUP/SAPINST and in which phase they call the migration tools. The R3LOAD and JLOAD export directory structure will be discussed.

SAP Migration Tools

Business Example
You want to know which SAP tools are executed during an export/import based system copy, what they do when performing an SAP heterogeneous system copy, and what the specific differences between an ABAP and a JAVA system copy are.

Figure 40: Installation Programs R3SETUP and SAPINST

This slide is only used to show that the tasks and features of both programs are very similar. SAPINST requires JAVA and a graphical environment which it supports (Microsoft Windows or X-Windows). R3SETUP can run in character mode where no graphical environment is available.

Figure 41: ABAP DDIC Export and DB Object Size Calculation

R3LDCTL reads the ABAP Dictionary to extract the database independent table and index structures and writes them into the *.STR files. R3LDCTL also creates the DDL<DBS>.TPL files for every SAP supported database; since 6.40, additional DDL<DBS>_LRG.TPL files are generated to support system copies of large databases more easily. It is important to know that R3LDCTL contains SAP release specific (hard coded) table definitions: every version of R3LDCTL contains release-specific, built-in knowledge about the table and index structures of specific SAP internal tables which cannot be retrieved from the ABAP Dictionary.

As of version 4.5A, the size computation of tables and indexes was removed from R3LDCTL (R/3 Load Control) and implemented in a separate program called R3SZCHK (R/3 Size Check); R3SZCHK does not exist for 3.1I and 4.0B, where R3LDCTL is still used for the *.EXT file generation. The computed sizes are sorted and written to the *.EXT files, and R3SZCHK generates the target database size file DBSIZE.XML for SAPINST. The size calculation is limited to a maximum of 1.78 GB for each database object (table or index). Stress the fact that table DDLOADD is filled by R3SZCHK to store the results of the table/index size calculation. R3LDCTL/R3SZCHK can only run as a single process (no parallelization is possible). Explain which files are created.

Figure 42: ABAP Data Export/Import

R3LOAD exports the data into dump files and imports them into the target database. The compression runs on block level. There is no tool available to uncompress an R3LOAD dump file; this may answer questions regarding data security when transferring R3LOAD dump files over public networks. Understand that there is no checksum on the dump files: file corruption may not be discovered at load time, and strange errors might happen. See Unit 10: Troubleshooting. The restart capability has its limits where power failures or operating system crashes terminate the R3LOAD export or import phase.

Character set conversions to Unicode are implemented as of R3LOAD 6.10. The standard R3LOAD implementation contains an EBCDIC/ASCII conversion of LATIN-1 character sets only; other translation tables are available upon request. The conversion is done at export time, as the additional information necessary is only available in the source system. Note that 4.6C is the last R/3 version which runs on EBCDIC; 4.6C SAP Systems running on AS/400 (iSeries) must be converted to ASCII before an upgrade to a higher release is possible.
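To give an impression of what R3LDCTL actually writes, each table and its indexes are described in the *.STR files in a database-independent notation. The excerpt below is a simplified, hypothetical entry (the attribute columns are abbreviated with "..."); the real files contain additional attributes per line, and the *.EXT files add the size estimates computed by R3SZCHK. Secondary indexes follow as ind: entries with their own field list.

    tab: T000
    att: APPL0 ...                 (data class/TABART and further table attributes)
    fld: MANDT  CLNT   3 ...       (field name, ABAP type, length, ...)
    fld: MTEXT  CHAR  60 ...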

Figure 43: ABAP Migration Tools Compatibility

The migration tools must always fit the kernel used and released by SAP for the SAP System release in charge. It is not possible to use, for example, an R3LOAD 4.6D on 4.0B, or an R3LOAD 6.40 on 6.20. If R3SETUP 4.6D is used to install a 4.0B Oracle system, it does not mean that you can use R3LOAD 4.6D as well! SAP can change the installation programs freely. From time to time, SAP provides updated installation software to support new operating system or database versions for the installation of older SAP releases directly. These updates might contain new installation programs, but will still use the matching R3LDCTL, R3SZCHK, R3LOAD, and kernel versions for the SAP System release in charge. For SAP migration tool version dependencies, and for special considerations on migration tools for Release 3.x, see the relevant SAP Notes.

The parallel export/import of single tables using multiple R3LOAD processes is supported since R3LOAD 6.40. If an R3LOAD process terminates with an error, a restart function allows the data export/import to be continued after the last successfully recorded action. Special care must be taken on restarts after OS crashes, power failures, and out-of-space situations on the export disk (see the troubleshooting section). Before the data export/import, R3LOAD performs a syntax check on the *.STR files. This prevents unintended overlaps between field names in tables and R3LOAD key words, as well as other inconsistencies. As of Release R/3 4.0B, R3LOAD writes information about the source system into the dump files and checks these entries when starting the import. If the source and target OS or DB are different, R3LOAD will need a valid migration key to perform the import.

Figure 44: DDL Statements for Non-Standard DDIC Objects

The students should understand that since NetWeaver '04, every SAP System can contain non-standard DDIC objects (BW objects)! BW functionality is an integral part of the standard, and customers or SAP can decide to implement BW objects in any system. Since then, SMIGR_CREATE_DDL should be executed for all system types. Non-standard objects use DB specific features/storage parameters. The report SMIGR_CREATE_DDL generates DDL statements for non-standard database objects and writes them into <TABART>.SQL files. The <TABART>.SQL file is used by R3LOAD to create the non-standard DB objects in the target database, bypassing the information in the <PACKAGE>.STR files. The report RS_BW_POST_MIGRATION performs the necessary adaptations of DB specific objects in the target system (mainly BW objects). Required adaptations can be the regeneration of database specific coding, ABAP Dictionary adaptations, maintaining aggregate indexes, and many others. The report must run to make sure that no non-standard DB objects get the wrong storage parameters on the target system. The reports above are not applicable to BW 2.x versions! Explain when to start SMIGR_CREATE_DDL and how to deal with the <TABART>.SQL file.
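The <TABART>.SQL files produced by SMIGR_CREATE_DDL pair an object name with the exact CREATE statement R3LOAD should execute instead of building one from the generic DDL template. The entry below is a simplified, hypothetical example for a range-partitioned BW fact table on Oracle; the object name, columns, and partitioning clause are invented for illustration, and the real file layout depends on the release.

    tab: /BIC/FZSALES
    sql: CREATE TABLE "/BIC/FZSALES"
           ("KEY_ZSALESP" NUMBER(10) DEFAULT 0 NOT NULL, ...)
         PARTITION BY RANGE ("KEY_ZSALESP")
           (PARTITION "/BIC/FZSALES0000000001" VALUES LESS THAN (10))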

The reports above are not applicable to BW 2.x versions!

Figure 45: ABAP Web AS – Source System Tasks ≤ NW 04

This slide shows which tasks are executed by R3SETUP/SAPINST and what is done by R3LDCTL, R3SZCHK, and R3LOAD. R3SETUP/SAPINST calls R3LDCTL and R3SZCHK to generate various control files for R3LOAD and to perform the size calculation for tables and indexes. R3LDCTL also does the size calculation for tables and indexes on R/3 releases before 4.5A. Depending on the database, update statistics is required before the size calculation or not. Once the size of each table and index has been calculated, R3SETUP/R3SZCHK computes the required database size: R3SETUP generates a DBSIZE.TPL, R3SZCHK creates a DBSIZE.XML for SAPINST. Make clear that the content of table DDLOADD is used to compute the size of the target database. Explain when to start SMIGR_CREATE_DDL and how to deal with the <TABART>.SQL file.

R3SETUP/SAPINST/MIGMON generates R3LOAD command files for every *.STR file; point out that the *.CMD files are created by R3SETUP/SAPINST. SAPINST/MIGMON calls R3LOAD to generate task files for every *.STR file; the *.TSK files are created from SAPINST by calling R3LOAD with special options.

MIGMON can run optionally and can be used to reduce the unload and load time significantly. Give a short explanation of the benefit of splitting *.STR files: the splitting of *.STR files improves unload/load times. Explain the difference between MIGMON Server and Client mode. A special exit step to call MIGMON was implemented in SAPINST since NetWeaver '04. Earlier versions of SAP systems can benefit from MIGMON as well; appropriate break-points must be implemented. For table splitting, the usage of MIGMON is mandatory (6.40 and later)!
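To illustrate how a generated command file is consumed, here is a minimal sketch of calling R3LOAD manually for one package. The file names are hypothetical and the exact option set depends on the R3LOAD release; in practice R3SETUP/SAPINST/MIGMON build and start these calls for you:

    # export of one package on the source system (hypothetical file names)
    R3load -e SAPAPPL1.cmd -datacodepage 1100 -l SAPAPPL1.log
    # import of the same package on the target system
    R3load -i SAPAPPL1.cmd -dbcodepage 1100 -l SAPAPPL1.log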

Figure 46: ABAP Web AS – Target System Tasks ≤ NW 04

The *.CMD and *.TSK files are not copied from the source system! They are created again from scratch. Depending on the database type, the database is installed with or without support through R3SETUP or SAPINST. MIGMON can run optionally and can be used to reduce the load time significantly. A special exit step to call MIGMON was implemented in SAPINST for NetWeaver '04. Earlier versions of SAP systems can benefit from MIGMON as well; appropriate break-points must be implemented in the R3SETUP/SAPINST installation flow. For table splitting, the usage of MIGMON is mandatory (6.40 and later)!

Ensuring ABAP DDIC (Dictionary) consistency means that after the data load, the program "dipgntab" is started to update the SAP System "active NAMETAB" from the database dictionary (the table field order). The ABAP DDIC consistency check makes sure that the ABAP DDIC fits to the DB DDIC (i.e. table field order). A detailed explanation of this DDIC step should be done when discussing the example import. After the data load it is also necessary, depending on the database type, to run update statistics to achieve the best possible performance.

The report RS_BW_POST_MIGRATION is called as one of the post-migration activities, which are required to bring the system to a proper state; it is required since ABAP Web AS 6.40 (NetWeaver '04) and for all SAP Systems using BW functionality based on Web AS 6.20. Explain the task of RS_BW_POST_MIGRATION. The last step in each migration process is to create database-specific objects by calling SAP programs via RFC. To be successful, the password of user DDIC of client 000 must be known.

Figure 47: ABAP Web AS – Source System Tasks ≥ NW 7.0

Explain that SAPINST calls MIGMON to handle the export. Since NetWeaver 7.0 (NetWeaver '04S), some SAPINST functionalities have been removed and MIGMON is called instead. The above slide shows that the whole R3LOAD handling is done by MIGMON. SAPINST implements MIGMON parameter related dialogs and generates the MIGMON property file. After the export is completed, MIGMON gives the control back to SAPINST. Even if MIGMON is configured automatically by SAPINST, it can still be configured and called manually for special purposes.

Figure 48: ABAP Web AS – Target System Tasks ≥ NW 7.0

Explain that SAPINST calls MIGMON to handle the import as well. The export and the import can run at the same time; this makes it possible to run the export and import in parallel, as long as the target system has already been prepared. In newer SAPINST versions there is an option to skip the update statistics. Even if MIGMON is configured automatically by SAPINST, it can still be configured and called manually for special purposes.
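SAPINST derives the MIGMON property file from its dialog input. Below is a minimal sketch of what an export-side property file can look like; the property names shown are common MIGMON parameters, but the exact set and the values depend on the MIGMON version and scenario, so treat this purely as an illustration:

    # export_monitor_cmd.properties (illustrative values only)
    exportDirs=/export/ABAP/DATA          # export dump directory
    installDir=/tmp/migmon_export         # MIGMON working directory
    ddlFile=/export/ABAP/DB/DDLORA.TPL    # DDL template to use
    orderBy=size                          # process large packages first
    jobNum=3                              # number of parallel R3LOAD processes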

Figure 49: ABAP Web AS – Export Directories and Files

All files of the dump directory must be copied to the target system! Do not forget LABEL.ASC: at import time, R3SETUP and SAPINST read the content of the file LABEL.ASC to verify the dump directory location. R3SETUP and SAPINST automatically create the shown directory structure on the named dump file system; in most SAPINST implementations the files are then copied to the specified directory structures during the export procedure. Under UNIX, the directory names are case sensitive.

Example target database: Oracle. The *.STR, *.TOC, and the <PACKAGE>.<nnn> / <TABLE>#.<nnn> dump files are stored in <dump directory>/DATA. The *.WHR files only exist if the optional table splitting was used. The *.EXT files are stored in the corresponding database subdirectory: for Oracle, the *.EXT files and the target database size file DBSIZE.* are stored in <dump directory>/DB/ORA, and the DDLORA.TPL file is stored in <dump directory>/DB. The <TABART>.SQL and SQLFiles.LST (since 7.02) files exist only if the report SMIGR_CREATE_DDL created them and they were copied to the database subdirectory (automatically by SAPINST, or manually according to the system copy instructions).

Since NetWeaver 7.0 the dump directory contains an ABAP and/or a JAVA subdirectory to store the exports in one location, but separated by name; since NetWeaver 7.00 SAPINST creates the ABAP and JAVA subdirectories automatically. Below the two directories we find the well-known directory structures of the previous releases again.

Figure 50: JAVA Data Export/Import

The JLOAD features should be compared with R3LOAD. JLOAD deals with database data only; unlike R3LOAD, which exports only table data, JLOAD can export the dictionary definitions and the table data into dump files. JLOAD is not designed to be a stand-alone tool. JAVA data is stored in a database, but there are still JAVA applications storing persistent data in the file system. File system data is covered by SAPINST functionality. For migrating a JAVA-based SAP system, SAPINST needs to perform additional steps which are version and installed software component specific.
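A simplified sketch of the ABAP export directory layout described above, with Oracle as the example target database; the exact tree depends on the release and on whether R3SETUP or SAPINST created it:

    <EXPDIR>/ABAP/
        LABEL.ASC        checked by R3SETUP/SAPINST at import time
        DATA/            <PACKAGE>.STR, <PACKAGE>.TOC, <PACKAGE>.<nnn> dump files,
                         *.WHR files (only when table splitting is used)
        DB/              DDL<DBS>.TPL (here: DDLORA.TPL)
        DB/ORA/          <PACKAGE>.EXT files, DBSIZE.*, and <TABART>.SQL /
                         SQLFiles.LST if SMIGR_CREATE_DDL created them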

JLOAD writes its data in a format that is independent of database and platform; this format can be read and processed on all platforms supported by SAP. If JLOAD terminates with an error, a restart function allows the data export/import to be continued after the last successfully recorded action.

Before NetWeaver 7.02, one single JLOAD process did the whole export or import. In previous versions JLOAD did not only export the table data, it also generated its own export/import job files. Because of the need for faster exports and imports, package and table splitting was implemented: as of SAPINST for NetWeaver 7.02, package and table splitting is available for JLOAD, and starting with 7.02 JPKGCTL is used to create the job files. It was necessary to separate the meta data export from the table data export to allow a separate table creation for splitted tables. As a consequence, multiple JLOAD processes can run simultaneously, and all JLOAD processes will now be started by JMIGMON. The feature was introduced with SAPINST 7.02 and can be switched on with a certain environment variable; later versions might have this active by default.

Figure 51: JLOAD Job File Creation using JPKGCTL

JPKGCTL creates the JLOAD job files and supports package and table splitting. The JLOAD package size information is stored in "sizes.xml".

Figure 52: JAVA Target DB Size Calculation

It is important to know that JSIZECHECK does not provide any size information about tables or indexes, and the size calculation is not limited to a certain object size (like R3SZCHK): JSIZECHECK does not have the R3SZCHK limitation of 1.78 GB per table/index! Files containing "Initial Extents" (like the *.EXT file for R3LOAD) are not required for JLOAD. In case of a database change during a heterogeneous system copy, the conversion weights for data and indexes are calculated using master data/index sizes. The export sizes are converted to import sizes using the conversion coefficients, and 20-30% additional space is added for safety reasons. If the computed size is less than some default values (e.g. 1 GB for Oracle), then default sizes are used in the output file. The output is a DBSIZE.XML file, which is stored in the DB sub-directories of the export file system.

Figure 53: Flow Diagram JAVA Add-In / JAVA System Copy

This diagram shows the export and import order (DB = Database, CI = Central Instance). Explain that NW 04 exports the ABAP and JAVA stack separately (in case of Java Add-In): in NW 04, SAPINST must be called twice, one time for the ABAP export and a second time for the JAVA part. Since NW '04S, SAPINST provides a selection for JAVA Add-In which exports the ABAP and the JAVA part in one single step.

Figure 54: JAVA Web AS – Source System Tasks NW 04 / 04S

It is important to understand that everything in the file system must be collected by SAPINST or by SDM; JLOAD reads the database content only. JSIZECHECK is called to create the DBSIZE.XML files for all target databases where this file is needed; the log files for JSIZECHECK can be found in the installation directory. JLOAD is called to export the JAVA meta and table data. The software deployment manager (SDM) is called to put its file system components (incl. the SDM repository) into the SDMKIT.JAR file. For applications storing their persistent data in the file system, SAPINST collects the files into SAPCAR archives.

Note: The above graphic describes general steps which are important for a JAVA Web AS system copy. The steps can vary in their order.
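SAPINST drives the SAPCAR collection automatically; the commands below only illustrate the archive handling itself, with hypothetical archive and path names:

    # create an archive of application files (what SAPINST does for the APPS content)
    SAPCAR -cvf APPS_ADS.SAR /usr/sap/<SID>/SYS/global/AdobeDocumentServices/*
    # list and extract the archive on the target host
    SAPCAR -tvf APPS_ADS.SAR
    SAPCAR -xvf APPS_ADS.SAR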

Figure 55: JAVA Web AS – Target System Tasks NW 04 / 04S

JLOAD restores the database content only. The database software installation is only required in cases where a JAVA Web AS is installed using its own database, as opposed to a JAVA Add-In installation into an existing ABAP database. JLOAD is called to load the database. SDM file system software components are re-installed (re-deployed); SDM reapplies its repository. Application-specific data is restored from SAPCAR archives. After the import, SAPINST adjusts certain table contents regarding the new environment (instance, hostname, etc.); this internal SAPINST functionality is called in the step "Java Migration Toolkit". Various post-migration tasks must be done to bring the system to a proper state. Since NW 04S, SAPINST provides a selection for JAVA Add-In which imports the ABAP and the JAVA part in one single step. Explain that the SDM is not used any longer in 7.10 and later.

Note: The above graphic describes general steps which are important for a JAVA Web AS system copy. The steps can vary in their order.

Figure 56: JAVA Web AS – Source System Tasks – JPKGCTL

In all NetWeaver versions using JPKGCTL, JMIGMON is mandatory for the export. JPKGCTL distributes the JAVA tables to package files (job files) and can optionally split tables; package and table splitting is optional but recommended. JMIGMON calls JLOAD to export the JAVA table data. JSIZECHECK is called to create the DBSIZE.XML files for all target databases where this file is needed; the log files for JSIZECHECK can be found in the installation directory.

Note: The above graphic describes general steps which are important for a JAVA Web AS system copy. The steps can vary in their order.

For applications storing their persistent data in the file system, SAPINST collects the files into SAPCAR archives (since 7.10 not required anymore). The software deployment manager (SDM) is called to put its file system components (incl. the SDM repository) into the SDMKIT.JAR file (since 7.10 not required anymore). The JPKGCTL/JMIGMON handling is active only if the environment variable "JAVA_MIGMON_ENABLED=true" was set before starting SAPINST 7.02; if the environment variable was not set, the export looks like in NW 04S. Later versions of SAPINST will use JPKGCTL/JMIGMON by default.

Note: The above graphic describes general steps which are important for a JAVA Web AS system copy. The steps can vary in their order.

Figure 57: JAVA Web AS – Target System Tasks – JPKGCTL

In all NetWeaver versions using JPKGCTL, JMIGMON is mandatory for the import. JLOAD is called to load the database. SDM file system software components are re-installed (re-deployed); since 7.10 this is not required anymore. Application-specific data is restored from SAPCAR archives; since 7.10 this is not required anymore. Various post-migration tasks must be done to bring the system to a proper state. The JPKGCTL/JMIGMON handling is active only if the environment variable "JAVA_MIGMON_ENABLED=true" was set before starting SAPINST 7.02; if the environment variable was not set, the import looks like in NW 04S. Later versions of SAPINST will use JMIGMON by default.

Note: The above graphic describes general steps which are important for a JAVA Web AS system copy. The steps can vary in their order.

Figure 58: JAVA Web AS – Export Directories and Files
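A minimal sketch of activating the JPKGCTL/JMIGMON handling before starting SAPINST 7.02, as described above (Bourne shell syntax shown; on Windows the variable would be set with "set" before calling sapinst.exe):

    # activate the JPKGCTL/JMIGMON based export/import handling
    JAVA_MIGMON_ENABLED=true
    export JAVA_MIGMON_ENABLED
    ./sapinst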

Explain the SAP release order. Since NetWeaver 7.00 SAPINST creates an ABAP and a JAVA subdirectory automatically; below the two directories we find the well-known directory structures of the previous releases again. Directories: Applications (APPS), JLOAD Dump (JDMP), DB, Software Deployment Manager (SDM). The APPS directory holds archives from applications storing their persistent data in the file system; the subdirectories and files are only created if the application is installed and known by SAPINST. Examples for such applications are ADS (Adobe Document Services), PORTAL (SAP Portal), and KM (Content Management and Collaboration). The APPS directory is empty if no JAVA applications are installed that store their data in the file system; otherwise, application-specific directives must be performed to copy the required files to the target system (see the respective SAP Notes). The DB sub-directories contain the target database size files created by JSIZECHECK (since SAPINST for NetWeaver 7.0). The "SOURCE.PROPERTIES" file contains information that is used to create the central instance on the target system. The JLOAD.LOG, *_<PACKAGE>.LOG, and *_<PACKAGE>.STAT / *.STA files are stored outside the install directory, which may be confusing to the students; the *_<PACKAGE>.STA files are in the SAPINST installation directory or in /usr/sap/<SAP SID>/<instance>/j2ee/sltools. The APPS and SDM directories may disappear in future releases, as no JAVA-relevant persistent data is stored in the file system anymore.

Figure 59: Changes in NetWeaver 7.10 and later

For the students it will be hard to understand that SAPINST 7.02 offers more features than SAPINST 7.10: NetWeaver 7.10 (released for certain SAP products only) was available before SAPINST 7.02, so JLOAD package and table splitting is not available for this version. Releases using SAPINST functionality based on 7.02 and higher may provide these features later on; please check the system copy guides and SAP Notes for updates. Since NetWeaver 7.10 the Software Deployment Manager (SDM), which used a file system based repository, is not used anymore. The repository is now stored in the database and can be exported with JLOAD. JAVA applications were changed to store no persistent data in the file system; as a result, SAPINST does not need to collect application files for system copies anymore.

Figure 60: SL Toolset – ABAP/JAVA Dual Stack Split

Note: The ABAP/JAVA Dual Stack Split is intended to be used in a homogeneous system copy scenario, but not for heterogeneous migrations.

EXT files are used to fill . which is based on SAP NetWeaver 7. it is intended for homogeneous system copies only .0 Definition of a Dual-Stack System SAP system that contains installations of both Application Server ABAP (AS ABAP) and Application Server Java (AS Java). • 1563579 Central Release Note for Software Logistics Toolset 1.SGN).XML) 3. but without installation and import into a new system. Export preparation of ABAP System DEV (generate DBSIZE. Related SAP Notes: • 1655335 Use Cases for Splitting Dual-Stack Systems. Information from *. Now start the export of DEV.3 above.0 SP1 for Systems Based on SAP NetWeaver. Remove JAVA stack: Similar to "Keep JAVA database".STR files. If you decide to present a parallel export/import. Stress the fact. but as MCOD installation.not for heterogeneous migrations.0 including Enhancement Package 3 and SAP Business Suite 7i2011. Keep JAVA database: Export JAVA stack and import into the same database. which contain all the necessary information to assemble a create table SQL statement for the target database.com/sltoolset. Do not run more than three R3LOADs for export or import because more would overload the system. Please be aware. it will no longer be possible to upgrade an SAP dual-stack system to a higher release. Export the Java System DEJ 2.STR file only contains database independent structures. but without installation and import into a new system. Exercise 4: SAP Migration Tools Solution 4: SAP Migration Tools Task 1: R3LDCTL reads the ABAP dictionary and writes database independent table and index structures into *.0 including Enhancement Package 3. As the *. Keep JAVA database: Export JAVA stack and import into the same database. how is R3LOAD able to assemble a create table SQL statement for the target database? a) R3LDCTL creates DDL <DBS>. • 1685432 Dual-Stack Split 2. as of SAP Business Suite 7i2011. 1. A dual-stack system has the following characteristics: • Common SID for all application servers and the database • Common startup framework • Common database (with different schemas for ABAP and Java) Available options for splitting a dual-stack system that is based on SAP NetWeaver into one ABAP stack and one Java stack each with own system ID (the dual-stack system is reduced to an ABAP system and the Java system is reinstalled): Move Java database: Export JAVA stack and import into a separate database.Remove JAVA stack: Similar to "Keep JAVA database".STR and *. After the database is created (~30 min) SAPINST will wait for the first MIGMON signal file (*. Table splitting preparation for ABAP System DEV 4. Furthermore. and start the target system import then. the installation of SAP dual-stack systems is no longer supported. Export the ABAP System DEV Remarks In general the DEV system can be exported and imported into QAS in parallel. As of SAP NetWeaver 7.The name Software Logistics Toolset stands for a product-independent delivery channel which delivers up-todate software logistics tools.sap. Use the SL Toolset Dual-Stack split instead. Remove original JAVA stack. Remove original JAVA stack. but as MCOD installation. http://service. run the steps 1 . TPL template files. Remove original JAVA stack. Demonstration: Purpose Demonstration of the export 1. that the normal JLOAD procedure does not support the removal of the JAVA stack.

2. Not all of the tables within the *.STR files can be found with transaction SE11 (table maintenance) in the SAP System. What is the reason?
a) Tables that make up the ABAP dictionary itself, or that are used by internal kernel functions, cannot be viewed with standard dictionary transactions. A look at the database dictionary confirms that these tables do exist. R3LDCTL contains built-in knowledge about these tables and can write their structures directly into the *.STR files.

Task 2: The program R3SZCHK computes the size of each table, primary key, and index.
1. The target database of a system copy does not require INITIAL EXTENTs when creating a table. What else can be the purpose of the size computation?
a) The sizes of tables and indexes are used to compute the amount of disk space that will be required to create the target database. In addition, the Package Splitters rely on size information from the *.EXT files.

Task 3: Every R3LOAD process needs a command file to start a data export or import.
1. Which programs generate the command files?
a) The programs R3SETUP, SAPINST, or MIGMON create the command files.
2. How do the programs know how many command files to create if no table splitting is involved?
a) Command files are created for every *.STR file that can be found.

Task 4: JLOAD is used to export the JAVA data, which is stored in the database.
1. How is JAVA Web AS related file system data handled in NetWeaver 7.00?
a) The installed software components must be recognized by SAPINST 7.00 or by the tools which are called from it. Most of the file system data is collected in SAPCAR files, and the SDM data is stored inside the SDMKIT.JAR file. In addition, SAP System copy notes might give instructions on how to copy some files manually.

Unit 5: R3SETUP/SAPINST

Unit Overview
This unit describes the SAP installation programs R3SETUP and SAPINST. Emphasis is on how to implement user-defined break-points to stop R3SETUP/SAPINST after/before certain installation steps. The control files will be explained. There are no questions about this chapter in the examination.

Lesson: R3SETUP/SAPINST

Lesson Overview
Contents: The role of R3SETUP and SAPINST in the homogeneous or heterogeneous system copy process.

Business Example
The export or import phase of an R3LOAD based system copy should be improved. For that purpose, the installation tool R3SETUP/SAPINST must be stopped in certain phases. You need to know how to prepare the tools for that.

Lesson Objectives
After completing this lesson, you will be able to:
• Understand how R3SETUP and SAPINST control the export and import processes of homogeneous or heterogeneous system copies and how to influence their behavior
• Recognize the structure of the R3SETUP *.R3S control files, and be able to adjust their contents if necessary

Figure 61: R3SETUP: *.R3S Files

Many of the students know R3SETUP from their own installations. Different versions of R3SETUP use different *.R3S files; because of this, the system copy manuals must be read to figure out what to use. Older *.R3S files are, for example, CENTRDB.R3S and CENTRAL.R3S for a combined installation of central instance and database, and CEDBMIG.R3S, used for a combined installation of central instance and database for homogeneous or heterogeneous system copies. The command file DBEXPORT.R3S controls the database export of a homogeneous or heterogeneous system copy; DBMIG.R3S calls other *.R3S files for the import, as selected. DBRELOAD.R3S is only used for re-loading an already finished installation (that is, after the test migration); the DBRELOAD.R3S file is only available for Oracle!

Figure 62: R3SETUP: *.R3S File Structure

It is important to understand the internal structure of *.R3S files to be able to react on errors or to modify the contents of *.R3S files. The command file consists of several sections. The beginning of a section is always indicated by the section name in square brackets, and each section contains a set of keys and corresponding parameter values. Explain that the [EXE] section controls the execution order of the R3SETUP steps: it represents an installation roadmap with all of the steps listed in sequence. R3SETUP always reads the [EXE] section first to get the execution order, and then examines the status of each section. The steps are executed as listed (the step with the lowest number first); the first section with an ERROR status or without any status is executed next. After a section has been successfully executed, it receives the status OK. If a section cannot be executed, it receives the status ERROR and R3SETUP stops on error. Removing the OK status from a section will force R3SETUP to execute this section again. Some parameters are not written to the R3SETUP command file until runtime; parameters which are preset by editing the *.R3S file are not overwritten with default values.
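A shortened, illustrative sketch of the section layout of an *.R3S command file. The step and key names shown here are hypothetical and differ between R3SETUP versions; only the [EXE] roadmap and the per-section STATUS handling are the point of the example:

    [EXE]
    10=DBCREATETABLESPACES_IND_ORA
    20=DBR3LOADEXEC_IND_IND
    30=RFCPOSTLOADACTIVITIES_IND_IND

    [DBCREATETABLESPACES_IND_ORA]
    STATUS=OK              completed successfully, will be skipped on restart

    [DBR3LOADEXEC_IND_IND]
    STATUS=ERROR           R3SETUP restarts here; remove a STATUS=OK line to force
                           re-execution of an already completed section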

Figure 63: R3SETUP: User-Defined Break-Points

Why insert break points into the R3SETUP execution flow? Between the execution of two command sections in a *.R3S file, R3SETUP can be forced to stop by implementing user-defined break-points. As shown in the graphic, you may need to stop and make manual changes to the R3LOAD control files, modify database settings, or even call MIGMON. SAP Note "118059 Storage parameter for system copy with R3load" describes how to implement user break-points. The emphasis should be on the fact that every break point (exit step) will only be executed once. SAP Note "784118 System Copy Java Tools" explains how to find the MIGMON software on the SAP Service Marketplace; the MIGMON*.SAR archive contains a PDF document which shows how to use MIGMON with R3SETUP. The R/3 installation kits for Windows operating systems provide R3SEDIT.EXE for modifying *.R3S files in an easy way.

Figure 64: R3SETUP: LABEL.ASC

As different migration kits can be in charge when doing the export or import, the expected content of the LABEL.ASC file may differ. The content of the LABEL.ASC file in the export directory is compared against the expected string inside DBMIG.R3S to make sure that the import is read from the right location. The slide shows how the expected LABEL.ASC content can be found inside a *.R3S file. The same mechanism is used by SAPINST.

Figure 65: SAPINST: *.XML Files

The most important difference between R3SETUP and SAPINST is the usage of *.XML files. SAPINST records the installation progress in the "keydb.xml" file. The "package.xml" file contains the names of the installation media (CDs) and the expected LABEL.ASC content. A description of the file contents is not available, and it is nearly impossible to make changes inside these files on an intuitive level. SAPINST can continue the installation from a failed step, without having to repeat previous steps.

Figure 66: SAPINST: User-Defined Break-Points

As long as the SAPINST version used does not provide a documented way to implement user break-points, the program must be forced to stop by intended error situations. SAPINST starting with 7.0 SR2 offers the possibility to manipulate step execution via a graphical user interface, the so-called "Step Browser". The Step Browser shows the components and steps that make up an installation. You may manipulate the state of single steps, groups of steps, and even whole components and their subcomponents. By invoking the context menu for a step and choosing "Insert Dialog Exit Step above Selection" or "Insert Dialog Exit Step below Selection" you may stop an installation before or after a certain step. To activate the Step Browser, call SAPINST with the command line parameter "SAPINST_SET_STEPSTATE=true". The current version of SAPINST can be checked by executing "SAPINST -v". Explain that everyone who makes use of the Step Browser functionality is individually responsible for the result; the Step Browser functionality is not supported officially, SAP is not supporting this feature, so the usage is at your own risk! Show the SAPINST Step Browser screen shots: D:\Additional_Files\TADM70\Templates+Doc\SAPINST_STEP_Editor\SAPINST_STEP_EDITOR_screen_shots.pdf

Figure 67: Size of the Target Database

It is confusing that JSIZECHECK and R3SZCHK create files of the same name. The values calculated for ABAP database storage units are estimates, like the values for the initial extents, and primarily serve as guidelines for sizing the target database. You will probably have to increase or decrease individual values during, or after, the first test migration.
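The two command lines mentioned above, as they would be entered on the command line; this is a sketch, and the exact SAPINST executable name and path depend on the installation master media:

    ./sapinst SAPINST_SET_STEPSTATE=true    # start SAPINST with the Step Browser activated
    ./sapinst -v                            # display the SAPINST version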

The target database size calculation is based on estimations. ABAP tables and indexes that are larger than 1.78 GB are normalized to an initial extent of 1.78 GB; the JAVA DBSIZE calculation does not have table or index size limitations. Adjust the database size manually if required. As the file structure is neither easy to read nor documented, modifying the files can be risky.

Exercise 5: R3SETUP/SAPINST
Solution 5: R3SETUP/SAPINST

Task 1: The installation program R3SETUP is started with a command line containing the name of a "*.R3S" file (i.e. "R3SETUP -f DBMIG.R3S"). The purpose of "*.R3S" files is not only to define installation steps; they are also used to store parameters and status information. Every time R3SETUP is started, it begins the execution at the first step that has the status "ERROR", or no status at all. R3SETUP sets the status of completed steps to OK and stops on error if a step cannot be executed successfully. A description of each installation step and related parameters can be found in the installation directory (sub-directory "doc"), or on the installation CD.

1. In the case where we need to repeat a system copy import, it would be helpful to have a DBMIG.R3S file that rebuilds the database without reinstalling the database software again. What can be done to create such a "*.R3S" file? Different methods are possible.
a) Insert a break point in the "*.R3S" file at the place where R3SETUP should stop. For that purpose, the "*.R3S" file is copied first: copy the "*.R3S" file using a new name and begin editing at the section where R3SETUP should start later on.
b) Remove the "STATUS=OK" lines from the completed "*.R3S" file. If you reuse an already executed "*.R3S" file, be sure to remove the STATUS=OK lines from all sections following the [EXE] order. Caution: the step order is defined in the [EXE] section; do not skip steps, as it might cause unexpected side effects.

2. What happens to R3SETUP parameters that were preset by hand?
a) R3SETUP does not overwrite preset parameters with default values.

Task 2: SAPINST stores all its installation information in "*.XML" files.
1. What can be done to force SAPINST to stop before a certain installation step?
a) Since SAPINST NetWeaver 7.0 SR2, the Step Browser can be used to insert an exit dialog before or after an installation step. Earlier SAPINST versions can only be stopped by forcing intended errors.
2. For repeated test migrations or for the final migration of a production system, it would be useful to have a SAPINST that starts at a certain step. How could this be achieved without modifying the files?
a) Stop SAPINST before the step where you would like to start later on. Copy the entire installation directory as it is. Restore the saved installation directory to its original location to redo the installation. Use this method only if you want to repeat the installation exactly like it was done before!

Unit 6: Technical Background Knowledge

Data Classes (TABARTs)

Lesson Overview
Purpose of Data Classes (TABARTs) in the ABAP DDIC and R3LOAD control files.

Business Example
In the target database of a migration, some very large tables should be stored in customer-defined database storage units. For this purpose, you need to know how the ABAP data dictionary and R3LOAD deal with Data Classes/TABARTs.

Figure 68: Definition

Explain that the following slides aim to be database independent. To achieve this, the term "database storage unit" was chosen as a synonym for any database disk architecture. By this definition, examples of database storage units are:
• Tablespaces (Oracle)
• Dataspaces (Informix)
• Tablespaces/containers (DB2 LUW)
The participants should understand that a TABART is an order criterion only. All tables belong to a certain TABART, regardless of the database used, and tables of a single TABART can be stored together somewhere in the database. The terms "TABART", "data class", and "table type" mean the same; TABART sounds more German, but it should be used throughout the training.

Figure 69: TABART – Table Types (1)

The table types are maintained in the ABAP Dictionary.

Figure 70: TABART – Table Types (2)

Tables in clusters or pools also contain TABART entries in their technical configuration. These entries do not become active unless the tables are converted to transparent tables.

Figure 71: TABART – Table Types (3)

Figure 72: TABART – Table Types (4)

Since NetWeaver '04, the above TABARTs can be found in any SAP System based on Web AS 6.40 and later. Even if no BW InfoCube was created, some tables do exist that belong to the TABARTs shown above.

Figure 73: Tables DDART, DARTT, TS<DBS>

Table DDART contains all the TABARTs that are known in the SAP System. Table DARTT contains short TABART descriptions in various languages. The TS<DBS> tables contain the list of all SAP-defined storage units in a database. Explain that even TABARTs which stem from earlier SAP system versions cause no harm if no tables are assigned to them. Explain that the TS<DBS> tables can even contain storage unit names which are mapped to non-existing storage units; as long as they are not used, this is not a problem. Note: table TSDB2 may not exist in NetWeaver systems.

Figure 74: Assignment: TABART – Database Storage Unit

Show the content of at least TAORA and IAORA by using transaction SE11.
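To make the table relationships concrete, here is a hedged example using the customer data class ZZTR1 and the tablespace PSAPSR3ZZTR1 from the exercise later in this unit. The entries are shown in a simplified form (not the complete field lists of the tables):

    DDART   TABART = ZZTR1, class = USR          (customer data class definition)
    DARTT   TABART = ZZTR1                        (language-dependent short description)
    TSORA   TABSPACE = PSAPSR3ZZTR1               (list of known tablespaces)
    TAORA   TABART = ZZTR1 -> TABSPACE = PSAPSR3ZZTR1   (data storage assignment)
    IAORA   TABART = ZZTR1 -> TABSPACE = PSAPSR3ZZTR1   (index storage assignment)
    DD09L   TABNAME = ZTR1 -> TABART = ZZTR1      (technical settings of the table)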

. from table DD09L of the ABAP Dictionary. The “Initial Extent Size” actually used. which is the same for table and index. Show the content of at least TGORA and IGORA by using transaction SE11. “Next Extent”. “Min Extent”. Explain the consequences on changes in DD09L (tables are in different *.\ Figure 76: Table and Index Storage Parameters Explain that the size category in DD09L is mapped in table TG<DBS> and IG<DBS> to database dependent values. or retrieved from the database! DD09L: ABAP Dictionary. This information is written to the *. for Oracle) about the size of “Default Initial Extent”. and “Max Extent”. The size category is a single value.R3LDCTL reads tables TA<DBS> and IA<DBS>. Table TG<DBS> gives R3LDCTL the information (i. tables are mapped to a TABART and a size category. Figure 75: Technical Configuration – Table DD09L In DD09L. The participants should understand that the next extent size is never been computed.STR files. Select for table T000.e. This information is saved in the files DDL<DBS>. The size category in DD09L will be used as the next extent value in *. The assignment of a table to a specific table category is used to determine the “Next Extent Size” in *.EXT.STR. Note: table TGDB2 and IGDB2 may not exist in NetWeaver systems. is calculated and saved in *. Tables TA<DBS> and IA<DBS> only exist for databases with the appropriate architecture. and writes the assignments between TABARTs and database storage units into DDL<DBS>TPL.STR files). TG<DBS>/IG<DBS>: Assignment of size category (TABKAT = table category) to database storage parameters. technical configuration of tables (TABART and TABKAT) R3LDCTL extracts the corresponding TABART and size category (TABKAT) for each table.STR files. Show the content of at least TAORA and IAORA by using transaction SE11. Show the content of table DD09L by using transaction SE11.TPL.

See SAP Notes:
• 046272 Implement new data class in technical settings
• 490365 Tablespace naming conventions

you will be able to: • Explain the purpose of table DBDIFF • Understand how the R3LOAD/JLOAD data access is working • Distinguish between the R3SZCHK behavior if the target database type is the same or different than the source database type The ABAP and the JAVA database access is done via the DBSL interface. which is the abstraction layer between physical and logical database access.STR files then visible in the ABAP dictionary transaction SE11. .Figure 79: Moving Tables and Indexes Between SAP Releases In the past. • ABAP Dictionary parameters were not properly maintained after the customer had re-distributed the tables to new database storage units. You also want to know how the ABAP data types are translated into database specific data types. During a homogeneous or heterogeneous R3LOAD system copy. Lesson: Miscellaneous Background Information Lesson Overview Miscellaneous background information about table DBDIFF. R3LOAD/JLOAD data access. If it is essential to have single tables stored in specific database storage units. and R3SZCHK size computation. instead of being assigned to the TABART were currently being stored. check the *. If the Oracle reduced tablespace set is used for the target database. tables can move between different storage units when doing a R3LOAD system copy. SAP changed the mapping between tables and TABARTs in table DD09L some times. As a result. R3LOAD always creates tables and indexes in locations obtained from the ABAP Dictionary. tables may be moved unintentionally from one database storage unit to another. Table movement can significantly change the size of source and target database storage units. The reason for this could be that: • Some tables were assigned to TABARTs of other database storage units. • Older SAP System Releases were installed with slightly different table locations than subsequent releases. all thoughts about table and index locations are obsolete.STR files before starting an import. and some objects are defined even differently. Business Example You wonder why there are more tables in the *. This should not be a real problem unless the tables in charge do need a special storage location. Lesson Objectives After completing this lesson.

or need special treatment otherwise. not the data types of the database. R3LOAD uses the interface to read/write data to/from the database. which better fits to the ABAP data type. Figure 82: R3LOAD – ABAP Data Access This slide should make clear that R3LOAD does not need to know how to read or write database data. a change in the DBSL can do this. depending on the database type. but it also means that the content of the involved tables must be converted or exported and imported again. R3up. this involves database-specific objects and the tables of the ABAP Dictionary itself. The ABAP data types are modeled through the SAP database interface (DBSL) into the suitable data type for the database used. the DBSL interface stores the same amount of data in a different number of rows. If necessary. disp+work. are hard coded in R3LDCTL. If SAP decides to use a new data type on a database. Show the content of table DBDIFF by using transaction SE11. Tables. The SAP<TABART>. …). views. Refer to the ABAP Dictionary manual for further information. and indexes contained in the exception table DBDIFF.Figure 80: Exception Table DBDIFF Some tables might not be defined in the ABAP dictionary. . R3trans. Different databases provide different data types and limitations to store binary or compressed data in long raw fields.e. or the data definitions intentionally vary from those in the database. Generally. R3LDCTL reserves special treatment for tables. Show the field where the reasons are stored (show which options exist). since the ABAP Dictionary either does not contain information about these tables. which are not defined in the ABAP dictionary. are database-specific. tp.STR files contain the ABAP data types. Exceptions are listed in table DBDIFF. Every database specific stuff will be done by the DBSL interface that is basically a library which is linked to every SAP program which accesses the database (i. See related SAP Notes: • 033814 DB02 reports inconsistencies between database & Dictionary • 193201 Views PEUxxxxx and TEUxxxxx unknown in DDIC Figure 81: Database Modeling of the ABAP Data Types The DBSL interface is the translator between ABAP data types and database data types.

In this process. the size values can be taken from the source database. If you suspect that any tables are inconsistent. To determine the correct size values. The participants should understand that the “active nametab” contains the activated dictionary objects. the ABAP dictionary information is used to calculate the size of tables and indexes (R3SZCHK option -s DD). In the case of a homogeneous system copy.-) are hard coded inside R3SZCHK. No database specific data will be used as the computation is for a different database! The magic formulas . In the case of a database change. Transaction SE11 can be used to check the consistency of individual tables or views. Fix the NAMETAB problem with appropriate methods or mark the table entry as comment in the *. Tables that have a large number of extents can be given a sufficiently large initial extent in the target database.STR file. . Changes to the ABAP Dictionary are not written (and therefore are not effective) until they are activated in the NAMETAB. The data in the database is accessed via the runtime object of the active NAMETAB. since the data types and storage methods differ. Indexes. If the DB stays the same. you can check them individually using transaction SE11. The ABAP Dictionary should be OK in a standard SAP System. Sometimes tables exist in the active NAMETAB but not in the database.Figure 83: Consistency Check: ABAP DDIC – DB and Runtime Transaction SE11 can be used to verify the consistency of tables. the database statistics (update statistics and so on) must be current. the system checks whether the tables or view definitions in the ABAP Dictionary (DDIC) agree with the runtime object or database object. the table/index size information from the DB are used (R3SZCHK option -s DB). the sizing information from the source database cannot be used to size the target database. which are used to access table data. R3LOAD will stop the export on error. Database When performing a DB Migration. In this case. Figure 84: ABAP Size Computation: Tables.

Figure 86: JLOAD – Data Access The participants should understand that the SAP JDBC Interface is special and can not be compared with a standard JDBC interface. SAP transaction logic. Exclude lists tell JLOAD and JPKGCTL (JSPLITTER) which objects must not be exported and which objects need special treatment during the export (i. JLOAD uses SAP Open SQL to access database data. removal of trailing blanks) A catalog reader (JAVA Dictionary browser) will be available with 7.10. SAP OPEN SQL compatibility).e. a) Define new TABARTs ZZTR1 in tables DDART and DARTT. which are used by SAP Open SQL (i. 1. Exercise 6: Technical Background Knowledge Exercise Duration: 10 Minutes Business Example You need to know how to handle customer specific Data Classes/TABARTs and you are interested in information about how the ABAP and JAVA data types are converted to database specific data types. if JLOAD is used! Do not mix a JLOAD import with other methods (i. The name of the Java Data Dictionary table is: BC_DDDBTABLERT.e. The name should only be mentioned if explicitly asked for. b) Add the new tablespace name to TSORA. For that purpose the necessary tasks were done in the ABAP dictionary: TABART ZZTR1 was created and the tablespace name PSAPSR3ZZTR1 was defined. which is delivered from database vendors. The JAVA Web AS table and index definitions are stored as XML documents in the dictionary table. . Which changes were done to the ABAP dictionary of the source system? Which tables were involved? Note the table entries.Figure 85: JAVA Data Dictionary The Exclude list defines which tables must not be exported (because they are views!). database specific import tools). describing the table and indexes. Note: The JAVA Dictionary table will only be filled with the XML-documents. Solution 6: Technical Background Knowledge Task 1: The OS migration of a large Oracle database was utilized to move the heavily used customer table ZTR1 to a separate table space. The SAP JDBC interface implements specific extensions to the JDBC standard.e. the SAP JAVA DDIC.

and the mapping between TABART/tablespace. Task 3: The *. interface? 1.STR file before starting a time consuming export? Which steps are necessary? a) R3LDCTL can be executed stand-alone as the <sapsid>adm user. If no command line parameters are provided. The ABAP dictionary tables.c) Map TABART ZZTR1 to tablespace PSAPSR3ZZTR1 in tables TAORA and IAORA. They also know what never should be touched. contents and structure of the R3LOAD control and data files Figure 87: Overview: R3LOAD Control and Data Files . Task 2: A customer database was exported using R3LOAD. 1. What is the reason that no additional *. The existence of customer TABARTs does not cause the creation of additional *. It will take a few minutes. The created files can then be checked for proper content. which were stored in separate Oracle tablespaces.STR files contain database independent data type definitions as used in the ABAP dictionary. d) Change the TABART entry for table ZTR1 to ZZTR1 in table DD09L. which are used to define additional TABARTs. they should know very well how to influence the R3LOAD and JLOAD behavior by manipulating the control file contents. if no tables have been mapped to it. Task 4: Every database vendor provides a JDBC interface for easy database access. 2. Lesson: R3LOAD Files Lesson Overview Purpose. which knows how to handle them. 1.TPL files in the current directory. A look into the export directory shows that no additional *.STR files exist for tables.STR files were created besides the standard ones? a) The technical settings (table DD09L) of the involved objects were not changed. because it calls the database interface (DBSL). R3LDCTL will create *. Important JDBC extensions are the usage of the SAP JAVA Dictionary and the implementation of the SAP transaction mechanism.STR files. Why is SAP using its own JDBC interface? a) Standard JDBC interfaces do not provide features required by SAP applications. How is R3LOAD able to convert database independent into database specific data types? a) R3LOAD does not need specific knowledge about the data types of the target database. It describes every R3LOAD and JLOAD control file in every aspect. What can be done in advance to check the proper creation of an *.STR and DDL<DBS>. Unit 7 R3LOAD & JLOAD Files This is the most important unit of the entire course. Note: Table and index data can also be stored in the same tablespace. After the participants heard this unit. were properly maintained. containing the list of tablespaces.

TPL: Description The “DDL<DBS>. “Next Extent Size” classes are defined separately for tables and indexes. views. or indexes from the load process. They contain the primary key of each row which cannot be properly translated to Unicode.TPL Figure 89: DDL<DBS>. the primary key or secondary indexes are generated either before. Normally the R3LOAD based data export is done sorted by primary key.XML files during Unicode conversions. The assignment of TABART and data/index storage is made here for databases that support the distribution of data among database storage units. Figure 90: DDL<DBS>. provided this is supported by the target database. delete and truncate data SQL statements can be defined for better performance in R3LOAD restart situations. This default behavior can be switched on and off in the DDL<DBS>.R3LOAD writes <PACKAGE>*. Figure 88: R3LOAD: DDL<DBS>. Depending on the database used. Database specific drop. These files are not discussed in this course. Typical examples include tables LICHECK and MLICHECK. A negative list can be used to exclude tables. The content is used by transaction SUMG to fix the problems in the target system.TPL” files contain the database-specific description of the create table/index statements. or after the data is loaded.TPL file. R3LOAD uses these descriptions to generate the tables and indexes.TPL: Naming Conventions .

40 “DDL<DBS>_LRG.EXT files. You can also see a DDLMYS. The Sybase ASE related template file is called DDLSYB.TPL” files are created to support unsorted exports (were it makes sense).TPL SAP Note: 1591424 SYB: Heterogeneous system copy with target ASE Figure 91: DDL<DBS>. . but this is not used. Since R3LDCTL 6.0 EHP 2. Variables are indicated by “&” and filled with various values from *. The “DDL<DBS>.6D Function / Section names: • Create primary index order. primary keys and secondary indexes by R3load.TPL” files are generated by R3LDCTL.Explain that the DDL<DBS>_LRG. Starting with NW 7. For Oracle parallel index creation was added. *.02 SP9 or ERP 6. and from the storage sections of the DDL<DBS>.TPL: Structure – Create Table The DDL files are templates used to generate database specific SQL statements for creating tables.TPL files are used to support unsorted exports and in case of Oracle a parallel index creation. sorted / unsorted export: prikey • Create secondary index order: seckey • Create table: cretab • Create primary key: crepkey • Create secondary index: creind • Do not create and load table: negtab • Do not create index: negind • Do not create view: negvie • Do not compress table: negcpr • Storage location: loc • Storage parameters: sto Figure 92: DDL<DBS>.TPL file itself.TPL file. migrations to Sybase ASE are supported.TPL: Internal Structure ≤ 4.STR.

EXT exists or when it does not contain the table.TPL: Structure − Negative List The negative list can be used to prevent tables.TPL: Structure − Create Index Secondary indexes can be unique or ununique. indexes. and views from being loaded. Figure 95: DDL<DBS>.Figure 93: DDL<DBS>. .TPL: Structure − Table Storage The default initial extent is only used when no <PACKAGE>. The entries are separated by blanks and can be inserted into a single line. Primary keys are always unique. Figure 94: DDL<DBS>.

STR files. If R3LOAD cannot find a specific table or index entry in the <PACAKAGE>.XML).TPL: Structure − Index Storage The default initial extent is only used when no “<PACKAGE>. tablespaces for Oracle) can be added to the DDL<DBS>.EXT” exists.TPL) or SAPINST (DBSIZE.TPL: Structure − Second Example A less complex example from MaxDB. If you do this. The same index storage parameters are used for primary and secondary indexes. Figure 97: DDL<DBS>.New TABARTs for additional storage units (i. and the corresponding create database templates for R3SETUP (DBSIZE. Figure 96: DDLDBS. . It is easier to change the ABAP Dictionary before the export.TPL by changing the table and index storage parameters. change the *. or when it does not contain the index.EXT file. the missing entry is ignored and default values are used. than to change the R3LOAD control files.e.

TPL: Internal Structure ≥ 6. The “&where&” condition is used when restarting the import of splitted tables. . All other sections are similar to 4.6D and below.TPL: Structure − DROP/DELETE Data Above are the templates for dropping objects and deleting/truncating table data.10 Do not change the sections marked “do not change” unless explicitly asked to do so in an SAP Note or by SAP support. Some functions apply to specific database types or database releases only. drop view: drpvie • Truncate data: trcdat • Delete data: deldat • Do not create table: negtab • Do not load data: negdat • Do not create index: negind • Do not create view: negvie • Do not compress table: negcpr • Storage location: loc • Storage parameters: sto Figure 99: DDL<DBS>. drop secondary index: drpind • Create view: crevie.Figure 98: DDL<DBS>. sorted / unsorted export: prikey • Create secondary index order: seckey • Create table: cretab. drop primary key: drppky • Create secondary index: creind. Function / Section names: • Create primary key order. drop table: drptab • Create primary key: crepky.

making this mechanism obsolete.EXT Figure 101: <PACKAGE>. In case of Oracle dictionary managed tablespaces the values for the “initial extent” can be increased or decreased as required. this information is accurate enough for package splitting.78 GB (more precisely 1700 MB). If R3LOAD cannot find a specific table or index entry in the <PACAKAGE>.EXT: Initial Extent (1) The <PACKAGE>.EXT: Initial Extent (2) The size of “initial extent” is based on assumptions about the expected space requirements of a table.EXT files will created for all database types. otherwise small tables or indexes could block the storage unit easily. compression. Even if the maximum size of a table is limited to 1.EXT file. and the data type used. Figure 102: <PACKAGE>.TPL/DBSIZE.XML and for package splitting. the missing entry is ignored and default values are used. Factors such as the number and average length of the data records.log if reaching the size limit: • WARNING: REPOLOAD in SLEXC: initial extent reduced to 1782579200 • WARNING: /BLUESKY/FECOND in APPL0: initial extent reduced to 1782579200 . play an important role. Typical warning in R3SZCHK. R3ZSCHK limits the maximum initial extent to a value of 1. Today’s database releases handle the storage more flexibly. This was implemented to prevent data load errors of very large tables because of having not enough consecutive space in a single storage unit. because the extent values are used to compute the size of the target database DBSIZE.78 GB. Observe database-specific limitations for maximum “initial extent” sizes.Figure 100: R3LOAD: <PACKAGE>.

Figure 103: R3LOAD: <PACKAGE>.STR
Figure 104: <PACKAGE>.STR: Description (1)
The term "package" is used as a synonym for R3LOAD structure files (*.STR). The ABAP Nametab tables DDNTF / DDNTT (and since 6.x DDNTF_CONV_UC / DDNTT_CONV_UC for Unicode conversions) require a certain import order. The JAVA-based Package Splitter makes sure that the Nametab tables are always put into the same file (SAPNTAB.STR). The data of tables in SAP0000.STR will never be exported. ABAP report loads must be regenerated on the target system.
Figure 105: <PACKAGE>.STR: Description (2)
The buffer flag is used for OS/390 migrations (as of Release 4.5A). It indicates how to buffer tables in an OS/390 DB2 database. If R3LOAD cannot find a specific table or index entry in the <PACKAGE>.EXT file, the missing entry is ignored and default values are used.
Table type (conversion type with code page change):
• C = Cluster table
• D = Dynpro (screen) table
• N = Nametab (active ABAP Dictionary)
• P = Pooled table
• Q = Unicode conversion related purpose
• R = Report table
• T = Transparent table
• X = Unicode conversion related purpose
R3LOAD activity:
• all = Create table/index and load data
• data = Load data only (table must be created manually)
• struct = Create table/index, but do not load any data
For tables which are marked with "struct", R3LOAD will not create a data export or import row inside the task file. This will prevent the export or import of unwanted table data.
Figure 106: <PACKAGE>.STR: Object Structure (1)
The total of field lengths is the offset of the next data record to read in the "<PACKAGE>.<nnn>" file. Comments are indicated by a "#" character in the first column.
Figure 107: <PACKAGE>.STR: Object Structure (2)
The "dbs:" list specifies databases for which the object should be created. A leading "!" means the opposite. The "dbs:" was implemented starting with R3LOAD 6.40. In the above example, the index MLST~1 will be created on all databases except ADA and MSS. The index MLST~1AD will be created on ADA and MSS only.

Figure 108: SAPVIEW.STR: View Structure
Views are not generated in the target system until all of the tables and data have been imported. The corresponding "SAPVIEW.EXT" file does not contain any entries or does not even exist, since views do not require any storage space other than for their definition in the DB Data Dictionary.
Figure 109: R3LOAD: <PACKAGE>.TOC
Figure 110: <PACKAGE>.TOC: Description
The content of the <PACKAGE>.TOC file is used by R3LOAD version 4.6D and below to restart an interrupted export. As of R3LOAD 6.10, the <PACKAGE>.TSK file is used for the restart.

Figure 111: <PACKAGE>.TOC: Internal Structure ≤ 4.6D
Figure 112: <PACKAGE>.TOC: R3LOAD Restart Export
The above restart description is only valid for R3LOAD less than or equal to 4.6D! R3SETUP adds the "-r" command line option automatically when restarting R3LOAD. A restart without option "-r" will force R3LOAD to begin the export at the very first table of the *.STR file in charge. The existing *.LOG file will be automatically renamed to *.SAV and the existing *.TOC file will be reused, but not cleared.
If R3LOAD export processes are interrupted due to a system crash or a power failure, a restart can be dangerous as it starts after the last *.TOC entry, which might not be valid. In this case, the *.TOC file may list more exported tables than the dump file really contains (since the operating system was not able to write all the dump file buffers to disk). This can lead to missing data or duplicate keys later on. It is recommended to delete the related *.TOC, *.LOG, and dump files before repeating a complete export of a single *.STR file or of the whole database. See the troubleshooting chapter for details on how to prevent this situation.
Figure 113: <PACKAGE>.TOC: Internal Structure ≥ 6.10

Since R3LOAD 6.10, the *.TSK file is used to restart a terminated data export! The *.TOC file is read to find the
last write position only.

Figure 114: <PACKAGE>.TOC: Internal Structure ≥ 6.40
In the case of split tables, the WHERE condition used during the export is written into the respective *.TOC file.
Before starting the import, R3LOAD compares the WHERE condition in the *.TOC file against the where
condition in the *.TSK file. R3LOAD assumes a problem if they do not match and stops on error.
If there is an error during data load and R3LOAD must be restarted, the WHERE condition is used for selective
deletion of already imported data. Unicode code pages: 4102 Big Endian, 4103 Little Endian.
Non-Unicode code pages: 1100, MDMP (for exports of MDMP systems).

Figure 115: R3LOAD: <PACKAGE>.<nnn>

Figure 116: <PACKAGE>.<nnn>: Description
Depending on the source database used for the export, a data compression ratio between 1:4 and 1:10 or more can be achieved. The compression is performed at block level, so the file cannot be decompressed as a whole.
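To get a feel for the numbers (the figures here are invented for illustration): a table occupying 20 GB in the source database would, at a 1:5 ratio, produce roughly 4 GB of dump data; at 1:10, roughly 2 GB.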
Some versions of R3SETUP/SAPINST ask for the maximum dump file size (other versions use different defaults; check the *.CMD file for the value used). Each additional dump file (for the same *.STR file) is assigned a new number (such as SAPAPPL1.001 or SAPAPPL1.002). The files of a PACKAGE are all generated in the same directory (unless specified differently in the *.CMD file, which is possible as of 6.10 only). Make sure that the available disk space is sufficient.
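As a sketch (directory names, package name, and sizes are invented; the syntax follows the *.CMD examples shown later in this unit), a ≥ 6.10 command file that spreads the dump files of one package over two file systems and caps each file at 300 MB could look like this:
tsk: "/install/SAPAPPL1.TSK"
icf: "/export/DATA/SAPAPPL1.STR"
dcf: "/install/DDLORA.TPL"
dat: "/exp1/DATA/" bs=1k fs=300M
dat: "/exp2/DATA/" bs=1k fs=300M
dir: "/export/DATA/SAPAPPL1.TOC"
Once /exp1 is full, R3LOAD continues writing dump files to the next "dat:" location.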
A checksum calculation at block level is implemented as of R3LOAD 4.5A to ensure data integrity.
R3LOAD versions 4.5A and above compare source system information obtained from the dump file against the
actual system information. If R3LOAD detects a difference in OS or DB, a migration key is necessary to
perform the import (see GSI section in export log file).

Figure 117: <PACKAGE>.<nnn>: Internal Dump File Structure
R3LOAD reads a certain amount of database data into an internal buffer and compresses it. The number of written blocks (group) depends on the compression result and the block size used. This number is also written into the dump file, to tell R3LOAD how many blocks to read later on.
Since 4.5A, a header block is used to identify heterogeneous system copies and to verify the migration key. Also as of 4.5A, every group of compressed data blocks has its own checksum. Before a checksum can be verified, all blocks of a group must be read by R3LOAD. If a dump file has been corrupted during a file transfer, typical R3LOAD read errors are: RFF (cannot read from file), RFB (cannot read from buffer), or "cannot allocate buffer of size ...". For more details, see unit "Troubleshooting".

Figure 118: R3LOAD: <PACKAGE>.LOG

Figure 119: <PACKAGE>.LOG: Export Log ≤ 4.6D
The header entries (GSI) of the export *.LOG file show important information which can be useful for the
migration key generation.

Figure 120: <PACKAGE>.LOG: Export Log ≥ 6.10
Since R3LOAD 6.10, the installation number of the exported system is also written to the log.

Figure 121: <PACKAGE>.LOG: Import Log ≤ 4.6D

Figure 122: <PACKAGE>.LOG: Import Log ≥ 6.10
Figure 123: <PACKAGE>.LOG: R3LOAD Restart Import
Up to R3LOAD 4.6D, the restart point for an interrupted import is read from the <PACKAGE>.LOG file. R3SETUP adds the "-r" option automatically when restarting R3LOAD. A restart without option "-r" will force R3LOAD to begin at the very first table of the *.STR file. The import process will terminate on error, as the database objects already exist. The existing import *.LOG file will be automatically renamed to *.SAV.
For R3LOAD ≥ 6.10 and < 6.40, only the *.TSK file is used to restart an interrupted import! The restart point for the data load is the first entry in the *.TSK file with status error (err) or execute (xeq). The restart performs a delete data (DELETE FROM) or a drop table/index.
Figure 124: <PACKAGE>.LOG: Import Log ≥ 6.40
As of R3LOAD 6.40, separate time stamps for create table, load data, and create index are implemented. This allows much better load analytics than on previous releases.
Figure 125: R3LOAD: <PACKAGE>.CMD
Figure 126: <PACKAGE>.CMD: Description
Command files are automatically generated by the SAP installation programs R3SETUP, SAPINST, and MIGMON. The "<PACKAGE>.CMD" files contain the names and paths of the files from which R3LOAD retrieves its instructions. The name of the "<PACKAGE>.CMD" file must be supplied on the R3LOAD command line. R3LOAD dump files can be redirected to different file systems by adapting the "dat:" entry. The default maximum size (fs) of a dump file is often 1000M (1000 MB). Possible units:
• B = Byte
• K = Kilobyte
• M = Megabyte
• G = Gigabyte
Do not change the block size (bs).
Figure 127: <PACKAGE>.CMD: Internal Structure ≤ 4.6D
Meaning of section names:
• icf: Independent control file
• dcf: Database dependent control file
• dat: Data dump file location
• dir: Directory file (table of contents)
• ext: Extent file (not required at export time)
The DDL<DBS>.TPL file is often read from the installation directory; R3SETUP/SAPINST copied it from the export directory. This is done for the option to adapt storage locations and so on. If more than one PACKAGE is mentioned in a *.CMD file, a single R3LOAD will execute them in sequential order. This might be useful in certain cases.
Figure 128: <PACKAGE>.CMD: Internal Structure ≥ 6.10
Meaning of section names:
• tsk: Task file
• icf: Independent control file
• dcf: Database dependent control file
• dat: Data dump file location (up to 16 different locations)
• dir: Directory file (table of contents)
• ext: Extent file (not required at export time)
In the above example, the first dump file SAPPOOL.001 will be written to /migration/DATA, the second dump file SAPPOOL.002 to /mig1/DATA, and so on. The 4th, 5th, etc. dump file will be stored in the last defined dump location.
Figure 129: R3LOAD: <PACKAGE>.STA
Figure 130: <PACKAGE>.STA: Description

The values are estimates and serve primarily to display the load progress. The generation of statistic files is switched off by default. Use R3LOAD option -s <stat file> to make use of the statistic feature.
Figure 131: R3LOAD: <PACKAGE>.TSK
Figure 132: <PACKAGE>.TSK: Description
Since R3LOAD is using task files, the restart points are no longer read from *.LOG or *.TOC files. Complex restart situations with manual user interventions are minimized or easier to handle. Objects or data can be easily omitted from the import process by simply changing the status of the corresponding <PACKAGE>.TSK row. See SAP Note 455195 "R3LOAD: Purpose of TSK Files" for further reference.
Figure 133: <PACKAGE>.TSK: Internal Structure for Export
The slide above shows the initial <PACKAGE>.TSK file content, after it was created by R3LOAD. Please check unit 8 "Advanced Migration Techniques" for the table split case.

Figure 134: <PACKAGE>.TSK: Syntax Elements
The "<PACKAGE>.TSK" files are used to define which objects have to be created and which data has to be exported/imported by R3LOAD. It is also used to find the right restart position after a termination. There is also an action "D" which can be used to delete objects with R3LOAD, but it is used in exceptional cases only.
Status:
• xeq = Task not yet processed
• ok = Task successfully processed
• err = Failure occurred while processing the task. The next run will drop the object or delete/truncate data before re-doing the task.
• ign = Ignore task, do nothing.
The status "ign" can be used to omit a task action and to document it as well. Setting a task manually to "ok" will have the same result, but it is not visible for later checks.
Figure 135: <PACKAGE>.TSK: Internal Structure for Import
The above <PACKAGE>.TSK file shows the content after R3LOAD has stopped on error. The corresponding <PACKAGE>.LOG file contains the error description/reason. Please check unit 8 "Advanced Migration Techniques" for the table split case.

Figure 136: <PACKAGE>.TSK: Create Task File
R3LOAD creates the "<PACKAGE>.TSK" files from existing "<PACKAGE>.STR" files. In case of table splitting, the content of the WHERE file (*.WHR) is added to the task file.
Example: Create *.TSK file for export
R3LOAD -ctf E SAPAPPL0.STR DDLORA.TPL SAPAPPL0.TSK ORA -l SAPAPPL0.log
After starting the database export or import, R3LOAD renames <PACKAGE>.TSK to <PACKAGE>.TSK.BCK and inserts line-by-line from <PACKAGE>.TSK.BCK into the <PACKAGE>.TSK as soon as a task (create, import, export, ignore) was finished successfully (status: ok) or unsuccessfully (status: err). In the case of restarting, R3LOAD searches in the <PACKAGE>.TSK for not completed tasks of status "err" or "xeq", and executes them. R3LOAD automatically deletes <PACKAGE>.TSK.BCK after each run.
Note: If more than one R3LOAD is executing the same task file by accident, one of the processes will find an existing <PACKAGE>.TSK.BCK file and then stop on error. This should prevent running parallel R3LOAD processes against the same database objects.
Figure 137: <PACKAGE>.TSK: Merge Option (1)
In rare cases, caused by operating system crashes, power failures, etc., it may be necessary to rebuild an already used <PACKAGE>.TSK file after a hard termination. This must be done by merging the file <PACKAGE>.TSK.BCK with <PACKAGE>.TSK. The "-merge_bck" option can only be used in combination with "-e" or "-i". The export or import will start immediately after the merge is finished! The merge option "-merge_only" merges the <PACKAGE>.TSK.BCK into a new <PACKAGE>.TSK, but does not start an export or import.

Figure 138: <PACKAGE>.TSK: Merge Option (2)
R3LOAD stops on error if a <PACKAGE>.TSK.BCK file is found, as it is not clear how to proceed. For example, a power failure interrupted the import processes and R3LOAD will not be able to clean up the <PACKAGE>.TSK.BCK and <PACKAGE>.TSK files. The current content of both files is shown above.
Figure 139: <PACKAGE>.TSK: Merge Option (3)
After R3LOAD has been restarted with option "-merge_bck", the content of <PACKAGE>.TSK.BCK will be compared against <PACKAGE>.TSK, and the missing lines will be copied to <PACKAGE>.TSK.
Figure 140: <PACKAGE>.TSK: Merge Option (4)
After the task file merge is completed, it is not known whether some objects not listed in <PACKAGE>.TSK already exist in the database. In this stage, R3LOAD solves this problem by changing the status of each "xeq" line to "err", to force a "DROP" or "DELETE" statement before repeating an import task. R3LOAD will attempt to drop each object before creating it. Errors caused by drop statements are ignored.

Figure 141: <PACKAGE>.TSK: R3LOAD Restart Behavior
No special R3LOAD restart option is necessary! Rare cases are hard terminations caused by power failures and operating system crashes. Export write order: dump data, <PACKAGE>.TOC, <PACKAGE>.TSK.
Figure 142: R3LOAD: <TABART>.SQL
Figure 143: Why Do BW Objects Need Special Handling?
In the case of BW non-standard database objects, the ABAP Dictionary contains table and index definitions that are not sufficient to describe all object properties. The missing information is held in the BW meta data (i.e. partition information, bitmapped indexes, etc.). R3LDCTL reads the ABAP Dictionary only. Additional information from BW meta data cannot be inserted into *.STR files. The *.STR file content is enough to export and import BW data via R3LOAD, but is insufficient to create the BW objects in the target system.
To overcome the existing limitations of R3LDCTL and R3LOAD, the report SMIGR_CREATE_DDL was developed, which writes database-specific DDL statements into *.SQL files. R3LOAD was extended to switch between the normal way of creating tables and indexes, and the direct execution of DDL statements from a *.SQL file. So it is possible to create non-standard database objects and to load data into them using R3LOAD.

LST.TPL entries.SQL files into the file SQLFiles.02.TPL. Since NetWeaver 7. Afterwards. .SQL: Content Variants Example 1: R3LOAD creates table /BI0/B0000103000. Example 2: R3LOAD creates table /BI0/B0000106000 and primary key /BI0/B0000106000~0 in a single step. Depending on the DDL<DBS>.Figure 144: <TABART>. As the /BI0/B0000106000~0 SQL section is empty. SMIGR_CREATE_DDL inserts the list of created <TABART>. content data will be loaded before or after the primary key /BI0/B0000103000~0 creation. This configuration is used to make sure that table and index will always be created together.SQL: File Generation The report SMIGR_CREATE_DDL is mandatory for all systems using non-standard database objects (mainly BW objects). Figure 145: <TABART>. independent from DDL<DBS>. data will be loaded. using the supplied CREATE TABLE statement. R3load will not try to create /BI0/B0000106000~0 again.

Example 3: R3LOAD creates table /BI0/B0000108000 and will load data into it. The table will not have a primary key. As the index /BI0/B0000108000~0 has no SQL section, no further action is required. Empty SQL sections are used to prevent R3LOAD from creating objects according to *.STR file content.
Figure 146: <TABART>.SQL: Example Content (1)
The example above combines a create table and a create unique index statement, forcing R3load to load data after the index creation (which can be useful for some table types). The variable &APPL0& will be replaced by the TABART according to DDL<DBS>.TPL content.
Figure 147: <TABART>.SQL: Example Content (2)

Figure 148: R3LOAD: Execution of External SQL Statements
Since R3LOAD 7.20, the file SQLFiles.LST is read by R3LOAD to retrieve the *.SQL file names. The SQLFiles.LST is searched in the current directory and then in the export DB directory. R3LOAD is searching for the <TABART>.SQL files based on the TABART in the respective <PACKAGE>.STR file. The <TABART>.SQL file is searched in the current directory first and then in the DB/<DBS> directory. SAPINST is taking care that the <TABART>.SQL files and the SQLFiles.LST are put into the right place.
Before R3LOAD assembles the first SQL statement, it searches for a <TABART>.SQL file which matches the TABART of the first object. All object names in the <TABART>.SQL file will be added to an internal list (index). R3load will then scan the internal list for a matching object name prior to assembling a create object statement. If a match has been found, the SQL statement from the <TABART>.SQL file will be used, instead of building a statement according to DDL<DBS>.TPL content. Even if a <TABART>.SQL file is not in the SQLFiles.LST, it will be used if it was found by R3LOAD. R3LOAD can only read one <TABART>.SQL file per *.STR file!
Figure 149: Import Log showing the <TABART>.SQL Search
The SQLFiles.LST is examined and the existence of the mentioned <TABART>.SQL files is verified. R3LOAD will abort if there is a <TABART>.SQL file mentioned in the list which cannot be found. This was implemented as an additional safety mechanism. Independent from the SQLFiles.LST content, the usage of a <TABART>.SQL file may not be mentioned in the import log file before R3LOAD 7.20.

Figure 150: Common R3LOAD Command Line Options (1)
Increasing the commit count can improve database performance. The default commit count is approximately 1 commit per 10,000 rows. Changing the value can also decrease performance, if a database monitor shows that the cause of the database slowing down is the number of commits as opposed to the loading of the data. Load tests are recommended. For additional R3LOAD options, see "R3LOAD -h". The "-k" or "-K" option is not valid for R3LOAD below 4.5A.
The "-o" option is used in the case of the import of split tables. Option "-o" can be combined with option "-ctf" (create task file). In combination with "-ctf", the corresponding tasks will not be inserted into the *.TSK file.
For example: R3LOAD -ctf I → resulting task file content:
• T TAB01 C xeq
• D TAB01 I xeq
• P TAB01~0 I xeq
R3LOAD -o D -ctf I → resulting task file content:
• T TAB01 C xeq
• P TAB01~0 I xeq
The option "-continue_on_error" is dangerous for the export! On MDMP systems, R3LOAD 6.x automatically uses a dummy code page called "MDMP", which indicates "do no conversion". For the conversion of MDMP systems to Unicode, see unit 11 "Special Projects". The MDMP code page entry can be seen in the *.TOC file.
Figure 151: Common R3LOAD Command Line Options (2)
The statistic data file is useful to watch the load progress of large data dump files; it can be used with "-e" (export) and "-i" (import). Since R3LOAD 6.40, the "-v" command line option shows the program compile time, to make it easier to identify patch levels.
Figure 152: DB Specific R3LOAD Option: Load Procedure Fast
The options are used to speed up the R3LOAD import by bypassing database mechanisms which are not required for a system copy load. Database-specific load options can be listed with "-h". If in doubt about which options are recommended, check what R3SETUP or SAPINST is using.
1. 0905614 DB6: R3load -loadprocedure fast COMPRESS
   1058427 DB6: R3load options for compact installation
2. 1014782 MaxDB: FAQ System Copy
   1464560 FAQ R3load in MaxDB
3. 1054852 Recommendations for migrations to MS SQL Server
4. 1045847 ORACLE DIRECT PATH LOAD SUPPORT IN R3LOAD
   1046103 ORACLE DIRECT PATH LOAD SUPPORT IN R3LOAD 7.00 AND LATER
5. 1591424 SYB: 7.02 Heterogeneous system copy with target Sybase ASE
   1672367 SYB: 7.30 Heterogeneous system copy with target Sybase ASE
Figure 153: R3LOAD Command Line Examples

JLOAD Files
Lesson Overview
This lesson explains the purpose, contents, and structure of the JLOAD control and data files.
Lesson Objectives
After completing this lesson, you will be able to:
• Understand the purpose, contents, and structure of the JLOAD control and data files
Business Example
Problems occurred during a JLOAD system copy. For troubleshooting, you need to know the purpose of all the various control files created. This lesson describes the JLOAD control and data files in every aspect.

Figure 154: Overview: JLOAD Control and Data Files
SAPINST 7.02 and its improvements on JLOAD (JPKGCTL, JMIGMON) were implemented for NetWeaver 7.02 the first time. Other versions like NetWeaver 7.10 have a higher version number, but were released earlier. This leads to a situation where a lower NetWeaver version (7.02) seems to provide more (advanced) JLOAD functionality than a higher NetWeaver version (i.e. 7.10). In general: if no JPKGCTL was used or is available, JLOAD behaves like in NW 7.00; if JPKGCTL was run, the behavior is similar to the NW 7.02 examples.
Figure 155: JLOAD: Job Files
Figure 156: Export Job File 6.40 / 7.00
Job files are used to specify JLOAD actions. SAPINST in NetWeaver ’04 SR1 and NetWeaver 04S is starting a single JLOAD process, which exports the whole JAVA schema (meta data and table data). JLOAD can create the EXPORT.XML and IMPORT.XML files by itself. The default data dump file name is "EXPDUMP". The job file can also contain a maximum data dump file size, for example: <export file="EXPDUMP" size="100MB">. Without such a parameter, the default size is set to 2 GB. Future versions may contain additional object types like database views (which are not used yet).

Figure 157: Export Job Files created by JPKGCTL 7.02
Starting with SAPINST 7.02, JPKGCTL can be used to create the JLOAD job files. This allows multiple JLOAD export and import processes. The meta data describing a table or index (EXPORT_METADATA.XML / EXPORT_POSTPROCESS.XML) is separated from the data export (EXPORT_<PACKAGE>.XML). For table splitting it is necessary to create the table first, then load data, and create indices afterwards (post-processing).
Figure 158: Export Job Files – JPKGCTL 7.30
In 7.30, there is one meta data export, several package exports, and for each package its own post-process export job file.
Figure 159: Import Job File 6.40 / 7.00
Job files are used to specify JLOAD actions. In NetWeaver 04 SR1 and NetWeaver 04S, SAPINST is starting a single JLOAD process, which imports the entire JAVA schema.

Figure 160: Import Job Files created by JPKGCTL 7.02
For table splitting it is necessary to create the table first, then load data, and create indexes afterwards (post-processing).
Figure 161: Import Job Files – JPKGCTL 7.30
In 7.30, there is one general meta data import, multiple package imports, and for each package its own post-process import job file.
Figure 162: JLOAD: Status Files

Figure 163: Export Status File 6.40 / 7.00
The above *.STA file contains the export status. As soon as an item is exported, a new line will be added to the *.STA file. The status can either be "OK" for successful, or "ERR" for failed. In case of a restart, the content of the *.STA file is used to identify where to proceed. In NetWeaver 04 SR1, the "EXPORT.STA" file can be found under: /usr/sap/<SAP SID>/<Instance>/j2ee/sltools. Check the SAPINST log file for the location in other versions.
Figure 164: Export Status Files 7.02
The meta data export is separated from the table data export.
Figure 165: Import Status File 6.40 / 7.00
The above *.STA file contains the import status. As soon as an item is imported, a new line will be added to the *.STA file. The status can either be "OK" for successful, or "ERR" for failed. In case of a restart, the content of the *.STA file is used to identify where to proceed.

Figure 166: Import Status Files 7.02
First the meta data is applied (create table, primary key), then the data import takes place (insert), and finally the secondary indexes are generated (post-processing).
Figure 167: JLOAD: *.LOG Files
Figure 168: Export Log
The existence of a matching export *.STA file identifies a restart situation, otherwise the export starts from scratch. NetWeaver 7.02 JLOAD writes log files with the following naming conventions: EXPORT_METADATA.LOG, EXPORT_<PACKAGE>.LOG, and EXPORT_POSTPROCESS.LOG. It separates the meta data export from the table data export.

Figure 169: Import Log
The existence of a matching import *.STA file identifies a restart situation, otherwise the import starts from the first data dump file entry. NetWeaver 7.02 JLOAD writes log files with the following naming conventions: IMPORT_METADATA.LOG, IMPORT_<PACKAGE>.LOG, and IMPORT_POSTPROCESS.LOG. It separates the meta data import from table data import and post-processing.
Figure 170: JLOAD: Export / Import Statistic Files
Figure 171: Export / Import Statistic Files 7.02
The "*_PACKAGE.STAT.XML" files will be read by JMIGTIME to create the corresponding export/import time lists and HTML graphics.

Figure 172: JLOAD: Data Dump File
Figure 173: Data Dump File Structure 6.40 / 7.00
If not otherwise specified in the export job file, a dump file can grow up to 2 GB before an additional file will be automatically created (i.e. <DUMP>.001, <DUMP>.002, ...). Because the length of each data block can be found in the respective header, JLOAD can easily search for a certain location inside the data dump file.
Figure 174: Data Dump File Structures for separated Meta Data
If JPKGCTL was used, meta data and table data were put into separate dump files.

Figure 175: JPKGCTL (JSPLITTER): Package Sizes

Figure 176: Size Information for JMIGMON
After the package splitting was completed, JPKGCTL writes the “sizes.xml” file containing the expected
package sizes. This helps JMIGMON to identify large packages which should be exported first.
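Purely as an illustration (the element and attribute names below are assumptions; only the idea of one expected-size entry per package comes from the text above), sizes.xml could look roughly like this:
<packages>
  <package name="SAPAPPL1" size="1200 MB"/>
  <package name="SAPCLUST" size="600 MB"/>
</packages>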

Figure 177: Common JLOAD Command Line Options
Parameters:
• url = url for database to connect
• driver = JDBC database driver
• auth = database logon
If no job file is specified, the complete database will be exported by default. In addition, suitable
“EXPORT.XML” and “IMPORT.XML” files will be generated. The default log file name will be “JLOAD.LOG”,
unless a job file is specified; in this case, the log file will get the same name as the job file, with *.XML replaced
by *.LOG.
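For illustration, the connection parameters for an Oracle database could look like the following (host, SID, and user are invented; the auth format is an assumption, only the url and driver values follow standard JDBC conventions):
url    = jdbc:oracle:thin:@dbhost:1521:C11
driver = oracle.jdbc.OracleDriver
auth   = <database user> / <password>
Following the naming rule above, starting JLOAD with a job file EXPORT_SAPAPPL1.XML (a hypothetical package name) produces the log file EXPORT_SAPAPPL1.LOG; without a job file, the complete schema is exported and JLOAD.LOG is written.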

Figure 178: Common JLOAD Command Line Options

Exercise 7: R3LOAD & JLOAD Files (Part I)
Solution 7: R3LOAD & JLOAD Files (Part I)
Task 1:
In a DB migration of a large database to Oracle, it was decided to move the heavily-used customer table ZTR1
(TABART APPL1) to a separate table space. No changes were done to the ABAP dictionary in advance. The
export was executed the normal way.
1. What changes should be done to the R3LOAD files for creating table ZTR1 and its indexes in tablespace
PSAPSR3ZZTR1 and to load data into it?
Fragment of SAPAPPL1.STR
tab: ZTR1
att: APPL1 4 ?? T all ZTR1~0 APPL1 4
fld: MANDT CLNT 3 0 0 not_null 1
fld: MBLNR CHAR 10 0 0 not_null 2
fld: TSTMP FLTP 8 0 16 not_null 0
ind: ZTR1~PSP
att: ZTR1 APPL1 4 not_unique
fld: MANDT
fld: TSTMP

2. After the import is finished, which dictionary maintenance tasks should be done?
a) To create additional tablespaces on the target database, the files DBSIZE.TPL or DBSIZE.XML must be adapted. A new TABART / tablespace assignment must be added in the DDLORA.TPL file:
# table storage parameters
ZZTR1 PSAPSR3ZZTR1
# index storage parameters
ZZTR1 PSAPSR3ZZTR1
… and the original TABART in the SAPAPPL1.STR file has to be changed
from:
tab: ZTR1
att: APPL1 4 ?? T all ZTR1~0 APPL1 4
fld: MANDT CLNT 3 0 0 not_null 1
fld: MBLNR CHAR 10 0 0 not_null 2
fld: TSTMP FLTP 8 0 16 not_null 0
ind: ZTR1~PSP
att: ZTR1 APPL1 4 not_unique
fld: MANDT
fld: TSTMP

to:
tab: ZTR1
att: ZZTR1 4 ?? T all ZTR1~0 ZZTR1 4
fld: MANDT CLNT 3 0 0 not_null 1
fld: MBLNR CHAR 10 0 0 not_null 2
fld: TSTMP FLTP 8 0 16 not_null 0
ind: ZTR1~PSP
att: ZTR1 ZZTR1 4 not_unique
fld: MANDT
fld: TSTMP

2. After the import is finished, which dictionary maintenance tasks should be done?
a) After the import is finished, the ABAP dictionary should be maintained for table ZTR1 (update tables DDART,
DARTT, TSORA, TAORA, IAORA and DD09L).

Task 2:
An Informix export of a heterogeneous system copy with R3LOAD 6.x is short on disk space. None of the
available file systems is large enough to store the expected amount of dump data.
All TABARTs will fit into the “sapreorg” file system, except TABART CLUST, which has a size of 600 MB.
File system A: /tools/exp_1 ~ 400 MB free
File system B: /oracle/C11/sapreorg/exp ~ 4500 MB free
File system C: /usr/sap/trans/exp_2 ~ 350 MB free
1. Which SAPCLUST.cmd file content would allow an export without any manual intervention?
tsk: "/oracle/C11/sapreorg/install/SAPCLUST.TSK"
icf: "/oracle/C11/sapreorg/exp/DATA/SAPCLUST.STR"
dcf: "/oracle/C11/sapreorg/install/DDLINF.TPL"
dat: "/oracle/C11/sapreorg/exp/DATA/" bs=1k fs=1000M
dir: "/oracle/C11/sapreorg/exp/DATA/SAPCLUST.TOC"

a) Original SAPCLUST.cmd
tsk: "/oracle/C11/sapreorg/install/SAPCLUST.TSK"
icf: "/oracle/C11/sapreorg/exp/DATA/SAPCLUST.STR"
dcf: "/oracle/C11/sapreorg/install/DDLINF.TPL"
dat: "/oracle/C11/sapreorg/exp/DATA/" bs=1k fs=1000M
dir: "/oracle/C11/sapreorg/exp/DATA/SAPCLUST.TOC"

Modified SAPCLUST.cmd
tsk: "/oracle/C11/sapreorg/install/SAPCLUST.TSK"
icf: "/oracle/C11/sapreorg/exp/DATA/SAPCLUST.STR"
dcf: "/oracle/C11/sapreorg/install/DDLINF.TPL"
dat: "/tools/exp_1/DATA/" bs=1k fs=300M
dat: "/usr/sap/trans/exp_2/DATA/" bs=1k fs=300M
dir: "/oracle/C11/sapreorg/exp/DATA/SAPCLUST.TOC"

2. Which other solutions are possible with more or less manual intervention?
a) Move the dump files of small packages out of the export directory, as soon as they are completed. It can
also be helpful to reduce the dump file size, to move completed dump files of large packages sooner.

Task 3:
While doing an export, R3LOAD stops on error, because an expected table does not exist. This seems to be
an inconsistency between the ABAP Dictionary and the database dictionary. As most of the tables are already
exported, it does not make sense to restart the SAP instance to fix the problem and to repeat the export
afterwards.
1. How can R3LOAD 4.x be forced to skip the export of the table?
a) R3LOAD 4.x: In the *.STR file, the definitions of the non-existing table (and its indexes) can be marked as
comments, by placing a “#” at the beginning of each line. Deleting the entries would also work, but afterwards
the change will not be visible to others who might be searching for errors. Restart R3LOAD.
2. How can R3LOAD 6.x be forced to skip the export of the table?
a) R3LOAD 6.x: Change the status of the table entry inside the export *.TSK file to “ign” (ignore). This will help
fix the export problem, but for the import, you will still have to change the *.STR file (see R3LOAD 4.x).
Restart R3LOAD.
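For example (the table name is invented), the corresponding line in the export task file would be changed from
D ZMYTABLE E xeq
to
D ZMYTABLE E ign
before restarting R3LOAD.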

Task 4:
During a heterogeneous system copy, because of a mistake while cleaning up some tables in an Oracle
database, the content of table ATAB was accidentally deleted. The SAP System was not started yet, but the
load of all tables is already finished.
1. R3LOAD 4.x: What can be done to load the content of table ATAB without re-creating the table or an index?
At least two solutions are possible. Which files must be created, and what should the R3LOAD command line
look like? Table ATAB belongs to TABART POOL.
SAPPOOL.cmd:
icf: /exp/DATA/SAPPOOL.STR
dcf: /install/DDLDBS.TPL
dat: /exp/DATA/ bs=1k fs=1000M
dir: /exp/DATA/SAPPOOL.TOC
ext: /exp/DB/DBS/SAPPOOL.EXT

Note: Check for R3LOAD command line options at the end of Unit 7!
a) Copy SAPPOOL.STR to ATAB.STR
b) Remove everything from ATAB.STR that doesn’t belong to table ATAB.
c) Inside ATAB.STR, change the action field from “all” to “data”
d) Copy SAPPOOL.cmd to ATAB.cmd
e) Change the content of ATAB.cmd
from:
icf: /exp/DATA/SAPPOOL.STR
dcf: /install/DDL<DBS>.TPL
dat: /exp/DATA/ bs=1k fs=1000M
dir: /exp/DATA/SAPPOOL.TOC
ext: /exp/DB/<DBS>/SAPPOOL.EXT

to:
icf: /<directory path>/ATAB.STR
dcf: /install/DDL<DBS>.TPL
dat: /exp/DATA/ bs=1k fs=1000M
dir: /exp/DATA/SAPPOOL.TOC
ext: /exp/DB/<DBS>/SAPPOOL.EXT

f)
R3load –i ATAB.cmd –p ATAB.log –k <migration key>

Business Example You want to execute R3LOAD standalone.CMD files. Solution 8: R3LOAD & JLOAD Files (Part II. The R3load command line looks different: R3load –o TIVP –i ATAB. If R3SZCHK computes the initial extent of tables smaller than needed.log –k <migration key> Task 5: In an Oracle OS migration the database must be installed with dictionary-managed tablespaces because of certain reasons.x features and command line options can be used to load table ATAB again? a) As in solution 1. Hands-On Exercise) Exercise Duration: 25 Minutes Exercise Objectives After completing this exercise. Note: You will logon as an administrator! Please do not make any changes to the system. the number of next extents increases because the size category values are often too small.x: Which R3LOAD 6. password. R3LOAD 6. Exercise 8: R3LOAD & JLOAD Files (Part II. These size categories are part of the technical settings of tables and will not be updated by any external database administration tool. Use name work“<group-id> (i. The customer adapted the Next Extents values in the source database on a regular base. and hostname as supplied by the trainer). It will set required environment variables.e. Hostname: ________________ Group-ID ________________ Telnet user ________________ Password: ________________ Hostname ________________ Instance #: ________________ SAP user ________________ Password: ________________ Client # ________________ If there is a unique group number on your workstation monitor. 1. Depending on the training setup. What can be done to reduce the number of extents in the next test run? a) The initial or next extent values of the involved tables should be increased by modifying the *. After the test import. except for those explained in this exercise. Hands-On Exercise) Task 1: This is a hands-on exercise for which you must logon to the source system of the example migration. use the Windows Remote Desktop Connection or Telnet to logon to system DEV (logon method. some large tables and indexes show a huge amount of extents.STR file. 2. you will be able to: • Use R3LOAD standalone. to fix problems or to make use of specific settings not possible in the standard setup. Perform the following preparation steps: Change to the drive and the directory as supplied by the trainer Copy the whole directory “TEMPLATE” to your work directory. but skip step c). please use this number as your Group-ID. • Manually create *.e. What are the reasons of so many extents in the target database? a) The next extent values used by R3LOAD are obtained from the size categories of the ABAP Dictionary. ZZADDRESS00) .2. work00) Use the editor “notepad” for the Windows Remote Desktop Connection or “xvi” for Telnet to perform the following modifications: In ZZADDRESS.bat” in your work directory.STR change the table and primary key name ZZADDRESS to ZZADDRESS<group-id> (i. with SAPINST. user.TSK and *.e. 1.e. i.EXT or *. “xcopy TEMPLATE work00”) Execute “env.cmd –p ATAB. Repeat this step after each logon! Change to your work<group-id> directory (i.

40/V1. Which fields belong to the primary key of table ZZADDRESS<group-id> a) The primary key uses field NAME only.TSK MSS-l ZZADDRESS. but not in the ABAP Dictionary Task 4: 1.TPL ZZADDRESS.\ bs=1K fs=1000M dir: ZZADDRESS.TSK icf: ZZADDRESS.EXT change the table and primary key name ZZADDRESS to ZZADDRESS<group-id> ZZADDRESS~0 to ZZADDRESS<group-id>~0 In ZZADDRESS. which can be used to import table ZZADDRESS<groupid> Note: R3load for Windows is recognizing “\” and “/” as path separator. press “Escape” first a) ZZADDRESS.TPL dat: .4 id: adb1c36e00000046 cp: 4103 data_with_checksum tab: [HEADER] fil: ZZADDRESS. Save the shown output to a file.TSK icf: ZZADDRESS. Use R3LOAD to create the import task file ZZADDRESS. end insert mode: press “Escape” Delete character under cursor: press “x” Delete character while in insert mode: press “Backspace” Save file: enter “:wq” (write and quit).TOC vn: R6. a)R3load -ctf I ZZADDRESS.STR dcf: DDLMSS. which works very well on telnet sessions.STR dcf: DDLMSS. press “Escape” and try it again Do not use cursor keys while in insert mode.STR tab: ZZADDRESS00 att: SSEXC 0 XX T all ZZADDRESS00~0 USER 0 fld: NAME CHAR 30 0 0 not_null 1 fld: CITY CHAR 30 0 0 not_null 0 ZZADDRESS. ZZADDRESS00~0) In ZZADDRESS.log ZZADDRESS.e. Task 3: 1.STR DDLMSS. a) tsk: ZZADDRESS.EXT .TSK: T ZZADDRESS00 C xeq P ZZADDRESS00~0 C xeq D ZZADDRESS00 I xeq Task 5: 1.TOC ext: ZZADDRESS.001 1024 1 1 eot: #0 rows 20050606141758 tab: ZZADDRESS00 fil: ZZADDRESS.“xvi” survival guide The editor “xvi” is a “vi” implementation for Windows systems.TOC change the table name ZADDRESS to ZZADDRESS<group-id> Hint: Edit Notes .001 1024 22 eot: #20 rows 20050606141758 eof: #20050606141758 Task 2: 1. Insert mode: press “i”.EXT ZZADDRESS00 16384 ZZADDRESS00~0 16384 ZZADDRESS. if it doesn’t work./ bs=1K fs=1000M dir: ZZADDRESS.TOC ext: ZZADDRESS.TPL dat: . Use an editor to create an R3LOAD command file. a) Check for tables that only exist in the database.EXT Alternate notation: tsk: ZZADDRESS.TSK. Logon to SAP System DEV and verify the ABAP Dictionary against the DB Dictionary in transaction DB02.ZZADDRESS~0 to ZZADDRESS<group-id>~0 (i.

ZZADDRESS00" Wattenberg Muenchen Werle Offenbach (20 rows affected) Note: As the dump was created on a little endian Unicode system (see ZZADDRESS.TOC Alternate notation: tsk: ZZADDRESS. The command line is case sensitive! osql -E -Q "SELECT * FROM dev. argv) has not been called.TPL dat: . Try the import again.log: (IMP) INFO: import of ZZADDRESS00 completed (20 rows) ZZADDRESS. table "ZZADDRESS00" 2.ZZADDRESS<group-id>" a) Import table ZZADDRESS00: R3load -dbcodepage 4103 -i ZZADDRESS.TSK and ZZADDRESS.CMD tsk: ZZADDRESS..STR dcf: .TSK icf: . What happened? What is the content of ZZADDRESS.TSK forces R3LOAD to delete the table content before starting the import. Import table ZZADDRESS<group-id> with R3LOAD and check the content of table ZZADDRESS<groupid> by using the MSS command line utility “osql”.log? a) Because of the primary key on field “NAME”. sapparam: SAPSYSTEMNAME neither in Profile nor in Commandline Task 7: Repeat the verification of the ABAP Dictionary against the DB Dictionary (do a refresh!).TSK: T ZZADDRESS00 C ok P ZZADDRESS00~0 C ok D ZZADDRESS00 I ok osql -E -Q "SELECT * FROM dev.001 file! a) Create directory “export” and copy ZZADDRESS. R3LOAD returns an error (rc=26 error).TSK file contains: D ZZADDRESS00 I err ZZADDRESS.STR dcf: ./DDLMSS. sapparam(1c): No Profile used../ bs=1K fs=1000M dir: ZZADDRESS. For more information on “osql”. the import must be performed with dbcodepage “4103”. it is impossible to insert two identical names. Create a new sub-directory in your work directory.TOC . Edit the copied command file: ZZADDRESS.log: (IMP) ERROR: DbSlEndModify failed rc = 26. Task 8: Try to load table ZZADDRESS<group-id> again by changing the ZZADDRESS<group-id>.log ZZADDRESS. Create a task and a command file to export ZZADDRESS<group-id>./ZZADDRESS..log: (IMP) INFO: import of ZZADDRESS00 completed (20 rows) Task 9: 1.CMD into it.Task 6: 1. ZZADDRESS.\ZZADDRESS. 1. The ZZADDRESS.\DDLMSS.TOC).txt” in your work directory. Change to directory “export”.. Name it “export”. osql -E -Q “select count(*) from dev.\ bs=1K fs=1000M dir: ZZADDRESS. see document “MSS_osql.TOC and ZZADDRESS. what happens now? a) The import works the second time as the status “err” in ZZADDRESS. Ignore R3LOAD messages starting with “sapparam”: sapparam: sapargv( argc. Does the output look different than before? New entries? a) Transaction DB02 will show your imported table ZZADDRESS00. If not.TSK icf: . TSK file: D ZZADDRESS<group-id> I ok → D ZZADDRESS<group-id> I xeq 1.. Export table ZZADDRESS<group-id> and compare the number of exported rows against the number of table rows in the database.ZZADDRESSgroup-id” Note: Make sure not to overwrite your existing ZZADDRESS.cmd -l ZZADDRESS.TPL dat: . refresh the display (tables of your student neighbors might be visible as well).

Create the task file ZZADDRESS.TSK containing the following line (you can use R3LOAD or an editor):
D ZZADDRESS00 E xeq
R3load -ctf E .\ZZADDRESS.STR .\DDLMSS.TPL ZZADDRESS.TSK MSS -l ZZADDRESS.log
Start the export:
R3load -datacodepage 4103 -e ZZADDRESS.CMD -l ZZADDRESS.log
There should be 20 rows in the database, and the same number should be mentioned in the *.TOC file.

Unit 8: Advanced Migration Techniques
This unit describes advanced techniques which can be utilized to speed up the export and the import phase of a migration.
Lesson: Time Consuming Steps during Export / Import
Lesson Overview
How to identify, minimize, or avoid time consuming steps during the export/import phases.
Lesson Objectives
After completing this lesson, you will be able to:
• Identify the time consuming steps during export / import
• Minimize the downtime by applying appropriate measures
Business Example
You need to know the long running OS/DB Migration steps to estimate the time schedule in a cut-over plan.
For the students, it is important to know the time consuming migration steps. As a conclusion, they will be able to execute them as early as possible and to avoid them during the downtime (if possible).
Figure 179: General Remarks
Please take into account that the number of rows of ABAP cluster tables will differ between source and target system in the case of a Unicode conversion, because of their compressed content. For comparable results use a SQL statement like this:
SELECT COUNT (*) FROM CDCLS WHERE PAGENO='0'
Figure 180: Technical View: Time Consuming Export Steps (1)
It depends on the SAPINST version and database whether the above tasks are available or not. The programs R3LDCTL/R3SZCHK compute the INITIAL EXTENT of all tables and indexes for the target database. The sum of all of these provides the estimated size of the target database. Different databases have different space requirements for storing the data.

SAPINST Export Preparation: You want to build the target system up to the point where the database load starts. Export and import processes should run in parallel during the system copy process.
SAPINST Table Splitting Preparation: Optional step for preparing the table splitting before starting the export of an SAP System based on ABAP. Depending on the database, table splitting can be a time consuming process, which should run before the export. If some of the tables are very large, the downtime can be decreased by splitting the large tables into several smaller packages, which can then be processed in parallel. The computed WHERE conditions are defined in such a way that data added or deleted afterwards doesn't matter; the conditions will fetch all data in the table. If possible, large data updates should be avoided after creating the WHERE conditions (or compute them again). If using the Oracle PL/SQL table splitter, special considerations apply to ROWID splitting. More information in SAP Note 1043380.
Normally the first database update statistics run is started directly after the database import. In most cases there is not enough time for that during the export/import downtime. If short on time, the update statistics can be postponed to a later point in time, where it can run in parallel with other activities.
Figure 181: Technical View: Time Consuming Export Steps (2)
The most important way to tune export performance is to optimize the use of parallel export processes. When R3LOAD stores the exported data into dump files, it uses a very efficient compression algorithm. You do not need to compress these files again (you may even find that the resulting file is larger than before). Transportable storage devices can be DVDs, external USB disks, laptops, or tapes. To save time for copying very large amounts of dump data to the target media/system, it can be useful to set the dump file size to a small value, like 300 MB. As soon as a dump file is completed, the copy can be started, before the export of the source system has finished. Note: the MIGRATION MONITOR waits until all dump files of a package have been completed.
Figure 182: Technical View: Time Consuming Import Steps
If a parallel export / import using R3LOAD is planned, the database must be ready to import when the export starts.

Figure 183: Saving Time on Import – After Load Errors
R3SETUP or SAPINST is starting a one-time R3LOAD process for each package. R3SETUP/SAPINST will stop after all R3LOAD processes are finished. If R3LOAD processes terminate with an error condition, the execution of R3SETUP/SAPINST must be repeated until all R3LOAD processes are successful. R3SETUP/SAPINST must be restarted after all data is loaded, to execute the remaining steps of the installation/migration.
If you know that the cause of an R3LOAD error termination is fixed, you can save time by starting R3LOAD beside R3SETUP or SAPINST. Only start R3LOAD processes for the *.STR files that have already been processed by the current run of R3SETUP or SAPINST. Never restart R3SETUP while your own R3LOAD processes are running: this will cause competition between your R3LOAD process and the R3LOAD processes started by R3SETUP to process the same *.STR file. In the case of using SAPINST, the second R3LOAD process will be stopped automatically, as a backup task file already exists.
Figure 184: R3LOAD Parameters from Import *.LOG File
Do not forget to add the restart parameter "-r" to the command line of R3LOAD (4.6D and below only). Your own R3LOAD process must be started with the same parameter set as was used by R3SETUP or SAPINST before. The parameters can be obtained from the corresponding "<PACKAGE>.LOG" files in the installation directory (for example: <sapsid>adm). If you are starting R3LOAD manually, log on as the operating system user who owns the "<PACKAGE>.LOG" file, change into the install directory, and make sure that your current working directory is the installation directory!

Figure 185: Export / Import Time Diagram
In customer databases, most of the transaction data is stored in tables belonging to only a few TABARTs. This causes long running R3LOAD export and import processes for these TABARTs. Always try to export/import large tables first. Very large tables should be exported / imported with multiple R3LOAD processes (table splitting). This ensures the maximum parallelism of R3LOAD processes.
Figure 186: Optimizing the Export / Import Process
Export and import times are reduced by splitting package files (*.STR) and creating additional package files for large tables. The term package is used as a synonym for *.STR files. To save time and optimize the parallelism of R3LOAD processes, you can:
• Split package files (*.STR) into several smaller files, or separate large tables into additional package files.
• Reduce CPU load on the database server by running R3LOAD on a different system.
• Use the R3LOAD socket method; in a fast and stable network environment this can save time.
• Optimize the database parameters; this speeds up the export or import process and can prevent time-consuming errors because of bottlenecks.
Figure 187: JAVA-Based Package Splitter
The JAVA Package Splitter can also be used for earlier releases than Web AS 6.40. SAPINST 6.40 for NetWeaver ’04 can call the JAVA or the PERL Package Splitter (depending on the selected option). Starting with NetWeaver 04S, only the JAVA Splitter will be used.

The JAVA Splitter analyzes the content of *.STR and *.EXT files to find the best splitting points. The splitting of *.STR files is even possible without *.EXT files, if tables are named in a provided input file. Package file names for split *.STR files are generated automatically. Fine tuning can be done to *.STR files after a test migration. In the case where no JAVA JRE 1.4.x or higher is installed, the *.STR files can be transported to another system, and split there. The documentation is provided as a PDF file together with the splitting tool.
Figure 188: PERL-Based Package Splitter
The Perl script SPLITSTR.PL analyzes the content of *.STR and *.EXT files to find the best split points. Package file names for the split *.STR files are generated automatically. Fine tuning can be done to *.STR files after a test migration. The DBEXPORT.R3S command file for R3SETUP releases since 4.6 and SAPINST call SPLITSTR.PL if the option has been selected. SPLITSTR.PL may also be used for earlier Release 4.x migrations. The SPLITSTR.PL script is not intended to be used on 3.x *.STR files. Do not use the Perl splitter for Unicode conversions! The results are erroneous! Do not use SPLITSTR.PL on already split files, as it can lead to problems. Always split from the original files, thus preserving them. Error situations will be handled. In the case where no Perl is installed, the *.STR files can be transported to another system, and split there. A Perl version is available for every operating system. The installed version of Perl can be checked with "perl -v". The Perl script is self-explanatory. Calling SPLITSTR.PL without parameters or using the "-help" option causes a help text to appear.
Figure 189: R3LOAD Export/Import Using Sockets
Socket connections are released for R3LOAD 6.40 and later (do not try to use an earlier version, even if R3LOAD provides the socket option). R3LOAD writes directly to the opened socket and does not need any dump or table of contents (*.TOC) file. The R3LOAD import process has to be started first, because the export process must connect to an existing socket; otherwise the process fails. The exporting and importing processes will use their respective task files for restart, as with conventional exports and imports. Make sure that the export or import process does not fail because of database resource bottlenecks. Network interruptions will terminate the export and import process immediately. R3LOAD restarts can make the import more time consuming than expected. The Migration Monitor supports socket connections in an easy-to-configure way.

Figure 190: R3LOAD Socket Connections – Technical View
The same files are used as in standard R3LOAD scenarios, but no dump or *.TOC files are created. The R3LOAD control files must be accessible as usual, on the source and target system. The importing process must be invoked before the export process can be started.
Figure 191: <PACKAGE>.CMD – Socket Connection ≥ 6.40
R3LOAD must be started with the "-socket" command line option. The socket port can be any free number between 1024 and 65535 on the import host.
Meaning of section names:
• tsk: Task file
• icf: Independent control file
• dcf: Database dependent control file
• dat: Socket port number and name or IP address of the import host
• ext: Extent file (not required at export time)
The "dir" section is not required because no <PACKAGE>.TOC file will be created.

Figure 192: <PACKAGE>.LOG – R3LOAD Socket Logs ≥ 6.40
The importing process listens on the specified port and waits for the exporting process to connect.
Figure 193: Migration / Table Checker (MIGCHECK) – Features
The JAVA-based "Migration Checker" was developed to check that there is a log file for each package file (option: -checkPackages). This is an indicator that R3LOAD did run for them. The second check is to verify that each action in the task file is completed successfully (option: -checkObjects). Unsuccessful tasks are listed in an output file. The two features are used by SAPINST for NetWeaver 04S and later, to check the import completeness.
The "Table Checker" feature is used to check the number of table rows. It can be used to make sure that tables contain the right number of rows after import. Database and table dependent exceptions are handled automatically. As this is a long running task, it can be started manually only.
Figure 194: SAPINST: Complete / Modify Package Table (6.40)
SAPINST allows for custom export/import order definitions and even the change of individual parameters for each single package.

PKGNAME: Name of the package (*.STR file).
PKGID: Identifier <SAPSID>_<DBSID>.
PGKDIR: Path where DATA and DB sub-directories for the package reside.
PKGCMDFILE: The name of the command file for this package. (Will be generated automatically, but if you want to use your own, you may enter its name).
PKGDDLFILE: The name of the DDL<DBS>.TPL file.
PKGFILESIZE: Size of the data dump file.
PKGLOADOPTIONS: Additional DB specific R3LOAD options that will be applied when this package is imported.
ORDER: defines the sequence in which the packages are to be loaded. The load starts with the lowest values first. Negative values are also allowed.
As NetWeaver 04S uses MIGMON to start R3LOAD, the advanced features of the Migration Monitor are used, instead of the mechanism above.
Figure 195: Unsorted Export
Before starting an unsorted export, please read SAP Note "954268 Optimization of export: Unsorted unloading"! By default, the system unloads the data as sorted. Sorting takes time and needs a large temporary storage, so the export will be faster if it can be omitted. Take care about consequences in the target system (performance impact), so that you can avoid performance problems during the import.
Certain table types are not allowed to be exported in an unsorted way. If you use MaxDB as the source database, you can unload sorted data only. If you use MaxDB as the target database, you should export all of the tables as sorted; do not override this option when you export MaxDB. If you use MSSQL as the target database, you must export all of the tables as sorted. If you have to unload the tables as unsorted and if you use MSSQL as the target database, you should refer to Note 1054852. This is controlled by the prikey: entry (ORDER_BY_PKEY) in the DDL<DBS>.TPL file. R3LDCTL generates DDL<DBS>_LRG.TPL files to simplify unsorted exports since NetWeaver 04. SAP Note 954268 explains the release- and code-page-dependent considerations.

Figure 196: Changing R3LOAD Table Load Sequence in *.STR
Do not re-order tables in *.STR files after export. If more than one dump file exists for a single *.STR file, R3LOAD will not be able to read table data from, for example, file *.002 and for the next table from file *.001, if the table order in the *.STR file was changed after export.
Figure 197: Initial Extent Larger than Consecutive DB Storage
The situation above can be a problem on Oracle dictionary managed tablespaces, but should not apply to locally managed tablespaces. Customer databases can contain tables and indexes that require a larger "initial extent" than the maximum possible in a single data container. In such cases, reduce the "initial extent" in the *.EXT file and adapt the "next extent" size class in the relevant *.STR file. The new "initial extent" size should be slightly less than the maximum available space in the data container. This gives the database some space for internal administration data.

Lesson: MIGMON - Migration Monitor for R3LOAD
Business Example
You need to know the appropriate MIGMON configuration scenario for specific customer SAP System landscapes.

and MIGMON is used for the import. The client MIGMON is used to transfer the files to the target host and to signal the importing MIGMON. In the case of socket usage. Even if MIGMON is not used for the import. Groups of packages can be assigned to different DDL*. The note also describes how to download the software from SAP Marketplace.TPL files. The export job number is ignored. Already existing *. where R3SETUP/SAPINST performs the database export. Even if MIGMON was not used to perform the export. the import can still benefit from the advanced MIGMON R3LOAD control features.TSK or *. The export server mode applies where R3SETUP/SAPINST will be replaced for the export. that a package is ready to load. Data transfer configuration variants . because the Export Monitor requests the job number from Import Monitor during startup. Figure 200: Migration Monitor (MIGMON) – Parameters The number of export and import processes can be different.CMD files will not be overwritten.Figure 198: Migration Monitor (MIGMON) – Features (1) SAP Note: 784118 “System Copy JAVA Tools”. Figure 199: Migration Monitor (MIGMON) – Features (2) The export client mode applies. the advanced control features of the export processes can help to save time. the number of export and import processes is the same. but used.

Data transfer configuration variants:
• FTP: File transfer via FTP between source and target system
• Network: Export directory is shared between source and target system
• Socket: R3LOAD will use sockets (requires R3LOAD 6.40 or higher)
• Stand-alone: MIGMON runs stand-alone, i.e. the export will be provided on transportable media only (possibly no fast network connection to the source system is available). It can be combined with FTP to copy the R3LOAD control files to the target system.

The usage of FTP might be a security risk, but it is a reliable method of data transfer. The FTP parameters contain the logon password. To hide the FTP password in the command line (visible using the "ps -ef" command on UNIX, or various Windows tools), the export_monitor_secure.sh/bat files should be used.

Figure 201: Migration Monitor – Net Configuration Variant

The Migration Monitor Net Configuration Variant is useful in environments where file systems can be shared. For consistency reasons, exports should always be done to local file systems! In the example above, the export directory and the network exchange directory are shared from the exporting to the importing system. As soon as a package is successfully exported, the corresponding signal file (*.SGN) will be created in the network exchange directory. Now the importing Migration Monitor starts an R3LOAD process to load the dump from the shared export directory. The file "export_statistics.properties" is generated by the exporting Migration Monitor before it exits and is used to inform the importing Monitor about the total number of packages and how many of them are erroneous. If all export packages are ok, the importing Migration Monitor stops looking for new packages in the exchange directory. After the successful load of all packages, it starts the load of the SAPVIEW.STR.
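To make the net configuration variant more concrete, a minimal export_monitor_cmd.properties could contain entries along the following lines. The property names reflect common MIGMON versions and the paths are invented placeholders; always check the Migration Monitor documentation attached to SAP Note 784118 for the exact syntax of your release:

   exportDirs = /exp/ABAP
   installDir = /inst/migmon
   ddlFile = /exp/ABAP/DB/DDLORA.TPL
   jobNum = 6
   netExchangeDir = /net/exchange
   monitorTimeout = 30
   trace = all

The importing Monitor uses a matching import_monitor_cmd.properties pointing to the same shared directories.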

Figure 202: Migration Monitor – FTP Configuration Variant

The Migration Monitor FTP Configuration Variant is useful in environments where file systems cannot be shared, but an FTP file transfer is possible. In the above example, the export and import directories are located on different hosts. The FTP exchange directory is on the target system. As soon as a package is successfully exported, the corresponding files will be transferred to the importing system. After success, the signal file (*.SGN) will be created in the FTP exchange directory. Then the importing Migration Monitor starts an R3LOAD process to load the dump from the import directory. The "export_statistics.properties" file is used in the same way as in Net mode. A network share, a manual file copy, or the Migration Monitor FTP file transfer (option -ftpCopy) can be used to copy the R3LOAD control files to the target system. Pay attention to the FTP time-out settings. FTP servers may have certain default settings which limit the amount of data that can be copied in a single session. In the case of unclear FTP transfer problems it is very important to check the FTP server logs and settings, because the returned error information will sometimes not provide a sufficient description of the FTP problem.

Figure 203: Migration Monitor – Socket Configuration Variant

The Migration Monitor socket method is, in theory, the fastest method for exporting and importing data, but it requires a stable network, and the exporting and importing databases must always have enough resources to serve the R3LOAD processes. The importing Migration Monitor must be started first. The exporting Migration Monitor connects to the importing Monitor, using the provided socket port. The communication between the export and import Monitor ensures that the right port numbers will be written into the corresponding *.CMD files, for each R3LOAD process started. The socket port numbers are incremented one-by-one. No port number is used twice; unusable port numbers are skipped (they may be in use by others). If a firewall is between the source and target system, make sure that a whole port range (base port + number of R3LOAD packages + safety) is released for the duration of the migration.

Figure 204: Migration Monitor – Stand-Alone Configuration

The Migration Monitor Stand-Alone Configuration Variant is useful in environments where source and target systems do not have a network connection, or the existing connection is too slow for a file transfer. The Migration Monitor is used to start R3LOAD processes only; the file transfer between the source and target system will be done using transportable media. In the above example, the export and import directories are located on different hosts in different locations.

Figure 205: Migration Monitor – Control Files

The export/import state or the file transfer state can be changed from minus ("-") to zero ("0") to restart R3LOAD or a file transfer. In case of a file transfer restart, all dump files of a package are copied again. Sockets only: MIGMON for NetWeaver '04 cannot restart the R3LOAD process by changing the state only (future versions will support this).

Example: import_state.properties
SAPAPPL1=0      Not started yet
COEP=?          Running
SWW_CONT-1=+    Finished (part 1 of splitted table)
SWW_CONT-2=+    Finished (part 2 of splitted table)
SWW_CONT-3=-    Error (part 3 of splitted table)
SWW_CONT-4=0    Not started yet (part 4 of splitted table)
SWW_CONT-5=0    Not started yet (part 5 of splitted table)
SWW_CONT-pre=+  Finished (table and primary key creation, pre-processing)
SWW_CONT-post=0 Not started yet (secondary index creation, post-processing)

Figure 206: MIGMON Installation Tool Integration (1)

The MIGMON server mode for pre-NetWeaver 04 SR1 versions can only be used if SAPINST has been forced to stop, by implementing an intended error situation. (Nothing will be lost.)

Figure 207: MIGMON Installation Tool Integration (2)

The MIGMON server mode for NetWeaver 04 SR1 can also only be used if SAPINST has been forced to stop, by implementing an intended error situation. SAPINST NetWeaver 04S uses MIGMON to start R3LOAD processes; the MIGMON R3LOAD start features are integrated into SAPINST dialogs. SAPINST NetWeaver 04S requires a manual start of MIGMON if the socket mode is used.

Figure 208: Summary: R3LOAD Unload/Load Order by Tool

SAPINST allows you to select different orders for unloading or loading the database. In addition, the MIGRATION MONITOR unload/load process order can be defined in the respective properties file, i.e. a file can be provided that contains a list of packages defining the unload/load order. If the file does not contain all existing packages, the remaining packages are unloaded in alphabetical order and loaded by size – starting with the largest package. The feature of customizing the execution order of each *.STR file gives good control over the unload or load process.

Figure 209: MIGMON Export / Import Order

In the above example, the tables were splitted from their standard *.STR files into package files containing one table only. The package names were inserted into "export_order.txt". The Migration Monitor will export the packages in exactly the order defined in "export_order.txt"; afterwards it will export the remaining packages in alphabetical order. On the target system, the packages will be imported as specified in "import_order.txt". If no package mentioned in "import_order.txt" is available for import (still exporting), the package with the next largest dump file will be used instead.

Generally, the largest tables should be exported first. In the above example the tables GLPCA and MSEG are big, but not the biggest. For the import it was decided to give them top priority because they have a lot of indexes, and so the index creation times will exceed even the import time of the largest table SOFFCONT1. Often two different export- and import-order files make sense, i.e. if some tables have a lot of indexes but are small compared to the largest tables. In this case the overall run-time of a smaller table can be much longer than for the larger table, because of the index creation time.

Figure 210: Advanced MIGMON DDL*.TPL File Usage

The Migration Monitor can be used to export or import selected packages with specific DDL<DBS>.TPL files. The above export example shows how to export three packages unsorted (DDLORA_LRG.TPL) and the majority of all tables the standard way (DDLORA.TPL).
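A sketch of what the two order files described for Figure 209 could look like. The package names are taken from the example above, while the layout (one package name per line, read from top to bottom) reflects common MIGMON usage and should be verified against the MIGMON documentation of your release:

   export_order.txt (largest dump files first):
      SOFFCONT1
      GLPCA
      MSEG

   import_order.txt (index-heavy tables first):
      GLPCA
      MSEG
      SOFFCONT1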

The import example utilizes a special Oracle feature to parallelize the index creation. For that purpose two different DDL<DBS>.TPL files were generated, to import two packages with index creation parallel degree 2 (DDLORA_par_2.TPL) and the other two packages with index creation parallel degree 4 (DDLORA_par_4.TPL). The remaining packages are imported as usual (DDLORA.TPL).

Lesson: MIGTIME & JMIGTIME - Time Analyzer

Business Example
You need to analyze the export/import behavior in an OS/DB Migration to minimize the downtime for the final migration of a productive system.

Figure 211: Time Analyzer (MIGTIME / JMIGTIME) – Features

SAP Note: 784118 "System Copy JAVA Tools". The note also describes how to download the software from the SAP Marketplace. MIGTIME obtains the export/import time information from the *.LOG and *.TOC files. JMIGTIME retrieves the time information from the JLOAD <PACKAGE>.STAT.XML files. Over time, the content of the R3LOAD *.TOC files has been improved by adding more and more information. R3LOAD 6.40 writes separate time stamps for data load and index creation (earlier versions did not!). The Time Analyzer can handle all existing formats.

Figure 212: Time Analyzer – Output Based on Export Files (1)

The list output shows the start/end date and the export duration of each package, and additionally provides run-time information for the longest running tables, as seen above.
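The package lists described for Figure 212 are typically produced by small wrapper scripts shipped in the MIGTIME archive. The script and output file names below are assumptions based on common MIGTIME packages; SAP Note 784118 and the README in the archive describe the exact invocation for your version:

   # analyze the export (run where the R3LOAD *.LOG / *.TOC files are accessible)
   ./export_time.sh
   # analyze the import
   ./import_time.sh
   # join export and import times into one overview
   ./time_join.sh
   # typical results: export_time.txt/html, import_time.txt/html, time_join.txt/html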

Figure 213: Time Analyzer – Output Based on Export Files (2)

The HTML output gives a quick overview of the package run-time distribution.

Figure 214: Time Analyzer – Output Based on Import Files (1)

The list output shows the start/end date and the import duration of each package. If the used R3LOAD version (i.e. 6.40) provides time stamps for each table import and primary key/index creation, the output list can distinguish between data load and index creation time. The list of long running tables can be generated for pre-6.40 R3load releases too, because create table/index times are present in the log files, but it does not contain data and index columns, only a time column. The log file only contains time information for when a data load ends; therefore the time for tables in the old R3load releases is not 100% correct: table time = table load time + index/pkey creation time for the previous table (if index/pkey is created after data load). From R3LOAD 6.40, the table time is correctly determined.

Figure 215: Time Analyzer – Output Based on Import Files (2)

Figure 216: Time Analyzer – Time Join

Lesson: Table Splitting for R3LOAD

Lesson Overview
Explanation of the table splitting procedure for R3LOAD

Business Example
You need to know how R3LOAD table splitting works and how to troubleshoot problems.

Figure 217: R3TA Table Splitter

R3TA analyzes a given table and returns a set of WHERE conditions that will select approximately the same number of rows. For each WHERE condition one R3LOAD can be started. The resulting "<table>.WHR" file requires further splitting into the "<table>-n.WHR" format (WHERE SPLITTER). Because of the complex handling of splitted tables, the usage of MIGMON is mandatory. The parallel export does not reduce the export time only, it will also allow an earlier start of the import. Even if the parallel import into a single table is not supported on your database, the overall time saving because of the parallel export itself is significant enough. If the parallel import into a single table is not possible on a particular database type, a sequential import of splitted tables can be forced by defining MIGMON load groups. Please check the respective system copy manual and related notes for current limitations.

SAP Note: 952514 Using the table splitting feature

Figure 218: Oracle PL/SQL Table Splitter
The PL/SQL table splitter analyzes a given table and returns a set of WHERE conditions that will select
approximately the same number of rows. For each WHERE condition one R3LOAD can be started. Normally
the PL/SQL script is faster than R3TA, as it uses Oracle-specific features. The resulting *.WHR files can be
used without further splitting (no WHERE SPLITTER required).
SAP Note: 1043380 Efficient Table Splitting for Oracle Databases (the current PL/SQL table splitter script is
attached to the note)
Specific ROWID table splitting limitations:
• ROWID table splitting MUST be performed during downtime of the SAP system. No table changes are
allowed for ROWID splitted tables after the ranges have been calculated and until the export has been completed. Any table
change before the export requires a recalculation of the ROWID ranges.
• ROWID splitted tables MUST be imported with the “-loadprocedure fast” option of R3load.
• ROWID table splitting works only for transparent and non-partitioned tables.
• ROWID table splitting CANNOT be used if the target database is a non-Oracle database.

Figure 219: Table Splitting in SAPINST ≥ NW04
Table splitting is a task which is done before the export. The "split_input.txt" file must specify the tables to
split and into how many pieces. Note that the input format differs depending on whether R3TA or the Oracle PL/SQL table
splitter is used; check the corresponding system copy guide.
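As an illustration only: for R3TA-based splitting the input file is commonly filled with one table per line together with the intended number of pieces, in a <table>%<number of splits> notation. The notation differs between SAPINST releases and between R3TA and the PL/SQL splitter, so always take the exact format from the system copy guide:

   CKIS%10
   SWW_CONT%5
   CDCLS%8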
The “R3ta_hints.txt” contains predefined split fields for the most common large tables. More tables and fields
can be inserted with an editor. The file has to be located in the directory in which R3ta will be started. If
"R3ta_hints.txt" is found and the table to split is listed in it, the predefined field will be used; otherwise R3TA
analyzes each field of the primary key to find the best matching one. The "R3ta_hints.txt" is part of the
R3TA archive which can be downloaded from SAP Marketplace, if not already on the installation media.
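The hint file is a plain list that maps a table to the column R3TA should split on. The two-column layout shown here is an assumption based on typical delivered versions of "R3ta_hints.txt", and the table/column pairs are only illustrative examples:

   MSEG    MBLNR
   GLPCA   RYEAR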

CAUTION: When doing a system copy with the change of the code page (non-Unicode to Unicode; 4102 to
4103; 4103 to 4102), make sure not to use a WHERE condition with the PAGENO column included for cluster
tables (e.g. CDCLS, RFBLG, ...).
The resulting "*.WHR" files will be written into the DATA subdirectory of the specified export directory. Table
splitting will only take effect if the specified export directory is the same as the one used for the R3LOAD export later on. The
"whr.txt" file contains the names of the splitted tables. It can be used as an input file for the package splitter to
make sure that each splitted table has its own *.STR file.
It depends on the SAPINST release whether a database type can be selected or not. SAPINST 7.02 can make
use of the Oracle PL/SQL splitter if the database type Oracle is selected. Radio buttons allow you to choose
between the R3TA and the PL/SQL table splitter.

Figure 220: Example of an R3TA Based Table Splitting
The above example shows the R3TA WHERE file creation for an Oracle database.
The CKIS.STR is provided to the command line to tell R3TA which fields belong to the primary key.
R3TA generates a CKIS.WHR file containing the computed R3LOAD WHERE conditions, a set of files to
create a temporary index, and a further set of files to drop the temporary index.
It must be decided on an individual basis whether it makes sense to create an additional index or not.

Figure 221: R3TA Example: Create Temporary Index (Optional)
Depending on the database type, database optimizer behavior, table type, table field or table size, a temporary
index can improve the R3LOAD data selection considerably. To find out whether a temporary index makes
sense or not, a SQL EXPLAIN statement can help to check the database optimizer cost factor on the data to
select. Such checks should be done on a copy of the productive system, for example.
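A quick way to perform the optimizer check described above on an Oracle source database is an EXPLAIN PLAN on one of the generated WHERE conditions. The statement below is only a sketch: the table and the WHERE clause stand in for one of the conditions from the *.WHR file, and the column values are invented.

   EXPLAIN PLAN FOR
     SELECT * FROM "CKIS" WHERE "MANDT" = '100' AND "KALNR" < '0000500000';

   SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
   -- compare the cost and access path with and without the temporary index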

The corresponding system copy guide describes how to create or delete R3TA related indexes.

Figure 222: R3TA Example: Drop Temporary Index
If the temporary index does not improve the R3LOAD export, it can be dropped using the predefined files or
with SQL commands directly.

Figure 223: R3TA Example: WHERE Condition File CKIS.WHR
R3TA writes all WHERE conditions for a table into one single file. It must be split into pieces to utilize a parallel
export with MIGMON.
If exactly the requested number of splits cannot be achieved, more or fewer
WHERE conditions may be created. In the example above, 10 splits were requested but R3TA created 11.
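To make the result more tangible, a split WHR file for one of the pieces usually contains the table name and a single WHERE condition. The layout below (a "tab:" line followed by a WHERE clause) is sketched from typical R3LOAD WHR files, and the column values are invented, so verify it against the files R3TA actually generated:

   CKIS-3.WHR:
      tab: CKIS
      WHERE ("KALNR" >= '0000200000') AND ("KALNR" < '0000300000')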

Figure 224: R3TA Example: CKIS.WHR Splitting

Each WHERE condition must be put into a separate file, otherwise the MIGMON mechanism to support table splitting would not work as intended. The WHERE splitter is part of the JAVA package splitter archive; a description of the WHERE splitter usage is available in the splitter archive. In the case of SAPINST, it will be called automatically. If R3TA was called directly, the WHERE splitter must be called manually.

Figure 225: Example of an Oracle PL/SQL Based Table Splitting

The above example shows the PL/SQL script based WHERE file creation for an Oracle database. A split strategy can be chosen between field or ROWID splitting. ROWID splitting can be used if the target database is Oracle ("-loadprocedure fast" must be used for the import). Opposite to R3TA, the PL/SQL splitter creates *.WHR files directly usable by MIGMON.

Figure 226: Example: MIGMON Export Processing (1)

As soon as MIGMON finds "*.WHR" files, it generates the necessary "*.TSK" and "*.CMD" files automatically. Make sure to have a separate "*.STR" file for each splitted table.

Figure 227: Example: MIGMON Export Processing (2)

For each "*.TSK" file a corresponding "*.CMD" file will be created. The "*.TSK" files will be created with the special option "-where" to put the WHERE condition into it.

Figure 228: Example: MIGMON Export Processing (3)

R3LOAD inserts the used WHERE condition into the *.TOC file. So it is easy to find out which part of a table is stored in which dump file. Furthermore, this information is used for a safety mechanism to make sure the import runs with the same WHERE conditions as the export did (otherwise it could lead to a potential data loss in import restart situations). In the case of a mismatch, R3LOAD stops on error. Before the export, the respective "*.WHR" file is scanned for an optional database hint to be utilized during the data export (currently implemented for Oracle only, directing the optimizer to choose a certain execution plan). The export log file information of R3LOAD 7.20, "(DB) INFO: Read hintfile: D:\EXPORT\ABAP\DATA\CKIS-1.WHR", means that such a hint file was read.

Figure 229: Example: Directory Content after Export

To simplify the graphic above, no deep directory structures are shown (like SAPINST creates) and the files under "<export_dir>/DB" are not explicitly mentioned. R3LOAD is assumed to run in "/inst" and the export directory is named "/exp". The "/inst/split" directory is used to run R3TA some days or hours before the database export. The R3TA WHERE file was splitted and the results were copied into "/exp/DATA". In the case of the Oracle PL/SQL splitter, the WHERE files can be put directly into "/exp/DATA".

Figure 230: Example: MIGMON Import Processing (1)

MIGMON automatically makes sure that the "*.TSK" and "*.CMD" files for table creation are generated before the data import. For databases that need a primary key before the import, it will be created together with the table. This preparation phase is marked in the MIGMON "import_state.properties" file as "<table>-pre=+".

Figure 231: Example: MIGMON Import Processing (2)

After the table was created successfully, the data load processes are started. Multiple "*.TSK" files are generated, one for each WHERE condition. The "*.TSK" files will be created with the special option "-where" to put the WHERE condition into it.

Figure 232: Example: MIGMON Import Processing (3)

For each "*.TSK" file, the corresponding "*.CMD" file is generated. Before starting the import, R3LOAD compares the WHERE condition between the "*.TOC" and "*.TSK" files and terminates on error in case of a mismatch.

Figure 233: Example: MIGMON Import Processing (4)

After start, R3LOAD compares the WHERE condition between the "*.TOC" and "*.TSK" files. R3LOAD stops on error in case of a mismatch. A successful import is only possible if the WHERE condition used for the export is identical to the one during import. Otherwise a possible restart would delete more or less data from a table, which can result in a data loss. In case of an Oracle "-loadprocedure fast", R3LOAD does not commit data until the import is finished successfully.

Figure 234: Example: MIGMON Import Processing (5)

After all parallel import processes for the splitted table were finished, the remaining tasks can be started: creating the primary key and secondary indexes. For databases creating the primary index before the import, the remaining task is the secondary index generation only. This post-import phase is marked in the MIGMON "import_state.properties" file as "<table>-post=+".

Figure 235: Example: Force Sequential Import of CKIS Splits

If the target database does not allow importing into the same table with multiple R3LOAD processes (because of performance or locking issues), MIGMON can be instructed to use a single R3LOAD process for a specified list of packages. In the above example, the file "import_order.txt" is read by MIGMON to set the import order. All packages belonging to group [CKIS], that is CKIS-1 to CKIS-11, will be imported using one single R3LOAD process (jobNum = 1). This does not guarantee that CKIS-1 is imported before CKIS-2, but it will make sure that no two R3LOAD processes import into CKIS at the same time. A group can have any name, but it makes sense to name it like the table in charge. Besides the number of R3LOAD processes (jobNum=), the R3LOAD arguments for task file generation (taskArgs=) and import (loadArgs=) can be defined individually for each group. The total number of running R3LOAD processes is the sum of the number of processes specified in "import_monitor_cmd.properties" and the number of processes defined in "import_order.txt".
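Based on the group keywords named above (a bracketed group name plus jobNum=, taskArgs= and loadArgs=), the relevant part of "import_order.txt" could look roughly as follows. The exact syntax can vary between Migration Monitor versions, so treat this as a sketch and check the MIGMON documentation:

   [CKIS]
   jobNum = 1
   loadArgs = -loadprocedure fast
   CKIS-1
   CKIS-2
   ...
   CKIS-11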

Figure 236: Example: Directory Content after Import

For Oracle:
• CKIS__DPI.TSK: create table, but do not load data, create the primary key, or indexes
• CKIS__TPI.TSK: load data, but do not create table, primary key, or indexes
• CKIS__DT.TSK: create primary key and indexes, but do not create table, or load data

Lesson: DISTMON - Distribution Monitor for R3LOAD

Business Example
In a test run of a Unicode Conversion project, it was identified that the CPU load on the database server was the bottleneck of the R3LOAD export. Running R3LOAD on a separate server would solve the problem.

Figure 237: DISTMON – Distribution Monitor

To distribute the R3LOAD CPU load to different systems, it makes sense to utilize the Distribution Monitor. Various types of application servers can be used, i.e. a mix of two 4-CPU systems and one 8-CPU system, or even systems running on different operating systems. As long as the operating systems and DB client libraries are supported by the respective SAP release, a wide range of system combinations is possible. Nevertheless, if more than one R3LOAD server is planned, from an administrative point of view it is easier to have a homogeneous operating system landscape; file system sharing can be complex otherwise.

Figure 238: DISTMON – Restrictions

DISTMON makes use of R3LOAD features not available below 6.40. DISTMON can only handle the ABAP data export; JAVA stacks must be exported using JLOAD.

Figure 239: DISTMON Server Layout

The communication directory is used to share configuration and status information among the servers. It is physically mounted on one of the involved systems and shared to the other application servers. Control files (DDL*.TPL, *.STR, export_monitor_cmd.properties and import_monitor_cmd.properties) are generated here and distributed during the preparation phase. For safety reasons, the export of each application server is written to locally mounted disks and not to NFS mounted file systems.

Figure 240: DISTMON Distribution Process

Each MIGMON will be started locally on the respective application server. The start is initiated by DISTMON. That means each application server can run a MIGMON for the export and a second one for the import. Each MIGMON runs independently and does not know about other MIGMONs in the case of parallel export/import on the same server. Status information is read from the shared communication directory. The status monitor allows the monitoring of the application servers from a single user interface.

Lesson: JMIGMON - Migration Monitor for JLOAD

Business Example
You need to know the appropriate JMIGMON configuration scenario for a specific customer SAP System landscape.

Figure 241: JAVA Migration Monitor (JMIGMON) – Features

The very first implementation came with SAPINST 7.02. The JLOAD package files must be created with JPKGCTL before starting the export or import. JPKGCTL creates a "sizes.xml" file containing the package sizes to support an ordered export with the largest packages first. The parallel export/import makes use of "*.SGN" files like in the MIGMON implementation. Failed JLOAD processes can be restarted by changing the content of the file "export/import.jmigmon.states".

Figure 242: JMIGMON – Net Configuration

The JMIGMON network configuration is useful in environments where file systems can be shared between the source and target system. For consistency reasons, exports should always be done to local file systems/directories! In the example above, the export directory and the network exchange directory are shared from the exporting to the importing system. As soon as a package is successfully exported, the corresponding signal file (*.SGN) will be created in the network exchange directory. Now the importing JMIGMON starts a JLOAD process to load the dump from the shared export directory.

Figure 243: JMIGMON – Stand-Alone Configuration

The JMIGMON "Stand-Alone Configuration" is useful in environments where source and target systems do not have a network connection, or the existing connection is too slow for a file transfer. The JMIGMON is used to start JLOAD processes only. The file transfer between the source and target system will be done using transportable media. In the above example, the export and import directories are located on separate hosts in different locations.

Figure 244: JMIGMON – Control and Output Files

The JMIGMON state files are used to control which packages are already exported, currently in use, or terminated on error. Changing a package state from minus ("-") to zero ("0") will force JMIGMON to restart the job. The "jmigmon.console.log" should be inspected in case of export or import errors. More detailed information can be found in the respective job log.

Example export.jmigmon.states:
EXPORT_METADATA.XML=+ Finished
EXPORT_13_J2EE_CONFIGENTRY.XML=+ Finished (splitted table)
EXPORT_14_J2EE_CONFIGENTRY.XML=+ Finished (splitted table)
EXPORT_0.XML=+ Finished

Example import.jmigmon.states:
IMPORT_METADATA.XML=+ finished
IMPORT_13_J2EE_CONFIGENTRY.XML=+ finished (splitted table)
IMPORT_14_J2EE_CONFIGENTRY.XML=? running (splitted table)
IMPORT_0.XML=- failed

Lesson: Table Splitting for JLOAD

Business Example
You need to know how JLOAD table splitting works and how to troubleshoot problems.

Figure 245: JPKGCTL – Package and Table Splitting

The "split" parameter defines the size limit for JLOAD packages. JPKGCTL will add tables to a package until the size limit is reached. If a table is equal to or larger than the given size, the package file will contain this single table only. The number of packages is related to the size limit parameter: a small size will result in a large number of package files, compared to a large size which will create only a few packages. The "splitrulesfile" is only required if table splitting is planned. It can contain entries in three different formats. If only the number of splits is specified, all fields of the primary key are checked for highest selectivity. If multiple fields are provided, the most selective field is used. In the case where a single field is explicitly given, only this field is used for splitting.

Figure 246: JPKGCTL (JSPLITTER) – Workflow

The "jsplitter_cmd.properties" file is generated according to user input by SAPINST. JPKGCTL connects to the database, reads the database object definitions, and calculates the sizes of the items to be exported. The tables are distributed to the JLOAD job files (packages). The distribution criterion is the package size as provided in the "jsplitter_cmd.properties" file. After all packages are created, the "sizes.xml" file containing the expected export size of each package is written. JMIGMON will use the content to start the export in package size order.

Figure 247: JPKGCTL (JSPLITTER) – Table Split Strategy

Table splitting is an optional task. It makes sense for large tables which influence the export time significantly. JPKGCTL is able to find a useful split column automatically, but then it will only check the fields of the primary key. If a different field should be used, it must be explicitly mentioned in a split rule file. If the requested number of splits cannot be achieved, the number of splits will be automatically reduced. If even this does not result in useful WHERE conditions, JPKGCTL gives up and no table splitting takes place.

Figure 248: EXPORT<PACKAGE>.XML of a splitted Table

The WHERE condition is used to select data of a specified range. For each job file of a splitted table, a separate JLOAD is started.

Exercise 9: Advanced Migration Techniques
Solution 9: Advanced Migration Techniques

Task 1: A customer database of an ABAP SAP System has 10 very large tables that are between 2 and 20 GB in size and some other large tables ranging from 500 – 2,000 MB. After the JAVA- or Perl-based Package Splitter was executed with option "-top 10" (move the 10 largest tables to separate *.STR files), 10 additional *.STR files exist, but they contain other tables than expected. What can be the reason for this behavior?

Hint: What file is read to get the table size? What happens to large tables?

1. a) R3SZCHK limits the computed table sizes to a maximum of 1.78 GB. A 20 GB table will have the same *.EXT entry as a 2000 MB table. Because of this, the package splitter catches the first 10 largest tables found in the *.EXT files.

Task 2: In the preparation of an R3LOAD heterogeneous system copy, the customer was asked to install Perl 5 or a JAVA JDK on his Windows production system, but he denied because of restrictive software installation policies. Nevertheless, what can be done to improve the export time?

1. a) The *.STR files can be split manually using an editor, or they can be transferred to another system where Perl or JAVA is available to perform the split. Caution: If the split is done in advance, be sure that no new changes have been made to the ABAP Dictionary since the initial creation of the *.STR files! Otherwise you risk inconsistencies. In order to do this, the export will need to have been stopped after R3SZCHK has started.

Task 3: The Migration Monitor has a client and a server export mode. What are the benefits of using the client mode?

1. a) No changes to the standard R3SETUP or SAPINST export process are required.
b) Automatic file transfer to the target system is possible.
c) The data load can be started automatically as soon as the first package is signaled to be ready.

Unit 9: Performing the Migration

Lesson: Performing an ABAP System Migration

Business Example
You need a quick overview about the executed steps in an ABAP System Migration.

Figure 249: Technical Migration Steps (ABAP-Based System)

Many migration steps can be performed in parallel in the source and target systems. After step 3 (generate templates for DB sizes) has been performed in the source system, you should be prepared to start step 8 (create database in the target system). Once step 6 (file transfer) is complete, steps 7-8 should already have been performed in the target system. In the case where MIGMON is used for concurrent export/import, the steps 4, 5, 6, 9, 10 will run in parallel.

Figure 250: Technical Migration Preparation (1)

Just before you start the migration, check all the migration-related SAP Notes for updates. Check the corresponding SAP Notes and SAP System upgrade guides for further reference.

Figure 251: Technical Migration Preparation (2)

To reduce the time required to unload and load the database, minimize the amount of data in the migration source system. If the database contains tables that are not in the ABAP Dictionary, check whether some of these tables also have to be migrated. If the target system has a new SAP SID, release all the corrections and repairs before starting the export. Before the migration, make sure to de-schedule all jobs in the source system; the reports BTCTRNS1 (set jobs into suspend mode) and BTCTRNS2 (reactivate jobs) can be helpful. This avoids jobs failing directly after the first start of the migrated SAP System.

Figure 252: Technical Migration Preparation (3)

The execution of report "SMIGR_CREATE_DDL" is mandatory for all SAP systems using non-standard database objects (BI/BW, SCM/APO). For NetWeaver 04 and later, the execution of "SMIGR_CREATE_DDL" is a must! Depending on the target database, additional options might be available, which can be selected in the field "Database Version". The "Installation Directory" can be any file system location. "Optional Parameters" allows the creation of a <TABART>.SQL file for a certain TABART, or for a specific table only. If the selected TABART or table is not a BW object, no <TABART>.SQL file will be created. The resulting <TABART>.SQL file will always have the name of the TABART. If no database specific objects exist, then no <TABART>.SQL files will be generated. As long as the report terminates with status "successfully", everything is ok. Copy the <TABART>.SQL files to the SAPINST export install directory or directly into the "<export_dir>/DB/<target_DBS>" directory. Make sure not to make any changes to the non-standard objects after "SMIGR_CREATE_DDL" has been called! See SAP Notes:
• 771209 "NetWeaver 04: System copy (supplementary note)"
• 888210 "NetWeaver 7.00/7.10: System Copy (supplementary note)"

Figure 253: Generate *.STR Files

R3SETUP/SAPINST calls R3LDCTL and R3SZCHK. DBSIZE.TPL is created by R3SETUP from the information computed by R3SZCHK and stored in table DDLOADD. The runtime of R3SZCHK depends on the version, the size of the database and the database type.

Figure 254: Split *.STR Files and Tables

The generated *.STR and *.EXT files will be split into smaller units to improve the unload/load times. Depending on the version, R3SETUP calls the Perl script to split *.STR files; SAPINST uses the JAVA- or the Perl-based Package Splitter. On large databases table splitting will reduce the export/import run-time significantly. Follow the guidelines in the homogeneous/heterogeneous system copy manual.

Figure 255: Generate Export *.CMD and *.TSK Files

SAPINST/MIGMON call R3LOAD to create task files. R3SETUP/SAPINST/MIGMON generate command files. If WHERE files exist, the WHERE conditions will be inserted into the *.TSK files.

Figure 256: Export Database with R3LOAD

R3SETUP/SAPINST/MIGMON start a number of R3LOAD processes. A separate R3LOAD process is started for each command file. The R3LOAD processes write the dump files to disk. As soon as an R3LOAD process terminates (whether successfully or due to error), R3SETUP/SAPINST/MIGMON start a new R3LOAD process for the next command file. Do not use NFS file systems as an export target for the dump files! Dump files can be unnoticeably damaged and cause data corruption!

Figure 257: Manual File Transfer (1)

EBCDIC R3LOAD control files created on AS/400 systems must be transferred in ASCII mode, if the target system is to run on an ASCII-based platform. In cases where dump files must be copied to transportable media, make sure that the files are copied correctly. It is better to spend additional time on verifying the copied files against the original files than spending several hours or even days to transport them to the target system, only to discover that some files had been corrupted by the copy procedure used. Appropriate checksum tools are available for every operating system.
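One simple way to do the verification recommended above is to create checksums on the source host and compare them after copying. The commands below are generic UNIX examples (any checksum tool will do) and the paths are placeholders:

   # on the source host, record checksums relative to the export data directory
   cd /exp/ABAP/DATA && find . -type f -exec md5sum {} \; > /tmp/export_checksums.md5
   # copy the dump files and the checksum list, then verify on the target host
   cd /import/ABAP/DATA && md5sum -c /tmp/export_checksums.md5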

Figure 258: Manual File Transfer (2)

The file "LABEL.ASC" is generated during the export of the source database. R3SETUP/SAPINST uses its content to determine whether the load data is read from the correct directory.

Figure 259: Get Migration Key (1)

The migration key must be requested from the customer, because he has to accept the shown migration key license agreement. Generally, the migration key is identical for different SAP Systems of the same installation number. The migration key in NetWeaver 7.00 systems with an "old" SAP license installed (upgraded system) is different than for the "new" SAP license. Check the corresponding system copy note for details.

Figure 260: Get Migration Key (2)

The R3LOAD log files of 3.1I and 4.0B do not contain information about the source system, as the versions 4.5A and above do. Starting with 4.5A, always generate the migration key from the node name which is listed in the "(GSI) INFO" section of the R3LOAD export log (source system) and MIGKEY.log (target system). The node name shown by "uname -a" or "hostname" should be the "DB Server Hostname". Some systems use several different hostnames (e.g. in a cluster environment); if in doubt, check the log files. Since 4.6D, R3SETUP and SAPINST test the migration key by calling R3LOAD -K (upper-case K). The migration key must match the R3LOAD version. If asked for the SAP R/3 Release, enter the release version of the used R3LOAD. Check the migration key as soon as possible! All entries are case sensitive. In some installations the System ID can even be in lower-case letters, because it is obtained from the first three characters of "(GSI) INFO: dbname"! Before opening a problem call, see SAP Note 338372 "Migration key does not work" for further reference.

Figure 261: Install SAP and Database Software

Figure 262: Create Database

The size values for the target database that are calculated from the source database serve only as starting points. Some values will be too large and others will be too small. Therefore, be generous in your database sizing during the first migration test run; you can always adjust the values in subsequent tests. The experience gained through the test migration is better than any advanced estimate you could calculate.

Figure 263: Generate Import *.CMD and *.TSK Files

SAPINST/MIGMON call R3LOAD to create task files. The *.TSK files are generated separately for export and import – do not copy them! R3SETUP/SAPINST/MIGMON generate command files. If WHERE files exist, the WHERE conditions will be inserted into the *.TSK files. Since 7.02, the SQLFiles.LST is generated by SMIGR_CREATE_DDL together with the *.SQL files.

Figure 264: Import Data with R3LOAD

R3SETUP/SAPINST/MIGMON start the import R3LOAD processes.

Figure 265: Technical Post-Migration Activities (1)

The general follow-up activities are described in the homogeneous and heterogeneous system copy guides and their respective SAP Notes. Before the copied system is started the first time, the consistency between the ABAP and database dictionary will be checked and updated. R3SETUP/SAPINST will start the program "dipgntab" for that purpose. All updates of the active NAMETAB are logged in the file "dipgntab<SAP SID>.log". The summary at the end of this file should not report any error!

Figure 266: Technical Post-Migration Activities (2)

In many cases, the change of a database system will also include a change in the backup mechanism. Make sure to get familiar with the changed/new backup/restore procedures. After the migration, the SAP System statistics and backup information for the source system can be deleted from the target database.

Figure 267: Technical Post-Migration Activities (3)

Report RADDBDIF creates database-specific objects (tables, views, indexes). RADDBDIF is usually called by R3SETUP/SAPINST via RFC (user DDIC, client 000) after the data is loaded. The RFC parameters can be changed in transaction SM59.

Figure 268: Technical Post-Migration Activities (4)

The non-standard database objects (mainly BW objects) which were identified on the source system are recreated and imported into the target system, whether or not a *.SQL file was used. Table DBDIFF will be adapted accordingly. No missing BW objects will be shown in transaction DB02 afterwards. The data source system connection can be checked in transaction RSA1.

Figure 269: Technical Post-Migration Activities (5)

Depending on the database type, more or fewer indexes will be required, so the migrated BW objects need some adjustments. The report RS_BW_POST_MIGRATION will do this. The report variants SAP&POSTMGRDB and SAP&POSTMGR are pre-defined for system copies changing/not changing the database system. Run the report in the background, because the execution can take a while. The report covers, among others:
Invalidate Generated Programs: Generated programs can be database specific.
Adapt DBDIFF to New DB: Depending on the database type.
Adapt Aggregate Indexes: runs CHECK_INDEX_STATE
Adapt Basis Cube Indexes: runs CHECK_INDEX_STATE
Generate New PSA Version: runs RS_TRANSTRU_ACTIVATE_ALL
Delete Temporary Tables: runs SAP_DROP_TMPTABLES
Repair Fact View: runs SAP_FACTVIEWS_RECREATE
L_DBMISC: Database specific tasks (if defined for the current database)
Restriction to One Cube: restricts CHECK_INDEX_STATE to a single cube only
For further reference check SAP Note 777024 "BW 3.0 and BW 3.1 System copy" and/or read the corresponding chapter "Final Activities" in the homogeneous and heterogeneous system copy guide.

Figure 270: Technical Post-Migration Activities (6)

The tables in the SAP0000.STR file contain the generated ABAPs (ABAP loads) of the SAP System. These loads are no longer valid after a hardware migration. For this reason, R3SETUP/SAPINST does not load these tables. For a list of the tables, see the system copy guide. Each ABAP load is generated automatically the next time a program is called; the system will be slow unless all commonly used programs are generated. In order to make sure that every program will be re-generated according to the new database needs, use transaction SGEN (starting with Release 4.6B) to regenerate all ABAPs. On versions before 4.6B, run transaction SAMT or report RDDGENLD. The report RDDGENLD requires the file REPLIST in the SAP instance work directory. To create the file REPLIST in the source system, call report RDDLDTC2. After ok from customer side, the SAP jobs that have been set to suspend mode via report BTCTRNS1 can now be rescheduled with BTCTRNS2.

Figure 271: Post-Migration Test Activities

Take care when setting up the test environment. To prevent unwanted data communication to external systems, isolate the system. External systems do not distinguish between migration tests and production access.

To develop a cut-over plan, an already existing checklist from a previous upgrade/migration can be a valuable source of ideas. To identify any differences between the original and the migrated system, involve end users as soon as possible.

Lesson: Performing a JAVA System Migration

Business Example
You need a quick overview about the executed steps in a JAVA System Migration.

Figure 272: Technical Migration Steps (JAVA-Based System)

Figure 273: Technical Migration Preparation

Just before you start the migration, check all the migration-related SAP Notes for updates, as mentioned in the appropriate SAP Notes regarding homogeneous and heterogeneous system copies. Make sure to use the right version of the installation CD. The copy of other applications might need the installation of a certain support stack and a matching SAPINST.

Figure 274: Generate Template for Target DB Size

Figure 275: Collect Application Data from File System

If SAPINST does not recognize the installed application and its related files, no archives will be created. The "<export_dir>/APPS" directory might be empty if no applications are installed. Another possibility can be that the application is not known by SAPINST, i.e. one of those which keep their data in the file system. If this is the case, the corresponding SAP Notes will give instructions on how to deal with it. Applications that are not recognized by SAPINST may require operating system specific commands to copy the respective directories and files to the target system.

Figure 276: Collect SDM Data

In SAP releases below 7.10, the SDM repository itself is installed in the file system and will be redeployed into the target system from the SDMKIT.JAR file.

Figure 277: JPKGCTL (JSPLITTER)

JPKGCTL is optionally used since SAPINST 7.02. The "sizes.xml" file contains the expected export size of each generated packaged job file. It helps JMIGMON to export the packages by size: the largest package will be exported first, then the next smaller packages, and so on.

Figure 278: Export Database with JLOAD

Note: In versions without JPKGCTL, JLOAD generates the EXPORT.XML and IMPORT.XML by itself. Packaged job files containing multiple tables are named "EXPORT_<n>.XML" and job files for a single table only are named "EXPORT_<n>_<TABLE>.XML". If a table was splitted, the resulting job files are named the same, but with a different number. For every export job file an import job file is generated with the same name, but "EXPORT" is replaced with "IMPORT".

Figure 279: File Transfer

The files "LABELIDX.ASC" and "LABEL.ASC" are generated during the export of the source database. SAPINST uses their content to determine whether the load data is read from the correct directory.

Figure 280: Install DB/SAP Software and Extend Database

In SAPINST versions below NetWeaver 04S the database size is set using a default value.

Figure 281: Deploy SDM File System Data

The Software Deployment Manager holds its repository in the file system. For that purpose, the content of SDMKIT.JAR will be used. This step is not necessary anymore in NetWeaver 7.10 and later.

Figure 282: Import Database with JLOAD

In versions without JPKGCTL, JLOAD generates the EXPORT.XML and IMPORT.XML by itself.

Figure 283: Restore Application Data to File System

More and more JAVA-based software components will be integrated into the SAPINST system copy procedure. Future releases may be able to create additional directories/files in the export directory, which contain the necessary file system components as collected in the source system. In the meantime, SAP Notes will give instructions on how to copy or extract data manually. Since 7.10, well behaving JAVA programs should not write persistent data into the file system.

Figure 284: Technical Post Migration Activities

The general follow-up activities are described in the homogeneous and heterogeneous system copy guides and their respective SAP Notes. The license key of the source system is not valid for the target system; you will be required to provide a new one. After the system copy, the public-key certificates will be invalid on the target system. You will need to reconfigure them, if required. JAVA systems that include components that connect to an ABAP backend using the SAP JAVA Connector (SAP JCo) need to maintain the RFC destination. Component specific follow-up activities may be required, for example for SAP BW or SAP Enterprise Portal, SAP ERP, SAP CRM, SAP Knowledge Warehouse, Adobe Document Services, and so on, which are named "Others" in the above slide.

Figure 285: Post-Migration Test Activities

Take care when setting up the test environment. To prevent unwanted data communication to external systems, isolate the system. External systems do not distinguish between migration tests and production access. To identify any differences between the original and the migrated system, involve end users as soon as possible.

Exercise 10: Performing the Migration

Business Example
You need to request an OS/DB Migration Key from SAP. For that purpose, you need to know the right installation number for the key request.

Solution 10: Performing the Migration

Task 1: In the preparation of an R3LOAD system copy, the customer was asked for scheduled backups, scheduled SAP System jobs, external programs or interfaces which are writing directly to the database, and the planned SAP SID of the target system.

1. Why is it important to know which jobs are scheduled (SAP System or external jobs)?
a) While the export is running, only the SAP instance is shut down. External programs or interfaces that are directly writing into the database while the export is running can cause inconsistencies!

b) Scheduled database backups can shut down the database, or decrease the export performance.
c) After starting the migration target system the first time, scheduled jobs may be executed immediately. Depending on their nature, this can either be harmful or not harmful. Jobs that need to be run for verification purposes may only be capable of being executed once, and in this stage the target system has not been properly configured. Because of this, critical jobs should be set to "planned" status before starting the export. If this is not possible, take suitable precautions in the target system.

Hint: The BTCTRNS1 and BTCTRNS2 reports are available to set all scheduled jobs to "suspended" status on the source system, and to invoke them again on the target system.

2. In the case that the SAP SID will be changed during the system copy, which actions should be taken before the export?
a) In the case that the SAP SID is changed, all open Corrections and Repairs should be released in the source system. If not, the transport system initialization (SE06) will close them without releasing the transports to the file system.

Task 2: The export of a heterogeneous system copy runs on the central instance of an SAP System with the following configuration:

Source system:
Installation number: 012004711
Standalone database server hostname: "dbsrv01"
Central instance server hostname: "cisrv01"

Target system:
Installation number: 012000815
Central instance with database, hostname: "cidb001"

1. What would be the correct installation number to request the migration key?
a) The installation number of the source system will always be used to generate the migration key: "012004711".

2. Which hostnames must be provided when filling in the migration key request form in the SAP Service Marketplace?
a) The hostname of the system that R3LOAD is running on is of great importance. As the export has to be done on the central instance, the following combination of hostnames is needed to request the migration key:
Export system: "cisrv01"
Import system: "cidb001"

3. Where can information about the R3LOAD version and the hostnames used for export/import (especially in environments where systems have a lot of network controllers and IP addresses, or are even clustered) be found?
a) The R3LOAD export and import log files contain the used hostnames. The R3LOAD version is also mentioned in the log files. In the case of a migration key mismatch, always check these entries.

Hint: When requesting the migration key, make sure to enter the version of R3LOAD and not the version of the SAP (base) System. The versioning can be confusing on SAP releases running on a backward compatible kernel.

Unit 10: Troubleshooting

Unit Overview
This unit discusses special error situations. Some of them are quite rare, but nevertheless they occur! The most important knowledge for troubleshooting is to understand the restart behavior of R3LOAD and JLOAD very well.

Lesson: Troubleshooting

Business Example

There are sometimes strange error situations which are not easy to understand. You want to understand their reasons and how to avoid them.

Figure 286: R3LOAD – Unload Terminations

Are the migration tools used compatible with the current SAP System or database version? Password problems? Are changes to the root user or the environment necessary before starting R3SETUP/SAPINST?

The active object definition in the ABAP Dictionary differs from the object definition in the database, or the tables do not exist in the database at all. For some tables this is intentional and is caught by a special handling routine in R3LDCTL. These tables are usually recorded in the exception table DBDIFF and do not cause errors. In general, if an error occurs, it must involve a table whose data definition is unintentionally wrong. This situation must be corrected. Often QCM* tables are involved. Please ask the customer for reasons and/or check for SAP Notes. Related SAP Note: 9385 "What to do with QCM tables".

Not enough space to unload the data in the dump file system: As a rule of thumb, the export can be started to a file system which has about 10% - 15% of the database size. If no additional disk space is available, copy already finished dump files to a different location while the export is running.

R3LOAD exports data in the primary key order. Depending on the database used, more temporary database disk space may be required for sorting. Increase the database storage units that are used for sorting (i.e. PSAPTEMP), or reduce the number of parallel running R3LOAD processes.

For the Perl PACKAGE SPLITTER: Is at least Perl version 5 installed? Does the default search path point to the right Perl executable? Does the first line of the SPLITSTR.PL script contain the right name of Perl in your system? For the JAVA PACKAGE SPLITTER: Is the right JAVA version installed?

Figure 287: R3LOAD – Load Terminations

Are the migration tools used compatible with the current SAP System or database version? Are changes to the root user environment necessary before starting R3SETUP/SAPINST? Make sure that the database user can access the directories and files of the import file system.

Not enough temporary database disk space for index creation (sorting): Increase the database storage units that are used for sorting (i.e. PSAPTEMP), or reduce the number of parallel running R3LOAD processes.

Oracle rollback segment problems: Restart R3SETUP/SAPINST and try again, or implement the necessary measures in the database (Oracle). Older installation software may not activate Oracle undo management.

Figure 288: Useful R3LOAD Environment Variables (1)

The R3LOAD warning level can have any value, as long as something is set. It is useful in cases where files cannot be found or opened without an obvious reason; if set, much more information is written than normal. The contained list of environment variables can assist you in the analysis of database connection problems caused by wrong environment settings.

Figure 289: Useful R3LOAD Environment Variables (2)

The R3LOAD trace level is forwarded to the DBSL interface only. Useful values are between 1 and 4. The higher the value, the more output is written. Most of the output can only be interpreted by developers, but in the case of database connection problems the trace can give valuable hints for troubleshooting.

Figure 290: R3LOAD – Load Termination Example

Initial situation:
1. Table ATAB has been created successfully.
2. DB2 SQL error 551 occurs during an INSERT to table ATAB.
3. Error text: Database user SAPR3 is not authorized to perform the INSERT.
Not enough space to unload the data in the dump file system. The higher the value.

trailing blanks can be inserted. R3SETUP/SAPINST is given a negative return code and starts a new R3LOAD process for the next command file. To do this.STR file to be read is TABLE09. In this case. Figure 295: Power Failure / OS Crash at Export Time (2) In the above example. silently corrupted primary keys can lead to double exported data records. A termination occurs. but it can happen. Figure 294: Power Failure / OS Crash at Export Time (1) In case of an OS crash or power failures. Data will be loaded.TOC or *. The slide above describes a rare situation. Exports into a network mounted file systems can lead to similar symptoms if the network connections breaks. 2. R3LOAD always exports table data without trailing blanks! Similar data existing in the source database.TOC or *. As the operating system could not flush all of the file buffers to disk. R3load exported TABLE08 already and the *. the number of records in the *. Abnormal terminations are caused by external events. The next table in the *.LOG” file. duplicate keys are caused by unexpected terminations at export or import time. In the source database. SAP ABAP Systems do not write data with trailing blanks into table fields. Figure 293: Duplicate Key at Import Time In most cases. but in the ABAP DDIC. This may or may not be intentional. R3LOAD executes the truncate/delete SQL statement which is defined in the DDL<DBS>. with or without trailing blanks (because the data was modified by the SAP System). It is not a termination caused by a database error or file permission problems. a power failure or operating system crash occurred. or SMIGR_CREATE_DDL was not called on the source system. If external programs are directly writing into SAP System tables. Figure 292: R3LOAD – Restart Example R3LOAD ≥ 6.TSK was updated. which is TABLE08.TOC file contains block 48. The *.TPL file. which explains the SQL error 100 (Row not found). and not while creating the table. R3LOAD now opens up the dump file and does a seek to the next write position block 49 (which is behind end of file in this case).6D Situation after R3LOAD has been started again: 1. it is difficult to ascertain which OS buffers were flushed to the open files or what happened to the entire file.SQL file was found by R3LOAD. Some tables might not have a primary key on the source database. the result was a mismatch between the dump file and the *.The R3LOAD process that is processing file “SAPPOOL. the table contents must be deleted first.TSK file content. Please ask the customer for reasons and/or check for SAP Notes. Because the error occurred during the load. R3LOAD executes the SQL statement “DELETE FROM”.10 Situation after R3LOAD has been started again: (1) R3LOAD reads the first task of status “err” or “xeq” from the “SAPPOOL. no data has been loaded yet. R3LOAD until 4.CMD” cannot continue. which will be stored at .TOC file.TSK” file. this will cause duplicate key errors at import time. To do this. The following pages will provide reasons and problem solutions. due to the SQL error. Correcting the problem:Grant access authorization for table ATAB to user SAPR3 Figure 291: R3LOAD – Restart Example R3LOAD ≤ 4. Verify primary key/repair primary key and export the table again. Data will be loaded. In the above case. Cleanup source table and export again.6D: After restarting the export process.TOC file will be larger than the number of rows which are reported by a SELECT COUNT(*) statement. Restart complete. and not while creating the table. 
Figure 294: Power Failure / OS Crash at Export Time (1)
In case of an OS crash or power failure, it is difficult to ascertain which OS buffers were flushed to the open files or what happened to the entire file. Such abnormal terminations are caused by external events; this is not a termination caused by a database error or file permission problems. A termination occurs, the entry in the *.TOC or *.TSK file has already been written, but the data was not yet written by the operating system into the dump file. The slide above describes a rare situation, but it can happen; it usually only affects very small tables that are exported completely, where only a few blocks have to be flushed to the dump file. Exports into network mounted file systems can lead to similar symptoms if the network connection breaks.

Figure 295: Power Failure / OS Crash at Export Time (2)
In the above example, a power failure or operating system crash occurred. R3LOAD had already exported TABLE08 and the *.TOC or *.TSK file was updated, but the data was not yet written by the operating system into the dump file. As the operating system could not flush all of the file buffers to disk, the result was a mismatch between the dump file and the *.TOC or *.TSK file content.
R3LOAD until 4.6D: After restarting the export process, R3LOAD looks for the last exported table in the *.TOC file, which is TABLE08. The next table in the *.STR file to be read is TABLE09. The *.TOC file records block 48 as the last data block in the dump file, so R3LOAD opens the dump file and does a seek to the next write position, block 49 (which is behind end of file in this case); the new data will be stored at block 49 and so on. The gap between block 42 (last physical write) and block 49 contains random data.
R3LOAD 6.10 and above: As it is not clear whether the *.TOC or *.TSK file has been updated before all data was flushed to the dump file by the operating system, the use of the "merge" option may not be 100% safe and can thus lead to the same problems as in earlier R3LOAD versions. It is therefore not recommended to use the "merge" option to restart without repeating the export of the involved R3LOAD packages from scratch.

Figure 296: Power Failure / OS Crash at Export Time (3)
In the above example, a power failure or operating system crash occurred. R3LOAD had already exported the large TABLE02 and the entries in the *.TOC or *.TSK files had already been written, but the last 8 data blocks were not yet written by the operating system into the dump file. This problem is rare, but it can also happen to tables that are exported completely, where only the last blocks still have to be flushed to the dump file. After the restart, R3LOAD looks for the last exported table in the *.TOC file, which is TABLE02. The next table in the *.STR file to be read is TABLE03. The *.TOC file records block 320.000 as the last valid data block in the dump file, so R3LOAD opens the dump file and does a seek to the next write position, block 320.001 (which is behind end of file in this case); the new data will be stored at block 320.001 and so on. The gap between block 319.992 (last physical write) and block 320.001 contains random data.
R3LOAD versions since 4.5A create a block check sum (CRC) to identify corrupted data. At import time, R3LOAD will stop with a checksum error or with duplicate key errors. Earlier versions will try to load the data and will usually stop on error while uncompressing the data. For R3LOAD 6.10 and above, the "merge" option may not be 100% safe in this situation either and can thus lead to the same problems as in earlier R3LOAD versions; repeat the export of the involved packages from scratch.

Figure 297: Power Failure / OS Crash at Export Time (4)
Remove only the files of packages that were not completed at the time of the system crash. Task files can be re-created by using the same R3LOAD command line as shown in the *.LOG file.

Figure 298: R3LOAD – Export Error due to Space Shortage (1)
In the described case, there was not enough space to unload the data into the dump file system. After increasing the space of the file system, the export will restart and finish without further problems.

Figure 299: R3LOAD – Export Error due to Space Shortage (2)
SAP Note: 769476 "Danger of inconsistencies". The above export log shows unload terminations due to a shortage of space in the file system. In such a case the *.TOC or *.TSK file can contain more or fewer entries than the corresponding dump file, and the content can be misleading; therefore it is not recommended to use the "merge" option to restart without repeating the export of the involved R3LOAD packages from scratch. The same scenario can happen if an export process is writing to an NFS file system while a network error occurs.

Figure 300: Export Rules for R3SETUP / SAPINST / MIGMON
The R3LOAD export process is very sensitive, so be sure to prevent any disturbance. If the R3LOAD export was done by someone else and you are responsible for the import only, make sure to receive the export logs along with the export dump files; in case of import errors caused by dump corruptions, the export logs should be examined for troubleshooting. Do not modify the *.LOG or *.TSK files unless absolutely necessary. When manipulating the "<PACKAGE>.TSK" or "<PACKAGE>.LOG" files, be very cautious and examine the completed results carefully.

Figure 301: Power Failure / OS Crash at Import Time (1)
In the case of an OS crash or power failure, it is difficult to ascertain which OS buffers were flushed to the open files or what happened to the entire file. As databases are designed to write data in a safe way, however, the R3LOAD import is less critical in the case of power failures or operating system crashes than the export of data.

Figure 302: Power Failure / OS Crash at Import Time (2)
In the above example, a power failure or operating system crash occurred after R3LOAD had already imported Table08. As the operating system could not flush all of the file buffers to disk, the *.TSK file contained only the information that Table05 and its primary key were created; the result is a mismatch between the database content and the *.LOG or *.TSK file content. This usually only happens to small tables.

Figure 303: Power Failure / OS Crash at Import Time (3)
R3LOAD until 4.6D: The restart will try to create Table06, but it already exists, so R3LOAD stops on error. Another scenario that can occur is that the first table is treated correctly and the problem only arises with the second table; this depends on the *.LOG or *.TSK file content.
R3LOAD 6.10 and above: The merge of the *.TSK and *.LOG files before restarting a task sets all entries that are not marked as executed ("xeq") to error ("err"). If the last unsuccessful action was "create object" (table or index), the restart action will be drop table; if the last unsuccessful action was "load data", the restart action will be delete data. Afterwards R3LOAD is able to perform the right restart activities.

Figure 304: Duplicate Key Problem after Restarting Import (1)
In a restart scenario where the import of a large table fails and the SQL command that deletes the entire table content also fails with an error, R3LOAD assumes that the table has been emptied and thus restarts the import with the data load. A restart beginning with the data load can result in duplicate keys; the creation of the primary key then stops with a duplicate key error. This kind of problem has often been caused by Oracle "Snapshot too old" situations.

Figure 305: Duplicate Key Problem after Restarting Import (2)
Database-specific delete commands (for example, the Oracle command TRUNCATE) can work faster than the standard SQL command DELETE. As R3LOAD 6.x uses the Oracle "TRUNCATE" statement to delete data, the described error should not happen any more. Nevertheless, other database errors can lead to similar erroneous restart situations.
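To illustrate the difference between the two delete variants, the following generic statements can be compared; the exact statement R3LOAD uses is taken from the DDL<DBS>.TPL file of the respective database, and the table name ZTEST is only an example:

    -- Standard SQL: deletes the rows one by one and writes undo information;
    -- on very large tables this can run for a long time and may terminate with
    -- errors such as the Oracle "Snapshot too old" situation mentioned above.
    DELETE FROM ZTEST;

    -- Database-specific (for example Oracle): deallocates all data blocks in one
    -- operation and is therefore much faster on large tables.
    TRUNCATE TABLE ZTEST;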

Figure 306: Duplicate Key Problem after Restarting Import (3)
For verification, count the table rows after the import.

Figure 307: Corrupted R3LOAD Dump Files (1)
RFF = Read from file, RFB = Read from buffer. An RFF error means that R3LOAD cannot read from the dump file; an RFB error means that R3LOAD cannot read from the buffer. The buffer is used to load a certain number of data blocks in order to uncompress them; typical buffer sizes range only a few MB.
(RFF) ERROR: SAPxxx.TOC is not from same export as SAPxxx.001 – The *.TOC file and the dump file do not belong together; files of different exports have been accidentally mixed.
(RFF) ERROR: buffer (... KB) too small (the figure is larger than 10.000) – R3LOAD read an invalid buffer size from the dump file.
(RFB) ERROR: CsDecompr rc = -1 – Buffer data can not be decompressed; the dump file is corrupted.
(RFB) ERROR: wrong checksum – invalid data – The checksum of loaded data blocks is wrong.
See SAP Notes: 143272 "R3LOAD: (RFF) ERROR: buffer (xxxxx kB) to small" and 438932 "R3LOAD: Error during restart (export)".

Figure 308: Corrupted R3LOAD Dump Files (2)
Analysis: Check the export log files. Did they really finish successfully? Did something take place during the export? Was there a restart situation? Use different checksum tools to compare the original and the copied files, or use different algorithms. The only 100% check of whether a table in a package file is in the database or not is to compare the database dictionary with the content of the *.STR file.

Figure 309: SICK – System Installation Check Failed
Transaction SICK detects errors that generally indicate an incorrect SAP Basis installation.

Figure 310: DB02 – ABAP DDIC / Database Consistency Check
Some required tables are created by external programs, such as SAPDBA/BRCONNECT, without maintaining the appropriate ABAP Dictionary tables. Some database specific objects are created in the last step of the system copy via RFC programs. If tables or indexes are missing, check whether there are SAP Notes related to these issues, or whether SMIGR_CREATE_DDL was not called on the source system.

Figure 311: R3LOAD – Load Termination due to Space Shortage
Be generous with database space during the first test migration (add, if possible, 20% space as a rule of thumb). Adjustments for the production migration can be determined from the results of the test migrations.

Figure 312: Nametab Import Problem (Unicode Conversion)
Warn the students that they must never use the Perl-based Package Splitter when performing a Unicode Conversion! Problem: Executing "R3trans -d" returns "No TADIR in this system", and the trans.log file contains the warning "The HEAD entry in TADIR is missing". During the Unicode Conversion, the Active Nametab tables must be specially treated by R3LOAD. This is only possible if the tables are imported in a certain order in the same *.STR file. Normally, SAPSDIC.STR contains the tables in the right way. But after a split, SAPSDIC.STR does not necessarily contain all Active Nametab tables anymore: when using the Perl- or JAVA-based Package Splitter (version 1), it can happen that the Active Nametab tables are separated into different *.STR files. In NetWeaver 04S, the JAVA-based Package Splitter has been improved to write the ABAP Nametab tables into a single package called "SAPNTAB.STR". The revised JAVA package splitter is also available from the SAP Marketplace. For more information, see SAP Note 833946 "Splitting of STR files".

Figure 313: Data/Index in Wrong Database Storage Unit
In many cases, tables and indexes are moved to database storage units created by customers. The *.STR file will still contain the original SAP TABART settings, so the objects can end up in the wrong (standard) database storage unit on the target system. Oracle databases running on the "old" tablespace layout will be automatically installed with the reduced tablespace set on the target system.
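One way to verify where tables and indexes actually ended up is to query the database catalog on the target system. The following is a sketch for an Oracle-based system; the catalog views are standard Oracle views, and ZTEST is only an example table name:

    -- In which tablespace was the table created?
    SELECT table_name, tablespace_name
      FROM user_tables
     WHERE table_name = 'ZTEST';

    -- In which tablespaces were its indexes created?
    SELECT index_name, tablespace_name
      FROM user_indexes
     WHERE table_name = 'ZTEST';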

Figure 314: JLOAD Export Restart Behavior
Because it does not make sense to continue with another table if a previous table failed to export, JLOAD stops if an error occurs. In NetWeaver ’04 and NetWeaver 7.00 there is only a single EXPORT.XML; since NetWeaver 7.02 there can be multiple JLOAD job files (packages). In NetWeaver ’04 SR1, the "EXPORT.STA" file can be found under "/usr/sap/<SAPSID>/<Instance>/j2ee/sltools"; check the SAPINST log file for the file location in other versions.

Figure 315: JLOAD Import Restart Behavior
Even if a table fails to import, it makes sense to proceed with other tables (thus saving time). As a "continue-on-error" strategy is used, a later restart will only deal with the erroneous objects. In NetWeaver ’04 and NetWeaver 7.00 there is only a single IMPORT.XML; since NetWeaver 7.02 there can be multiple JLOAD job files (packages). In NetWeaver ’04 SR1, the "IMPORT.STA" file can be found under "/usr/sap/<SAPSID>/<Instance>/j2ee/sltools"; check the SAPINST log file for the file location in other versions.

Exercise 11: Troubleshooting
Business Example
You need to know which files are read during a restart and which precautions are required depending on the cause of an error. Note: The first guess might be wrong! Think about which R3LOAD files are read.
Task 1: During the import of a table, R3LOAD 6.x stopped because the database returned an error. After the problem was fixed, R3LOAD was started again.
1. Which restart action will be automatically executed by R3LOAD?
2. What is the exact database statement that R3LOAD will use for the restart?
Task 2: R3LOAD provides restart functionality on export and import.
1. Why isn't it a problem to restart a terminated import which failed because of "out of space" in the database?
2. Why could it be dangerous to restart a terminated export caused by "out of space" in the export file system?
Task 3: JLOAD can restart a terminated export or import.
1. Which files are read to find the restart position in the dump file?
2. Which files are read to restart an import?

Solution 11: Troubleshooting
Task 1:
1. Which restart action will be automatically executed by R3LOAD?
a) R3LOAD will delete all table data from the table before loading it again.
2. What is the exact database statement that R3LOAD will use for the restart?
a) R3LOAD will use the database command which is defined as the truncate table statement (section "trcdat:") in the DDL<DBS>.TPL file. If nothing is defined there, DELETE will be used as the default. The installation programs R3SETUP and SAPINST can change the content of the DDL<DBS>.TPL file before starting the import; check the *.CMD files to find out which DDL<DBS>.TPL file has been used.
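For illustration, the relevant entries in a DDL<DBS>.TPL file could look like the following simplified sketch (Oracle example with the placeholder syntax used in the template files; the exact content depends on the R3LOAD version and on changes made by R3SETUP/SAPINST):

    deldat: DELETE FROM &tab_name&
    trcdat: TRUNCATE TABLE &tab_name&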
Task 2:
1. Why isn't it a problem to restart a terminated import which failed because of "out of space" in the database?
a) In the case that R3LOAD stops because the database returned an error, it can still close all files properly (that means the files have a consistent state).
2. Why could it be dangerous to restart a terminated export caused by "out of space" in the export file system?
a) It is not certain which R3LOAD file was written last. If the dump file contains fewer blocks than recorded in the *.TOC file, a restart will start behind end-of-file; the blocks between end-of-file and the new write position contain random data.
Task 3:
1. Which files are read to find the restart position in the dump file?
a) JLOAD reads the content of the EXPORT[_<PACKAGE>].STA file to identify the tables that have already been exported. The next table to export is read from the export job file EXPORT[_<PACKAGE>].XML. The dump file is opened, blocks of already exported tables are skipped, and JLOAD proceeds with writing from the position found.
2. Which files are read to restart an import?
a) JLOAD checks the IMPORT[_<PACKAGE>].STA file for erroneous entries. Error flagged tables of the IMPORT[_<PACKAGE>].STA file will be repeated, and remaining tables will be read from the job file IMPORT[_<PACKAGE>].XML.

Unit 11: Special Projects
This unit can be inserted between any lessons after completing the first four units.

Lesson: Special Projects
Lesson Duration: 15 Minutes
Lesson Overview
Contents
• Special considerations for NZDT/MDS system copies and Unicode Conversions
• Technical description of the NZDT/MDS method
Business Example

Figure 316: NZDT – A Minimized Downtime Service (MDS)
The Near Zero Downtime method (NZDT) is an SAP Minimized Downtime Service (MDS) using an incremental migration approach, which has been developed to copy very large databases. It is suitable for heterogeneous system copies and Unicode Conversions (or a combination of both). Furthermore, other technical maintenance events like upgrades or updates can be performed in the course of this type of Near Zero Downtime procedure. Compared to the standard system copy procedure, the incremental transfer mechanism can reduce the technical system copy downtime significantly, to a few hours or even less. At the moment (May 2012), the NZDT method can be applied by specially trained SAP consultants only (which might change with future NZDT versions or procedures). SAP usually delivers this type of project for a fixed price. Please discuss with SAP whether a customer specific NZDT project is possible.

Figure 317: NZDT/MDS – Features
The NZDT Workbench runs on a NetWeaver system with the DMIS Add-On installed. It is used to configure and control the migration process between the source and the target system. Depending on the migration scenario, the NZDT Workbench will be a separate system or is installed on the target system. To log table changes, insert, update, and delete table triggers are created, which will be fired as soon as the content of the table is changed. The primary key of the changed record will be written into a log table. Freeze triggers are created for tables where no change is expected, or where it was agreed with the customer to allow no changes during the NZDT migration process. A freeze trigger is used to force a short dump if a transaction tries to change the content of a record; this effectively sets the respective table to a read only mode. This implies some system usage restrictions, but reduces the number of tables which must be synchronized during the online or offline delta replay. In BW and SCM systems, structure changes of tables and indexes are quite common, which must be considered because the table structure of the triggered tables must not change during the NZDT process. In case of a Unicode Conversion, the data stream runs through the NZDT Workbench, which also performs the data translation to Unicode; this helps to shorten the downtime.
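Conceptually, the change logging can be pictured as a database trigger that writes the key of each changed row into a logging table. The following is a minimal, generic sketch in Oracle-style SQL; it is not the actual DMIS/NZDT implementation, and all object and column names (ZTAB01, ZTAB01_LOG, KEYFIELD) are made up for illustration. In reality, separate triggers for insert, update, and delete are created:

    -- Illustration only: record the key of every updated row of ZTAB01 in the
    -- logging table ZTAB01_LOG, together with the change type, a time stamp,
    -- and the processing status.
    CREATE OR REPLACE TRIGGER ztab01_nzdt_upd
    AFTER UPDATE ON ztab01
    FOR EACH ROW
    BEGIN
      INSERT INTO ztab01_log (mandt, keyfield, change_type, change_time, status)
      VALUES (:NEW.mandt, :NEW.keyfield, 'U', SYSTIMESTAMP, 'unprocessed');
    END;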
Figure 318: NZDT/MDS Scenario: Export Remaining Tables
In this scenario, the insert, update, and delete triggers will usually be implemented for the 100 – 200 largest tables, to cover about 90% of the database data for the online delta replay. The replay transfers nearly all data of the triggered tables before the downtime. During the downtime, a final synchronization takes place, transferring the records which were not already synchronized or which were changed during the ramp down; in addition, the remaining tables are exported and imported using R3LOAD.

Figure 319: Prepare NZDT Workbench and Source System (S1)
Setup:
• Install a separate NetWeaver system with the DMIS Add-On, running the NZDT Workbench (based on SLO Migration Workbench technology).
• Install the DMIS Add-On on the source system.
Using the NZDT Workbench, the selected 100 – 200 large tables in the source system can now be prepared. Triggers and logging tables are created to record inserts, updates, and deletes in the source database. There must be enough free space for the logging tables. The table structure of the selected tables must not change during the NZDT process. As a consequence, all transports into the source system must be examined for dangerous objects; transports which intend to modify triggered tables must be postponed to a point in time when the NZDT procedure is completed.

Figure 320: Create Clone and Target System (S1)
After the triggers were established in the source system, the system is cloned (copied via backup/restore or advanced storage copy techniques). R3LOAD is used to export the clone system; the SAP System of the clone will never be started and is used for the export only. All tables will be exported via R3LOAD and imported into the target system. In cases where the NZDT method is used for a homogeneous system copy, it might be possible to use the clone system directly as the target system. In case of a Unicode Conversion, the translation to Unicode is done in the NZDT Workbench. The database triggers and log tables are not active in the target system. The target system will be isolated before starting it the first time.

Figure 321: Online Delta Replay – Synchronize Table Data (S1)
The online delta replay table synchronization can be scheduled individually for each table, thus balancing the batch load on the source system. On heavily-used systems, it might be advisable to install a separate application server for running the synchronization batch processes. Every time a row is inserted, updated, or deleted, a database trigger fires and updates the logging table. The logging tables contain the primary keys of the changed records, additional information about the type of change (insert, update, delete), a time stamp, and the process status. The synchronization jobs scan the logging tables (here TAB01' and TAB02') for unprocessed records; these records are then transmitted via the NZDT Workbench to the target system. Changes to the same record are optimized in such a way that only the last recorded change is transmitted. A safe protocol makes sure that only those records are marked as completed (processed) which have been successfully updated in the target database.
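As a simplified SQL sketch of this protocol (made-up table and column names, not the actual NZDT coding), a synchronization job could work as follows:

    -- Step 1: select the log entries that still have to be processed.
    SELECT mandt, keyfield
      FROM ztab01_log
     WHERE status = 'unprocessed';

    -- Step 2: read the current data for these keys from ZTAB01 and transmit it
    --         via the NZDT Workbench to the target system.

    -- Step 3: only after the target system has confirmed the update, mark the
    --         entry as processed in the source system.
    UPDATE ztab01_log
       SET status = 'processed'
     WHERE keyfield = :confirmed_key   -- key confirmed by the target system
       AND status = 'unprocessed';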
Figure 322: System Ramp Down (S1)
After the online replay is finished, the source system must be ramped down for the final offline delta replay. This means that there must be no system activity anymore: users are locked out, no jobs are running, interfaces are stopped, and so on.

Figure 323: Offline Delta Replay and Target Cleanup (S1)
After the source system has been ramped down, the offline (final) delta replay takes place. It transfers the records for which the online replay was not 100% completed, or whose data was changed during the ramp down; usually this takes only a few minutes. After completing the last delta replay, the source and target systems will be stopped to avoid any further data changes. The remaining tables will be deleted from the target system to prepare the R3LOAD import.

Figure 324: Export/Import Remaining Tables (S1)
Every table which was synchronized with the delta replay does not need to be exported by R3LOAD; the remaining tables are exported and imported by R3LOAD during the downtime. The amount of data in the remaining tables must not be larger than what can be exported and imported within the customer-defined maximum technical downtime (that is, a few hours). The technical downtime depends mainly on the source system database performance and on the amount and size of the remaining tables; the achievable downtime is quite small compared to a conventional database export. After the import is completed, the technical migration is finished, and the resulting target system can be prepared for productive operation.

Figure 325: NZDT/MDS Scenario: Offline Delta Replay
In this scenario, all tables of the system must be classified in order to apply insert, update, and delete triggers or a freeze trigger to them. The insert, update, and delete triggers will usually be implemented for thousands of tables; the rest will get freeze triggers. For tables with insert, update, and delete triggers, it must be specified whether they should be synchronized during the online delta replay or the offline delta replay; the remaining unsynchronized table data will be transferred during the downtime offline delta replay. Tables with freeze triggers already have their final state after the import. Because almost all tables have triggers (and many of them have freeze triggers), transports are difficult to manage. As a consequence, there must be strict transport rules in place, only allowing emergency transports into the source system; most transports must be postponed to a point in time when the NZDT procedure is completed. No table structures must be changed during the NZDT process. The achievable downtime is very small compared to a conventional database export, and even smaller than in the NZDT scenario described previously. The technical downtime depends mainly on the source system database performance and on the amount of delta data which must be synchronized offline.

Figure 326: Prepare NZDT Workbench and Source System (S2)
Setup:
• Install a separate NetWeaver system with the DMIS Add-On, running the NZDT Workbench (based on SLO Migration Workbench technology).
• Install the DMIS Add-On on the source system.
On the NZDT Workbench, the tables of the source system can now be classified, and the triggers and logging tables will be created. There must be enough free space for the logging tables.

Figure 327: Create Clone and Target System (S2)
After the triggers were established in the source system, the system is cloned (copied via backup/restore or advanced storage copy techniques). R3LOAD is used to export the clone system; the SAP System of the clone will never be started and is used for the export only. All tables will be exported via R3LOAD and imported into the target system. In cases where the NZDT method is used for a homogeneous system copy, it might be possible to use the clone system directly as the target system. In case of a Unicode Conversion, the translation to Unicode is done in the NZDT Workbench. The database triggers and log tables are not active in the target system. The target system will be isolated before starting it the first time.

Figure 328: Online Delta Replay – Synchronize Table Data (S2)
The online delta replay table synchronization can be scheduled individually for each table, thus balancing the batch load on the source system. On heavily-used systems, it might be advisable to install a separate application server for running the synchronization batch processes. Every time a row is inserted, updated, or deleted, a database trigger fires and updates the logging table. The logging tables contain the primary keys of the changed records, additional information about the type of change, and the synchronization status (insert, update, delete, processed). The synchronization jobs scan the logging tables (here TAB01' and TAB02') for unprocessed records; these records are then transmitted via the NZDT Workbench to the target system. Changes to the same record are optimized in such a way that only the last recorded change is transmitted. A safe protocol makes sure that only those records are marked as completed (processed) which have been successfully updated in the target database.

Figure 329: System Ramp Down (S2)
After the online replay is finished, the source system must be ramped down for the offline delta replay. This means that there must be no system activity anymore: users are locked out, no jobs are running, interfaces are stopped, and so on.

Figure 330: Offline Delta Replay – Synchronize Remaining (S2)
After the source system is ramped down, the offline (final) delta replay takes place. All tables, except the ones with a freeze trigger, are synchronized: the replay transfers the changes of those tables where the online delta replay was not 100% finished or where data was changed during the ramp down, and the data of tables which could not be transferred online (for example USR02). Because of the freeze triggers, the amount of tables needing a transfer is smaller than in scenario 1. The offline delta replay is typically completed in less than one hour and finishes the technical migration. The resulting target system can be prepared for productive operation now. In summary, scenario 2 synchronizes almost all tables using the delta replay, whereas in scenario 1 only the largest tables are synchronized and the rest must be exported/imported (transferred) using R3LOAD.

Figure 331: R3LOAD Unicode Conversions
Unicode SAP Systems require SAP Web AS 6.20 and above. The Unicode Conversion is only applicable if a minimum Support Package level is installed; check the Unicode Conversion SAP Notes for more information. R3LOAD converts the data to Unicode while the export is running. As it is not sufficient for R3LOAD to read only the raw data, R3LOAD reads certain tables for additional Unicode conversion information during the export. Because this context is only available in the source system, a Unicode conversion at import time is not supported for customer systems. During the Unicode export, R3LOAD writes *.XML files which are used for final data adjustments in the target system (transaction SUMG). A reverse conversion from Unicode to non-Unicode is not possible. The "OS/DB Migration Check" applies to Unicode Conversions that change their target operating system or database system. Related SAP Notes: 0548016 "Conversion to Unicode", 1319517 "Unicode Collection Note".

Figure 332: Unicode Conversion Challenges
Very large databases, small downtimes, and slow hardware might require an incremental system copy approach. The Unicode Conversion of MDMP systems causes more effort than for single code page systems (increasing with the number of installed code pages). In MDMP systems, not all tables provide a language identifier for their data; for this data, a vocabulary must be created to allow an automated conversion. The creation and maintenance of this vocabulary is a time consuming task. ABAP coding must be reviewed using UCCHECK, which is available as of Web AS 6.20. Byte offset programming or dynamic programming (data types determined dynamically during program execution) can require a lot of effort. MDMP / Unicode interfaces cause high effort, and the recommendation is to minimize the number of language specific interfaces; a detailed analysis of all existing interfaces is necessary. See SAP Note 745030 "MDMP - Unicode Interfaces: Solution Overview". For non-SAP interfaces, the vendors need to be contacted. Third-party products that are "SAP certified" are not automatically also Unicode compliant; a specific Unicode certification is available, and certified products are listed on the SAP Marketplace. The involvement of experienced consultants will shorten the whole process significantly.

Exercise 12: Special Projects
Business Example
You want to know how the SAP NZDT/MDS migration procedure technically synchronizes the table data between the source and the target system. You are also interested in the general Unicode conversion approach.
Task 1: The SAP NZDT/MDS migration procedure is used to export large customer tables while the source system is online. Database triggers are implemented to allow table synchronization between the source and target system.
1. What is the task of the implemented database trigger during a table update?
2. How are the synchronization jobs able to recognize which records to update in the target database, and how does the synchronization work?
Task 2: Unicode conversions of SAP MDMP systems (using multiple code pages) are more difficult than for non-MDMP systems (using a single code page only).
1. Why is the Unicode conversion done at export time?
2. What is the reason that MDMP systems are more difficult to convert?

Solution 12: Special Projects
Task 1:
1. What is the task of the implemented database trigger during a table update?
a) As soon as the database has inserted, deleted, or updated a record, the trigger adds the primary key of the record to the log table and appends information about the change operation (insert, update, delete) and the process status.
2. How are the synchronization jobs able to recognize which records to update in the target database, and how does the synchronization work?
a) The synchronization job scans the log table for records having the status "unprocessed". The primary key found there is used to read the table data that must be transferred to the target system. After the transfer has been successfully completed and the receiving system has signaled "update successful", the record status is changed to "processed" in the log table of the source system.
Task 2:
1. Why is the Unicode conversion done at export time?
a) The table data in an R3LOAD dump file does not provide enough information for a customer system Unicode conversion. R3LOAD therefore reads certain tables for additional Unicode conversion information during the export; because this context is only available in the source system, the conversion must be performed at export time.
2. What is the reason that MDMP systems are more difficult to convert?
a) In a single code page system, every character has a unique related Unicode character. In an MDMP system, the meaning of a character depends on its language context. As this context is not always properly maintained, or is even unknown, time consuming export preparation tasks are required to map the data to the right language.

Feedback
SAP AG has made every effort in the preparation of this course to ensure the accuracy and completeness of the materials. If you have any corrections or suggestions for improvement, please record them in the appropriate place in the course evaluation.