
Lesson: Appendix

Figure 244: Database Instance Export on additional Hosts - File Locations

The package-filter files are stored in the shared directory DB/HDB. While the package-filter files are created, additional "visual" files are generated. They contain the information about which packages were assigned to which package-filter file and why (the size or time information of each package is supplied). The purpose of the "visual" files is to make the package distribution process comprehensible.
The file "package_filter_files.txt" contains the names of the package-filter and the package-filter-visual files.
The file "package_filter_visual.txt" contains the information on how the package distribution to the package-filter files was performed (top down):
hosta SAPNTAB 2820.90 MB
hostb REPOSRC 1700.00 MB
hostc DOKCLU 1282.89 MB
hosta SAPAPPL2_1 1282.89 MB
hostb SAPAPPL0_1 1030.95 MB
hostc SAPSSEXC_1 930.28 MB
...
The file "package_filter_hosta" contains the information about the packages inside the filter-
file
hosta SAPNTAB 2820.90 MB
hosta SAPAPPL2_1 1282.89 MB
...
The file "package_filter_hosta" contains the package names only:
SAPNTAB
SAPAPPL2_1
...
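The exact algorithm SWPM uses to fill the package-filter files is not documented here, but the top-down listing above is consistent with a simple round robin over packages sorted by descending size. The following Python sketch illustrates such a size-based distribution and writes per-host filter and "visual" files; the round-robin heuristic and the output file names ("package_filter_<host>", "package_filter_<host>_visual") are assumptions for illustration only.

# Illustrative only: distribute R3LOAD packages over export hosts by size.
# The actual SWPM algorithm and the exact file names may differ.
from itertools import cycle

def distribute_packages(packages, hosts):
    """packages: list of (name, size_mb) tuples; hosts: list of host names."""
    assignment = {host: [] for host in hosts}
    next_host = cycle(hosts)
    # Round robin over the packages, largest first, to balance the export load.
    for name, size in sorted(packages, key=lambda p: p[1], reverse=True):
        assignment[next(next_host)].append((name, size))
    return assignment

packages = [
    ("SAPNTAB", 2820.90), ("REPOSRC", 1700.00), ("DOKCLU", 1282.89),
    ("SAPAPPL2_1", 1282.89), ("SAPAPPL0_1", 1030.95), ("SAPSSEXC_1", 930.28),
]
for host, pkgs in distribute_packages(packages, ["hosta", "hostb", "hostc"]).items():
    # Filter file: package names only; "visual" counterpart: names plus sizes.
    with open(f"package_filter_{host}", "w") as flt, \
         open(f"package_filter_{host}_visual", "w") as vis:
        for name, size in pkgs:
            flt.write(f"{name}\n")
            vis.write(f"{host} {name} {size:.2f} MB\n")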
In the ideal case, the export on all servers runs in such a way that the importing server is constantly busy and never has to wait for packages. This does not mean that all servers need to finish at the same time.


Figure 245: Comparison between "Distribution Monitor" and "Database Instance Export on additional Hosts"

The Distribution Monitor was mainly developed to support Unicode conversions. The idea is to distribute the export/import CPU load to different application servers. The standard setup is to scatter the R3LOAD packages to different servers and to perform both the export and the import from the same host the packages were distributed to. The configuration is a manual task and requires some experience. Migration to SAP HANA is not supported.
The "Database Instance Export on additional Hosts" was especially developed to support
migrations to SAP HANA. It is part of the SWPM and requires no manual configuration effort.
The export is executed from different application servers and the import is done from a single
host. The export CPU load is distributed over different applications servers.
The concept of both methods is to move the export CPU load away from the database server. The import methodology is different, but both expect the performance bottleneck to be on the export side anyway.

Table of Content of Unit 11 Appendix

Figure 246: Table of Content of Unit 11 Appendix

DMO without System Update


Since SUM 1.0 SP19, DMO offers the option to perform a migration run without updating your SAP system and hence without the need to provide a stack.xml file. This option is relevant if you want to migrate your SAP system to the SAP HANA database but do not intend to update your SAP software, for example, to avoid regression tests, or because your SAP system is already on the latest version.

Note:
Although the Software Update Manager executes no update, it still creates the
shadow system and repository during uptime. It also locks the workbench!


Figure 247: DMO without System Update

The DMO without software change is still an in-place procedure, which keeps the application server on which the SUM is started. As DMO without System Update is derived from the standard DMO, the creation of a shadow system cannot be avoided; it requires the same effort and hardware resources as in an upgrade/update scenario, including a workbench lock. The uptime processing is not reduced compared to a standard DMO scenario. The main benefit of "DMO without System Update" is the highly automated export/import process, which makes use of the memory pipe technology that is not available in SWPM. When combined with "DMO with System Move", a conventional export is performed instead.

DMO with System Move


Since SUM 1.0 SP 20, the database migration option (DMO) of the Software Update Manager (SUM) offers the move of the primary application server instance (PAS) from the source system landscape to a target system landscape during the DMO procedure.
The SUM starts the system update and database migration procedure on the PAS of the source system and executes the first part of the procedure, including the export of the database content into files.
Next, the files and the SUM directory are transferred to the target system, and the remaining part of the SUM with DMO procedure happens there.

Figure 248: DMO with System Move

It allows switching the PAS host and migrating across data centers, because the classic dump-file approach is used for export/import instead of memory pipes.
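As a purely conceptual illustration of this difference (not of the R3LOAD implementation), the following Python sketch contrasts an in-memory pipe, which only connects processes on the same host, with a dump file that can be transported to another data center; the dump file name is made up.

# Conceptual illustration only, not the R3LOAD implementation.
import os

# Memory pipe: export and import must run on the same host.
read_end, write_end = os.pipe()
os.write(write_end, b"exported table rows ...")
os.close(write_end)
print(os.read(read_end, 1024))   # the "import" reads straight from memory
os.close(read_end)

# Dump file: can be written, transported (disk, rsync, WAN) and imported later.
with open("EXPORT_DUMP.001", "wb") as dump:   # hypothetical file name
    dump.write(b"exported table rows ...")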


DMO with System Move can be combined with "DMO without System Update".
The serial migration mode is intended for system move scenarios in which there is no fast network connection between the source and target data center. The DMO uptime preparations must be completed on the source. The downtime starts with the database export, which needs to be completed before the export dump is transferred to the target to start the import there. Besides the export dump, the SUM folder must be copied to continue DMO on the target.
The parallel migration mode is intended for system move scenarios in which a fast network connection is available between the source and target data center. The DMO uptime preparations must be completed on the source. The downtime starts by running the database export. While the data is being exported, a manual data transfer is started via rsync (an rsync script is provided). As soon as the first bucket (package) is completely transferred, the import can begin. Besides the export dump, the SUM folder must be copied to continue DMO on the target.
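SUM provides its own rsync script for this transfer; the following Python sketch only illustrates the idea of repeatedly synchronizing the export directory to the target while the export is still running. The directory paths, the rsync options, and the completion marker are assumptions, not the actual SUM script.

# Illustrative only: keep syncing the export dump to the target host while the
# export is still running, so the import can start on already transferred
# packages. Paths, options and the "export finished" marker are assumptions.
import pathlib
import subprocess
import time

SOURCE_DIR  = "/usr/sap/SUM/abap/load/"            # assumed dump location
TARGET      = "targethost:/usr/sap/SUM/abap/load/" # assumed target location
DONE_MARKER = pathlib.Path(SOURCE_DIR) / "EXPORT_FINISHED"  # hypothetical flag

def sync_once():
    # -a: preserve attributes, -z: compress, --partial: resume large files
    subprocess.run(["rsync", "-az", "--partial", SOURCE_DIR, TARGET], check=True)

while not DONE_MARKER.exists():
    sync_once()
    time.sleep(60)   # re-sync every minute while R3LOAD is still exporting
sync_once()          # final pass after the export has completed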
The operating system (OS) of the source primary application server (PAS) host can be any SAP-supported Unix-based operating system.
Target databases are SAP HANA and ASE only (as of June 2018, DMO of SUM 1.0 SP22). For updates and restrictions, please check the latest SAP Note for Database Migration Option (DMO) of SUM.

Figure 249: DMO with System Move: Serial Migration Mode

In the scenario of "DMO with System Move" using the serial migration mode, the target system must be prepared by installing an empty SAP HANA database and an ASCS and PAS NetWeaver system with the intended final system version. Afterwards, SUM is started for the update prepare phases and the shadow system is created. The downtime begins, and the tables of the shadow repository and all other tables are exported. The local administrator can now transfer the SUM directory and the exported data to the target system. The transfer usually happens via a transportable medium (for example, a USB disk) or a WAN network connection (in this case, unfortunately, not fast enough to support the parallel migration mode). On the target side, SUM is started from its transferred directory. It recognizes the scenario, drops the previously installed schema to generate a new target schema, imports the data, and completes the update activities.


Figure 250: DMO with System Move: Parallel Migration Mode

In the scenario of "DMO with System Move" using the parallel migration mode, the target system is prepared by installing an empty SAP HANA database and an ASCS and PAS NetWeaver system with the intended final system version. Afterwards, DMO of SUM is started for the update prepare phases and the shadow system is created. The SUM directory on the source system must be constantly synchronized with the target (for example, by using a script). Still during uptime, the SUM is started on the target system (phase: HOSTCHANGE_MOVE). It recognizes the scenario and drops the previously installed schema to generate a new, empty target schema. The shadow repository tables are exported, transferred to the target, and imported in parallel. Then the downtime begins, and the remaining tables of the source system are exported, transferred, and imported in parallel as well. SUM completes its update activities on the target to finish the "DMO with System Move". The data transfers are performed by the local administrator using the provided rsync script.

DMO Migration Benchmarking


The Benchmark Migration provides the possibility to simulate certain processes, or only parts of them, with the objective of estimating their speed.
A database migration consists of two processes:
● The export of data from the source system
● The import of data into the target system
The Software Update Manager offers benchmarking of these two processes with the migration tool mode Benchmark Migration. You can simulate the export and import processes, or the export process only, to estimate their speed. Ideally, the benchmarking runs on a sandbox with a recent copy of the production system.


Figure 251: DMO Benchmarking

When starting the benchmark tool, no DMO run must be active. If DMO was started before, it is required to reset the run, clean up, and stop all SAPup processes.
When exporting only a percentage of the database, take care which tables are exported, to avoid exporting only the largest ones.
The Benchmark does not start SMIGR_CREATE_DDL.
DMO declusters during import, so the "export only" benchmark does not include it. In case of declustering, a full export/import test is recommended.
If a Unicode conversion is involved, all preparation activities for Unicode must have been
completed before running the benchmark.
The number of parallel R3LOAD processes can be changed while running the benchmark. Increasing the number becomes active almost immediately; decreasing it will kill processes (which is not the case during a real export/import).
Different kinds of table splitting can be tested to find an optimum.
When running the export/import test during the uptime of the source system, it will only give worst-case runtime results, as not all resources on the source system can be used.
The "Export only" benchmark will not create any dump files as the R3LOAD "-discard"
parameter prevents this. The export benchmark results without import will probably be too
optimistic.
A parallel export/import is a better way to test the migration runtime.
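To make the extrapolation pitfall concrete, here is a small, purely illustrative calculation; all numbers are invented, and the naive scaling ignores declustering, Unicode conversion, and the import behavior.

# Purely illustrative extrapolation of a partial-export benchmark.
benchmark_fraction = 0.10   # 10 % of the database was exported in the benchmark
benchmark_minutes  = 45.0   # measured export runtime of that slice

naive_estimate = benchmark_minutes / benchmark_fraction
print(f"Naive full-export estimate: {naive_estimate:.0f} minutes")

# If the benchmark slice contained mostly the largest, well-splittable tables,
# or if the import was not simulated, treat this value as optimistic rather
# than as a downtime commitment.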

A further way to test and improve the export/import performance is the SUM/DMO option to select the "migration repetition option" in phase PREP_INPUT/MIG2NDDB_INI. It allows stopping after the downtime migration phase to perform a simple repetition and to optimize the procedure by tuning parameters (like the number of R3load processes).

In "DMO with System Move", after completing the downtime migration phase, the whole
downtime (export/import) can be repeated by utilizing the duration information in the
DUR.XML files of the previous run. The results of the repetition are improved DUR.XML files.
So, it makes sense to plan the repetition of the downtime migration phase instead of running
the Benchmarking.
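As a loose illustration of how the duration information from two runs could be compared, the following sketch reads package durations from two XML files; the actual DUR.XML file names, locations, and schema used by SUM are not reproduced here, so the attribute names and paths below are assumptions.

# Illustrative only: the real DUR.XML schema and file locations are not shown
# here; this sketch assumes elements carrying a package name and a duration.
import xml.etree.ElementTree as ET

def read_durations(path, name_attr="name", dur_attr="duration"):
    """Return {package: seconds} for every element that has both attributes."""
    durations = {}
    for elem in ET.parse(path).iter():
        if name_attr in elem.attrib and dur_attr in elem.attrib:
            durations[elem.attrib[name_attr]] = float(elem.attrib[dur_attr])
    return durations

previous = read_durations("previous_run/DUR.XML")   # assumed directory layout
repeated = read_durations("repeat_run/DUR.XML")
for pkg in sorted(previous, key=previous.get, reverse=True)[:10]:
    print(f"{pkg:20s} {previous[pkg]:8.0f}s -> {repeated.get(pkg, 0.0):8.0f}s")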


R3LOAD's Role in S/4HANA Conversions


In an S/4HANA conversion, the role of R3LOAD is to export the source system and to import the data into the target system if the source system is not already on SAP HANA. The R3LOAD step is controlled by DMO. The S/4HANA data conversion takes place afterwards. If the ERP system is already on SAP HANA, no R3LOAD is involved.

Figure 252: R3LOAD's Role in S/4HANA Conversions

Note:
Some SAP S/4HANA system conversion scenarios might not be available, or not yet available, if the source ERP system is already on SAP HANA. Please check the respective SAP S/4HANA system conversion notes!

LESSON SUMMARY
You should now be able to:
● Understand recent system copy improvements and changes
