
Migrating Large Enterprise (LE) Applications from UNIX to Linux with Minimal Downtime

A Dell Technical White Paper

Dell Oracle/Linux Services
By Mahesh Pakala
Global Infrastructure Consulting Services
July 2009
1 EXECUTIVE SUMMARY

2 THE NEED FOR DATA MIGRATION

3 UNIX TO LINUX MIGRATION CHALLENGES
  3.1.1 Cultural Training
  3.1.2 Technical Training
  3.1.3 Application Migration
    3.1.3.1 Web Applications
  3.1.4 Database Migration
    3.1.4.1 Export, Import or DATAPUMP
    3.1.4.2 Transportable Database and Tablespaces
    3.1.4.3 Oracle Streams
    3.1.4.4 Third party Tools
  3.1.5 Additional Migration related recommendations
    3.1.5.1 Performance Assessment and Testing
    3.1.5.2 Infrastructure: Functional and Technical
    3.1.5.3 Migrating To a New Release: Advantage of New Features

4 THIRD PARTY SOFTWARE FOR MIGRATING MISSION CRITICAL APPLICATIONS
  4.1.1 GoldenGate Software
    4.1.1.1 Implementation Process Using GoldenGate TDM
  4.1.2 Quest SharePlex
    4.1.2.1 Capture
    4.1.2.2 Transport
    4.1.2.3 Posting

5 SUMMARY

6 APPENDIX A: SAMPLE DATABASE & APPLICATION MIGRATION QUESTIONNAIRE

7 REFERENCES

8 AUTHORS

1 Executive Summary
This paper explores different migration strategies, including third-party software products such as GoldenGate's (GG) transactional data management (TDM) software and Quest's SharePlex, for use as transactional data replication solutions, and assesses their suitability for improving availability during dynamic 9i, 10g, and 11g database migrations from any combination of Solaris, HP-UX, AIX, and Windows platforms to x86 Dell Linux platforms. This technical reference helps readers understand the general mechanics of the platform migration process, how to apply these solutions to database migration, and how to determine the metrics for measuring a product's capabilities and scalability under a transaction load. This paper is not intended to be a full assessment of the GoldenGate or SharePlex products or a comparison to any other similar product.

Many customers cannot tolerate the length of downtime (an offline database that is unavailable for application users and transactions) required for a totally static (offline) database migration. This paper addresses the essential need to capture real-time transactional data on a production Solaris/HP-UX/AIX/Windows platform and replicate it to new Dell Linux servers with near-zero interruption of service. The knowledge gained from this paper will assist in formulating strategies and methodologies to achieve near-zero downtime dynamic database migrations to Oracle10g or Oracle11g.

This paper explains how to use various Oracle tools, as well as third-party tools such as GoldenGate's TDM or Quest's SharePlex migration solution, for migrating to Oracle Database 10g Release 2 and Oracle Database 11g on a new Linux platform. It complements the other Maximum Availability Architecture (MAA) best practice papers that can be found on the Oracle Technology Network.

2 The Need for Data Migration


Whether aiming for permanent change to a new environment, or the operational convenience of
“remote” processing, the ability to move data between computing platforms of different types can
significantly increase the flexibility of enterprise information technology operations.

The complexity of moving data between disparate platforms has been a barrier to exploiting data
assets in this way. Different platforms have incompatible volume (virtual storage device) metadata
formats, file system metadata formats and data formats in application files.

To transfer data between platforms, IT departments have had to choose between network (FTP) copies, and copying data to tape on the source platform and restoring the tape on the target platform.

Both options consume significant time and resources. As a result, many IT departments continue
to run applications on less-than-optimal platforms as migration to more suitable environments is
believed to be too resource-intensive. Others forego the business benefits of off-host backup, data
mining, and testing with live data because they believe that copying large data sets will result in
unacceptable application downtime.
In effect, each data set becomes captive to the server platform that processes it. A great deal of enterprise data is stored in relational databases; therefore, the ability to move databases between disparate platforms would be especially beneficial. But consider, for example, the case of copying an Oracle database on a Solaris platform to a Linux platform. The Linux platform cannot interpret Solaris volume metadata. The two platforms' file system formats are also incompatible. Finally, the platforms' endian formats (the way in which multi-byte data items are interpreted) differ, so Oracle instances on the two platforms cannot interpret each other's data formats.

To migrate a database from a Solaris host to a Linux one, an administrator would have to:
• Stop application processing on the Solaris platform and shut down the database (so that a
business-consistent database image is available).
• Export data from the Oracle database into disk or tape files.
• Copy the exported data from Solaris to Linux, either by FTP or by tape exchange.
• Create an empty “target” database on the Linux platform.
• Import the exported data into the receiver database on the Linux platform.
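
For illustration, a minimal sketch of this offline approach using the legacy exp/imp utilities (host names, passwords, and file names below are hypothetical):

# On the Solaris source, after stopping the application:
exp system/<password> FULL=y CONSISTENT=y FILE=full_db.dmp LOG=exp_full.log

# Copy the dump file to the Linux target (FTP or tape exchange):
ftp linuxhost        # put full_db.dmp

# On the Linux target, after creating the empty database:
imp system/<password> FULL=y IGNORE=y FILE=full_db.dmp LOG=imp_full.log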
Creating a business-consistent (unchanging) copy of a large database can mean hours of
application idle time. It’s not surprising that IT departments are reluctant to adopt operational
procedures that include moving databases between disparate platforms, despite potential business
benefits.

3 UNIX to Linux Migration Challenges


The migration to Linux is facilitated by proper planning and an assessment period. During this
assessment the business requirements and technical factors impacting the migration are
determined. This assessment should include business requirements such as availability and
performance, as well as technical requirements such as the interoperability of components in the
software stack. Some key areas that need proper planning and consideration that can help simplify
the process are listed below.

The migration process involves several challenges which need to be addressed effectively; a few of these are:
• Cultural Training.
• Technical Training.
• Application Migration.
• Database Migration.
• Infrastructure Requirements.
o High Availability.
o Disaster recovery.
o Backup & Recovery.
o Monitoring.

3.1.1 Cultural Training


Some of the most significant challenges involved in migrating to Linux are cultural, rather than technical. The proliferation of Linux-based commodity servers from Dell, relative to large RISC-based UNIX systems, requires a change in approach with respect to the way systems are provisioned and managed. A standardized, automated approach to building, managing, and provisioning server hardware, using tools such as Red Hat Satellite Server or Oracle Grid bare-metal provisioning, will assist in this area, as opposed to managing each system in a unique manner. In addition, the management tools and processes by which changes to Linux systems are managed differ from those of other UNIX systems.

Fortunately, Linux is similar enough to the other flavors of UNIX to allow for experienced System
Administrators to make the move to Linux with minimal disruption. There are numerous sources of
training (including Dell) available to assist with the transition.

3.1.2 Technical Training


For Unix System Administrators new to Linux, training is available from Dell and Oracle.
Additionally, Oracle provides training for Oracle DBA’s and developers that highlight the features of
the Linux operating system.
There is also a wealth of Oracle-on-Linux information on the Oracle Technology Network website (http://otn.oracle.com/linux) and at http://www.dell.com/oracle. These sites also include downloads of the Oracle Database, Application Server, Dell validated configurations, Dell validated solutions and other products, white papers, and sample code to help get your migration and development projects started.

3.1.3 Application Migration

There are a few products available in the market to help migrate both your database and application to the Oracle platform. These products are the Oracle Migration Workbench (Migration Workbench), the Application Migration Assistant, and the Oracle Platform Migration Utility.

Migration Workbench
The Migration Workbench is an Oracle tool that simplifies the process of migrating third-party database systems to the Oracle platform (Oracle10g and Oracle11g Database). The Migration Workbench migrates your entire Microsoft SQL Server database schema, including triggers and stored procedures, in an integrated environment.

Application Migration Assistant


The Application Migration Assistant is a tool that helps you migrate an application. You use the
Application Migration Assistant to identify statements that require modification to run on the Oracle
platform. The Application Migration Assistant contains a search rules file, Oracle Search Rules for
SQL Migrations, which targets SQL statements that you may need to change.
Application migration is necessary due to incompatibilities between Microsoft SQL Server and Oracle11g Database. For example, the application may use SQL statements that are not SQL-92 (ISO/IEC 9075) compliant, or it may use a feature that is not supported by Oracle10g Database. In addition to database incompatibilities, differences between implementations of database APIs also cause problems. For example, the application may use a JDBC feature specific to a JDBC driver for Microsoft SQL Server.

Oracle Platform Migration – Oracle Applications


Migration of Oracle Applications has been greatly simplified with the release of the Oracle Platform Migration Utility. This utility allows you to migrate your Oracle Applications Suite applications tier to Linux in less than one day. It uses your existing applications source code files and preserves many of your customizations, minimizing the impact of migration. The performance of the migrated Linux applications tier can be benchmarked by comparing it to a UNIX RISC-based mid-tier server running the same Oracle Applications database.

As an example, the migration of an Oracle E-Business Suite database tier to Linux can be achieved using the process discussed below in the database migration section. If the endian formats are the same, additional information can be found in the following Oracle MetaLink documents:

• E-Business Suite 11i - MetaLink Note 729309.1 (Transportable Databases)


• E-Business Suite R12 - MetaLink Note 734763.1 (Transportable Databases)
3.1.3.1 Web Applications
The steps and factors impacting the migration of server based web applications to Linux depend
on the nature of the web application technology. The Oracle Application Server provides a
complete solution for deploying web-based applications on Linux.

Java/JSP/Servlet Applications
Migration of Java-based server applications to Linux is a straightforward process. The JDK required to run Java on Linux is readily available, and will need to be downloaded and installed on the target Linux application servers. The platform-independent nature of compiled Java class files already allows them to be compiled on one platform and deployed to another, so the migration of Java application code across platforms will not generally have a large impact.
In situations where Java-based web applications are moving from non-Oracle systems to Oracle on Linux, connectivity to the Oracle database needs to be considered. Oracle supplies a set of JDBC drivers and documentation for the development of Java-based Oracle applications.

IIS/ASP Applications
ASP applications can be migrated to Java with the Oracle Migration Kit for ASP. This free utility migrates proprietary ASP applications to industry-standard Java code that is compatible with the Oracle Application Server and can run on any platform with JDK support, such as Linux.

Having the right tool to assist with testing, tuning, and optimizing code early in development is essential to delivering high-quality, high-performing Java application code to production. Quest's Java performance tuning solution, JProbe, will not only help development stay on schedule, remain on budget, and satisfy all project requirements, but will also promote best practices for code quality.

3.1.4 Database Migration


Efficient and reliable methods for database maintenance, such as migrating data to a new
platform, have existed as a part of Oracle through various software releases. However, as
maintenance windows continue to shrink and database size continues to grow, the importance
placed on the time required to migrate a database to a more reliable or cost-effective platform has
grown considerably.

By default, features and functionality provided by the Oracle database are platform generic, so there is no issue or loss of functionality when migrating platforms. Data can be migrated by using any of the following options:

• Export/Import with Data Pump
• Transportable Database / Tablespaces
• Oracle Streams: the source must be 10.2.0.4 or above
• Third-party software (GoldenGate or Quest's SharePlex): the source can be any version of Oracle or any RDBMS (DB2, SQL Server, etc.)

3.1.4.1 Export, Import or DATAPUMP


The supported way to accomplish cross-platform migration prior to Oracle Database 10g Release 2 was to export the data from the database on the old platform, create a new database on the new platform, and import the data into the new database. For a large database this process could take a number of days. It is important to take into consideration the time that will be required to migrate the data, as this can impact the total time required for production conversion. Where a very large database is involved, it may be beneficial to create a number of smaller export files from the original database, by schema owner or even object, and then perform multiple imports in parallel into the target database.

The simplest and recommended method to migrate a database to a platform with a different endian format is to use Oracle Data Pump full database export and import. If the time it takes Data Pump to migrate the database to the new platform does not fit within the defined maintenance window even after following the Data Pump performance guidelines in Oracle Database Utilities, consider using the XTTS method or third-party tools such as GoldenGate/SharePlex described here.
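
A minimal sketch of such a full Data Pump export and import (the directory object, file names, and PARALLEL degree are hypothetical and should be tuned per the guidelines above):

-- On the source, as a DBA user:
SQL> CREATE DIRECTORY dp_dir AS '/u01/export';

expdp system/<password> FULL=y DIRECTORY=dp_dir DUMPFILE=full%U.dmp \
      LOGFILE=expdp_full.log PARALLEL=4

-- Transfer the dump files to the target server, then on the target:
impdp system/<password> FULL=y DIRECTORY=dp_dir DUMPFILE=full%U.dmp \
      LOGFILE=impdp_full.log PARALLEL=4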

3.1.4.2 Transportable Database and Tablespaces

Starting with Oracle Database 10g Release 2, you can use Transportable Database. Note that if the endian format differs between the source and target platforms, you cannot use Transportable Database; instead, use a different method, such as cross-platform Transportable Tablespaces (XTTS). If migrating to a platform with the same endian format as the source (e.g., moving from one little-endian platform to another little-endian platform), the recommended migration method is Transportable Database (TDB), which can simplify the platform migration process.

A few of the key benefits of using XTTS are:


• Reduced overall complexity of the process because XTTS is a high-level copy of data
• Reduced errors since XTTS moves objects as a unit unlike table-by-table methods that
could miss objects or rows of data
• No need to create or rebuild indexes
• Movement of tablespaces between heterogeneous OS platforms
• Endian conversion, if needed, is only 3% slower than a standard OS file copy
• Statistics do not have to be recollected for tables and indexes that are moved

To check whether a tablespace is self-contained and can be transported, run the DBMS_TTS.TRANSPORT_SET_CHECK procedure with the tablespace name(s) as SYSDBA:

sqlplus> execute dbms_tts.transport_set_check('LAVA', TRUE);

This populates a table called transport_set_violations, owned by the user SYS. To query the table, either precede the table name with 'sys.' or create a synonym. Check it for any possible violations:

sqlplus> select * from transport_set_violations;
VIOLATIONS
--------------------------------------------------------------------------------
Sys owned object CATEGORIES in tablespace LAVA not allowed in pluggable set
1 row selected.

The contents of table transport_set_violations are only retained for the session. Connecting as
another user or disconnecting from the SQL*Plus session loses the contents of
transport_set_violations.

Remove all violations. Some of the violations may include materialized views. These should
be dropped and recreated manually after the tablespaces are transported.
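
Once all violations are removed, the transport itself is a short sequence: make the tablespace read-only, export its metadata with Data Pump, copy (and, if the endian formats differ, convert) the datafiles, and plug the tablespace into the target. A minimal sketch using the LAVA tablespace from the example above (the directory object and paths are hypothetical):

-- On the source:
SQL> ALTER TABLESPACE lava READ ONLY;

expdp system/<password> DIRECTORY=dp_dir DUMPFILE=lava_tts.dmp \
      TRANSPORT_TABLESPACES=lava TRANSPORT_FULL_CHECK=y

-- Copy lava_tts.dmp and the LAVA datafiles to the target
-- (use RMAN CONVERT, shown below, if the endian formats differ), then:
impdp system/<password> DIRECTORY=dp_dir DUMPFILE=lava_tts.dmp \
      TRANSPORT_DATAFILES='/u02/oradata/lava01.dbf'

SQL> ALTER TABLESPACE lava READ WRITE;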

3.1.4.2.1 RMAN Technique

RMAN is used in combination with transportable tablespaces when migrating to a target server with a different endian format.

If converting the datafiles at the source and then transferring the files via FTP to the target, the Oracle datafiles must be converted using RMAN prior to transporting them to the new target Dell platform:

<SOURCE=SOLARIS> rman target=/

Recovery Manager: Release 10.2.0.4.0 - 64bit

connected to target database: V1024 (DBID=328790991)

RMAN> CONVERT TABLESPACE 'REPOSIT'
      TO PLATFORM 'Linux 64-bit'
      PARALLELISM 4
      DB_FILE_NAME_CONVERT '/database/db101/V101/datafile/reposit01.dbf', '/tmp/stage/';

Starting backup at 24-FEB-09


using target database controlfile instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=8 devtype=DISK
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00006 name=/database/db101/V101/datafile/reposit01.dbf
converted datafile=/tmp/stage/reposit01.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:02
Finished backup at 24-FEB-09

The converted datafile is staged in /tmp/stage directory until it is copied to the target DELL server.

If converting using RMAN at the target over NFS-mounted drives, then:

run {
  allocate channel d1 device type disk;
  allocate channel d2 device type disk;
  allocate channel d3 device type disk;
  CONVERT DATAFILE
    '/database/db101/V101/datafile'
    FROM PLATFORM 'HP Tru64 UNIX'
    PARALLELISM 3
    DB_FILE_NAME_CONVERT '/database/db101/V101/datafile', '+DATA_DG';
}

The biggest advantage of a design based on NFS is that it eliminates the need for temporary space on the target side equal to the amount of data being moved. That is, if an FTP approach is taken, terabytes of data would have to be moved via FTP to an equally large holding area on the target, and the data would then have to be endian-converted into its permanent location. RMAN cannot do endian conversion "in place." Most environments simply do not have spare terabytes ready to use as temporary space, so the FTP-to-temporary-space approach is often not feasible. The other advantage is that it makes the process a single step: if a mechanism such as FTP were used, moving the files would be one step and the conversion would be a second.

For additional details, see the MAA best practice white paper Oracle Database 10g Release 2 Best
Practices: Platform Migration using Transportable Database.

3.1.4.3 Oracle Streams

Oracle Streams enables the propagation and management of data, transactions, and events in a data stream, either within a database or from one database to another. The source and target databases can be on different operating systems, but to enable this type of data migration the Oracle version must be 10.2.0.4 or later.

Oracle Streams consists of three components: capture, propagation, and apply. Each of these components is made up of several operating system processes, as shown in the figure below.

Ensure that all Streams databases are running Oracle Database 10g Release 2 (release 10.2.0.4 is
recommended) and apply any required critical patch sets.
Figure: Oracle Streams architecture. At the source (capture can run at the source, a downstream database, or the target), reader, preparer, and builder processes turn redo data into logical change records (LCRs) in the Streams pool; propagation ships the LCRs to the target, where reader, coordinator, and apply server processes group committed transactions and apply them in dependency order, with conflict detection, error handling, and optional custom code.

This approach to database migration using Streams is very complex: it has extensive environment setup challenges and a steep learning curve to understand replication/AQ/Streams functionality, and most importantly it requires extensive manual intervention.
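
For completeness, a heavily hedged sketch of how a whole-database Streams configuration might be generated with the DBMS_STREAMS_ADM.MAINTAIN_GLOBAL procedure available in 10g Release 2, run as the Streams administrator with a database link in place to the destination (the directory objects, database names, and dump file name are hypothetical; consult the Streams documentation for the full parameter list and prerequisites):

SQL> BEGIN
       dbms_streams_adm.maintain_global(
         source_directory_object      => 'SRC_DIR',
         destination_directory_object => 'DEST_DIR',
         source_database              => 'SOLARISDB',
         destination_database         => 'LINUXDB',
         dump_file_name               => 'streams_inst.dmp',
         include_ddl                  => TRUE,
         instantiation                => dbms_streams_adm.instantiation_full);
     END;
     /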

If database migration using transportable tablespaces cannot meet the required RTO/RPO, it is advisable to evaluate third-party software products.

For more information refer to the Oracle Database Administrator's Guide and the Oracle MAA site.

3.1.4.4 Third party Tools

Third-party tools such as GoldenGate and Quest's SharePlex software can provide faster and easier Oracle data migration with minimal downtime. The key advantage of these tools is that the source need not be an Oracle database and can run on any operating system. This method is explained at a high level here and in depth in Section 4.

GoldenGate’s software has modular components which move transactional data between
heterogeneous databases in sub-seconds. The GoldenGate Capture process reads transaction logs
to identify changed data and only committed transactions are moved. GoldenGate Trail Files store
queued data in a platform-independent format, and the GoldenGate Delivery process applies the
transactions to the target database using native SQL commands. In addition to moving changed
data, GoldenGate can be installed to perform full initial loads to instantiate a database. The
software can also be set up for bi-directional data movement as well.
GoldenGate Manager (not shown in the diagram) provides a command line interface with which to
perform a variety of administrative, housekeeping, and reporting activities, such as the
establishment of parameters to configure and fine-tune GoldenGate processes as well as starting,
stopping, and monitoring GoldenGate Capture and GoldenGate delivery modules.

Figure 1
For a near zero-downtime migration of Oracle databases, the process with GoldenGate is as
follows:
• The GoldenGate Capture process is initiated to capture data changes on the production system (or on a system used for remote capture purposes). These captured changes (stored in Trail Files) are buffered until the target environment is instantiated.
• The database is then exported from production using the consistent=y flag and then imported into the target migration server/database. Note that consistent=y is not needed if keys exist on the tables.
• Once the instantiation occurs at the target, GoldenGate delivery applies buffered changes
from Trail Files to the target LINUX environment.
• GoldenGate configures failback replication prior to user cutover. This provides customers
with a contingency plan against unforeseen problems in the new environment.

The above approach offers the following benefits:

• Zero/minimal disruption to the business application during platform migration.

• Elimination of planned outage, usually associated with database/platform migration.

• Improved methods for data validation.

• A failback option for repeatability and to minimize business risk.

The other solution is Quest’s SharePlex which is database replication solution that supports high
availability, reporting, data movement and application integration on Oracle databases. With
SharePlex, you can similarly:
• Ensure high availability and disaster recovery
• Eliminate risks associated with migrations
• Improve the performance of OLTP systems
• Optimize business intelligence applications
SharePlex employs a streaming process outside of the database instance. This ensures a very small
footprint and minimal impact to database performance and network capacity.
SharePlex also works across multiple operating environments and different versions of Oracle. It
offers 24x7 unattended monitoring of your enterprise environment for dramatically reduced
downtime.

Figure: Different data movement options for migration, DR, HA, and reporting.

3.1.5 Additional Migration related recommendations

Migration of a database platform from UNIX to Linux requires the consideration of a few additional
aspects such as:
• Performance Assessment and Testing
• Infrastructure: Functional & Technical Aspects
• Migrating to a new release: Taking advantage of new features
• Tuning on Linux
• Real Application Clusters

3.1.5.1 Performance Assessment and Testing

In order to assess and test the performance of a database system after migration to Linux, it is important to understand and quantify workload and performance characteristics on the existing (source) platform. If these factors are not well understood and quantifiable, it is difficult to size hardware and manage the migrated system on Linux. With a realistic understanding of what is to be evaluated, an analysis of the performance of the Linux-based solution can be undertaken.

A variety of commercial applications, in addition to Oracle's Real Application Testing (RAT), are available to generate a database workload and record throughput for concurrency tests.

We can use Benchmark Factory (BF), a tool from Quest Software that loads an Oracle database with either industry-standard benchmarks or real transactions drawn from our applications or Oracle trace files. BF allows the simulation of load from multiple end users and can ramp up the volume of active users in distinct steps. This helps us tune the system after migration is complete and understand production load profiles. In addition, Oracle tools such as AWR and ASH reports can be used to observe various internal Oracle metrics (e.g., wait events) and external metrics (CPU, RAM, disk, and network).
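
For example, AWR snapshots can be taken manually around each load test and a report generated between the bracketing snapshots (standard Oracle-supplied package and script, assuming a default installation):

SQL> EXECUTE dbms_workload_repository.create_snapshot;
-- ... run the Benchmark Factory / RAT workload ...
SQL> EXECUTE dbms_workload_repository.create_snapshot;
-- Generate the AWR report between the two snapshots:
SQL> @?/rdbms/admin/awrrpt.sql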

3.1.5.2 Infrastructure: Functional and Technical

It is important to assess the infrastructure requirements of the current UNIX platform prior to
migrating to Linux by covering functional areas such as:
• High Availability
• Disaster Recovery
• Backup and Recovery
• Monitoring

Tools and applications used to accomplish these tasks need to be evaluated on the current UNIX platform. A pathway should then be determined whereby equivalent functionality can be delivered on Linux. In some cases, the existing tools supplied by ISVs (e.g., Oracle Applications, WebLogic, Siebel) on RISC-based UNIX hardware will be available on Linux and suitable for use with minimal disruption. In other cases, the move to Linux will require a different approach to achieving business requirements, such as high availability.

3.1.5.2.1 High Availability


It is critical to have a clear understanding of the business requirements for the availability of the systems to be ported to Linux when considering implementing a high availability (HA) solution. In a large UNIX environment, the business requirement for HA may be met by implementing proprietary, hardware-vendor-specific methods. In addition, these solutions are typically Active/Passive systems, requiring another large, expensive UNIX-based server to be available (and idle) in the event of a server outage.

A platform migration to Linux requires a different model to ensure high availability. Fortunately,
Oracle products such as Oracle Database with Real Application Clusters, and Oracle Application
Server provide the functionality to failover connections, and route requests to available machines.
This functionality can allow the deployment of clusters of smaller Dell servers running Linux. This
can provide insulation against an outage should any one node fail. Hardware infrastructure is more
fully utilized in this model, as the requirement for an entire monolithic “hot standby” system is
removed.
3.1.5.2.2 Backup and Recovery
Prior to migrating a database environment, a thorough assessment of the business requirements (legal and operational), Recovery Point Objectives (RPO), and Recovery Time Objectives (RTO) should be performed. The assessment should also include the technologies to be used (native tools, third-party tools, etc.) and determine the components requiring protection (database, application software, custom code, etc.). For backups of the Oracle database, RMAN is platform independent and most third-party media managers support its API.

3.1.5.2.3 Disaster Recovery


Although sometimes confused with High Availability, Disaster Recovery is the ability to recover not from the loss of a single machine, but from a catastrophic failure that impacts most or all of a physical infrastructure and the associated data. This functionality can be achieved by using Oracle's Data Guard or third-party tools from vendors such as GoldenGate and Quest, utilizing Dell Linux-based servers.

3.1.5.2.4 Monitoring
If using Oracle Enterprise Manager for management and performance monitoring of Oracle products, there is no impact from the migration to Linux, as the monitoring tool has the same functionality and is certified for use on Linux out of the box. If third-party tools are being used for systems monitoring and management, their availability on Linux will need to be determined. There are also minor differences in the standard diagnostic utilities on Linux, relative to those on other RISC-based UNIX platforms.

Dell servers include OpenManage™, Dell's lifecycle systems management software, which provides seamless management of Dell servers, storage systems, network switches, desktops and notebooks. OpenManage enables enterprises to deploy, maintain and monitor hardware using an open framework that allows easy integration with enterprise management software.

The figure below illustrates the process used to manage Dell servers by using a combination of
OEM and OpenManage. At every layer, this combination can provide monitoring for CPU, memory,
disk and table utilization in addition to many other environment metrics of the running system.
Figure 2: OEM and OpenManage patch management workflow: (1) the reference catalog is retrieved from the Dell FTP site; (2) all systems requiring patches are identified and reports are generated; (3) patches and bundles are retrieved from the Dell FTP site and staged in the repository; (4) patches and bundles are deployed to the affected systems.
3.1.5.3 Migrating To a New Release: Advantage of New Features
As a database platform migration project creates change to the existing infrastructure and some level of testing activity, those migrating often take the opportunity to upgrade to the most recent Oracle Database as part of the migration project. Combining the platform migration with a database upgrade can save considerable time and testing effort versus separate efforts in serial, as regression and performance testing need only be performed once. In addition, the impact to database clients is minimized as there is only one production cutover, rather than two for a migration followed by an upgrade.

There may also be new database features that can easily be taken advantage of within the scope of the migration to provide immediate benefits for users and administrators. For example, migrating from an earlier Oracle release to Oracle11g Release 2 would allow you to take advantage of improvements in the areas of Advanced Compression, additional partitioning options, Real Application Testing (DB Replay), simplified snapshot standby, performance features (query caching, Adaptive Cursor Sharing, SQL plan baselines, plan management), ADDM for RAC, enhancements to RAC Cache Fusion, Active Data Guard, Automatic Workload Management, and others with little effort.

4 Third Party Software for Migrating Mission Critical Applications

4.1.1 GoldenGate Software


GoldenGate’s transactional data management (TDM) software product provides real-time capture,
routing, transformation (if desired), and delivery of database transactions across multiple
heterogeneous platforms, operating systems, and databases. Although GoldenGate has many
features applicable to high availability, real-time data integration, and distributed data processing
needs of large-scale enterprises, the focus of this evaluation is to determine the suitability of
GoldenGate for the purposes of dynamic (near-zero downtime) database migration. In addition to
moving data to the new environment, GoldenGate can be configured to allow for fail-back
synchronization. With this configuration, the application operates against the newly migrated
environment, while transactions are continually shipped back to the legacy environment so that a
fail-back scenario can be deployed should any issues or instability arise on the newly migrated
platform.

Figure 3 below illustrates three aspects of the logical architecture relevant to our dynamic data
migration methodology. The first data flow path exemplifies the process of “initial load”
instantiation of the source data into the target database. GoldenGate may be used for this initial
load; alternatively, Oracle export/import utilities or other methods of bulk data transfer can be
substituted as they may provide higher data transfer rates. The essential requirement is to
instantiate target database tables to a consistent point-in-time from which in-flight and
subsequently new transactions continue to be captured and applied by GoldenGate. The second
data path shown in this figure illustrates the transactional data management architecture used for
capturing and delivering in-flight transactions, fundamental to achieving near-zero-downtime
database migrations. Finally, the fail-back data flow path depicts keeping the new target database and the old legacy source database in sync as a contingency. GoldenGate also provides capabilities to verify data consistency between active Oracle databases, as shown at the bottom of the diagram.

Figure 3: Database Migration Solution using GoldenGate

GoldenGate supports log-based extraction (from Oracle redo logs and if necessary, archive logs).
For certain objects it offers fetch functionality to directly retrieve changes from the tables. For the
purposes of this evaluation, log-based extraction was used. Log-based extraction enables it to
obtain specific point-in-time transactions from the Oracle system change number (SCN).

The GoldenGate Capture component reads committed data transactions and writes these to
GoldenGate Trail Files, which are stored on disk in a proprietary universal format, allowing the
ability to accumulate continuous activity for sub-second delivery and the application of this activity
to the target database. The universal format of Trail Files enables transactional data management
across various operating systems and different database vendors.

Original source transaction ordering is preserved on the target as the transactions are captured
from the redo logs in SCN order. Only committed transactions are captured and sent to the target
database, reducing network transfers and eliminating the need for the apply process to rollback
uncommitted transactions. GoldenGate checkpointing helps to ensure that in the event of a failure
during migration, the migration process can be automatically restarted from the failure point. The
GoldenGate Capture component can access Oracle’s archive logs to maintain point-in-time data
accuracy and handle any process failures. Additionally, GoldenGate is equipped to automatically handle collisions of duplicate or missing records when applying changes on the target, as may occur during the window of time in which ongoing transactions are extracted and held for later replication while bulk data transfer is used to populate the target database.

Design Considerations for Performance


GoldenGate TDM features a scalable architecture in which both the capture and delivery operations
can be parallelized and customized to fully utilize system resources, maximizing throughput and
performance. Although not shown in Figure 3, multiple capture and/or delivery processes can be
configured on both the source and target servers to minimize overall latency. GoldenGate
performance can be further enhanced via its ability to perform smart transaction grouping and the
application of data in arrays in a single database operation. If network capacity is insufficient for
the volume of data being sent to the target, introducing Data Pump processes helps with routing
larger volumes of changed data. Data Pumps continuously push (“pump”) the Trail Files from the
source system to target system(s). An alternate option may be to compress the Trail Files for
transmission.

Phased Migration with Bi-Directional Data Movement


As mentioned earlier, GoldenGate can be set up for bi-directional data movement to keep two
active databases synchronized. This capability allows users to work with two or more database
instances at the same time and allows for a phased migration approach. Businesses can choose to
run the old and new environment until a decision is made to retire the legacy system. For
organizations that have major risks associated with moving their systems, this type of set-up can
minimize these risks. This method is typically used in application upgrades (which may also include database migrations) and allows users to switch between old and new application environments at
their convenience. Home Shopping Network, a GoldenGate customer, used this type of
implementation to run Siebel 6 and Siebel 8 environments simultaneously for several months,
allowing for a phased user migration to the Siebel 8 instance.
(http://www.goldengate.com/download-cs/GGCS_HSN_web.pdf)

4.1.1.1 Implementation Process Using GoldenGate TDM

This section describes a simple, unidirectional GoldenGate setup. It does not include advanced features such as parallelism, bi-directional replication, and failback options.

High Level Oracle Migration Implementation Process:

Preparations:
1: Verify Initial Load Method
• There are a number of options for the initial load method using GoldenGate and Oracle’s
database utilities. We recommend using Oracle Export/Import utilities to instantiate the
target database.

Verify Storage Requirements


• GoldenGate creates Trail Files from the activity produced on the source Oracle 9.2
system. All changed activity is stored in the GoldenGate Trail Files.
• The storage requirements depend on the amount of activity the Source Oracle 9.2
system produces and precisely what type of GoldenGate architecture will be put in
place.
• GoldenGate has self-managing parameters that can delete these Trail Files once they
are consumed and/or when a time frame has been met. Trail Files are not deleted until
the parameter criteria have been met.

2: Create GoldenGate Install Directories on source and target servers

3: Create GoldenGate Users on source and target Oracle instances

Assigning a GoldenGate user:


The following GoldenGate processes require database logins:
● Capture: to log into the source Oracle database.
● Delivery: to log into the target Oracle database.
● Appropriate GoldenGate utilities.

We recommend creating a database user that is dedicated to the GoldenGate processes. It can
be the same user for all Capture and Delivery processes.

4: Identify source Oracle 9.2 system Name and IP address.


5: Identify target Oracle 10g R2 system Name and IP address.
6: Verify that there are no firewall issues between the database instances.
7: Verify that there are no blocked system ports that will be used by the GoldenGate Manager and instances.

Phase 1 - Pilot Implementation


Create empty target Oracle 10g R2 database.
Install GoldenGate onto source and target servers
Configure and Start Migration Data Movement
Enable Supplemental Logging on the source Oracle Database
Create GoldenGate SubDirectories on source & target servers
Start GoldenGate Manager processes on source & target servers
Start to queue data by starting GoldenGate Capture (see the GGSCI sketch after this list).
Perform initial load from the source Oracle system to the target Oracle system.
Validate replication
Validate data
Configure and start bi-directional replication (for failback option)
Execute test plan
Document pilot
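
A minimal, hedged GGSCI sketch of the unidirectional pilot configuration above (the process names, trail path, schema/table, and user are hypothetical, and the Extract/Replicat parameter files are omitted for brevity):

-- Source: enable supplemental logging, then configure capture
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

GGSCI> START MANAGER
GGSCI> DBLOGIN USERID ggsuser, PASSWORD <password>
GGSCI> ADD TRANDATA scott.emp
GGSCI> ADD EXTRACT ext1, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL /ggs/dirdat/aa, EXTRACT ext1
GGSCI> START EXTRACT ext1

-- Target: after the initial load has instantiated the database
GGSCI> START MANAGER
GGSCI> ADD REPLICAT rep1, EXTTRAIL /ggs/dirdat/aa
GGSCI> START REPLICAT rep1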

Phase 3 - Live Production Implementation


Create empty target Oracle 10g R2 database
Install GoldenGate onto source & target servers

Configure and Start Migration Data Movement


Enable Supplemental Logging on source Oracle Database
Enable Supplemental Logging on source Oracle tables
Create GoldenGate SubDirectories on source & target servers
Start GoldenGate Manager processes on source & target servers
Start GoldenGate Capture on the source Oracle system
Perform initial load to target Oracle based on agreed method
Start GoldenGate Delivery on the target server
Validate replication
Validate data
Configure and start bi-directional replication
Execute test plan
Transition service

Phase 4 - Project Closure


Decommission software
Documentation

4.1.2 Quest SharePlex

The other reliable option for data migration is to use Quest's SharePlex solution. SharePlex can capture a modification to selected objects immediately, as soon as it is written to the Oracle log and even before the transaction is committed. SharePlex fully complies with Oracle's read consistency model, so that target instances are accurate representations of the source database.

Figure: SharePlex architecture. On the source (Oracle9i), the Capture and Reader processes read the online redo logs and archive logs into queues; Export processes ship the changes over the Quest network transport layer; on the target (Oracle11g), Import and Post processes apply the SQL to the database.

4.1.2.1 Capture

The SharePlex capture process gathers changes from the production database. During capture:
• The capture process reads from the Oracle redo logs
• The network transports only changes to replicated objects (not the rest of the database housekeeping information contained in the redo logs)
• Database resources are not required to capture and move the data. A small repository
does reside in Oracle to record information such as when replication started/stopped, etc.
• The capture process within SharePlex can read the online redo logs, go back through archive logs, and even prompt for archive logs which have been moved offline to secondary storage. It is this capability that adds to the fault tolerance: for example, if the capture process is terminated for some reason, it can resume from earlier logs.

The capture process resides on the source system, reading the online redo logs automatically generated by Oracle. These reads of the redo logs are performed as file system reads, not through the database. By using the redo logs as its source of change information, Quest is able to replicate the changes to the production database without incurring additional overhead on that instance. Oracle uses the redo logs for database recoverability; as a result, the redo logs are a reliable source of change information whose format is fairly stable. Quest has reverse-engineered the format of the redo logs starting with the Oracle 6 release (that version is now retired, but SharePlex supports replication of subsequent Oracle versions starting with the 7.3.4 release), and SharePlex is now able to handle the format of Oracle 9i, 10g, and 11g redo logs.

The capture process monitors the redo logs for changes continually. When a record appears in the
redo logs, SharePlex determines if it is for an object to be replicated. If it is, SharePlex adds
addressing information such as to which hosts the change should be sent and then puts the
change information into its queues. The queues reside outside the database. The data is
immediately processed and transported to the target systems without waiting for a commit or a
rollback, which would introduce unnecessary latency. (When a commit or rollback appears in the
redo logs, it is subsequently sent, and the transaction is appropriately completed on the target
systems.)

4.1.2.2 Transport

SharePlex uses its own network protocol combined with TCP/IP to transport data between the
source and target systems. The process confirms the receipt and the appropriate order of the
network packets, providing fault tolerance for network interruptions while ensuring data integrity
and completeness. No additional middleware is required to transport the data.

4.1.2.3 Posting

The SharePlex post process converts the change information into SQL statements. The SQL statements are then applied to the open target instance using standard SQL*Plus connections. Because SharePlex updates the target instances using standard SQL like any other application, concerns about its supportability from Oracle are unwarranted.
Part of the key to SharePlex's accurate replication is its ability to maintain Oracle's read consistency from source to target, replicating not only the order of the transactions, but also their context. SharePlex replicates the combination of updating transactions to the target as they occurred on the source, so that the resulting replica is reliable for disaster recovery. To accomplish this, SharePlex creates connections to the target database that mirror the updating connections on the source system, so that transactions can be applied to the target instance in parallel, as they occurred on the source.
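
As an illustration, a hedged sketch of a SharePlex configuration file and its activation with the sp_ctrl utility (the SIDs, host name, and tables are hypothetical):

# Configuration file: mig_config
datasource:o.PRODDB
# source object     target object      routing
scott.emp           scott.emp          linuxhost@o.LINUXDB
scott.dept          scott.dept         linuxhost@o.LINUXDB

sp_ctrl> create config mig_config
sp_ctrl> activate config mig_config
sp_ctrl> status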
5 Summary
Proven Oracle utilities and tools such as Oracle Data Pump, export/import, and cross-platform transportable tablespaces are effective methods of migrating data across platforms with different endian formats. But where a large enterprise has near-zero downtime SLAs for its Oracle database applications and needs a very effective failback solution, it is recommended to use third-party tools such as GoldenGate or Quest software for the migration.

It is also recommended to take advantage of the migration by implementing newer Oracle technologies and features, in particular higher-availability architectures such as Oracle RAC, along with upgrading to the latest Oracle software (Clusterware, ASM) and Linux OS version.

6 Appendix A: Sample Database & Application Migration Questionnaire
Usage Guidelines:
This worksheet is to be used for documenting the customer's application architecture, infrastructure, and technical requirements so that an educated estimate can be made of how long it will take to migrate/port the database and application to the Oracle architecture.

Audience:
Technical Architects, Lead Designers, Business Analysts, Lead Developers, Lead DBAs.

Contact Information:
Name ___________________________
Title ___________________________
email ___________________________
Phone ___________________________

General Application Information Worksheet

Application :
Name of the Application ( Abbreviation):

General Application Description ( Brief Value Proposition):


Is your application aligned with any specific industries ( Telecommunications, Government, Utilities etc )

Application Category ( Online Transaction Processing, Data Warehousing, Hybrid ):

What types of users access your application? ( Back office, front office, self service, general public ):

How much customization does your application require for each new implementation?

Is your application multilingual?

How do you handle application and infrastructure version changes for your customers?

Application deployment architecture ( Client/Server, Web based ):

n-Tier configuration ( 2 Tier, 3 Tier, Specify partition criteria ):

TP monitor being used (If any)

What is the primary development, deployment platform for the database?

Does the Application support databases from multiple vendors? Please specify.

What is the primary development, deployment platform for the application server:

Does the Application support application servers from multiple vendors? Please specify.

What development tools and methodologies do you use to develop your application?

What are the operating systems that your application supports? Please specify.

What other vendors’ products does your application integrate / interface with?

Are there specific considerations within your Application to support Performance, Availability, Scalability and
Serviceability?

Future Intentions Information Worksheet

Application:
What major changes are you planning for your application in future releases?

What new technologies are you planning to adopt? ( RAC, Web Services, wireless device support, portal, etc )
What additional Oracle technologies do you plan to adopt?

Are you planning to deploy your application in an online hosting mode?

What new applications are you planning to develop?

Do you have plans for any future Proof of Concept projects?

What is the Integration Strategy for the Application ( Enterprise Application Integration, B2B )

Detailed Application Architecture/Infrastructure Worksheet

Existing Resources/Infrastructure/Technology Name/Value Additional Info.


Client Tier
Application Programming Technology(s) used? (Applet, Plug-in, Script,
Thick Java Client, C++, C, VB, Delphi, PowerBuilder etc)

Deployment requirements? (Browser dependency, plug-in etc)

Protocol(s) used to communicate with Application Server Tier? (HTTP(s), IIOP, RMI, SOAP, SQL*Net, RPC, etc)

Client Code deployment mechanisms (if required)? (Physical distribution, Java Web Start, etc)

What types of devices does your application support? ( Web browsers, desktops, wireless devices, PDA's, etc )

Application Server Tier(s)

General Application Server usage. Mention products used in each of the following categories:
HTTP Server
Caching Technologies
J2EE Server
Portal
Reports Server
Integration / Middleware
Unified Messaging Server
Business Intelligence Server
Wireless Server
Single Sign On Server
Existing Resources/Infrastructure/Technology Name/Value Additional Info.

Application Programming Language(s) used? (Java, C, Perl, etc)

Presentation Technologies used?


Servlet
Java Server Pages
Active Server Pages
PL/SQL Server Pages

Application Logic Technologies used?


Enterprise Java Beans
CORBA
DCOM
Which of the following Enterprise Java Bean technologies are used:
Session
Container Managed Persistence
Bean Managed Persistence
Message Driven
Does your application adhere to the J2EE specification ( no use of proprietary extensions )?

Middleware Technologies used?


O-R mapping or persistence tools used. ( Toplink, BC4J )
Caching
Single Sign On
Legacy system adapter connectivity tools

Management / Infrastructure Technologies used?


Database Management Tools
Scheduling software
Scripting languages ( Tcl /Tk )
Monitoring utilities
Application Server Clustering? ( EJB, Load Balancing, Failover, etc )

Application Tier Partitioning? (Compute vs. Data Intensive Logic)

Other Information regarding Tiers (Specify)

Online Service Specific Architectural Considerations


Infrastructure supports Multiple Companies per Instance?
Existing Resources/Infrastructure/Technology Name/Value Additional Info.
How is Data Segmentation accomplished? (Application Code, Virtual
Private Database)

What Security is in place to ensure secure data segmentation?

Software Engineering Considerations


Software Development methodology used? ( Agile, Extreme, WISC, RUP,
SASD, etc )

CASE Tools used? If so, what specific artifacts are produced? List by
application tier ( database, app server, client etc)

Modeling languages used? (UML etc) If so, what specific artifacts are
modeled?

IDE used? (JDeveloper, Forms, PowerBuilder, etc)

Leveraged Frameworks? (BC4J, Struts, MVC, etc )

Leveraged Component Technologies? (COM, DCOM, .Net, CORBA, J2EE,


Web Services etc)

External Integration Capabilities


Event System in Place?

Messaging System?

Standard Naming Service?

Data Transformation Technology?

Directory Services (Oracle Internet Directory, Active Directory, iPlanet )

Workflow System?

Security Considerations
Standard Outside Authentication (LDAP, JNDI)

Authorization Model (programmatic/declarative, data level)


Existing Resources/Infrastructure/Technology Name/Value Additional Info.
Encrypted Data Transfer between Tiers?

General Data Interface Layer Worksheet

Data Access features being used in your current data interface layer.
Data Access method being used. (ODBC, JDBC, SQLJ, ADO, DAO, OCI, PRO*, PL/SQL, etc)

Database Connection and session management

Stored Procedures? (Java, PL/SQL)

What types of SQL Statements are used? Static/Dynamic SQL

Do you use large objects (e.g. BLOB/CLOB/BFILE) or Object data types or other user-defined data types?

Database Configuration Information

Database: Name/Value Comments


Database used (Oracle / DB2 / SqlServer etc.)
Database Release ( 8.1.7, 9.0.1 etc )
Database version (Standard/Enterprise)
Approximate database size (in MB/GB/TB - please indicate)
# of Users (Concurrent and/or Batch)
# of Tables / Views
# of Procedures and Packages
# of Triggers
Optimizer mode in use? (COST / RULE / CHOOSE)
Is the database auditing enabled?
How is uniqueness generated for database rows? (natural keys,
timestamps, database sequences, etc)

How is referential integrity enforced? (triggers in the database, database integrity constraints, application coding, etc)

Database features in use (replication, security, RAC, object database features, Data Guard, etc.):
Additional Information
Specify any application or development issues or additional information here.

7 References
• Various papers on Oracle (http://otn.oracle.com )
• Various papers on Linux (http://www.redhat.com )
• Migrating from Unix to Oracle on Linux (http://www.oracle.com/linux )
• Platform Migration using Transportable Tablespaces: Oracle Database 10g Release 2
(http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_Platform
MigrationTTS.pdf)
• Platform Migration Using Transportable Database Oracle Database 11g and 10g Release 2
(http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2
_PlatformMigrationTDB.pdf)
• GoldenGate Solutions and Technology white paper (http://www.goldengate.com )
• Quest : http://www.quest.com
8 Authors
Migrating Mission Critical Applications from UNIX to Linux with Minimal Downtime

July 2009

Author: Mahesh Pakala


Mahesh Pakala is a highly experienced professional in the Information Technology and media industry in Silicon Valley. He has worked in various senior management roles in the industry and serves on the advisory panels of companies as a technology and business consultant.

He works in the GICS (Global Infrastructure Consulting Services) group of Dell Inc., assisting large enterprise customers with HA architectures and solutions. He has extensive work experience in engineering, media, and technology with companies such as Oracle Corporation (System Performance Group & RDBMS Kernel Escalations), Ingres (Computer Associates), Fujitsu, and startups such as eLance and Grand Central Communications. He has been a speaker in the areas of High Availability and System Architecture at various conferences.
Contact for more Information – Mahesh_Pakala@dell.com

Reviewed By: Irem Radzik, Alok Pareek, Sunil Shenoy, Puneet Arora, Jeremy Greening, Ron Piwetz

Copyright © 2009, Dell Inc. All rights reserved.


This document is provided for information purposes only and the contents hereof are subject to change without notice. This
document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or
implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We
specifically disclaim any liability with respect to this document and no contractual obligations are formed either directly or
indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic
or mechanical, for any purpose, without our prior written permission.
Quest, Oracle, GoldenGate and Dell are registered trademarks of Quest Software, Oracle Corporation, GoldenGate Software and Dell Inc. and/or their affiliates. Other names may be trademarks of their respective owners.

This document is intended to address migrating a database only. Regardless of the method chosen to migrate to a new
platform, there are additional areas that must be considered to ensure a successful transition to the new platform, such as
understanding platform-specific features and changes in the Oracle, Quest and Goldengate software. Refer to the platform-
specific installation guides, release notes, and READMEs for details.
