Oracle GoldenGate
Fundamentals
Student Guide
Version 10.4
October 2009
Copyright 1995, 2009 Oracle and/or its affiliates. All rights reserved.
This software and related documentation are provided under a license agreement containing restrictions on use and
disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement
or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute,
exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or
decompilation of this software, unless required by law for interoperability, is prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If you
find any errors, please report them to us in writing.
If this software or related documentation is delivered to the U.S. Government or anyone licensing it on behalf of the
U.S. Government, the following notice is applicable:
U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data
delivered to U.S. Government customers are "commercial computer software" or "commercial technical data"
pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such,
the use, duplication, disclosure, modification, and adaptation shall be subject to the restrictions and license terms
set forth in the applicable Government contract, and, to the extent applicable by the terms of the Government
contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License (December 2007).
Oracle USA, Inc., 500 Oracle Parkway, Redwood City, CA 94065.
This software is developed for general use in a variety of information management applications. It is not developed
or intended for use in any inherently dangerous applications, including applications which may create a risk of
personal injury. If you use this software in dangerous applications, then you shall be responsible to take all
appropriate fail-safe, backup, redundancy, and other measures to ensure the safe use of this software. Oracle
Corporation and its affiliates disclaim any liability for any damages caused by use of this software in dangerous
applications.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their
respective owners.
This software and documentation may provide access to or information on content, products, and services from third
parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any
kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be
responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or
services.
Contents
Real-Time Access to Real-Time Information
Availability: the degree to which information can be instantly accessed.
Mission-Critical Systems
Company Strength and Service
GoldenGate Software established in 1995
Acquired by Oracle in 2009
Rapid Growth in Strategic Partners
Our partnerships are rapidly increasing with major technology players, including
database and IT infrastructure, packaged applications, business intelligence, and
service providers.
Because our software platform supports a variety of solution use cases, our more than 500 customers are using our technology in over 4,000 solutions around the world. What we typically find is that once an initial solution is implemented and the benefits achieved, our customers then find additional areas across the enterprise where we can further drive advantages for them.
Real Time: moves data with sub-second latency
Heterogeneous: moves changed data across different databases and platforms
Transactional: maintains transaction integrity
Additional Differentiators:
Performance
Extensibility & Flexibility
Reliability: resilient against interruptions and failures
TRANSACTIONAL DATA INTEGRATION
Live Standby for an immediate fail-over solution that can later re-synchronize with
your primary source.
Active-Active solutions for continuous availability and transaction load distribution
between two or more active systems.
Zero-Downtime Upgrades and Migrations
Feeding a reporting database so that you don't burden your source production systems.
Operational Business Intelligence (BI)
Real-time data feeds to operational data stores or data warehouses, directly or via
ETL tools.
Transactional data integration
Real-time data feeds to messaging systems for business activity monitoring (BAM),
business process monitoring (BPM) and complex event processing (CEP).
Uses event-driven architecture (EDA) and service-oriented architecture (SOA).
Live Standby
Active-Active
Live Reporting
Upgrades
Migrations
Maintenance
Real-Time Data Warehousing, which gives you real-time data feeds to data warehouses or operational data stores
Real-Time Access
Live Standby
Active-Active
Zero-Downtime Operations for:
Upgrades
Migrations
Maintenance
Improved Uptime
Higher Performance
Faster Recovery
Minimized Data Loss
Lower TCO
For High Availability and Disaster Tolerance solutions, it's about real-time or CONTINUOUS access to your data via your critical applications.
The benefits that Oracle GoldenGate drives here include:
- Improved uptime and availability (helping you reach aggressive service level agreements/SLAs)
- Higher performance for your production systems, helping to eliminate scalability or response-time delays that can give users the impression of an availability or access issue
- Faster recovery and minimized data loss, so you can achieve more aggressive Recovery Time Objectives (RTOs) and Recovery Point Objectives (RPOs)
- An overall lower Total Cost of Ownership, by putting your standby systems to work for other solutions
Benefits:
Eliminate planned downtime during hardware, database, OS and/or
application upgrades and migrations
Minimize risk with fail-back contingency
Improve success with phased user migrations
Real-Time Information
Fresher Data
Minimal Overhead
No Batch Windows
Data Integrity
Ease of Integration
Live Reporting
Operational Business Intelligence
Transactional Data Integration
For our Real-Time Data Integration solutions, it's about real-time information, or access to CURRENT operational data.
The benefits that Oracle GoldenGate drives here include:
Fresher, real-time data available for use and decision-making: remove latency as a technical constraint.
Minimal overhead and impact on your source systems and overall architecture to capture and move real-time data.
No requirement for batch windows.
Transactional data integrity helps improve overall data quality.
Ease of integration: Oracle GoldenGate easily fits into existing and desired architectures, and is easy to maintain over the long term.
Benefits:
Use real-time data for better, faster decision making
Remove reporting overhead on source system
Reduce cost-to-scale as user demands and data volumes grow
Leverage cost-effective systems for reporting needs
Benefits:
Use real-time data for better, faster decision making
Eliminate batch window dependency
Reduce overhead on source system
Maintain referential integrity for data quality
Leverage its flexibility for transformations and integration with ETL
Oracle GoldenGate provides real-time data integration between OLTP systems non-intrusively and with minimal impact. Distributed databases and the applications they support can continuously access, utilize, and act on the most current operational data available. The solution can also integrate with JMS-based messaging systems to enable event-driven architecture (EDA) and to support service-oriented architecture (SOA).
Technology Overview
[Diagram: Capture reads the Source Database(s) and writes to a Source Trail; data moves over the Network (TCP/IP) to a Target Trail, and Delivery applies it to the Target Database(s). In bi-directional configurations, the same flow runs in reverse.]
Oracle GoldenGate consists of decoupled modules that are combined to create the best possible solution for your business requirements.
On the source system(s):
GoldenGate's Capture (Extract) process reads data transactions as they occur, by reading the native transaction log, typically the redo log. Oracle GoldenGate only moves changed, committed transactional data, which is only a small percentage of all log activity, and therefore operates with extremely high performance and very low impact on the data infrastructure.
Filtering can be performed at the source or target - at table, column and/or row level.
Transformations can be applied at the capture or delivery stages.
Advanced queuing (trail files):
To move transactional data efficiently and accurately across systems, Oracle GoldenGate converts the captured data into an Oracle GoldenGate data format in trail files. With both source and target trail files, Oracle GoldenGate's unique architecture eliminates any single point of failure and ensures data integrity is maintained even in the event of a system error or outage.
Routing:
Data is sent via TCP/IP to the target systems. Data compression and encryption are
supported. Thousands of transactions can be moved per second, without distance
limitations.
On the target system(s):
A Server Collector process (not shown) reassembles the transactional data into a
target trail.
The Delivery (Replicat) process applies transactional data to the designated target
systems using native SQL calls.
Bi-directional:
In bi-directional configurations/solutions, this process runs the same in reverse, to
concurrently synchronize data between the source and target systems.
Manager processes (not shown) perform administrative functions at each node.
Delivery:
All listed above
MySQL, HP Neoview, Netezza, and any ODBC-compatible databases
ETL products
JMS message queues
Management
Speed: Subsecond Latency
Volume: Thousands of TPS
Log-based Capture
Native, Local Apply
Efficient IO and Bandwidth Usage
Bidirectional
Group Transactions
Bulk Operations
Compression
One-to-Many, Many-to-One
Cascade
Transaction Integrity
Transparent Capture
Guaranteed Delivery
Conflict Detection, Resolution
Dynamic Rollback
Incremental TDM
Initial Data Load
GUI-based Monitoring and Configuration
Proactive Alerts
Encryption
Real-Time, Deferred, or Batch
Event Markers
Integration
Heterogeneous Data Sources
Mapping, Transformation, Enrichment
Decoupled Architecture
Table, Row, Column Filtering
XML, ASCII, SQL Formats
Queue Interface
Stored Procedures
User Exits
ETL Integration
Java/JMS Integration
Benefits:
Reduce financial/legal risk exposure
Speed and simplify IT work in comparing data sources
No disruption to business systems
Improved failover to backup systems
Confident decision-making and reporting
Architecture
Primarily used for change data capture and delivery from database
transaction logs
Can optionally be used for initial load directly from database tables
[Diagram: Extract reads the Transaction Log of the Source Database and sends data over the Network (TCP/IP) to a Server Collector on the target, which writes a Trail that Replicat applies to the Target Database. A Manager process runs on each system.]
Manager processes on both systems control activities such as starting, monitoring and
restarting processes; allocating data storage; and reporting errors and events.
Change Data Capture & Delivery using a Data Pump
[Diagram: Extract reads the Transaction Log of the Source Database and writes to a Local Trail; a Data Pump sends the data over the Network (TCP/IP) to a Server Collector on the target, which writes a Remote Trail that Replicat applies to the Target Database. A Manager process runs on each system.]
Initial Load
[Diagram: Extract reads directly from the Source Database Tables and sends data over the Network (TCP/IP) either to a Server Collector and Replicat on the target, or to Files for a DB Bulk Load Utility. A Manager process runs on each system.]
Direct Load (Extract sends data directly to Replicat to apply using SQL)
Direct Bulk Load (Replicat uses Oracle SQL*Loader API)
File to Replicat (Extract writes to a file that Replicat applies using SQL)
File to database utility (Extract writes to a file formatted for a DB bulk load utility)
Change data capture & delivery can be run either continuously (online)
or as a special run (batch run) to capture changes for a specific period of
time.
Checkpointing - Extract
Extract maintains two input checkpoints into the transaction log:
- the start of the oldest uncommitted transaction in the log
- the last record read from the log
It also maintains one output checkpoint for each trail it writes to.
Checkpoints are used during online change synchronization to store the current read
and write position of a process. Checkpoints ensure that data changes marked for
synchronization are extracted, and they prevent redundant extractions. They provide
fault tolerance by preventing the loss of data should the system, the network, or a
GoldenGate process need to be restarted.
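The checkpoint positions themselves can be viewed from GGSCI with the SHOWCH option of the INFO command; for example, for a hypothetical Extract group named finance:
GGSCI> INFO EXTRACT finance, SHOWCH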
Checkpointing - Replicat
Replicat maintains input checkpoints into the GoldenGate trail:
- the start of the current uncommitted transaction
- the last record read from the trail
Note: You can run log-based change capture after the initial data load if you set the
extract begin time to the start of the longest running transaction committed
during the initial data load.
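For example, assuming a hypothetical group named finance and an initial load whose longest-running committed transaction began at 14:30 on October 1, the change-capture group could be positioned with:
GGSCI> ADD EXTRACT finance, TRANLOG, BEGIN 2009-10-01 14:30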
Configuring Oracle GoldenGate
[Diagram: 2. Change Capture — Extract reads the source database's Transaction Log and writes to a Local Trail; a Data Pump sends the data to a Remote Trail on the target; 4. Change Delivery — Replicat applies the data to the Target Database.]
Installing Oracle GoldenGate installs all of the components required to run and
manage GoldenGate processing, and it installs the GoldenGate utilities.
Manager must be running on each system before Extract or Replicat can be started,
and must remain running while those processes are running so that resource
management functions are performed.
The source definitions file contains the definitions of the source tables and is required on the target system in heterogeneous configurations. Replicat refers to the file when transforming data from the source to the target.
To reconstruct an update operation, GoldenGate needs more information than Oracle
and SQL Server transaction logs provide by default. Adding supplemental log data
forces the logging of the full before and after image for updates.
Identify the proper release of GoldenGate for your source and target environments
Database and version
Operating system and version
For Windows: Do not install Oracle GoldenGate into a folder that contains spaces in its name, for example "GoldenGate Software". The application references path names, and path names that contain spaces are not supported, whether or not they are within quotes.
Syntax:
INSTALL <item> [<item> ]
Example:
C:\GGS> INSTALL ADDEVENTS ADDSERVICE
Note: The uninstall command is:
INSTALL DELETESERVICE DELETEEVENTS
When Manager is run from the command line of a user session, it will stop when the user logs out. By using INSTALL, you can install Manager as a Windows service so that it can be operated independently of user connections and can be configured to start either manually or when the system starts. You can configure the Manager service to run as the Local System account or as a specific named account. The configuration of a service can be changed by using the Services applet of the Windows Control Panel and changing the service Properties.
DELETESERVICE Removes the GoldenGate Manager service.
AUTOSTART Specifies that the service be started at system boot time (the default).
MANUALSTART Specifies that the service be started only at user request (with GGSCI
or the Control Panel Services applet).
USER Specifies a user name to log on as when executing Manager. If specified, the user name should include the domain name, a backslash, and the user name.
PASSWORD Specifies the user's password for logon purposes. This can be changed using the Control Panel Services applet.
A GLOBALS file stores parameters that relate to the GoldenGate instance as a whole,
as opposed to runtime parameters for a specific process. This file is referenced when
installing the Windows service, so that the correct name is registered.
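As a sketch, a minimal GLOBALS file for this purpose could contain a single line (GGSMGR1 is a hypothetical service name; MGRSERVNAME is the parameter the service installer reads):
MGRSERVNAME GGSMGR1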
gzip -d (unknown).tar.gz
For UNIX, z/OS, or Linux: Use the gzip and tar options appropriate for your
system. If you are installing GoldenGate into a cluster environment, make certain
that the GoldenGate binaries and files are installed on a file system that is available
to all cluster nodes. After installing GoldenGate, make certain to configure the
GoldenGate Manager process within the cluster application, as directed by the
vendor's documentation, so that GoldenGate will fail over properly with the other
applications. The Manager process is the master control program for all GoldenGate
operations.
A GoldenGate instance is a single installation of GoldenGate.
Prepare Environment: Installation NonStop SQL/MX
For a SQL/MX source, install Oracle GoldenGate on OSS running
on the NonStop source system:
Download the .gz file to /ggs
gzip -d (unknown).tar.gz
tar -xvf (unknown).tar
GGSCI> CREATE SUBDIRS
Run the ggmxinstall script
For a SQL/MX target, install Oracle GoldenGate
Either on OSS running on the NonStop target system
(as described above)
Or on an intermediate Windows system(as described earlier)
The ggmxinstall script SQL compiles the Extract program and installs the VAMSERV
program in the NonStop Guardian space.
Contents of the GoldenGate installation directory:
dirchk Checkpoint files
dirdat Trail and extract files
dirdef Data definitions files
dirpcs Process status files
dirprm Parameter files
dirrpt Report files
dirsql SQL scripts
dirtmp Temporary storage for large transactions
dirchk
Contains the checkpoint files created by Extract and Replicat processes, which store
current read and write positions to support data accuracy and fault tolerance. Written
in internal GoldenGate format. Do not edit these files.
The file name format is <group name><sequence number>.<ext> where <sequence
number> is a sequential number appended to aged files and <ext> is either cpe for
Extract checkpoint files or cpr for Replicat checkpoint files. Examples: ext1.cpe,
rep1.cpr
dirdat
The default location for GoldenGate trail files and extract files created by Extract
processes to store records of extracted data for further processing, either by the
Replicat process or another application or utility. Written in internal GoldenGate
format. Do not edit these files.
File name format is a user-defined two-character prefix followed by either a six-digit
sequence number (trail files) or the user-defined name of the associated Extract
process group (extract files). Examples: rt000001, finance
dirdef
The default location for data definitions files created by the DEFGEN utility to contain
source or target data definitions used in a heterogeneous synchronization
environment. Written in external ASCII.
File name format is a user-defined name specified in the DEFGEN parameter file.
These files may be edited to add definitions for newly created tables. If you are unsure
of how to edit a definitions file, contact technical support. Example: defs.dat
dirpcs
Default location for status files. File name format is <group>.<extension> where
<group> is the name of the group and <extension> is either pce (Extract), pcr
(Replicat), or pcm (Manager).
These files are only created while a process is running. The file shows the program
name, the process name, the port, and process ID that is running. Do not edit these
files. Examples: mgr.pcm, ext.pce
dirprm
The default location for GoldenGate parameter files created by GoldenGate users to
store run-time parameters for GoldenGate process groups or utilities. Written in
external ASCII format.
File name format is <group name/user-defined name>.prm or mgr.prm. These files
may be edited to change GoldenGate parameter values. They can be edited directly
from a text editor or by using the EDIT PARAMS command in GGSCI. Examples:
defgen.prm, finance.prm
dirrpt
The default location for process report files created by Extract, Replicat, and Manager
processes to report statistical information relating to a processing run. Written in
external ASCII format.
File name format is <group name><sequence number>.rpt where <sequence number>
is a sequential number appended to aged files. Do not edit these files. Examples:
fin2.rpt, mgr4.rpt
dirsql
The default location for SQL scripts.
dirtmp
The default location for storing large transactions when the size exceeds the allocated
memory size. Do not edit these files.
Parameter file
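A sketch of a minimal online Extract parameter file (the group, login, host, trail, and table names here are all hypothetical):
EXTRACT finance
USERID ggsuser, PASSWORD ggspass
RMTHOST targethost, MGRPORT 7809
RMTTRAIL /ggs/dirdat/rt
TABLE hr.accounts;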
Starting Manager
You must start Manager before performing most other configuration tasks in GGSCI.
Use either START MANAGER or START MGR.
On Windows systems, you can also start and stop Manager through the standard
Windows Services control applet (in the Control Panel).
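For example, at the GGSCI prompt (INFO ALL then confirms that Manager is running):
GGSCI> START MANAGER
GGSCI> INFO ALL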
Prepare Environment: Manager Sample MGR Parameter File
PORT 7809
DYNAMICPORTLIST 8001, 8002, 9500-9520
PURGEOLDEXTRACTS /ggs/dirdat/aa*, USECHECKPOINTS
PURGEOLDEXTRACTS /ggs/dirdat/bb*, &
USECHECKPOINTS, MINKEEPDAYS 5
AUTOSTART ER *
AUTORESTART EXTRACT *, WAITMINUTES 2, RETRIES 5
LAGREPORTHOURS 1
LAGINFOMINUTES 3
LAGCRITICALMINUTES 5
This parameter file has the Manager listening on PORT 7809. Ports 8001, 8002, and
those in the range 9500 to 9520 will be assigned to the dynamic processes started by
Manager.
This Manager process will recycle GoldenGate trails that match the file names /ggs/dirdat/aa* and /ggs/dirdat/bb*. It will only recycle a trail once all Extracts and Replicats have a checkpoint beyond the file (USECHECKPOINTS); however, bb* trails will not be purged until there has been no activity for 5 days.
The Manager will automatically start any Extract and Replicat process at startup, and will attempt to restart any Extract process that abends after waiting 2 minutes, but only up to 5 attempts.
The Manager will report lag information every hour. The message will be flagged as informational for any process with more than 3 minutes of latency, and as critical for any process with a lag greater than 5 minutes.
The Problem
When capturing, transforming, and delivering data across disparate systems and databases, you must understand both the source and target layouts. Understanding column names and data types is instrumental to GoldenGate's data synchronization functions.
The Solution - The DEFGEN Utility Program
The DEFGEN utility program produces a file containing a definition of the layouts of the source files and tables. The output definitions are saved in an edit file and transferred to all target systems in text format. Replicat and Collector read in the definitions at process startup and use the information to interpret the data from the GoldenGate trails.
When transformation services are required on the source system, Extract can use a definitions file containing the target, rather than source, layouts.
The DEFGEN parameter file specifies:
- The output definitions file location and name
- SOURCEDB / USERID - database access
- TABLE - the tables for which to generate definitions
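A sketch of what a DEFGEN parameter file might look like (the output path, login, and table name are hypothetical; DEFSFILE names the output definitions file):
DEFSFILE /ggs/dirdef/defs.dat
USERID ggsuser, PASSWORD ggspass
TABLE hr.accounts;
The utility is then run from the operating-system shell, for example: defgen paramfile dirprm/defgen.prm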
You need to assign a database user for each of the GoldenGate processes, unless the
database allows authentication at the operating system level. While not required,
GoldenGate recommends creating a user specifically for the GoldenGate application.
To ensure that processing can be monitored accurately, do not permit other users or
processes to operate as the GoldenGate user.
In general, the following permissions are necessary for the GoldenGate user:
On the source system, the user must have permissions to read the data dictionary or
catalog tables.
On the source system, the user must have permissions to select data from the tables.
On the target system, the user must have the same permissions as the GoldenGate
user on the source system plus additional privileges to perform DML on the target
tables.
Oracle logs - On UNIX, GoldenGate reads the online logs by default, or the archived logs if an online log is not available. On the Windows platform, GoldenGate reads the archived logs by default, or the online logs if an archive is not available. GoldenGate recommends that archive logging be enabled, and that you keep the archived logs on the system for as long as possible. This prevents the need to resynchronize data if the online logs recycle before all data has been processed.
DB2 - In addition to enabling logging at a global level, each table to be captured must be configured to capture data for logging purposes. This is accomplished by the DATA CAPTURE CHANGES clause of the CREATE TABLE statement.
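DATA CAPTURE CHANGES can also be set on an existing table with an ALTER TABLE statement (the table name here is hypothetical):
ALTER TABLE hr.accounts DATA CAPTURE CHANGES;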
Sybase - To capture database operations for tables that you want to synchronize with GoldenGate, each one must be marked for replication. This can be done through the database, but GoldenGate recommends using ADD TRANDATA.
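In GGSCI, marking a table looks like this (the data source, login, and table names are hypothetical):
GGSCI> DBLOGIN SOURCEDB mydb, USERID ggsuser, PASSWORD ggspass
GGSCI> ADD TRANDATA owner.accounts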
GoldenGate uses the secondary transaction log truncation point to identify
transaction log entries that have not been processed by the Extract process. The
secondary truncation point must be established prior to running the GoldenGate
Extract process. The GoldenGate process will manage the secondary truncation point
once it has been established.
NonStop SQL/MX
During the installation of SQL/MX, the script ggmxinstall sets a pointer to the VAM
that will work with Extract to capture changes from the TMF audit trail.
The Distributor database must be used only for source databases to be replicated by
GoldenGate. One Distributor can be used for all of these databases. GoldenGate does
not depend on the Distributor database, so transaction retention can be set to zero.
Because GoldenGate does not depend on the Distributor database, but rather reads
the logs directly, the GoldenGate extraction process can process at its customary high
speed. For instructions on installing the replication component and creating a
Distributor database, see the GoldenGate for Windows and UNIX Administrator
Guide.
MANAGESECONDARYTRUNCATIONPOINT
Required TRANLOGOPTIONS parameters control whether or not GoldenGate maintains the secondary truncation point. Use the MANAGESECONDARYTRUNCATIONPOINT option if GoldenGate will not be running concurrently with SQL Server replication. Use the NOMANAGESECONDARYTRUNCATIONPOINT option if GoldenGate will be running concurrently with SQL Server replication; this allows SQL Server replication to manage the secondary truncation point.
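In the Extract parameter file this is a single entry; for example, for the case where GoldenGate is the only log reader:
TRANLOGOPTIONS MANAGESECONDARYTRUNCATIONPOINT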
Primary Key Requirement - The requirement that all tables to be captured from SQL Server 2005 source databases must have a primary key is a requirement of the Microsoft replication component, which is utilized by GoldenGate as part of the log-based capture process.
Connecting - Use SQL Server Management Studio for SQL Server 2005/2000 to
connect to a MS SQL Server 2005 or 2000 database. Use Enterprise Manager only for
MS SQL Server 2000.
1. edelivery.oracle.com
2. Starting and stopping processes; monitoring processes; reporting lag, errors and
events; purging trail files.
The GoldenGate Software Command Interface (GGSCI) provides online help for all commands. The following is an example of the information returned when you enter HELP STATUS EXTRACT:
Use STATUS EXTRACT to determine whether or not Extract groups are running.
Syntax:
STATUS EXTRACT <group name>
[, TASKS]
[, ALLPROCESSES]
<group name> is the name of a group or a wildcard (*) to specify multiple groups.
ALLPROCESSES displays status of all Extract processes, including tasks.
TASKS displays status of all Extract tasks.
Examples:
STATUS EXTRACT FINANCE
STATUS EXTRACT FIN*
GGSCI Commands
Objects: MANAGER, EXTRACT, REPLICAT, ER, EXTTRAIL, RMTTRAIL, TRANDATA, CHECKPOINTTABLE, TRACETABLE
Commands: ADD, ALTER, CLEANUP, DELETE, INFO, KILL, LAG, REFRESH, SEND, START, STATS, STATUS, STOP
Objects
Manager, Extract, Replicat GoldenGate processes.
ER Multiple Extract and Replicat processes.
EXTTRAIL Local trail.
RMTTRAIL Remote trail.
TRANDATA Transaction data (from transaction logs).
CHECKPOINTTABLE Checkpoint table (on target database).
TRACETABLE Oracle trace table (on target database).
Commands
ADD Creates an object or enables TRANDATA capture.
ALTER Changes the attributes of an object.
CLEANUP Deletes the run history of a process or removes records from a checkpoint
table.
DELETE Deletes an object or disables TRANDATA capture.
INFO Displays information about an object (status, etc).
KILL Forces a process to stop (no restart).
LAG Displays the lag between when a record is processed by the process and the
source record timestamp.
REFRESH Refreshes Manager parameters (except port number) without stopping
Manager.
SEND Sends commands to a running process.
START Starts a process.
STATS Displays statistics for one or more processes.
STATUS Displays whether a process is running.
STOP Stops a process gracefully.
Other command categories: Parameter, Database, DDL (DUMPDDL [SHOW]), Miscellaneous
Parameter commands
SET EDITOR Changes the default text editor for the current GGSCI session from
Notepad or vi to any ASCII editor.
EDIT PARAMS Edits a parameter file.
VIEW PARAMS Displays the contents of a parameter file.
Database commands
DBLOGIN Establishes a database connection through GGSCI.
ENCRYPT PASSWORD Encrypts a database login password.
LIST TABLES Lists all tables in the database that match a wildcard string.
DDL commands
DUMPDDL Saves the GoldenGate DDL history table to file.
SHOW option displays the DDL information in standard output format.
Miscellaneous commands
!command Executes a previous GGSCI command without modification.
CREATE SUBDIRS Creates default directories within the GoldenGate home directory.
FC Edits a previously issued GGSCI command.
HELP Displays information about a GGSCI command.
HISTORY Lists the most recent GGSCI commands issued.
INFO ALL Displays the status and lag for all GoldenGate processes on a system.
OBEY Runs a file containing a list of GGSCI commands.
SHELL Runs shell commands from within GGSCI.
SHOW Displays the GoldenGate environment.
VERSIONS Displays OS and database versions.
VIEW GGSEVT Displays the GoldenGate event/error log.
VIEW REPORT Displays a process report for Extract or Replicat.
GGSCI Examples
Start a Manager process
GGSCI> START MGR
Add an Extract group
GGSCI> ADD EXTRACT myext, TRANLOG, BEGIN NOW
Add a local trail
GGSCI> ADD EXTTRAIL /ggs/dirdat/rt, EXTRACT myext
Start an Extract group
GGSCI> START EXTRACT myext
This is especially useful for scheduling GoldenGate batch jobs to run during off-peak hours using a command-line-capable scheduler.
[Diagram: 2. Change Capture — Extract reads the source database's Transaction Log and writes to a Local Trail; a Data Pump sends the data to a Remote Trail on the target; 4. Change Delivery — Replicat applies the data to the Target Database.]
To configure Extract to capture changes from transaction logs, perform the following
steps:
Set up a parameter file for Extract with the GGSCI EDIT PARAMS command.
Set up an initial Extract checkpoint into the logs with the GGSCI ADD EXTRACT
command.
Optionally, create a local trail using the GGSCI ADD EXTTRAIL command and a data
pump Extract (and parameter file) reading from the local trail.
Set up a remote trail using the GGSCI ADD RMTTRAIL command.
Start the Server Collector process on the target system or let the Manager start the
Server Collector dynamically.
Start Extract using the GGSCI START EXTRACT command. For example: GGSCI>
START EXTRACT FINANCE
GGSCI sends this request to the Manager process, which in turn starts Extract.
Manager monitors the Extract process and restarts it, when appropriate, if it goes
down.
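Putting the steps above together, a typical GGSCI session for a log-based Extract without a data pump might look like the following (the group name, trail name, and parameter file contents are hypothetical):
GGSCI> EDIT PARAMS finance
GGSCI> ADD EXTRACT finance, TRANLOG, BEGIN NOW
GGSCI> ADD RMTTRAIL /ggs/dirdat/rt, EXTRACT finance
GGSCI> START EXTRACT finance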
Change Capture - ADD EXTRACT Command
Add the initial Extract checkpoint with the GGSCI command ADD EXTRACT:
SOURCEISTABLE
TRANLOG
[<bsds name>]
SOURCEISTABLE Creates an Extract task that extracts entire records from the
database for an initial load. If SOURCEISTABLE is not specified, ADD EXTRACT
creates an online change-synchronization process, and one of the other data source
options must be specified. When using SOURCEISTABLE, do not specify service
options. Task parameters must be specified in the parameter file.
TRANLOG [<bsds name>]
Specifies the transaction log as the data source. Use this option for log-based
extraction. TRANLOG requires the BEGIN option.
Use the <bsds name> option for DB2 on a z/OS system to specify the BSDS
(Bootstrap Data Set) file name of the transaction log. Make certain that the BSDS
name you provide is the one for the DB2 instance to which the Extract process is
connected. GoldenGate does not perform any validations of the BSDS specification.
EXTFILESOURCE <file name>
Specifies an extract file as the data source. Use this option with a secondary Extract
group (data pump) that acts as an intermediary between a primary Extract group and
the target system. For <file name>, specify the fully qualified path name of the file,
for example c:\ggs\dirdat\extfile.
EXTTRAILSOURCE <trail name>
Specifies a trail as the data source. Use this option with a secondary Extract group
(data pump) that acts as an intermediary between a primary Extract group and the
target system. For <trail name>, specify the fully qualified path name of the trail, for
example c:\ggs\dirdat\aa.
(Table: alternate starting-point options for ADD EXTRACT by database - BEGIN for any database, plus database-specific options such as LSN <value> for DB2 z/OS, DB2 LUW, SQL Server, and Ingres, with corresponding options for Oracle, SQL/MX, c-tree, and Sybase.)
Other ADD EXTRACT options include DESC <description>, which supplies a description of the Extract group, and THREADS <n>, the number of redo threads when capturing from Oracle RAC.
Create a data-pump Extract group named finance that reads from the
GoldenGate trail c:\ggs\dirdat\lt.
ADD EXTRACT finance, EXTTRAILSOURCE c:\ggs\dirdat\lt
Examples:
ADD EXTTRAIL c:\ggs\dirdat\aa, EXTRACT finance, MEGABYTES 10
ADD RMTTRAIL c:\ggs\dirdat\bb, EXTRACT parts, MEGABYTES 5
<trail name> The fully qualified path name of the trail. The actual trail name can
contain only two characters. GoldenGate appends this name with a six-digit sequence
number whenever a new file is created. For example, a trail named /ggs/dirdat/tr
would have files named /ggs/dirdat/tr000001, /ggs/dirdat/tr000002, and so forth.
<group name> The name of the Extract group to which the trail is bound. Only one
Extract process can write data to a trail.
MEGABYTES <n> The maximum size, in megabytes, of a file in the trail. The
default is 10.
(Diagram: Extract writing to trail files /ggs/dirdat/rt000000 and /ggs/dirdat/rt000001.)
For most business cases, it is best practice to use a data pump. Some reasons for using
a data pump include the following:
Protection against network and target failures: In a basic GoldenGate
configuration, with only a trail on the target system, there is nowhere on the source
system to store data that Extract continuously extracts into memory. If the network
or the target system becomes unavailable, the primary Extract could run out of
memory and abend. However, with a trail and data pump on the source system,
captured data can be moved to disk, preventing the abend. When connectivity is
restored, the data pump extracts the data from the source trail and sends it to the
target system(s).
You are implementing several phases of data filtering or transformation. When
using complex filtering or data transformation configurations, you can configure a
data pump to perform the first transformation either on the source system or on the
target system, and then use another data pump or the Replicat group to perform the
second transformation.
Consolidating data from many sources to a central target. When synchronizing
multiple source databases with a central target database, you can store extracted data
on each source system and use data pumps on each system to send the data to a trail
on the target system. Dividing the storage load between the source and target systems
reduces the need for massive amounts of space on the target system to accommodate
data arriving from multiple sources.
Synchronizing one source with multiple targets. When sending data to multiple
target systems, you can configure data pumps on the source system for each one. If
network connectivity to any of the targets fails, data can still be sent to the other
targets.
A data pump can be set up to duplicate or selectively route the data to multiple trails.
However, if the trails are on multiple target systems and the communication to one of
the systems goes down, the Extract may exhaust its retries and shut down, causing
the updates to all targets to stop.
Data Pumps One to Many Target Systems
(Diagram: a primary Extract writes to a local trail; Data Pump 1, Data Pump 2, and Data Pump 3 each read that trail and write to a separate trail for a different target system.)
1. Captures incremental changes from database transaction logs. It can also save
source data from the tables themselves or other GoldenGate trails. Writes the
captured data to GoldenGate trails or files.
2. From transaction logs (or archive logs) except for Teradata.
3. EXTTRAIL, EXTFILE
RMTHOST with RMTTRAIL, RMTFILE or RMTTASK
4. EDIT PARAMS
ADD EXTRACT
ADD {EXTTRAIL | RMTTRAIL |EXTFILE | RMTFILE}
START EXTRACT
5. The MEGABYTES <n> option in the ADD EXTTRAIL or ADD RMTTRAIL
commands.
Data Pumps - Configuration
Add a data pump (source is the local trail from the primary Extract )
ADD EXTRACT <datapump>, EXTTRAILSOURCE ./dirdat/<trailid>
The PASSTHRU parameter is used on a data pump if you do not need to perform any
data transformations or user exit processing.
Add the data pump extract with a local trail as source and remote trail as destination.
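For instance, a pass-through pump reading the local trail lt and writing to a remote trail rt might be set up as follows; the group name pump1, host, port, and schema are hypothetical:

```
GGSCI> ADD EXTRACT pump1, EXTTRAILSOURCE ./dirdat/lt
GGSCI> ADD RMTTRAIL ./dirdat/rt, EXTRACT pump1

GGSCI> EDIT PARAMS pump1
-- parameter file contents (hypothetical names):
EXTRACT pump1
PASSTHRU
RMTHOST targethost, MGRPORT 7809
RMTTRAIL ./dirdat/rt
TABLE fin.*;

GGSCI> START EXTRACT pump1
```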
Data Pumps Discussion Points
1. A secondary Extract process that reads from a local trail and distributes that data to a
remote system.
2. Allows a local trail on the source system, which is useful for recovery if the network or
target system fails.
3. To send to multiple target systems (so that if one goes down, the others are
unaffected); to separate out different tables; for parallel processing (faster throughput).
4. RMTHOST is used to identify the name or IP address of the remote system and the
port that is being used.
5. The PASSTHRU parameter is used on a data pump (unless you need to perform data
transformation or user exit processing).
6. Review Architecture slide for change capture & delivery using a data pump.
(Architecture diagram: Extract performs change capture from the transaction log into a local trail; a data pump sends the data to a remote trail on the target system; Replicat performs change delivery to the target database.)
Initial Load
Load method           Extract writes to
File to Replicat      Files
Database utility      Files
Direct load           Replicat (directly)
Direct bulk load      Replicat (directly)
Notes:
Initial Load
An initial load takes a copy of the entire source data set, transforms it if necessary,
and applies it to the target tables so that the movement of transaction data begins
from a synchronized state.
The first time that you start change synchronization will be during the initial load
process. Change synchronization keeps track of ongoing transactional changes while
the load is being applied.
Break mirror
Break from database mirroring.
Transportable tablespaces (Oracle)
Allows whole tablespaces to be copied between databases in the time it takes to copy
the datafiles.
Array fetch
GoldenGate 10.0 and later fetches 1000 rows at a time (except for LOBs), but you can
control this with the DBOPTIONS FETCHBATCHSIZE parameter.
(Diagram: Extract reads the source database and writes extract files; Replicat reads the files and applies the data to the target database; Manager runs on both systems.)
File to Replicat
Captures data directly from the source tables and writes to an Extract file for Replicat
to process.
Extract parameters
SOURCEISTABLE instructs Extract to read the source tables directly rather than
the transaction log.
To format the output for processing by Replicat, use RMTFILE.
Using Replicat provides the ability to perform additional data transformation prior to
loading the data.
Execution
You can start Extract by the GGSCI command: START EXTRACT <name>
Or from the command shell with the syntax:
extract paramfile <parameter file> [reportfile <report file>]
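A file-to-Replicat initial-load Extract parameter file might look like this sketch; the file name initext.prm, host, port, output file, and table are hypothetical:

```
-- initext.prm (hypothetical names)
SOURCEISTABLE
USERID ggsuser, PASSWORD ggspwd
RMTHOST targethost, MGRPORT 7809
RMTFILE ./dirdat/initld, MEGABYTES 2, PURGE
TABLE fin.accounts;
```

It could then be run from the shell as, for example, extract paramfile dirprm/initext.prm reportfile dirrpt/initext.rpt.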
(Diagram: Extract writes formatted files that database utilities such as SQL*Loader, BCP, or SSIS load into the target database; Manager runs on the source system.)
(Diagram: Extract reads the source database and sends the data directly to Replicat, which applies it to the target database; Manager runs on both systems.)
Direct Load
Captures data directly from the source tables and sends the data in large blocks to the
Replicat process.
Using Replicat provides the ability to perform additional data transformation prior to
loading the data.
Extract parameters
Here RMTTASK is used (instead of the RMTFILE used in the file-based methods).
RMTTASK instructs the Manager process on the target system to start a Replicat
process with a group name specified in the GROUP clause.
Execution
When you add Extract and Replicat:
SOURCEISTABLE instructs Extract to read the source tables directly rather than
the transaction log.
SPECIALRUN on Replicat specifies one-time batch processing where checkpoints are
not maintained.
The initial data load is then started using the GGSCI command START EXTRACT.
The Replicat process will be automatically started by the Manager process. The port
used by the Replicat process may be controlled using the DYNAMICPORTLIST
Manager parameter.
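A direct-load pairing might be added as in the following sketch; the group names initext and initrep, host, port, and table are hypothetical:

```
GGSCI> ADD EXTRACT initext, SOURCEISTABLE
GGSCI> ADD REPLICAT initrep, SPECIALRUN

GGSCI> EDIT PARAMS initext
-- Extract parameter file contents (hypothetical names):
EXTRACT initext
USERID ggsuser, PASSWORD ggspwd
RMTHOST targethost, MGRPORT 7809
RMTTASK REPLICAT, GROUP initrep
TABLE fin.accounts;

GGSCI> START EXTRACT initext
```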
(Diagram: for direct bulk load, Extract reads the source database and sends the data to Replicat, which loads the Oracle target through the SQL*Loader API; Manager runs on both systems.)
1. File to Replicat (Extract writes to a file for Replicat to load via SQL)
File to database utility (Extract writes to ASCII files formatted for database utilities to
load)
Direct load (Extract writes directly to Replicat, which loads via SQL)
Direct bulk load (Oracle only: Extract writes directly to Replicat, which loads through
the SQL*Loader API)
2. ADD EXTRACT with SOURCEISTABLE
ADD REPLICAT with SPECIALRUN
3. HANDLECOLLISIONS, in the Replicat parameter file for change delivery; turn it off
after the initial-load data has been processed.
(Architecture diagram: Extract performs change capture from the transaction log into a local trail; a data pump sends the data to a remote trail; Replicat performs change delivery to the target database.)
Overview
GoldenGate trails are temporary queues for the Replicat process. Each record header
in the trail provides information about the database change record. Replicat reads
these trail files sequentially, and processes inserts, updates and deletes that meet
your criteria. You can also filter out rows that you do not wish to deliver and perform
data transformation prior to applying the data.
Replicat supports a high volume of data replication activity. As a result, network
activity is block-based, not record-at-a-time. Replicat uses native calls to the database
for optimal performance. You can configure multiple Replicat processes for increased
throughput.
When replicating, Replicat preserves the boundaries of each transaction so that the
target database has the same degree of integrity as the source. Small transactions can
be grouped into larger transactions to improve performance. Replicat uses a
checkpointing scheme so changes are processed exactly once. After a graceful stop or a
failure, processing can be restarted without repetition or loss of continuity.
Change Delivery - Tasks
On the target system:
EDIT PARAMS
DBLOGIN
ADD CHECKPOINTTABLE
ADD REPLICAT
START REPLICAT
Replicat reads the GoldenGate trail and applies changes to the target database. Like
Extract, Replicat uses checkpoints to store the current read and write position and is
added and started using the processing group name.
Trail
Replicat
Target
Database
In this example:
DBLOGIN USERID and PASSWORD logs the user into the database in order to add the
checkpoint table.
Replicat parameters:
TARGETDB identifies the data source name (not required for Oracle)
USERID and PASSWORD provide the credentials to access the database
ASSUMETARGETDEFS is used when the source and target tables have the same data
definition with identical columns.
DISCARDFILE creates a log file to receive records that cannot be processed.
MAP establishes the relationship between source table and the target table.
ADD REPLICAT names the Replicat group REPORD and establishes a local trail
(EXTTRAIL) with the two-character identifier rt in the dirdat directory.
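Taken together, the delivery setup described above might be entered as in this sketch; the user, checkpoint table, group name repord, trail, and table names are hypothetical:

```
GGSCI> DBLOGIN USERID ggsuser, PASSWORD ggspwd
GGSCI> ADD CHECKPOINTTABLE ggsuser.ggschkpt
GGSCI> ADD REPLICAT repord, EXTTRAIL ./dirdat/rt

GGSCI> EDIT PARAMS repord
-- parameter file contents (hypothetical names):
REPLICAT repord
USERID ggsuser, PASSWORD ggspwd
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/repord.dsc, PURGE
MAP fin.accounts, TARGET fin.accounts;

GGSCI> START REPLICAT repord
```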
Change Delivery - Avoiding Collisions with Initial Load
If the source database remains active during an initial load, you must
either avoid or handle any collisions when updating the target with
interim changes
Avoiding Collisions
If you can backup/restore or clone the database at a point in time, you
can avoid collisions by starting Replicat to read trail records from a
specific transaction Commit Sequence Number (CSN):
START REPLICAT <group> ATCSN | AFTERCSN <csn>
If you cannot avoid collisions by the prior method, you must handle
collisions
Note: Once all of the change data generated during the load has been replicated, turn
off HANDLECOLLISIONS:
GGSCI > SEND REPLICAT <group> NOHANDLECOLLISIONS
GGSCI > EDIT PARAMS <group> to remove parameter
1. Reads change data from GoldenGate trails and applies them to a target database via
SQL commands.
2. When the source and target table structures (column order, data type and length) are
identical.
3. It uses the source definitions created by DEFGEN.
4. ADD CHECKPOINTTABLE (optional)
EDIT PARAMS
ADD REPLICAT
START REPLICAT
5. ADD CHECKPOINTTABLE (must be logged into database).
6. Identifies operations that could not be processed by Replicat.
7. Review Architecture slide for change capture & delivery (without data pump).
When transporting trails via TCP/IP, a Server Collector process on the target
platform collects, writes, and checkpoints blocks of records in one or more extract
files.
Extract Trails and Files - Contents
Each record in the trail contains an operation that has been
committed in the source database
Transactions are output in commit order
Operations in a transaction are grouped together, in the order
they were applied
- By default, only the primary key and changed columns are recorded
Trail cleanup
If one Replicat is configured to process the trail, you can instruct Replicat to purge
the data once it has been consumed. If multiple Replicat processes are configured
against a single trail, you can instruct Manager to purge trail data as soon as all
checkpoints have been resolved. As long as the replication processes keep pace, your
temporary storage requirements for trails can be kept quite low.
GoldenGate Trails
Trail files are unstructured files containing variable-length records, and they are
written in large blocks for best performance.
Checkpoints
Both Extract and Replicat maintain checkpoints into the trails. Checkpoints provide
persistent processing whenever a failure occurs. Each process resumes where the last
checkpoint was saved, guaranteeing that no data is lost. One Extract can write to one to
many trails. Each trail can then be processed by one or many Replicat processes.
GoldenGate Data Format - File Header
Each trail file has a file header that contains:
Trail file information
Compatibility level
Character set
Creation time
File sequence number
File size
Extract information
GoldenGate version
Group name
Host name
Hardware type
OS type and version
DB type, version and character set
The input and output trails of a data pump must have the same
compatibility level
Each database management system generates some kind of unique serial number of
its own at the completion of each transaction, which uniquely identifies that
transaction. A CSN captures this same identifying information and represents it
internally as a series of bytes. A comparison of any two CSN numbers, each of which
is bound to a transaction commit record in the same log stream, reliably indicates the
order in which the two transactions completed. However, because the CSN is
processed in a platform-independent manner, it supports faster, more efficient
heterogeneous replication than using the database-supplied identifier.
All database platforms except Oracle, DB2 LUW, and DB2 z/OS have fixed-length
CSNs, which are padded with leading zeroes as required to fill the fixed length. CSNs
that contain multiple fields can be padded within each field, such as the Sybase CSN.
CSN values for different databases:
Ingres
<LSN-high>:<LSN-low>
Where:
<LSN-high> is the newest log file in the range. Up to 4 bytes padded with leading
zeroes.
<LSN-low> is the oldest log file in the range. Up to 4 bytes padded with leading
zeroes.
The valid range of a 4-byte integer is 0 to 4294967295.
The two components together comprise the Ingres LSN.
Example:
1206396546:43927
Oracle
<system change number>
Where: <system change number> is the Oracle SCN value.
Example: 6488359
Sybase
<time_high>.<time_low>.<page>.<row>
Where:
<time_high> and <time_low> comprise a sequence number representing the time
when the transaction was committed. It is stored in the header of each database log
page. <time_high> is 2-bytes and <time_low> is 4-bytes, both without leading
zeroes.
<page> is the data page, without leading zeroes.
<row> is the row, without leading zeroes.
The valid range of a 2-byte integer for a timestamp-high is 0 - 65535. For a 4-byte
integer for a timestamp-low, it is: 0 - 4294967295.
Example: 12245.67330.12.345
DB2 LUW
<LSN>
Where:
<LSN> is the decimal-based DB2 log sequence number, without leading zeroes.
Example: 1234567890
DB2 z/OS
<RBA>
Where:
<RBA> is a 6-byte relative byte address of the commit record within the transaction
log.
Example: 1274565892
SQL Server
Can be any of these, depending on how the database returns it:
- Colon separated hex string (8:8:4) padded with leading zeroes and 0X prefix
- Colon separated decimal string (10:10:5) padded with leading zeroes
- Colon separated hex string with 0X prefix and without leading zeroes
- Colon separated decimal string without leading zeroes
- Decimal string
Where the first value is the virtual log file number, the second is the segment number
within the virtual log, and the third is the entry number.
Examples:
0X00000d7e:0000036b:01bd
0000003454:0000000875:00445
0Xd7e:36b:1bd
3454:875:445
3454000000087500445
c-tree
<log number>.<byte offset>
Where:
<log number> is the 10-digit decimal number of the c-tree log file, left padded with
zeroes.
<byte offset> is the 10-digit decimal relative byte position from the beginning of the
file (0 based), left padded with zeroes.
Example: 0000000068.0000004682
SQL/MX
<sequence number>.<RBA>
Where:
<sequence number> is the 6-digit decimal NonStop TMF audit trail sequence
number, left padded with zeroes.
<RBA> is the 10-digit decimal relative byte address within that file, left padded with
zeroes.
Together these specify the location in the TMF Master Audit Trail (MAT).
Example: 000042.0000068242
Teradata
<sequence ID>
Where:
<sequence ID> is the generic VAM fixed-length printable sequence ID.
Example: 0x0800000000000000D700000021
Record Header
Use the Logdump utility to examine the record header. Here is a layout of the header
record:
Hdr-Ind: Always E, indicating that the record was created by the Extract process.
UndoFlag: (NonStop) Normally, UndoFlag is set to zero, but if the record is the
backout of a previously successful operation, then UndoFlag will be set to 1.
RecLength: The length, in bytes, of the record buffer.
IOType: The type of operation represented by the record.
TransInd: The place of the record within the current transaction. Values are 0 for the
first record in transaction; 1 for neither first nor last record in transaction; 2 for the
last record in the transaction; and 3 for the only record in the transaction.
SyskeyLen: (NonStop) The length of the system key (4 or 8 bytes) if the source is a
NonStop file and has a system key.
AuditRBA: The relative byte address of the commit record.
Continued: (Windows and UNIX) Identifies (Y/N) whether or not the record is a
segment of a larger piece of data that is too large to fit within one record, such as
LOBs.
Partition: Depends on the record type and is used for NonStop records. In the case of
BulkIO operations, Partition indicates the number of the source partition on which
the bulk operation was performed. For other non-bulk NonStop operations, the value
can be either 0 or 4. A value of 4 indicates that the data is in FieldComp format.
BeforeAfter: Identifies whether the record is a before (B) or after (A) image of an
update operation.
OrigNode: (NonStop) The node number of the system where the data was extracted.
FormatType: Identifies whether the data was read from the transaction log (R) or
fetched from the database (F).
AuditPos: Identifies the position of the Extract process in the transaction log.
RecCount: (Windows and UNIX) Used to reassemble LOB data when it must be split
into 2K chunks to be written to the GoldenGate file.
Alternative Formats
FORMATASCII
FORMATSQL
FORMATXML
FORMATASCII, BCP
FORMATASCII, SQLLOADER
Default output
Without options, FORMATASCII generates records in the following format.
Line 1, the following tab-delimited begin-transaction record:
The begin-transaction indicator, B.
The timestamp at which the transaction committed.
The sequence number of the commit in the transaction log.
The relative byte address (RBA) of the commit record within the transaction log.
A newline character (starts a new line).
Line 2, a tab-delimited record for each operation in the transaction:
The operation-type indicator: I, D, U, V (insert, delete, update, compressed
update).
A before or after image indicator: B or A.
The table name.
A column name, column value, column name, column value, and so forth.
A newline character.
Line 3, the following tab-delimited commit record:
The commit character C.
A newline character.
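As a sketch, an initial-load Extract that writes ASCII-formatted output for SQL*Loader might use parameters like the following; the host, port, output file, and table names are hypothetical:

```
-- hypothetical initial-load Extract parameter file
SOURCEISTABLE
USERID ggsuser, PASSWORD ggspwd
RMTHOST targethost, MGRPORT 7809
FORMATASCII, SQLLOADER
RMTFILE ./dirdat/ldfile, PURGE
TABLE fin.accounts;
```

Note that the FORMATASCII specification precedes the RMTFILE entry so that it applies to that output file.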
BCP
Formats the output for compatibility with SQL Server's BCP (Bulk Copy Program) or
the SSIS high-speed load utility.
COLHDRS
Outputs the table's column names before the data. COLHDRS takes effect only when
the extract is directly from the table.
DATE | TIME | TS
Specifies one of the following timestamp formats:
DATE: year to day
TIME: year to second
TS (transaction timestamp): year to fraction
DELIMITER
Use an alternative field delimiter (the default is tab).
EXTRACOLS
Includes placeholders for additional columns at the end of each record. Use this when
a target table has more columns than the source.
NAMES | NONAMES
Includes (NAMES) or excludes (NONAMES) column names in the output. For
compressed records, column names are included unless you also specify PLACEHOLDERS.
NOHDRFIELDS
Suppresses output of transaction information, the operation type character, the
before or after image indicator, and the file or table name.
NOQUOTE
Excludes quotation marks from character-type data.
Each transaction in the output contains the SQL statements, the commit indicator C,
and a newline indicator.
Scope of a transaction
Every record in a transaction is contained between the begin and commit indicators.
Each combination of commit timestamp and RBA is unique.
Alternative Formats: FORMATSQL Syntax
FORMATSQL
[, NONAMES ]
[, NOPKUPDATES ]
[, ORACLE ]
NONAMES
Omits column names for insert operations, because inserts contain all column names.
This option conserves file size.
NOPKUPDATES
Converts UPDATE operations that affect columns in the target primary key to a
DELETE followed by an INSERT. By default (without NOPKUPDATES), the output
is a standard UPDATE operation.
ORACLE
Formats records for compatibility with Oracle databases by converting date and time
columns to a format accepted by SQL*Plus. For example: TO_DATE('1996-05-01','YYYY-MM-DD')
Alternative Formats: FORMATSQL Sample Output
B,2008-11-11:13:48:49.000000,1226440129,155,
DELETE FROM TEST.TCUSTMER WHERE CUST_CODE='JANE';
DELETE FROM TEST.TCUSTMER WHERE CUST_CODE='WILL';
DELETE FROM TEST.TCUSTORD WHERE CUST_CODE='JANE' AND
ORDER_DATE='1995-11-11:13:52:00' AND PRODUCT_CODE='PLANE' AND
ORDER_ID='256';
DELETE FROM TEST.TCUSTORD WHERE CUST_CODE='WILL' AND
ORDER_DATE='1994-09-30:15:33:00' AND PRODUCT_CODE='CAR' AND
ORDER_ID='144';
INSERT INTO TEST.TCUSTMER (CUST_CODE,NAME,CITY,STATE) VALUES
('WILL','BG SOFTWARE CO.','SEATTLE','WA');
INSERT INTO TEST.TCUSTMER (CUST_CODE,NAME,CITY,STATE) VALUES
('JANE','ROCKY FLYER INC.','DENVER','CO');
INSERT INTO TEST.TCUSTORD
(CUST_CODE,ORDER_DATE,PRODUCT_CODE,ORDER_ID,PRODUCT_PRICE,PRODUCT_AMOUNT,TRANSACTION_ID)
VALUES ('WILL','1994-09-30:15:33:00','CAR','144',17520.00,3,'100');
INSERT INTO TEST.TCUSTORD
(CUST_CODE,ORDER_DATE,PRODUCT_CODE,ORDER_ID,PRODUCT_PRICE,PRODUCT_AMOUNT,TRANSACTION_ID)
VALUES ('JANE','1995-11-11:13:52:00','PLANE','256',133300.00,1,'100');
C,
INLINEPROPERTIES | NOINLINEPROPERTIES
Controls whether or not properties are included within the XML tag or written
separately. INLINEPROPERTIES is the default.
TRANS | NOTRANS
Controls whether or not transaction boundaries and commit timestamps should be
included in the XML output. TRANS is the default.
Viewing in Logdump
Logdump
The Logdump utility allows you to:
Display or search for information that is stored in GoldenGate
trails or files
Save a portion of a GoldenGate trail to a separate trail file
Logdump overview
Logdump provides access to GoldenGate trails, which are unstructured files with
variable-length records. Each record in the trail contains a header, known as the GGS
header (unless the NOHEADERS Extract parameter was used), an optional user token
area, and the data area.
Logdump Starting and Getting Online Help
To start Logdump - from the GoldenGate installation directory:
Shell> logdump
To get help:
Logdump 1 > help
The Logdump utility is documented in the:
Oracle GoldenGate Troubleshooting and Tuning Guide
HISTORY - List previous commands
OPEN | FROM <filename> - Open a log file
RECORD | REC - Display audit record
NEXT [<count>] - Display next data record
SKIP [<count>] - Skip down <count> records
COUNT - Count the records in the file
  [START[time] <timestr>,] [END[time] <timestr>,]
  [INT[erval] <minutes>,] [LOG[trail] <wildcard-template>,]
  [FILE <wildcard-template>,] [DETAIL]
  <timestr> format is [[yy]yy-mm-dd] [hh[:mm][:ss]]
POSITION [<rba> | FIRST] - Set position in file
RECLEN [<size>] - Set max output length
EXIT | QUIT - Exit the program
FILES | FI | DIR - Display filenames
ENV - Show current settings
Logdump Opening a Trail
Logdump> open dirdat/rt000000
Logdump responds with:
Current LogTrail is /ggs/dirdat/rt000000
Opening a trail
The syntax to open a trail is:
OPEN <file_name>
Where: <file_name> is either the relative name or fully qualified name of the file,
including the file sequence number.
(Logdump output: the trail file header record, Len 587 at RBA 0, shown in hex and ASCII; the ASCII column includes tokens such as uri:tellurian::home:mccargar:ggs:ggs, Oracle:source, and the trail name /dirdat/er000000.)
(Logdump record display, annotated: the I/O type; the operation type and time the record was written; the source table; the image type, which could be a before or after image; column information; and the record data in hex and in ASCII.)
GoldenGate trail files are unstructured. The GoldenGate record header provides
metadata of the data contained in the record and includes the following information.
The operation type, such as an insert, update, or delete
The transaction indicator (TransInd): 00 beginning, 01 middle, 02 end or 03 whole of
transaction
The before or after indicator for updates
Transaction information, such as the transaction group and commit timestamp
The time that the change was written to the GoldenGate file
The type of database operation
The length of the record
The relative byte address within the GoldenGate file
The table name
The change data is shown in hex and ASCII format. If before images are configured to
be captured, for example to enable a procedure to compare before values in the WHERE
clause, then a before image also would appear in the record.
COUNT output
The basic output, without options, shows the following:
- The RBA where the count began
- The number of records in the file
- The total data bytes and average bytes per record
- Information about the operation types
- Information about the transactions
Logdump Counting Records in the Trail (cont'd)
TCUSTMER   Total Data Bytes 10562   Avg Bytes/Record 55
           Delete 300   Insert 1578   FieldComp 12
           Before Images 300   After Images 1590
TCUSTORD   Total Data Bytes 229178   Avg Bytes/Record 78
           Delete 600   Insert 2324   FieldComp 14
           Before Images 600   After Images 2338
COUNT Syntax:
COUNT
[, DETAIL]
[, END[TIME] <time_string>]
[, FILE <specification>]
[, INT[ERVAL] <minutes>]
[, LOG <wildcard>]
[, START[TIME] <time_string>]
COUNT options allow you to show table detail without using the DETAIL command
first, set a start and end time for the count, filter the count for a table, data file,
trail file, or extract file, and specify a time interval for counts.
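For example, a count limited to one table pattern and a time window might look like this sketch; the table pattern and dates are hypothetical:

```
Logdump> COUNT FILE TCUST*, DETAIL, STARTTIME 2009-10-01 00:00, ENDTIME 2009-10-02 00:00
```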
Logdump Filtering on a Filename
Logdump 7 >filter include filename TCUST*
Logdump 8 >filter match all
Logdump 9 >n
________________________________________________________________
Hdr-Ind    : E (x45)     Partition  : . (x00)
UndoFlag   : . (x00)     BeforeAfter: A (x41)
RecLength  : 56 (x0038)  IO Time    : 2002/04/30 15:56:40.814
IOType     : 5 (x05)     OrigNode   : 108 (x6c)
TransInd   : . (x01)     FormatType : F (x46)
SyskeyLen  : 0 (x00)     Incomplete : . (x00)
AuditRBA   : 105974056
2002/04/30 15:56:40.814 Insert  Len 56  Log RBA 1230
File: TCUSTMER  Partition 0
After Image:
3220 2020 4A61 6D65 7320 2020 2020 4A6F 686E 736F | 2   James     Johnso
6E20 2020 2020 2020 2020 2020 2020 4368 6F75 6472 | n             Choudr
616E 7420 2020 2020 2020 2020 2020 4C41           | ant           LA
Filtering suppressed 18 records
FILENAME specifies a SQL table, NonStop data file or group of tables/files using
a wildcard.
You can string multiple FILTER commands together, separating each one with a
semi-colon, as in:
FILTER INCLUDE FILENAME fin.act*; FILTER RECTYPE 5;
FILTER MATCH ALL
To avoid unexpected results, avoid stringing filter options together with one
FILTER command. For example, the following would be incorrect:
FILTER INCLUDE FILENAME fin.act*; RECTYPE 5; MATCH ALL
Without arguments, FILTER displays the current filter status (ON or OFF) and any
filter criteria that are in effect.
Hdr-Ind    : E (x45)     Partition  : . (x00)
UndoFlag   : . (x00)     BeforeAfter: B (x42)
RecLength  : 56 (x0038)  IO Time    : 2002/04/30 16:22:14.205
IOType     : 3 (x03)     OrigNode   : 108 (x6c)
TransInd   : . (x01)     FormatType : F (x46)
SyskeyLen  : 0 (x00)     Incomplete : . (x00)
AuditRBA   : 109406324
Filtering suppressed 545 records
Use SAVE to write a subset of the records to a new trail or extract file. By saving a
subset to a new file, you can work with a smaller file. Saving to another file also
enables you to extract valid records that can be processed by GoldenGate, while
excluding records that may be causing errors.
Options allow you to overwrite an existing file, save a specified number of records or
bytes, suppress comments, use the old or new trail format, set the transaction
indicator (first, middle, end, only), and clean out an existing file before writing new
data to it.
Syntax
SAVE <file_name> [!] {<n> records | <n> bytes} [NOCOMMENT]
[OLDFORMAT | NEWFORMAT]
[TRANSIND <indicator>]
[TRUNCATE]
Logdump Keeping a Log of Your Session
Logdump> log to MySession.txt
When finished
Logdump> log stop
Use LOG to start and stop the logging of Logdump sessions. When enabled, logging
remains in effect for all sessions of Logdump until disabled with the LOG STOP
command. Without arguments, LOG displays the status of logging (ON or OFF). An
alias for LOG is OUT.
Syntax
LOG {<file_name> | STOP}
Logdump Commands
(Table: Logdump commands grouped by purpose - viewing information, making conversions, controlling the environment, and miscellaneous commands such as HISTORY and OBEY.)
Reverse - Overview
The Reverse utility reorders operations within GoldenGate trails in
reverse sequence:
Provides selective back out of operations
(Diagram: Extract ext1 reads the source transaction log or GoldenGate trails using SPECIALRUN TRANLOG, BEGIN and END times, GETUPDATEBEFORES, NOCOMPRESSDELETES, filter criteria (if any), and TABLE statements, writing to an EXTFILE or RMTFILE as a single file or a series of files; the Reverse utility takes these files as input and writes reversed output files, which Replicat rep1 applies to the source database.)
SPECIALRUN indicates a one-time batch process that will run from BEGIN date and
time until END date and time.
TRANLOG specifies the transaction log as the data source.
GETUPDATEBEFORES is used to include before images of update records, which
contain record details before an update (as opposed to after images).
NOCOMPRESSDELETES causes Extract to send all column data to the output, instead of
only the primary key, which enables deletes to be converted back to inserts.
END RUNTIME causes the Extract or Replicat to terminate when it reaches process
startup time.
1. A trail is a series of files on disk where GoldenGate stores data for further processing.
2. GoldenGate trail format, ASCII, SQL, XML
3. Logdump
Parameters
Parameters - Overview
Editing parameter files
GLOBALS versus process parameters
GLOBALS parameters
Manager parameters
Extract parameters
Replicat parameters
GLOBALS Parameters
Control things common to all processes in a GoldenGate instance
Can be overridden by parameters at the process level
Must be created before any processes are started
Stored in <GoldenGate install directory>/GLOBALS
(GLOBALS is uppercase, no extension)
Must exit GGSCI to save
Once set, rarely changed
Parameters most commonly used
MGRSERVNAME ggsmanager1
Defines a unique Manager service name on Windows systems
CHECKPOINTTABLE dbo.ggschkpt
Defines the table name used for the Replicat's checkpoint table
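Putting the two most common entries together, a minimal GLOBALS file (the names shown are the illustrative values used above) might contain:

```
MGRSERVNAME ggsmanager1
CHECKPOINTTABLE dbo.ggschkpt
```

Remember that the file must be named GLOBALS (uppercase, no extension) in the GoldenGate install directory, and you must exit GGSCI for the settings to take effect.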
Manager Parameters
Sample Manager Parameter File
PORT 7809
DYNAMICPORTLIST 9001-9100
AUTOSTART ER *
AUTORESTART EXTRACT *, WAITMINUTES 2, RETRIES 5
LAGREPORTHOURS 1
LAGINFOMINUTES 3
LAGCRITICALMINUTES 5
PURGEOLDEXTRACTS /ggs/dirdat/rt*, USECHECKPOINTS
PORT establishes the TCP/IP port number on which Manager listens for requests.
DYNAMICPORTLIST specifies the ports that Manager can dynamically allocate.
AUTOSTART specifies processes that are to be automatically started when Manager
starts.
AUTORESTART specifies processes to be restarted after abnormal termination.
LAGREPORTHOURS sets the interval in hours at which Manager checks lag for Extract
and Replicat processing. Alternately can be set in minutes.
LAGINFOMINUTES specifies the interval at which Extract and Replicat will send an
informational message to the event log. Alternately can be set in seconds or hours.
LAGCRITICALMINUTES specifies the interval at which Extract and Replicat will send a
critical message to the event log. Alternately can be set in seconds or hours.
PURGEOLDEXTRACTS purges GoldenGate trails that are no longer needed based on
the option settings.
Manager Parameters
Purpose: Examples
General: COMMENT
Port management: PORT, DYNAMICPORTLIST
Process management: AUTOSTART, AUTORESTART
Event management
Database login: SOURCEDB, USERID
Maintenance: PURGEOLDEXTRACTS
COMMENT
Port Management
DYNAMICPORTLIST Specifies the ports that Manager can dynamically allocate.
DYNAMICPORTREASSIGNDELAY Specifies a time to wait before reassigning a port.
PORT Establishes the TCP/IP port number on which Manager listens for requests.
Process Management
AUTOSTART
AUTORESTART
USERID
Maintenance
CHECKMINUTES
Extract Parameters
Extract Parameter Overview
Extract parameters specify:
USERID and PASSWORD supply database credentials. (SOURCEDB is not required for
Oracle.)
RMTHOST specifies the target system while the MGRPORT option specifies the port
where Manager is running.
RMTTRAIL specifies the GoldenGate trail on the target system.
TABLE specifies a source table for which activity will be extracted.
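These notes describe the pieces of a typical online Extract parameter file. A sketch of such a file (the group, credentials, host, trail, and table names are illustrative, not from the original slide) might look like:

```
EXTRACT EXTORD
USERID ggsuser, PASSWORD ggspass
RMTHOST targethost, MGRPORT 7809
RMTTRAIL /ggs/dirdat/rt
TABLE SALES.ORDERS;
```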
Extract Parameters
Purpose: Examples
General: SOURCEDB, USERID
Selecting and mapping data
Routing data
Formatting data
General
CHECKPARAMS
COMMENT
ETOLDFORMAT
GETENV
Reporting
Error handling: DISCARDFILE, DDLERROR
Tuning
Maintenance: PURGEOLDEXTRACTS, REPORTROLLOVER
Security: ENCRYPTTRAIL, DECRYPTTRAIL
OBEY
FETCHOPTIONS
Custom Processing
CUSEREXIT Invokes customized user exit routines at specified points during
processing.
INCLUDE
MACRO
MACROCHAR
SQLEXEC
Tuning
ALLOCFILES
Security
DECRYPTTRAIL
ENCRYPTTRAIL | NOENCRYPTTRAIL
FETCHMODCOLS | FETCHMODCOLSEXCEPT
Forces column values to be fetched from the database when the columns are present
in the transaction log.
FILTER Selects records based on a numeric value. FILTER provides more flexibility
than WHERE.
KEYCOLS Designates columns that uniquely identify rows.
SQLEXEC Executes stored procedures and queries.
SQLPREDICATE Enables a WHERE clause to select rows for an initial load.
TOKENS Defines user tokens.
TRIMSPACES | NOTRIMSPACES
Controls whether trailing spaces are trimmed or not when mapping CHAR to
VARCHAR columns.
WHERE Selects records based on conditional operators.
Extract TRANLOGOPTIONS Parameter
Use the TRANLOGOPTIONS parameter to control database-specific
aspects of log-based extraction
Examples: Controlling the archive log
TRANLOGOPTIONS ALTARCHIVEDLOGFORMAT log_%t_%s_%r.arc
Specifies an alternative archive log format.
TRANLOGOPTIONS ALTARCHIVELOGDEST /oradata/archive/log2
Specifies an alternative archive log location.
TRANLOGOPTIONS ARCHIVEDLOGONLY
Causes Extract to read from the archived logs exclusively.
Replicat Parameters
Replicat Parameter Overview
Replicat parameters specify:
Group name associated with a checkpoint file
List of source to target relationships
Error handling
Various optional parameter settings
The Replicat process runs on the target system, reads the extracted data, and replicates
it to the target tables. Replicat reads extract and log files sequentially, and processes the
inserts, updates and deletes specified by selection parameters. Replicat reads extracted
data in blocks to maximize throughput.
Optionally, you can filter out the rows you do not wish to deliver, as well as perform data
transformation prior to replicating the data. Parameters control the way Replicat
processes how it maps data, uses functions, and handles errors.
You can configure multiple Replicat processes for increased throughput and identify
each by a different group name.
Replicat supports a high volume of data replication activity. As a result, network activity
is block-based rather than a record-at-a-time. SQL operations used to replicate
operations are compiled once and execute many times, resulting in virtually the same
performance as pre-compiled operations.
Replicat preserves the boundaries of each transaction while processing, but small
transactions can be grouped into larger transactions to improve performance. Like
Extract, Replicat uses checkpoints so that after a graceful stop or a failure, processing
can be restarted without repetition or loss of continuity.
Sample Replicat Parameter File
REPLICAT SALESRPT
USERID ggsuser, PASSWORD ggspass
ASSUMETARGETDEFS
DISCARDFILE /ggs/dirrpt/SALESRPT.dsc, APPEND
MAP HR.STUDENT, TARGET HR.STUDENT
WHERE (STUDENT_NUMBER < 400000);
MAP HR.CODES, TARGET HR.CODES;
MAP SALES.ORDERS, TARGET SALES.ORDERS,
WHERE (STATE = "CA" AND OFFICE = "LA");
REPLICAT names the group linking together the process, checkpoints, and log files.
USERID, PASSWORD provide credentials to access the database.
ASSUMETARGETDEFS specifies that the table layout is identical on the source and
target.
DISCARDFILE identifies the file to receive records that cannot be processed. Records
will be appended or the file will be purged at the beginning of the run depending on the
options.
MAP links the source tables to the target tables and applies mapping, selection, error
handling, and data transformation depending on options.
Replicat Parameters
Purpose: Examples
General
Processing method
Database login: SOURCEDB, USERID
Routing data: EXTFILE, EXTTRAIL
Custom processing
Reporting
Error handling
Tuning
Maintenance: PURGEOLDEXTRACTS, REPORTROLLOVER
Security: DECRYPTTRAIL
General
CHECKPARAMS
COMMENT
GETENV
OBEY
Database Login
TARGETDB
USERID
SPACESTONULL | NOSPACESTONULL
Controls whether or not a target column containing only spaces is
converted to NULL on Oracle.
TABLE (Replicat)
Specifies a table or tables for which event actions are to take place
when a row satisfies the given filter criteria.
TRIMSPACES | NOTRIMSPACES
Controls if trailing spaces are preserved or removed for character
or variable character columns.
UPDATEDELETES | NOUPDATEDELETES
Changes deletes to updates.
USEDATEPREFIX Prefixes data values for DATE data types with a DATE literal, as
required by Teradata databases.
USETIMEPREFIX Prefixes data values for TIME data types with a TIME literal, as
required by Teradata databases.
USETIMESTAMPPREFIX Prefixes data values for TIMESTAMP data types with a
TIMESTAMP literal, as required by Teradata databases.
Routing Data
EXTFILE
EXTTRAIL
Custom Processing
CUSEREXIT Invokes customized user exit routines at specified points during
processing.
DEFERAPPLYINTERVAL Specifies a length of time for Replicat to wait before applying
replicated operations to the target database.
INCLUDE References a macro library in a parameter file.
MACRO Defines a GoldenGate macro.
SQLEXEC Executes a stored procedure, query or database command during
Replicat processing.
Reporting
CMDTRACE
LIST | NOLIST
REPORT
REPORTCOUNT
SHOWSYNTAX
STATOPTIONS
TRACE | TRACE2
Error Handling
CHECKSEQUENCEVALUE | NOCHECKSEQUENCEVALUE
(Oracle) Controls whether or not Replicat verifies that a target
sequence value is higher than the one on the source and corrects
any disparity that it finds.
DDLERROR
Controls error handling for DDL replication.
DISCARDFILE
Contains records that could not be processed.
HANDLECOLLISIONS | NOHANDLECOLLISIONS
Handles errors for duplicate and missing records. Reconciles the
results of changes made to the target database by an initial load
process with those applied by a change-synchronization group.
HANDLETPKUPDATE
Prevents constraint errors associated with replicating
transient primary key updates.
OVERRIDEDUPS | NOOVERRIDEDUPS
Overlays a replicated insert record onto an existing target record
whenever a duplicate-record error occurs.
RESTARTCOLLISIONS | NORESTARTCOLLISIONS
Controls whether or not Replicat applies HANDLECOLLISIONS
logic after GoldenGate has exited because of a conflict.
REPERROR
Determines how Replicat responds to database errors.
REPFETCHEDCOLOPTIONS
Determines how Replicat responds to operations
for which a fetch from the source database was required.
SQLDUPERR
When OVERRIDEDUPS is on, specifies the database error
number that indicates a duplicate-record error.
WARNRATE
Determines how often database errors are reported.
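As a sketch of how these error-handling parameters combine in a Replicat parameter file (the error number and responses shown are one plausible policy, not a recommendation from the original text):

```
REPERROR (DEFAULT, ABEND)
REPERROR (1403, DISCARD)
DISCARDFILE /ggs/dirrpt/rep1.dsc, APPEND
```

Here the default response to a database error is to abend, while Oracle error 1403 (no data found) sends the record to the discard file instead.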
Tuning
ALLOCFILES
RETRYDELAY
The MAP parameter establishes a relationship between one source and one target table.
Insert, update and delete records originating in the source table are replicated in the
target table. The first <table spec> is the source table.
With MAP, you can replicate particular subsets of data to the target table, for example,
WHERE (STATE = "CA"). In addition, MAP enables the user to map certain fields or
columns from the source record into the target record format (column mapping). You
can also include a FILTER clause that uses built-in functions to evaluate data against
more complex filtering criteria.
A table can appear in multiple maps as either source or target. For example, one might
replicate a sales file to either the east sales or west sales tables, depending on
some value or type of operation.
MAP <table spec> Specifies the source object.
TARGET <table spec> Specifies the target object.
DEF <definitions template> Specifies a source-definitions template.
TARGETDEF <definitions template> Specifies a target-definitions template.
COLMAP Maps records between different source and target columns.
EVENTACTIONS (<action>) Triggers an action based on a record that satisfies a
specified filter criterion or (if no filter condition) on every record.
EXCEPTIONSONLY Specifies error handling within an exceptions MAP statement.
EXITPARAM Passes a parameter in the form of a literal string to a user exit.
FILTER Selects records based on a numeric operator. FILTER provides more flexibility
than WHERE.
HANDLECOLLISIONS | NOHANDLECOLLISIONS
Reconciles the results of changes made to the target table by an initial load process with
those applied by a change-synchronization group.
INSERTALLRECORDS Applies all row changes as inserts.
INSERTAPPEND | NOINSERTAPPEND Controls whether or not Replicat uses an
APPEND hint when applying INSERT operations to Oracle target tables.
KEYCOLS Designates columns that uniquely identify rows.
REPERROR Controls how Replicat responds to errors when executing the MAP
statement.
SQLEXEC Executes stored procedures and queries.
TRIMSPACES | NOTRIMSPACES
Controls whether trailing spaces are trimmed or not when mapping CHAR to VARCHAR
columns.
WHERE Selects records based on conditional operators.
Selects
Table: TABLE or MAP
Row: WHERE, FILTER
Columns: COLS | COLSEXCEPT
TABLE selection
The MAP (Replicat) or TABLE (Extract) parameter can be used to select a table.
MAP sales.tcustord, TARGET sales.tord;
ROWS selection
The following WHERE option can be used with MAP or TABLE to select rows for AUTO
product type.
WHERE (PRODUCT_TYPE = "AUTO");
OPERATIONS selection
The following can be used with MAP or TABLE to select rows with amounts greater
than zero only for update and delete operations.
FILTER (ON UPDATE, ON DELETE, amount > 0);
COLUMNS selection
The COLS and COLSEXCEPT options of the TABLE parameter allow selection of columns
as shown in the example below. Use COLS to select columns for extraction, and
use COLSEXCEPT to select all columns except those designated, for
example:
TABLE sales.tcustord, TARGET sales.tord, COLSEXCEPT
(facility_number);
Use the FILTER clause for more complex selections with built-in
functions
Example elements:
Column names: PRODUCT_AMT
Comparison operators: =, <>, >, <, >=, <=
Numeric values: -123, 5500.123
Literal strings: "AUTO", "Ca"
Built-in column tests: @NULL, @PRESENT, @ABSENT
Conjunction operators: AND, OR
Arithmetic operators and floating-point data types are not supported by WHERE.
Data Selection WHERE Clause Examples
Only rows where the state column has a value of CA are returned.
WHERE (STATE = "CA");
Only rows where the amount column has a value of NULL. Note
that if amount was not part of the update, the result is false.
WHERE (AMOUNT = @NULL);
Only rows where the amount was part of the operation and has a
value that is not null.
WHERE (AMOUNT @PRESENT AND AMOUNT <> @NULL);
When multiple filters are specified for a given TABLE or MAP entry, the filters are
executed until one fails or until all are passed. The failure of any filter results in a
failure for all filters.
Filters can be qualified with operation type so you can specify different filters for
inserts, updates and deletes.
The FILTER RAISEERROR option generates a user-defined error number if the filter clause
is true. In the following example, error 9999 is generated when the BEFORE
timestamp is earlier than the CHECK timestamp. This also selects only update
operations.
FILTER (ON UPDATE, BEFORE.TIMESTAMP < CHECK.TIMESTAMP, RAISEERROR
9999);
Why is the example above not constructed like the one below? Because FILTER
evaluates numeric values, the string comparison below will fail!
FILTER (NAME = "JOE");
Example
TABLE SALES.ACCOUNT, FILTER (@RANGE (1,3));
@RANGE helps divide workload into multiple, randomly distributed groups of data,
while guaranteeing that the same row will always be processed by the same process.
For example, @RANGE can be used to split the rows of a heavily accessed table into
different key ranges that are processed by different Replicat processes.
The user specifies both a range that applies to the current process, and the total
number of ranges (generally the number of processes).
@RANGE computes a hash value of all the columns specified or, if no columns are
specified, of the primary key columns of the source table. The remainder of the hash
value divided by the total number of ranges is compared with the range owned by the
process to determine whether @RANGE returns true or false. Note that the total
number of ranges will be adjusted internally to optimize even distribution across the
number of ranges.
Restriction: @RANGE cannot be used if primary key updates are performed on the
database.
The example above demonstrates three Replicat processes, with each Replicat group
processing one-third of the data in the GoldenGate trail based on the primary key.
Two tables, REP and ACCOUNT, related by REP_ID, require three Replicats
to handle the transaction volumes
Both of these MAP statements select the first of 2 ranges, so this Replicat is processing
the first range for both the ORDER table and the TRANSACTION table. Another
Replicat would include parameters to process the second of the ranges.
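The pair of MAP statements being described might look like this sketch (the schema name is illustrative; the range arguments assume the first of two ranges, as stated above):

```
MAP SALES.ORDER, TARGET SALES.ORDER, FILTER (@RANGE (1, 2));
MAP SALES.TRANSACTION, TARGET SALES.TRANSACTION, FILTER (@RANGE (1, 2));
```

A second Replicat would use @RANGE (2, 2) in its corresponding MAP statements to process the remaining range.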
Column Mapping
Extract and Replicat provide the capability to transform data between two
dissimilarly structured database tables or files. These features are implemented with
the COLMAP clause in the TABLE and MAP parameters.
Data Type Conversions
Numeric fields are converted from one type and scale to match the type and scale of
the target. If the scale of the source is larger than that of the target, the number is
truncated on the right. If the target scale is larger than the source, the number is
padded with zeros.
Varchar and character fields can accept other character, varchar, group, and datetime
fields, or string literals enclosed in quotes. If the target character field is smaller than
that of the source, the character field is truncated on the right.
Syntax
COLMAP ([USEDEFAULTS,] <target field> = <source expression>);
For COLMAP:
<target field> is the name of a column/field in the target table
<source expression> is one of the following:
Numeric constant, such as 123
String constant enclosed in quotes, such as "ABCD"
The name of a source field or column, such as DATE1.YY
A function expression, such as @STREXT (COL1,1, 3)
Note: Function expressions are used to manipulate data, and begin with the at sign
(@).
Default Mapping
When you specify USEDEFAULTS, the process maps columns in the source table to
columns in the target with the same name. This can be useful when the source and
target definitions are similar but not identical.
Note: If you set up global column mapping rules with COLMATCH parameters, you
can map columns with different names to each other using default mapping.
This example:
Moves the HR.CONTACT CUST_NAME column value to the HR.PHONE NAME
column
Concatenates the HR.CONTACT AREA_CODE, PH_PREFIX and PH_NUMBER columns
with quote and hyphen literals to derive the PHONE_NUMBER column value
Automatically maps other HR.CONTACT columns to the HR.PHONE columns that
have the same name.
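A sketch of the MAP statement being described (the notes mention quote and hyphen literals; for simplicity this sketch joins the parts with hyphens only, and takes the column names from the notes):

```
MAP HR.CONTACT, TARGET HR.PHONE,
COLMAP (USEDEFAULTS,
    NAME = CUST_NAME,
    PHONE_NUMBER = @STRCAT (AREA_CODE, "-", PH_PREFIX, "-", PH_NUMBER));
```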
Column Mapping Building History
This example uses special values to build history of operations data
INSERTALLRECORDS
MAP SALES.ACCOUNT, TARGET REPORT.ACCTHISTORY,
COLMAP (USEDEFAULTS,
TRAN_TIME = @GETENV ("GGHEADER", "COMMITTIMESTAMP"),
OP_TYPE = @GETENV ("GGHEADER", "OPTYPE"),
BEFORE_AFTER_IND = @GETENV ("GGHEADER", "BEFOREAFTERINDICATOR"));
Functions
Functions - Overview
Functions Example
MAP SALES.ACCOUNT, TARGET REPORT.ACCOUNT,
COLMAP ( USEDEFAULTS,
TRANSACTION_DATE = @DATE ("YYYY-MM-DD",
"YY", YEAR,
"MM", MONTH,
"DD", DAY),
AREA_CODE = @STREXT (PHONE-NO, 1, 3),
PHONE_PREFIX = @STREXT (PHONE-NO, 4, 6),
PHONE_NUMBER = @STREXT (PHONE-NO, 7, 10) );
Description
CASE
EVAL
IF
COLSTAT
COLTEST
VALONEOF
Description
DATE
DATEDIFF
DATENOW
Formats that are supported for both input and output are:
CC century
YYYY four-digit year
YY two-digit year
MMM alphanumeric month, such as APR
MM numeric month
DDD numeric day of the year (e.g. 001, 365)
DD numeric day of the month
HH hour
MI minute
SS seconds
FFFFFF fraction (up to microseconds)
DOW0 numeric day of the week (Sunday = 0)
DOW1 numeric day of the week (Sunday = 1)
DOWA alphanumeric day of the week (e.g. SUN)
JUL Julian day
JTSGMT and JTS Julian timestamp
JTSLCT Julian timestamp that is already local time, or to keep local time when
converting to a Julian timestamp
STRATUS Application timestamp
CDATE C timestamp in seconds since the Epoch
Formats supported for input are:
TTS NonStop 48-bit timestamp
PHAMIS Application date format
Calculating the century: When a two-digit year is supplied but a four-digit year is
required in the output, the system can calculate the century (as 20 if the year is less
than 50), the century can be hard-coded, or the @IF function can be used to set a condition.
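For instance, a conditional century using @IF (the column names here are hypothetical) could be written inside a COLMAP as:

```
date_col = @DATE ("YYYY-MM-DD", "CC", @IF (YY < 50, 20, 19),
                  "YY", YY, "MM", MM, "DD", DD)
```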
Discussion Points: DATE Function
2. What DATE expression would you use to convert a numeric
column stored as YYYYMMDDHHMISS to a Julian timestamp?
julian_ts_col = @DATE ("JTS", "YYYYMMDDHHMISS",
numeric_date)
Description
COMPUTE
NUMBIN
NUMSTR
STRCAT
STRCMP
STREQ
STREXT
STRFIND
STRLEN
Description
STRLTRIM
STRNCAT
STRNCMP
STRNUM
STRRTRIM
STRSUB
STRTRIM
STRUP
Functions - Other
Function
Description
BINARY
Keeps source data in its original binary format in the target when
source column is defined as character.
BINTOHEX
GETENV
GETVAL
HEXTOBIN
HIGHVAL,
LOWVAL
RANGE
TOKEN
Maps environmental values that are stored in the user token area to
the target column.
SQLEXEC
SQLEXEC - Overview
SQLEXEC advantages:
Extends GoldenGate capabilities by enabling Extract and Replicat
to communicate with the application database through SQL
queries or run stored procedures
Extends data integration beyond what can be done with
GoldenGate functions
The SQLEXEC option enables both Extract and Replicat to communicate with the user's
database, either via SQL queries or stored procedures. SQLEXEC can be used to
interface with a virtually unlimited set of functionality supported by the underlying
database.
Stored Procedure Capabilities
Stored procedures extend the functionality of popular databases such as Oracle, DB2,
SQL Server and Teradata. Users write stored procedures in order to perform custom
logic, typically involving the database in some way, using languages such as Oracle's
PL/SQL and Microsoft's Transact-SQL.
Extract and Replicat enable stored procedure capabilities to be leveraged for Oracle,
SQL Server and DB2. Tying together industry-standard stored procedure languages
with extraction and replication functions brings a familiar, powerful interface to virtually
unlimited functionality.
Stored procedures can also be used as an alternative method for inserting data into the
database, aggregating data, denormalizing or normalizing data, or any other function
that requires database operations as input. Extract and Replicat can support stored
procedures that only accept input, or procedures that produce output as well. Output
parameters can be captured and used in subsequent map and filter operations.
SQL Query Capabilities
In addition to stored procedures, Extract and Replicat can execute specified database
queries that either return results (SELECT statements) or update the database (INSERT,
UPDATE, and DELETE statements).
Before defining the SQLEXEC clause, a database logon must be established. This is
done via the SOURCEDB or USERID parameter for Extract, and the TARGETDB or
USERID parameter for Replicat.
When using SQLEXEC, a mapping between one or more input parameters and source
columns or column functions must be supplied.
When supplying at least one SQLEXEC entry for a given Replicat map entry, a target
table is not required.
SQLEXEC - DBMS and Data Type Support
SQLEXEC is available for the following databases:
Oracle
SQL Server
Teradata
Sybase
DB2
ODBC
The stored procedure interface supports the following data types
for input and output parameters:
Oracle: CHAR, VARCHAR2, DATE
SQL Server: CHAR, VARCHAR, DATETIME
DB2: CHAR, VARCHAR, DATE
For Oracle, the interface also supports:
All available numeric data types
LOB data types (BLOB and CLOB) where the length is less than 200 bytes
The ANSI equivalents of the above types
The stored procedure interface for SQL Server currently supports the following input and
output parameter types:
CHAR
VARCHAR
DATETIME
All available numeric data types
Image and text data types where the length is less than 200 bytes
TIMESTAMP parameter types are not supported natively, but you can specify other
data types for parameters and convert the data to TIMESTAMP format within the stored
procedure.
The stored procedure interface for DB2 currently supports the following input and output
parameter types:
CHAR
VARCHAR
DATE
All available numeric data types
BLOB data types
The stored procedure interface for Sybase currently supports all data types except
TEXT, IMAGE, and UDT.
The stored procedure interface for Teradata version 12 and later supports CHAR,
VARCHAR, DATE and all available numeric data types.
Database Transaction Considerations
When specifying a stored procedure or query that updates the database, you must
supply the DBOPS option in the SQLEXEC clause. Doing so ensures that any database
updates are committed to the database properly. Otherwise, database operations can
potentially be rolled back.
As with direct table updates, database operations initiated within the stored procedure
will be committed in the same context as the original transaction.
The SQLEXEC option is specified as an option in TABLE and MAP statements within
EXTRACT and REPLICAT parameter files. Use either SPNAME (for stored
procedure) or QUERY (for SQL query).
SPNAME <sp name>
Is the name of the stored procedure in the database. This name can be used when
extracting values from the procedure.
ONCE
Executes the stored procedure or query once, upon the first invocation of the
TABLE or MAP statement. The results remain valid for as long as the process
remains running.
TRANSACTION
Executes the stored procedure or query once per source transaction. The
results remain valid for all operations of the transaction.
SOURCEROW
Executes the stored procedure or query once per source row operation. Use
this option when you are synchronizing a source table with more than one
target table, so that the stored procedure or query is invoked for each
source-target mapping.
ALLPARAMS {REQUIRED | OPTIONAL}
REQUIRED specifies that all parameters must be present in order for the
stored procedure or query to execute.
OPTIONAL enables the stored procedure or query to execute without all
parameters present (the default).
PARAMBUFSIZE <num bytes>
By default, each stored procedure or query is assigned 10,000 bytes of space for input
and output. For stored procedures requiring more room, specify a <num bytes> with
an appropriate amount of buffer space.
MAXVARCHARLEN <num bytes>
Determines the maximum length allocated for any output parameter in the stored
procedure or query. The default is 200 bytes.
TRACE [ALL|ERROR]
If TRACE or TRACE ALL is specified, the input and output parameters for each
invocation of the stored procedure or query are output to the report file.
If TRACE ERROR is specified, parameters are output only after an error occurs in the
stored procedure or query.
ERROR <action>
Requires one of the following arguments:
IGNORE Database error is ignored and processing continues.
REPORT Database error is written to a report.
RAISE Database error is handled just as a table replication error.
FINAL Database error is handled as a table replication error, but does not
process any additional queries.
FATAL Database processing abends.
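A hypothetical Replicat MAP entry combining several of these options (the procedure, parameter, and column names are illustrative, not from the original text) might look like:

```
MAP schema.src, TARGET schema.tgt,
SQLEXEC (SPNAME lookup_desc, ID lkp, PARAMS (code = account_code),
         ALLPARAMS OPTIONAL, MAXVARCHARLEN 500, ERROR REPORT),
COLMAP (USEDEFAULTS, desc_col = @GETVAL (lkp.desc_out));
```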
Examples:
SQLEXEC "call prc_job_count ()"
SQLEXEC "select x from dual"
SQLEXEC "call prc_job_count ()" EVERY 30 SECONDS
SQLEXEC "call prc_job_count ()" ONEXIT
SQLEXEC "SET TRIGGERS OFF"
Example
MAP schema.tab1, TARGET schema.tab2,
SQLEXEC (SPNAME lookup, PARAMS (param1 = srccol)),
COLMAP (USEDEFAULTS, targcol = @GETVAL (lookup.param1));
<name>
The name of the stored procedure or query. When using SQLEXEC to execute the
procedure or query, valid values are as follows:
Queries: Use the logical name specified with the ID option of the SQLEXEC clause. ID
is a required SQLEXEC argument for queries.
Stored procedures: Use one of the following, depending on how many times the
procedure is to be executed within a TABLE or MAP statement:
- For multiple executions, use the logical name defined by the ID clause of the
SQLEXEC statement. ID is required for multiple executions of a procedure.
- For a single execution, use the actual stored procedure name.
<parameter>
Valid values are one of the following:
- The name of the parameter in the stored procedure or query from which the data
will be extracted and passed to the column map.
- RETURN_VALUE, if extracting values returned by a stored procedure or query.
Whether or not the parameter value can be extracted depends on:
1. The stored procedure or query executing successfully.
2. The stored procedure or query results not yet having expired.
Rules for determining expiration are defined by the SQLEXEC EXEC option. When a
value cannot be extracted, the @GETVAL function results in a column missing
condition. Usually this means that the column is not mapped. You can also use the
@COLTEST function to test the result of the @GETVAL function to see if it is
missing, and map an alternative value if desired.
Macros
Macros - Overview
Macros enable easier and more efficient building of parameters
By using GoldenGate macros in parameter files you can easily configure and reuse
parameters, commands, and functions. As detailed in the slide, you can use macros
for a variety of operations to enable easier and more efficient building of
parameters.
GoldenGate macros work with the following parameter files:
Manager
Extract
Replicat
Note: Do not use macros to manipulate data for tables being processed by a data
pump Extract in pass-through mode.
Macros - Creating
Macros can be defined in any parameter file or library
Macro statements include the following
Macro name
Optional parameter list
Macro body
Syntax
MACRO #<macro name>
PARAMS (#<param1>, #<param2>, ...)
BEGIN
<macro body>
END;
Syntax
<macro name> is the name of the macro. <macro name> must begin with the #
character, as in #macro1. If the # macro character is used elsewhere in the
parameter file, such as in a table name, you can change it to something else with
the MACROCHAR parameter. Macro names are not case-sensitive.
PARAMS (<p1>,<p2>...) describes each of the parameters to the macro. Names must
begin with the macro character, such as #param1. When the macro is invoked, it
must include a value for each parameter named in the PARAMS statement.
Parameter names are optional and not case-sensitive.
BEGIN indicates the beginning of the body of the macro. Must be specified before
the macro body.
<macro body> represents one or more statements to be used as parameter file input.
It can include simple parameter statements, such as COL1 = COL2; more complex
statements that include parameters, such as COL1 = #val2; or invocations of other
macros, such as #colmap(COL1, #sourcecol).
END ends the macro definition.
Macros - Invoking
Reference the macro and parameters anywhere you want the
macro to be invoked
EXTRACT EXSALES
MACRO #make_date
PARAMS (#year, #month, #day)
BEGIN
@DATE("YYYY-MM-DD", "CC", @IF(#year < 50, 20, 19),
"YY", #year, "MM", #month, "DD", #day)
END;
MAP SALES.ACCT, TARGET REPORT.ACCOUNT,
COLMAP
(
TARGETCOL1 = SOURCECOL1,
Order_Date = #make_date(Order_YR,Order_MO,Order_DAY),
Ship_Date = #make_date(Ship_YR,Ship_MO,Ship_DAY)
);
Invoking Macros
The example above demonstrates defining a macro named #make_date, calling the
macro two different times, with each instance sending a different set of source column
values to determine the target column values.
Note that the order and ship dates are determined as the result of calling the
#make_date macro to populate the target columns.
Macros - Example
Consolidating Multiple Parameters
Define the macro:
MACRO #option_defaults
BEGIN
GETINSERTS
GETUPDATES
GETDELETES
INSERTDELETES
END;
Invoke the macro:
#option_defaults ()
IGNOREUPDATES
MAP SALES.SRCTAB, TARGET SALES.TARGTAB;
#option_defaults ()
MAP SALES.SRCTAB2, TARGET SALES.TARGTAB2;
Note that the macro's result is altered by the IGNOREUPDATES parameter for the first
MAP statement.
Macros Libraries
Macros can be built in a library and referenced into your
parameter file
EXTRACT EXTACCT
INCLUDE /ggs/dirprm/macro.lib
Macro libraries
To use a macro library, use the INCLUDE parameter at the beginning of a parameter
file. The syntax is:
INCLUDE <library file name>
You may toggle the listing or suppression of listing of the output of libraries by using
the LIST and NOLIST parameters.
Macros Expansion
Macro Processor enables tracing of macro expansion with the
CMDTRACE option
Syntax
CMDTRACE [ ON | OFF | DETAIL ]
Default is OFF
Example
EXTRACT EXTACCT
INCLUDE /ggs/dirprm/macro.lib
CMDTRACE ON
MAP SALES.ACCOUNT, TARGET REPORT.ACCOUNT_HISTORY,
COLMAP (USEDEFAULTS,
#maptranfields () );
Macro expansion
The macro processor enables tracing of macro expansion for debugging purposes via
the CMDTRACE parameter. When CMDTRACE is enabled, the macro processor
displays macro expansion steps in the process's report file.
The ON option enables tracing, OFF disables it, and DETAIL produces additional details.
User Tokens
Option: Information returned
LAG, LASTERR, JULIANTIMESTAMP, RECSOUTPUT
GoldenGate
GGENVIRONMENT GoldenGate environment
GGFILEHEADER Trail file header
GGHEADER Trail record header
RECORD Trail record location
Database
DBENVIRONMENT Database environment
TRANSACTION Source transaction
Operating System
OSVARIABLE OS environmental variable
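These environment values are typically captured into user tokens with the TOKENS clause of the Extract TABLE statement and read back with @TOKEN on the Replicat side. A sketch (the token and column names are illustrative):

```
-- Extract: capture environment values as user tokens
TABLE SALES.ACCOUNT,
TOKENS (TK_HOST = @GETENV ("GGENVIRONMENT", "HOSTNAME"),
        TK_OPTYPE = @GETENV ("GGHEADER", "OPTYPE"));

-- Replicat: map the tokens into target columns
MAP SALES.ACCOUNT, TARGET REPORT.ACCOUNT,
COLMAP (USEDEFAULTS, SRC_HOST = @TOKEN ("TK_HOST"));
```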
General
HOSTNAME Returns the name of the system running the Extract or Replicat
process.
OSUSERNAME Returns the operating system user name that started the process.
PROCESSID The process ID that is assigned by the operating system.
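For example (the table and target column names are illustrative), a GGENVIRONMENT value can be mapped to a target column:

MAP sales.orders, TARGET report.orders,
COLMAP (USEDEFAULTS,
txn_host = @GETENV ("GGENVIRONMENT", "HOSTNAME"));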
@GETENV (GGHEADER, <return value>)
BEFOREAFTERINDICATOR Returns BEFORE (before image) or AFTER (after
image).
COMMITTIMESTAMP Returns the transaction timestamp (when committed) in
the format YYYY-MM-DD HH:MI:SS.FFFFFF.
LOGPOSITION Returns the sequence number in the data source.
LOGRBA Returns the relative byte address in the data source.
OBJECTNAME | TABLENAME Returns the table name or object name (if a
sequence).
OPTYPE Returns the type of operation: INSERT, UPDATE, DELETE, ENSCRIBE
COMPUPDATE, SQL COMPUPDATE, PK UPDATE, TRUNCATE, TYPE n.
RECORDLENGTH Returns the record length in bytes.
TRANSACTIONINDICATOR Returns the transaction indicator: BEGIN, MIDDLE,
END, WHOLE.
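For example (the table and column names are illustrative), GGHEADER values are commonly mapped into audit columns on the target:

MAP sales.orders, TARGET report.orders_audit,
COLMAP (USEDEFAULTS,
op_type = @GETENV ("GGHEADER", "OPTYPE"),
commit_ts = @GETENV ("GGHEADER", "COMMITTIMESTAMP"),
before_after = @GETENV ("GGHEADER", "BEFOREAFTERINDICATOR"));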
@GETENV (GGFILEHEADER, <return_value>)
COMPATIBILITY Returns the GoldenGate compatibility level of the trail file.
1 means that the trail file is of GoldenGate version 10.0 or later, which supports file
headers that contain file versioning information.
0 means that the trail file is of a GoldenGate version that is older than 10.0. File
headers are not supported in those releases. The 0 value is used for backward
compatibility to those GoldenGate versions.
Information about the trail file
CHARSET Returns the global character set of the trail file. For example:
WCP1252-1
CREATETIMESTAMP Returns the time that the trail was created, in local GMT
Julian time in INT64.
FILENAME Returns the name of the trail file. Can be an absolute or relative path.
FILEISTRAIL Returns a True/False flag indicating whether the trail file is a single
file (such as one created for a batch run) or a sequentially numbered file that is part
of a trail for online, continuous processing.
FILESEQNO Returns the sequence number of the trail file, without any leading
zeros.
FILESIZE Returns the size of the trail file when the file is full and the trail rolls
over.
FIRSTRECCSN Returns the commit sequence number (CSN) of the first record in
the trail file. NULL until the trail file is completed.
LASTRECCSN Returns the commit sequence number (CSN) of the last record in
the trail file. NULL until the trail file is completed.
FIRSTRECIOTIME Returns the time that the first record was written to the trail
file. NULL until the trail file is completed.
LASTRECIOTIME Returns the time that the last record was written to the trail
file. NULL until the trail file is completed.
URI Returns the universal resource identifier of the process that created the trail
file, in the format: <host_name>:<dir>:[:<dir>][:<dir_n>]<group_name>
URIHISTORY Returns a list of the URIs of processes that wrote to the trail file
before the current process.
Information about the GoldenGate process that created the trail file
GROUPNAME Returns the group name associated with the Extract process that
created the trail. The group name is that which was given in the ADD EXTRACT
command. For example, ggext.
DATASOURCE Returns the data source that was read by the process.
GGMAJORVERSION Returns the major version of the Extract process that created
the trail, expressed as an integer. For example, if a version is 1.2.3, it returns 1.
GGMINORVERSION Returns the minor version of the Extract process that created
the trail, expressed as an integer. For example, if a version is 1.2.3, it returns 2.
GGVERSIONSTRING Returns the maintenance (or patch) level of the Extract
process that created the trail, expressed as an integer. For example, if a version is
1.2.3, it returns 3.
GGMAINTENANCELEVEL Returns the maintenance version of the process
(xx.xx.xx).
GGBUGFIXLEVEL Returns the patch version of the process (xx.xx.xx.xx).
GGBUILDNUMBER Returns the build number of the process.
Information about the local host of the trail file
HOSTNAME Returns the DNS name of the machine where the Extract that wrote
the trail is running.
OSVERSION Returns the major version of the operating system of the machine
where the Extract that wrote the trail is running.
OSRELEASE Returns the release version of the operating system of the machine
where the Extract that wrote the trail is running.
OSTYPE Returns the type of operating system of the machine where the Extract
that wrote the trail is running.
HARDWARETYPE Returns the type of hardware of the machine where the Extract
that wrote the trail is running.
Information about the database that produced the data in the trail file.
DBNAME Returns the name of the database, for example findb.
DBINSTANCE Returns the name of the database instance, if applicable to the
database type, for example ORA1022A.
DBTYPE Returns the type of database that produced the data in the trail file.
DBCHARSET Returns the character set that is used by the database that produced
the data in the trail file.
DBMAJORVERSION Returns the major version of the database that produced the
data in the trail file.
DBMINORVERSION Returns the minor version of the database that produced the
data in the trail file.
TOKENS (
TKN-OSUSER    = @GETENV ("GGENVIRONMENT", "OSUSERNAME"),
TKN-DOMAIN    = @GETENV ("GGENVIRONMENT", "DOMAINNAME"),
TKN-COMMIT-TS = @GETENV ("GGHEADER", "COMMITTIMESTAMP"),
TKN-BA-IND    = @GETENV ("GGHEADER", "BEFOREAFTERINDICATOR"),
TKN-TABLE     = @GETENV ("GGHEADER", "TABLENAME"),
TKN-OP-TYPE   = @GETENV ("GGHEADER", "OPTYPE"),
TKN-LENGTH    = @GETENV ("GGHEADER", "RECORDLENGTH"),
TKN-DB-VER    = @GETENV ("DBENVIRONMENT", "DBVERSION"));
On the target, specify the token identifier (for example, TKN-GROUP-NAME) whose
value is to be used for the target column specification.
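For example (the table and column names are illustrative), a token stored by Extract can be mapped to a target column with the @TOKEN function:

MAP source.customer, TARGET target.customer_hist,
COLMAP (USEDEFAULTS,
trans_ts = @TOKEN ("TKN-COMMIT-TS"),
op_type = @TOKEN ("TKN-OP-TYPE"));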
When displayed by Logdump, each stored token name is listed with its captured value,
for example:

TKN-OSUSER    : jemhadar
TKN-DOMAIN    : EXTORA
TKN-BA-IND    : AFTER
TKN-COMMIT-TS : 2003-03-24 17:08:59.000000
TKN-POS       : 3604496
TKN-RBA       : 4058
TKN-TABLE     : SOURCE.CUSTOMER
TKN-OP-TYPE   : INSERT
TKN-LENGTH    : 57
TKN-TRAN-IND  : BEGIN
(additional tokens in this example hold values such as the database name, database
user, database version, instance name, and row ID)
LOGDUMP example
Once environment values have been stored in the trail header, Logdump can display
them when the USERTOKEN ON option is used. The USERTOKEN DETAIL option provides
additional information.
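A sketch of such a Logdump session (the trail file name is illustrative):

Logdump> OPEN ./dirdat/rt000000
Logdump> USERTOKEN DETAIL
Logdump> NEXT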
User Exits
EXIT_CALL_TYPE indicates the processing point of the caller and determines the type
of processing to perform. Extract and Replicat call the shell routine with the following
calls:
EXIT_CALL_START - Invoked at the start of processing. The user exit can
perform initialization work.
EXIT_CALL_STOP - Invoked before the caller stops or ends abnormally. The user
exit can perform completion work.
EXIT_CALL_BEGIN_TRANS - In Extract, invoked just before the output of the
first record in a transaction. In Replicat, invoked just before the start of a
transaction.
EXIT_CALL_END_TRANS - In Extract and Replicat, invoked just after the last
record in a transaction is processed.
EXIT_CALL_CHECKPOINT - Called just before an Extract or Replicat checkpoint
is written.
EXIT_CALL_PROCESS_RECORD - In Extract, invoked before a record buffer is
output to an Extract file. In Replicat, invoked just before a replicated
operation is performed. This call is the basis of most user exit processing.
EXIT_CALL_PROCESS_MARKER - Called during Replicat processing when a
marker from a NonStop server is read from the trail, and before writing to the
marker history file.
EXIT_CALL_DISCARD_RECORD - Called during Replicat processing before a
record is written to the discard file.
EXIT_CALL_DISCARD_ASCII_RECORD - Called during Extract processing
before an ASCII input record is written to the discard file.
EXIT_CALL_FATAL_ERROR - Called during Extract or Replicat processing
just before GoldenGate terminates after a fatal error.
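The call types above are typically dispatched in a switch statement inside the exit routine. The following is a rough C sketch only, modeled on the layout of the GoldenGate exitdemo sample; the routine name is illustrative, and the typedef names should be checked against the usrdecs.h header shipped with your installation:

#include "usrdecs.h"   /* GoldenGate user exit declarations */

void MyUserExit (exit_call_type_def exit_call_type,
                 exit_result_def *exit_call_result,
                 exit_params_def *exit_params)
{
    switch (exit_call_type) {
        case EXIT_CALL_START:
            /* initialization work, e.g. read exit_params->function_param */
            break;
        case EXIT_CALL_PROCESS_RECORD:
            /* inspect or modify the record via ERCALLBACK */
            break;
        case EXIT_CALL_STOP:
            /* completion work */
            break;
        default:
            break;
    }
    *exit_call_result = EXIT_OK_VAL;   /* tell the caller to continue normally */
}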
EXIT_CALL_RESULT - Set by the user exit routines to instruct the caller how
to respond when each exit call completes (see below).
EXIT_CALL_RESULT is set by the user exit routines and instructs the caller how to
respond when each exit call completes. The following results can be specified by the
routines:
EXIT_OK_VAL - If the routine does nothing to respond to an event, OK is
assumed. If the call specified PROCESS_RECORD or DISCARD_RECORD and
OK_VAL is returned, the caller processes the record buffer returned by the user
exit and uses the parameters set by the exit.
EXIT_IGNORE_VAL - Instructs the caller to reject the record from further processing.
EXIT_STOP_VAL - Instructs the caller to STOP immediately.
EXIT_ABEND_VAL - Instructs the caller to ABEND immediately.
EXIT_PROCESSED_REC_VAL - Instructs Extract or Replicat to skip the
record, but update the statistics that are printed to the report file for that table
and for that operation type.
EXIT_PARAMS supplies information to the user exit routine, such as the program
name and user-defined parameters:
PROGRAM_NAME - Specifies the full path and name of the calling process,
for example \ggs\extract or \ggs\replicat. Use this parameter when loading a
GoldenGate callback routine using the Windows API or to identify the calling
program when user exits are used with both Extract and Replicat processing.
FUNCTION_PARAM - Allows you to pass a parameter that is a literal string
to the user exit. Specify the parameter with the EXITPARAM option of a
TABLE or MAP statement. FUNCTION_PARAM can also be used at exit call
startup to pass the parameters that are specified in the PARAMS option of the
CUSEREXIT parameter.
MORE_RECS_IND - Set on return from an exit call. For database records, it
determines whether Extract or Replicat processes the record again. This
allows the user exit to output many records per record processed by Extract, a
common function when converting Enscribe to SQL (data normalization). To
request the same record again, set MORE_RECS_IND to CHAR_YES_VAL;
otherwise set it to CHAR_NO_VAL.
ERCALLBACK executes a callback routine. A user callback routine retrieves context
information from the Extract or Replicat process, including the record itself, when the
call type is one of the following:
EXIT_CALL_PROCESS_RECORD
EXIT_CALL_DISCARD_RECORD
EXIT_CALL_DISCARD_ASCII_RECORD
Syntax: ERCALLBACK (<function_code>, <buffer>, <result_code>);
<function_code> The function to be executed by the callback routine. The
user callback routine behaves differently based on the function code passed to
it. While some functions can be used for both Extract and Replicat, the validity
of the function in one process or the other depends on the input parameters
that are set for that function during the callback routine.
<buffer> A void pointer to a buffer containing a predefined structure
associated with the specified function code.
<result_code> The status of the function that is executed by the callback routine.
Extract and Replicat export an ERCALLBACK function to be called from the user exit
routine. The user exit must explicitly load the callback function at run-time using the
appropriate Windows/Unix API calls.
Examples
CUSEREXIT userexit.dll MyUserExit
CUSEREXIT userexit.dll MyUserExit, INCLUDEUPDATEBEFORES, &
PASSTHRU, PARAMS "init.properties"
If the user exit is called from a primary Extract (one that reads the transaction log),
only INCLUDEUPDATEBEFORES is needed for that
Extract. GETUPDATEBEFORES is not needed in this case, unless other GoldenGate
processes downstream will need the before image to be
written to the trail. INCLUDEUPDATEBEFORES does not cause before images to be
written to the trail.
PARAMS "<startup string>"
Passes the specified string at startup. Can be used to pass a properties file, startup
parameters, or other string. The string must be enclosed within double quote marks.
Data in the string is passed to the user exit in the EXIT_CALL_START
exit_params_def.function_param. If no quoted string is specified with PARAMS, the
exit_params_def.function_param is NULL.
Oracle Sequences
Use the Extract SEQUENCE parameter to extract sequence values from the
transaction log; for example:
SEQUENCE hr.employees_seq;
Use the Replicat MAP parameter to apply sequence values to the target;
for example:
MAP hr.employees_seq, TARGET payroll.employees_seq;
Configuration Options
BATCHSQL
Operations on the same table, of the same operation type (I, U, D), and with the same
column list are grouped into a batch. For example, each of the following would be a
separate batch:
Inserts to table A
Inserts to table B
Updates to table A
Updates to table B
Deletes from table A
Deletes from table B
GoldenGate analyzes parent-child foreign key referential dependencies in the batches
before executing them. If referential dependencies exist for statements that are in
different batches, more than one statement per batch may be required to maintain
the referential integrity.
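A minimal Replicat excerpt using BATCHSQL (the group name, login, and mappings are illustrative):

REPLICAT RFIN
USERID ggsuser, PASSWORD ggspass
BATCHSQL BATCHTRANSOPS 2000
MAP sales.*, TARGET sales.*;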
BATCHSQL
[BATCHERRORMODE | NOBATCHERRORMODE]
[BATCHESPERQUEUE <n>]
[BATCHTRANSOPS <n>]
[BYTESPERQUEUE <n>]
[OPSPERBATCH <n>]
[OPSPERQUEUE <n>]
[TRACE]
BATCHERRORMODE | NOBATCHERRORMODE
Set the response of Replicat to errors.
In NOBATCHERRORMODE (default), Replicat aborts the transaction on an error,
temporarily disables BATCHSQL, and retries in normal mode.
In BATCHERRORMODE, Replicat attempts to resolve errors without reverting to
normal mode.
Requires HANDLECOLLISIONS to prevent Replicat from exiting on an error.
BATCHESPERQUEUE <n>
Sets a maximum number of batches per queue before flushing all batches. The default
is 50. Note: A queue is a thread of memory containing captured operations waiting to
be batched. By default, there is one buffer queue, but you can change this with
NUMTHREADS.
BATCHTRANSOPS <n>
Controls the size of a batch. Set to the default of 1000 or higher.
BYTESPERQUEUE <n>
Sets the maximum number of bytes to hold in a queue before flushing batches. The
default is 20 megabytes.
OPSPERBATCH <n>
Sets the maximum number of rows that can be prepared for one batch before
flushing. The default is 1200.
OPSPERQUEUE <n>
Sets the maximum number of row operations that can be queued for all batches
before flushing. The default is 1200.
TRACE
Enables tracing of BATCHSQL activity to the console and report file.
Usage restrictions
Some statement types cannot be processed in batches and must be processed as
exceptions. When BATCHSQL encounters them, it flushes everything in the batch,
applies the exceptions in the normal manner of one at a time, and then resumes batch
processing. Transaction integrity is maintained. Statements treated as exceptions
include:
Statements containing LOB or LONG data.
Statements containing rows longer than 25k in length.
Statements where the target table has one or more unique keys besides the primary
key. Such statements cannot be processed in batches because BATCHSQL does not
guarantee correct ordering for non-primary keys if their values could change.
Compression
Options: Compression
GoldenGate provides optional data compression when sending
data over TCP/IP
Automatic decompression is performed by Server Collector on
remote system
Compression threshold allows user to set minimum block size for
which to compress
GoldenGate uses the zlib compression library. More information can be
found at www.zlib.net
Example:
RMTHOST newyork, MGRPORT 7809, COMPRESS, COMPRESSTHRESHOLD 750
The destination Server Collector decompresses the data stream before writing it to
the remote file or remote trail. This typically results in compression ratios of at
least 4:1 and sometimes much better. However, compression can require
significant CPU resources.
Encryption
Message Encryption
Encrypts the messages sent over TCP/IP
Uses Blowfish, a symmetric 64-bit block cipher from Counterpane
Internet Security
[Diagram: data flows from Extract over the network (TCP/IP) to the Server Collector,
then to the trail and Replicat. GoldenGate offers three kinds of encryption: message
encryption (Blowfish), trail or Extract file encryption (GoldenGate), and database
password encryption.]
Key value
0x420E61BE7002D63560929CCA17A4E1FB
0x027742185BBF232D7C664A5E1A76B040
3. Copy the ENCKEYS file to the source and target GoldenGate install directory
4. In the Extract parameter files, use the RMTHOST ENCRYPT and KEYNAME
parameters
RMTHOST West, MGRPORT 7809, ENCRYPT BLOWFISH, KEYNAME superkey
5. Configure a static Server Collector and start it manually with the -ENCRYPT and
-KEYNAME parameters
server -p <port> -ENCRYPT BLOWFISH -KEYNAME <keyname>
If you prefer to use a literal key, then instead of using KEYGEN, enter the literal key
in quotes as the key value in an ENCKEYS file:
##Key name     Key value
mykey          "DailyKey"
[Diagram: message encryption - both systems hold matching ENCKEYS files. Extract
parameters: RMTHOST <host>, MGRPORT <port>, ENCRYPT BLOWFISH, KEYNAME <keyname>.
The Server Collector is started with: server -p <port> -ENCRYPT BLOWFISH
-KEYNAME <keyname>. Trail or Extract file encryption - Extract specifies ENCRYPTTRAIL
before its <table statements>; Replicat specifies DECRYPTTRAIL before its
<map statements>.]
[Diagram: database password encryption. Extract parameters:
[ SOURCEDB <dsn>, ] USERID <user>, PASSWORD <encrypted password>,
ENCRYPTKEY DEFAULT | <keyname>. Replicat parameters:
[ TARGETDB <dsn>, ] USERID <user>, PASSWORD <encrypted password>,
ENCRYPTKEY DEFAULT | <keyname>.]
Password Encryption
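A sketch of the typical steps (the password and key name are illustrative, and the exact GGSCI syntax should be checked against your release): first generate the encrypted password in GGSCI, then reference it in the Extract or Replicat parameter file:

GGSCI> ENCRYPT PASSWORD mypassword ENCRYPTKEY superkey

Then, in the parameter file:
USERID ggsuser, PASSWORD <encrypted password from GGSCI>, ENCRYPTKEY superkey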
Event Actions
[Diagram: event processing occurs in Extract as it reads the transaction log and in
Replicat as it reads the target trail over the network (TCP/IP), with output to report
files, logs, discard files, and checkpoints.]
EVENTACTIONS (
[STOP | ABORT | FORCESTOP]
[IGNORE [TRANSACTION [INCLUDEVENT]]]
[DISCARD]
[LOG [INFO | WARNING]]
[REPORT]
[ROLLOVER]
[SHELL <command>]
[TRACE <trace file> [TRANSACTION] [PURGE | APPEND]]
[CHECKPOINT [BEFORE | AFTER | BOTH]]
[, ...]
)
Note: You can also use a TABLE parameter in a Replicat to trigger actions without
writing data to target tables
EVENTACTIONS
STOP Graceful stop
ABORT Immediate exit
FORCESTOP Graceful stop if the event record is the last operation in the
transaction, else log warning message and abort
IGNORE [TRANSACTION [INCLUDEVENT]] Ignore the record. Optionally ignore the
entire transaction and propagate the event record.
DISCARD Write record to discard file
LOG [INFO | WARNING] Log an informational or warning message to the report
file, the error log, and the system event log
REPORT Generate a report file
ROLLOVER (Extract only) Roll over the trail file
SHELL Execute a shell command
TRACE Write trace information to file
CHECKPOINT Write a checkpoint before and/or after writing the event record
Table statement for Replicat
TABLE <table spec>,
[, SQLEXEC (<SQL specification>), BEFOREFILTER]
[, FILTER (<filter specification>)]
[, WHERE (<where clause>)]
{, EVENTACTIONS ({IGNORE | DISCARD} [<action>])}
;
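For example (the table and column names are illustrative), a Replicat TABLE statement can log and ignore an event record that marks the end of a batch run:

TABLE source.job_events,
FILTER (@STREQ (job_status, "COMPLETE")),
EVENTACTIONS (LOG INFO, IGNORE);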
[Diagram: event processing example - Replicat reads transactions (TX1, TX2, ...) from
the target trail; when it encounters a heartbeat record, it logs an informational or
warning message.]
[Diagram: live standby - the application writes to the source database; Extract
captures from the transaction log and sends changes over the network (TCP/IP) to a
trail on the target, where Replicat applies them; an Extract on the target stands ready
to capture in the reverse direction.]
3. The Extract on the target,
already configured and running,
starts capturing transactions.
[Diagram: ETL scenario - ETL processes load the applications on both systems; an
Extract on the target captures from its transaction log and sends data over the
network (TCP/IP) to a trail that is read by a Replicat on the source.]
3. When the second ETL process is completed, it generates an event record that is read by
Extract on the target. When Replicat on the source receives the event record, it triggers a
custom script to start the application based on the status of the batch process on the source.
Bidirectional Considerations
[Diagram: bidirectional configuration - on top, changes are captured from the
left-side transaction log by Extract and sent over the network (TCP/IP) to a trail and
Replicat on the right-side database; on the bottom, the same chain runs in the
opposite direction, from the right-side transaction log back to the left side.]
The top of this illustration shows changes from the left-side database being extracted
and sent over the network to be replicated in the right-side database. The bottom is
the reverse: changes from the right-side database are extracted and sent to the left.
Options: Bidirectional - Capabilities
Available for both homogeneous and heterogeneous
configurations
Distributed processing
Both sides are live
GoldenGate's low latency reduces the risk of conflicts
GoldenGate provides loop detection
Bidirectional Capabilities
GoldenGate supports bidirectional replication between two databases, whether the
configuration is for homogeneous or heterogeneous synchronization.
In a bidirectional configuration, both databases may be live and processing
application transactions at the same time. Alternatively, the target application may be
standing by, waiting to be used if there is a failover. In that case, operations are
captured and queued to be synchronized back to the primary database once it becomes
available.
Configuring GoldenGate for bidirectional replication may be as straightforward as
configuring a mirror set of Extract and Replicat groups moving in the opposite
direction. It is
important, however, to thoroughly discuss special considerations for your
environment to guarantee data accuracy. The following slides discuss some of the
bidirectional concerns as well as known issues relating to file and data types.
Loops
Because both Extract and Replicat processes operate on the same tables in
bidirectional synchronization, Replicat's operations must be prevented from being
sent back to the source table by Extract. If they are re-extracted, they will be
re-replicated, beginning an endless loop. Loop detection is sometimes called
ping-pong detection.
Conflicts
Because GoldenGate is an asynchronous solution, conflict-management is required to
ensure data accuracy in the event that the same row is changed in two or more
databases at (or about) the same time. For example, User A on Database A updates a
row, and then User B on Database B updates that same row. If User B's transaction
occurs before User A's transaction is synchronized to Database B, there will be a
conflict on the replicated transaction.
Preventing looping
To avoid looping, Replicat's operations must be prevented from being sent back to the
source table by Extract. This is accomplished by configuring Extract to ignore
transactions issued by the Replicat user. This slide illustrates why data looping needs
to be prevented.
Methods of preventing data looping are available for the following databases.
Oracle, DB2, and SQL Server databases using log-based extraction
Teradata databases using VAM-based extraction
SQL/MX using the checkpoint table.
Options: Bidirectional - Loop Example 2
A row is inserted on system A
The insert is captured and sent to system B
The row is inserted on system B
The insert is captured and sent to system A
The insert is attempted on system A, but the operation fails
with a conflict on the primary key causing synchronization
services to HALT
Because of the constraint that the primary key must be unique, the insert fails in this
example, so it does not trigger looping. However, the failed insert causes Replicat to
abend, which stops all replication services; it is another example of the need to
recognize, and not extract, Replicat transactions.
Options: Bidirectional - Loop Detection
Loop detection technique depends on the source database:
Oracle
Using EXCLUDEUSER or EXCLUDEUSERID (Oracle 10g and later)
Stop capture of Replicat's transactions with the Extract parameter:
TRANLOGOPTIONS {EXCLUDEUSER <user name> | EXCLUDEUSERID <user id>}
Loop-detection operates by default for SQL Server, NonStop SQL/MP and Enscribe
sources. For other database types, you must handle loop-detection in various ways.
SQL Server
By default, SQL Server logging excludes the ggs_repl transactions written by Replicat.
DB2 and Ingres
Loop-detection is accomplished by identifying the ID of the user that performed
the operation. To deploy loop-detection for DB2, you must run Replicat under a
user ID different from that of the application. Then the EXCLUDEUSER argument
of the TRANLOGOPTIONS parameter can be added to Extract to trigger the detection
and exclusion of that user ID.
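For example (the user name is illustrative), in the Extract parameter file:

TRANLOGOPTIONS EXCLUDEUSER ggsrepl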
Options: Bidirectional - Loop Detection (contd)
Sybase
Teradata
You do not need to identify Replicat transactions that are applied
to a Teradata database.
c-tree
NonStop SQL/MX
To prevent data loopback, a checkpoint table is required in a bidirectional
configuration that involves a source or target SQL/MX database (or both).
Because there is no SQL transaction associated with a PURGEDATA operation, this
operation type is not supported for bidirectional replication. Because PURGEDATA
operations are DDL, they are implicit transactions, so GoldenGate cannot update the
checkpoint table within that transaction.
Conflict Detection
Low latency reduces the risk of encountering a conflict where the same record is
updated on both systems. Keeping latency low is the best overall method, and the one
we encourage, in bidirectional configurations.
Conflict detection can also be addressed at the application or system level by ensuring
that a given record is always updated on only one system. For example, all updates to
card holders (or account numbers) 0 through 1000000 are applied on system A, while
updates to 1000001 or higher are applied on system B.
GoldenGate for UNIX and Windows (only) provides the capability to capture the
before and after values of columns so that comparisons may be made on the target
database before applying the values. Additional SQL procedures can be written based
on your application requirements.
The conflict
When the two transactions occur at different times, the deposit transaction is
replicated to New York before the withdrawal transaction is extracted. But when the
two transactions occur at around the same time, they are applied independently,
resulting in an incorrect balance on both systems.
Restrictions exist for DDL operations that involve user defined types and LOB
data
Wildcard resolution
Standard GoldenGate asterisk wildcards (*) can be used with certain parameter
options when synchronizing DDL operations. WILDCARDRESOLVE is now set by default
to DYNAMIC and must remain so for DDL support.
User Exits
GoldenGate user exit functionality is not supported for use with DDL synchronization
activities (user exit logic cannot be triggered based on DDL operations). User exits
can be used with concurrent DML processing.
LOB data
With LOB data, Extract might fetch a LOB value from a Flashback Query, and Oracle
does not provide Flashback capability for DDL (except DROP). When a LOB is fetched,
the object structure reflects current metadata, but the LOB record in the transaction
log reflects old metadata. Refer to the Oracle GoldenGate Reference Guide for
information on this topic.
User defined types
DDL operations that involve user defined types generate implied DML operations on
both the source and target. To avoid SQL errors that would be caused by redundant
operations, GoldenGate does not replicate those DML operations.
If DML is being replicated for a user defined type, Extract must process all of those
changes before DDL can be performed on the object. Because UDT data might be
fetched by Extract, the reasons for this rule are similar to those that apply to LOB
columns.
SQLEXEC
Objects that are affected by a stored procedure must exist with the correct structure
prior to the execution of SQL. Consequently, DDL that affects structure must happen
before the SQLEXEC executes.
Objects affected by a standalone SQLEXEC statement must exist before the
GoldenGate processes start. This means that DDL support must be disabled for these
objects; otherwise DDL operations could change or delete the object before the
SQLEXEC executes.
Long DDL statements
GoldenGate 10.4 supports the capture and replication of Oracle DDL statements of up
to 2 MB in length (including some internal GoldenGate maintenance information).
Extract will skip statements that are greater than the supported length, but the
ddl_ddl2file.sql script can be used to save the skipped DDL to a text file in the
USER_DUMP_DEST directory of Oracle.
To use the new support, the DDL trigger must be reinstalled in INITIALSETUP
mode, which removes all of the DDL history. See the GoldenGate for Oracle
Installation and Setup Guide.
GETTRUNCATES
GoldenGate supports the synchronization of TRUNCATEs as a standalone function
(independently of full DDL synchronization) or as part of full DDL synchronization. If
using DDL synchronization, disable standalone TRUNCATE synchronization to avoid
errors caused by duplicate operations.
Table names
The ALTER TABLE RENAME fails if the old or new table name is longer than 18
characters (16 for the name and two for the quotation marks). Oracle only allows 18
characters for a rename because of the ANSI limit for identifiers.
Options: DDL Replication - Characteristics
New names must be specified in TABLE/MAP statements
Extract sends all DDL to each trail when writing to multiple trails
Supported objects for Oracle
clusters
tables
functions
tablespaces
indexes
triggers
packages
types
procedures
views
roles
materialized views
sequences
users
synonyms
Renames
To work around remote permissions issues that may arise when different users are
being used on the source and target, RENAME will always be converted to equivalent
ALTER TABLE RENAME for Oracle.
New Names
New names must be specified in TABLE/MAP statements in order to:
1) Replicate DML operations on tables resulting from a CREATE or RENAME,
2) CREATE USER and then move new/renamed tables into that schema
MAPPED
- Is specified in a TABLE or MAP statement
- Operations CREATE, ALTER, DROP, RENAME, GRANT*, REVOKE*
- Objects TABLE*, INDEX, TRIGGER, SEQUENCE*, MATERIALIZED VIEW*
* GRANT and REVOKE operations apply only to the object types marked with an asterisk
UNMAPPED
- Does not have a TABLE or MAP statement
OTHER
- TABLE or MAP statements do not apply
- DDL operations other than those listed above
- Examples are CREATE USER, CREATE ROLE, ALTER TABLESPACE
Only one DDL parameter can be used in a parameter file, but you can combine
multiple inclusion and exclusion options to filter the DDL to the required level. When
combined, multiple option specifications are linked logically as AND statements. All
criteria specified with multiple options must be satisfied for a DDL statement to be
replicated.
Options
INCLUDE | EXCLUDE Identifies the beginning of an inclusion or exclusion clause.
INCLUDE includes specified DDL for capture or replication. EXCLUDE excludes specified
DDL from being captured or replicated.
The inclusion or exclusion clause must consist of the INCLUDE or EXCLUDE keyword
followed by any valid combination of other options of the DDL parameter. An EXCLUDE
must be accompanied by a corresponding INCLUDE clause. An EXCLUDE takes priority
over any INCLUDEs that contain the same criteria. You can use multiple inclusion and
exclusion clauses.
MAPPED | UNMAPPED | OTHER | ALL applies INCLUDE or EXCLUDE based on the DDL
operation scope.
MAPPED applies to DDL operations that are of MAPPED scope.
UNMAPPED applies to DDL operations that are of UNMAPPED scope.
OTHER applies to DDL operations that are of OTHER scope.
ALL applies to DDL operations of all scopes. DDL EXCLUDE ALL maintains up-to-date
metadata on objects, while blocking the replication of the DDL operations themselves.
OPTYPE <type> applies INCLUDE or EXCLUDE to a specific type of DDL operation. For
<type>, use any DDL command that is valid for the database, such as CREATE, ALTER,
and RENAME.
OBJTYPE <type> applies INCLUDE or EXCLUDE to a specific type of database object.
For <type>, use any object type that is valid for the database, such as TABLE, INDEX,
TRIGGER, USER, ROLE. Enclose the object type within single quotes.
OBJNAME <name> applies INCLUDE or EXCLUDE to the name of an object, for
example a table name. Provide a double-quoted string as input. Wildcards can be
used. If you do not qualify the object name for Oracle, the owner is assumed to be the
GoldenGate user.
When using OBJNAME with MAPPED in a Replicat parameter file, the value for
OBJNAME must refer to the name specified with the TARGET clause of the MAP
statement.
For DDL that creates triggers and indexes, the value for OBJNAME must be the name
of the base object, not the name of the trigger or index.
For RENAME operations, the value for OBJNAME must be the new table name.
INSTR <string> applies INCLUDE or EXCLUDE to DDL statements that contain a
specific character string within the command syntax itself, but not within comments.
Enclose the string within single quotes. The string search is not case sensitive.
INSTRCOMMENTS <comment_string> applies INCLUDE or EXCLUDE to DDL
statements that contain a specific character string within a comment, but not within
the DDL command itself. By using INSTRCOMMENTS, you can use comments as a
filtering agent. Enclose the string within single quotes. The string search is not case
sensitive. You can combine the INSTR and INSTRCOMMENTS options to filter on a string in
the command syntax and in the comments.
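Putting these options together, a hypothetical Extract entry (the schema name fin is invented for illustration) might capture all mapped DDL for that schema while excluding ALTER operations on indexes:

```
DDL &
INCLUDE MAPPED, OBJNAME "fin.*" &
EXCLUDE OPTYPE ALTER, OBJTYPE 'INDEX'
```

Because an EXCLUDE takes priority over any INCLUDE with the same criteria, an ALTER on an index in the fin schema would not be captured even though it matches the INCLUDE clause.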
Options: DDL Replication - String Substitution
DDLSUBST parameter substitutes strings in a DDL operation
Multiple statements can be used
DDLSUBST parameter syntax:
DDLSUBST <search_string> WITH <replace_string>
[INCLUDE <clause> | EXCLUDE <clause>]
Where:
<search_string> is the string in the source DDL statement you
want to replace, in single quotes
<replace_string> is the replacement string, in single quotes
<clause> is an inclusion or exclusion clause using same syntax as
INCLUDE and EXCLUDE from DDL parameter
DDLSUBST Clauses
DDLSUBST <search_string> WITH <replace_string>
[ {INCLUDE | EXCLUDE}
[, ALL | MAPPED | UNMAPPED | OTHER]
[, OPTYPE <type>]
[, OBJTYPE <type>]
[, OBJNAME <name>]
[, INSTR <string>]
[, INSTRCOMMENTS <comment_string>]
]
[...]
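As a sketch of this syntax (the string and schema name are invented for illustration), the following entry would replace the string cust with cust_archive, but only in CREATE TABLE statements for objects in the fin schema:

```
DDLSUBST 'cust' WITH 'cust_archive' &
INCLUDE OPTYPE CREATE, OBJTYPE 'TABLE', OBJNAME "fin.*"
```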
NOREPORT (the default) reports basic DDL statistics. REPORT adds the parameters being used and a
step-by-step history of the operations that were processed.
ADDTRANDATA is valid for Extract. Use ADDTRANDATA to enable supplemental
logging for a CREATE TABLE, or to update supplemental logging for tables affected by an
ALTER TABLE that adds or drops columns, renames the table, or adds or drops a unique
key.
DEFAULTUSERPASSWORD is valid for Replicat. It specifies a different password for a
replicated {CREATE | ALTER} USER <name> IDENTIFIED BY <password> statement from
the one used in the source statement. The password can be entered as clear text or
encrypted using the default key or a user-defined <keyname> from the ENCKEYS file. When
using DEFAULTUSERPASSWORD, use the NOREPLICATEPASSWORD option of DDLOPTIONS for
Extract.
GETAPPLOPS | IGNOREAPPLOPS are valid for Extract. They control whether or not
DDL operations produced by business applications (that is, by anything other than
Replicat) are included in the content that Extract writes to a trail or file. The default is
GETAPPLOPS.
GETREPLICATES | IGNOREREPLICATES are valid for Extract. They control whether or not
DDL operations produced by Replicat are included in the content that Extract writes
to a trail or file. The default is IGNOREREPLICATES.
REMOVECOMMENTS is valid for Extract and Replicat. It controls whether or not
comments are removed from the DDL operation. By default, comments are not
removed.
REMOVECOMMENTS BEFORE removes comments before the DDL operation is
processed by Extract or Replicat. AFTER removes comments after they are used for
string substitution.
REPLICATEPASSWORD is valid for Extract. It applies to the password in a {CREATE |
ALTER} USER <user> IDENTIFIED BY <password> command. By default GoldenGate uses
the source password in the target CREATE or ALTER statement. To prevent the source
password from being sent to the target, use NOREPLICATEPASSWORD.
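As an illustration only (the key name and encrypted password value are invented), an Extract that suppresses source passwords could be paired with a Replicat that supplies its own default password:

```
-- Extract parameter file
DDLOPTIONS REPORT, NOREPLICATEPASSWORD
-- Replicat parameter file
DDLOPTIONS DEFAULTUSERPASSWORD "AACAAAAAAAAAAA" ENCRYPTKEY mykey
```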
To implement security for GoldenGate commands, you create a CMDSEC file in the
GoldenGate directory. Without this file, access to all GoldenGate commands is
granted to all users. The CMDSEC file should be created and secured by the user
responsible for central administration of Extract/Replicat.
Managing: Command Level Security - CMDSEC File
Command security entries include the command name, object name, OS group name,
OS user name, and whether access is allowed. For example:

Command  Object    OS Group  OS User  Allowed?
STATUS   REPLICAT  ggsgroup  ggsuser  NO
STATUS   *         ggsgroup  *        YES
START    REPLICAT  *         root     YES
START    REPLICAT  *         *        NO
*        EXTRACT   200       *        NO
STOP     *         ggsgroup  *        NO
STOP     *         ggsgroup  ggsuser  YES
*        *         root      *        YES
*        *         *         *        NO

Can you see the error with the two STOP lines? (Hint: rules are evaluated from the top
of the file down, and the first matching rule wins, so the YES granted to ggsuser is
never reached.)
Since the CMDSEC file is the source of security, it must be secured. The administrator
must grant read access to anyone allowed access to GGSCI, but restrict write and
purge access to the administrator only.
Trail Management
For trails on the source system, data can accumulate if a data pump or the network
terminates, but the primary Extract group reading from logs or audit data continues
extracting data. It is not good practice to stop the primary Extract group to prevent
further accumulation, because the transaction logs could recycle or the audit could be
offloaded before Extract has read them.
For trails on the target system, data will accumulate because data is extracted and
transferred across the network faster than it can be applied to the target database.
To estimate the required trail space
1. Estimate the longest time that you think the network can be unavailable.
2. Estimate how much transaction log volume you generate in one hour.
3. Use the following formula:
trail disk space = <transaction log volume in 1 hour> x <number of
hours down> x .4
Note: The equation uses a multiplier of 40 percent because GoldenGate estimates
that only 40 percent of the data in the transaction logs is written to the trail. A more
exact estimate can be derived by configuring Extract and allowing it to run for a set
time period, such as an hour, to determine the growth. This growth factor can then be
applied to the maximum down time.
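For example, assuming a hypothetical 5 GB of transaction log volume per hour and a maximum expected outage of 8 hours:

```
trail disk space = 5 GB/hour x 8 hours x 0.4 = 16 GB
```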
Managing: Trail Management PURGEOLDEXTRACTS Parameter
Add PURGEOLDEXTRACTS to clean up trail files
In Extract or Replicat parameter file
Trail files are purged as soon as the Extract/Replicat finishes
processing them
Do not use if more than one process uses the trail files
In Manager parameter file
Set MINKEEP<hours, days, files> to define the minimums for
keeping files
Specify USECHECKPOINTS to trigger checking to see if local
processes have finished with the trail files
Specify a frequency to purge old files
Best practice: specify all PURGEOLDEXTRACTS rules in the Manager parameter file,
using the following options:
USECHECKPOINTS
MINKEEPHOURS | MINKEEPDAYS | MINKEEPFILES
FREQUENCYMINUTES | FREQUENCYHOURS
Syntax:
PURGEOLDEXTRACTS {<trail name> | <log table name> }
[, USECHECKPOINTS | NOUSECHECKPOINTS]
[, <minkeep rule>]
[, <frequency>]
Arguments:
<trail name> The trail to purge. Use the fully qualified name.
<log table name> When used to maintain log files rather than trail files, specifies the
log file to purge. Requires a login to be specified with the USERID parameter. The
table owner is assumed to be the one specified with the USERID parameter.
USECHECKPOINTS (Default) Allows purging after all Extract and Replicat processes
are done with the data as indicated by checkpoints, according to any MINKEEP rules.
NOUSECHECKPOINTS Allows purging without considering checkpoints, based on
keeping a minimum of:
Either one file if no MINKEEP rule is used
Or the number of files specified with a MINKEEP rule.
<minkeep rule> Use only one of the following to set rules for the minimum amount of
time to keep data:
MINKEEPHOURS <n>
Keeps an unmodified file for at least the specified number of hours.
MINKEEPDAYS <n>
Keeps an unmodified file for at least the specified number of days.
MINKEEPFILES <n>
Keeps at least n unmodified files, including the active file.
<frequency> Sets the frequency with which to purge old trail files. The default time
for Manager to process maintenance tasks is 10 minutes, as specified with the
CHECKMINUTES parameter. Every 10 minutes, Manager evaluates the
PURGEOLDEXTRACTS frequency and conducts the purge after the specified
interval. <frequency> can be one of the following:
FREQUENCYMINUTES <n>
Sets the frequency, in minutes, with which to purge old trail files. The default
purge frequency is 60 minutes.
FREQUENCYHOURS <n>
Sets the frequency, in hours, at which to purge old trail files.
Managing: Trail Management PURGEOLDEXTRACTS Example
Manager parameter file:
PURGEOLDEXTRACTS /ggs/dirdat/AA*, USECHECKPOINTS,
MINKEEPHOURS 2
For example:
Trail files AA000000, AA000001, and AA000002 exist. Replicat has
been down for four hours and has not completed processing any of
the files.
The result:
The files have not been accessed for 4 hours, so the MINKEEP rule would allow
purging, but checkpoints indicate the files have not been processed, so the purge is
not allowed.
Additional examples:
Example 1
Trail files AA000000, AA000001, and AA000002 exist. The Replicat has been down for
four hours and has not completed processing.
The Manager parameters include:
PURGEOLDEXTRACTS /ggs/dirdat/AA*, NOUSECHECKPOINTS,
MINKEEPHOURS 2
Result: All trail files will be purged since the minimums have been met.
Example 2
The following is an example of why only one of the MINKEEP options should be set.
Replicat and Extract have completed processing. There has been no access to the trail
files for the last five hours. Trail files AA000000, AA000001, and AA000002 exist.
The Manager parameters include:
PURGEOLDEXTRACTS /ggs/dirdat/AA*, USECHECKPOINTS,
MINKEEPHOURS 4, MINKEEPFILES 4
Result: USECHECKPOINTS requirements have been met, so the minimum rules will be
considered when deciding whether to purge AA000002. Only two files would remain if
AA000002 were purged, which would violate the MINKEEPFILES parameter. Since both
MINKEEPFILES and MINKEEPHOURS have been entered, however, MINKEEPFILES is
ignored. The file will be purged because it has not been modified for 5 hours, which
meets the MINKEEPHOURS requirement of 4 hours.
Managing: Trail Management GETPURGEOLDEXTRACTS Command
GETPURGEOLDEXTRACTS
SEND MANAGER option
Displays the rules set with PURGEOLDEXTRACTS
Syntax
SEND MANAGER
{CHILDSTATUS |
GETPORTINFO [DETAIL] |
GETPURGEOLDEXTRACTS |
KILL <process name>}
AUTORESTART
AUTORESTART ER *, RETRIES 5, WAITMINUTES 3, RESETMINUTES 90
AUTOSTART
Manager parameter used to start one or more Extract or Replicat processes when
Manager starts. This can be useful at system boot time, for example, when you want
synchronization to begin immediately. You can use multiple AUTOSTART statements
in the same parameter file. The syntax is:
AUTOSTART <process type> <group name>
<process type> is one of the following: EXTRACT, REPLICAT, ER (Extract and Replicat)
<group name> is a group name or wildcard specification for multiple groups.
Example
AUTOSTART ER *
Note: Be careful to not include any batch tasks, such as initial load processes.
AUTORESTART
Manager parameter used to specify Extract or Replicat processes to be restarted by
Manager after abnormal termination. You can use multiple AUTORESTART statements
in the same parameter file. The syntax is:
AUTORESTART <process type> <group name>
[, RETRIES <max retries>]
[, WAITMINUTES <wait minutes>]
[, RESETMINUTES <reset minutes>]
<process type> Specify one of: EXTRACT, REPLICAT, ER (Extract and Replicat)
<group name> A group name or wildcard indicating the group names of multiple
processes to start.
219
RETRIES <max retries> is the maximum number of times that Manager should try to
restart a process before aborting retry efforts. The default is 2 retries.
WAITMINUTES <wait minutes> is the amount of time to pause between discovering that
a process has terminated abnormally and restarting the process. Use this option to
delay restarting until a necessary resource becomes available or some other event
occurs. The default delay is 2 minutes.
RESETMINUTES <reset minutes> is the window of time during which retries are
counted. The default is 20 minutes. After the time expires, the number of retries
reverts to zero.
Example
In the following example, Manager tries to start all Extract processes three times
after failure within a one hour time period, and it waits five minutes before each
attempt.
AUTORESTART EXTRACT *, RETRIES 3, WAITMINUTES 5,
RESETMINUTES 60
Managing: TCP/IP Errors
GoldenGate automatically attempts to handle TCP/IP errors
Error handling defaults are stored in the tcperrs file located in the
GoldenGate installation directory/folder
You may customize any of these settings to fit your environment
Error handling includes:
Error
Response (ABEND or RETRY)
Delay (in centiseconds)
Maximum retries

Error          Response  Delay (csecs)  Max Retries
ECONNABORTED   RETRY     1000           10
#ECONNREFUSED  ABEND     0              0
ECONNREFUSED   RETRY     1000           12
ECONNRESET     RETRY     500            10
ENETDOWN       RETRY     3000           50
ENETRESET      RETRY     1000           10
ENOBUFS        RETRY     100            60
ENOTCONN       RETRY     100            10
EPIPE          RETRY     500            10
ESHUTDOWN      RETRY     1000           10
ETIMEDOUT      RETRY     1000           10
NODYNPORTS     ABEND     0              0
Changing TCPERRS
To alter the instructions or add instructions for new errors, open the file in a text
editor and change any of the values in the columns.
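For example, to make processes abend immediately on ECONNRESET rather than retry (a hypothetical customization, not a recommendation), the entry could be changed to:

```
# Error        Response  Delay (csecs)  Max Retries
ECONNRESET     ABEND     0              0
```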
Reports overview
Each Extract, Replicat, and Manager process generates a standard report file that
shows:
parameters in use
REPORTCOUNT
- REPORTCOUNT EVERY 1000000 RECORDS
- REPORTCOUNT EVERY 30 MINUTES, RATE
- REPORTCOUNT EVERY 2 HOURS
REPORTROLLOVER
- REPORTROLLOVER AT 01:00
REPORT
Use REPORT to specify when Extract or Replicat generates interim runtime statistics in
a process report. The statistics are added to the existing report. By default, runtime
statistics are displayed at the end of a run unless the process is intentionally killed. By
default, reports are only generated when an Extract or Replicat process is stopped.
The statistics for REPORT are carried over from the previous report. For example, if the
process performed 10 million inserts one day and 20 million the next, and a report is
generated at 3:00 each day, then the first report would show the first 10 million inserts,
and the second report would show those plus the current day's 20 million inserts, totaling
30 million. To reset the statistics when a new report is generated, use the
STATOPTIONS parameter with the RESETREPORTSTATS option.
Syntax
REPORT
{AT <hh:mi> |
ON <day> |
AT <hh:mi> ON <day>}
Where:
AT <hh:mi> generates the report at a specific time of the day. Using AT without ON
generates a report at the specified time every day.
ON <day> generates the report on a specific day of the week. Valid values are the days
of the week in text (e.g. SUNDAY).
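For example, the following entry generates interim runtime statistics at 1:00 a.m. every Sunday:

```
REPORT AT 01:00 ON SUNDAY
```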
REPORTCOUNT
Use REPORTCOUNT to generate a count of records that have been processed since
the Extract or Replicat process started. Results are printed to the report file and to
screen. Record counts can be output at scheduled intervals or after a specific number of
records. Record counts are carried over from one report to the next. REPORTCOUNT
can be used only once in a parameter file. If there are multiple instances of
REPORTCOUNT, GoldenGate uses the last one.
Syntax
REPORTCOUNT
[EVERY] <count>
{RECORDS | SECONDS | MINUTES | HOURS}
[, RATE]
<count> is the interval after which to output a count.
RECORDS | SECONDS | MINUTES | HOURS is the unit of measure for <count>.
RATE reports the number of operations per second and the change in rate, as a
measurement of performance.
REPORTROLLOVER
Use REPORTROLLOVER to define when the current report file is aged and a new one
is created. Old reports are renamed in the format of <group name><n>.rpt, where
<group name> is the name of the Extract or Replicat group and <n> is a number that
gets incremented by one whenever a new file is created, for example: myext0.rpt,
myext1.rpt, myext2.rpt, and so forth.
Note: Report statistics are carried over from one report to the next. To reset the
statistics in the new report, use the STATOPTIONS parameter with the
RESETREPORTSTATS option.
Either the AT or ON option is required. Both options can be used together. Using AT
without ON generates a report at the specified time every day.
This parameter does not cause new runtime statistics to be written to the report. To
generate new runtime statistics to the report, use the SEND EXTRACT or SEND
REPLICAT command with the REPORT option. To control when runtime statistics are
generated to report files, use the REPORT parameter.
Syntax
REPORTROLLOVER
{AT <hh:mi> |
ON <day> |
AT <hh:mi> ON <day>}
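For example, to age the current report file and start a new one at 5:00 a.m. every Saturday:

```
REPORTROLLOVER AT 05:00 ON SATURDAY
```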
SEND REPORT
Use SEND REPORT to communicate with a running process and generate an interim
statistical report that includes the number of inserts, updates, and deletes output
since the last report. The request is processed as soon as the process is ready to
accept commands from users.
Syntax:
REPORT
[HANDLECOLLISIONS [<table spec>] ]
HANDLECOLLISIONS shows tables for which HANDLECOLLISIONS has been enabled.
<table spec> restricts the output to a specific target table or a group of target tables
specified with a standard wildcard (*).
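For example, using hypothetical group and table names:

```
GGSCI> SEND EXTRACT myext, REPORT
GGSCI> SEND REPLICAT myrep, REPORT HANDLECOLLISIONS fin.*
```

The first command requests an interim statistical report from Extract; the second restricts the HANDLECOLLISIONS display to target tables matching fin.*.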
VIEW REPORT
Use VIEW REPORT to view the process report that is generated by Extract or Replicat.
The report lists process parameters, run statistics, error messages, and other
diagnostic information.
The command displays only the current report. Reports are aged whenever a process
starts. Old reports are appended with a sequence number, for example finance0.rpt,
finance1.rpt, and so forth. To view old reports, use the [<n>] option.
Syntax
VIEW REPORT {<group name>[<n>] | <file name>}
<group name> The name of the group. The command assumes the report file named
<group>.rpt in the GoldenGate dirrpt sub-directory.
<n> The number of an old report. Report files are numbered from 0 (the most recent)
to 9 (the oldest).
<file name> A fully qualified file name, such as c:\ggs\dirrpt\orders.rpt.
Example 1
VIEW REPORT orders3
Example 2
VIEW REPORT c:\ggs\dirrpt\orders.rpt
Statistics overview
Extract and Replicat maintain statistics in memory during normal processing. These
statistics can be viewed online with GGSCI by issuing the STATS command. STATS has
many options, such as resetting the counters, displaying brief totals only, or
producing statistics on a per-table basis.
STATS Command
Use STATS REPLICAT or STATS EXTRACT to display statistics for one or more groups.
Syntax:
STATS {EXTRACT | REPLICAT} <group name>
[, <statistic>]
[, TABLE <table>]
[, TOTALSONLY <table spec>]
[, REPORTRATE <time units>]
[, REPORTFETCH | NOREPORTFETCH]
[, REPORTDETAIL | NOREPORTDETAIL]
[, ... ]
<group name> The name of an Extract or Replicat group, or a wildcard (*) to specify
multiple groups. For example, T* shows statistics for all groups whose names begin with T.
<statistic> The statistic to be displayed. More than one statistic can be specified by
separating each with a comma, for example STATS REPLICAT finance, TOTAL, DAILY.
Valid values are:
TOTAL Displays totals since process startup.
DAILY Displays totals since the start of the current day.
HOURLY Displays totals since the start of the current hour.
LATEST Displays totals since the last RESET command.
RESET Resets the counters in the LATEST statistical field.
TABLE <table>
Displays statistics only for the specified table or a group of tables specified with a
wildcard (*).
REPORTFETCH | NOREPORTFETCH
RESETREPORTSTATS | NORESETREPORTSTATS
STATOPTIONS Parameter
Use STATOPTIONS to specify information to be included in statistical displays
generated by the STATS EXTRACT or STATS REPLICAT command. These options also can
be enabled as needed as arguments to those commands.
Syntax:
STATOPTIONS
[, REPORTDETAIL | NOREPORTDETAIL]
[, REPORTFETCH | NOREPORTFETCH]
[, RESETREPORTSTATS | NORESETREPORTSTATS]
REPORTDETAIL | NOREPORTDETAIL
Valid for Replicat. REPORTDETAIL returns statistics on operations that were not
replicated as the result of collision errors. These operations are reported in the
regular statistics (inserts, updates, and deletes performed) plus as statistics in the
detail display, if enabled. For example, if 10 records were insert operations and they
were all ignored due to duplicate keys, the report would indicate that there were 10
inserts and also 10 discards due to collisions. NOREPORTDETAIL turns off reporting of
collision statistics. The default is REPORTDETAIL.
REPORTFETCH | NOREPORTFETCH
Valid for Extract. REPORTFETCH returns statistics on row fetching, such as that
triggered by a FETCHCOLS clause or fetches that must be performed when not enough
information is in the transaction record. NOREPORTFETCH turns off reporting of fetch
statistics. The default is NOREPORTFETCH.
RESETREPORTSTATS | NORESETREPORTSTATS
Controls whether or not report statistics are reset when a new process report is
created. The default of NORESETREPORTSTATS continues the statistics from one report
to another (as the process stops and starts or as the report rolls over based on the
REPORTROLLOVER parameter). To reset statistics, use RESETREPORTSTATS.
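For example, a parameter file entry that resets statistics whenever a new report is created and, for Extract, also includes fetch statistics:

```
STATOPTIONS RESETREPORTSTATS, REPORTFETCH
```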
Use a lag alert to notify you when a process has exceeded the threshold
GoldenGate provides proactive messaging for processes that are not running:
DOWNCRITICAL messages for failed processes
DOWNREPORT reminders for failed processes
GoldenGate provides proactive messaging for processes that are lagging:
LAGINFO and LAGCRITICAL control warning and critical error messaging for latency
thresholds that have been exceeded
LAGREPORT controls the frequency of latency monitoring
Director Client can be configured to send automatic email alerts to operators or email
distribution groups for:
LATENCY ALERTS sends emails when latency thresholds have been exceeded
MESSAGE ALERTS sends emails when any particular error message is generated
Lag charts can be displayed to show the history of average latency over a period of
time
DOWNREPORT
Whenever a process starts or stops, events are generated to the error log, but those
messages can easily be overlooked if the log is large. DOWNREPORTMINUTES and
DOWNREPORTHOURS set an interval for reporting on terminated processes. Only
abended processes are reported as critical unless DOWNCRITICAL is specified.
Syntax
DOWNREPORTMINUTES <minutes> | DOWNREPORTHOURS
<hours>
<minutes> The frequency, in minutes, to report processes that are not
running.
<hours> The frequency, in hours, to report processes that are not
running.
Example
The following generates a report every 30 minutes.
DOWNREPORTMINUTES 30
DOWNCRITICAL
Specifies that both abended processes and those that have stopped gracefully are
marked as critical in the down report.
UPREPORT
Use UPREPORTMINUTES or UPREPORTHOURS to specify the frequency with which
Manager reports Extract and Replicat processes that are running. Every time one of
those processes starts or stops, events are generated. Those messages are easily
overlooked in the error log because the log can be so large. UPREPORTMINUTES and
UPREPORTHOURS report on a periodic basis to ensure that you are aware of the
process status.
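For example, to have Manager confirm every hour that processes are still running:

```
UPREPORTHOURS 1
```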
[Diagram: the primary Extract reads the transaction log and writes to a local trail; a
Data Pump reads that trail and sends data across the network (TCP/IP) to the Server
Collector, which writes a trail on the target; Replicat reads the target trail and applies
the data to the target database. Extract lag, Pump lag, and Replicat lag are each
measured by comparing the system time at the write to trail (or the apply) with the
record's timestamp, and together they make up end-to-end latency.]
Definitions:
Lag
This is the Extract lag or Pump lag in the diagram. It is the difference in time
between when a change record was processed by Extract (written to the trail) and the
timestamp of that record in the data source.
Latency
This is the Replicat lag in the diagram. It is the difference in time between when a
change is made to source data and when that change is reflected in the target data.
LAGINFO
Frequency to report lags to the error log
LAGINFOSECONDS <seconds>
LAGINFOMINUTES <minutes>
LAGINFOHOURS <hours>
LAGCRITICAL
Set the lag threshold at which a critical message is forced to the error log
LAGCRITICALSECONDS <seconds>
LAGCRITICALMINUTES <minutes>
LAGCRITICALHOURS <hours>
LAGREPORT
Manager parameter, LAGREPORTMINUTES or LAGREPORTHOURS, used to specify the
interval at which Manager checks for Extract and Replicat lag. The syntax is:
LAGREPORTMINUTES <minutes> | LAGREPORTHOURS
<hours>
<minutes> The frequency, in minutes, to check for lag.
<hours> The frequency, in hours, to check for lag.
Example
LAGREPORTHOURS 1
LAGINFO
Manager parameter, LAGINFOSECONDS, LAGINFOMINUTES, or LAGINFOHOURS, used to
specify how often to report lag information to the error log. A value of zero (0) forces a
message at the frequency specified with the LAGREPORTMINUTES or LAGREPORTHOURS
parameter. If the lag is greater than the value specified with the LAGCRITICAL
parameter, Manager reports the lag as critical; otherwise, it reports the lag as an
informational message. The syntax is:
LAGINFOSECONDS <seconds> |
LAGINFOMINUTES <minutes> |
LAGINFOHOURS <hours>
<seconds> The frequency, in seconds, to report lag information.
<minutes> The frequency, in minutes, to report lag information.
<hours> The frequency, in hours, to report lag information.
Example
LAGINFOHOURS 1
Email Alerts
Director Client can be configured to send automatic email alerts to operators or email
distribution groups for:
MESSAGE ALERTS sends emails when any particular error message is generated
LATENCY ALERTS sends emails when latency thresholds have been exceeded
Lag charts can be displayed to show the history of average latency over a period of time
What is a System Alert?
A System Alert is a filter you set up in Director on event type and/or text or
checkpoint lag for a particular process or group of processes. If the criteria established
in the Alert match, an audible warning can be triggered, or an email may be sent to
one or more recipients.
Setting up a System Alert
Using the System Alert Setup page, choose to enter a new Alert, or change an existing
one, by selecting from the Select an Alert dropdown list. When the desired Alert is
displayed, complete the following fields:
Alert Name
This is a friendly name that you will use to refer to the Alert; it can be anything up to
50 characters.
Alert Type
Choose between the following options. Note that some fields on the page will change
depending upon what you select here.
Process Lag
This alert type allows you to specify a threshold for checkpoint lag; should lag go over
the threshold, notifications are generated.
Event Text
This alert type allows you to specify an event type or text; if the event matches,
notifications are generated.
Instance Name
Choose a specific Manager Instance to apply the Alert criteria to, or you may make
the Alert global to all instances.
Process Name
Type the process name to apply the criteria to. You may use a wildcard (*) here to
make partial matches, for example:
'*' - would match all process names
'EXT*' - would match processes beginning with 'EXT'
'*SF*' - would match all processes with 'SF' anywhere in the name
Criteria
When Lag goes above
This field is displayed when you select a Process Lag Alert type. Enter the desired Lag
Threshold here.
When Event Type is
This field is displayed when you select an Event Text Alert type. Select the event
types (ERROR, WARNING) to match here.
When Event Text contains
This field is displayed when you select an Event Text Alert type. Enter the text to
match; leave blank or enter '*' to match any text.
Action
Send eMail to
Enter a comma-separated list of email addresses
Troubleshooting
Troubleshooting - Resources
Processing status
Events
Errors
Checkpoints
Process reports
Event/Error log
Discard file
System logs
Discard file
GoldenGate creates a discard file when the DISCARDFILE parameter is used in the
Extract or Replicat parameter file and the process has a problem with a record it is
processing. The discard file contains column-level details for operations that a process
could not handle, including:
the database error message
the trail file sequence number
the relative byte address of the record in the trail
details of the discarded record
System logs
GoldenGate writes errors that occur at the operating system level to the Event Viewer
on Windows or the syslog on UNIX. On Windows, this feature must be installed.
Errors appearing in the system logs also appear in the GoldenGate error log.
Director
Most of the information viewed with GGSCI commands can also be viewed through
Director Client and Director Web, GoldenGate's graphical user interfaces. For more
information about Director, see the Director online help.
Troubleshooting Show Status, Events and Errors
These commands display basic processing status, events and errors
SEND <group>, STATUS shows current processing status
STATS <group> shows statistics about operations processed
INFO ALL shows status and lag for all Manager, Extract, and
Replicat processes on the system
INFO <group>, DETAIL shows process status, datasource,
checkpoints, lag, working directory, files containing processing
information
The data source can be transaction log or trail/extract file for Extract, or trail/extract
file for Replicat.
Sample INFO DETAIL command output (some redo logs purposely omitted due to space
constraints):
GGSCI> INFO EXT_CTG1, DETAIL

EXTRACT    EXT_CTG1    Status ABENDED
Checkpoint Lag
Log Read Checkpoint File C:\ORACLE\ORADATA\ORA10G\REDO01.LOG
                    2005-12-22 23:20:39  Seqno 794, RBA 8919040

(The redo log history columns, showing Seqno, RBA, Max size, and Begin timestamp
for each log, with some entries marked "* Initialized *", are omitted here.)

Current directory   C:\GoldenGate\Oracle_10.1.10\win_ora101_v8020_019
Report file         C:\GoldenGate\Oracle_10.1.10\win_ora101_v8020_019\dirrpt\EXT_CTG1.rpt
Parameter file      C:\GoldenGate\Oracle_10.1.10\win_ora101_v8020_019\dirprm\EXT_CTG1.prm
Checkpoint file     C:\GoldenGate\Oracle_10.1.10\win_ora101_v8020_019\dirchk\EXT_CTG1.cpe
Process file        C:\GoldenGate\Oracle_10.1.10\win_ora101_v8020_019\dirpcs\EXT_CTG1.pce
Error log           C:\GoldenGate\Oracle_10.1.10\win_ora101_v8020_019\ggserr.log
Record Source = A
Type = 6
# Input Checkpoints = 2
# Output Checkpoints = 1
File Information:
Block Size = 2048
Max Blocks = 100
Record Length = 2048
Current Offset = 0
Configuration:
Data Source = 3
Transaction Integrity = 1
Task Type = 0
Status:
Start Time = 2006-06-09 14:15:14
Last Update Time = 2006-06-09 14:16:50
Stop Status = A
Troubleshooting Recovery
Both Extract and Replicat restart after a failure at their last read
checkpoint
SEND EXTRACT STATUS command reports when Extract is
recovering
Checkpoint information is updated during the recovery stage
allowing you to monitor the progress with the INFO command
If an error prevents Replicat from moving forward in the trail, you
can restart Replicat after the bad transaction:
START REPLICAT <group>
SKIPTRANSACTION | ATCSN <csn> |AFTERCSN <csn>
To determine the CSN to use, view the Replicat report file with
the VIEW REPORT <group> command or view the trail with the
Logdump utility
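For example, to restart a hypothetical Replicat group myrep just past a failed transaction whose CSN (the value 6488359 is invented for illustration) was identified from the report file:

```
GGSCI> START REPLICAT myrep AFTERCSN 6488359
```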
The log's name is ggserr.log, and it is located in the root GoldenGate directory. You can also
locate the file using the INFO EXTRACT <group>, DETAIL command. The location of the
ggserr.log file is listed with the other GoldenGate working directories, as shown
below:
GGSCI> info extract oraext, detail
EXTRACT    ORAEXT    Last Started 2005-12-28 10:45    Status STOPPED
Checkpoint Lag      00:00:00 (updated 161:55:17 ago)
Log Read Checkpoint File C:\ORACLE\ORADATA\ORA920\REDO03.LOG
                    2005-12-29 17:55:57  Seqno 34, RBA 104843776

<some contents deliberately omitted>

Current directory   C:\GoldenGate802
Report file         C:\GoldenGate802\dirrpt\ORAEXT.rpt
Parameter file      C:\GoldenGate802\dirprm\ORAEXT.prm
Checkpoint file     C:\GoldenGate802\dirchk\ORAEXT.cpe
Process file        C:\GoldenGate802\dirpcs\ORAEXT.pce
Error log           C:\GoldenGate802\ggserr.log
Discard file
The location of the discard file is set in either the Extract or Replicat parameter file
by using the DISCARDFILE parameter:
DISCARDFILE C:\GoldenGate\dirrpt\discard.txt, <OPTION>
Options are:
APPEND: adds new content to old content in an existing file
PURGE: purges an existing file before writing new content
MEGABYTES <n>: sets the maximum size of the file (default is 1 MB)
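For example, a hypothetical entry that appends to an existing discard file and raises its maximum size to 10 MB:

```
DISCARDFILE C:\GoldenGate\dirrpt\discard.txt, APPEND, MEGABYTES 10
```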
Discard file sample
Aborting transaction beginning at seqno 12 rba 231498
         error at seqno 12 rba 231498
Problem replicating HR.SALES to HR.SALES
Mapping problem with compressed update record (target format)...
ORDER_ID =
ORDER_QTY = 49
ORDER_DATE = 2005-10-19 14:15:20
Technical Support