
Oracle GoldenGate

Fundamentals Student Guide Version 10.4

October 2009

Oracle GoldenGate Fundamentals Student Guide, version 10.4

Copyright 1995, 2009 Oracle and/or its affiliates. All rights reserved. This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited. The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing. If this software or related documentation is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable: U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, duplication, disclosure, modification, and adaptation shall be subject to the restrictions and license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License (December 2007). Oracle USA, Inc., 500 Oracle Parkway, Redwood City, CA 94065. This software is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications which may create a risk of personal injury. 
If you use this software in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure the safe use of this software. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software in dangerous applications. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. This software and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.

Contents

About GoldenGate Company and Solutions
Technology Overview
Architecture
Configuring Oracle GoldenGate
  Step 1. Prepare the Environment
  GoldenGate Command Interface
  Step 2. Change Capture
  Step 3. Initial Load
  Step 4. Change Delivery
Extract Trails and Files
  GoldenGate Data Format
  Alternative Formats
  Viewing in Logdump
  Reversing the Trail Sequence
Parameters
  GLOBALS Parameters
  Manager Parameters
  Extract Parameters
  Replicat Parameters
Data Mapping and Transformation
  Data Selection and Filtering
  Column Mapping
  Functions
  SQLEXEC
  Macros
  User Tokens
  User Exits
  Oracle Sequences
Configuration Options
  BATCHSQL
  Compression
  Encryption
  Event Actions
  Bidirectional Considerations
  Oracle DDL Replication
Managing Oracle GoldenGate
  Command Level Security
  Trail Management
  Process Startup and TCP/IP Errors
  Reporting and Statistics
  Monitoring Oracle GoldenGate
  Troubleshooting
  Technical Support

Oracle GoldenGate Fundamentals Student Guide

About GoldenGate Company and Solutions


Our Business
We enable real-time, continuous movement of transactional data across operational and analytical business systems.

Real-Time Access to Real-Time Information

Real-Time Access
Availability: the degree to which information can be instantly accessed.

Real-Time Information
Integration: the process of combining data from different sources to provide a unified view.

Mission-Critical Systems

Oracle GoldenGate provides solutions that enable your mission-critical systems to have continuous availability and access to real-time data. We offer a robust yet easy-to-use platform for moving real-time transactional data between operational and analytical systems, enabling both High Availability solutions and Real-Time Integration solutions. Real-Time Access means your critical data is accessible and available whenever you need it, 24x7. At the same time, Real-Time Information means the data available is as current as possible: not 24 hours old, not even 4 hours old.

Oracle GoldenGate Success

Company Strength and Service

GoldenGate Software established in 1995

Acquired by Oracle in 2009

Global sales and support

Rapid Growth in Strategic Partners

500+ customers
4000+ solutions implemented in 35 countries
Established, Loyal Customer Base

Our partnerships are rapidly increasing with major technology players, including database and IT infrastructure vendors, packaged applications, business intelligence, and service providers. Because our software platform supports a variety of solution use cases, our more than 500 customers are using our technology for over 4000 solutions around the world. What we typically find is that once an initial solution is implemented and the benefits achieved, our customers then find additional areas across the enterprise where we can further drive advantages for them.


Transactional Data Management (TDM)


Oracle GoldenGate provides low-impact capture, routing, transformation, and delivery of database transactions across heterogeneous environments in real time.

Key Capabilities:
Real Time: moves data with sub-second latency
Heterogeneous: moves changed data across different databases and platforms
Transactional: maintains transaction integrity

Additional Differentiators:
Performance: log-based capture moves thousands of transactions per second with low impact
Extensibility & Flexibility: meets a variety of customer needs and data environments with an open, modular architecture
Reliability: resilient against interruptions and failures

Our focus is on transactional data management (TDM), which means delivering a platform in which transactional data can be best utilized in real time, enterprise wide. Oracle GoldenGate captures, routes, transforms, and delivers transactional data in real time, and it works across heterogeneous environments with very low impact and preserved transaction integrity. The key capabilities around which we architect the product are: Real time: we move data essentially in real time, with sub-second speed. Heterogeneous: we work across different database and hardware types. Transactional: we are transaction aware and apply read-consistent changed data to maintain referential integrity between source and target systems. We further differentiate ourselves from other technologies with: - High performance with low impact: we can move large volumes of data very efficiently while maintaining very low lag times/latency. - Flexibility: we meet a wide range of customer solution and integration needs, thanks to our open, modular architecture. - Reliability: our architecture is extremely resilient against potential interruptions, with no single point of failure or dependencies, and is easy to recover.

TRANSACTIONAL DATA INTEGRATION

Oracle GoldenGate provides the following data replication solutions:


High Availability

Live Standby for an immediate fail-over solution that can later re-synchronize with your primary source. Active-Active solutions for continuous availability and transaction load distribution between two or more active systems.
Zero-Downtime Upgrades and Migrations

Eliminate downtime for upgrades and migrations.


Live Reporting

Feeding a reporting database so that you don't burden your source production systems.
Operational Business Intelligence (BI)

Real-time data feeds to operational data stores or data warehouses, directly or via ETL tools.

Transactional Data Integration

Real-time data feeds to messaging systems for business activity monitoring (BAM), business process monitoring (BPM), and complex event processing (CEP). Uses event-driven architecture (EDA) and service-oriented architecture (SOA).


Oracle GoldenGate Solutions


High Availability & Disaster Tolerance:
Live Standby
Active-Active
Zero-Downtime Operations for upgrades, migrations, and maintenance

Real-Time Data Integration:
Real-Time Data Warehousing
Live Reporting
Transactional Data Integration

Oracle GoldenGate provides two primary solution areas: High Availability/Disaster Tolerance and Real-Time Data Integration. Within High Availability and Disaster Tolerance, we offer:

Active-Active solutions for continuous availability and transaction load distribution between two or more active systems

Zero-Downtime Operations, which eliminate downtime for planned outages involving upgrades, migrations, and ongoing maintenance

Live Standby for an immediate fail-over solution that can later re-synchronize with your primary source

Within Real-Time Data Integration, we offer:

Real-Time Data Warehousing, which gives you real-time data feeds to data warehouses or operational data stores

Transactional Data Integration for distributing data in real time between transaction processing systems

Live Reporting for feeding a reporting database so that you don't burden your source production systems

Oracle GoldenGate for High Availability & Disaster Tolerance


High Availability & Disaster Tolerance:
Live Standby
Active-Active
Zero-Downtime Operations for upgrades, migrations, and maintenance

Real-Time Access benefits:
Improved Uptime
Higher Performance
Faster Recovery
Minimized Data Loss
Lower TCO

For High Availability and Disaster Tolerance solutions, it's about real-time, or CONTINUOUS, access to your data via your critical applications. The benefits that Oracle GoldenGate drives here include:
- Improved uptime and availability (helping you reach aggressive service level agreements/SLAs)
- Higher performance for your production systems, helping to eliminate scalability or response-time delays that can give users the impression of an availability or access issue
- Faster recovery and minimized data loss, so you can achieve higher Recovery Time Objectives (RTOs) and Recovery Point Objectives (RPOs)
- An overall lower total cost of ownership, by putting your standby systems to work for other solutions


High Availability: Live Standby

Benefits:
Eliminate unplanned downtime
Reduce data loss and isolate corrupt data
Re-synchronize backup and primary systems
Remove distance constraints
Automate switchovers
Improve ROI with active standby available for reporting

Live Standby helps eliminate unplanned outages to enable continuous availability, with no geographic distance constraints. Oracle GoldenGate moves changed data from the primary database to a standby in sub-seconds, so that end users have a reliable failover system with up-to-date data to which they can immediately switch over. No database recovery process is required, because changed data is queued outside of the database in persisted trail files, and data loss risk is minimized. Oracle GoldenGate also isolates corrupt data during movement to make sure the secondary system is reliable when it is needed. The customer's return on investment can be further increased by using the live standby system for reporting or testing: Oracle GoldenGate allows the standby database to be open, so it does not have to sit idle and can be put to work.


High Availability: Active-Active

Benefits:
Achieve continuous availability
Enable transaction load distribution (with built-in conflict resolution)
Improve performance
Lower TCO

Active-Active: Oracle GoldenGate enables bidirectional data movement between two or more databases that actively support an application, with no geographic distance constraints. The active-active solution allows data updates and changes (write activity) to occur on two or more active databases supporting live applications. Oracle GoldenGate synchronizes the two active databases by replicating the data between each at a logical level, and allows load distribution to improve system performance. In the case of an outage of one system, there is no downtime for the end user because the other active system continues with operations. Because Oracle GoldenGate is an asynchronous solution, conflict management is required to ensure data accuracy in the event that the same row is changed in two or more databases at (or about) the same time. Oracle GoldenGate provides capabilities to detect and resolve conflicts as well. A variety of active-active scenarios can be supported depending on the desired implementations. We have strong experience in active-active solutions for both High Availability as well as Zero-Downtime upgrades and migration projects.


Zero-Downtime Upgrades and Migrations

Benefits:
Eliminate planned downtime during hardware, database, OS, and/or application upgrades and migrations
Minimize risk with fail-back contingency
Improve success with phased user migrations

Zero-Downtime Operations eliminates planned outages during database, application, or server upgrades, migrations, and/or maintenance. Oracle GoldenGate captures all the changed data in the primary system while the new system is initiated and prepared. Once the second (new) system is upgraded or migrated, Oracle GoldenGate applies all the changed data to the new system, then keeps the two environments synchronized with real-time replication. With such projects, there is always concern about what will happen once you switch over to the new environment. Oracle GoldenGate alleviates many of those risks with its fail-back capabilities: after switchover, Oracle GoldenGate captures the changes that happen in the new system so that the old system is kept up to date in case there is a need to fail back to the old environment. In Oracle or HP NonStop environments, the Oracle GoldenGate Veridata product can also verify that the data is consistent in both systems before and even after switchover.


Oracle GoldenGate for Real-Time Data Integration


Real-Time Data Integration:
Live Reporting
Operational Business Intelligence
Transactional Data Integration

Real-Time Information benefits:
Fresher Data
Minimal Overhead
No Batch Windows
Data Integrity
Ease of Integration

For our Real-Time Data Integration solutions, it's about real-time information, or access to CURRENT operational data. The benefits that Oracle GoldenGate drives here include:
- Fresher, real-time data available for use and decision-making, removing latency as a technical constraint
- Minimal overhead and impact on your source systems and overall architecture when capturing and moving real-time data
- No requirement for batch windows
- Transactional data integrity, which helps improve overall data quality
- Ease of integration: Oracle GoldenGate easily fits into existing and desired architectures, and is easy to maintain over the long term


Data Integration: Live Reporting

Benefits:
Use real-time data for better, faster decision making
Remove reporting overhead on source system
Reduce cost-to-scale as user demands and data volumes grow
Leverage cost-effective systems for reporting needs

Oracle GoldenGate's Live Reporting enables real-time reporting capabilities while improving the performance of operational source systems. Oracle GoldenGate feeds real-time data from the source to a secondary reporting-only database, such as an operational data store (ODS). This allows reporting activity to be off-loaded from the production database. The secondary database can be a different database and/or platform from the production database, to lower the total cost of ownership and allow organizations to leverage emerging open source technologies. The solution also helps increase scalability as user demands and data volumes grow.


Operational Business Intelligence

Benefits:
Use real-time data for better, faster decision making
Eliminate batch window dependency
Reduce overhead on source system
Maintain referential integrity for data quality
Leverage its flexibility for transformations and integration with ETL

For Real-Time Data Warehousing: the Oracle GoldenGate Real-Time Data Warehousing solution enables continuous, real-time data feeds for data warehouses or operational data stores, to improve business intelligence. Our log-based changed data capture has minimal impact on the source, requires no batch windows, and moves the data in sub-seconds. Each transaction's commit boundaries are maintained for data integrity. Oracle GoldenGate's architecture also improves data recoverability in case there is an outage during data movement; this is an important requirement as data latency decreases in feeding the analytical environment. Oracle GoldenGate's trail files that store the changed data are persisted, so if needed they can be reapplied to the target (and also the source) system without having to capture the data again. Transformations and coexistence with ETL: out of the box, Oracle GoldenGate can support a number of common data transformations often required for data integration. Where complex transformations are needed, Oracle GoldenGate can augment an existing ETL solution in several ways: 1) Oracle GoldenGate can deliver transactional data to staging tables in real time, which the ETL product then extracts from, transforms, and loads into user tables. This method works best when the ETL product is optimized to perform the transformations within the target database (an ELT model). 2) Oracle GoldenGate can provide the data to the ETL engine as flat files, in micro-batches. The latency depends on the ETL product and business requirements, but delivery is typically every few minutes to an hour. 3) Oracle GoldenGate can publish changed data to a messaging system, and the ETL solution (subscribing to the queue or topic) receives it in real time. In each of these architectures, combining real-time change data capture with ETL decreases data latency to real time or near real time and eliminates the batch window dependency.
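As a sketch of the second method, an Extract parameter file can write changed data as delimited flat files for an ETL engine to pick up. The user, host, file, and table names below are hypothetical examples, not required values:

```
-- Hypothetical Extract parameter file: deliver changed data as
-- pipe-delimited ASCII flat files for an ETL engine to consume
EXTRACT extfile
USERID ggs_owner, PASSWORD ggs_pw
-- Write records as delimited ASCII instead of the GoldenGate canonical format
FORMATASCII, DELIMITER '|'
RMTHOST etlhost, MGRPORT 7809
RMTFILE ./dirdat/orders.dat
TABLE sales.orders;
```

Because FORMATASCII replaces the canonical trail format, a file produced this way is meant for consumption by external tools, not by Replicat.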


Transactional Data Integration

Benefits:
Easily integrate large volumes of real-time data between transaction processing systems
Reduce overhead; eliminate batch windows
Improve scalability
Enhance SOA and EDA environments (delivery to JMS-based messaging systems)


Oracle GoldenGate provides real-time data integration between OLTP systems nonintrusively and with minimal impact. Distributed databases and the applications they support can continuously access, utilize, and act on the most current operational data available. The solution can also integrate with JMS-based messaging systems to enable event driven architecture (EDA) and to support service oriented architecture (SOA).


Technology Overview

How Oracle GoldenGate Works: Modular Building Blocks


Capture: Committed changes are captured (and can be filtered) as they occur, by reading the transaction logs.
Trail files: Stage and queue data for routing.
Route: Data can be compressed and encrypted for routing to targets.
Delivery: Applies data with transaction integrity, transforming the data as required.

[Diagram: Capture reads from the Source Database(s) into a Source Trail, which is routed over the network (TCP/IP) to a Target Trail and applied by Delivery to the Target Database(s); in bi-directional configurations the same flow runs concurrently in reverse.]

Oracle GoldenGate consists of decoupled modules that are combined to create the best possible solution for your business requirements. On the source system(s): GoldenGate's Capture (Extract) process reads data transactions as they occur, by reading the native transaction log, typically the redo log. Oracle GoldenGate only moves changed, committed transactional data, which is only a fraction of all log activity, and therefore operates with extremely high performance and very low impact on the data infrastructure. Filtering can be performed at the source or target, at table, column, and/or row level. Transformations can be applied at the capture or delivery stage. Advanced queuing (trail files): to move transactional data efficiently and accurately across systems, Oracle GoldenGate converts the captured data into an Oracle GoldenGate data format in trail files. With both source and target trail files, Oracle GoldenGate's unique architecture eliminates any single point of failure and ensures data integrity is maintained even in the event of a system error or outage.

Routing: Data is sent via TCP/IP to the target systems. Data compression and encryption are supported. Thousands of transactions can be moved per second, without distance limitations. On the target system(s): A Server Collector process (not shown) reassembles the transactional data into a target trail.


The Delivery (Replicat) process applies transactional data to the designated target systems using native SQL calls.

Bi-directional: In bi-directional configurations/solutions, this process runs the same in reverse, to concurrently synchronize data between the source and target systems. Manager processes (not shown) perform administrative functions at each node.

Oracle GoldenGate Supports Applications Running On


Databases
Capture: Oracle, DB2, Microsoft SQL Server, Sybase ASE, Ingres, Teradata, Enscribe, SQL/MP, SQL/MX
Delivery: All listed above, plus MySQL, HP Neoview, Netezza, and any ODBC-compatible databases; ETL products; JMS message queues

O/S and Platforms
Windows 2000, 2003, XP; Linux; Sun Solaris; HP NonStop; HP-UX; HP TRU64; HP OpenVMS; IBM AIX; IBM z/OS

Oracle GoldenGate is ideal for heterogeneous environments, not just supporting different versions of the same database or operating system/hardware, but replicating and integrating data across vendor systems. We support log-based capture of changed data from nearly all major database vendors. We can deliver that data to an even wider range of targets, including open source databases, several data warehouse appliances, ETL servers, and JMS message queues to support Service-Oriented Architectures (SOA) and Event-Driven Architectures (EDA).


Oracle GoldenGate Advantages


Movement
- Speed: sub-second latency
- Volume: thousands of TPS
- Log-based capture
- Native, local apply
- Efficient I/O and bandwidth usage
- Bidirectional
- Group transactions
- Bulk operations
- Compression
- One-to-many, many-to-one, cascade

Management
- Transaction integrity
- Transparent capture
- Guaranteed delivery
- Conflict detection, resolution
- Dynamic rollback
- Incremental TDM
- Initial data load
- GUI-based monitoring and configuration
- Proactive alerts
- Encryption
- Real-time, deferred, or batch
- Event markers

Integration
- Heterogeneous data sources
- Mapping, transformation, enrichment
- Decoupled architecture
- Table, row, column filtering
- XML, ASCII, SQL formats
- Queue interface
- Stored procedures
- User exits
- ETL integration
- Java/JMS integration

Oracle GoldenGate Director


Manages, defines, configures, and reports on Oracle GoldenGate components Key features: Centralized management of Oracle GoldenGate modules Rich-client and Web-based interfaces Alert notifications and integration with 3rd-party monitoring products Real-time feedback Zero-impact implementation

Oracle GoldenGate Director is a centralized, server-based graphical enterprise application that offers an intuitive way to define, configure, manage, and report on Oracle GoldenGate processes. It is a value-added module that centralizes management and improves productivity. Oracle GoldenGate Director supports all platforms and databases supported by Oracle GoldenGate.


Oracle GoldenGate Veridata


A high-speed, low-impact data comparison solution
Identifies and reports data discrepancies between two databases without interrupting those systems or the business processes they support
Supports Oracle, Teradata, SQL Server, NonStop SQL/MP and Enscribe
Supports homogeneous and heterogeneous compares

Benefits:
Reduce financial/legal risk exposure
Speed and simplify IT work in comparing data sources
No disruption to business systems
Improved failover to backup systems
Confident decision-making and reporting

Oracle GoldenGate Veridata is a high-speed data comparison solution that identifies and reports data discrepancies between databases without interrupting ongoing business processes. Using Oracle GoldenGate Veridata, companies can audit and verify large volumes of data across a variety of business applications with certainty, and maintain reliable data synchronization. Oracle GoldenGate Veridata reduces the amount of time and resources required to compare data, minimizes the impact of human errors, and ensures that potential problems can be instantly identified and addressed. Key Veridata features:
- Compares large data volumes with high speed and efficiency
- Allows both data sources to remain online by handling in-flight transactions
- Performs selective, parallel comparison
- Offers an intuitive Web interface and personalized views
- Enables comparison of databases that are on different database versions or operating systems (HP NonStop only)
- Supports comparison of only the data changed since the initial comparison (delta processing)
Why would you need Veridata? Data discrepancies arise even without malicious intent, due to infrastructure problems, application errors, operator mistakes, configuration errors, or unexpected user behavior. With vigilant verification procedures using Oracle GoldenGate Veridata, companies can eliminate data inconsistencies across different business applications and avoid potential operational, financial, or regulatory risks.


Architecture

Oracle GoldenGate Data Capture and Delivery


Oracle GoldenGate Transactional Data Management:
Primarily used for change data capture and delivery from database transaction logs
Can optionally be used for initial load directly from database tables
Especially useful for synchronizing heterogeneous databases
Database-specific methods may be preferable for homogeneous configurations

Change Data Capture & Delivery

[Diagram: Source Database -> Transaction Log -> Extract -> Network (TCP/IP) -> Server Collector -> Trail -> Replicat -> Target Database, with a Manager process on each system]

On the source system:
- An Extract process captures transactional changes from transaction logs.
- The Extract process sends data across a TCP/IP network to the target system.
On the target system:
- A Server Collector process reassembles and writes the data to a GoldenGate trail.
- A Replicat process reads the trail and applies it to the target database. This can be concurrent with the data capture or performed later.

Oracle GoldenGate Fundamentals Student Guide

Manager processes on both systems control activities such as starting, monitoring and restarting processes; allocating data storage; and reporting errors and events.
Change Data Capture & Delivery using a Data Pump

[Diagram: Source Database -> Transaction Log -> Extract -> Local Trail -> Data Pump -> Network (TCP/IP) -> Server Collector -> Remote Trail -> Replicat -> Target Database, with a Manager process on each system]

On the source system:
- An Extract process captures transactional changes from the database transaction log.
- The Extract process writes the data to a local GoldenGate trail. This preserves the captured data if the network or target trail fails.
- A second Extract process (called a Data Pump) sends the data across the network to the target system.
On the target system:
- A Server Collector process reassembles and writes the data to a GoldenGate trail.
- A Replicat process reads the trail and applies it to the target database. This can be concurrent with the data capture or performed later.
Manager processes on both systems control activities such as starting, monitoring and restarting processes; allocating data storage; and reporting errors and events.
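As an illustrative sketch of how a data pump is configured (the group name, trail prefixes, host name, and port below are hypothetical, not taken from this guide), the pump is added in GGSCI as a second Extract that reads the local trail, and its parameter file points it at the remote system:

```
GGSCI> ADD EXTRACT mypump, EXTTRAILSOURCE /ggs/dirdat/lt
GGSCI> ADD RMTTRAIL /ggs/dirdat/rt, EXTRACT mypump

-- dirprm/mypump.prm (illustrative)
EXTRACT mypump
PASSTHRU
RMTHOST target1, MGRPORT 7809
RMTTRAIL /ggs/dirdat/rt
TABLE sales.*;
```

PASSTHRU tells the pump not to evaluate table definitions, which is appropriate when no filtering or transformation is performed on the intermediate system.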


Initial Load
[Diagram: Source Database Tables -> Extract -> Network (TCP/IP) -> either Replicat, a DB bulk load utility, or Server Collector -> Files -> Target Database, with a Manager process on each system]

GoldenGate initial load methods:


- Direct Load (Extract sends data directly to Replicat to apply using SQL)
- Direct Bulk Load (Replicat uses the Oracle SQL*Loader API)
- File to Replicat (Extract writes to a file that Replicat applies using SQL)
- File to database utility (Extract writes to a file formatted for a DB bulk load utility)

On the source system:
- An Extract process captures source data directly from tables.
- The Extract process sends data in large blocks across a TCP/IP network to the target system.
On the target system, one of the following scenarios:
1. Direct Load. Replicat reads the data stream and concurrently applies the data to the target database using SQL.
2. Direct Bulk Load (Oracle). Replicat can apply the data using the Oracle SQL*Loader API to improve performance.
3. File to Replicat. Server Collector reassembles and writes the data to extract files. Replicat applies the data to the target database using SQL.
4. File to database utility. Server Collector reassembles and writes the data to files formatted for a bulk loader, which applies the data to the target database.
Manager processes on both systems control activities such as starting, monitoring and restarting processes; allocating data storage; and reporting errors and events.
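As a hedged sketch of the direct-load variant (the group, host, and table names below are hypothetical), the initial-load Extract is created as a task that reads from the source tables rather than the transaction log, and its parameter file names the Replicat task that will apply the data:

```
GGSCI> ADD EXTRACT myload, SOURCEISTABLE

-- dirprm/myload.prm (illustrative)
EXTRACT myload
USERID ggs, PASSWORD ggs
RMTHOST target1, MGRPORT 7809
RMTTASK REPLICAT, GROUP myrepld
TABLE sales.*;
```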


Online versus Batch


- Change data capture & delivery can be run either continuously (online) or as a special run (batch run) to capture changes for a specific period of time.
- Initial load is always a special run (batch run).

Checkpointing - Extract
For change data capture, Extract and Replicat save checkpoints to a checkpoint file so they can recover in case of failure. Extract maintains:
- 2 input checkpoints into the transaction log: the start of the oldest uncommitted transaction in the log, and the last record read from the log
- 1 output checkpoint for each trail it writes to: the end of the last committed transaction written to that trail

Checkpoints are used during online change synchronization to store the current read and write position of a process. Checkpoints ensure that data changes marked for synchronization are extracted, and they prevent redundant extractions. They provide fault tolerance by preventing the loss of data should the system, the network, or a GoldenGate process need to be restarted.
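Checkpoint positions can be inspected from GGSCI with the SHOWCH option of the INFO command; for example (the group name is illustrative):

```
GGSCI> INFO EXTRACT myext, SHOWCH
```

The output lists the read checkpoints into the transaction log and the write checkpoint into each output trail.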


Checkpointing - Replicat
- Best practice is to create a checkpoint table in the target database
- Checkpoints are maintained in both the checkpoint table (if it exists) and a checkpoint file
- Replicat maintains 2 input checkpoints into the GoldenGate trail: the start of the current uncommitted transaction, and the last record read from the trail
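A sketch of setting up the checkpoint table (the login and the ggs.ggschkpt name are illustrative): log in to the target database from GGSCI, add the table, and optionally name it in the GLOBALS file so that new Replicat groups use it by default:

```
GGSCI> DBLOGIN USERID ggs, PASSWORD ggs
GGSCI> ADD CHECKPOINTTABLE ggs.ggschkpt

-- GLOBALS file entry:
CHECKPOINTTABLE ggs.ggschkpt
```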

Parameters, Process Groups and Commands


- GoldenGate processes are configured by ASCII parameter files.
- A process group consists of:
  - An Extract or Replicat process
  - Associated parameter file
  - Associated checkpoint file
  - Any other files associated with that process
- Each process group on a system must have a unique group name.
- Processes are added and started using the GoldenGate Software Command Interface (GGSCI) with the group name. GGSCI commands also add trails, check process status, etc.
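For example, a Replicat process group could be created, given a parameter file, and started under its group name like this (the group name and trail path are hypothetical):

```
GGSCI> ADD REPLICAT myrep, EXTTRAIL /ggs/dirdat/rt
GGSCI> EDIT PARAMS myrep
GGSCI> START REPLICAT myrep
```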


Solutions and Architecture Discussion Points


1. How is Oracle GoldenGate different from simply replicating database operations?
2. What are some use cases for Oracle GoldenGate software?
3. What is the purpose of checkpointing?

1. Log-based change data capture, decoupled from database architecture. Real-time, heterogeneous and transactional.
2. (a) High availability: live standby, active-active, zero-downtime upgrades and migrations. (b) Real-time data integration: real-time data warehousing (operational business intelligence), live reporting, transactional data integration.
3. For recovery if a GoldenGate process, network or system goes down.


Configuring Oracle GoldenGate



Oracle GoldenGate can be deployed quickly and easily in four steps:
1. Prepare the Environment
2. Change Capture
3. Initial Load
4. Change Delivery

Note: You can run log-based change capture after the initial data load if you set the extract begin time to the start of the longest running transaction committed during the initial data load.

[Diagram: the four steps overlaid on the architecture - 1. Prepare the Environment; 2. Change Capture (Source Database -> Transaction Log -> Extract -> Local Trail -> Data Pump -> Remote Trail); 3. Initial Load (various methods); 4. Change Delivery (Replicat -> Target Database)]

Oracle GoldenGate can be deployed quickly and easily in four steps:
1. Prepare the environment, e.g.:
   - Install Oracle GoldenGate software on source and target
   - Enable transaction logging
   - (Heterogeneous source/target) Generate source definitions so Replicat can process trail data
2. Configure and start change capture to GoldenGate trail files (Extract processes: primary and data pump)
3. Perform initial load to synchronize databases by database-specific or GoldenGate methods
4. Configure and start change delivery (Replicat process)

Step 1. Prepare the Environment


[Diagram: the same four-step architecture, highlighting step 1, Prepare the Environment]



Set up each system:
- Install Oracle GoldenGate software on source and target
- Configure and start GoldenGate Manager on source and target
- If heterogeneous source/target, generate source definitions and copy them to the target
- Prepare the database. For example:
  - Ensure database access by GoldenGate
  - Enable transaction logging


Installing Oracle GoldenGate installs all of the components required to run and manage GoldenGate processing, and it installs the GoldenGate utilities.
Manager must be running on each system before Extract or Replicat can be started, and must remain running while those processes are running so that resource management functions are performed.
The source definitions file contains the definitions of the source tables and is required on the target system in heterogeneous configurations. Replicat refers to the file when transforming data from the source to the target.
To reconstruct an update operation, GoldenGate needs more information than Oracle and SQL Server transaction logs provide by default. Adding supplemental log data forces the logging of the full before and after image for updates.

Install Oracle GoldenGate Software

Prepare Environment: Installation - Access the Media Pack


- Access the product media pack (software and documentation) at edelivery.oracle.com
- Identify the proper release of GoldenGate for your source and target environments:
  - Database and version
  - Operating system and version

A GoldenGate instance is a single installation of GoldenGate.


Prepare Environment: Installation - Windows


- Download the .zip file to C:\GGS
- Unzip the .zip file into the C:\GGS folder
- Configure a Windows service name for the Manager process in a GLOBALS parameter file (required only if multiple Managers run on the server)
- C:\GGS> INSTALL ADDSERVICE ADDEVENTS
- GGSCI> CREATE SUBDIRS

For Windows: Do not install Oracle GoldenGate into a folder that contains spaces in its name, for example GoldenGate Software. The application references path names, and it does not support path names that contain spaces, whether or not they are within quotes.

Prepare Environment: Installation - Windows INSTALL Program


On Windows, an INSTALL program performs the following functions:
- Installs GoldenGate event messages into the system registry
- Installs the Manager as a Windows service
Syntax:
INSTALL <item> [<item> ...]

Example:
C:\GGS> INSTALL ADDEVENTS ADDSERVICE
Note: The uninstall command is: INSTALL DELETESERVICE DELETEEVENTS

Items (all optional):
ADDEVENTS - Adds the GoldenGate events to the registry so that event messages appear in the Windows Event Log.
DELETEEVENTS - Deletes GoldenGate events from the registry.
ADDSERVICE - Defines the GoldenGate Manager process as a Windows service (recommended). Manager can be run under a local or domain account; however, when run this way, Manager stops when the user logs out. By using INSTALL, you can install Manager as a Windows service so that it can be operated independently of user connections and can be configured to start either manually or when the system starts. You can configure the Manager service to run as the Local System account or as a specific named account. The configuration of a service can be changed by using the Services applet of the Windows Control Panel and changing the service Properties.
DELETESERVICE - Removes the GoldenGate Manager service.
AUTOSTART - Specifies that the service be started at system boot time (the default).
MANUALSTART - Specifies that the service be started only at user request (with GGSCI or the Control Panel Services applet).
USER - Specifies a user name to log on as when executing Manager. If specified, the user name should include the domain name, a backward slash, and the user name.
PASSWORD - Specifies the user's password for logon purposes. This can be changed using the Control Panel Services applet.

Prepare Environment: Installation - Multiple Manager Services


- GoldenGate supports running multiple Manager services on Windows
  - For two or more GoldenGate instances, or GoldenGate with a Veridata C Agent (which uses a Manager)
- Each Manager service must be assigned a unique name
- Before installing the service, you can specify the name:
  - Create a GLOBALS parameter file for each Manager
  - Specify the one-word name of the service using the MGRSERVNAME <name> parameter
- INSTALL ADDSERVICE reads the GLOBALS MGRSERVNAME for the service name
  - If there is no GLOBALS setting, it uses the default service name GGSMGR

A GLOBALS file stores parameters that relate to the GoldenGate instance as a whole, as opposed to runtime parameters for a specific process. This file is referenced when installing the Windows service, so that the correct name is registered.
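For instance (the service name GGSMGR2 is hypothetical), the GLOBALS file of a second GoldenGate instance could contain the line below; INSTALL ADDSERVICE run from that instance's directory then registers the Manager service under that name:

```
-- GLOBALS file in the instance's install directory
MGRSERVNAME GGSMGR2
```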


Prepare Environment: Installation - UNIX, Linux or z/OS


- Download the .gz file to /ggs
- gzip -d <filename>.tar.gz
- tar -xvof <filename>.tar
- GGSCI> CREATE SUBDIRS

For UNIX, z/OS, or Linux: Use the gzip and tar options appropriate for your system.
If you are installing GoldenGate into a cluster environment, make certain that the GoldenGate binaries and files are installed on a file system that is available to all cluster nodes. After installing GoldenGate, make certain to configure the GoldenGate Manager process within the cluster application, as directed by the vendor's documentation, so that GoldenGate will fail over properly with the other applications. The Manager process is the master control program for all GoldenGate operations.
A GoldenGate instance is a single installation of GoldenGate.
Prepare Environment: Installation - NonStop SQL/MX

For a SQL/MX source, install Oracle GoldenGate on OSS running on the NonStop source system:
- Download the .gz file to /ggs
- gzip -d <filename>.tar.gz
- tar -xvf <filename>.tar
- GGSCI> CREATE SUBDIRS
- Run the ggmxinstall script
For a SQL/MX target, install Oracle GoldenGate:
- Either on OSS running on the NonStop target system (as described above)
- Or on an intermediate Windows system (as described earlier)

The ggmxinstall script SQL compiles the Extract program and installs the VAMSERV program in the NonStop Guardian space.


The command to run it is:
OSS> ggmxinstall /G/<Guardian vol>/<Guardian subvol>
where <Guardian vol>/<Guardian subvol> is the destination NonStop volume and subvolume in OSS format.

Prepare Environment: Installation - GoldenGate Directories


Directory - Contents
dirchk - GoldenGate checkpoint files
dirdat - GoldenGate trail and extract files
dirdef - Data definitions produced by DEFGEN and used to translate heterogeneous data
dirpcs - Process status files
dirprm - Parameter files
dirrpt - Process report files
dirsql - SQL scripts
dirtmp - Temporary storage for transactions that exceed allocated memory

dirchk - Contains the checkpoint files created by Extract and Replicat processes, which store current read and write positions to support data accuracy and fault tolerance. Written in internal GoldenGate format. Do not edit these files. The file name format is <group name><sequence number>.<ext> where <sequence number> is a sequential number appended to aged files and <ext> is either cpe for Extract checkpoint files or cpr for Replicat checkpoint files. Examples: ext1.cpe, rep1.cpr

dirdat - The default location for GoldenGate trail files and extract files created by Extract processes to store records of extracted data for further processing, either by the Replicat process or another application or utility. Written in internal GoldenGate format. Do not edit these files. File name format is a user-defined two-character prefix followed by either a six-digit sequence number (trail files) or the user-defined name of the associated Extract process group (extract files). Examples: rt000001, finance

dirdef - The default location for data definitions files created by the DEFGEN utility to contain source or target data definitions used in a heterogeneous synchronization environment. Written in external ASCII. File name format is a user-defined name specified in the DEFGEN parameter file. These files may be edited to add definitions for newly created tables. If you are unsure of how to edit a definitions file, contact technical support. Example: defs.dat

dirpcs - Default location for status files. File name format is <group>.<extension> where <group> is the name of the group and <extension> is either pce (Extract), pcr (Replicat), or pcm (Manager). These files are only created while a process is running. The file shows the program name, the process name, the port, and the process ID that is running. Do not edit these files. Examples: mgr.pcm, ext.pce

dirprm - The default location for GoldenGate parameter files created by GoldenGate users to store run-time parameters for GoldenGate process groups or utilities. Written in external ASCII format. File name format is <group name/user-defined name>.prm or mgr.prm. These files may be edited to change GoldenGate parameter values. They can be edited directly from a text editor or by using the EDIT PARAMS command in GGSCI. Examples: defgen.prm, finance.prm

dirrpt - The default location for process report files created by Extract, Replicat, and Manager processes to report statistical information relating to a processing run. Written in external ASCII format. File name format is <group name><sequence number>.rpt where <sequence number> is a sequential number appended to aged files. Do not edit these files. Examples: fin2.rpt, mgr4.rpt

dirsql - The default location for SQL scripts.

dirtmp - The default location for storing large transactions when the size exceeds the allocated memory size. Do not edit these files.


Oracle GoldenGate Documentation

Prepare Environment: Oracle GoldenGate Documentation
- Quick Install Guide
- Installation and Setup Guides (by database)
- Administration Guide
- Reference Guide
- Troubleshooting and Tuning Guide
Note: You can download the documentation from http://www.oracle.com/technology/documentation/index.html

Windows and UNIX platforms:
- Oracle GoldenGate Quick Install Guide: Describes the structure of the media pack and where to find installation instructions.
- Oracle GoldenGate Installation and Setup Guides: There is an installation and setup guide for each database that is supported by Oracle GoldenGate. These include database-specific configuration information.
- Oracle GoldenGate Administration Guide: Introduces Oracle GoldenGate components and explains how to plan for, configure, and implement Oracle GoldenGate on the Windows and UNIX platforms.
- Oracle GoldenGate Reference Guide: Provides detailed information about Oracle GoldenGate parameters, commands, and functions for the Windows and UNIX platforms.
- Oracle GoldenGate Troubleshooting and Tuning Guide: Provides suggestions for improving the performance of Oracle GoldenGate in different situations, and provides solutions to common problems.


Configure and Start Manager

Prepare Environment: Manager - Overview


Performs system management and monitoring tasks:
- Starting GoldenGate processes
- Starting dynamic Server Collector, Replicat, or GGSCI processes
- Error and lag reporting
- GoldenGate trail management
Parameter file: the mgr.prm file in the GoldenGate ./dirprm directory
Event information is written to the ggserr.log file

The Manager process performs system management and monitoring tasks on Windows and Unix, including the following:
- Starting Server Collector processes to collect data from remote Extract processes
- Threshold reporting (for example, when Extract falls behind)
- Purging trails
Manager Parameters: Enter Manager parameters in the dirprm/mgr.prm file, under the GoldenGate installation directory. If no mgr.prm file exists, default management parameters are used.
Error and Informational Reporting: Manager reports critical and informational events to the ggserr.log file in the GoldenGate installation directory.


Prepare Environment: Manager - Configuration


Create the parameter file using GGSCI:
GGSCI> EDIT PARAMS MGR

Start the Manager using GGSCI:
GGSCI> START MGR

Note: To determine which port Manager is using:
GGSCI> INFO MGR

Starting Manager: You must start Manager before most other configuration tasks performed in GGSCI. Use either START MANAGER or START MGR. On Windows systems, you can also start and stop Manager through the standard Windows Services control applet (in the Control Panel).
Prepare Environment: Manager - Sample MGR Parameter File
PORT 7809
DYNAMICPORTLIST 8001, 8002, 9500-9520
PURGEOLDEXTRACTS /ggs/dirdat/aa*, USECHECKPOINTS
PURGEOLDEXTRACTS /ggs/dirdat/bb*, &
    USECHECKPOINTS, MINKEEPDAYS 5
AUTOSTART ER *
AUTORESTART EXTRACT *, WAITMINUTES 2, RETRIES 5
LAGREPORTHOURS 1
LAGINFOMINUTES 3
LAGCRITICALMINUTES 5

This parameter file has the Manager listening on PORT 7809. Ports 8001, 8002, and those in the range 9500 to 9520 will be assigned to the dynamic processes started by Manager.
This Manager process will recycle GoldenGate trails that match the file names /ggs/dirdat/aa* and /ggs/dirdat/bb*. It will only recycle a trail once all Extracts and Replicats have a checkpoint beyond the file (USECHECKPOINTS); however, bb* trails will not be purged until there has been no activity for 5 days.
The Manager will automatically start all Extract and Replicat processes at startup and will attempt to restart any Extract process that abends after waiting 2 minutes, but only up to 5 attempts.
The Manager will report lag information every hour, but only for processes whose latency exceeds the 3- or 5-minute thresholds. The message is flagged informational for lags over 3 minutes and critical for any process that has a lag greater than 5 minutes.

Generate Source Definitions (Heterogeneous Source/Target)

Prepare Environment: Source Definitions - Overview


The problem: understanding source and target layouts across disparate systems and databases.
The solution: the DEFGEN utility program.
- DEFGEN produces a file containing layout definitions of the source files and tables
- This source definitions file is used to interpret layouts for data stored in GoldenGate trails
  - At startup, Replicat reads the definitions file specified with the SOURCEDEFS parameter
  - Server Collector uses the -d argument to specify which definitions file to read at startup
- You can also capture target definitions on the target system and copy them to the source system for Extract to use

The Problem: When capturing, transforming, and delivering data across disparate systems and databases, you must understand both the source and target layouts. Understanding column names and data types is instrumental to GoldenGate's data synchronization functions.
The Solution - The DEFGEN Utility Program: The DEFGEN utility program produces a file containing a definition of the layouts of the source files and tables. The output definitions are saved in an edit file and transferred to all target systems in text format. Replicat and Collector read in the definitions at process startup and use the information to interpret the data from the GoldenGate trails. When transformation services are required on the source system, Extract can use a definitions file containing the target, rather than source, layouts.
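To illustrate how the definitions file is consumed on the target (the group name, path, schema, and login below are hypothetical), a Replicat parameter file references it with SOURCEDEFS:

```
REPLICAT myrep
SOURCEDEFS /ggs/dirdef/source.def
USERID ggs, PASSWORD ggs
MAP sales.account, TARGET sales.account;
```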


Note: The user should never modify the DEFGEN output.


Prepare Environment: Source Definitions - Run DEFGEN
DEFGEN is initiated from the command prompt:
defgen paramfile <paramfile> [reportfile <reportfile>]
Unix example:
defgen paramfile /ggs/dirprm/defgen.prm reportfile /ggs/dirrpt/defgen.rpt
Windows example:
defgen paramfile c:\ggs\dirprm\defgen.prm reportfile c:\ggs\dirrpt\defgen.rpt
Definitions are saved to the file specified in the parameter file. This file needs to be transferred to the target system as a text file.

Prepare Environment: Source Definitions - Sample DEFGEN Parameters

DEFSFILE /ggs/dirdef/source.def, PURGE
SOURCEDB mydb, USERID ggs, PASSWORD ggs
TABLE SALES.ACCOUNT;
TABLE SALES.PRODUCT;
Parameter - Specifies
DEFSFILE - The output definitions file location and name
SOURCEDB - The database name (if needed)
USERID - The user ID and password (if needed) to access the database
TABLE - The table(s) to be defined


Prepare the Source Database

Prepare Environment: Source Database - Overview


Set up the database to:
- Ensure access by GoldenGate
- Enable transaction logging
Note: the exact steps depend on the database.

Database access: You need to assign a database user for each of the GoldenGate processes, unless the database allows authentication at the operating system level. While not required, GoldenGate recommends creating a user specifically for the GoldenGate application. To ensure that processing can be monitored accurately, do not permit other users or processes to operate as the GoldenGate user. In general, the following permissions are necessary for the GoldenGate user:
- On the source system, the user must have permission to read the data dictionary or catalog tables.
- On the source system, the user must have permission to select data from the tables.
- On the target system, the user must have the same permissions as the GoldenGate user on the source system, plus additional privileges to perform DML on the target tables.


Prepare Environment: Source Database


Oracle:
- Add minimal supplemental logging at the database level
- ADD TRANDATA to mark tables for replication
DB2:
- Enter DATA CAPTURE CHANGES at the column for LOB data type
- ADD TRANDATA to mark tables for replication
Sybase:
- Set the secondary truncation point in the logs
- ADD TRANDATA to mark tables for replication
NonStop SQL/MX:
- Special installation steps but no special database preparation

Oracle logs - On UNIX, GoldenGate reads the online logs by default, or the archived logs if an online log is not available. On the Windows platform, GoldenGate reads the archived logs by default, or the online logs if an archive is not available. GoldenGate recommends that archive logging be enabled, and that you keep the archived logs on the system for the longest time possible. This prevents the need to resynchronize data if the online logs recycle before all data has been processed.
DB2 - In addition to enabling logging at a global level, each table to be captured must be configured to capture data for logging purposes. This is accomplished by the DATA CAPTURE CHANGES clause in the CREATE TABLE statement.
Sybase - To capture database operations for tables that you want to synchronize with GoldenGate, each one must be marked for replication. This can be done through the database, but GoldenGate recommends using ADD TRANDATA. GoldenGate uses the secondary transaction log truncation point to identify transaction log entries that have not been processed by the Extract process. The secondary truncation point must be established prior to running the GoldenGate Extract process. The GoldenGate process will manage the secondary truncation point once it has been established.
NonStop SQL/MX - During the installation of SQL/MX, the script ggmxinstall sets a pointer to the VAM that will work with Extract to capture changes from the TMF audit trail.
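A sketch of the ADD TRANDATA step for an Oracle source (the login and table names are hypothetical): log in to the source database from GGSCI, then mark each table:

```
GGSCI> DBLOGIN USERID ggs, PASSWORD ggs
GGSCI> ADD TRANDATA sales.account
GGSCI> ADD TRANDATA sales.product
```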


Prepare Environment: Source Database - SQL Server


To prepare the SQL Server source environment for GoldenGate:
- Create the ODBC data source
  - GoldenGate connects to a SQL Server database through an ODBC connection
  - Extract and Replicat require an established data source name (DSN)
- Set up transaction logging
  - Log truncation and non-logged bulk copy must be turned off
  - The SQL Server database must be set to full recovery mode
  - Before GoldenGate processes are started, at least one full database backup must be done
  - ADD TRANDATA to mark tables for replication

Supplemental logging is enabled using the GoldenGate command interface, GGSCI. The other preparation is done using Windows and SQL Server utilities:
- ODBC Data Source Administrator is used to configure and define the ODBC connection and to define the data source.
- SQL Server Enterprise Manager is used to set full recovery mode and back up the database.
- SQL Server Query Analyzer is used to access the database to turn off log truncation and non-logged bulk copy.
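For SQL Server, the GGSCI database login goes through the ODBC data source name rather than a direct database login; a sketch (the DSN and table name are hypothetical):

```
GGSCI> DBLOGIN SOURCEDB mydsn
GGSCI> ADD TRANDATA dbo.account
```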


Prepare Environment: Source Database - SQL Server 2005


Additional considerations for a SQL Server 2005 database:
- Either install Microsoft Cumulative Update package 6 for SQL Server 2005 Service Pack 2 (or later)
  - Set TRANLOGOPTIONS to MANAGESECONDARYTRUNCATIONPOINT
- Or install SQL Server replication components
  - Create a distribution database
  - Add a replication publication
  - Set transaction retention to zero
  - Disable replication alerts
  - Log full before and after images (not compressed)
  - Set TRANLOGOPTIONS to NOMANAGESECONDARYTRUNCATIONPOINT

The Distributor database must be used only for source databases to be replicated by GoldenGate. One Distributor can be used for all of these databases. GoldenGate does not depend on the Distributor database, so transaction retention can be set to zero. Because GoldenGate does not depend on the Distributor database, but rather reads the logs directly, the GoldenGate extraction process can process at its customary high speed. For instructions on installing the replication component and creating a Distributor database, see the GoldenGate for Windows and UNIX Administrator Guide.
MANAGESECONDARYTRUNCATIONPOINT - Required TRANLOGOPTIONS parameters control whether or not GoldenGate maintains the secondary truncation point. Use the MANAGESECONDARYTRUNCATIONPOINT option if GoldenGate will not be running concurrently with SQL Server replication. Use the NOMANAGESECONDARYTRUNCATIONPOINT option if GoldenGate will be running concurrently with SQL Server replication; this allows SQL Server replication to manage the secondary truncation point.
Primary Key Requirement - The requirement that all tables to be captured from SQL Server 2005 source databases must have a primary key is a requirement of the Microsoft replication component, which is utilized by GoldenGate as part of the log-based capture process.
Connecting - Use SQL Server Management Studio for SQL Server 2005/2000 to connect to a MS SQL Server 2005 or 2000 database. Use Enterprise Manager only for MS SQL Server 2000.


Prepare Environment Discussion Points


1. Where do you download Oracle GoldenGate software from?
2. What are the roles and responsibilities of the Manager process?

1. edelivery.oracle.com
2. Starting and stopping processes; monitoring processes; reporting lag, errors and events; purging trail files.


GoldenGate Command Interface

GGSCI Starting and Help


Start the command interface from the GoldenGate install directory:
Shell> cd <GoldenGate install location>
Shell> GGSCI
For the Help summary page:
GGSCI> HELP
For Help on a specific command:
GGSCI> HELP <command> <object>
For example:
GGSCI> HELP ADD EXTRACT
Help returns an overview, syntax and examples.

The GoldenGate Software Command Interface (GGSCI) provides on-line help for all commands. The following is an example of the information returned when you enter HELP STATUS EXTRACT:
Use STATUS EXTRACT to determine whether or not Extract groups are running.
Syntax: STATUS EXTRACT <group name> [, TASKS] [, ALLPROCESSES]
<group name> is the name of a group or a wildcard (*) to specify multiple groups.
ALLPROCESSES displays the status of all Extract processes, including tasks.
TASKS displays the status of all Extract tasks.
Examples: STATUS EXTRACT FINANCE, STATUS EXTRACT FIN*


GGSCI Commands
[Table: which GGSCI commands (ADD, ALTER, CLEANUP, DELETE, INFO, KILL, LAG, REFRESH, SEND, START, STATS, STATUS, STOP) apply to which objects (MANAGER, EXTRACT, REPLICAT, ER, EXTTRAIL, RMTTRAIL, TRANDATA, CHECKPOINTTABLE, TRACETABLE), with an X marking each valid command/object combination]

Objects:
Manager, Extract, Replicat - GoldenGate processes.
ER - Multiple Extract and Replicat processes.
EXTTRAIL - Local trail.
RMTTRAIL - Remote trail.
TRANDATA - Transaction data (from transaction logs).
CHECKPOINTTABLE - Checkpoint table (on target database).
TRACETABLE - Oracle trace table (on target database).
Commands:
ADD - Creates an object or enables TRANDATA capture.
ALTER - Changes the attributes of an object.
CLEANUP - Deletes the run history of a process or removes records from a checkpoint table.
DELETE - Deletes an object or disables TRANDATA capture.
INFO - Displays information about an object (status, etc.).
KILL - Forces a process to stop (no restart).
LAG - Displays the lag between when a record is processed by the process and the source record timestamp.
REFRESH - Refreshes Manager parameters (except port number) without stopping Manager.
SEND - Sends commands to a running process.
START - Starts a process.
STATS - Displays statistics for one or more processes.
STATUS - Displays whether a process is running.
STOP - Stops a process gracefully.


GGSCI Commands (cont'd)

Commands by category:

Parameters: SET EDITOR, EDIT PARAMS, VIEW PARAMS
Database: DBLOGIN, ENCRYPT PASSWORD, LIST TABLES
DDL: DUMPDDL [SHOW]
Miscellaneous: !command, CREATE SUBDIRS, FC, HELP, HISTORY, INFO ALL, OBEY, SHELL, SHOW, VERSIONS, VIEW GGSEVT, VIEW REPORT

Parameter commands:
SET EDITOR - Changes the default text editor for the current GGSCI session from Notepad or vi to any ASCII editor.
EDIT PARAMS - Edits a parameter file.
VIEW PARAMS - Displays the contents of a parameter file.

Database commands:
DBLOGIN - Establishes a database connection through GGSCI.
ENCRYPT PASSWORD - Encrypts a database login password.
LIST TABLES - Lists all tables in the database that match a wildcard string.

DDL commands:
DUMPDDL - Saves the GoldenGate DDL history table to file. The SHOW option displays the DDL information in standard output format.

Miscellaneous commands:
!command - Executes a previous GGSCI command without modification.
CREATE SUBDIRS - Creates default directories within the GoldenGate home directory.
FC - Edits a previously issued GGSCI command.
HELP - Displays information about a GGSCI command.
HISTORY - Lists the most recent GGSCI commands issued.
INFO ALL - Displays the status and lag for all GoldenGate processes on a system.
OBEY - Runs a file containing a list of GGSCI commands.
SHELL - Runs shell commands from within GGSCI.
SHOW - Displays the GoldenGate environment.
VERSIONS - Displays OS and database versions.
VIEW GGSEVT - Displays the GoldenGate event/error log.
VIEW REPORT - Displays a process report for Extract or Replicat.


GGSCI Examples
Start a Manager process:
GGSCI> START MGR

Add an Extract group:
GGSCI> ADD EXTRACT myext, TRANLOG, BEGIN NOW

Add a local trail:
GGSCI> ADD EXTTRAIL /ggs/dirdat/rt, EXTRACT myext

Start an Extract group:
GGSCI> START EXTRACT myext

Using Obey Files


You can use an Obey file to perform a reusable sequence of commands. Save the commands in a text file, for example:

START MGR
ADD EXTRACT myext, TRANLOG, BEGIN NOW
ADD EXTTRAIL /ggs/dirdat/rt, EXTRACT myext
START EXTRACT myext

Then use the GGSCI OBEY command to run the file:

GGSCI> OBEY <obey filename>.oby
Note: An Obey file can have any file extension, or none.


Running GoldenGate from the OS Shell


You can also start GoldenGate processes from the OS command shell when running a batch job or initial load, for example:

Shell> cd <GoldenGate install location> Shell> extract paramfile <filepath> reportfile <filepath> [-p <port>] Shell> replicat paramfile <filepath> reportfile <filepath>

This is especially useful for scheduling GoldenGate batch jobs to run during off-peak hours using a command-line-capable scheduler.

Manager must be running when you issue these commands.
<filepath> specifies the fully qualified name of the parameter or report file.
paramfile can be abbreviated to pf.
reportfile can be abbreviated to rf.
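As a sketch of the off-peak scheduling idea, a hypothetical crontab entry using the abbreviated pf/rf options (the install path and file names below are illustrative assumptions, not from the course):

```
# Hypothetical crontab entry: run a batch Extract nightly at 02:00
# from the GoldenGate install directory, using pf/rf abbreviations.
0 2 * * * cd /ggs && ./extract pf dirprm/nightly.prm rf dirrpt/nightly.rpt
```

Any scheduler that can run a shell command works equally well; cron is used here only as a familiar example.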

GoldenGate Commands Discussion Points


1. What is GGSCI?
2. Where can you view the GoldenGate command syntax?
3. What is an Obey file and why would you use one?

1. GoldenGate Software Command Interface.
2. Help or the Reference Guide.
3. A text file containing a sequence of GoldenGate commands; for easy re-use of common command sequences.


Step 2. Change Capture


[Diagram: 1. Prepare the Environment; 2. Change Capture: Source Database -> Transaction Log -> Extract -> Local Trail -> Data Pump -> Remote Trail; 3. Initial Load (various methods); 4. Change Delivery: Replicat -> Target Database]

Change Capture - Extract Overview


Extract can be configured to:
- Capture changed data from database logs
- Distribute data from local trails to remote systems (data pump)
- Capture data directly from source tables for initial data load


Change Capture - Tasks


On the source system:
- Add a primary Extract (reading from the source transaction logs) with an associated parameter file
- Optionally, add a local trail and a data pump Extract (reading from the local trail) with an associated parameter file
- Add a remote trail
- Start the Extract(s)

To configure Extract to capture changes from transaction logs, perform the following steps:
1. Set up a parameter file for Extract with the GGSCI EDIT PARAMS command.
2. Set up an initial Extract checkpoint into the logs with the GGSCI ADD EXTRACT command.
3. Optionally, create a local trail using the GGSCI ADD EXTTRAIL command and a data pump Extract (and parameter file) reading from the local trail.
4. Set up a remote trail using the GGSCI ADD RMTTRAIL command.
5. Start the Server Collector process on the target system, or let Manager start the Server Collector dynamically.
6. Start Extract using the GGSCI START EXTRACT command. For example:
GGSCI> START EXTRACT FINANCE
GGSCI sends this request to the Manager process, which in turn starts Extract. Manager monitors the Extract process and restarts it, when appropriate, if it goes down.
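The steps above can be sketched as one GGSCI session for the simple case without a data pump (the group name and trail path are illustrative):

```
GGSCI> EDIT PARAMS finance
GGSCI> ADD EXTRACT finance, TRANLOG, BEGIN NOW
GGSCI> ADD RMTTRAIL /ggs/dirdat/rt, EXTRACT finance
GGSCI> START EXTRACT finance
```

With a data pump, the ADD EXTTRAIL step is added for the primary Extract and the RMTTRAIL is bound to the pump instead, as shown later in the Data Pumps configuration topic.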
Change Capture - ADD EXTRACT Command
Add the initial Extract checkpoint with the GGSCI command ADD EXTRACT:

ADD EXTRACT <group name> , <data source> , <starting point> [, <processing options>] The components of this command are discussed in subsequent slides.


Change Capture - ADD EXTRACT <data source>


<data source>                        Source (and when used)
SOURCEISTABLE                        Database table (initial data load)
TRANLOG [<bsds name>]                Transaction log (change capture); <bsds name> applies to DB2 z/OS
EXTFILESOURCE <file name>            Extract file (data pump)
EXTTRAILSOURCE <trail name>          Trail (data pump)

SOURCEISTABLE
Creates an Extract task that extracts entire records from the database for an initial load. If SOURCEISTABLE is not specified, ADD EXTRACT creates an online change-synchronization process, and one of the other data source options must be specified. When using SOURCEISTABLE, do not specify service options; task parameters must be specified in the parameter file.

TRANLOG [<bsds name>]
Specifies the transaction log as the data source. Use this option for log-based extraction. TRANLOG requires the BEGIN option. Use the <bsds name> option for DB2 on a z/OS system to specify the BSDS (Bootstrap Data Set) file name of the transaction log. Make certain that the BSDS name you provide is the one for the DB2 instance to which the Extract process is connected; GoldenGate does not perform any validation of the BSDS specification.

EXTFILESOURCE <file name>
Specifies an extract file as the data source. Use this option with a secondary Extract group (data pump) that acts as an intermediary between a primary Extract group and the target system. For <file name>, specify the fully qualified path name of the file, for example c:\ggs\dirdat\extfile.

EXTTRAILSOURCE <trail name>
Specifies a trail as the data source. Use this option with a secondary Extract group (data pump) that acts as an intermediary between a primary Extract group and the target system. For <trail name>, specify the fully qualified path name of the trail, for example c:\ggs\dirdat\aa.


Change Capture - ADD EXTRACT <starting point>


<starting point>                                   Database
BEGIN {NOW | <datetime>}                           Any
EXTSEQNO <seqno>, EXTRBA <relative byte address>   Oracle, SQL/MX
EXTRBA <relative byte address>                     DB2 z/OS
EOF | LSN <value>                                  DB2 LUW
LSN <value>                                        SQL Server, Ingres
LOGNUM <log number>, LOGPOS <byte offset>          c-tree
PAGE <data page>, ROW <row>                        Sybase

The starting point is indicated by one of the following:

BEGIN specifies when Extract begins processing.
- For all databases except DB2 LUW, NOW specifies the time at which the ADD EXTRACT command is issued.
- For DB2 LUW, NOW specifies the time at which START EXTRACT takes effect.
- <datetime> specifies the start date and time in the format yyyy-mm-dd [hh:mi:[ss[.cccccc]]].

The remaining options specify the position within the log or trail at which to begin processing:

EXTSEQNO <seqno>, EXTRBA <relative byte address>
Valid for a primary Extract for Oracle and NonStop SQL/MX, and for a data pump Extract. Specifies one of the following:
- the sequence number of an Oracle redo log and the RBA within that log at which to begin capturing data.
- the NonStop SQL/MX TMF audit trail sequence number and relative byte address within that file at which to begin capturing data. Together these specify the location in the TMF Master Audit Trail (MAT).
- the file in a trail in which to begin capturing data (for a data pump). Specify the sequence number, but not any zeroes used for padding. For example, if the trail file is c:\ggs\dirdat\aa000026, you would specify EXTSEQNO 26. By default, processing begins at the beginning of a trail unless this option is used. Contact GoldenGate Technical Support before using this option.

EXTRBA <relative byte address>
Valid for DB2 on z/OS. Specifies the relative byte address within a transaction log at which to begin capturing data.


EOF | LSN <value>
Valid for DB2 LUW. Specifies a start position in the transaction logs when Extract starts.
- EOF configures processing to start at the active LSN in the log files. The active LSN is the position at the end of the log files to which the next record will be written. Any active transactions will not be captured.
- LSN <value> configures processing to start at an exact LSN if a valid log record exists there. If one does not exist, Extract will abend. Note that, although Extract might position to a given LSN, that LSN might not necessarily be the first one that Extract will process; there are numerous record types in the log files that Extract ignores, such as DB2 internal log records. Extract reports the actual starting LSN to the Extract report file.

LSN <value>
Valid for SQL Server or Ingres. Specifies the LSN in a SQL Server or Ingres transaction log at which to start capturing data. The LSN specified should exist in a log backup or the online log.
- For SQL Server, an LSN is composed of three hexadecimal numbers separated by colons. The first is the virtual log file number, the second is the segment number within the virtual log, and the third is the entry number.
- For Ingres, an LSN is two 4-byte unsigned integers separated by a colon. For example, to specify an LSN of 1206396546,43927 (as viewed in an Ingres utility), you would enter 1206396546:43927.
- An alias for this option is EXTLSN.

LOGNUM <log number>, LOGPOS <byte offset>
Valid for c-tree. Specifies the location in a c-tree transaction log at which to start capturing data. <log number> is the number of the c-tree log file. <byte offset> is the relative position from the beginning of the file (0-based).

PAGE <data page>, ROW <row>
Valid for Sybase. Specifies a data page and row that together define a start position in a Sybase transaction log.
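Two hedged sketches of these starting points in use (the group names are illustrative; the EXTSEQNO value follows the aa000026 example in the text):

```
-- Data pump: begin at trail file aa000026 (specify 26, without padding zeroes)
ADD EXTRACT pump1, EXTTRAILSOURCE c:\ggs\dirdat\aa, EXTSEQNO 26, EXTRBA 0

-- DB2 LUW: begin at the active LSN (end of the logs)
ADD EXTRACT db2fin, TRANLOG, EOF
```

Consult the Reference Guide before positioning into the middle of a trail, as the text above advises.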


Change Capture - ADD EXTRACT <processing options>


<processing options>      Specifies
DESC <description>        Description of the Extract group
THREADS <n>               Number of redo threads when extracting from an Oracle RAC clustered database
PARAMS <file name>        Alternative parameter file name (fully qualified)
REPORT <file name>        Alternative report file name (fully qualified)
PASSTHRU                  Used only in data pumps; passes the data through without any transformation
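For instance, a hedged sketch combining two of these options (the group name, thread count, and description are illustrative assumptions):

```
-- Capture from a 3-node Oracle RAC cluster, with a group description
ADD EXTRACT finrac, TRANLOG, BEGIN NOW, THREADS 3, DESC "RAC finance capture"
```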

Change Capture - ADD EXTRACT Examples


Create an Extract group named finance that extracts database changes from the transaction logs, starting with records generated at the time the group is added:
ADD EXTRACT finance, TRANLOG, BEGIN NOW

Create an Extract group named finance that extracts database changes from the transaction logs, starting with records generated at 8:00 on January 31, 2006:
ADD EXTRACT finance, TRANLOG, BEGIN 2006-01-31 08:00

Create a data-pump Extract group named finance that reads from the GoldenGate trail c:\ggs\dirdat\lt:
ADD EXTRACT finance, EXTTRAILSOURCE c:\ggs\dirdat\lt

Create an initial-load Extract named load:
ADD EXTRACT load, SOURCEISTABLE


Change Capture - Edit Extract Parameters


Create or edit an Extract parameter file with the GGSCI command:
EDIT PARAMS <group name>

For example:

EXTRACT ODS
USERID GoldenUser, PASSWORD password
RMTHOST serverx, MGRPORT 7809
RMTTRAIL /ggs/dirdat/rt
TABLE SALES.ORDERS;
TABLE SALES.INVENTORY;

Change Capture - Add a Local/Remote Trail


Add a local or remote trail with the GGSCI command:
ADD EXTTRAIL | RMTTRAIL <trail name>, EXTRACT <group name> [, MEGABYTES <n>]

If using a data pump:
- The primary Extract needs a local trail (EXTTRAIL)
- The data pump Extract needs a remote trail (RMTTRAIL)

Examples:
ADD EXTTRAIL c:\ggs\dirdat\aa, EXTRACT finance, MEGABYTES 10
ADD RMTTRAIL c:\ggs\dirdat\bb, EXTRACT parts, MEGABYTES 5

<trail name> - The fully qualified path name of the trail. The trail name itself can contain only two characters; GoldenGate appends a six-digit sequence number to this name whenever a new file is created. For example, a trail named /ggs/dirdat/tr would have files named /ggs/dirdat/tr000001, /ggs/dirdat/tr000002, and so forth.
<group name> - The name of the Extract group to which the trail is bound. Only one Extract process can write data to a trail.
MEGABYTES <n> - The maximum size, in megabytes, of a file in the trail. The default is 10.
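The two-character name plus six-digit suffix scheme can be illustrated with a quick shell sketch (purely illustrative; GoldenGate itself generates these names):

```shell
# Compose the file name GoldenGate would generate for sequence 26
# of a trail named "tr": the prefix plus a zero-padded six-digit number.
trail=tr
seq=26
printf '%s%06d\n' "$trail" "$seq"   # prints tr000026
```

This is also why, as noted earlier, EXTSEQNO takes the bare sequence number (26) rather than the padded form (000026).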


Change Capture - Start Extract


Start an Extract process with the GGSCI command:
START EXTRACT <group name>

If the output trail is remote, this normally triggers the target Manager process to start a Server Collector process with default parameters. Users can start a Server Collector statically and modify its parameters, though this is rarely done; see the Oracle GoldenGate Reference Guide.

Change Capture - Primary Extract Configuration (Oracle)


[Diagram: Source Database -> Transaction Log -> Extract -> Trail (/ggs/dirdat/rt000000, /ggs/dirdat/rt000001)]

GGSCI> EDIT PARAMS ODS

EXTRACT ODS
USERID GoldenUser, PASSWORD password
EXTTRAIL /ggs/dirdat/rt
TABLE SALES.ORDERS;
TABLE SALES.INVENTORY;

GGSCI> ADD EXTRACT ODS, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL /ggs/dirdat/rt, EXTRACT ODS
GGSCI> START EXTRACT ODS

Sample Extract parameter file:
- USERID <login>, PASSWORD <pw> supplies database credentials (SOURCEDB <dsn> is not required for Oracle).
- EXTTRAIL <trail id> specifies the local GoldenGate trail. (When writing directly to a remote system, use RMTHOST <hostname> with MGRPORT <port> to identify the target system and its Manager port, and RMTTRAIL <trail id> to specify the trail on the target.)
- TABLE <table name> specifies a source table for which activity will be extracted.

ADD EXTRACT command:
- TRANLOG specifies the transaction log as the data source.
- BEGIN NOW specifies that change capture is to begin immediately.

ADD EXTTRAIL identifies the local GoldenGate trail and the name of the Extract group (use ADD RMTTRAIL for a trail on the target system).

Change Capture - Primary Extract Configuration (DB2 and SQL Server)


[Diagram: Source Database -> Transaction Log -> Extract -> Trail (/ggs/dirdat/rt000000, /ggs/dirdat/rt000001)]

GGSCI> EDIT PARAMS ODS

EXTRACT ODS
SOURCEDB dsn, USERID login, PASSWORD pw
EXTTRAIL /ggs/dirdat/rt
TABLE SALES.ORDERS;
TABLE SALES.INVENTORY;

GGSCI> ADD EXTRACT ODS, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/rt, EXTRACT ODS
GGSCI> START EXTRACT ODS

Sample Extract parameter file:
- SOURCEDB <dsn> supplies a data source name as part of the connection information.
- USERID <login>, PASSWORD <pw> supplies database credentials.
- EXTTRAIL <trail id> specifies the local GoldenGate trail. (When writing directly to a remote system, use RMTHOST <hostname> with MGRPORT <port> and RMTTRAIL <trail id>.)
- TABLE <table name> specifies a source table for which activity will be extracted.

ADD EXTRACT command:
- TRANLOG specifies the transaction log as the data source.
- BEGIN NOW specifies that change capture is to begin immediately.

ADD EXTTRAIL identifies the GoldenGate trail and the name of the Extract group.


Data Pumps - Overview


Data is stored in a local trail on the source system. A second Extract, the data pump:
- Reads this trail and sends it to one or more targets
- Manipulates the data or passes it through without change

Reasons for using a data pump:
- A safeguard against network and target failures
- To break complex data filtering and transformation into phases
- To consolidate data from many sources
- To synchronize one source with multiple targets

For most business cases, it is best practice to use a data pump. Some reasons for using a data pump include the following:

Protection against network and target failures: In a basic GoldenGate configuration, with only a trail on the target system, there is nowhere on the source system to store data that Extract continuously extracts into memory. If the network or the target system becomes unavailable, the primary Extract could run out of memory and abend. However, with a trail and data pump on the source system, captured data can be moved to disk, preventing the abend. When connectivity is restored, the data pump extracts the data from the source trail and sends it to the target system(s).

Implementing several phases of data filtering or transformation: When using complex filtering or data transformation configurations, you can configure a data pump to perform the first transformation either on the source system or on the target system, and then use another data pump or the Replicat group to perform the second transformation.

Consolidating data from many sources to a central target: When synchronizing multiple source databases with a central target database, you can store extracted data on each source system and use data pumps on each system to send the data to a trail on the target system. Dividing the storage load between the source and target systems reduces the need for massive amounts of space on the target system to accommodate data arriving from multiple sources.

Synchronizing one source with multiple targets: When sending data to multiple target systems, you can configure data pumps on the source system for each one. If network connectivity to any of the targets fails, data can still be sent to the other targets.


Data Pumps - One to Many Trails

[Diagram: Primary Extract -> local Trail -> Data Pump -> multiple Trails]

A data pump can be set up to duplicate or selectively route the data to multiple trails. However, if the trails are on multiple target systems and the communication to one of the systems goes down, the Extract may exhaust its retries and shut down, causing the updates to all targets to stop.
Data Pumps - One to Many Target Systems

[Diagram: Primary Extract -> local Trail -> Data Pump 1, Data Pump 2, Data Pump 3 -> a Trail on each target system]

GoldenGate supports synchronization of a source database to any number of target systems. For this configuration, GoldenGate recommends using data pump Extract groups to ensure that if network connectivity to any of the targets fails, data still can be sent to the other targets.


Change Capture Discussion Points


1. What does Extract do?
2. Where does Extract capture transactional changes from?
3. What parameters tell Extract where to send data?
4. What commands are used to create and start an Extract group?
5. What command option sets how large a GoldenGate trail file may get before it rolls to the next file?

1. Captures incremental changes from database transaction logs. It can also capture source data from the tables themselves or from other GoldenGate trails. Writes the captured data to GoldenGate trails or files.
2. From transaction logs (or archive logs), except for Teradata.
3. EXTTRAIL or EXTFILE; RMTHOST with RMTTRAIL, RMTFILE, or RMTTASK.
4. EDIT PARAMS; ADD EXTRACT; ADD {EXTTRAIL | RMTTRAIL | EXTFILE | RMTFILE}; START EXTRACT.
5. The MEGABYTES <n> option of the ADD EXTTRAIL or ADD RMTTRAIL commands.
Data Pumps - Configuration
Primary Extract parameters specify a local trail:

EXTRACT <primary>
<login for your database>
EXTTRAIL ./dirdat/<trailid>
<table statements as required>;

Data pumps are often configured for pass-through:

EXTRACT <datapump>
PASSTHRU
RMTHOST <target>, MGRPORT <port>
RMTTRAIL ./dirdat/<rmttrail>
<table statements as required>;

Add a data pump (its source is the local trail from the primary Extract):

ADD EXTRACT <datapump>, EXTTRAILSOURCE ./dirdat/<trailid>


The PASSTHRU parameter is used on a data pump if you do not need to do any data transformation or user exit processing. Add the data pump Extract with the local trail as its source and the remote trail as its destination.
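Filling in the template above with concrete names gives a hedged end-to-end sketch (the group names, host, port, and trail identifiers are illustrative assumptions, not from the course):

```
-- Primary Extract parameter file (EDIT PARAMS ext1)
EXTRACT ext1
USERID GoldenUser, PASSWORD password
EXTTRAIL ./dirdat/aa
TABLE SALES.ORDERS;

-- Data pump parameter file (EDIT PARAMS pump1)
EXTRACT pump1
PASSTHRU
RMTHOST targethost, MGRPORT 7809
RMTTRAIL ./dirdat/bb
TABLE SALES.ORDERS;

-- GGSCI commands binding the pieces together
ADD EXTRACT ext1, TRANLOG, BEGIN NOW
ADD EXTTRAIL ./dirdat/aa, EXTRACT ext1
ADD EXTRACT pump1, EXTTRAILSOURCE ./dirdat/aa
ADD RMTTRAIL ./dirdat/bb, EXTRACT pump1
START EXTRACT ext1
START EXTRACT pump1
```

Note how each trail is bound to the Extract that writes it: the local trail to the primary, the remote trail to the pump.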
Data Pumps Discussion Points
1. What is a data pump?
2. What is the advantage of using a data pump?
3. Why might you use multiple data pumps for one source trail?
4. What parameter is used to identify the remote target system?
5. What other parameter is commonly used on data pumps?
6. Who can draw a flow chart of change capture and delivery using a data pump?

1. A secondary Extract process that reads from a local trail and distributes that data to a remote system.
2. It allows a local trail on the source system, which is useful for recovery if the network or target system fails.
3. To send to multiple target systems (so that if one goes down, they don't all stop); to separate out different tables; for parallel processing (faster).
4. RMTHOST identifies the name or IP address of the remote system and the port that is being used.
5. The PASSTHRU parameter (unless you need to perform data transformation or user exit processing).
6. Review the Architecture slide for change capture and delivery using a data pump.


Step 3. Initial Load


[Diagram: 1. Prepare the Environment; 2. Change Capture: Source Database -> Transaction Log -> Extract -> Local Trail -> Data Pump -> Remote Trail; 3. Initial Load (various methods); 4. Change Delivery: Replicat -> Target Database]

Initial Load
Can use GoldenGate TDM (Transactional Data Management) methods:

GoldenGate method          Extract writes to            Load method
File to Replicat           Trail (GoldenGate format)    Replicat via SQL
File to database utility   Formatted text file          Database utility
Direct load                Replicat (directly)          Replicat via SQL
Direct bulk load           Replicat (directly)          Replicat via SQL*Loader API

Or database-specific methods like:
- Backup/Restore
- Export/Import
- SQL scripts
- Break mirror
- Transportable tablespaces (Oracle)

Notes:
- Run a test initial load early on for timing and sizing.
- Run the actual initial load after starting change capture on the source.

An initial load takes a copy of the entire source data set, transforms it if necessary, and applies it to the target tables so that the movement of transaction data begins from a synchronized state. The first time that you start change synchronization will be during the initial load process. Change synchronization keeps track of ongoing transactional changes while the load is being applied.


Break mirror - Break from database mirroring.
Transportable tablespaces (Oracle) - Allows whole tablespaces to be copied between databases in the time it takes to copy the datafiles.

Initial Load: Resource Limitations


- How close are your systems?
- How large are your tables?
- What are the outage time constraints?
- How much disk space do you have to store changes?

Initial Load: Advantages of GoldenGate Methods


- Work across heterogeneous database types and platforms
- No application downtime required
- Read directly from source tables, without locking tables
- Fetch data in arrays to speed performance
- Parallel processing using WHERE clauses or the RANGE function
- Distribute data over multiple network controllers
- Flexible load alternatives, including native bulk load utilities
- GoldenGate change delivery can handle collisions with initial load

Array fetch: GoldenGate 10.0 and later fetches 1000 rows at a time (except for LOBs); you can control this with DBOPTIONS FETCHBATCHSIZE.


Initial Load: File to Replicat

[Diagram: Source Database -> Extract -> Files -> Replicat -> Target Database, under Manager control]

ADD EXTRACT <name>

Extract parameters:
SOURCEISTABLE
RMTTRAIL <name>

File to Replicat: Captures data directly from the source tables and writes to an Extract file for Replicat to process.

Extract parameters:
- SOURCEISTABLE instructs Extract to read the source tables directly rather than the transaction log.
- To format the output for processing by Replicat, use RMTTRAIL. Using Replicat provides the ability to perform additional data transformation prior to loading the data.

Execution: You can start Extract with the GGSCI command:
START EXTRACT <name>
Or from the command shell with the syntax:
extract paramfile <command file> [ reportfile <out file> ]
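A hedged sketch of an initial-load Extract parameter file for this method (the group name, host, port, trail, and table are illustrative assumptions):

```
-- Initial-load Extract (EDIT PARAMS load1): reads tables directly,
-- writes Replicat-formatted output to the target system
EXTRACT load1
SOURCEISTABLE
USERID GoldenUser, PASSWORD password
RMTHOST targethost, MGRPORT 7809
RMTTRAIL ./dirdat/il
TABLE SALES.ORDERS;
```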


Initial Load: File to Database Utility


[Diagram: Source Database -> Extract -> Files -> database utility (SQL*Loader, BCP, SSIS) -> Target Database, under Manager control]

ADD EXTRACT <name>

Extract parameters:
SOURCEISTABLE
RMTFILE <name>
FORMATASCII, BCP or SQLLOADER

File to Database Utility: Captures data directly from the source tables and outputs the data to files for processing by native bulk loaders.

Extract parameters:
- SOURCEISTABLE instructs Extract to read the source tables directly rather than the transaction log.
- To format the output for native bulk utilities, such as SSIS, BCP, or SQL*Loader, use RMTFILE and FORMATASCII with appropriate options, like BCP or SQLLOADER.

Execution: You can start Extract with the GGSCI command:
START EXTRACT <name>
Or from the command shell with the syntax:
extract paramfile <command file> [ reportfile <out file> ]
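A hedged sketch of a parameter file for this method, formatting output for SQL*Loader (the group name, host, port, and file names are illustrative assumptions):

```
-- Initial-load Extract writing an ASCII file for SQL*Loader to load
EXTRACT load2
SOURCEISTABLE
USERID GoldenUser, PASSWORD password
RMTHOST targethost, MGRPORT 7809
FORMATASCII, SQLLOADER
RMTFILE ./dirdat/orders.dat
TABLE SALES.ORDERS;
```

Substituting BCP for SQLLOADER would format the file for SQL Server's bcp utility instead.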


Initial Load: Direct Load

[Diagram: Source Database -> Extract -> (network) -> Replicat -> Target Database, with Manager on both systems]

ADD EXTRACT <name>, SOURCEISTABLE

Extract parameters:
RMTTASK REPLICAT, GROUP <name>

ADD REPLICAT <name>, SPECIALRUN

Direct Load: Captures data directly from the source tables and sends the data in large blocks to the Replicat process. Using Replicat provides the ability to perform additional data transformation prior to loading the data.

Extract parameters:
- Here you have RMTTASK (instead of RMTFILE in the Queue Data method). RMTTASK instructs the Manager process on the target system to start a Replicat process with the group name specified in the GROUP clause.

Execution:
- When you add Extract and Replicat: SOURCEISTABLE instructs Extract to read the source tables directly rather than the transaction log; SPECIALRUN on Replicat specifies one-time batch processing in which checkpoints are not maintained.
- The initial data load is then started using the GGSCI command START EXTRACT. The Replicat process is started automatically by the Manager process. The port used by the Replicat process may be controlled using the DYNAMICPORTLIST Manager parameter.
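Putting the pieces together, a hedged sketch of a direct-load configuration (group names, host, port, and tables are illustrative assumptions):

```
-- Source: initial-load Extract that asks the target Manager to start REPLOAD
ADD EXTRACT EXTLOAD, SOURCEISTABLE

-- Extract parameter file (EDIT PARAMS EXTLOAD)
EXTRACT EXTLOAD
USERID GoldenUser, PASSWORD password
RMTHOST targethost, MGRPORT 7809
RMTTASK REPLICAT, GROUP REPLOAD
TABLE SALES.ORDERS;

-- Target: one-time Replicat task (no checkpoints)
ADD REPLICAT REPLOAD, SPECIALRUN

-- Replicat parameter file (EDIT PARAMS REPLOAD)
REPLICAT REPLOAD
USERID ggsuser, PASSWORD ggspass
ASSUMETARGETDEFS
MAP SALES.ORDERS, TARGET SALES.ORDERS;

-- Start the load; Manager starts REPLOAD automatically
START EXTRACT EXTLOAD
```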


Initial Load: Direct Bulk Load (to Oracle)

[Diagram: Source Database -> Extract -> Replicat -> SQL*Loader API -> Oracle Target, with Manager on both systems]

ADD EXTRACT <name>, SOURCEISTABLE

Extract parameters:
RMTTASK REPLICAT, GROUP <name>

ADD REPLICAT <name>, SPECIALRUN

Replicat parameters:
BULKLOAD

Direct Bulk Load: The direct bulk load method is the fastest GoldenGate initial-load method. It sends data in large blocks to the Replicat process, which communicates directly with SQL*Loader through an API. The Manager process dynamically starts the Replicat process. Using Replicat provides the ability to perform additional data transformation prior to loading the data.

Extract parameters:
- Here you have RMTTASK (instead of RMTFILE in the Queue Data method). RMTTASK instructs the Manager process on the target system to start a Replicat process with the group name specified in the GROUP clause.

Replicat parameters:
- The BULKLOAD parameter distinguishes this from the direct load method.

Execution:
- When you add Extract and Replicat: SOURCEISTABLE instructs Extract to read the source tables directly rather than the transaction log; SPECIALRUN on Replicat specifies one-time batch processing in which checkpoints are not maintained.
- The initial data load is then started using the GGSCI command START EXTRACT. The Replicat process is started automatically by the Manager process. The port used by the Replicat process may be controlled using the DYNAMICPORTLIST Manager parameter.
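The only change from the direct-load configuration is on the Replicat side; a hedged fragment (names are illustrative assumptions):

```
-- Target: one-time Replicat task loading through the SQL*Loader API
ADD REPLICAT REPBULK, SPECIALRUN

-- Replicat parameter file (EDIT PARAMS REPBULK)
REPLICAT REPBULK
USERID ggsuser, PASSWORD ggspass
BULKLOAD
ASSUMETARGETDEFS
MAP SALES.ORDERS, TARGET SALES.ORDERS;
```

The Extract side is configured exactly as for direct load, with SOURCEISTABLE and RMTTASK.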


Initial Load Discussion Points


1. What are the GoldenGate methods for initial load?
2. What GoldenGate command arguments specify that Extract and Replicat run as batch tasks (e.g., for initial load)?
3. What parameter manages conflicts between initial load and change replication? Where is it specified?

1. File to Replicat (Extract writes to a file for Replicat to load via SQL); File to database utility (Extract writes to ASCII files formatted for database utilities to load); Direct load (Extract writes directly to Replicat, which loads via SQL); Direct bulk load (Oracle only; Extract writes directly to Replicat, which loads through the SQL*Loader API).
2. ADD EXTRACT with SOURCEISTABLE; ADD REPLICAT with SPECIALRUN.
3. HANDLECOLLISIONS, in the Replicat parameter file for change delivery; turn it off after the initial-load data has been processed.


Step 4. Change Delivery


[Diagram: 1. Prepare the Environment; 2. Change Capture: Source Database -> Transaction Log -> Extract -> Local Trail -> Data Pump -> Remote Trail; 3. Initial Load (various methods); 4. Change Delivery: Replicat -> Target Database]

Change Delivery - Replicat Overview


Replicat can:
- Read data out of GoldenGate trails
- Perform data filtering (table, row, operation)
- Perform data transformation
- Perform database operations just as your application performed them

Overview: GoldenGate trails are temporary queues for the Replicat process. Each record header in the trail provides information about the database change record. Replicat reads these trail files sequentially and processes the inserts, updates, and deletes that meet your criteria. Alternatively, you can filter out the rows you do not wish to deliver, as well as perform data transformation prior to applying the data. Replicat supports a high volume of data replication activity; network activity is block-based, not record-at-a-time, and Replicat uses native calls to the database for optimal performance. You can configure multiple Replicat processes for increased throughput. When replicating, Replicat preserves the boundaries of each transaction so that the target database has the same degree of integrity as the source. Small transactions can be grouped into larger transactions to improve performance. Replicat uses a checkpointing scheme so changes are processed exactly once. After a graceful stop or a failure, processing can be restarted without repetition or loss of continuity.
Change Delivery - Tasks
On the target system:
1. Create a checkpoint table in the target database (best practice): DBLOGIN, then ADD CHECKPOINTTABLE
2. Create a parameter file for Replicat: EDIT PARAMS
3. Add your initial Replicat checkpoint into the GoldenGate trails: ADD REPLICAT
4. Start the Replicat process: START REPLICAT

Replicat reads the GoldenGate trail and applies changes to the target database. Like Extract, Replicat uses checkpoints to store the current read and write position and is added and started using the processing group name.


Change Delivery - Sample Oracle Configuration

[Diagram: Trail -> Replicat -> Target Database]

GGSCI> DBLOGIN SOURCEDB mydb USERID login PASSWORD pw
GGSCI> ADD CHECKPOINTTABLE ggs.checkpt
GGSCI> EDIT PARAMS REPORD

REPLICAT REPORD
TARGETDB dsn, USERID ggsuser, PASSWORD ggspass
ASSUMETARGETDEFS
DISCARDFILE /ggs/dirrpt/REPORD.dsc, APPEND
MAP SALES.ORDERS, TARGET SALES.ORDERS;
MAP SALES.INVENTORY, TARGET SALES.INVENTORY;

GGSCI> ADD REPLICAT REPORD, EXTTRAIL /ggs/dirdat/rt
GGSCI> START REPLICAT REPORD

In this example:
- DBLOGIN with USERID and PASSWORD logs the user into the database in order to add the checkpoint table.
- Replicat parameters: TARGETDB identifies the data source name (not required for Oracle); USERID and PASSWORD provide the credentials to access the database; ASSUMETARGETDEFS is used when the source and target tables have the same data definition with identical columns; DISCARDFILE creates a log file to receive records that cannot be processed; MAP establishes the relationship between a source table and the target table.
- ADD REPLICAT names the Replicat group REPORD and establishes the trail it reads (EXTTRAIL) with the two-character identifier rt residing in directory dirdat.
Change Delivery - Avoiding Collisions with Initial Load
If the source database remains active during an initial load, you must either avoid or handle any collisions when updating the target with interim changes.
Avoiding collisions: If you can back up, restore, or clone the database at a point in time, you can avoid collisions by starting Replicat to read trail records from a specific transaction Commit Sequence Number (CSN):
START REPLICAT <group> {ATCSN | AFTERCSN} <csn>


Avoiding collisions using the CSN:
1. Use a standby copy of the source database for the initial load.
2. After the initial load completes, note the highest CSN of the standby database. The CSN varies by database; for Oracle it is the SCN.
3. Start Replicat to read from the next CSN:
START REPLICAT <group> {ATCSN <csn> | AFTERCSN <csn> | SKIPTRANSACTION}
ATCSN <csn> causes Replicat to skip transactions in the trail until it finds a transaction that contains the specified CSN. <csn> must be in the format that is native to the database.
AFTERCSN <csn> causes Replicat to skip transactions in the trail until it finds the first transaction after the one that contains the specified CSN.
SKIPTRANSACTION causes Replicat to skip the first transaction in the trail upon startup. All operations in that first transaction are excluded.
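The three positioning options above can be modeled as filters over the committed-transaction stream. This is a hedged sketch: the function name is hypothetical and CSNs here are plain integers, whereas real CSNs are database-native values.

```python
def replicat_start(transactions, csn, mode):
    """Toy model of START REPLICAT positioning. Transactions are dicts
    in commit order, each tagged with its CSN. ATCSN begins with the
    transaction containing the CSN; AFTERCSN begins with the first
    transaction after it; SKIPTRANSACTION drops only the first one."""
    if mode == "ATCSN":
        return [t for t in transactions if t["csn"] >= csn]
    if mode == "AFTERCSN":
        return [t for t in transactions if t["csn"] > csn]
    if mode == "SKIPTRANSACTION":
        return transactions[1:]
    raise ValueError("unknown mode: " + mode)
```

For example, starting ATCSN 150 applies the transaction committed at CSN 150 and everything after it, while AFTERCSN 150 begins with the next transaction.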

Change Delivery - Handling Collisions with Initial Load


If you cannot avoid collisions by the prior method, you must handle them. Use the Replicat HANDLECOLLISIONS parameter:
- When Replicat encounters a duplicate-record error on an insert, it writes the change record over the initial data-load record
- When Replicat encounters a missing-record error on an update or delete, the change record is discarded

Handling Collisions using HANDLECOLLISIONS Parameter HANDLECOLLISIONS processing requires that each target table have a primary key or unique index. If you cannot create a temporary primary key or unique index through your application, use the KEYCOLS argument of the TABLE or MAP parameter to designate columns as a substitute key. Otherwise, the source database must be quiesced for the initial load.
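The resolution rules above reduce to a small amount of logic per operation. The sketch below is illustrative only (the target is modeled as a dict keyed by primary key; operation tuples are hypothetical, not GoldenGate structures):

```python
def apply_with_handlecollisions(target, op):
    """Toy model of HANDLECOLLISIONS. target maps primary key -> row;
    op is (kind, key, row) with kind in insert/update/delete.
    Duplicate inserts overwrite the initially loaded row; updates and
    deletes of missing rows are silently discarded."""
    kind, key, row = op
    if kind == "insert":
        # Duplicate-record error -> write the change over the load record.
        target[key] = dict(row)
    elif kind == "update":
        # Missing-record error -> discard the change record.
        if key in target:
            target[key].update(row)
    elif kind == "delete":
        # Ignore the delete if the row is already absent.
        target.pop(key, None)
```

Note that this is exactly why a primary key or unique index is required: without one, a "duplicate" cannot be detected and resolved.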


Note: Once all of the change data generated during the load has been replicated, turn off HANDLECOLLISIONS:
GGSCI> SEND REPLICAT <group> NOHANDLECOLLISIONS
GGSCI> EDIT PARAMS <group>   (to remove the parameter)

Change Delivery Discussion Points


1. What does Replicat do?
2. When is ASSUMETARGETDEFS valid?
3. How does Replicat know the layout of the source tables when source and target schemas differ?
4. What commands are used to create and start a Replicat group?
5. What GGSCI command creates a GoldenGate checkpoint table on the target database?
6. What is the purpose of the DISCARDFILE?
7. Who can draw a flow chart of basic change capture and delivery?

1. Reads change data from GoldenGate trails and applies it to a target database via SQL commands.
2. When the source and target table structures (column order, data type, and length) are identical.
3. It uses the source definitions created by DEFGEN.
4. ADD CHECKPOINTTABLE (optional), EDIT PARAMS, ADD REPLICAT, START REPLICAT.
5. ADD CHECKPOINTTABLE (you must be logged into the database).
6. It identifies operations that could not be processed by Replicat.
7. Review the Architecture slide for change capture and delivery (without data pump).


Extract Trails and Files

Extract Trails and Files


Introduction GoldenGate data format Alternative trail formats Viewing in Logdump Reversing the sequence

Extract Trails and Files - Overview


Extract writes data to any of:
- Remote trail (RMTTRAIL)
- Remote file (RMTFILE)
- Local trail (EXTTRAIL)
- Local file (EXTFILE)
Extract trails and files are unstructured, with variable-length records. I/O is performed using large block writes.
Extract writes checkpoints for trails during change capture:
- Guarantees no data is lost during restart
- Multiple Replicat processes may process the same trail
Extract does not write checkpoints for files.

Trails can reside on any platform that GoldenGate supports.


Extract Trails and Files - Distribution


Extract can write:
- To local trails, then distribute over IP with a data pump to remote trails
- To multiple trails:
  - For distribution to multiple systems or disk storage devices
  - For parallel processing by downstream processes

Trails and files can be transported online using TCP/IP or sent in batch using any file transfer method


When transporting trails via TCP/IP, a Server Collector process on the target platform collects, writes, and checkpoints blocks of records in one or more extract files.
Extract Trails and Files - Contents
Each record in the trail contains an operation that has been committed in the source database. Transactions are output in commit order. Operations in a transaction are grouped together, in the order in which they were applied.
By default, only the primary key and changed columns are recorded

Flags indicate the first and last records in each transaction


Extract Trails and Files - Cleanup


Trail files can be purged once consumed. The temporary storage requirement is small if processes keep pace. Configure Manager to purge used trail data (best practice).

Trail cleanup
If one Replicat is configured to process the trail, you can instruct the Replicat to purge the data once it has been consumed. As long as Replicat remains current, your temporary storage requirements for trails can be very low. If multiple Replicat processes are configured against a single trail, you can instruct Manager to purge trail data as soon as all checkpoints have been resolved. As long as replication processes keep pace, temporary storage requirements can be kept quite low.


GoldenGate Data Format

GoldenGate Data Format


By default, trails are formatted in the GoldenGate Data Format. Each trail file has a trail file header and trail records.

GoldenGate Trails
Trail files are unstructured files containing variable-length records, written in large blocks for best performance.
Checkpoints
Both Extract and Replicat maintain checkpoints into the trails. Checkpoints provide persistent processing whenever a failure occurs. Each process resumes where the last checkpoint was saved, guaranteeing no data is lost. One Extract can write to one or many trails. Each trail can then be processed by one or many Replicat processes.
GoldenGate Data Format - File Header
Each trail file has a file header that contains:
- Trail file information: compatibility level, character set, creation time, file sequence number, file size
- First and last record information: timestamp, commit sequence number (CSN)
- Extract information: GoldenGate version, group name, host name, hardware type, OS type and version, DB type, version, and character set


GoldenGate Data Format - Compatibility Level


Identifies the trail file format by GoldenGate <major>.<minor> version numbers. Allows customers to use different versions of GoldenGate Extract, trail files, and Replicat together. Set in the Extract EXTFILE, EXTTRAIL, RMTFILE, or RMTTRAIL parameter; for example:
RMTTRAIL /ggs/dirdat/ex, FORMAT RELEASE 10.0

The input and output trails of a data pump must have the same compatibility level

FORMAT RELEASE <major>.<minor> Specifies the version of the trail to which this Extract will write. FORMAT is a required keyword. RELEASE specifies a GoldenGate release version. <major> is the major version number and <minor> is the minor version number. Valid values are 7 through the current GoldenGate version number. The release version is programmatically mapped back to the appropriate trail format compatibility level. The default is the current version of the process that writes to this trail. Note: To ensure that the correct file version is applied, it is best practice to use RELEASE to match the trail to the version of the Extract or Replicat that will be reading it, unless the version that you downloaded instructs you otherwise. If full audit recovery (FAR) mode is not enabled for Extract, the file version number automatically defaults to 9 (GoldenGate version 9.x). FAR is controlled by the RECOVERYOPTIONS parameter.


GoldenGate Data Format - Commit Sequence Number (CSN)


Identifies the sequence in which transactions were committed. It is more efficient for deciding which transaction completed first than using heterogeneous database-supplied transaction identifiers. The CSN is based on various database identifiers; for example: for Oracle, the system change number (SCN); for Teradata, the sequence ID.

Each database management system generates some kind of unique serial number of its own at the completion of each transaction, which uniquely identifies that transaction. A CSN captures this same identifying information and represents it internally as a series of bytes. A comparison of any two CSN numbers, each of which is bound to a transaction commit record in the same log stream, reliably indicates the order in which the two transactions completed. However, because the CSN is processed in a platform-independent manner, it supports faster, more efficient heterogeneous replication than using the database-supplied identifier. All database platforms except Oracle, DB2 LUW, and DB2 z/OS have fixed-length CSNs, which are padded with leading zeroes as required to fill the fixed length. CSNs that contain multiple fields can be padded within each field, such as the Sybase CSN. CSN values for different databases: Ingres <LSN-high>:<LSN-low> Where: <LSN-high> is the newest log file in the range. Up to 4 bytes padded with leading zeroes. <LSN-low> is the oldest log file in the range. Up to 4 bytes padded with leading zeroes. The valid range of a 4-byte integer is 0 to 4294967295. The two components together comprise the Ingres LSN. Example: 1206396546:43927 Oracle <system change number> Where: <system change number> is the Oracle SCN value. Example: 6488359


Sybase <time_high>.<time_low>.<page>.<row> Where: <time_high> and <time_low> comprise a sequence number representing the time when the transaction was committed. It is stored in the header of each database log page. <time_high> is 2-bytes and <time_low> is 4-bytes, both without leading zeroes. <page> is the data page, without leading zeroes. <row> is the row, without leading zeroes. The valid range of a 2-byte integer for a timestamp-high is 0 - 65535. For a 4-byte integer for a timestamp-low, it is: 0 - 4294967295. Example: 12245.67330.12.345 DB2 LUW <LSN> Where: <LSN> is the decimal-based DB2 log sequence number, without leading zeroes. Example: 2008-01-01 10:30:00.1234567890 DB2 z/OS <RBA> Where: <RBA> is a 6-byte relative byte address of the commit record within the transaction log. Example: 1274565892 SQL Server Can be any of these, depending on how the database returns it: - Colon separated hex string (8:8:4) padded with leading zeroes and 0X prefix - Colon separated decimal string (10:10:5) padded with leading zeroes - Colon separated hex string with 0X prefix and without leading zeroes - Colon separated decimal string without leading zeroes - Decimal string Where the first value is the virtual log file number, the second is the segment number within the virtual log, and the third is the entry number. Examples: 0X00000d7e:0000036b:01bd 0000003454:0000000875:00445 0Xd7e:36b:1bd 3454:875:445 3454000000087500445 c-tree <log number>.<byte offset> Where: <log number> is the 10-digit decimal number of the c-tree log file, left padded with zeroes.


<byte offset> is the 10-digit decimal relative byte position from the beginning of the file (0 based), left padded with zeroes. Example: 0000000068.0000004682 SQL/MX <sequence number>.<RBA> Where: <sequence number> is the 6-digit decimal NonStop TMF audit trail sequence number, left padded with zeroes. <RBA> is the 10-digit decimal relative byte address within that file, left padded with zeroes. Together these specify the location in the TMF Master Audit Trail (MAT). Example: 000042.0000068242 Teradata <sequence ID> Where: <sequence ID> is the generic VAM fixed-length printable sequence ID. Example: 0x0800000000000000D700000021
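As noted above, fixed-length CSNs are padded with leading zeroes so that a plain byte-for-byte comparison follows commit order. A small sketch of that padding rule (the function name is hypothetical; the widths follow the SQL/MX example of a 6-digit sequence number and a 10-digit RBA):

```python
def normalize_csn(fields, widths, sep="."):
    """Left-pad each CSN field with zeroes to its fixed width, as the
    fixed-length CSN platforms above do, so that plain string
    comparison of two CSNs matches commit order."""
    return sep.join(str(f).zfill(w) for f, w in zip(fields, widths))
```

Because every field is padded to a fixed width, a later position always compares greater as a string, with no numeric parsing required.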

GoldenGate Data Format - Records


Each trail record contains:
- Eye-catchers for each section
- GoldenGate record header (50 bytes plus the length of the table name), containing metadata of the change: table or file name, I/O type, before/after indicator, transaction information (transaction time, transaction group), length of the data area, plus other information
- Optional user token area: token ID, token value
- Data area: column ID, column value

Record Header Use the Logdump utility to examine the record header. Here is a layout of the header record: Hdr-Ind: Always E, indicating that the record was created by the Extract process. UndoFlag: (NonStop) Normally, UndoFlag is set to zero, but if the record is the backout of a previously successful operation, then UndoFlag will be set to 1. RecLength: The length, in bytes, of the record buffer. IOType: The type of operation represented by the record.


TransInd: The place of the record within the current transaction. Values are 0 for the first record in transaction; 1 for neither first nor last record in transaction; 2 for the last record in the transaction; and 3 for the only record in the transaction. SyskeyLen: (NonStop) The length of the system key (4 or 8 bytes) if the source is a NonStop file and has a system key. AuditRBA: The relative byte address of the commit record. Continued: (Windows and UNIX) Identifies (Y/N) whether or not the record is a segment of a larger piece of data that is too large to fit within one record, such as LOBs. Partition: Depends on the record type and is used for NonStop records. In the case of BulkIO operations, Partition indicates the number of the source partition on which the bulk operation was performed. For other non-bulk NonStop operations, the value can be either 0 or 4. A value of 4 indicates that the data is in FieldComp format. BeforeAfter: Identifies whether the record is a before (B) or after (A) image of an update operation. OrigNode: (NonStop) The node number of the system where the data was extracted. FormatType: Identifies whether the data was read from the transaction log (R) or fetched from the database (F). AuditPos: Identifies the position of the Extract process in the transaction log. RecCount: (Windows and UNIX) Used to reassemble LOB data when it must be split into 2K chunks to be written to the GoldenGate file.
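The header fields above can be summarized in a small data model. This is a minimal sketch for study purposes: field names follow the Logdump display, but this is not the on-disk layout, and the class is hypothetical.

```python
from dataclasses import dataclass

# TransInd values as described above.
TRANS_IND = {0: "first in transaction", 1: "middle of transaction",
             2: "last in transaction", 3: "only record in transaction"}

@dataclass
class GGSHeader:
    """Minimal model of a few GoldenGate record-header fields."""
    hdr_ind: str        # always 'E': record created by Extract
    io_type: int        # operation type
    trans_ind: int      # place of the record within the transaction
    before_after: str   # 'B' (before image) or 'A' (after image)
    rec_length: int     # length of the record buffer, in bytes
    audit_rba: int      # relative byte address of the commit record

    def describe(self):
        return f"{self.before_after}-image, {TRANS_IND[self.trans_ind]}"
```

For instance, a single-statement insert committed on its own would carry TransInd 3 ("only record in transaction") and an after image.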


Alternative Formats

Extract Trails and Files - Alternative Formats


Alternative output formats can be specified by the Extract parameters:
- FORMATASCII
- FORMATSQL
- FORMATXML

Alternative Formats: FORMATASCII


Output is in external ASCII format. Can format data for popular database load utilities:
FORMATASCII, BCP
FORMATASCII, SQLLOADER
Data cannot be processed by GoldenGate Replicat.


Alternative Formats: FORMATASCII (contd)


By default, the output includes:
- Operation type (I, U/V, D)
- Before/after indicator (B/A)
- Table name
- Field name, field value, ...
- Field delimiter (defaults to tab)
- New-line indicator after each row
Additional information surrounds operations in a transaction:
- Begin indicator
- Transaction time
- Sequence number and RBA

Default output Without options, FORMATASCII generates records in the following format. Line 1, the following tab-delimited list: The operation-type indicator: I, D, U, V (insert, delete, update, compressed update). A before or after image indicator: B or A. The table name. A column name, column value, column name, column value, and so forth. A newline character (starts a new line). Line 2, the following tab-delimited begin-transaction record: The begin transaction indicator, B. The timestamp at which the transaction committed. The sequence number of the commit in the transaction log. The relative byte address (RBA) of the commit record within the transaction log. Line 3, the following tab-delimited commit record: The commit character C. A newline character.
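A layout this regular can be parsed mechanically. The sketch below is an assumption-laden illustration (the function name is hypothetical): the documented default delimiter is tab, but the samples in this guide use a comma, so the delimiter is a parameter.

```python
def parse_formatascii_line(line, delimiter="\t"):
    """Parse one default-layout FORMATASCII line into a dict, based on
    the record layout described above: B = begin-transaction record,
    C = commit record, otherwise a data record whose trailing fields
    alternate column name / column value."""
    fields = [f for f in line.rstrip("\n").split(delimiter) if f != ""]
    tag = fields[0]
    if tag == "B":
        return {"type": "begin", "timestamp": fields[1],
                "seqno": fields[2], "rba": fields[3]}
    if tag == "C":
        return {"type": "commit"}
    # Data record: op type, image indicator, table, then name/value pairs.
    cols = dict(zip(fields[3::2], fields[4::2]))
    return {"type": "data", "op": tag, "image": fields[1],
            "table": fields[2], "columns": cols}
```

Note this sketch assumes column names are present (i.e., NONAMES and NOHDRFIELDS are not in effect); those options change the layout.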


Alternative Formats: FORMATASCII Syntax


FORMATASCII [, BCP] [, COLHDRS ] [, DATE | TIME | TS ] [, DELIMITER <delimiter> ] [, EXTRACOLS ] [, NAMES ] [, NONAMES ] [, NOHDRFIELDS ] [, NOQUOTE ] [, NOTRANSSTMTS ] [, NULLISPACE ] [, PLACEHOLDERS ] [, SQLLOADER ]

BCP: Formats the output for compatibility with SQL Server's BCP (Bulk Copy Program) or SSIS high-speed load utility.
COLHDRS: Outputs the table's column names before the data. COLHDRS takes effect only when the extract is directly from the table.
DATE | TIME | TS: Specifies one of the following: Date (year to day), Time (year to second), TransTime (year to fraction).
DELIMITER: Uses an alternative field delimiter (the default is tab).
EXTRACOLS: Includes placeholders for additional columns at the end of each record. Use this when a target table has more columns than the source.
NAMES | NONAMES: Includes or excludes column names in the output. For compressed records, column names are included unless you also specify PLACEHOLDERS.
NOHDRFIELDS: Suppresses output of transaction information, the operation-type character, the before or after image indicator, and the file or table name.
NOQUOTE: Excludes quotation marks from character-type data (by default, values are surrounded by single quotes).
NOTRANSSTMTS: Excludes transaction information.
NULLISPACE: Outputs NULL fields as empty fields. The default is to output null fields as the word NULL.
PLACEHOLDERS: Outputs a placeholder for missing fields or columns. For example, if the second and fourth columns are missing in a four-column table, the data might look like: ABC,,123,,
SQLLOADER: Generates a file compatible with the Oracle SQL*Loader high-speed data load utility. SQLLOADER produces a fixed-length, ASCII-formatted file.
Alternative Formats: FORMATASCII Sample Output
The transaction that is the subject of the examples:
INSERT INTO CUSTOMER VALUES ("Eric", "San Fran", 550);
UPDATE CUSTOMER SET BALANCE = 100 WHERE CUSTNAME = "Eric";
COMMIT;
Example 1. FORMATASCII without options produces the following:
B,1997-02-17:14:09:46.421335,8,1873474,
I,A,TEST.CUSTOMER,CUSTNAME,'Eric',LOCATION,'San Fran',BALANCE,550,
V,A,TEST.CUSTOMER,CUSTNAME,'Eric',BALANCE,100,
C,
Example 2. FORMATASCII, NONAMES, DELIMITER '|' produces the following:
B|1997-02-17:14:09:46.421335|8|1873474|
I|A|CUSTOMER|'Eric'|'San Fran'|550|
V|A|CUSTOMER|CUSTNAME|'Eric'|BALANCE|100|
C|
Note: The last record includes column names for the CUSTNAME and BALANCE columns because the record is a compressed update and PLACEHOLDERS was not used.


Alternative Formats: FORMATSQL


Output is in external SQL DML format. Data cannot be processed by Replicat. Default output for each transaction includes:
- Begin transaction indicator, B
- Timestamp at which the transaction was committed
- Sequence number of the transaction log containing the commit
- RBA of the commit record within the transaction log
- SQL statements
- Commit indicator, C
- Newline indicator

Scope of a transaction Every record in a transaction is contained between the begin and commit indicators. Each combination of commit timestamp and RBA is unique.
Alternative Formats: FORMATSQL Syntax
FORMATSQL [, NONAMES ] [, NOPKUPDATES ] [, ORACLE ]

NONAMES: Omits column names for insert operations, because inserts contain all column names. This option conserves file size.
NOPKUPDATES: Converts UPDATE operations that affect columns in the target primary key to a DELETE followed by an INSERT. By default (without NOPKUPDATES), the output is a standard UPDATE operation.
ORACLE: Formats records for compatibility with Oracle databases by converting date and time columns to a format accepted by SQL*Plus, for example: TO_DATE('1996-05-01','YYYY-MM-DD')
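The NOPKUPDATES rewrite can be illustrated with a short sketch. This is a hypothetical model of the behavior only (operation dicts are invented for illustration; real FORMATSQL output is SQL text):

```python
def nopkupdates(op, pk_cols):
    """Sketch of the NOPKUPDATES behavior: an update that changes any
    primary-key column becomes a delete of the old key plus an insert
    of the resulting row; other operations pass through unchanged."""
    changes_pk = op["type"] == "update" and any(
        c in op["set"] for c in pk_cols)
    if changes_pk:
        # Rebuild the full row: old identifying values overlaid by the
        # updated values.
        new_row = dict(op["where"], **op["set"])
        return [{"type": "delete", "where": op["where"]},
                {"type": "insert", "row": new_row}]
    return [op]
```

The delete/insert pair is what makes a key change safe to apply: a plain UPDATE of the key could silently match zero rows or collide with an existing row.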
Alternative Formats: FORMATSQL Sample Output
B,2008-11-11:13:48:49.000000,1226440129,155,
DELETE FROM TEST.TCUSTMER WHERE CUST_CODE='JANE';
DELETE FROM TEST.TCUSTMER WHERE CUST_CODE='WILL';
DELETE FROM TEST.TCUSTORD WHERE CUST_CODE='JANE' AND ORDER_DATE='1995-11-11:13:52:00' AND PRODUCT_CODE='PLANE' AND ORDER_ID='256';
DELETE FROM TEST.TCUSTORD WHERE CUST_CODE='WILL' AND ORDER_DATE='1994-09-30:15:33:00' AND PRODUCT_CODE='CAR' AND ORDER_ID='144';
INSERT INTO TEST.TCUSTMER (CUST_CODE,NAME,CITY,STATE) VALUES ('WILL','BG SOFTWARE CO.','SEATTLE','WA');
INSERT INTO TEST.TCUSTMER (CUST_CODE,NAME,CITY,STATE) VALUES ('JANE','ROCKY FLYER INC.','DENVER','CO');
INSERT INTO TEST.TCUSTORD (CUST_CODE,ORDER_DATE,PRODUCT_CODE,ORDER_ID,PRODUCT_PRICE,PRODUCT_AMOUNT,TRANSACTION_ID) VALUES ('WILL','1994-09-30:15:33:00','CAR','144',17520.00,3,'100');
INSERT INTO TEST.TCUSTORD (CUST_CODE,ORDER_DATE,PRODUCT_CODE,ORDER_ID,PRODUCT_PRICE,PRODUCT_AMOUNT,TRANSACTION_ID) VALUES ('JANE','1995-11-11:13:52:00','PLANE','256',133300.00,1,'100');
C,

Alternative Formats: FORMATXML


Output is in XML format. Data cannot be processed by Replicat.
Syntax:
FORMATXML [, INLINEPROPERTIES | NOINLINEPROPERTIES] [, TRANS | NOTRANS]

INLINEPROPERTIES | NOINLINEPROPERTIES: Controls whether properties are included within the XML tag or written separately. INLINEPROPERTIES is the default.
TRANS | NOTRANS: Controls whether transaction boundaries and commit timestamps are included in the XML output. TRANS is the default.


Alternative Formats: FORMATXML Sample Output


<transaction timestamp="2008-11-11:14:33:12.000000">
  <dbupdate table="TEST.TCUSTMER" type="insert">
    <columns>
      <column name="CUST_CODE" key="true">ZEKE</column>
      <column name="NAME">ZEKE'S MOTION INC.</column>
      <column name="CITY">ABERDEEN</column>
      <column name="STATE">WA</column>
    </columns>
  </dbupdate>
  <dbupdate table="TEST.TCUSTMER" type="insert">
    <columns>
      <column name="CUST_CODE" key="true">ZOE</column>
      <column name="NAME">ZOE'S USED BICYCLES</column>
      <column name="CITY">ABERDEEN</column>
      <column name="STATE">WA</column>
    </columns>
  </dbupdate>
  <dbupdate table="TEST.TCUSTMER" type="insert">
    <columns>
      <column name="CUST_CODE" key="true">VAN</column>
      <column name="NAME">VAN'S BICYCLESS</column>
      <column name="CITY">ABERDEEN</column>
      <column name="STATE">WA</column>
    </columns>
  </dbupdate>
</transaction>
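Because FORMATXML output is well-formed XML, downstream tools can consume it with any standard parser. A minimal sketch using the Python standard library (the function name is hypothetical; the embedded sample is abbreviated from the output above):

```python
import xml.etree.ElementTree as ET

SAMPLE = """<transaction timestamp="2008-11-11:14:33:12.000000">
  <dbupdate table="TEST.TCUSTMER" type="insert">
    <columns>
      <column name="CUST_CODE" key="true">ZEKE</column>
      <column name="NAME">ZEKE'S MOTION INC.</column>
    </columns>
  </dbupdate>
</transaction>"""

def rows_from_formatxml(xml_text):
    """Turn FORMATXML output (structured as in the sample above) into
    (table, operation, {column: value}) tuples."""
    root = ET.fromstring(xml_text)
    out = []
    for upd in root.findall("dbupdate"):
        cols = {c.get("name"): c.text for c in upd.find("columns")}
        out.append((upd.get("table"), upd.get("type"), cols))
    return out
```

This assumes INLINEPROPERTIES and TRANS defaults; NOTRANS output would lack the enclosing transaction element, so the root tag would differ.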


Viewing in Logdump

Logdump
The Logdump utility allows you to:
- Display or search for information that is stored in GoldenGate trails or files
- Save a portion of a GoldenGate trail to a separate trail file

Logdump overview
Logdump provides access to GoldenGate trails: unstructured files with variable-length records. Each record in the trail contains a header, known as the GGS Header (unless the NOHEADERS Extract parameter was used), an optional user token area, and the data area.
Logdump Starting and Getting Online Help
To start Logdump from the GoldenGate installation directory:
Shell> logdump
To get help:
Logdump 1> help
The Logdump utility is documented in the Oracle GoldenGate Troubleshooting and Tuning Guide.

The following is an excerpt from the HELP output: FC [<num> | <string>] - Edit previous command


HISTORY - List previous commands
OPEN | FROM <filename> - Open a log file
RECORD | REC - Display audit record
NEXT [<count>] - Display next data record
SKIP [<count>] - Skip down <count> records
COUNT - Count the records in the file
  [START[time] <timestr>,] [END[time] <timestr>,]
  [INT[erval] <minutes>,] [LOG[trail] <wildcard-template>,]
  [FILE <wildcard-template>,] [DETAIL]
  <timestr> format is [[yy]yy-mm-dd] [hh[:mm][:ss]]
POSITION [<rba> | FIRST] - Set position in file
RECLEN [<size>] - Sets max output length
EXIT | QUIT - Exit the program
FILES | FI | DIR - Display filenames
ENV - Show current settings
Logdump Opening a Trail
Logdump> open dirdat/rt000000
Logdump responds with:
Current LogTrail is /ggs/dirdat/rt000000

Opening a trail The syntax to open a trail is: OPEN <file_name> Where: <file_name> is either the relative name or fully qualified name of the file, including the file sequence number.


Logdump Setting up a View


To view the trail file header:

Logdump 1> fileheader on


To view the record header with the data:

Logdump 2> ghdr on


To add column information:

Logdump 3> detail on


To add hex and ASCII data values to the column list:

Logdump 4> detail data


To control how much record data is displayed:

Logdump 5> reclen 280

Setting up a view in Logdump fileheader [on | off | detail] Controls whether or not the trail file header is displayed. ghdr [on | off] Controls whether or not the record header is displayed with each record. Each record contains a header that includes information about the transaction environment. Without arguments, GHDR displays the status of header display (ON or OFF). detail {on | off | data} - DETAIL ON displays a list of columns that includes the column ID, length, and value in hex and ASCII. - DATA adds hex and ASCII data values to the column list. - DETAIL OFF turns off detailed display. usertoken By default, the name of the token and its length are displayed. Use the USERTOKEN DETAIL option to show the actual token data. reclen Controls how much of the record data is displayed. You can use RECLEN to control the amount of scrolling that must be done when records are large, while still showing enough data to evaluate the record. Data beyond the specified length is truncated.


Logdump Viewing the Trail File Header


fileheader [on | detail] displays the file header:

Logdump 14662> fileheader detail
Logdump 14663> pos 0
Reading forward from RBA 0
Logdump 14664> n

2008/07/18 13:40:26.034.631 FileHeader  Len 587 RBA 0
Name: *FileHeader*
TokenID x46 'F' Record Header    Info x00  Length 587
TokenID x30 '0' TrailInfo        Info x00  Length 303
TokenID x31 '1' MachineInfo      Info x00  Length 103
TokenID x32 '2' DatabaseInfo     Info x00  Length 88
TokenID x33 '3' ProducerInfo     Info x00  Length 85
TokenID x34 '4' ContinunityInfo  Info x00  Length 4
TokenID x5a 'Z' Record Trailer   Info x00  Length 587
(hex and ASCII dump of the header data follows)

GroupID x30 '0' TrailInfo  Info x00  Length 303
3000 012f 3000 0008 660d 0a71 3100 0006 0001 3200 | 0../0...f..q1.....2. etc.

The detail option displays the data.


Logdump Viewing Trail Records
To go to the first record, and to move from one record to another in sequence:
Logdump 6> pos 0
Logdump 7> next (or just type n)
To position at an approximate starting point and locate the next good header record:
Logdump 8> pos <approximate RBA>
Logdump 9> scanforheader (or just type sfh)

position | pos [ <RBA> | 0 | FIRST ] You can position on the first record using 0 or FIRST or on a relative byte address. scanforheader | sfh [prev] Use scanforheader to go to the next record header. Adding the prev option will display the previous header. Before using this command, use the ghdr on command to show record headers.


Logdump Viewing Trail Records


The record header contains transaction information; below the header is the data area. The display includes:
- I/O type, and the operation type and time the record was written
- Source table
- Image type (before or after image)
- Column information
- Record data, in hex and in ASCII
- Length of the record and its RBA position in the trail file

GoldenGate trail files are unstructured. The GoldenGate record header provides metadata of the data contained in the record and includes the following information. The operation type, such as an insert, update, or delete The transaction indicator (TransInd): 00 beginning, 01 middle, 02 end or 03 whole of transaction The before or after indicator for updates Transaction information, such as the transaction group and commit timestamp The time that the change was written to the GoldenGate file The type of database operation The length of the record The relative byte address within the GoldenGate file The table name The change data is shown in hex and ASCII format. If before images are configured to be captured, for example to enable a procedure to compare before values in the WHERE clause, then a before image also would appear in the record.


Logdump - Counting the Records in the Trail


Logdump> count
LogTrail /ggs/dirdat/rt000000 has 4828 records
Total Data Bytes    334802
Avg Bytes/Record    69
Delete              900
Insert              3902
FieldComp           26
Before Images       900
After Images        3928
Average of 25 Transactions
Bytes/Trans .....   22661
Records/Trans ...   193
Files/Trans .....   8

COUNT output The basic output, without options, shows the following: - The RBA where the count began - The number of records in the file - The total data bytes and average bytes per record - Information about the operation types - Information about the transactions
Logdump Counting Records in the Trail (contd)
TCUSTMER
Total Data Bytes    10562
Avg Bytes/Record    55
Delete              300
Insert              1578
FieldComp           12
Before Images       300
After Images        1590
TCUSTORD
Total Data Bytes    229178
Avg Bytes/Record    78
Delete              600
Insert              2324
FieldComp           14
Before Images       600
After Images        2338

COUNT Syntax:
COUNT [, DETAIL]
  [, END[TIME] <time_string>]
  [, FILE <specification>]
  [, INT[ERVAL] <minutes>]
  [, LOG <wildcard>]
  [, START[TIME] <time_string>]
COUNT options allow you to show table detail without using the DETAIL command first, set a start and end time for the count, filter the count for a table, data file, trail file, or extract file, and specify a time interval for counts.
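The tallies COUNT reports (operation totals, image totals, byte averages) can be modeled in a few lines. A toy sketch over already-parsed records; the tuple layout here is invented for illustration, not the trail format:

```python
from collections import Counter

def count_records(records):
    """Summarize (io_type, before_after, length) tuples the way the
    Logdump COUNT display does: per-operation totals, per-image
    totals, total bytes, and average bytes per record."""
    ops = Counter(r[0] for r in records)
    images = Counter(r[1] for r in records)
    total = sum(r[2] for r in records)
    avg = total // len(records) if records else 0
    return {"ops": ops, "images": images,
            "total_bytes": total, "avg_bytes": avg}
```

Grouping by a table-name field instead of io_type would give the per-table breakdown shown for TCUSTMER and TCUSTORD above.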
Logdump Filtering on a Filename
Logdump 7> filter include filename TCUST*
Logdump 8> filter match all
Logdump 9> n
________________________________________________________________
Hdr-Ind   : E (x45)      Partition  : . (x00)
UndoFlag  : . (x00)      BeforeAfter: A (x41)
RecLength : 56 (x0038)   IO Time    : 2002/04/30 15:56:40.814
IOType    : 5 (x05)      OrigNode   : 108 (x6c)
TransInd  : . (x01)      FormatType : F (x46)
SyskeyLen : 0 (x00)      Incomplete : . (x00)
AuditRBA  : 105974056
2002/04/30 15:56:40.814 Insert Len 56 Log RBA 1230
File: TCUSTMER Partition 0
After Image:
3220 2020 4A61 6D65 7320 2020 2020 4A6F 686E 736F | 2   James     Johnso
6E20 2020 2020 2020 2020 2020 2020 4368 6F75 6472 | n             Choudr
616E 7420 2020 2020 2020 2020 2020 4C41           | ant           LA
Filtering suppressed 18 records

Use FILTER to filter the display based on one or more criteria.

FILENAME specifies a SQL table, NonStop data file or group of tables/files using a wildcard. You can string multiple FILTER commands together, separating each one with a semi-colon, as in: FILTER INCLUDE FILENAME fin.act*; FILTER RECTYPE 5; FILTER MATCH ALL To avoid unexpected results, avoid stringing filter options together with one FILTER command. For example, the following would be incorrect: FILTER INCLUDE FILENAME fin.act*; RECTYPE 5; MATCH ALL Without arguments, FILTER displays the current filter status (ON or OFF) and any filter criteria that are in effect.


Logdump Locating a Hex Data Value


Logdump 27> filter inc hex /68656C20/
Logdump 28> pos 0
Current position set to RBA
Logdump 29> n
__________________________________________________________
Hdr-Ind   : E (x45)      Partition  : . (x00)
UndoFlag  : . (x00)      BeforeAfter: B (x42)
RecLength : 56 (x0038)   IO Time    : 2002/04/30 16:22:14.205
IOType    : 3 (x03)      OrigNode   : 108 (x6c)
TransInd  : . (x01)      FormatType : F (x46)
SyskeyLen : 0 (x00)      Incomplete : . (x00)
AuditRBA  : 109406324
2002/04/30 16:22:14.205 Delete Len 56 Log RBA 64424
File: TCUSTMER Partition 0
Before Image:
3620 2020 4A61 6D65 7320 2020 2020 4A6F 686E 736F | 6   James     Johnso
6E20 2020 2020 2020 2020 2020 2020 4574 6865 6C20 | n             Ethel
2020 2020 2020 2020 2020 2020 2020 4C41           |               LA
Filtering suppressed 545 records

The example includes a hex range. The FILTER command can INCLUDE | EXCLUDE the following options:

AUDITRBA <rba> [<comparison operator>]
CLEAR {<filter_spec> | ALL}
ENDTIME <time_string>
FILENAME <name> [, <name>]
HEX <hex_string> [<byte_range>] [, ...]
INT16 <16-bit_integer> | INT32 <32-bit_integer>
IOTYPE <operation type> [, <operation type>]
MATCH {ANY | ALL}
OFF
ON
PROCESS <process_name>
RBA <byte address> [<comparison operator>] [...]
RECLEN <length> [<comparison operator>]
RECTYPE {<type_number> | <type_name>}
SHOW
STARTTIME <time_string>
STRING [BOTH] [B],<text> [<column_range>] [[B],<text> [<column_range>]] [...]
SYSKEY <system key> [<comparison operator>] [...]
TRANSIND <indicator> [<comparison operator>]
TYPE <type>
UNDOFLAG <type> [<comparison operator>]
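The HEX option keeps only records whose bytes contain the given hex string. Conceptually (a Python sketch for illustration, not Logdump itself), the filter in the example above does something like:

```python
# FILTER INCLUDE HEX /68656C20/ keeps records whose bytes contain
# the pattern 0x68 0x65 0x6C 0x20, i.e. the text "hel ".
pattern = bytes.fromhex("68656C20")

def matches(record: bytes) -> bool:
    # Include the record only if the raw pattern occurs in its bytes.
    return pattern in record

print(matches(b"...Ethel         LA"))    # True: "Ethel " contains "hel "
print(matches(b"...Choudrant     LA"))    # False
```

This is why the session above stops on the delete record for "Ethel": its before image contains those four bytes.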


Logdump Saving Records to a New Trail


Logdump> save newtrail 10 records

The 10 records are taken forward from the current position in the file.

Use SAVE to write a subset of the records to a new trail or extract file. By saving a subset to a new file, you can work with a smaller file. Saving to another file also enables you to extract valid records that can be processed by GoldenGate, while excluding records that may be causing errors.

Options allow you to overwrite an existing file, save a specified number of records or bytes, suppress comments, use the old or new trail format, set the transaction indicator (first, middle, end, only), and clean out an existing file before writing new data to it.

Syntax:
SAVE <file_name> [!] {<n> records | <n> bytes} [NOCOMMENT] [OLDFORMAT | NEWFORMAT] [TRANSIND <indicator>] [TRUNCATE]
Logdump Keeping a Log of Your Session
Logdump> log to MySession.txt

When finished:
Logdump> log stop


Use LOG to start and stop the logging of Logdump sessions. When enabled, logging remains in effect for all sessions of Logdump until disabled with the LOG STOP command. Without arguments, LOG displays the status of logging (ON or OFF). An alias for LOG is OUT.

Syntax:
LOG {<file_name> | STOP}
Logdump Commands
Purpose                       Examples
Working with files            CD, LOG, NEXTTRAIL, OPEN, POSITION, SAVE, WRITELOG
Viewing information           COUNT, FILES, ENV, NOTIFY, SHOW, TIME
Selecting data and records    DUMP, FILTER, NEXT, SCANFORENDTRANSACTION, SCANFORHEADER, SCANFORRBA, SCANFORTIME, SCANFORTYPE, SKIP
Making conversions            COMPUTETIMESTAMP, CTIME, DECRYPT, ENCRYPT, INTERPRETINTERVAL, INTERPRETTIMESTAMP
Controlling the environment   ASCIIDATA | EBCDICDATA, DETAIL, FILEHEADER, GHDR, HEADERTOKEN, RECLEN, TIMEOFFSET, TRAILFORMAT, USERTOKEN
Miscellaneous                 EXIT, FC, HELP, HISTORY, OBEY, X

Working with files:
CD          Sets the default directory
LOG         Writes a session log
NEXTTRAIL   Closes the current file and opens the next file in the trail sequence
OPEN        Opens a log file
POSITION    Sets the read position in the file
SAVE        Writes record data to another file
WRITELOG    Writes text to a session log

Viewing information:
COUNT    Displays record count information
FILES    Displays names of files in the current subvolume
ENV      Displays current Logdump settings
NOTIFY   Displays the number of records scanned, trail position, and record timestamp
SHOW     Displays based on the following options:
         OPEN: open files
         ENV: current Logdump environment
         RECTYPE: list of GoldenGate record types
         FILTER: current filter settings
TIME     Displays the current time in local and GMT formats


Selecting data and records:
DUMP                    Displays the specified number of bytes of data from the current position in the file
FILTER                  Filters the record display
NEXT                    Displays the next or <nn> record in the file
SCANFORENDTRANSACTION   Finds a record that is the last or only record in a transaction and displays the next record
SCANFORHEADER           Finds the start of the next record header
SCANFORRBA              Finds a specified relative byte address
SCANFORTIME             Finds the next record with a specified timestamp
SCANFORTYPE             Finds the next record of a specified type
SKIP                    Skips a specified number of records

Making conversions:
COMPUTETIMESTAMP     Converts a datetime string to a Julian timestamp
CTIME                Converts a C timestamp to an ASCII timestamp
DECRYPT              Decrypts data before displaying it
ENCRYPT              Encrypts file data
INTERPRETINTERVAL    Displays a 64-bit Julian interval as days-hh:mm:ss:ms:us
INTERPRETTIMESTAMP   Displays a 64-bit Julian timestamp in ASCII format

Controlling the environment:
ASCIIDATA | EBCDICDATA       Controls whether data is displayed in ASCII or EBCDIC format on the IBM mainframe
ASCIIDUMP | EBCDICDUMP       Sets the character set on an IBM mainframe
ASCIIHEADER | EBCDICHEADER   Controls whether the table name is displayed in ASCII or EBCDIC format on an IBM mainframe
DETAIL           Controls how much detail is displayed
FILEHEADER       Controls the display of the trail file header
GHDR             Controls the display of record header information
HEADERTOKEN      Controls the display of header token indicators
RECLEN           Sets the maximum data output length
SCANSCROLLING    Controls whether count notification displays on one line or multiple lines
TIMEOFFSET       Sets the time offset from GMT
TRAILFORMAT      Sets the trail format to the old version (pre-GoldenGate 6.0) or the new version
TRANSBYTELIMIT   Sets a byte-count threshold
TRANSHIST        Sets the size of the transaction history
TRANSRECLIMIT    Sets a record-count threshold
USERTOKEN        Controls the display of user token data

Miscellaneous:
EXIT      Exits Logdump
FC        Edits a previous command
HELP      Shows syntax for Logdump commands
HISTORY   Lists previously issued commands
OBEY      Executes a series of commands stored in a file
X         Executes a specified program from within Logdump


Reversing the Trail Sequence

Reverse - Overview
The Reverse utility reorders operations within GoldenGate trails into reverse sequence:
Provides selective back-out of operations
Selectively backs out corrupt data or accidental delete operations while keeping the rest of the application alive
Can be used to restore a database to a specific point in time
Can be used to back out all operations during regression testing to restore the original test baseline

To use GoldenGate Reverse:
1. Run Extract to extract the before data.
2. Run the Reverse utility to perform the reversal of the transactions.
3. Run Replicat to apply the restored before data to the target database.

GoldenGate Reverse has the following restrictions:
If the database does not store the before images of LONG or LOB columns, GoldenGate might not be able to roll back those columns. A before image is required to reverse update and delete operations.
Tables with XMLTYPE columns are not supported by GoldenGate Reverse.
Reverse - Overall Process

[Diagram] Overall process flow: the source database writes changes to the transaction log (or GoldenGate trails); Extract ext1 writes the before data to a single file or a series of files; that file input is processed by Reverse; the file output of Reverse is then read by Replicat rep1.

The Extract parameter file for this run contains:
SPECIALRUN TRANLOG
BEGIN / END
GETUPDATEBEFORES
NOCOMPRESSDELETES
Filter criteria (if any)
EXTFILE or RMTFILE
(Table statements)

Operation sequence is reversed
Inserts become deletes
Deletes become inserts
Update before images become after images
Begin and end transaction indicators are reversed
If input is a series of files, the file sequence is reversed
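The transformations listed above can be sketched in miniature (Python used only as illustration; this is not the actual Reverse utility):

```python
# Back out a change stream: reverse the operation order and invert
# each operation type, as the Reverse utility does.
INVERT = {"insert": "delete", "delete": "insert",
          "update_before": "update_after", "update_after": "update_before"}

def reverse_ops(ops):
    """ops: list of (op_type, row_data) tuples in original commit order."""
    return [(INVERT[op], row) for op, row in reversed(ops)]

trail = [("insert", "cust 1"), ("delete", "cust 2"),
         ("update_before", "cust 3 old"), ("update_after", "cust 3 new")]
print(reverse_ops(trail))
```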


Reverse Sample Parameter Files


Extract Parameters (dirprm/ext1.prm)
SPECIALRUN, TRANLOG
SOURCEDB dsn, USERID user, PASSWORD password
BEGIN 2004-05-30 17:00
END 2004-05-30 18:00
GETUPDATEBEFORES
NOCOMPRESSDELETES
RMTHOST target, MGRPORT 7809
RMTFILE /ggs/dirdat/input.dat
TABLE HR.SALES;
TABLE HR.ACCOUNTS;

Replicat Parameters (dirprm/rep1.prm)


SPECIALRUN
END RUNTIME
TARGETDB dsn, USERID user, PASSWORD password
EXTFILE /ggs/dirdat/output.dat
ASSUMETARGETDEFS
MAP HR.SALES, TARGET HR.SALES;
MAP HR.ACCOUNTS, TARGET HR.ACCOUNTS;

SPECIALRUN indicates a one-time batch process that runs from the BEGIN date and time until the END date and time.
TRANLOG specifies the transaction log as the data source.
GETUPDATEBEFORES includes before images of update records, which contain record details as they were before an update (as opposed to after images).
NOCOMPRESSDELETES causes Extract to send all column data to the output, instead of only the primary key. This enables deletes to be converted back to inserts.
END RUNTIME causes Extract or Replicat to terminate when it reaches the process startup time.


Reverse Overall Process


1. Run Extract from either GoldenGate trails or the source database transaction logs:
   $ ggs/> extract paramfile dirprm/ext1.prm
2. Run Reverse to produce the reordered output:
   $ ggs/> reverse dirdat/input.dat dirdat/output.dat
3. Run Replicat to back out operations:
   $ ggs/> replicat paramfile dirprm/rep1.prm

The Reverse utility is run from the OS shell.

Syntax:
REVERSE <source file or trail> <target file or trail>

Example:
C:\GGS> reverse dirdat\rt dirdat\nt

Trails Discussion Points


1. What is a trail?
2. What formats are Extract trails and files written in?
3. What GoldenGate utility allows you to view trail contents?

1. A trail is a series of files on disk where GoldenGate stores data for further processing.
2. GoldenGate trail format, ASCII, SQL, XML
3. Logdump


Parameters

Parameters - Overview
Editing parameter files
GLOBALS versus process parameters
GLOBALS parameters
Manager parameters
Extract parameters
Replicat parameters

Editing Parameter Files


Edit parameter files to configure GoldenGate processes.

The GLOBALS parameter file is identified by its file path:
GGSCI> EDIT PARAMS ./GLOBALS

Manager and utility parameter files are identified by keywords:
GGSCI> EDIT PARAMS MGR
GGSCI> EDIT PARAMS DEFGEN

Extract and Replicat parameter files are identified by the process group name:
GGSCI> EDIT PARAMS <group name>


GLOBALS versus Process Parameters


GLOBALS parameters:
Apply to all processes
Are set when Manager starts
Reside in <GoldenGate install directory>/GLOBALS

Process parameters:
Apply to a specific process (Manager, Extract, Server Collector, Replicat, utilities)
Are set when the process starts
Override GLOBALS settings
Reside by default in the dirprm directory, in files named <process name>.prm
Most apply to all tables processed, but some can be specified at the table level

Parameters manage all GoldenGate components and utilities, allowing you to customize your data management environment to suit your needs.

The Effective Range of Process Parameters

Process-wide parameters apply to all tables specified in the parameter file for the process. A process-wide parameter can appear anywhere in the parameter file, and it should be listed in the file only once. When it is listed more than once, only the last instance of the parameter is active; all other instances are ignored.

Table-specific parameters control processing for tables specified with a TABLE or MAP statement. Table-specific parameters enable you to designate one set of processing rules for some tables, while designating other rules for other tables. Table-specific parameters take effect in the order that each parameter is listed in the parameter file. There are two ways to apply a table-specific parameter:
Toggle the parameter on and off around one or more TABLE or MAP statements.
Add the parameter within a MAP statement so it applies only to that table or file.
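The two approaches above can be sketched as follows (the table names are hypothetical, chosen only for illustration):

```
-- Toggled on and off around TABLE statements:
IGNOREINSERTS
TABLE hr.audit_hist;
GETINSERTS
TABLE hr.orders;

-- Specified within a MAP statement, so it applies to that table only:
MAP hr.codes, TARGET hr.codes, TRIMSPACES;
```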


GLOBALS Parameters
GLOBALS Parameters
Control things common to all processes in a GoldenGate instance
Can be overridden by parameters at the process level
Must be created before any processes are started
Are stored in <GoldenGate install directory>/GLOBALS (GLOBALS is uppercase, with no file extension)
Require you to exit GGSCI to save the file
Once set, are rarely changed

The most commonly used parameters:
MGRSERVNAME ggsmanager1
  Defines a unique Manager service name on Windows systems
CHECKPOINTTABLE dbo.ggschkpt
  Defines the table name used for Replicat's checkpoint table

CHECKPOINTTABLE   Defines the table name used for Replicat's checkpoint table.
MGRSERVNAME       Valid for Windows only. Defines the name of the Manager service used for starting or stopping the Manager process. This service name is also used when running the INSTALL utility to add the Windows service.

Parameters for Oracle DDL replication:
GGSCHEMA      Specifies the name of the schema that contains the database objects that support DDL synchronization for Oracle.
DDLTABLE      Specifies a non-default name for the DDL history table that supports DDL synchronization for Oracle.
MARKERTABLE   Specifies a non-default name for the DDL marker table that supports DDL synchronization for Oracle.
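Putting these together, a minimal GLOBALS file might look like the following sketch (all values are illustrative, not required names):

```
-- Sample GLOBALS file
MGRSERVNAME ggsmanager1
CHECKPOINTTABLE dbo.ggschkpt
GGSCHEMA ggs
```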


Manager Parameters
Sample Manager Parameter File
PORT 7809
DYNAMICPORTLIST 9001-9100
AUTOSTART ER *
AUTORESTART EXTRACT *, WAITMINUTES 2, RETRIES 5
LAGREPORTHOURS 1
LAGINFOMINUTES 3
LAGCRITICALMINUTES 5
PURGEOLDEXTRACTS /ggs/dirdat/rt*, USECHECKPOINTS

PORT establishes the TCP/IP port number on which Manager listens for requests.
DYNAMICPORTLIST specifies the ports that Manager can dynamically allocate.
AUTOSTART specifies processes that are to be automatically started when Manager starts.
AUTORESTART specifies processes to be restarted after abnormal termination.
LAGREPORTHOURS sets the interval in hours at which Manager checks lag for Extract and Replicat processing. Alternately, it can be set in minutes.
LAGINFOMINUTES specifies the interval at which Extract and Replicat send an informational message to the event log. Alternately, it can be set in seconds or hours.
LAGCRITICALMINUTES specifies the interval at which Extract and Replicat send a critical message to the event log. Alternately, it can be set in seconds or hours.
PURGEOLDEXTRACTS purges GoldenGate trails that are no longer needed, based on the option settings.


Manager Parameters
Purpose              Examples
General              COMMENT
Port management      PORT, DYNAMICPORTLIST
Process management   AUTOSTART, AUTORESTART
Event management     LAGREPORT, LAGINFO, LAGCRITICAL
Database login       SOURCEDB, USERID
Maintenance          PURGEOLDEXTRACTS

General
COMMENT   Indicates comments in a parameter file.

Port Management
DYNAMICPORTLIST            Specifies the ports that Manager can dynamically allocate.
DYNAMICPORTREASSIGNDELAY   Specifies a time to wait before reassigning a port.
PORT                       Establishes the TCP/IP port number on which Manager listens for requests.

Process Management
AUTOSTART          Specifies processes to be started when Manager starts.
AUTORESTART        Specifies processes to be restarted by Manager after a failure.
BOOTDELAYMINUTES   Determines how long after system boot time Manager delays until performing main processing activities.
USETHREADS | NOUSETHREADS   To perform background tasks, USETHREADS causes a thread to be used on Windows systems or a child process to be spawned on a UNIX system.

Event Management
DOWNCRITICAL   Reports processes that stopped gracefully or abnormally.
DOWNREPORT     Controls the frequency of reporting stopped processes.
LAGCRITICAL    Specifies a lag time threshold at which a critical message is reported to the event log.
LAGINFO        Specifies a lag time threshold at which an informational message is reported to the event log.
LAGREPORT      Sets an interval for reporting lag time to the event log.
UPREPORT       Determines how often process heartbeat messages are reported.

Database Login
SOURCEDB   Specifies a data source name as part of the login information.
USERID     Provides login information for Manager when it needs to access the database.

Maintenance
CHECKMINUTES                    Determines how often Manager cycles through maintenance activities.
PURGEDDLHISTORY                 Purges rows from the Oracle DDL history table when they are no longer needed.
PURGEMARKERHISTORY              Purges Oracle marker table rows that are no longer needed.
PURGEOLDEXTRACTS                Purges trail data that is no longer needed.
PURGEOLDTASKS                   Purges Extract and Replicat tasks after a specified period of time.
STARTUPVALIDATIONDELAY[CSECS]   Sets the delay time after which Manager checks the status of a process it started.


Extract Parameters
Extract Parameter Overview
Extract parameters specify:
The group name, which is associated with a checkpoint file
Where to send the data:
  the local system
  multiple remote systems
  one to many GoldenGate trails
What is being captured:
  which tables
  which rows and columns
  which operations
Which column mapping to apply
Which data transformations to apply

Extract Parameter Defaults


All Extract parameters assume a default value:
Capture all insert, update, and delete operations
Committed data only
Full image for inserts
Only primary key and changed columns for updates
Only primary key for deletes
Only the after image of updates
Send data without transformation
Buffer transactions until a block is full or until time elapses, based on average transaction volumes
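These defaults can be overridden explicitly. For example, a sketch of an Extract that also captures before images of updates and full column data for deletes (object names reuse the sample below and are illustrative):

```
EXTRACT ODS
USERID GoldenUser, PASSWORD password
GETUPDATEBEFORES
NOCOMPRESSDELETES
RMTHOST manhattan, MGRPORT 7809
RMTTRAIL /ggs/dirdat/rt
TABLE SALES.ORDERS;
```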


Sample Extract Parameter File


EXTRACT ODS
USERID GoldenUser, PASSWORD password
RMTHOST manhattan, MGRPORT 7809
RMTTRAIL /ggs/dirdat/rt
TABLE SALES.ORDERS;
TABLE SALES.INVENTORY;

USERID and PASSWORD supply database credentials. (SOURCEDB is not required for Oracle.)
RMTHOST specifies the target system, while the MGRPORT option specifies the port where Manager is running.
RMTTRAIL specifies the GoldenGate trail on the target system.
TABLE specifies a source table for which activity will be extracted.
Extract Parameters
Purpose                      Examples
General                      SETENV, GETENV, OBEY
Processing method            BEGIN, END, PASSTHRU
Database login               SOURCEDB, USERID
Selecting and mapping data   IGNOREINSERTS, GETUPDATEBEFORES, TABLE
Routing data                 EXTTRAIL, RMTHOST, RMTTRAIL
Formatting data              FORMATASCII, FORMATSQL, FORMATXML, NOHEADERS
Custom processing            CUSEREXIT, INCLUDE, MACRO, SQLEXEC
Reporting                    REPORT, REPORTCOUNT, STATOPTIONS
Error handling               DISCARDFILE, DDLERROR
Tuning                       ALLOCFILES, CHECKPOINTSECS, DBOPTIONS
Maintenance                  PURGEOLDEXTRACTS, REPORTROLLOVER
Security                     ENCRYPTTRAIL, DECRYPTTRAIL

General
CHECKPARAMS   Performs a parameter check, then terminates.
COMMENT       Allows insertion of comments in a parameter file.
ETOLDFORMAT   Generates trails in a format that is compatible with Replicat versions prior to GoldenGate version 6.0.
GETENV        Retrieves variables that were set with the SETENV parameter.
OBEY          Processes parameter statements contained in a different parameter file.
RECOVERYOPTIONS   Controls the recovery mode of the Extract process.
SETENV        Specifies a value for a UNIX environment variable from within the GGSCI interface.
TCPSOURCETIMER | NOTCPSOURCETIMER   Adjusts timestamps of rows transferred to other systems to account for system clock differences (not time zone differences).
TRACETABLE | NOTRACETABLE   Causes Extract to ignore database changes generated by Replicat during bidirectional synchronization. Supports Oracle.

Processing Method
BEGIN           Specifies when to begin a processing run.
DSOPTIONS       Specifies Extract processing options when a Vendor Access Module (VAM) is being used.
END             Specifies when to end a processing run. Not required unless SPECIALRUN is used. Online processing is implied if END is in the future or unspecified.
EXTRACT         Defines an Extract group as an online process.
GETAPPLOPS | IGNOREAPPLOPS   Controls whether or not operations from all processes except Replicat are captured.
GETREPLICATES | IGNOREREPLICATES   Controls whether or not replicated operations are captured by an Extract on the same system.
PASSTHRU | NOPASSTHRU   Controls whether tables will be processed by a data-pump Extract in pass-through mode (data definitions not required).
RMTTASK         Creates a processing task on the target system.
SOURCEISTABLE   Extracts entire records from source tables.
SPECIALRUN      Used for one-time processing tasks that do not checkpoint from run to run.
VAM             Indicates a Vendor Access Module (VAM) is being used to provide transactional data to Extract.

Database Login
SOURCEDB   Specifies the data source as part of the login information.
USERID     Specifies database connection information.

Selecting and Mapping Data
COLMATCH   Establishes global column-mapping rules.
COMPRESSDELETES | NOCOMPRESSDELETES   Controls whether GoldenGate writes all columns or only the key for delete operations.
COMPRESSUPDATES | NOCOMPRESSUPDATES   Causes only primary key columns and changed columns to be logged for updates.
DDL        Enables and filters the capture of DDL operations.
DDLSUBST   Enables string substitution in DDL processing.
FETCHOPTIONS   Controls certain aspects of how GoldenGate fetches data from the database, such as what to do when a row is missing. Several of the options are specific to Oracle databases.
GETDELETES | IGNOREDELETES   Controls the extraction of delete operations.
GETINSERTS | IGNOREINSERTS   Controls the extraction of insert operations.
GETTRUNCATES | IGNORETRUNCATES   Controls the extraction of truncate statements.
GETUPDATEAFTERS | IGNOREUPDATEAFTERS   Controls the extraction of update after images.
GETUPDATEBEFORES | IGNOREUPDATEBEFORES   Controls the extraction of update before images.
GETUPDATES | IGNOREUPDATES   Controls the extraction of update operations.
REPLACEBADCHAR   Replaces invalid character values with another value.
SEQUENCE (Oracle)   Specifies sequences for synchronization.
TABLE          Specifies tables for synchronization. Controls column mapping and conversion.
TABLEEXCLUDE   Excludes tables from the extraction process.
TARGETDEFS     Specifies a file containing target table definitions for target databases that reside on the NonStop platform.
TRIMSPACES | NOTRIMSPACES   Controls whether or not trailing spaces are removed when mapping CHAR to VARCHAR columns.
WILDCARDRESOLVE   Defines rules for processing wildcard table specifications in a TABLE statement.

Routing Data
EXTFILE    Specifies an extract file to which extracted data is written on the local system.
EXTTRAIL   Specifies a trail to which extracted data is written on the local system.
RMTHOST    Specifies the target system and Manager port number for a remote file or trail.
RMTFILE    Specifies an extract file to which extracted data is written on the target system.
RMTTRAIL   Specifies a trail to which extracted data is written on a remote system.

Formatting Data
FORMATASCII   Formats extracted data in external ASCII format.
FORMATSQL     Formats extracted data into equivalent SQL statements.
FORMATXML     Formats extracted data into equivalent XML syntax.
INPUTASCII    Accepts a decimal value from 1 to 127, representing an ASCII character.
NOHEADERS     Prevents record headers from being written to the trail.

Custom Processing
CUSEREXIT   Invokes customized user exit routines at specified points during processing.
INCLUDE     Invokes a macro library.
MACRO       Defines a GoldenGate macro.
MACROCHAR   Defines a macro character other than the default of #.
SQLEXEC     Executes a stored PL/SQL procedure or query during Extract processing.
VARWIDTHNCHAR | NOVARWIDTHNCHAR   Controls whether length information is written to the trail for NCHAR columns.

Reporting
CMDTRACE        Displays macro expansion steps in the Extract report file.
LIST | NOLIST   Suppresses or displays the listing of macros in the Extract report file.
REPORT          Schedules a statistical report.
STATOPTIONS     Specifies information to include in statistical displays.
REPORTCOUNT     Reports the number of records processed.
TLTRACE         Traces transaction log extraction activity.
TRACE/TRACE2    Shows Extract processing information to assist in revealing processing bottlenecks.
WARNLONGTRANS   Defines a long-running transaction and controls the frequency of checking for and reporting them.

Error Handling
DDLERROR      Controls error handling for DDL extraction.
DISCARDFILE   Contains records that could not be processed.

Tuning
ALLOCFILES   Controls the incremental amount of memory structures allocated once the initial allocation specified by the NUMFILES parameter is reached. Defaults to 500.
CACHEMGR     Controls the virtual memory cache manager. You can set the CACHEDIRECTORY <path> option, but other options are best defaulted, unless under direction from GoldenGate Support.
CHECKPOINTSECS   Controls how often Extract writes a checkpoint.
DBOPTIONS    Specifies database options.
DDLOPTIONS   Specifies DDL processing options.
DYNAMICRESOLUTION | NODYNAMICRESOLUTION   Suppresses the metadata lookup for a table until Extract encounters transactional data for it. Makes Extract start faster when there is a large number of tables specified for capture.
EOFDELAY | EOFDELAYCSECS   Determines how long Extract delays before searching for more data to process.
FLUSHSECS | FLUSHCSECS     Determines the amount of time that record data remains buffered before being written to a trail.
FUNCTIONSTACKSIZE   Controls the number of arguments permitted in a parameter clause.
GROUPTRANSOPS       Controls the number of records that are sent to the trail in one batch.


MAXFETCHSTATEMENTS   Controls the maximum number of prepared queries that Extract can use to fetch data from the database.
NUMFILES   Controls the initial amount of memory structures allocated for the storage of table structures. Defaults to 1000.
RMTHOSTOPTIONS   Specifies connection attributes other than host information for a TCP/IP connection used by a passive Extract group.
THREADOPTIONS    Controls aspects of the way that Extract operates in an Oracle Real Application Clusters environment. Supports Oracle.
TRANLOGOPTIONS   Supplies log-processing options.
TRANSMEMORY (DB2 z/OS and NonStop SQL/MX only)   Controls the amount of memory and temporary disk space available for caching uncommitted transaction data.

Maintenance
DISCARDROLLOVER    Controls when to create a new discard file.
PURGEOLDEXTRACTS   Purges trails after they have been consumed.
REPORTROLLOVER     Specifies when to create new report files.
ROLLOVER           Specifies the way that trail files are aged.

Security
DECRYPTTRAIL   Decrypts data in a trail or extract file.
ENCRYPTTRAIL | NOENCRYPTTRAIL   Controls encryption of data in a trail or extract file.

Extract TABLE Parameter


TABLE <table spec>
[, TARGET <table spec>]
[, DEF <definitions template>]
[, TARGETDEF <definitions template>]
[, COLMAP (<column mapping expression>)]
[, {COLS | COLSEXCEPT} (<column specification>)]
[, EVENTACTIONS <action>]
[, EXITPARAM <parameter string>]
[, FETCHBEFOREFILTER]
[, {FETCHCOLS | FETCHCOLSEXCEPT} (<column specification>)]
[, {FETCHMODCOLS | FETCHMODCOLSEXCEPT} (<column spec>)]
[, FILTER (<filter specification>)]
[, KEYCOLS (<column specification>)]
[, SQLEXEC (<SQL specification>)]
[, SQLPREDICATE WHERE <where clause>]
[, TOKENS (<token specification>)]
[, TRIMSPACES | NOTRIMSPACES]
[, WHERE (<where clause>)]
;

Note: You must use a semicolon to terminate the TABLE statement.

You can use TABLE options to do the following:
Select and filter records
Select and map columns
Transform data
Designate key columns
Define user tokens
Trim trailing spaces
Pass a parameter to a user exit
Execute stored procedures and queries
There must be a TABLE statement for each source table from which you will be extracting data. Use wildcards to specify multiple tables with one TABLE statement, for example: TABLE acct*;

TABLE <table spec>   Specifies the source table.

TARGET <table spec>   Specifies a target table to which the source table will be mapped. TARGET is required for TABLE statements when Extract must refer to a target definitions file (specified with the TARGETDEFS parameter) to perform conversions, and when the COLMAP option is used. Otherwise, it can be omitted. Using TARGET identifies the extracted data by the target structure, rather than that of the source, to reflect the structure of the record that is described in the definitions file or the column map. Column mapping and conversion can be performed on the target system to prevent added overhead on the source system. Replication from a Windows or UNIX system to a NonStop system, however, requires these functions to be performed on the source. In addition, it may be preferable to perform the mapping and conversion on the source when there are multiple sources and one target. In that case, it can be easier to manage one target definitions file that is transferred to each source, rather than having to manage source definitions for each source database that must be transferred to the target, especially when there are frequent application changes that require new files to be generated.

DEF <definitions template>   Specifies a source-definitions template.
TARGETDEF <definitions template>   Specifies a target-definitions template.
COLMAP   Maps records between different source and target columns.
COLS | COLSEXCEPT   Selects or excludes columns for processing.
EVENTACTIONS (<action>)   Triggers an action based on a record that satisfies a specified filter criterion or (if there is no filter condition) on every record.
EXITPARAM   Passes a parameter in the form of a literal string to a user exit.
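As an example of TARGET with a simple column map, a hypothetical TABLE statement might look like this (table and column names are illustrative):

```
TABLE sales.orders, TARGET rpt.orders,
COLMAP (USEDEFAULTS,
        order_no = order_id);
```

USEDEFAULTS maps the like-named columns automatically, while the explicit entry maps a source column to a differently named target column.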
FETCHBEFOREFILTER   Directs the FETCHCOLS or FETCHCOLSEXCEPT action to be performed before a filter is executed.
FETCHCOLS | FETCHCOLSEXCEPT   Enables the fetching of column values from the source database when the values are not in the transaction record.


FETCHMODCOLS | FETCHMODCOLSEXCEPT   Forces column values to be fetched from the database when the columns are present in the transaction log.
FILTER     Selects records based on a numeric value. FILTER provides more flexibility than WHERE.
KEYCOLS    Designates columns that uniquely identify rows.
SQLEXEC    Executes stored procedures and queries.
SQLPREDICATE   Enables a WHERE clause to select rows for an initial load.
TOKENS     Defines user tokens.
TRIMSPACES | NOTRIMSPACES   Controls whether trailing spaces are trimmed or not when mapping CHAR to VARCHAR columns.
WHERE      Selects records based on conditional operators.
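A sketch combining several of these options (table and column names are hypothetical):

```
TABLE sales.accounts,
WHERE (STATE = "CA"),
KEYCOLS (account_id);

TABLE sales.orders,
FILTER (amount > 10000);
```

WHERE applies a simple conditional test, KEYCOLS supplies a unique row identifier for a table without a primary key, and FILTER applies a numeric expression.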
Extract TRANLOGOPTIONS Parameter
Use the TRANLOGOPTIONS parameter to control database-specific aspects of log-based extraction.

Examples: controlling the archive log

TRANLOGOPTIONS ALTARCHIVEDLOGFORMAT log_%t_%s_%r.arc
Specifies an alternative archive log format.

TRANLOGOPTIONS ALTARCHIVELOGDEST /oradata/archive/log2
Specifies an alternative archive log location.

TRANLOGOPTIONS ARCHIVEDLOGONLY
Causes Extract to read from the archived logs exclusively.


Extract TRANLOGOPTIONS Parameter (contd)


Examples: loop prevention

TRANLOGOPTIONS EXCLUDEUSER ggsrep
Specifies the name of the Replicat database user so that those transactions are not captured by Extract.

TRANLOGOPTIONS EXCLUDETRANS ggs_repl
Specifies the transaction name of the Replicat database user so that those transactions are not captured by Extract.

TRANLOGOPTIONS FILTERTABLE <table_name>
Specifies the Replicat checkpoint table name. Operations on the checkpoint table will be ignored by the local Extract.


Replicat Parameters
Replicat Parameter Overview
Replicat parameters specify:
The group name associated with a checkpoint file
A list of source-to-target relationships
Optional row-level selection criteria
Optional column mapping facilities
Optional transformation services
Optional stored procedure or SQL query execution
Error handling
Various optional parameter settings

The Replicat process runs on the target system, reads the extracted data, and replicates it to the target tables. Replicat reads extract and log files sequentially, and processes the inserts, updates, and deletes specified by selection parameters. Replicat reads extracted data in blocks to maximize throughput. Optionally, you can filter out the rows you do not wish to deliver, as well as perform data transformation prior to replicating the data. Parameters control the way Replicat processes data: how it maps data, uses functions, and handles errors. You can configure multiple Replicat processes for increased throughput, identifying each by a different group name.


Replicat Parameter Defaults


All Replicat parameters assume a default value:
Apply all insert, update, and delete operations
Smart transactional grouping: 100 source operations are grouped into a single target transaction
Process abends on any operational failure
Roll back the transaction to the last good checkpoint
Optional error handling
Optional mapping to a secondary table for exceptions
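The optional exceptions mapping mentioned above is typically configured with REPERROR and a second MAP statement; a sketch follows (table names are hypothetical):

```
-- Route failed operations to an exceptions table instead of abending
REPERROR (DEFAULT, EXCEPTION)
MAP hr.student, TARGET hr.student;
MAP hr.student, TARGET hr.student_exc, EXCEPTIONSONLY,
INSERTALLRECORDS;
```

The second MAP fires only when the first one raises an error, and INSERTALLRECORDS writes every failed operation to the exceptions table as an insert.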

Replicat supports a high volume of data replication activity. As a result, network activity is block-based rather than record-at-a-time. SQL operations used to replicate operations are compiled once and executed many times, resulting in virtually the same performance as pre-compiled operations. Replicat preserves the boundaries of each transaction while processing, but small transactions can be grouped into larger transactions to improve performance. Like Extract, Replicat uses checkpoints so that after a graceful stop or a failure, processing can be restarted without repetition or loss of continuity.
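These defaults can be overridden in the Replicat parameter file. A minimal sketch — the group name, file names, and values shown are illustrative assumptions, not recommendations:

```
REPLICAT RDEMO
USERID ggsuser, PASSWORD ggspass
ASSUMETARGETDEFS
-- Raise transaction grouping from the default of 100 source operations
GROUPTRANSOPS 500
-- Instead of abending, discard operations that hit ORA-01403 (no data found)
REPERROR (1403, DISCARD)
DISCARDFILE /ggs/dirrpt/rdemo.dsc, APPEND
MAP HR.CODES, TARGET HR.CODES;
```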
Sample Replicat Parameter File
REPLICAT SALESRPT
USERID ggsuser, PASSWORD ggspass
ASSUMETARGETDEFS
DISCARDFILE /ggs/dirrpt/SALESRPT.dsc, APPEND
MAP HR.STUDENT, TARGET HR.STUDENT,
  WHERE (STUDENT_NUMBER < 400000);
MAP HR.CODES, TARGET HR.CODES;
MAP SALES.ORDERS, TARGET SALES.ORDERS,
  WHERE (STATE = "CA" AND OFFICE = "LA");

REPLICAT names the group linking together the process, checkpoints, and log files. USERID, PASSWORD provide credentials to access the database. ASSUMETARGETDEFS specifies that the table layout is identical on the source and target.


DISCARDFILE identifies the file to receive records that cannot be processed. Records will be appended or the file will be purged at the beginning of the run depending on the options. MAP links the source tables to the target tables and applies mapping, selection, error handling, and data transformation depending on options.
Replicat Parameters
Purpose                                  Examples
General                                  SETENV, GETENV, OBEY
Processing method                        BEGIN, END, SPECIALRUN
Database login                           SOURCEDB, USERID
Selecting, converting and mapping data   COLMATCH, IGNOREUPDATES, MAP, SOURCEDEFS, ASSUMETARGETDEFS
Routing data                             EXTFILE, EXTTRAIL
Custom processing                        CUSEREXIT, DEFERAPPLYINTERVAL, INCLUDE, MACRO, SQLEXEC
Reporting                                REPORT, REPORTCOUNT, STATOPTIONS
Error handling                           DISCARDFILE, OVERRIDEDUPS, HANDLECOLLISIONS
Tuning                                   ALLOCFILES, BATCHSQL, GROUPTRANSOPS, DBOPTIONS
Maintenance                              PURGEOLDEXTRACTS, REPORTROLLOVER
Security                                 DECRYPTTRAIL

General
CHECKPARAMS  Performs a parameter check, then terminates.
COMMENT  Allows insertion of comments in the parameter file.
GETENV  Retrieves a value for a variable set by SETENV.
OBEY  Processes parameter statements contained in a different parameter file.
SETENV  Specifies a value for a UNIX environment variable from within the GGSCI interface.
TRACETABLE | NOTRACETABLE  Defines a trace table in an Oracle database to which Replicat adds a record whenever it updates the target database.

Processing Method
BEGIN  Specifies when Replicat starts processing. Required when SPECIALRUN is used.
BULKLOAD  Loads data directly into the interface of Oracle's SQL*Loader bulk-load utility.
END  Specifies when Replicat stops processing. Required when using SPECIALRUN.
GENLOADFILES  Generates run and control files that are compatible with bulk-load utilities.
REPLICAT  Links this run to an online Replicat process.
SPECIALRUN  Used for one-time processing that does not require checkpointing from run to run.


Database Login
TARGETDB  Specifies the target database. Supported for SQL Server, DB2, Sybase, and Informix.
USERID  Specifies a database user ID and password for connecting to the target database.

Selecting, Converting and Mapping Data
ALLOWDUPTARGETMAP | NOALLOWDUPTARGETMAP  Allows the same source-target MAP statement to appear more than once in the parameter file.
ASCIITOEBCDIC  Converts incoming ASCII text to EBCDIC for DB2 on z/OS systems running UNIX System Services.
ASSUMETARGETDEFS  Assumes the source and target tables have the same column structure.
COLMATCH  Establishes global column-mapping rules.
DDL  Enables and filters the capture of DDL operations.
DDLSUBST  Enables string substitution in DDL processing.
FILTERDUPS | NOFILTERDUPS  Controls the handling of anomalies in data that is sent out of order from NSK.
GETDELETES | IGNOREDELETES  Includes (default) or excludes delete operations from being replicated.
GETINSERTS | IGNOREINSERTS  Controls the replication of insert operations.
GETUPDATEAFTERS | IGNOREUPDATEAFTERS  Controls the replication of update after images.
GETUPDATEBEFORES | IGNOREUPDATEBEFORES  Controls the replication of update before images.
GETUPDATES | IGNOREUPDATES  Controls the replication of update operations.
GETTRUNCATES | IGNORETRUNCATES  Includes or excludes the processing of TRUNCATE TABLE operations. Default is IGNORETRUNCATES.
INSERTALLRECORDS  Inserts all records, including before and after images, as inserts into the target database.
INSERTDELETES | NOINSERTDELETES  Converts deletes to inserts.
INSERTMISSINGUPDATES | NOINSERTMISSINGUPDATES  Converts an update to an insert when a target row does not exist.
INSERTUPDATES | NOINSERTUPDATES  Converts updates to inserts.
MAP  Specifies a relationship between one or more source and target tables and controls column mapping and conversion.
MAPEXCLUDE  Excludes tables from being processed by a wildcard specification supplied in MAP statements.
REPLACEBADCHAR  Replaces invalid character values with another value.
REPLACEBADNUM  Replaces invalid numeric values with another value.
SOURCEDEFS  Specifies a file that contains source table data definitions created by the DEFGEN utility.


SPACESTONULL | NOSPACESTONULL  Controls whether or not a target column containing only spaces is converted to NULL on Oracle.
TABLE (Replicat)  Specifies a table or tables for which event actions are to take place when a row satisfies the given filter criteria.
TRIMSPACES | NOTRIMSPACES  Controls whether trailing spaces are preserved or removed for character or variable character columns.
UPDATEDELETES | NOUPDATEDELETES  Changes deletes to updates.
USEDATEPREFIX  Prefixes data values for DATE data types with a DATE literal, as required by Teradata databases.
USETIMEPREFIX  Prefixes data values for TIME data types with a TIME literal, as required by Teradata databases.
USETIMESTAMPPREFIX  Prefixes data values for TIMESTAMP data types with a TIMESTAMP literal, as required by Teradata databases.

Routing Data
EXTFILE  Defines the name of an extract file on the local system that contains data to be replicated. Used for one-time processing.
EXTTRAIL  Defines a trail containing data to be replicated. Used for one-time processing.

Custom Processing
CUSEREXIT  Invokes customized user exit routines at specified points during processing.
DEFERAPPLYINTERVAL  Specifies a length of time for Replicat to wait before applying replicated operations to the target database.
INCLUDE  References a macro library in a parameter file.
MACRO  Defines a GoldenGate macro.
SQLEXEC  Executes a stored procedure, query or database command during Replicat processing.

Reporting
CMDTRACE  Displays macro expansion steps in the report file.
LIST | NOLIST  Suppresses or displays the listing of the parameter file in the report file.
REPORT  Schedules a statistical report at a specified date or time.
REPORTCOUNT  Reports the number of records processed.
SHOWSYNTAX  Causes Replicat to print its SQL statements to the report file.
STATOPTIONS  Specifies information to include in statistical displays.
TRACE | TRACE2  Shows Replicat processing information to assist in revealing processing bottlenecks.

Error Handling
CHECKSEQUENCEVALUE | NOCHECKSEQUENCEVALUE  (Oracle) Controls whether or not Replicat verifies that a target sequence value is higher than the one on the source and corrects any disparity that it finds.
DDLERROR  Controls error handling for DDL replication.
DISCARDFILE  Contains records that could not be processed.


HANDLECOLLISIONS | NOHANDLECOLLISIONS  Handles errors for duplicate and missing records. Reconciles the results of changes made to the target database by an initial load process with those applied by a change-synchronization group.
HANDLETPKUPDATE  Prevents constraint errors associated with replicating transient primary key updates.
OVERRIDEDUPS | NOOVERRIDEDUPS  Overlays a replicated insert record onto an existing target record whenever a duplicate-record error occurs.
RESTARTCOLLISIONS | NORESTARTCOLLISIONS  Controls whether or not Replicat applies HANDLECOLLISIONS logic after GoldenGate has exited because of a conflict.
REPERROR  Determines how Replicat responds to database errors.
REPFETCHEDCOLOPTIONS  Determines how Replicat responds to operations for which a fetch from the source database was required.
SQLDUPERR  When OVERRIDEDUPS is on, specifies the database error number that indicates a duplicate-record error.
WARNRATE  Determines how often database errors are reported.

Tuning
ALLOCFILES  Controls the incremental number of memory structures allocated once the initial allocation specified by the NUMFILES parameter is reached. Defaults to 500.
BATCHSQL  Increases the throughput of Replicat processing by arranging similar SQL statements into arrays and applying them at an accelerated rate.
CHECKPOINTSECS  Controls how often Replicat writes a checkpoint when checkpoints are not being generated as a result of transaction commits.
DBOPTIONS  Specifies database options.
DDLOPTIONS  Specifies DDL processing options.
DYNAMICRESOLUTION | NODYNAMICRESOLUTION  Makes Replicat start faster when there is a large number of tables specified for synchronization.
DYNSQL | NODYNSQL  Causes Replicat to use literal SQL statements rather than a compile-once, execute-many strategy.
EOFDELAY | EOFDELAYSECS  Determines how many seconds Replicat delays before looking for more data to process.
FUNCTIONSTACKSIZE  Controls the number of arguments permitted when using GoldenGate column-conversion functions.
GROUPTRANSOPS  Groups multiple transactions into larger transactions.
INSERTAPPEND | NOINSERTAPPEND  Controls whether or not Replicat uses an APPEND hint when applying INSERT operations to Oracle target tables.
LOBMEMORY  Controls the memory and disk space allocated to LOB transactions.
MAXDISCARDRECS  Limits the number of discarded records reported to the discard file.
MAXSQLSTATEMENTS  Controls the number of prepared SQL statements that can be used by Replicat.
MAXTRANSOPS  Divides large transactions into smaller ones.
NUMFILES  Controls the initial number of memory structures allocated for the storage of table structures. Defaults to 1000.
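Several of these tuning parameters are commonly combined in a Replicat parameter file. A hedged sketch — the values shown are illustrative assumptions, not tuning advice:

```
-- Arrange similar SQL statements into arrays for faster apply
BATCHSQL
-- Write a checkpoint at least every 10 seconds during idle periods
CHECKPOINTSECS 10
-- Split source transactions larger than 10000 operations into smaller target transactions
MAXTRANSOPS 10000
```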


RETRYDELAY  Specifies the delay between attempts to retry a failed SQL operation.
TRANSACTIONTIMEOUT  Specifies a time interval after which Replicat will commit its open target transaction and roll back any incomplete source transactions that it contains, saving them for when the entire source transaction is ready to be applied.
TRANSMEMORY  (DB2 z/OS and NonStop SQL/MX only) Controls the amount of memory and temporary disk space available for caching uncommitted transaction data.
WILDCARDRESOLVE  Alters the rules by which wildcard specifications in a MAP statement are resolved.

Maintenance
DISCARDROLLOVER  Specifies when to create new discard files.
PURGEOLDEXTRACTS  Purges GoldenGate trail files once consumed.
REPORTROLLOVER  Specifies when to create new report files.

Security
DECRYPTTRAIL  Decrypts data in a trail or extract file.

Replicat MAP Parameter


MAP <table spec>, TARGET <table spec>
[, DEF <definitions template>]
[, TARGETDEF <definitions template>]
[, COLMAP (<column mapping expression>)]
[, EVENTACTIONS <action>]
[, EXCEPTIONSONLY]
[, EXITPARAM <parameter string>]
[, FILTER (<filter specification>)]
[, HANDLECOLLISIONS | NOHANDLECOLLISIONS]
[, INSERTALLRECORDS]
[, INSERTAPPEND | NOINSERTAPPEND]
[, KEYCOLS (<column specification>)]
[, REPERROR (<error>, <response>)]
[, SQLEXEC (<SQL specification>)]
[, TRIMSPACES | NOTRIMSPACES]
[, WHERE (<where clause>)]
;

Note: You must use a semicolon to terminate the MAP statement.

The MAP parameter establishes a relationship between one source and one target table. Insert, update and delete records originating in the source table are replicated in the target table. The first <table spec> is the source table. With MAP, you can replicate particular subsets of data to the target table, for example, WHERE (STATE = "CA"). In addition, MAP enables the user to map certain fields or columns from the source record into the target record format (column mapping). You can also include a FILTER clause that uses built-in functions to evaluate data against more complex filtering criteria.


A table can appear in multiple maps as either source or target. For example, one might replicate a sales file to either the east sales or west sales tables, depending on some value or type of operation.

MAP <table spec>  Specifies the source object.
TARGET <table spec>  Specifies the target object.
DEF <definitions template>  Specifies a source-definitions template.
TARGETDEF <definitions template>  Specifies a target-definitions template.
COLMAP  Maps records between different source and target columns.
EVENTACTIONS (<action>)  Triggers an action based on a record that satisfies a specified filter criterion or (if there is no filter condition) on every record.
EXCEPTIONSONLY  Specifies error handling within an exceptions MAP statement.
EXITPARAM  Passes a parameter in the form of a literal string to a user exit.
FILTER  Selects records based on a numeric operator. FILTER provides more flexibility than WHERE.
HANDLECOLLISIONS | NOHANDLECOLLISIONS  Reconciles the results of changes made to the target table by an initial load process with those applied by a change-synchronization group.
INSERTALLRECORDS  Applies all row changes as inserts.
INSERTAPPEND | NOINSERTAPPEND  Controls whether or not Replicat uses an APPEND hint when applying INSERT operations to Oracle target tables.
KEYCOLS  Designates columns that uniquely identify rows.
REPERROR  Controls how Replicat responds to errors when executing the MAP statement.
SQLEXEC  Executes stored procedures and queries.
TRIMSPACES | NOTRIMSPACES  Controls whether trailing spaces are trimmed when mapping CHAR to VARCHAR columns.
WHERE  Selects records based on conditional operators.
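EXCEPTIONSONLY is typically paired with REPERROR to route failed operations to an exceptions table. A hedged sketch of this pattern — the exceptions table name is an illustrative assumption:

```
-- Route any database error to the exceptions MAP instead of abending
REPERROR (DEFAULT, EXCEPTION)
MAP SALES.ORDERS, TARGET SALES.ORDERS;
-- Exceptions map: each failed operation is inserted into an exceptions table
MAP SALES.ORDERS, TARGET SALES.ORDERS_EXCEPTIONS,
  EXCEPTIONSONLY,
  INSERTALLRECORDS;
```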


Parameters Discussion Points


1. What are some typical Manager parameters?
2. What are some typical Extract parameters?
3. What are some typical Replicat parameters?
4. Where are GoldenGate parameters documented?

1. PORT, DYNAMICPORTLIST, AUTOSTART, AUTORESTART, LAG parameters, PURGEOLDEXTRACTS
2. EXTRACT with group name; database login parameters; EXTTRAIL or RMTHOST and RMTTRAIL; TABLE
3. REPLICAT with group name; database login parameters; SOURCEDEFS or ASSUMETARGETDEFS; DISCARDFILE; MAP
4. Reference Guide


Data Mapping and Transformation

Data Mapping and Transformation - Overview


- Data selection and filtering
- Column mapping
- Functions
- SQLEXEC
- Macros
- User tokens
- User exits
- Oracle sequences

Data Selection and Filtering

Data Selection - Overview


GoldenGate provides the ability to select or filter out data at a variety of levels and conditions:

Parameter / Clause       Selects
TABLE or MAP             Table
WHERE                    Row
FILTER                   Row, Operation, Range
TABLE COLS | COLSEXCEPT  Columns

TABLE selection
The MAP (Replicat) or TABLE (Extract) parameter can be used to select a table:
MAP sales.tcustord, TARGET sales.tord;

ROW selection


The following WHERE option can be used with MAP or TABLE to select rows for the AUTO product type:
WHERE (PRODUCT_TYPE = "AUTO");

OPERATIONS selection
The following can be used with MAP or TABLE to select rows with amounts greater than zero, only for update and delete operations:
FILTER (ON UPDATE, ON DELETE, amount > 0);

COLUMNS selection
The COLS and COLSEXCEPT options of the TABLE parameter allow selection of columns. Use COLS to select columns for extraction, and use COLSEXCEPT to select all columns except those designated by COLSEXCEPT, for example:
TABLE sales.tcustord, TARGET sales.tord, COLSEXCEPT (facility_number);

Data Selection WHERE Clause


- The WHERE clause is the simplest form of selection
- The WHERE clause appears on either the MAP or TABLE parameter and must be surrounded by parentheses
- The WHERE clause cannot:
  - perform arithmetic operations
  - refer to trail header and user token values
- Use the FILTER clause for more complex selections with built-in functions

Examples using the WHERE clause:
MAP sales.tcustord, TARGET sales.tord, WHERE (PRODUCT_AMOUNT > 10000);
MAP sales.tcustord, TARGET sales.tord, WHERE (PRODUCT_TYPE = "AUTO");


Data Selection WHERE Clause


WHERE can perform an evaluation for:

Element                Example
Columns                PRODUCT_AMT
Comparison operators   =, <>, >, <, >=, <=
Numeric values         -123, 5500.123
Literal strings        "AUTO", "Ca"
Field tests            @NULL, @PRESENT, @ABSENT
Conjunctive operators  AND, OR

Arithmetic operators and floating-point data types are not supported by WHERE.
Data Selection WHERE Clause Examples
Only rows where the state column has a value of CA are returned.
WHERE (STATE = "CA");

Only rows where the amount column has a value of NULL. Note that if amount was not part of the update, the result is false.
WHERE (AMOUNT = @NULL);

Only rows where the amount was part of the operation and has a value that is not NULL.
WHERE (AMOUNT = @PRESENT AND AMOUNT <> @NULL);

Only rows where the account identifier is greater than "CORP-ABC".
WHERE (ACCOUNT_ID > "CORP-ABC");


Selection FILTER Clause


- The FILTER clause provides complex evaluations to include or exclude data selection
- The FILTER clause appears on either the MAP or TABLE parameter and must be surrounded by parentheses
- With FILTER you can:
  - Deploy other GoldenGate built-in functions
  - Use multiple FILTERs on one statement
    - If any filter fails, the entire filter clause fails
  - Include multiple option clauses, for example (on insert/update)
  - Raise a user-defined error for exception processing

When multiple filters are specified for a given TABLE or MAP entry, the filters are executed until one fails or until all are passed. The failure of any filter results in a failure for all filters. Filters can be qualified with operation type so you can specify different filters for inserts, updates and deletes. The FILTER RAISEERROR option creates a user-defined error number if the filter clause is true. In the following example, error 9999 is generated when the BEFORE timestamp is earlier than the CHECK timestamp. This also selects only update operations. FILTER (ON UPDATE, BEFORE.TIMESTAMP < CHECK.TIMESTAMP, RAISEERROR 9999);


Data Selection FILTER Clause


Syntax
FILTER (<option> [, <option>] [, ...]) [, FILTER (<option> [, <option>] [, ...])] [, ...]

Where <option> is one of :


<column specification>
<field conversion function>
<ON INSERT | UPDATE | DELETE>
<IGNORE INSERT | UPDATE | DELETE>
<RAISEERROR <error number>>

ON INSERT | UPDATE | DELETE Specifically limits the filter to be executed on an insert, update or delete. More than one ON clause can be specified (for example, ON UPDATE, ON DELETE executes on updates and deletes but not inserts). IGNORE INSERT | UPDATE | DELETE Specifically ignores the specified type of operation. RAISEERROR <error num> RAISEERROR causes an error to be raised as if there was a database error in the map (RAISEERROR has no effect in the Extract program). In combination with REPERROR, RAISEERROR can be used to control what happens in the event that a filter is not passed (the operation can be discarded, posted to an exceptions table, reported, etc.).
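As noted above, RAISEERROR is usually paired with REPERROR in the Replicat parameter file so that records failing the filter are handled deliberately rather than abending the process. A hedged sketch, reusing the filter from this page (the DISCARD response is an illustrative choice):

```
-- Treat user-defined error 9999 as a discard rather than an abend
REPERROR (9999, DISCARD)
MAP SALES.ACCOUNT, TARGET SALES.ACCOUNT,
  FILTER (ON UPDATE, BEFORE.TIMESTAMP < CHECK.TIMESTAMP, RAISEERROR 9999);
```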

Data Selection FILTER Clause Examples


The following example includes rows where the price multiplied by the amount exceeds 10,000:
FILTER ((PRODUCT_PRICE*PRODUCT_AMOUNT)>10000);

The following example includes rows containing a string JOE:


FILTER (@STRFIND(NAME, "JOE")>0);

Why is the example above not constructed like the one below? The filter below will fail!
FILTER (NAME = "JOE");


Data Selection RANGE Function


- Divides workload into multiple, randomly distributed groups of data
- Guarantees the same row will always be processed by the same process
- Determines which group a row falls in by computing a hash against the primary key or user-defined columns

Syntax
@RANGE (<my range>, <total ranges> [, <column> [, ...]])

Example
TABLE SALES.ACCOUNT, FILTER (@RANGE (1,3));

@RANGE helps divide workload into multiple, randomly distributed groups of data, while guaranteeing that the same row will always be processed by the same process. For example, @RANGE can be used to split the workload of a heavily accessed table into different key ranges handled by different Replicat processes. The user specifies both the range that applies to the current process and the total number of ranges (generally the number of processes). @RANGE computes a hash value of all the columns specified, or if no columns are specified, the primary key columns of the source table. The remainder of the hash divided by the total number of ranges is compared with the ownership range to determine whether @RANGE returns true or false. Note that the total number of ranges will be adjusted internally to optimize even distribution across the number of ranges. Restriction: @RANGE cannot be used if primary key updates are performed on the database.


Data Selection RANGE Function Examples


For transaction volume beyond the capacity of a single Replicat, the example below shows three Replicat groups, each processing one-third of the data. Hashing each operation by primary key to a particular Replicat guarantees the original sequence of operations.

Replicat #1: MAP SALES.ACCOUNT, TARGET SALES.ACCOUNT, FILTER (@RANGE (1,3));
Replicat #2: MAP SALES.ACCOUNT, TARGET SALES.ACCOUNT, FILTER (@RANGE (2,3));
Replicat #3: MAP SALES.ACCOUNT, TARGET SALES.ACCOUNT, FILTER (@RANGE (3,3));

The example above demonstrates three Replicat processes, with each Replicat group processing one-third of the data in the GoldenGate trail based on the primary key.

Data Selection RANGE Function Examples


Two tables, REP and ACCOUNT, related by REP_ID, require three Replicats to handle the transaction volumes. By hashing the REP_ID column, related rows will always be processed by the same Replicat.

RMTTRAIL /ggs/dirdat/aa
TABLE SALES.REP, FILTER (@RANGE (1,3));
TABLE SALES.ACCOUNT, FILTER (@RANGE (1,3,REP_ID));

RMTTRAIL /ggs/dirdat/bb
TABLE SALES.REP, FILTER (@RANGE (2,3));
TABLE SALES.ACCOUNT, FILTER (@RANGE (2,3,REP_ID));

RMTTRAIL /ggs/dirdat/cc
TABLE SALES.REP, FILTER (@RANGE (3,3));
TABLE SALES.ACCOUNT, FILTER (@RANGE (3,3,REP_ID));

Each pair of TABLE statements selects one of three ranges, so the first trail carries the first range for both the REP table and the ACCOUNT table. The Replicats reading the other trails process the remaining ranges.


Column Mapping

Column Mapping - Overview


- GoldenGate provides the capability to map columns from one table to another
- Data can be transformed between dissimilar database tables
- Use COLMAP to map target columns from your source columns
- GoldenGate automatically matches source to target column names with USEDEFAULTS
- Mapping can be applied either when extracting or replicating data

Extract and Replicat provide the capability to transform data between two dissimilarly structured database tables or files. These features are implemented with the COLMAP clause in the TABLE and MAP parameters.

Data Type Conversions
Numeric fields are converted from one type and scale to match the type and scale of the target. If the scale of the source is larger than that of the target, the number is truncated on the right. If the target scale is larger than the source, the number is padded with zeros. Varchar and character fields can accept other character, varchar, group, and datetime fields, or string literals enclosed in quotes. If the target character field is smaller than that of the source, the character field is truncated on the right.


Column Mapping - Syntax


Syntax for the COLMAP clause:
MAP SOURCE.TABLE, TARGET TARGET.TABLE,
COLMAP (
  [USEDEFAULTS,]
  <target field> = <source expression>
  [, <target field> = <source expression>] [, ...]
);

For COLMAP:
<target field> is the name of a column/field in the target table.
<source expression> is one of the following:
- A numeric constant, such as 123
- A string constant, such as "ABCD"
- The name of a source field or column, such as DATE1.YY
- A function expression, such as @STREXT (COL1, 1, 3)
Note: Function expressions are used to manipulate data, and begin with the at sign (@).

Default Mapping
When you specify USEDEFAULTS, the process maps columns in the source table to columns in the target with the same name. This can be useful when the source and target definitions are similar but not identical.
Note: If you set up global column-mapping rules with COLMATCH parameters, you can map columns with different names to each other using default mapping.


Column Mapping Example


MAP HR.CONTACT, TARGET HR.PHONE,
COLMAP (USEDEFAULTS,
  NAME = CUST_NAME,
  PHONE_NUMBER = @STRCAT( "(", AREA_CODE, ")", PH_PREFIX, "-", PH_NUMBER ) );

This example:
- Moves the HR.CONTACT CUST_NAME column value to the HR.PHONE NAME column
- Concatenates the HR.CONTACT AREA_CODE, PH_PREFIX and PH_NUMBER columns with parenthesis and hyphen literals to derive the PHONE_NUMBER column value
- Automatically maps other HR.CONTACT columns to the HR.PHONE columns that have the same name
Column Mapping Building History
This example uses special values to build history of operations data
INSERTALLRECORDS
MAP SALES.ACCOUNT, TARGET REPORT.ACCTHISTORY,
COLMAP (USEDEFAULTS,
  TRAN_TIME = @GETENV("GGHEADER","COMMITTIMESTAMP"),
  OP_TYPE = @GETENV("GGHEADER","OPTYPE"),
  BEFORE_AFTER_IND = @GETENV("GGHEADER","BEFOREAFTERINDICATOR"));

INSERTALLRECORDS causes Replicat to insert every change operation made to a record as a new record in the database. The initial insert and subsequent updates and deletes are maintained as point-in-time snapshots. COLMAP uses the @GETENV function to get historical data from the GoldenGate trail header. TRAN_TIME picks up the commit timestamp for the date of the transaction; OP_TYPE stores whether it is an insert, update, or delete operation; and BEFORE_AFTER_IND indicates whether it is storing a before or after image.

Functions

Functions - Data Transformation


- GoldenGate provides the capability to transform columns by using a set of built-in functions
- Transformation functions can be applied either for Extract or Replicat
- If you require more, you also have the ability to call your own logic through user exits

Functions - Overview
Using column conversion functions, you can:
- Perform string and number conversion
- Extract portions of strings or concatenate columns
- Compare strings or numbers
- Perform a variety of date mappings
- Use single or nested IF statements to evaluate numbers, strings, and other column values to determine the appropriate value and format for target columns

Functions are identified with the @ prefix.


Functions Example
MAP SALES.ACCOUNT, TARGET REPORT.ACCOUNT,
COLMAP (USEDEFAULTS,
  TRANSACTION_DATE = @DATE ("YYYY-MM-DD", "YY", YEAR, "MM", MONTH, "DD", DAY),
  AREA_CODE = @STREXT (PHONE-NO, 1, 3),
  PHONE_PREFIX = @STREXT (PHONE-NO, 4, 6),
  PHONE_NUMBER = @STREXT (PHONE-NO, 7, 10) );

The example uses @DATE to derive the TRANSACTION_DATE by converting source date columns YEAR in the format YY, MONTH in the format MM, and DAY in the format DD to a target date with the format YYYY-MM-DD. The syntax for the @DATE function is: @DATE (<out descriptor>, <in descriptor>, <source col> [, <in descriptor>, <source col>] [, ...]) The example uses @STREXT to extract portions of a string field into three different columns. It takes the first through the third characters from the source's PHONE-NO to populate the target's AREA_CODE, characters 4 through 6 for the PHONE_PREFIX, and 7 through 10 for the PHONE_NUMBER. The syntax for the @STREXT function is: @STREXT (<column or literal string>, <begin position>, <end position>)
Functions Performing Tests on Column Values
Function   Description
CASE       Allows the user to select a value depending on a series of value tests
EVAL       Allows the user to select a value depending on a series of independent tests
IF         Selects one of two values depending on whether a conditional statement returns TRUE or FALSE
COLSTAT    Returns whether a column value is missing, NULL or invalid
COLTEST    Tests whether a column value is present, missing, NULL or invalid
VALONEOF   Returns true if a column contains one of a list of values


Functions that perform tests These functions select a value based on tests against the current value.
Discussion Points: IF Function
Syntax: @IF (<conditional expression>, <value if expression is non-zero>, <value if expression is zero>) Non-zero is considered true, and zero (0) is considered false.

1. What IF clause would you use to set the target column AMOUNT_COL to AMT only if AMT is greater than zero, and otherwise return zero? AMOUNT_COL = @IF (AMT > 0, AMT, 0)

Discussion Points: IF Function


2. What IF clause would you use to set ORDER_TOTAL to PRICE*QUANTITY if both PRICE and QUANTITY are greater than zero, otherwise return zero? ORDER_TOTAL = @IF (PRICE > 0 AND QUANTITY > 0, PRICE * QUANTITY, 0)


Functions Working with Dates


Function   Description
DATE       Returns a date from a variety of sources in a variety of output formats
DATEDIFF   Returns the difference between two dates or times
DATENOW    Returns the current date and time

Functions that work with dates These functions return dates in various formats and calculate the difference between two dates.
Discussion Points: DATE Function
Syntax @DATE (<output format>, <input format>, <source column> [, <input format>, <source column>] [,...]) Supported input and output formats are in the notes. 1. What DATE expression would you use to convert year, month and day columns into a date? date_col = @DATE ("YYYY-MM-DD", "YY", date1_yy, "MM", date1_mm, "DD", date1_dd)

Formats that are supported for both input and output are:
CC  century
YYYY  four-digit year
YY  two-digit year
MMM  alphanumeric month, such as APR
MM  numeric month
DDD  numeric day of the year (e.g. 001, 365)
DD  numeric day of the month
HH  hour
MI  minute
SS  seconds
FFFFFF  fraction (up to microseconds)
DOW0  numeric day of the week (Sunday = 0)
DOW1  numeric day of the week (Sunday = 1)
DOWA  alphanumeric day of the week (e.g. SUN)
JUL  Julian day
JTSGMT and JTS  Julian timestamp
JTSLCT  Julian timestamp that is already local time, or to keep local time when converting to a Julian timestamp
STRATUS  Application timestamp
CDATE  C timestamp in seconds since the Epoch

Formats supported for input only are:
TTS  NonStop 48-bit timestamp
PHAMIS  Application date format

Calculating the century: When a two-digit year is supplied but a four-digit year is required in the output, the system can calculate the century (as 20 if the year is < 50), it can be hard-coded, or the @IF function can be used to set a condition.
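The century rule above can be made explicit in a column mapping. A hedged sketch of the @IF approach — the column names are illustrative assumptions:

```
-- Supply century 20 for two-digit years below 50, otherwise 19
date_col = @DATE ("YYYY-MM-DD", "CC", @IF (yy_col < 50, 20, 19),
                  "YY", yy_col, "MM", mm_col, "DD", dd_col)
```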
Discussion Points: DATE Function
2. What DATE expression would you use to convert a numeric column stored as YYYYMMDDHHMISS to a Julian timestamp? julian_ts_col = @DATE ("JTS", "YYYYMMDDHHMISS", numeric_date)


Functions Working with Strings and Numbers


Function   Description
COMPUTE    Returns the result of an arithmetic expression
NUMBIN     Converts a binary string into a number
NUMSTR     Converts a string into a number
STRCAT     Concatenates two or more strings
STRCMP     Compares two strings to determine if they are equal, or if the first is less or greater than the second
STREQ      Tests to see if two strings are equal. Returns 1 for equal and 0 if not equal
STREXT     Extracts selected characters from a string
STRFIND    Finds the occurrence of a string within a string
STRLEN     Returns the length of a string

Functions that work with strings and numbers These functions convert, compare, extract, trim and otherwise manipulate strings and numbers.
Functions Working with Strings and Numbers (cont.)
- STRLTRIM: Trims leading spaces in a column
- STRNCAT: Concatenates one or more strings up to a specified number of characters per string
- STRNCMP: Compares two strings up to a certain number of characters
- STRNUM: Converts a number into a string, with justification and zero-fill options
- STRRTRIM: Trims trailing spaces in a column
- STRSUB: Substitutes one string for another within a column
- STRTRIM: Trims both leading and trailing spaces in a column
- STRUP: Changes a string to uppercase


Discussion Points: STRCAT Function


Syntax @STRCAT (<string1>, <string2> [, ...]) The strings can be column names or literal values in quotes.

1. What STRCAT expression would you use to concatenate the columns LASTNAME and FIRSTNAME, separated by a semicolon? NAME = @STRCAT (LASTNAME, ";" ,FIRSTNAME)

Discussion Points: STRCAT Function


2. What STRCAT expression would you use to concatenate a country code, area code and local phone number into an international phone number with hyphens between the components? INTL_PHONE = @STRCAT (COUNTRY_CODE, "-", AREA_CODE, "-", LOCAL_PHONE)


Discussion Point: STREXT Function


Syntax @STREXT (<column or literal string>, <begin position>, <end position>) 1. What STREXT expressions would you use to split a long-distance phone number into three columns (area code, prefix, phone no)?

AREA_CODE = @STREXT (PHONE, 1, 3), PREFIX = @STREXT (PHONE, 4, 6), PHONE_NO = @STREXT (PHONE, 7, 10)

Functions - Other
- BINARY: Keeps source data in its original binary format in the target when the source column is defined as character
- BINTOHEX: Converts a binary string to a hexadecimal string
- GETENV: Returns information on the GoldenGate environment, trail file header, trail record header, last replicated operation and lag; can retrieve the commit timestamp in local time or GMT
- GETVAL: Extracts parameters from a stored procedure as input to a FILTER or COLMAP clause
- HEXTOBIN: Converts a hexadecimal string to a binary string
- HIGHVAL, LOWVAL: Emulate COBOL functions that allow you to set a numeric limit on string or binary datatypes
- RANGE: Divides the workload into multiple groups of data, while ensuring the same row will always be sent to the same process; RANGE uses a hash against the primary key or user-defined columns
- TOKEN: Maps environmental values that are stored in the user token area to the target column

GETENV with Open Systems trails GoldenGate for Open Systems version 10 and beyond writes a trail file header. GoldenGate for NonStop version 10 does not write a file header but can retrieve file header values from Open Systems trails using GETENV with the GGFILEHEADER option. See the GoldenGate for Mainframe v10 Reference Guide. Further information GETENV and TOKEN are discussed further in User Tokens. GETVAL is discussed in SQLEXEC. RANGE is discussed in Data Selection.
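As a sketch of how @RANGE (discussed further in Data Selection) divides a workload, three Replicat groups could each process one hash bucket of the same source table. The table name and three-way split here are illustrative only:

-- Parameter file for Replicat group 1
MAP SALES.ACCOUNT, TARGET SALES.ACCOUNT, FILTER (@RANGE (1, 3));
-- Parameter file for Replicat group 2
MAP SALES.ACCOUNT, TARGET SALES.ACCOUNT, FILTER (@RANGE (2, 3));
-- Parameter file for Replicat group 3
MAP SALES.ACCOUNT, TARGET SALES.ACCOUNT, FILTER (@RANGE (3, 3));

Because the range is computed as a hash of the primary key, every change to a given row always lands in the same group, preserving operation order per row.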


SQLEXEC

SQLEXEC - Overview
SQLEXEC advantages: Extends GoldenGate capabilities by enabling Extract and Replicat to communicate with the application database through SQL queries or stored procedures Extends data integration beyond what can be done with GoldenGate functions

The SQLEXEC option enables both Extract and Replicat to communicate with the user's database, either via SQL queries or stored procedures. SQLEXEC can be used to interface with a virtually unlimited set of the functionality supported by the underlying database.
Stored Procedure Capabilities
Stored procedures extend the functionality of popular databases such as Oracle, DB2, SQL Server and Teradata. Users write stored procedures to perform custom logic, typically involving the database in some way, using languages such as Oracle's PL/SQL and Microsoft's Transact-SQL. Extract and Replicat enable stored procedure capabilities to be leveraged for Oracle, SQL Server and DB2. Tying together industry-standard stored procedure languages with extraction and replication functions brings a familiar, powerful interface to virtually unlimited functionality. Stored procedures can also be used as an alternative method for inserting data into the database, aggregating data, denormalizing or normalizing data, or any other function that requires database operations as input. Extract and Replicat can support stored procedures that only accept input, or procedures that produce output as well. Output parameters can be captured and used in subsequent map and filter operations.
SQL Query Capabilities
In addition to stored procedures, Extract and Replicat can execute specified database queries that either return results (SELECT statements) or update the database (INSERT, UPDATE, and DELETE statements).


SQLEXEC Basic Functionality


Execute a stored procedure or SQL query using the SQLEXEC clause of the TABLE or MAP parameter Optionally extract output parameters from the stored procedure or SQL query as input to a FILTER or COLMAP clause using the @GETVAL function Use SQLEXEC at the root level (without input/output parameters) to call a stored procedure, run a SQL query or issue a database command

Before defining the SQLEXEC clause, a database logon must be established. This is done via the SOURCEDB or USERID parameter for Extract, and the TARGETDB or USERID parameter for Replicat. When using SQLEXEC, a mapping between one or more input parameters and source columns or column functions must be supplied. When supplying at least one SQLEXEC entry for a given Replicat map entry, a target table is not required.
SQLEXEC - DBMS and Data Type Support
SQLEXEC is available for the following databases: Oracle, SQL Server, Teradata, Sybase, DB2, and ODBC.
The stored procedure interface supports the following data types for input and output parameters:
- Oracle: CHAR, VARCHAR2, DATE, all numeric types, LOBs up to 200 bytes
- DB2: CHAR, VARCHAR, DATETIME, all numeric types, BLOB data types
- SQL Server / Sybase / Teradata v12+: CHAR, VARCHAR, DATE, all numeric types

Database and Data Type Support The stored procedure interface for Oracle currently supports the following input and output parameter types: CHAR


VARCHAR2; DATE; all available numeric data types; LOB data types (BLOB and CLOB) where the length is less than 200 bytes; and the ANSI equivalents of the above types.
The stored procedure interface for SQL Server currently supports the following input and output parameter types: CHAR; VARCHAR; DATETIME; all available numeric data types; and image and text data types where the length is less than 200 bytes. TIMESTAMP parameter types are not supported natively, but you can specify other data types for parameters and convert the data to TIMESTAMP format within the stored procedure.
The stored procedure interface for DB2 currently supports the following input and output parameter types: CHAR; VARCHAR; DATE; all available numeric data types; and BLOB data types.
The stored procedure interface for Sybase currently supports all data types except TEXT, IMAGE, and UDT.
The stored procedure interface for Teradata version 12 and later supports CHAR, VARCHAR, DATE and all available numeric data types.
Database Transaction Considerations
When specifying a stored procedure or query that updates the database, you must supply the DBOP option in the SQLEXEC clause. Doing so ensures that any database updates are committed to the database properly; otherwise, database operations can potentially be rolled back. As with direct table updates, database operations initiated within the stored procedure will be committed in the same context as the original transaction.
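As an illustrative sketch of the DBOP behavior just described (the procedure name write_audit and its parameter are hypothetical), a SQLEXEC clause that calls an updating procedure would include DBOP so that its changes are committed with the replicated transaction:

MAP HR.ACCOUNT, TARGET HR.NEWACCT,
SQLEXEC (SPNAME write_audit, PARAMS (acct_param = account_id), DBOP),
COLMAP (USEDEFAULTS);

Without DBOP, the procedure's inserts could be rolled back rather than committed with the original transaction.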


SQLEXEC Using with Lookup Stored Procedure


The following stored procedure performs a query to return a description given a code:
CREATE OR REPLACE PROCEDURE LOOKUP
  (CODE_PARAM IN VARCHAR2, DESC_PARAM OUT VARCHAR2)
IS
BEGIN
  SELECT DESC_COL
  INTO DESC_PARAM
  FROM LOOKUP_TABLE
  WHERE CODE_COL = CODE_PARAM;
END;

Table lookup using a stored procedure Mapping can be augmented with a simple database lookup procedure in Extract or Replicat. This example illustrates the stored procedure to do a table lookup.
SQLEXEC Using with Lookup Stored Procedure (contd)
The following parameter entry: Maps data from the ACCOUNT table to the NEWACCT table When processing any rows from ACCOUNT, Extract performs the LOOKUP stored procedure prior to executing the column map Maps values returned in desc_param to the newacct_val column using the @GETVAL function
MAP HR.ACCOUNT, TARGET HR.NEWACCT,
SQLEXEC (SPNAME lookup, PARAMS (code_param = account_code)),
COLMAP (USEDEFAULTS,
  newacct_id = account_id,
  newacct_val = @GETVAL (lookup.desc_param));

Using the lookup procedure This example illustrates how a stored procedure can be used for mapping in a Replicat parameter file.


SQLEXEC Using with SQL Query


The following example (for Oracle) performs a SQL query directly to return the description. @GETVAL is used to retrieve the return parameter:
MAP HR.ACCOUNT, TARGET HR.NEWACCT,
SQLEXEC (ID lookup,
  QUERY "select desc_col desc_param from lookup_table where code_col = :code_param",
  PARAMS (code_param = account_code)),
COLMAP (USEDEFAULTS,
  newacct_id = account_id,
  newacct_val = @GETVAL (lookup.desc_param));

Table lookups using SQL queries The example parameter file entries illustrate a mapping that uses a simple SQL query to look up the account description.

SQLEXEC - Syntax within a TABLE or MAP Statement


When the SQLEXEC parameter is used within a TABLE or MAP statement, the syntax is:
SQLEXEC (
  { SPNAME <sp name> | QUERY "<sql query>" }
  [, ID <logical name>]
  { PARAMS <param spec> | NOPARAMS }
  [, BEFOREFILTER | AFTERFILTER]
  [, DBOP]
  [, EXEC <frequency>]
  [, ALLPARAMS <option>]
  [, PARAMBUFSIZE <num bytes>]
  [, MAXVARCHARLEN <num bytes>]
  [, TRACE <option>]
  [, ERROR <action>] )

The SQLEXEC option is specified as an option in TABLE and MAP statements within EXTRACT and REPLICAT parameter files. Use either SPNAME (for stored procedure) or QUERY (for SQL query). SPNAME <sp name> Is the name of the stored procedure in the database. This name can be used when extracting values from the procedure.


QUERY "<sql query>"
Is a query to execute against the database. The query must be a legitimate SQL statement for the database in question.
ID <logical name>
ID is required with the QUERY parameter (to reference the column values returned by the query) or when you invoke several instances of a stored procedure (to reference each instance separately, for example, to invoke the same stored procedure to populate two separate target columns).
PARAMS <param spec>
<param spec> is [OPTIONAL | REQUIRED] <sp param name> = <source column> | <source function>. For a stored procedure, <sp param name> is the name of any parameter in the stored procedure or query that can accept input. For an Oracle SQL query, <sp param name> is the name of any input parameter in the query without the leading colon. For example, if the parameter :param1 appears in the query, it is specified as param1 in the PARAMS clause.
NOPARAMS
Specifies that there are no parameters.
BEFOREFILTER
Causes the stored procedure or query to execute before applying filters to a particular map. By default, stored procedures and queries are executed after filtering logic has been applied.
AFTERFILTER
Causes the stored procedure or query to execute after applying filters to a particular map (the default). This enables you to skip stored procedure or query processing unless the map is actually executed.
DBOP
Use DBOP if the stored procedure or query updates the database and you want the process to execute transaction commit logic.
EXEC <frequency>
Determines the frequency of execution for the stored procedure or query:
- MAP (the default) executes the stored procedure or query once for each source-target table map for which it is specified. MAP renders the results invalid for any subsequent maps that have the same source table. For example, if a source table is being synchronized with more than one target table, the results would only be valid for the first source-target map.
- ONCE executes the stored procedure or query once during the course of a GoldenGate run, upon the first invocation of the associated FILE or MAP statement. The results remain valid for as long as the process remains running.
- TRANSACTION executes the stored procedure or query once per source transaction. The results remain valid for all operations of the transaction.
- SOURCEROW executes the stored procedure or query once per source row operation. Use this option when you are synchronizing a source table with more than one target table, so that the stored procedure or query is invoked for each source-target mapping.
ALLPARAMS {REQUIRED | OPTIONAL}
REQUIRED specifies that all parameters must be present in order for the stored procedure or query to execute. OPTIONAL (the default) enables the stored procedure or query to execute without all parameters present.
PARAMBUFSIZE <num bytes>
By default, each stored procedure or query is assigned 10,000 bytes of space for input and output. For stored procedures requiring more room, specify an appropriate amount of buffer space.
MAXVARCHARLEN <num bytes>
Determines the maximum length allocated for any output parameter in the stored procedure or query. The default is 200 bytes.
TRACE [ALL | ERROR]
If TRACE or TRACE ALL is specified, the input and output parameters for each invocation of the stored procedure or query are output to the report file. If TRACE ERROR is specified, parameters are output only after an error occurs in the stored procedure or query.
ERROR <action>
Requires one of the following arguments:
- IGNORE: the database error is ignored and processing continues.
- REPORT: the database error is written to a report.
- RAISE: the database error is handled just as a table replication error.
- FINAL: the database error is handled as a table replication error, but no additional queries are processed.
- FATAL: database processing abends.
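As a sketch of the EXEC SOURCEROW option described above (the table, procedure and column names are illustrative), SOURCEROW keeps the lookup result valid for each mapping when one source table feeds two targets:

MAP HR.ACCOUNT, TARGET HR.NEWACCT1,
SQLEXEC (SPNAME lookup, PARAMS (code_param = account_code), EXEC SOURCEROW),
COLMAP (USEDEFAULTS, newacct_val = @GETVAL (lookup.desc_param));
MAP HR.ACCOUNT, TARGET HR.NEWACCT2,
COLMAP (USEDEFAULTS, newacct_val = @GETVAL (lookup.desc_param));

With the default EXEC MAP, the second mapping of the same source table would not see valid results from the procedure.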


SQLEXEC - Syntax as a Standalone Statement


When a SQLEXEC parameter is used at the root level, the syntax is:
SQLEXEC { "call <sp name> ()" | "<sql query>" | "<database command>" }
[EVERY <n> {SECONDS | MINUTES | HOURS | DAYS}]
[ONEXIT]

Examples:
SQLEXEC "call prc_job_count ()"
SQLEXEC "select x from dual"
SQLEXEC "call prc_job_count ()" EVERY 30 SECONDS
SQLEXEC "call prc_job_count ()" ONEXIT
SQLEXEC "SET TRIGGERS OFF"

"call <sp name> ()"
Specifies the name of a stored procedure to execute. The statement must be enclosed within double quotes. Example: SQLEXEC "call prc_job_count ()"
"<sql query>"
Specifies the query to execute. Enclose the query within quotes; for a multi-line query, use quotes on each line. For best results, type a space after each begin quote and before each end quote (or at least before each end quote). Example: SQLEXEC "select x from dual"
"<database command>"
Executes a database command.
EVERY <n> {SECONDS | MINUTES | HOURS | DAYS}
Causes a standalone stored procedure or query to execute at defined intervals, for example: SQLEXEC "call prc_job_count ()" EVERY 30 SECONDS
ONEXIT
Executes the SQL when the Extract or Replicat process stops gracefully.


SQLEXEC Error Handling


There are two types of potential errors that must be considered when implementing SQLEXEC:
1. An error is raised by the database (either by the query or the stored procedure).
2. The procedure map requires a column that is missing from the source database operation (likely in an update statement).
When an error is raised by the database, error handling allows the error to be ignored or reported. These options are controlled with the ERROR option in the SQLEXEC clause.

SQLEXEC Using the GETVAL Function to Get Result


The GETVAL function supplies a mechanism to: Extract stored procedure and query output parameters Subsequently map them or use in a COLMAP or FILTER clause Syntax
@GETVAL (<name>.<parameter>)

Example
MAP schema.tab1, TARGET schema.tab2, SQLEXEC (SPNAME lookup, PARAMS (param1 = srccol)), COLMAP (USEDEFAULTS, targcol = @GETVAL (lookup.param1));

<name> The name of the stored procedure or query. When using SQLEXEC to execute the procedure or query, valid values are as follows: Queries: Use the logical name specified with the ID option of the SQLEXEC clause. ID is a required SQLEXEC argument for queries. Stored procedures: Use one of the following, depending on how many times the procedure is to be executed within a TABLE or MAP statement: - For multiple executions, use the logical name defined by the ID clause of the SQLEXEC statement. ID is required for multiple executions of a procedure. - For a single execution, use the actual stored procedure name. <parameter> Valid values are one of the following:


- The name of the parameter in the stored procedure or query from which the data will be extracted and passed to the column map.
- RETURN_VALUE, if extracting values returned by a stored procedure or query.
Whether or not the parameter value can be extracted depends on:
1. The stored procedure or query executing successfully.
2. The stored procedure or query results not yet having expired. Rules for determining expiration are defined by the SQLEXEC EXEC option.
When a value cannot be extracted, the @GETVAL function results in a column-missing condition; usually this means that the column is not mapped. You can also use the @COLTEST function to test the result of the @GETVAL function to see if it is missing, and map an alternative value if desired.
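Building on the note above, this sketch uses @COLTEST to substitute a default when the @GETVAL result is missing; the table, procedure, column names and the "UNKNOWN" default are hypothetical:

MAP HR.ACCOUNT, TARGET HR.NEWACCT,
SQLEXEC (SPNAME lookup, PARAMS (code_param = account_code)),
COLMAP (USEDEFAULTS,
  newacct_val = @IF (@COLTEST (@GETVAL (lookup.desc_param), MISSING), "UNKNOWN", @GETVAL (lookup.desc_param)));

If the lookup fails or its results have expired, the target column is mapped to "UNKNOWN" instead of being left unmapped.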

Macros

Macros - Overview
Macros enable easier and more efficient building of parameters Write once and use many times Consolidate multiple statements Eliminate the need for redundant column specifications Use macros to invoke other macros Create macro libraries and share across parameter files

By using GoldenGate macros in parameter files you can easily configure and reuse parameters, commands, and functions. As detailed in the slide, you can use macros for a variety of operations to enable easier and more efficient building of parameters. GoldenGate macros work with the following parameter files: Manager, Extract, and Replicat. Note: Do not use macros to manipulate data for tables being processed by a data pump Extract in pass-through mode.


Macros - Creating
Macros can be defined in any parameter file or library Macro statements include the following Macro name Optional parameter list Macro body Syntax
MACRO #<macro name>
PARAMS (#<param1>, #<param2>, ...)
BEGIN
<macro body>
END;

The macro and parameter identifier # can be changed to an alternative value, for example: MACROCHAR $

Syntax
- <macro name> is the name of the macro and must begin with the # character, as in #macro1. If the # macro character is used elsewhere in the parameter file, such as in a table name, you can change it to something else with the MACROCHAR parameter. Macro names are not case-sensitive.
- PARAMS (<p1>, <p2>, ...) describes each of the parameters to the macro. Names must begin with the macro character, such as #param1. When the macro is invoked, it must include a value for each parameter named in the PARAMS statement. Parameter names are optional and not case-sensitive.
- BEGIN indicates the beginning of the body of the macro and must be specified before the macro body.
- <macro body> represents one or more statements to be used as parameter file input. It can include simple parameter statements, such as COL1 = COL2; more complex statements that include parameters, such as COL1 = #val2; or invocations of other macros, such as #colmap(COL1, #sourcecol).
- END ends the macro definition.


Macros - Invoking
Reference the macro and parameters anywhere you want the macro to be invoked
EXTRACT EXSALES
MACRO #make_date
PARAMS (#year, #month, #day)
BEGIN
@DATE ("YYYY-MM-DD", "CC", @IF (#year < 50, 20, 19), "YY", #year, "MM", #month, "DD", #day)
END;
MAP SALES.ACCT, TARGET REPORT.ACCOUNT,
COLMAP (
  TARGETCOL1 = SOURCECOL1,
  Order_Date = #make_date (Order_YR, Order_MO, Order_DAY),
  Ship_Date = #make_date (Ship_YR, Ship_MO, Ship_DAY)
);

Invoking Macros The example above defines a macro named #make_date and invokes it twice, with each invocation sending a different set of source column values to determine the target column values. Note that the order and ship dates are populated as a result of calling the #make_date macro.
Macros - Example
Consolidating Multiple Parameters
Define the macro:
MACRO #option_defaults
BEGIN
GETINSERTS
GETUPDATES
GETDELETES
INSERTDELETES
END;

Invoke the macro:
#option_defaults ()
IGNOREUPDATES
MAP SALES.SRCTAB, TARGET SALES.TARGTAB;
#option_defaults ()
MAP SALES.SRCTAB2, TARGET SALES.TARGTAB2;

Reusing parameter sets Another use of macros is to create a set of frequently used commands. For example, GETINSERTS, GETUPDATES, GETDELETES, and INSERTDELETES may be referenced as a macro within multiple MAP statements as shown in the slide.


Macros Example (contd)


The macro expands to the following:
GETINSERTS
GETUPDATES
GETDELETES
INSERTDELETES
IGNOREUPDATES
MAP SALES.SRCTAB, TARGET SALES.TARGTAB;
GETINSERTS
GETUPDATES
GETDELETES
INSERTDELETES
MAP SALES.SRCTAB2, TARGET SALES.TARGTAB2;

Note that the macro's result is altered by the IGNOREUPDATES parameter for the first MAP statement.
Macros Libraries
Macros can be built in a library and included in your parameter file
EXTRACT EXTACCT INCLUDE /ggs/dirprm/macro.lib

The listing of an included macro library can be suppressed:
NOLIST
INCLUDE /ggs/dirprm/large.lib
LIST

Macro libraries To use a macro library, use the INCLUDE parameter at the beginning of a parameter file. The syntax is: INCLUDE <macro library file name> You can toggle the listing of library output by using the LIST and NOLIST parameters.
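A macro library is simply a text file of MACRO definitions. As a sketch reusing the #option_defaults macro from the earlier example (the file name is illustrative), the library file /ggs/dirprm/macro.lib could contain:

MACRO #option_defaults
BEGIN
GETINSERTS
GETUPDATES
GETDELETES
INSERTDELETES
END;

and a parameter file would then include and invoke it:

EXTRACT EXTACCT
INCLUDE /ggs/dirprm/macro.lib
#option_defaults ()
MAP SALES.SRCTAB, TARGET SALES.TARGTAB;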


Macros Expansion
Macro Processor enables tracing of macro expansion with the CMDTRACE option Syntax
CMDTRACE [ ON | OFF | DETAIL ]

Default is OFF Example


EXTRACT EXTACCT
INCLUDE /ggs/dirprm/macro.lib
CMDTRACE ON
MAP SALES.ACCOUNT, TARGET REPORT.ACCOUNT_HISTORY,
COLMAP (USEDEFAULTS, #maptranfields ());

Macro expansion The macro processor enables tracing of macro expansion for debugging purposes via the CMDTRACE parameter. When CMDTRACE is enabled, the macro processor displays macro expansion steps in the process's report file. The ON option enables tracing, OFF disables it, and DETAIL produces additional detail.


User Tokens

User Tokens - Overview


GoldenGate provides the ability to store environmental values in the user token area of the GoldenGate record header.
Set token values through a TABLE TOKENS clause and @GETENV functions, for example:
TABLE SALES.PRODUCT, TOKENS (
  TKN1 = @GETENV ("GGENVIRONMENT", "OSUSERNAME"),
  TKN2 = @GETENV ("GGHEADER", "COMMITTIMESTAMP"));
Use token values to populate target columns through a MAP COLMAP clause and @TOKEN functions, for example:
MAP SALES.PRODUCT, TARGET SALES.PRODUCT_HISTORY,
COLMAP (USEDEFAULTS,
  OSUSER = @TOKEN ("TKN1"),
  TRANSTIME = @TOKEN ("TKN2"));

Saving user tokens Use the TOKENS option of the Extract TABLE parameter to define a user token and associate it with data. Tokens enable you to extract and store data within the user token area of a trail record header. You can set tokens to values returned by the @GETENV function (for example, values from the GoldenGate header or environment). Using user tokens You can use token data in column maps, stored procedures called by SQLEXEC, or macros. For example, use the @TOKEN function in the COLMAP clause of a Replicat MAP statement to map a token to a target column.


User Tokens - Environmental Values Available to @GETENV


Syntax: @GETENV (<option> [, <return value>])
Example: @GETENV ("GGENVIRONMENT", "HOSTNAME")

Options and what they return values for/from:
General:
- LAG: lag (in the unit specified)
- LASTERR: last failed operation
- JULIANTIMESTAMP: Julian timestamp
- RECSOUTPUT: number of records written to trail
GoldenGate:
- GGENVIRONMENT: GoldenGate environment
- GGFILEHEADER: trail file header
- GGHEADER: trail record header
- RECORD: trail record location
Database:
- DBENVIRONMENT: database environment
- TRANSACTION: source transaction
Operating system:
- OSVARIABLE: OS environmental variable

Environmental values available with @GETENV
Note: If a given database, operating system, or GoldenGate version does not provide information that relates to a given token, a NULL value will be returned.
@GETENV (LAG, <unit>)
- SEC: Returns the lag in seconds. This is the default when a unit is not explicitly provided for LAG.
- MSEC: Returns the lag in milliseconds.
- MIN: Returns the lag in minutes.
@GETENV (LASTERR, <return value>)
- DBERRNUM: Returns the database error number associated with the failed operation.
- DBERRMSG: Returns the database error message associated with the failed operation.
- OPTYPE: Returns the operation type that was attempted.
- ERRTYPE: Returns the type of error: DB (for database errors) or MAP (for mapping errors).
@GETENV (JULIANTIMESTAMP)
@GETENV (RECSOUTPUT)
@GETENV (GGENVIRONMENT, <return value>)
- DOMAINNAME (Windows only): Returns the domain name associated with the user that started the process.
- GROUPDESCRIPTION: The description of the group (if any) taken from the checkpoint file.
- GROUPNAME: Returns the name of the process group.
- GROUPTYPE: Returns the type of process, either EXTRACT or REPLICAT.


- HOSTNAME: Returns the name of the system running the Extract or Replicat process.
- OSUSERNAME: Returns the operating system user name that started the process.
- PROCESSID: The process ID that is assigned by the operating system.
@GETENV (GGHEADER, <return value>)
- BEFOREAFTERINDICATOR: Returns BEFORE (before image) or AFTER (after image).
- COMMITTIMESTAMP: Returns the transaction timestamp (when committed) in the format YYYY-MM-DD HH:MI:SS.FFFFFF.
- LOGPOSITION: Returns the sequence number in the data source.
- LOGRBA: Returns the relative byte address in the data source.
- OBJECTNAME | TABLENAME: Returns the table name or object name (if a sequence).
- OPTYPE: Returns the type of operation: INSERT, UPDATE, DELETE, ENSCRIBE COMPUPDATE, SQL COMPUPDATE, PK UPDATE, TRUNCATE, TYPE n.
- RECORDLENGTH: Returns the record length in bytes.
- TRANSACTIONINDICATOR: Returns the transaction indicator: BEGIN, MIDDLE, END, WHOLE.
@GETENV (GGFILEHEADER, <return_value>)
- COMPATIBILITY: Returns the GoldenGate compatibility level of the trail file. 1 means that the trail file is of GoldenGate version 10.0 or later, which supports file headers that contain file versioning information. 0 means that the trail file is of a GoldenGate version older than 10.0; file headers are not supported in those releases, and the 0 value is used for backward compatibility to those GoldenGate versions.
Information about the trail file:
- CHARSET: Returns the global character set of the trail file, for example WCP1252-1.
- CREATETIMESTAMP: Returns the time that the trail was created, in local GMT Julian time in INT64.
- FILENAME: Returns the name of the trail file. Can be an absolute or relative path.
- FILEISTRAIL: Returns a True/False flag indicating whether the trail file is a single file (such as one created for a batch run) or a sequentially numbered file that is part of a trail for online, continuous processing.
- FILESEQNO: Returns the sequence number of the trail file, without any leading zeros.
- FILESIZE: Returns the size of the trail file when the file is full and the trail rolls over.
- FIRSTRECCSN: Returns the commit sequence number (CSN) of the first record in the trail file. NULL until the trail file is completed.
- LASTRECCSN: Returns the commit sequence number (CSN) of the last record in the trail file. NULL until the trail file is completed.
- FIRSTRECIOTIME: Returns the time that the first record was written to the trail file. NULL until the trail file is completed.


- LASTRECIOTIME: Returns the time that the last record was written to the trail file. NULL until the trail file is completed.
- URI: Returns the universal resource identifier of the process that created the trail file, in the format: <host_name>:<dir>[:<dir>][:<dir_n>]<group_name>
- URIHISTORY: Returns a list of the URIs of processes that wrote to the trail file before the current process.
Information about the GoldenGate process that created the trail file:
- GROUPNAME: Returns the group name associated with the Extract process that created the trail. The group name is that which was given in the ADD EXTRACT command, for example ggext.
- DATASOURCE: Returns the data source that was read by the process.
- GGMAJORVERSION: Returns the major version of the Extract process that created the trail, expressed as an integer. For example, if a version is 1.2.3, it returns 1.
- GGMINORVERSION: Returns the minor version of the Extract process that created the trail, expressed as an integer. For example, if a version is 1.2.3, it returns 2.
- GGVERSIONSTRING: Returns the maintenance (or patch) level of the Extract process that created the trail, expressed as an integer. For example, if a version is 1.2.3, it returns 3.
- GGMAINTENANCELEVEL: Returns the maintenance version of the process (xx.xx.xx).
- GGBUGFIXLEVEL: Returns the patch version of the process (xx.xx.xx.xx).
- GGBUILDNUMBER: Returns the build number of the process.
Information about the local host of the trail file:
- HOSTNAME: Returns the DNS name of the machine where the Extract that wrote the trail is running.
- OSVERSION: Returns the major version of the operating system of the machine where the Extract that wrote the trail is running.
- OSRELEASE: Returns the release version of the operating system of the machine where the Extract that wrote the trail is running.
- OSTYPE: Returns the type of operating system of the machine where the Extract that wrote the trail is running.
- HARDWARETYPE: Returns the type of hardware of the machine where the Extract that wrote the trail is running.
Information about the database that produced the data in the trail file:
- DBNAME: Returns the name of the database, for example findb.
- DBINSTANCE: Returns the name of the database instance, if applicable to the database type, for example ORA1022A.
- DBTYPE: Returns the type of database that produced the data in the trail file.
- DBCHARSET: Returns the character set that is used by the database that produced the data in the trail file.
- DBMAJORVERSION: Returns the major version of the database that produced the data in the trail file.
- DBMINORVERSION: Returns the minor version of the database that produced the data in the trail file.


- DBVERSIONSTRING: Returns the maintenance (patch) level of the database that produced the data in the trail file.
- DBCLIENTCHARSET: Returns the character set that is used by the database client.
- DBCLIENTVERSIONSTRING: Returns the maintenance (patch) level of the database client.
Recovery information carried over from the previous trail file:
- RECOVERYMODE, LASTCOMPLETECSN, LASTCOMPLETEXIDS, LASTCSN, LASTXID, LASTCSNTS: Return recovery information for internal GoldenGate use.
@GETENV (RECORD, <environment value>)
- FILESEQNO: Returns the sequence number of the trail file without any leading zeros.
- FILERBA: Returns the relative byte address of the record within the FILESEQNO file.
@GETENV (DBENVIRONMENT, <return value>)
- DBNAME: Returns the database name.
- DBVERSION: Returns the database version.
- DBUSER: Returns the database login user.
- SERVERNAME: Returns the name of the server.
@GETENV (TRANSACTION, <return value>)
- TRANSACTIONID | XID: Returns the transaction ID number.
- CSN: Returns the commit sequence number (CSN).
- TIMESTAMP: Returns the commit timestamp of the transaction.
- NAME: Returns the transaction name, if available.
- USERID (Oracle): Returns the Oracle user-id of the database user that committed the last transaction.
- USERNAME (Oracle): Returns the Oracle user-name of the database user that committed the last transaction.
- RSN: Returns the record sequence number.
- PLANNAME (DB2 on z/OS): Returns the plan name under which the current transaction was originally executed. The plan name is included in the begin unit of recovery log record.
@GETENV (OSVARIABLE, <variable>)
- <variable>: The name of the variable. The search is an exact match of the supplied variable name.
The search is case-sensitive if the operating system supports case-sensitivity.


User Tokens - Setting


Values are stored in the GoldenGate record header using a TOKENS clause and @GETENV functions:
EXTRACT EXTDEMO
TABLE SALES.PRODUCT, TOKENS (
  TKN-OSUSER    = @GETENV ("GGENVIRONMENT", "OSUSERNAME"),
  TKN-DOMAIN    = @GETENV ("GGENVIRONMENT", "DOMAINNAME"),
  TKN-COMMIT-TS = @GETENV ("GGHEADER", "COMMITTIMESTAMP"),
  TKN-BA-IND    = @GETENV ("GGHEADER", "BEFOREAFTERINDICATOR"),
  TKN-TABLE     = @GETENV ("GGHEADER", "TABLENAME"),
  TKN-OP-TYPE   = @GETENV ("GGHEADER", "OPTYPE"),
  TKN-LENGTH    = @GETENV ("GGHEADER", "RECORDLENGTH"),
  TKN-DB-VER    = @GETENV ("DBENVIRONMENT", "DBVERSION"));

Storing values in the trail header This example demonstrates how to store the details in the GoldenGate trail header. Using the TOKENS clause of the TABLE parameter, the user defines a token identifier (e.g. TKN-OSUSER) and specifies the environment category and value using the @GETENV function.
User Tokens - Using
Tokens are retrieved through a MAP COLMAP clause and @TOKEN functions:
MAP SALES.ORDER, TARGET REPORT.ORDER_HISTORY,
  COLMAP (USEDEFAULTS,
    TKN_NUMRECS = @TOKEN ("TKN-NUMRECS"));

MAP SALES.CUSTOMER, TARGET REPORT.CUSTOMER_HISTORY,
  COLMAP (USEDEFAULTS,
    TRAN_TIME = @TOKEN ("TKN-COMMIT-TS"),
    OP_TYPE = @TOKEN ("TKN-OP-TYPE"),
    BEFORE_AFTER_IND = @TOKEN ("TKN-BA-IND"),
    TKN_ROWID = @TOKEN ("TKN-ROWID"));

Retrieving values from tokens This example demonstrates how to retrieve values that have been stored as tokens in the GoldenGate trail header. Using the @TOKEN function on the MAP parameter,


specify the token identifier (e.g. TKN-COMMIT-TS) whose value is to be used for the target column specification.

User Tokens Viewing in Logdump


logdump 2> usertoken on
logdump 3> usertoken detail
logdump 4> next

User tokens:
TKN-HOST      : jemhadar
TKN-GROUP     : EXTORA
TKN-BA_IND    : AFTER
TKN-COMMIT_TS : 2003-03-24 17:08:59.000000
TKN-POS       : 3604496
TKN-RBA       : 4058
TKN-TABLE     : SOURCE.CUSTOMER
TKN-OPTYPE    : INSERT
TKN-LENGTH    : 57
TKN-TRAN_IND  : BEGIN
TKN-LAG_SEC   : 1
TKN-LAG_MIN   : 0
TKN-LAG_MSEC  : 1229
TKN-NUMRECS   : 8
TKN-DBNAME    : ORA901
TKN-DB_USER   : GGOODRIC
TKN-DB_VER    : 9.0.1.0.0
TKN-INAME     : ora901
TKN-ROWID     : AAABBAAABAAAB0BAAF

LOGDUMP example Once environment values have been stored in the trail header, Logdump can display them when the USERTOKEN ON option is used. The USERTOKEN DETAIL option provides additional information.


User Exits

User Exits What are they?


Custom logic written in C or C++ by the customer
Invoked at different points in Extract or Replicat processing (through the CUSEREXIT parameter)
Allows you to extend or customize the functionality of data movement and integration beyond what is supported through mapping, functions, or SQLEXEC
Can perform an unlimited number of functions

User exits overview
At different points during Extract and Replicat processing, routines you create in C can be invoked to perform an unlimited number of functions.
When to implement user exits
You can employ user exits as an alternative to, or in conjunction with, the column-conversion functions that are available within GoldenGate. User exits can be a better alternative to the built-in functions because, with a user exit, data is processed once (when extracted) rather than twice (extracted and then read again to perform the transformation).
User exits cannot be used for tables being processed by a data-pump Extract in passthrough mode.


User Exits Applications


Perform arithmetic operations or data transformations beyond those provided with GoldenGate built-in functions
Perform additional table lookups or clean up invalid data
Respond to events in custom ways, for example, by sending a formatted e-mail message or paging a supervisor based on some field value
Accumulate totals and gather statistics
Perform conflict detection, or custom handling of errors or discards
Determine the net difference in a record before and after an update (a conflict resolution technique)

Additional applications of user exits:
Implement record archival functions off-line.
Accept or reject records for extraction or replication based on complex criteria.
Normalize a database during conversion.
User Exits High-Level Processing Logic
Accepts different events and information from Extract or Replicat
Passes the information to the appropriate paragraph/routine for processing
Returns a response and information to the caller


User Exits - Parameters


EXIT_CALL_TYPE indicates when, during processing, the Extract or Replicat process calls the user exit: at start processing, stop processing, begin transaction, end transaction, process record, process marker, discard record, fatal error or call result
EXIT_CALL_RESULT provides a response to the routine: OK, ignore, stop, abend or skip record
EXIT_PARAMS supplies information to the routine: calling program path and name, function parameter, more records indicator
ERCALLBACK implements a callback routine. Callback routines retrieve record and GoldenGate context information and modify the contents of data records

EXIT_CALL_TYPE indicates the processing point of the caller and determines the type of processing to perform. Extract and Replicat call the shell routine with the following calls:
EXIT_CALL_START - Invoked at the start of processing. The user exit can perform initialization work.
EXIT_CALL_STOP - Invoked before the caller stops or ends abnormally. The user exit can perform completion work.
EXIT_CALL_BEGIN_TRANS - In Extract, invoked just before the output of the first record in a transaction. In Replicat, invoked just before the start of a transaction.
EXIT_CALL_END_TRANS - In Extract and Replicat, invoked just after the last record in a transaction is processed.
EXIT_CALL_CHECKPOINT - Called just before an Extract or Replicat checkpoint is written.
EXIT_CALL_PROCESS_RECORD - In Extract, invoked before a record buffer is output to an Extract file. In Replicat, invoked just before a replicated operation is performed. This call is the basis of most user exit processing.
EXIT_CALL_PROCESS_MARKER - Called during Replicat processing when a marker from a NonStop server is read from the trail, and before writing to the marker history file.
EXIT_CALL_DISCARD_RECORD - Called during Replicat processing before a record is written to the discard file.
EXIT_CALL_DISCARD_ASCII_RECORD - Called during Extract processing before an ASCII input record is written to the discard file.
EXIT_CALL_FATAL_ERROR - Called during Extract or Replicat processing just before GoldenGate terminates after a fatal error.
EXIT_CALL_RESULT - Set by the user exit routines to instruct the caller how to respond when each exit call completes (see below).


EXIT_CALL_RESULT is set by the user exit routines and instructs the caller how to respond when each exit call completes. The following results can be specified by the routines:
EXIT_OK_VAL - If the routine does nothing to respond to an event, OK is assumed. If the call specified PROCESS_RECORD or DISCARD_RECORD and OK_VAL is returned, the caller processes the record buffer returned by the user exit and uses the parameters set by the exit.
EXIT_IGNORE_VAL - Rejects the record for further processing.
EXIT_STOP_VAL - Instructs the caller to STOP immediately.
EXIT_ABEND_VAL - Instructs the caller to ABEND immediately.
EXIT_PROCESSED_REC_VAL - Instructs Extract or Replicat to skip the record, but update the statistics that are printed to the report file for that table and for that operation type.

EXIT_PARAMS supplies information to the user exit routine, such as the program name and user-defined parameters:
PROGRAM_NAME - Specifies the full path and name of the calling process, for example \ggs\extract or \ggs\replicat. Use this parameter when loading a GoldenGate callback routine using the Windows API, or to identify the calling program when user exits are used with both Extract and Replicat processing.
FUNCTION_PARAM - Allows you to pass a parameter that is a literal string to the user exit. Specify the parameter with the EXITPARAM option of a TABLE or MAP statement. FUNCTION_PARAM can also be used at exit call startup to pass the parameters that are specified in the PARAMS option of the CUSEREXIT parameter.
MORE_RECS_IND - Set on return from an exit. For database records, determines whether Extract or Replicat processes the record again. This allows the user exit to output many records per record processed by Extract, a common function when converting Enscribe to SQL (data normalization). To request the same record again, set MORE_RECS_IND to CHAR_NO_VAL or CHAR_YES_VAL.

ERCALLBACK executes a callback routine.
A user callback routine retrieves context information from the Extract or Replicat process, including the record itself, when the call type is one of the following:
EXIT_CALL_PROCESS_RECORD
EXIT_CALL_DISCARD_RECORD
EXIT_CALL_DISCARD_ASCII_RECORD

Syntax:
ERCALLBACK (<function_code>, <buffer>, <result_code>);

<function_code> - The function to be executed by the callback routine. The user callback routine behaves differently based on the function code passed to it. While some functions can be used for both Extract and Replicat, the validity of the function in one process or the other depends on the input parameters that are set for that function during the callback routine.
<buffer> - A void pointer to a buffer containing a predefined structure associated with the specified function code.


<result_code> - The status of the function executed by the callback routine. The result code returned by the callback routine indicates whether or not the callback function was successful.
See the Oracle GoldenGate Reference Guide for function codes and result codes.
User Exits Implementing
On Windows: create a DLL in C and create a routine to be called from Extract or Replicat
On UNIX: create a shared object in C and create a routine to be called from Extract or Replicat
The routine must accept the following parameters:
  EXIT_CALL_TYPE
  EXIT_CALL_RESULT
  EXIT_PARAMS
In the source for the DLL/shared object, include the usrdecs.h file (in the GoldenGate install directory)
Call the ERCALLBACK function from the shared object to retrieve record and application context information

To implement user exits, perform the following steps:
1. On Windows, create a user exit DLL in C and export a routine to be called from Extract or Replicat. On UNIX, create a shared object in C and create a routine to be called from GoldenGate. This routine is the communication point between Extract or Replicat and your routines. You can define the name of the routine, but it must accept the following user exit parameters: EXIT_CALL_TYPE, EXIT_CALL_RESULT, EXIT_PARAMS.
Example export syntax for a MyUserExit routine:
__declspec(dllexport) void MyUserExit (
    exit_call_type_def exit_call_type,
    exit_result_def *exit_call_result,
    exit_params_def *exit_params)
2. In the source for the DLL/shared object, include the usrdecs.h file. This file contains type definitions, return status values, callback function codes, and a number of other definitions.
3. Include callback routines in the user exit when applicable. Callback routines retrieve record and application context information, and modify the contents of data records.


Extract and Replicat export an ERCALLBACK function to be called from the user exit routine. The user exit must explicitly load the callback function at run-time using the appropriate Windows/Unix API calls.

User Exits - Samples


Sample user exit files are located in <GoldenGate installation directory>/UserExitExamples. Each directory contains the .c file as well as makefiles and a readme.txt file.

Sample User Exits
exitdemo.c - shows how to initialize the user exit, issue callbacks at given exit points, and modify data. The demo is not specific to any database type.
exitdemo_passthru.c - shows how the PASSTHRU option of the CUSEREXIT parameter can be used in an Extract data pump.
exitdemo_more_recs.c - shows how to use the same input record multiple times to generate several target records.
exitdemo_lob.c - shows how to get read access to LOB data.
exitdemo_pk_befores.c - shows how to access the before and after image portions of a primary key update record, as well as the before images of regular (non-key) updates. It also shows how to get target row values with SQLEXEC in the Replicat parameter file as a means for conflict detection. The resulting fetched values from the target are mapped as the target record when it enters the user exit.


User Exits - Calling


You can call a user exit from Extract or Replicat by the CUSEREXIT parameter Syntax
CUSEREXIT <DLL or shared object name> <routine name>
  [, PASSTHRU]
  [, INCLUDEUPDATEBEFORES]
  [, PARAMS "<startup string>"]

Examples
CUSEREXIT userexit.dll MyUserExit

CUSEREXIT userexit.dll MyUserExit, INCLUDEUPDATEBEFORES, &
  PASSTHRU, PARAMS "init.properties"

<DLL or shared object name> - The name of the Windows DLL or UNIX shared object that contains the user exit function.
<routine name> - The name of the exit routine to be executed.
PASSTHRU - Valid only for an Extract data pump. It assumes that no database is required, and that no output trail is allowed. It expects that the user exit will perform all of the processing and that Extract will skip the record. Extract will perform all of the required data mapping before passing the record to the user exit. Instead of a reply status of EXIT_OK_VAL, the reply will be EXIT_PROCESSED_REC_VAL. All process statistics are updated as if the records were processed by GoldenGate.
INCLUDEUPDATEBEFORES - Passes the before images of column values to a user exit. When using this parameter, you must explicitly request the before image by setting the requesting_before_after_ind flag to BEFORE_IMAGE_VAL within a callback function that supports this flag. Otherwise, only the after image is passed to the user exit. By default, GoldenGate only works with after images.
When using INCLUDEUPDATEBEFORES for a user exit that is called from a data pump or from Replicat, always use the GETUPDATEBEFORES parameter for the primary Extract process, so that the before image is captured, written to the trail, and causes a process record event in the user exit. In a case where the primary Extract also has a user exit, GETUPDATEBEFORES causes both the before image and the after image to be sent to the user exit as separate EXIT_CALL_PROCESS_RECORD events.


If the user exit is called from a primary Extract (one that reads the transaction log), only INCLUDEUPDATEBEFORES is needed for that Extract. GETUPDATEBEFORES is not needed in this case, unless other GoldenGate processes downstream will need the before image to be written to the trail. INCLUDEUPDATEBEFORES does not cause before images to be written to the trail.
PARAMS "<startup string>" - Passes the specified string at startup. Can be used to pass a properties file, startup parameters, or another string. The string must be enclosed within double quote marks. Data in the string is passed to the user exit in the EXIT_CALL_START exit_params_def.function_param. If no quoted string is specified with PARAMS, the exit_params_def.function_param is NULL.

Oracle Sequences

Oracle Sequences
GoldenGate supports the replication of Oracle sequence values.
Use the Extract SEQUENCE parameter to extract sequence values from the transaction log; for example:
  SEQUENCE hr.employees_seq;
Use the Replicat MAP parameter to apply sequence values to the target; for example:
  MAP hr.employees_seq, TARGET payroll.employees_seq;
The default Replicat CHECKSEQUENCEVALUE parameter ensures that target sequence values are:
  higher than the source values (if the increment interval is positive), or
  lower than the source values (if the increment interval is negative)
Note: Change this default only if you know there will be no gaps in the sequence updates (e.g. from a trail corruption or process failure) and you want to improve the performance of GoldenGate

GoldenGate online and batch (SPECIALRUN) change synchronization methods support the replication of sequence values. GoldenGate initial load methods (configurations containing SOURCEISTABLE) do not support the replication of sequence values. GoldenGate does not support the replication of sequence values in a bi-directional configuration. Note: Gaps are possible in the values of the sequences that GoldenGate replicates because gaps are inherent, and expected, in the way that sequences are maintained by the database. However, the target values will always be greater than those of the source, unless the NOCHECKSEQUENCEVALUE parameter is used.


Configuration Options

Configuration Options - Overview


BATCHSQL
Compression
Encryption
Bidirectional considerations
Event actions
Oracle DDL replication


BATCHSQL

Options: BATCHSQL Overview


Supported for Oracle, DB2 LUW, DB2 on z/OS, Teradata, SQL Server and Sybase
Batches similar SQL statements into arrays, as opposed to individual operations
Operations containing the same table, operation type (I, U, D), and column list are grouped into a batch
Each statement type is prepared once, cached, and executed many times with different variables
Referential integrity is preserved
Can be used with change capture or initial loads

Operations containing the same table, operation type (I, U, D), and column list are grouped into a batch. For example, each of the following would be a batch:
Inserts to table A
Inserts to table B
Updates to table A
Updates to table B
Deletes from table A
Deletes from table B
GoldenGate analyzes parent-child foreign key referential dependencies in the batches before executing them. If referential dependencies exist for statements that are in different batches, more than one statement per batch may be required to maintain the referential integrity.


Options: BATCHSQL Syntax


Implemented with the Replicat BATCHSQL parameter Syntax

BATCHSQL
  [BATCHERRORMODE | NOBATCHERRORMODE]
  [BATCHESPERQUEUE <n>]
  [BATCHTRANSOPS <n>]
  [BYTESPERQUEUE <n>]
  [OPSPERBATCH <n>]
  [OPSPERQUEUE <n>]
  [TRACE]

BATCHERRORMODE | NOBATCHERRORMODE - Sets the response of Replicat to errors. In NOBATCHERRORMODE (the default), Replicat aborts the transaction on an error, temporarily disables BATCHSQL, and retries in normal mode. In BATCHERRORMODE, Replicat attempts to resolve errors without reverting to normal mode. Requires HANDLECOLLISIONS to prevent Replicat from exiting on an error.
BATCHESPERQUEUE <n> - Sets a maximum number of batches per queue before flushing all batches. The default is 50. Note: a queue is a thread of memory containing captured operations waiting to be batched. By default, there is one buffer queue, but you can change this with NUMTHREADS.
BATCHTRANSOPS <n> - Controls the size of a batch. Set to the default of 1000 or higher.
BYTESPERQUEUE <n> - Sets the maximum number of bytes to hold in a queue before flushing batches. The default is 20 megabytes.
OPSPERBATCH <n> - Sets the maximum number of rows that can be prepared for one batch before flushing. The default is 1200.
OPSPERQUEUE <n> - Sets the maximum number of row operations that can be queued for all batches before flushing. The default is 1200.
TRACE - Enables tracing of BATCHSQL activity to the console and report file.


Managing the buffer Buffering consumes memory. To attain the optimum balance between efficiency and the use of memory, use the following options to control when a buffer is flushed: BATCHESPERQUEUE, BYTESPERQUEUE, OPSPERBATCH, OPSPERQUEUE, BATCHTRANSOPS (the last buffer flush for a target transaction occurs when the threshold set with BATCHTRANSOPS is reached).
Options: BATCHSQL Results
Smaller row changes will show a higher gain in performance than larger row changes. At 100 bytes of data per row change, BATCHSQL has been known to improve Replicat's performance by 400 to 500 percent. Actual performance benefits will vary depending on the mix of operations. At around 5,000 bytes of data per row change, the benefits of BATCHSQL diminish.

Usage restrictions Some statement types cannot be processed in batches and must be processed as exceptions. When BATCHSQL encounters them, it flushes everything in the batch, applies the exceptions in the normal manner of one at a time, and then resumes batch processing. Transaction integrity is maintained. Statements treated as exceptions include: Statements containing LOB or LONG data. Statements containing rows longer than 25k in length. Statements where the target table has one or more unique keys besides the primary key. Such statements cannot be processed in batches because BATCHSQL does not guarantee correct ordering for non-primary keys if their values could change.


Compression

Options: Compression
GoldenGate provides optional data compression when sending data over TCP/IP.
Automatic decompression is performed by the Server Collector on the remote system.
A compression threshold lets the user set the minimum block size for which to compress.
GoldenGate uses zlib compression. More information can be found at www.zlib.net

Options: Compression - Example


Compression is specified on the Extract RMTHOST parameter:
RMTHOST <host> | <ip address>, MGRPORT <port>
  [, COMPRESS]
  [, COMPRESSTHRESHOLD <byte size>]

COMPRESS specifies that outgoing blocks of captured changes are compressed.
COMPRESSTHRESHOLD sets the minimum byte size for which compression will occur. The default is 1000 bytes.

Example:
RMTHOST newyork, MGRPORT 7809, COMPRESS, COMPRESSTHRESHOLD 750

The destination Server Collector decompresses the data stream before writing it to the remote file or remote trail. This typically results in compression ratios of at least 4:1 and sometimes much better. However, compression can require significant CPU resources.


Encryption

Options: Encryption Overview


Message Encryption
- Encrypts the messages sent over TCP/IP
- Uses Blowfish, a symmetric 64-bit block cipher from Counterpane Internet Security
- The data is automatically decrypted by the Server Collector before it is saved to the trail

Trail or Extract File Encryption
- Uses GoldenGate 256-key byte substitution
- Encrypts only the record data in a trail or extract file
- The data is decrypted by a downstream data pump or Replicat

Database Password Encryption
- The encrypted password can be generated using a default key or a user-defined key

Options: Encryption - Overview

[Diagram: Extract → network (TCP/IP) → Server Collector → trail → Replicat. Message encryption (Blowfish) protects the TCP/IP link, trail or extract file encryption (GoldenGate) protects the trail, and database password encryption is applied in the Extract and Replicat parameter files.]


Options: Message Encryption


1. Run the GoldenGate KEYGEN utility to generate random hex keys
C:\GGS> RUN KEYGEN <key length> <number of keys>

Blowfish accepts a variable-length key from 32 to 128 bits.
2. Enter key names and values in an ASCII text file named ENCKEYS (upper case, no file extension) in the GoldenGate install directory:
##Key name   Key value
superkey     0x420E61BE7002D63560929CCA17A4E1FB
secretkey    0x027742185BBF232D7C664A5E1A76B040

3. Copy the ENCKEYS file to the source and target GoldenGate install directories.
4. In the Extract parameter files, use the RMTHOST ENCRYPT and KEYNAME options:
RMTHOST West, MGRPORT 7809, ENCRYPT BLOWFISH, KEYNAME superkey

5. Configure a static Server Collector and start it manually with the -ENCRYPT and -KEYNAME parameters
server -p <port> -ENCRYPT BLOWFISH -KEYNAME <keyname>

If you prefer to use a literal key, then instead of using KEYGEN, enter the literal key in quotes as the key value in an ENCKEYS file:
##Key name   Key value
mykey        "DailyKey"

Options: Message Encryption (contd)


[Diagram: KEYGEN generates keys into the ENCKEYS files on both source and target. Extract parameters: RMTHOST ..., MGRPORT ..., ENCRYPT BLOWFISH, KEYNAME <keyname>. The Server Collector is started manually with: server -p <port> -ENCRYPT BLOWFISH -KEYNAME <keyname>. Messages crossing the TCP/IP network between Extract and the Server Collector are encrypted with Blowfish; the Collector decrypts them before writing the trail.]


Options: Trail or Extract File Encryption


GoldenGate uses 256-key byte substitution
Only the data records are encrypted in the trail
Set the Extract ENCRYPTTRAIL and Replicat DECRYPTTRAIL parameters
Can set ENCRYPTTRAIL before tables you want encrypted and NOENCRYPTTRAIL before other tables
Downstream data pumps can also decrypt the trail for transformation and pass it on either encrypted or decrypted

[Diagram: Extract writes an encrypted trail (Extract parameter: ENCRYPTTRAIL before the table statements); the trail passes through the network and Server Collector; Replicat decrypts it (Replicat parameter: DECRYPTTRAIL before the map statements).]

Options: Password Encryption Method 1


1. Generate an encrypted password with the GoldenGate default key:
   GGSCI> ENCRYPT PASSWORD <password>
   For example:
   GGSCI> ENCRYPT PASSWORD goldenpassword
   No key specified, using default key...
   Encrypted password: AACAAAAAAAAAAAOARAQIDGEEXAFAQJ
2. Paste the encrypted password into the Extract or Replicat PASSWORD parameter, for example:
   SOURCEDB MySource, USERID joe,
   PASSWORD AACAAAAAAAAAAAOARAQIDGEEXAFAQJ, ENCRYPTKEY DEFAULT


Options: Password Encryption Method 2


1. Generate an encrypted password with a user-defined key:
   GGSCI> ENCRYPT PASSWORD <password>, ENCRYPTKEY <keyname>
   For example:
   GGSCI> ENCRYPT PASSWORD MyPass, ENCRYPTKEY DRKEY
   Encrypted password: AACAAAAAAAAAAAIAJFGBNEYGTGSBSHVB
2. Enter the key name and value in the ENCKEYS file, for example:
   ##Key name   Key value
   drkey        0x11DF0E2C2BC20EB335CB98F05471A737
3. Paste the encrypted password into the Extract or Replicat PASSWORD parameter, for example:
   SOURCEDB MySource, USERID joe,
   PASSWORD AACAAAAAAAAAAAIAJFGBNEYGTGSBSHVB, ENCRYPTKEY drkey

Options: Password Encryption - Summary

[Diagram: the ENCKEYS file holding any user-defined key resides on both the source and target systems.
Extract parameters: [SOURCEDB ...,] USERID ..., PASSWORD <encrypted password>, ENCRYPTKEY DEFAULT | <keyname>
Replicat parameters: [TARGETDB ...,] USERID ..., PASSWORD <encrypted password>, ENCRYPTKEY DEFAULT | <keyname>]


Event Actions

Options: Event Actions - Event Records


GoldenGate provides an event marker system that enables the GoldenGate processes to take a defined action based on an event record in the transaction log or trail.
The event record is:
- either a record in a data table that satisfies a filter condition for which you want an action to occur,
- or a record that you write to a dedicated event table when you want an action to occur.
Event actions are only implemented for change replication, not initial loads.

Options: Event Actions - Examples


Examples of actions you might take on detecting an event record:
- Stop the process
- Ignore or discard the current record
- Log an informational or warning message to the report file, GoldenGate error log and system event log
- Generate a report file
- Roll over the trail file
- Run a shell command, for example to switch an application, start batch processes or start end-of-day reporting
- Activate tracing
- Write a checkpoint before and/or after writing the record to the trail


Options: Event Actions Examples (contd)


[Diagram: an INSERT, UPDATE, or DELETE with particular values, into either a dedicated event table or a regular data table, acts as the event record. Both Extract (reading the transaction log) and Replicat (reading the target trail) perform event processing, and each can produce reports, log messages, discards, and checkpoints.]

Options: Event Actions - Implementing


Add an EVENTACTIONS option to a TABLE or MAP statement. EVENTACTIONS can specify one or multiple actions.

Example using a separate event table to manage events:
TABLE source.event_table, EVENTACTIONS (ROLLOVER);
Whenever a record is written to the event table, the trail file is rolled over.

Example using data values to trigger events:
MAP source.account, TARGET target.account,
  FILTER (account_no = 100), EVENTACTIONS (DISCARD, LOG);
Any record where account_no = 100 is discarded and a log message written.


Options: Event Actions - EVENTACTIONS Parameter


TABLE | MAP ... EVENTACTIONS (
  [STOP | ABORT | FORCESTOP]
  [IGNORE [TRANSACTION [INCLUDEEVENT]]]
  [DISCARD]
  [LOG [INFO | WARNING]]
  [REPORT]
  [ROLLOVER]
  [SHELL <command>]
  [TRACE <trace file> [TRANSACTION] [PURGE | APPEND]]
  [CHECKPOINT [BEFORE | AFTER | BOTH]]
  [, ...] )
Note: You can also use a TABLE parameter in a Replicat to trigger actions without writing data to target tables

EVENTACTIONS
STOP - Graceful stop.
ABORT - Immediate exit.
FORCESTOP - Graceful stop if the event record is the last operation in the transaction; otherwise log a warning message and abort.
IGNORE [TRANSACTION [INCLUDEEVENT]] - Ignore the record. Optionally ignore the entire transaction and propagate the event record.
DISCARD - Write the record to the discard file.
LOG [INFO | WARNING] - Log an informational or warning message to the report file, error log and system event log.
REPORT - Generate a report file.
ROLLOVER - (Extract only) Roll over the trail file.
SHELL - Execute a shell command.
TRACE - Write trace information to a file.
CHECKPOINT - Write a checkpoint before and/or after writing the event record.

TABLE statement for Replicat:
TABLE <table spec>
  [, SQLEXEC (<SQL specification>), BEFOREFILTER]
  [, FILTER (<filter specification>)]
  [, WHERE (<where clause>)]
  {, EVENTACTIONS ({IGNORE | DISCARD} [<action>])} ;


Options: Event Actions Heartbeat Example


A heartbeat table is periodically updated with the current time in the source database:
MAP source.heartbeat, TARGET target.heartbeat,
  FILTER (@DATEDIFF (SS, hb_timestamp, @DATENOW()) > 60 AND
          @DATEDIFF (SS, hb_timestamp, @DATENOW()) < 120),
  EVENTACTIONS (LOG);

MAP source.heartbeat, TARGET target.heartbeat,
  FILTER (@DATEDIFF (SS, hb_timestamp, @DATENOW()) > 120),
  EVENTACTIONS (LOG WARNING);

[Diagram: heartbeat records flow with other transactions through the target trail into Replicat, whose event processing logs an informational or warning message depending on the lag.]

Options: Event Actions - Automated Switchover Example


1. A user writes an event record at the planned outage point. Extract reads it from the transaction log.
2. When Replicat reads the event record, it triggers an event action: run a custom script that switches the application to the target database.
3. The Extract on the target, already configured and running, starts capturing the application's transactions.


Options: Event Actions - Automated Synchronization Example

1. When a batch load is starting, the ETL process writes an event record. Extract reads the record and performs a checkpoint before and after the record.
2. When Replicat reads the event record, it signals the second ETL process to start at the right point, and performs checkpoints before and after the record.
3. When the second ETL process completes, it generates an event record that is read by the Extract on the target. When the Replicat on the source receives the event record, it triggers a custom script that starts the application, based on the status of the batch process on the source.


Bidirectional Considerations

Options: Bidirectional - Configuration

[Diagram: two mirrored data paths. Top: left-side transaction log → Extract → network (TCP/IP) → trail → Replicat into the right-side database. Bottom: the reverse path, from the right-side transaction log back to the left-side database.]

The top of this illustration shows changes from the left-side database being extracted and sent over the network to be replicated in the right-side database. The bottom is the reverse changes from the right-side database are extracted and sent to the left.
Options: Bidirectional - Capabilities
- Available for both homogeneous and heterogeneous configurations
- Distributed processing
- Both sides are live
- GoldenGate's low latency reduces the risk of conflicts
- GoldenGate provides loop detection

Bidirectional Capabilities GoldenGate supports bidirectional replication between two databases, whether the configuration is for homogeneous or heterogeneous synchronization.


In a bidirectional configuration, both databases may be live and processing application transactions at the same time. Alternatively, the target application may be standing by, waiting to be used if there is a failover; in that case, operations are captured and queued to be synchronized back to the primary database once it becomes available.

Configuring GoldenGate for bidirectional replication may be as straightforward as configuring a mirror set of Extract and Replicat groups moving in the opposite direction. It is important, however, to thoroughly discuss special considerations for your environment to guarantee data accuracy. The following slides discuss some of the bidirectional concerns, as well as known issues relating to file and data types.

Options: Bidirectional - Issues


- Loop detection
  - Detect whether GoldenGate or the application performed the operation
- Conflict avoidance, detection and resolution
  - Detect whether an update occurred on both the source and target before the changes were applied by GoldenGate
  - Determine business rules on how to handle collisions
- Sequence numbers and identity data types
- Truncate table operations

Loops
Because both Extract and Replicat processes operate on the same tables in bidirectional synchronization, Replicat's operations must be prevented from being sent back to the source table by Extract. If they are re-extracted, they will be re-replicated, beginning an endless loop. Loop detection is sometimes called ping-pong detection.

Conflicts
Because GoldenGate is an asynchronous solution, conflict management is required to ensure data accuracy in the event that the same row is changed in two or more databases at (or about) the same time. For example, User A on Database A updates a row, and then User B on Database B updates that same row. If User B's transaction occurs before User A's transaction is synchronized to Database B, there will be a conflict on the replicated transaction.


Options: Bidirectional - Loop Example 1


The following example demonstrates the problem that occurs without loop detection:
1. A row is updated on system A.
2. The update operation is captured and sent to system B.
3. The row is updated on system B.
4. The update operation is captured and sent to system A.
5. The row is updated on system A.
Without loop detection, this loop continues endlessly.

Preventing looping
To avoid looping, Replicat's operations must be prevented from being sent back to the source table by Extract. This is accomplished by configuring Extract to ignore transactions issued by the Replicat user. This slide illustrates why data looping needs to be prevented. Methods of preventing data looping are available for the following databases:
- Oracle, DB2, and SQL Server databases using log-based extraction
- Teradata databases using VAM-based extraction
- SQL/MX using the checkpoint table
Options: Bidirectional - Loop Example 2
Or:
1. A row is inserted on system A.
2. The insert is captured and sent to system B.
3. The row is inserted on system B.
4. The insert is captured and sent to system A.
5. The insert is attempted on system A, but the operation fails with a conflict on the primary key, causing synchronization services to halt.

Because of the constraint that the primary key must be unique, the insert fails in this example, so it does not trigger looping. However, the failed insert causes the Replicat to abend, which stops all replication services; it is therefore another example of the need to recognize, and not extract, Replicat transactions.
Options: Bidirectional - Loop Detection
Loop detection technique depends on the source database:
Oracle
- Using EXCLUDEUSER or EXCLUDEUSERID (Oracle 10g and later)
- Stop capture with the Extract parameter:
TRANLOGOPTIONS EXCLUDEUSER <userid | username> Use @GETENV ("TRANSACTION", "USERID") or @GETENV ("TRANSACTION", "USERNAME") to retrieve the Oracle userid or username on the Replicat database

Using Trace Table (Oracle 9i and earlier) Add a trace table to detect GoldenGate operations with the GGSCI command:
ADD TRACETABLE <owner>.<table name>

The default <table name> is GGS_TRACE. If not using the default table name, add a parameter in both Extract and Replicat:
TRACETABLE <owner>.<table name>

Oracle Trace Table
Loop detection can be accomplished by creating a table in the Oracle environment known as the trace table. The Replicat process updates this table when a transaction is committed, providing a mechanism Extract can use to detect that Replicat performed the operation.

When Replicat starts up, it looks for a TRACETABLE parameter and, if one exists, updates the named trace table as each transaction is committed. If no TRACETABLE parameter is present, Replicat looks for a default table GGS_TRACE owned by the logged-in USERID. If it exists, Replicat automatically updates the table. If GGS_TRACE does not exist, then nothing is updated.

To create the GGS_TRACE table, use the GGSCI command ADD TRACETABLE. To create a table with a different table name, use ADD TRACETABLE [<owner>].<table_name>. If <owner> is omitted, the logged-in USERID is assumed. GGSCI also provides INFO and DELETE commands for the TRACETABLE entity. To use these commands, you must first log in to the database using DBLOGIN.

The Extract process behaves in a similar manner to Replicat: when started, it looks for the TRACETABLE parameter and uses the named table to detect whether Replicat performed an operation. If no parameter is present, it looks to see if GGS_TRACE is part of the transaction.
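Putting the Oracle 10g-and-later method together, a minimal Extract parameter file for one direction of a bidirectional setup might look like the following sketch. The group, user, schema, and trail names here are illustrative, not taken from this guide:

```
EXTRACT extab
USERID ggsuser, PASSWORD ggspass
-- Skip transactions applied by the local Replicat database user
-- so they are not captured and sent back to the other system
TRANLOGOPTIONS EXCLUDEUSER ggsuser
EXTTRAIL ./dirdat/ab
TABLE scott.*;
```

A mirror-image Extract on the other system would exclude that system's Replicat user in the same way.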


Options: Bidirectional - Loop Detection (contd)


SQL Server
- By default, SQL Server does not log the ggs_repl transactions written by Replicat, so no loops arise.
DB2 and Ingres
- Execute Replicat with a unique user ID
- Stop capture with the Extract parameter:
TRANLOGOPTIONS EXCLUDEUSER <user id>

Loop detection operates by default for SQL Server, NonStop SQL/MP, and Enscribe sources. For other database types, you must handle loop detection in various ways.

SQL Server
By default, SQL Server logging excludes the ggs_repl transactions written by Replicat.

DB2 and Ingres
Loop detection is accomplished by identifying the ID of the user that performed the operation. To deploy loop detection for DB2, you must execute the Replicat using a user ID different from the ID used by the application. Then the EXCLUDEUSER argument of the TRANLOGOPTIONS parameter can be added to Extract to trigger the detection and exclusion of that user ID.
Options: Bidirectional - Loop Detection (contd)
Sybase
- Do nothing, and allow Replicat to use the default transaction name ggs_repl
- Or identify the Replicat transaction name in the Extract parameter:
  TRANLOGOPTIONS EXCLUDETRANS <trans name>
- Or identify the Replicat user name in the Extract parameter:
  TRANLOGOPTIONS EXCLUDEUSER <user name>


Options: Bidirectional - Loop Detection (contd)


NonStop SQL/MX
- Replicat transactions are identified by the name of the checkpoint table, specified with the TRANLOGOPTIONS option:
TRANLOGOPTIONS FILTERTABLE <table>

- Extract ignores transactions that include this checkpoint table
- The PURGEDATA operation is not supported
Teradata
- You do not need to identify Replicat transactions that are applied to a Teradata database
c-tree
- Extract automatically identifies Replicat transactions that are applied to a c-tree database

NonStop SQL/MX
To prevent data loopback, a checkpoint table is required in a bidirectional configuration that involves a source or target SQL/MX database (or both). Because there is no SQL transaction associated with a PURGEDATA operation, this operation type is not supported for bidirectional replication. Because PURGEDATA operations are DDL, they are implicit transactions, so GoldenGate cannot update the checkpoint table within that transaction.
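For SQL/MX, the Extract side of such a configuration would include a line like the following sketch; the checkpoint table name is illustrative:

```
-- Ignore any transaction that updates the Replicat checkpoint table,
-- so Replicat-applied work is not sent back to the other system
TRANLOGOPTIONS FILTERTABLE ggs.chkpt
```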

Options: Bidirectional - Conflict Avoidance and Detection


- Conflicts can be minimized by low latency
- Conflicts can be avoided at the application level by assuring that records are always updated on one system only
- GoldenGate can capture both the before and after image of updates so you can compare before applying updates
- You can write compare logic in a filter or user exit

Conflict Detection
Low latency reduces the risk of encountering a conflict where the same record is updated on both systems. This is the best overall method, and the one we encourage, in bidirectional configurations.

Conflict detection can also be addressed at the application or system level by assuring that records are always updated on one system. For example, all updates to card holder (or account number) 0 through 1000000 are applied on system A, while updates to 1000001 or higher are applied on system B. GoldenGate for UNIX and Windows (only) provides the capability to capture the before and after values of columns so that comparisons may be made on the target database before applying the values. Additional SQL procedures can be written based on your application requirements.
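As an illustration, such range partitioning can additionally be guarded on the capture side with a FILTER clause on each Extract. The table and column names here are hypothetical:

```
-- Extract on system A: capture only the accounts assigned to A
TABLE bank.account, FILTER (acct_number <= 1000000);

-- Extract on system B: capture only the accounts assigned to B
TABLE bank.account, FILTER (acct_number > 1000000);
```

The application remains responsible for routing updates to the owning system; the filters simply keep a misrouted update from being replicated.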

Options: Bidirectional - Conflict Detection by Filter


REPERROR (9999, EXCEPTION)
MAP SRCTAB, TARGET TARGTAB,
SQLEXEC (ID check, ON UPDATE, BEFOREFILTER,
  QUERY "SELECT COUNTER FROM TARGTAB WHERE PKCOL = :P1",
  PARAMS (P1 = PKCOL)),
FILTER (ON UPDATE, BEFORE.COUNTER <> check.COUNTER,
  RAISEERROR 9999);
INSERTALLRECORDS
MAP SRCTAB, TARGET TARGET_EXCEPT, EXCEPTIONSONLY,
COLMAP (USEDEFAULTS, ERRTYPE = "Conflict Detected");

Conflict Detection Example
The example above does the following when an update is encountered:
- Before the update filter is executed, it performs a query to retrieve the present value of the COUNTER column.
- The filter then checks that the value of COUNTER before the update occurred matches the value in the target before performing the update.
- If the update filter fails, it raises error 9999.
- The REPERROR clause for error 9999 ensures that the exceptions map to TARGET_EXCEPT will be executed.


Options: Bidirectional - Conflict Resolution


Depends on your business rules, for example:
- Apply the net difference instead of the after value
- Map the data to an exception table for manual resolution
- Accept or ignore the change based on date/time

Options: Bidirectional - Conflict Resolution Example


- Initial balance: $500
- Trans A: Mr. Smith deposits $75 in LA: balance = $575
- Trans B: Mrs. Smith withdraws $20 in NY: balance = $480
- Trans A replicated to NY: balance $575
- Trans B replicated to LA: balance $480

At end of day, the correct balance should be $500 + $75 - $20 = $555!

The conflict
When the two transactions occur at different times, the deposit transaction is replicated to New York before the withdrawal transaction is extracted. But when the two transactions occur at around the same time, they are applied independently, resulting in an incorrect balance on both systems.


Options: Bidirectional - Conflict Resolution by Applying Net Differences


- Initial balance: $500
- Trans A: Mr. Smith deposits $75 in LA: end balance $575
- Trans B: Mrs. Smith withdraws $20 in NY: end balance $480
- Trans A replicated to NY: begin/end balance $500/$575
  - Conflict detected between Trans A begin balance ($500) and current NY balance ($480), so apply the net difference: Trans A end balance ($575) - Trans A begin balance ($500) = +$75, resulting in a NY end balance of $555
- Trans B replicated to LA: begin/end balance $500/$480
  - Conflict detected between Trans B begin balance ($500) and current LA balance ($575), so apply the net difference: Trans B end balance ($480) - Trans B begin balance ($500) = -$20, resulting in an LA end balance of $555
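The mechanics of applying a net difference vary by installation. One hedged sketch calls a hypothetical stored procedure from the Replicat MAP statement, assuming before images are available in the trail (GETUPDATEBEFORES on the source Extract); the procedure, table, and column names are all illustrative:

```
-- Replicat side: apply_delta is a hypothetical stored procedure that
-- adds (after balance - before balance) to the current target balance
-- instead of overwriting the balance with the after image
MAP bank.account, TARGET bank.account,
SQLEXEC (SPNAME apply_delta, ON UPDATE,
  PARAMS (p_acct   = acct_number,
          p_before = @BEFORE (balance),
          p_after  = balance));
```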

Options: Bidirectional - Oracle Sequence Numbers


- Oracle sequence numbers are used for a variety of purposes, such as to maintain a count, determine order of entry, or generate a primary key value
- GoldenGate does not support the replication of sequence values in a bidirectional configuration
- In a bidirectional configuration, to ensure that the source and target sequence numbers are unique, assign odd and even values
- In a multi-directional configuration, each system must use a starting value and increment based on the number of systems
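For two live systems, the odd/even approach can be set up when the sequences are created; the sequence name here is illustrative:

```sql
-- System A generates odd values: 1, 3, 5, ...
CREATE SEQUENCE order_seq START WITH 1 INCREMENT BY 2;

-- System B generates even values: 2, 4, 6, ...
CREATE SEQUENCE order_seq START WITH 2 INCREMENT BY 2;
```

For three or more systems, each system's sequence would start at a distinct offset and increment by the number of systems.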


Options: Bidirectional - Sybase & SQL Server Identity Data Types


- Similar issues to Oracle sequences
- Set the seed and increment values so each system is using a different range of identity values

SQL Server Identity
GoldenGate has issues with the replication of identity data types if the identity is part of the key, regardless of whether the configuration is unidirectional or bidirectional. Identity data types are used for a variety of purposes, for example as a field that is part of a primary key in order to make the row unique. The challenge in a bidirectional configuration is to ensure that the same identity value is not generated on both systems, causing a conflict. One technique that addresses this issue is to create the table on each system and set the seed and increment values so that each system is using a different range of identity values.
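For example, the seed/increment technique on SQL Server might look like the following sketch; the table and column names are hypothetical:

```sql
-- System A: odd identity values (seed 1, increment 2)
CREATE TABLE dbo.orders (
    order_id INT IDENTITY(1,2) PRIMARY KEY,
    amount   MONEY
);

-- System B: even identity values (seed 2, increment 2)
CREATE TABLE dbo.orders (
    order_id INT IDENTITY(2,2) PRIMARY KEY,
    amount   MONEY
);
```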
Options: Bidirectional - Truncate Table Operations
- Truncate table operations cannot be detected for loops
- Make sure that GETTRUNCATES is ON for only one direction
- Use IGNORETRUNCATES (the default) for the other direction
- Change database security so truncates can only be issued on one system
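A sketch of one-way truncate handling, with truncates flowing only from system A to system B (exact parameter placement in the Extract and Replicat groups varies by configuration; see the reference guide):

```
-- A -> B direction: replicate truncate operations
GETTRUNCATES

-- B -> A direction: ignore truncate operations (the default)
IGNORETRUNCATES
```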


Oracle DDL Replication

Options: DDL Replication - Overview


- Available for all Oracle versions supported by GoldenGate DML synchronization
- DDL in relation to DML:
  - DDL can be active with or without DML synchronization
  - The same Extract/Replicat should be processing both DML and DDL to avoid timing issues
- DDL operations are recognized differently by Extract and Replicat:
  - Source: DDL disabled by default; Extract must be configured to enable it
  - Target: DDL enabled by default to maintain data integrity; Replicat must be configured to ignore or filter DDL

Options: DDL Replication GoldenGate Requirements/Restrictions


- Identical source and target data definitions: ASSUMETARGETDEFS
- Data pumps must be configured in PASSTHRU mode
- WILDCARDRESOLVE must remain set to DYNAMIC (the default)
- Data manipulation is not supported
- GoldenGate user exits are not supported for DDL activity
- Restrictions exist for DDL operations that involve user-defined types and LOB data
- DDL on objects in TABLE or MAP statements inherits the limitations in allowed characters of those parameters
- Restrictions exist for DDL operations that involve stored procedures
- GoldenGate does not support bidirectional replication of DDL; allow DDL changes on one system only
- DDL statements > 2 MB require special handling

DDL support is only valid with log-based extraction.

Object definitions
Source and target object definitions must be identical (ASSUMETARGETDEFS specified for Replicat). Neither data nor DDL conversion is supported.

DDL and data pumps
When Extract data pumps are being used, tables for which DDL is being replicated must be configured in pass-through mode. DDL filtering, manipulation, and error handling are not supported by data pumps.


Wildcard resolution
Standard GoldenGate asterisk wildcards (*) can be used with certain parameter options when synchronizing DDL operations. WILDCARDRESOLVE is now set by default to DYNAMIC and must remain so for DDL support.

User exits
GoldenGate user exit functionality is not supported for use with DDL synchronization activities (user exit logic cannot be triggered based on DDL operations). User exits can be used with concurrent DML processing.

LOB data
With LOB data, Extract might fetch a LOB value from a Flashback Query, and Oracle does not provide Flashback capability for DDL (except DROP). When a LOB is fetched, the object structure reflects current metadata, but the LOB record in the transaction log reflects old metadata. Refer to the Oracle GoldenGate Reference Guide for information on this topic.

User-defined types
DDL operations that involve user-defined types generate implied DML operations on both the source and target. To avoid SQL errors that would be caused by redundant operations, GoldenGate does not replicate those DML operations. If DML is being replicated for a user-defined type, Extract must process all of those changes before DDL can be performed on the object. Because UDT data might be fetched by Extract, the reasons for this rule are similar to those that apply to LOB columns.

SQLEXEC
Objects that are affected by a stored procedure must exist with the correct structure prior to the execution of SQL. Consequently, DDL that affects structure must happen before the SQLEXEC executes. Objects affected by a standalone SQLEXEC statement must exist before the GoldenGate processes start. This means that DDL support must be disabled for these objects; otherwise DDL operations could change or delete the object before the SQLEXEC executes.

Long DDL statements
GoldenGate 10.4 supports the capture and replication of Oracle DDL statements of up to 2 MB in length (including some internal GoldenGate maintenance information).
Extract will skip statements that are greater than the supported length, but the ddl_ddl2file.sql script can be used to save the skipped DDL to a text file in the USER_DUMP_DEST directory of Oracle. To use the new support, the DDL trigger must be reinstalled in INITIALSETUP mode, which removes all of the DDL history. See the GoldenGate for Oracle Installation and Setup Guide.
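For instance, a data pump for DDL-replicated tables could be configured in pass-through mode like the following sketch; the group, host, schema, and trail names are illustrative:

```
EXTRACT pumpab
-- Pass-through mode: no data manipulation, required for DDL replication
PASSTHRU
RMTHOST targethost, MGRPORT 7809
RMTTRAIL ./dirdat/ab
TABLE scott.*;
```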


Options: DDL Replication - Oracle Requirements/Restrictions


- Schema names using Oracle reserved names are ignored
- The GETTRUNCATES parameter should not be used with full DDL support
- Table name cannot be longer than 16 characters plus quotation marks for ALTER TABLE RENAME
- ALTER TABLE ... MOVE TABLESPACE:
  - Supported when tablespaces are all SMALLFILE or all BIGFILE
  - Stop Extract before issuing a MOVE TABLESPACE
- The recycle bin must be turned off
- ALTER DATABASE and ALTER SYSTEM are not captured
- Long DDL statements need special handling

GETTRUNCATES
GoldenGate supports the synchronization of TRUNCATEs as a standalone function (independently of full DDL synchronization) or as part of full DDL synchronization. If using DDL synchronization, disable standalone TRUNCATE synchronization to avoid errors caused by duplicate operations.

Table names
The ALTER TABLE RENAME fails if the old or new table name is longer than 18 characters (16 for the name and two for the quotation marks). Oracle only allows 18 characters for a rename because of the ANSI limit for identifiers.
Options: DDL Replication - Characteristics
- New names must be specified in TABLE/MAP statements
- Extract sends all DDL to each trail when writing to multiple trails
- Supported objects for Oracle: clusters, functions, indexes, packages, procedures, roles, sequences, synonyms, tables, tablespaces, triggers, types, views, materialized views, users

Renames
To work around remote permissions issues that may arise when different users are being used on the source and target, RENAME will always be converted to the equivalent ALTER TABLE RENAME for Oracle.


New Names
New names must be specified in TABLE/MAP statements in order to: 1) replicate DML operations on tables resulting from a CREATE or RENAME, and 2) CREATE USER and then move new/renamed tables into that schema.

Options: DDL Replication - Oracle Characteristics


- A comment identifies each Extract/Replicat DDL statement: /* GOLDENGATE_DDL_REPLICATION */
- By default, Extract ignores DDL identified with this comment
- For Oracle, comments in the middle of an object name appear at the end of the name in the target DDL statement


Options: DDL Replication - Activating Oracle DDL Capture


Install database objects used for the replication of DDL operations:
- DDL marker table: GGS_MARKER
  - Stores DDL information
- DDL history table: GGS_DDL_HIST
  - Stores object metadata history
- DDL trigger: GGS_DDL_TRIGGER_BEFORE
  - Activates when there is a DDL operation
  - Writes to the marker and history tables
- User role: GGS_GGSUSER_ROLE
  - Establishes the role needed for DDL replication
  - Should be granted to the user that executes Extract
- Specify the schema name in GLOBALS: GGSCHEMA <schema_name>

Note: These tables will grow over time, so cleanup should be specified in the Manager parameter file.


DDL marker table
The DDL marker table only receives inserts, and its rows can be periodically purged. The name can be set in the GLOBALS parameter file; if it is not, the default name is GGS_MARKER. Do not delete the DDL marker table if you plan to continue processing DDL. The marker table and the DDL trigger are interdependent. If you attempt to remove the marker table without first removing the trigger, the following error will be generated: "ORA-04098: trigger 'SYS.GGS_DDL_TRIGGER_BEFORE' is invalid and failed re-validation"

DDL history table
The DDL history table receives inserts, updates, and deletes. It contains the SQL statement that was issued by the user. The default name is GGS_DDL_HIST. Caution must be used if purging.

DDL trigger
The DDL trigger is installed along with some supporting packages. The default trigger name is GGS_DDL_TRIGGER_BEFORE.

User role
The default user role name is GGS_GGSUSER_ROLE.

Other supporting database objects:
- Internal setup table
- DUMPDDL tables (for viewing DDL history)
- ddl_pin (pins tracing for performance evaluation)
- A sequence used for a column in the marker table
Options: DDL Replication - Scope
Database objects fall into categories known as "scopes":
- MAPPED
  - Is specified in a TABLE or MAP statement
  - Operations: CREATE, ALTER, DROP, RENAME, GRANT*, REVOKE*
  - Objects: TABLE*, INDEX, TRIGGER, SEQUENCE*, MATERIALIZED VIEW*
  (* Starred operations apply only to the starred object types)
- UNMAPPED
  - Does not have a TABLE or MAP statement
- OTHER
  - TABLE or MAP statements do not apply
  - DDL operations other than those listed above
  - Examples are CREATE USER, CREATE ROLE, ALTER TABLESPACE

Oracle GoldenGate Fundamentals Student Guide

Options: DDL Replication - DDL Parameter

- The DDL parameter enables DDL support and filters the operations
- Valid for Extract and Replicat
DDL [ {INCLUDE | EXCLUDE} [, MAPPED | UNMAPPED | OTHER | ALL] [, OPTYPE <type>] [, OBJTYPE <type>] [, OBJNAME <name>] [, INSTR <string>] [, INSTRCOMMENTS <comment_string>] ] [...]

Only one DDL parameter can be used in a parameter file, but you can combine multiple inclusion and exclusion options to filter the DDL to the required level. When combined, multiple option specifications are linked logically as AND statements. All criteria specified with multiple options must be satisfied for a DDL statement to be replicated.

Options

INCLUDE | EXCLUDE identifies the beginning of an inclusion or exclusion clause. INCLUDE includes specified DDL for capture or replication. EXCLUDE excludes specified DDL from being captured or replicated. The inclusion or exclusion clause must consist of the INCLUDE or EXCLUDE keyword followed by any valid combination of other options of the DDL parameter. An EXCLUDE must be accompanied by a corresponding INCLUDE clause. An EXCLUDE takes priority over any INCLUDEs that contain the same criteria. You can use multiple inclusion and exclusion clauses.

MAPPED | UNMAPPED | OTHER | ALL applies INCLUDE or EXCLUDE based on the DDL operation scope. MAPPED applies to DDL operations that are of MAPPED scope. UNMAPPED applies to DDL operations that are of UNMAPPED scope. OTHER applies to DDL operations that are of OTHER scope. ALL applies to DDL operations of all scopes. DDL EXCLUDE ALL maintains up-to-date metadata on objects, while blocking the replication of the DDL operations themselves.

OPTYPE <type> applies INCLUDE or EXCLUDE to a specific type of DDL operation. For <type>, use any DDL command that is valid for the database, such as CREATE, ALTER, and RENAME.

OBJTYPE <type> applies INCLUDE or EXCLUDE to a specific type of database object. For <type>, use any object type that is valid for the database, such as TABLE, INDEX, TRIGGER, USER, ROLE. Enclose the object type within single quotes.

OBJNAME <name> applies INCLUDE or EXCLUDE to the name of an object, for example a table name. Provide a double-quoted string as input. Wildcards can be used.
If you do not qualify the object name for Oracle, the owner is assumed to be the GoldenGate user.


When using OBJNAME with MAPPED in a Replicat parameter file, the value for OBJNAME must refer to the name specified with the TARGET clause of the MAP statement. For DDL that creates triggers and indexes, the value for OBJNAME must be the name of the base object, not the name of the trigger or index. For RENAME operations, the value for OBJNAME must be the new table name.

INSTR <string> applies INCLUDE or EXCLUDE to DDL statements that contain a specific character string within the command syntax itself, but not within comments. Enclose the string within single quotes. The string search is not case sensitive.

INSTRCOMMENTS <comment_string> applies INCLUDE or EXCLUDE to DDL statements that contain a specific character string within a comment, but not within the DDL command itself. By using INSTRCOMMENTS, you can use comments as a filtering agent. Enclose the string within single quotes. The string search is not case sensitive. You can combine the INSTR and INSTRCOMMENTS options to filter on a string in the command syntax and in the comments.
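For example, a clause combining several of these options might look like the following sketch; the schema name and search string are illustrative:

```
-- Replicate DDL for mapped objects owned by scott, except any
-- statement whose command syntax contains the string 'temporary'
DDL INCLUDE MAPPED OBJNAME "scott.*" EXCLUDE INSTR 'temporary'
```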
Options: DDL Replication - String Substitution

- The DDLSUBST parameter substitutes strings in a DDL operation
- Multiple DDLSUBST statements can be used
- DDLSUBST parameter syntax:
DDLSUBST <search_string> WITH <replace_string> [INCLUDE <clause> | EXCLUDE <clause>] Where: <search_string> is the string in the source DDL statement you want to replace, in single quotes <replace_string> is the replacement string, in single quotes <clause> is an inclusion or exclusion clause using same syntax as INCLUDE and EXCLUDE from DDL parameter

DDLSUBST Clauses

DDLSUBST <search_string> WITH <replace_string>
[ {INCLUDE | EXCLUDE}
  [, ALL | MAPPED | UNMAPPED | OTHER]
  [, OPTYPE <type>]
  [, OBJTYPE <type>]
  [, OBJNAME <name>]
  [, INSTR <string>]
  [, INSTRCOMMENTS <comment_string>] ] [...]
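For example, to redirect tablespace references in replicated CREATE TABLE statements, a sketch might read as follows; the tablespace names are illustrative:

```
-- Replace the source tablespace name with the target's
-- in replicated CREATE TABLE DDL
DDLSUBST 'users_ts' WITH 'users2_ts' INCLUDE OPTYPE CREATE OBJTYPE 'TABLE'
```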


Options: DDL Replication - Error Handling


- DDLERROR parameter: default and specific error handling rules to handle the full range of anticipated errors
- Extract syntax:
  DDLERROR [, RESTARTSKIP <num skips>]
- Replicat syntax:
  DDLERROR {<error> | DEFAULT} {<response>}
  [RETRYOP MAXRETRIES <n> [RETRYDELAY <delay>]]
  {INCLUDE <clause> | EXCLUDE <clause>}
  [, IGNOREMISSINGTABLES | ABENDONMISSINGTABLES]
  [, RESTARTCOLLISIONS | NORESTARTCOLLISIONS]
  where <response> can be IGNORE, ABEND, or DISCARD
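A sketch of a Replicat error-handling setup using these rules; the specific error number shown, 955, is Oracle's ORA-00955 ("name is already used by an existing object") and is chosen only for illustration:

```
-- Skip DDL that fails because the object already exists on the target
DDLERROR 955 IGNORE
-- For all other DDL errors, retry up to 3 times at
-- 5-second intervals, then abend
DDLERROR DEFAULT ABEND RETRYOP MAXRETRIES 3 RETRYDELAY 5
```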

Options: DDL Replication - DDLOPTIONS for Oracle


DDLOPTIONS parameter configures aspects of DDL processing other than filtering and string substitution
DDLOPTIONS
[, MAPDERIVED | NOMAPDERIVED]
[, NOCROSSRENAME]
[, REPORT | NOREPORT]
[, ADDTRANDATA]
[, DEFAULTUSERPASSWORD <password> [ENCRYPTKEY DEFAULT | ENCRYPTKEY <keyname>]]
[, GETAPPLOPS | IGNOREAPPLOPS]
[, GETREPLICATES | IGNOREREPLICATES]
[, REMOVECOMMENTS {BEFORE | AFTER}]
[, REPLICATEPASSWORD | NOREPLICATEPASSWORD]

MAPDERIVED | NOMAPDERIVED is valid for Replicat. It controls how derived object (e.g. index) names are mapped. With MAPDERIVED, if a MAP statement exists for the derived object, that is used. Otherwise, the name is mapped to the name specified in the TARGET clause of the MAP statement for the base object. MAPDERIVED is the default. NOMAPDERIVED overrides any explicit MAP statements that contain the name of the derived object and prevents name mapping.

NOCROSSRENAME is valid for Extract on Oracle RAC. It assumes that tables excluded from the GoldenGate configuration will not be renamed to names that are in the configuration. NOCROSSRENAME improves performance by eliminating processing that otherwise is required to keep track of excluded tables in case they get renamed to an included name.

REPORT | NOREPORT is valid for Extract and Replicat. It controls whether or not expanded DDL processing information is written to the report file. The default of NOREPORT reports basic DDL statistics. REPORT adds the parameters being used and a step-by-step history of the operations that were processed.

ADDTRANDATA is valid for Extract. Use ADDTRANDATA to enable supplemental logging for a CREATE TABLE, or to update supplemental logging for tables affected by an ALTER TABLE that adds or drops columns, renames the table, or adds or drops a unique key.

DEFAULTUSERPASSWORD is valid for Replicat. It specifies a different password for a replicated {CREATE | ALTER} USER <name> IDENTIFIED BY <password> statement from the one used in the source statement. The password may be entered as clear text or encrypted using the default key or a user-defined <keyname> from ENCKEYS. When using DEFAULTUSERPASSWORD, use the NOREPLICATEPASSWORD option of DDLOPTIONS for Extract.

GETAPPLOPS | IGNOREAPPLOPS is valid for Extract. It controls whether or not DDL operations produced by business applications (anything except Replicat) are included in the content that Extract writes to a trail or file. The default is GETAPPLOPS.

GETREPLICATES | IGNOREREPLICATES is valid for Extract. It controls whether or not DDL operations produced by Replicat are included in the content that Extract writes to a trail or file. The default is IGNOREREPLICATES.

REMOVECOMMENTS is valid for Extract and Replicat. It controls whether or not comments are removed from the DDL operation. By default, comments are not removed. REMOVECOMMENTS BEFORE removes comments before the DDL operation is processed by Extract or Replicat. REMOVECOMMENTS AFTER removes comments after they are used for string substitution.

REPLICATEPASSWORD is valid for Extract. It applies to the password in a {CREATE | ALTER} USER <user> IDENTIFIED BY <password> command. By default GoldenGate uses the source password in the target CREATE or ALTER statement. To prevent the source password from being sent to the target, use NOREPLICATEPASSWORD.
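For instance, an Extract that reports expanded DDL processing detail and adds supplemental logging for tables created or altered on the source could include this sketch:

```
-- Expanded DDL reporting plus automatic supplemental logging
-- for newly created or altered source tables
DDLOPTIONS REPORT, ADDTRANDATA
```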


Managing Oracle GoldenGate

Managing Oracle GoldenGate - Overview


- Command level security
- Trail management
- Process startup and TCP/IP errors
- Reporting and statistics
- Monitoring
- Troubleshooting

Command Level Security

Managing: Command Level Security Overview


- Security rules are established in the CMDSEC file
- Controls which users have access to GGSCI commands
- For example, certain users can be:
  - Allowed to view reports and collect status information
  - Excluded from stop and delete commands

Establishing Command Security
Command-level security can be implemented to control which users have access to which GGSCI commands. For example, you can allow certain users access to INFO and STATUS commands, while restricting access to START and STOP commands to other users. Security levels are defined by the operating system's user groups.


To implement security for GoldenGate commands, you create a CMDSEC file in the GoldenGate directory. Without this file, access to all GoldenGate commands is granted to all users. The CMDSEC file should be created and secured by the user responsible for central administration of Extract/Replicat.
Managing: Command Level Security - CMDSEC File
Command security entries include:

COMMAND NAME - Such as INFO, ADD, START, STOP. May be any GGSCI command name or a wildcard.
OBJECT NAME  - Such as EXTRACT, REPLICAT. May be any GGSCI command object or a wildcard.
GROUP NAME   - For a Windows or Unix group. On Unix operating systems, you can specify a numeric group ID instead of the group name, or a wildcard to specify all groups.
USER NAME    - For a Windows or Unix user. On Unix operating systems, you can specify a numeric user ID instead of the user name, or a wildcard to specify all users.
ACCESS       - Granted or prohibited; YES or NO.

The CMDSEC File Create the CMDSEC file in the GoldenGate directory as an ASCII text file. Each line of the CMDSEC file contains a comment or a command security entry. Comments begin with a pound sign (#), two hyphens (--), or the word COMMENT. For a command security line, separate each of the following components with spaces or tabs. <command name> <command object> <OS group> <OS user> <YES | NO> Note: Command names and command objects are not validated for accuracy.


Managing: Command Level Security - Sample CMDSEC File


Sample CMDSEC File:
#Command   Object     Group      User      Access Allowed?
STATUS     REPLICAT   ggsgroup   ggsuser   NO
STATUS     *          ggsgroup   *         YES
START      REPLICAT   root       *         YES
START      REPLICAT   *          *         NO
*          EXTRACT    200        *         NO
STOP       *          ggsgroup   *         NO
STOP       *          ggsgroup   ggsuser   YES
*          *          root       root      YES
*          *          *          *         NO

Can you see the error with the two STOP lines?

How Security Rules are Resolved Security rules are processed from the top of the CMDSEC file downward. The first entry satisfied is the one used to determine whether or not access is allowed. Order configuration entries from the most specific (those with no wildcards) to the least specific. When no explicit entry is found for a given user and command, access is allowed. The CMDSEC file in the slide illustrates implementation of command security. The STATUS REPLICAT line explicitly restricts user ggsuser, belonging to group ggsgroup, from the STATUS REPLICAT command. Everyone else in the ggsgroup group is given explicit access by the wildcard on the following line (STATUS *). On the next two lines, START REPLICAT is granted to the root group (group 0) and explicitly denied outside the root group. The next line denies EXTRACT commands to anyone belonging to the group with a numeric group ID of 200. The order of the lines that begin with STOP causes a logical error. The first STOP rule denies all STOP commands to all members of group ggsgroup. The second STOP rule grants all STOP commands to user ggsuser. However, because ggsuser is a member of the ggsgroup, he has already been denied access to all STOP commands by the first rule, even though he is supposed to have permission to issue them. The proper way to set up this security is to place the specific entry above the more general one (reverse the order of these entries). The next line (* for command and * for object) allows the root user to execute any command. The last line explicitly restricts all users from using any command. Any rules specified above this line that grant access take precedence over this last entry due to ordering. In the absence of this last line, authority to commands is granted by default. Securing the CMDSEC File


Since the CMDSEC file is the source of security, it must be secured. The administrator must grant read access to anyone allowed to use GGSCI, but restrict write and purge access to the administrator alone.

Trail Management

Managing: Trail Management Overview


Trail files are created by Extract
Plan the initial allocation of space for the trail files
Manage the ongoing number and size of the files:
- Using size and number parameters
- Purging old files

Managing: Trail Management - Allocation of Space

Allocation of space for GoldenGate trails:
- Initially set the number and size of the trail files based on:
  - Transaction log volume
  - Speed of your system
  - Maximum anticipated outage
- With the ADD EXTTRAIL and ADD RMTTRAIL commands, use MEGABYTES to control the maximum size

Initial allocation of storage for trails To prevent trail activity from interfering with business applications, use a separate disk managed by a disk process different from that of the application. To ensure there is enough disk space for the trail files, follow these guidelines: For trails on the source system, there should be enough space to handle data accumulation if the network connection fails. In a network failure, the process reading from the trail terminates, but the primary Extract group reading from logs or audit data continues extracting data. It is not good practice to stop the primary Extract group to prevent further accumulation, because the transaction logs could recycle or the audit could be offloaded. For trails on the target system, data can accumulate when data is extracted and transferred across the network faster than it can be applied to the target database. To estimate the required trail space:
1. Estimate the longest time that you think the network can be unavailable.
2. Estimate how much transaction log volume you generate in one hour.
3. Use the following formula: trail disk space = <transaction log volume in 1 hour> x <number of hours down> x 0.4
Note: The equation uses a multiplier of 40 percent because GoldenGate estimates that only 40 percent of the data in the transaction logs is written to the trail. A more exact estimate can be derived by configuring Extract and allowing it to run for a set time period, such as an hour, to determine the growth. This growth factor can then be applied to the maximum down time.
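As a worked example of the formula (the volumes are assumed for illustration, not taken from the guide): if the source generates 5 GB of transaction log per hour and the longest anticipated network outage is 8 hours, then:

```
trail disk space = 5 GB/hour x 8 hours x 0.4 = 16 GB
```

A system with that profile would need roughly 16 GB of free space for the source trail to ride out the outage.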
Managing: Trail Management PURGEOLDEXTRACTS Parameter
Add PURGEOLDEXTRACTS to clean up trail files:
- In an Extract or Replicat parameter file:
  - Trail files are purged as soon as the Extract/Replicat finishes processing them
  - Do not use if more than one process uses the trail files
- In the Manager parameter file:
  - Set MINKEEP<hours, days, files> to define the minimums for keeping files
  - Specify USECHECKPOINTS to trigger checking whether local processes have finished with the trail files
  - Specify a frequency to purge old files
Best practice: put all PURGEOLDEXTRACTS rules in the Manager parameter file

Manager processing for PURGEOLDEXTRACTS Use PURGEOLDEXTRACTS in a Manager parameter file to purge trail files when GoldenGate has finished processing them. By using PURGEOLDEXTRACTS as a Manager parameter, you can use options that are not available with the Extract or Replicat version. Allowing Manager to control the purging helps to ensure that no file is purged until all groups are finished with it. To purge trail files, you can use the following rules: Purge if all processes are finished with a file as indicated by checkpoints. Use the USECHECKPOINTS option. (This is the default.) MINKEEP rules set the time or number of files to keep. The Manager process determines which files to purge based on Extract and Replicat processes configured on the local system. If at least one process reads a trail file, Manager applies the specified rules; otherwise, the rules do not take effect.


Managing: Trail Management PURGEOLDEXTRACTS Parameter


Manager evaluation process for PURGEOLDEXTRACTS:
FREQUENCYMINUTES | FREQUENCYHOURS: If set, determines how often Manager purges old files. If not set, defaults to the maintenance frequency set in the Manager CHECKMINUTES parameter (default 10).

USECHECKPOINTS: Checkpoints are considered unless NOUSECHECKPOINTS is set. If processing is complete, the file will be purged unless purging would fall below the MINKEEP rules.

MINKEEPHOURS | MINKEEPDAYS | MINKEEPFILES: Only one of the MINKEEP options should be set. If both MINKEEPHOURS and MINKEEPDAYS are set, the last setting is used. If both a MINKEEP<hours or days> and MINKEEPFILES are set, MINKEEP<hours or days> is used and MINKEEPFILES is ignored. If no MINKEEP rules are set, the default is MINKEEPFILES 1.

Syntax: PURGEOLDEXTRACTS {<trail name> | <log table name>} [, USECHECKPOINTS | NOUSECHECKPOINTS] [, <minkeep rule>] [, <frequency>]
Arguments:
<trail name> The trail to purge. Use the fully qualified name.
<log table name> When used to maintain log files rather than trail files, specifies the log file to purge. Requires a login to be specified with the USERID parameter. The table owner is assumed to be the one specified with the USERID parameter.
USECHECKPOINTS (Default) Allows purging after all Extract and Replicat processes are done with the data as indicated by checkpoints, according to any MINKEEP rules.
NOUSECHECKPOINTS Allows purging without considering checkpoints, based on keeping a minimum of either one file if no MINKEEP rule is used, or the number of files specified with a MINKEEP rule.
<minkeep rule> Use only one of the following to set rules for the minimum amount of time to keep data:
MINKEEPHOURS <n> Keeps an unmodified file for at least the specified number of hours.
MINKEEPDAYS <n> Keeps an unmodified file for at least the specified number of days.
MINKEEPFILES <n> Keeps at least n unmodified files, including the active file.


<frequency> Sets the frequency with which to purge old trail files. The default time for Manager to process maintenance tasks is 10 minutes, as specified with the CHECKMINUTES parameter. Every 10 minutes, Manager evaluates the PURGEOLDEXTRACTS frequency and conducts the purge after the specified interval. <frequency> can be one of the following: FREQUENCYMINUTES <n> Sets the frequency, in minutes, with which to purge old trail files. The default purge frequency is 60 minutes. FREQUENCYHOURS <n> Sets the frequency, in hours, at which to purge old trail files.
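A hedged sketch of how these pieces might fit together in a Manager parameter file (the port number and trail path are hypothetical):

```
-- Hypothetical Manager parameter file (mgr.prm) fragment
PORT 7809
-- Purge trails only after all local processes have checkpointed past them,
-- keep unmodified files at least 3 days, and evaluate the rule every 30 minutes
PURGEOLDEXTRACTS /ggs/dirdat/aa*, USECHECKPOINTS, MINKEEPDAYS 3, FREQUENCYMINUTES 30
```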
Managing: Trail Management PURGEOLDEXTRACTS Example
Manager parameter file: PURGEOLDEXTRACTS /ggs/dirdat/AA*, USECHECKPOINTS, MINKEEPHOURS 2 For example: Trail files AA000000, AA000001, and AA000002 exist. Replicat has been down for four hours and has not completed processing any of the files. The result: The files have not been accessed for 4 hours so MINKEEP rule allows purging, but checkpoints indicate the files have not been processed so purge is not allowed.

Additional examples: Example 1 Trail files AA000000, AA000001, and AA000002 exist. The Replicat has been down for four hours and has not completed processing. The Manager parameters include: PURGEOLDEXTRACTS /ggs/dirdat/AA*, NOUSECHECKPOINTS, MINKEEPHOURS 2 Result: All trail files will be purged, since the minimums have been met and checkpoints are not considered. Example 2 The following is an example of why only one of the MINKEEP options should be set. Replicat and Extract have completed processing. There has been no access to the trail files for the last five hours. Trail files AA000000, AA000001, and AA000002 exist. The Manager parameters include: PURGEOLDEXTRACTS /ggs/dirdat/AA*, USECHECKPOINTS, MINKEEPHOURS 4, MINKEEPFILES 4 Result: USECHECKPOINTS requirements have been met, so the minimum rules will be considered when deciding whether to purge AA000002. Only two files would remain if AA000002 were purged, which would violate the MINKEEPFILES parameter. Because both MINKEEPFILES and MINKEEPHOURS have been entered, however, MINKEEPFILES is ignored. The file will be purged because it has not been modified for 5 hours, which meets the MINKEEPHOURS requirement of 4 hours.
Managing: Trail Management - GETPURGEOLDEXTRACTS Command

GETPURGEOLDEXTRACTS is a SEND MANAGER option that displays the rules set with PURGEOLDEXTRACTS.
Syntax: SEND MANAGER {CHILDSTATUS | GETPORTINFO [DETAIL] | GETPURGEOLDEXTRACTS | KILL <process name>}

Managing: Trail Management GETPURGEOLDEXTRACTS Report


Example:
GGSCI > SEND MANAGER GETPURGEOLDEXTRACTS

PurgeOldExtracts Rules
Fileset                           MinHours MaxHours MinFiles MaxFiles UseCP
S:\GGS\DIRDAT\EXTTRAIL\P4\*       0        0        1        0        Y
S:\GGS\DIRDAT\EXTTRAIL\P2\*       0        0        1        0        Y
S:\GGS\DIRDAT\EXTTRAIL\P1\*       0        0        1        0        Y
S:\GGS\DIRDAT\REPTRAIL\P4\*       0        0        1        0        Y
S:\GGS\DIRDAT\REPTRAIL\P2\*       0        0        1        0        Y
S:\GGS\DIRDAT\REPTRAIL\P1\*       0        0        1        0        Y
OK

Extract Trails
Filename                           Oldest_Chkpt_Seqno IsTable IsVamTwoPhaseCommit
S:\GGS\8020\DIRDAT\RT              3                  0       0
S:\GGS\8020\DIRDAT\REPTRAIL\P1\RT  13                 0       0
S:\GGS\8020\DIRDAT\REPTRAIL\P2\RT  13                 0       0
S:\GGS\8020\DIRDAT\REPTRAIL\P4\RT  13                 0       0
S:\GGS\8020\GGSLOG                 735275             1       0
S:\GGS\8020\DIRDAT\EXTTRAIL\P1\ET  14                 0       0
S:\GGS\8020\DIRDAT\EXTTRAIL\P2\ET  14                 0       0
S:\GGS\8020\DIRDAT\EXTTRAIL\P4\ET  14                 0       0


Process Startup and TCP/IP Errors

Managing: Process Startup


AUTOSTART
  AUTOSTART ER *
  AUTOSTART EXTRACT PROD*
  Do not specify any batch task groups, such as one-time initial load groups.

AUTORESTART
  AUTORESTART ER *, RETRIES 5, WAITMINUTES 3, RESETMINUTES 90
  Default maximum retries is 2
  Default wait minutes is 2
  Default reset minutes is 20

AUTOSTART Manager parameter used to start one or more Extract or Replicat processes when Manager starts. This can be useful at system boot time, for example, when you want synchronization to begin immediately. You can use multiple AUTOSTART statements in the same parameter file. The syntax is: AUTOSTART <process type> <group name> <process type> is one of the following: EXTRACT, REPLICAT, ER (Extract and Replicat) <group name> is a group name or wildcard specification for multiple groups. Example AUTOSTART ER * Note: Be careful to not include any batch tasks, such as initial load processes. AUTORESTART Manager parameter used to specify Extract or Replicat processes to be restarted by Manager after abnormal termination. You can use multiple AUTORESTART statements in the same parameter file. The syntax is: AUTORESTART <process type> <group name> [, RETRIES <max retries>] [, WAITMINUTES <wait minutes>] [, RESETMINUTES <reset minutes>] <process type> Specify one of: EXTRACT, REPLICAT, ER (Extract and Replicat) <group name> A group name or wildcard indicating the group names of multiple processes to start.


RETRIES <max retries> is the maximum number of times that Manager should try to restart a process before aborting retry efforts. The default is 2 retries. WAITMINUTES <wait minutes> is the amount of time to pause between discovering that a process has terminated abnormally and restarting the process. Use this option to delay restarting until a necessary resource becomes available or some other event occurs. The default delay is 2 minutes. RESETMINUTES <reset minutes> is the window of time during which retries are counted. The default is 20 minutes. After the time expires, the number of retries reverts to zero. Example In the following example, Manager tries to start all Extract processes three times after failure within a one hour time period, and it waits five minutes before each attempt. AUTORESTART EXTRACT *, RETRIES 3, WAITMINUTES 5, RESETMINUTES 60
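A hedged sketch combining both startup parameters in a Manager parameter file (the port number is hypothetical):

```
-- Hypothetical Manager parameter file fragment
PORT 7809
-- Start all Extract and Replicat groups when Manager starts
AUTOSTART ER *
-- Restart abended Extracts up to 3 times within a 60-minute window,
-- waiting 5 minutes between attempts
AUTORESTART EXTRACT *, RETRIES 3, WAITMINUTES 5, RESETMINUTES 60
```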
Managing: TCP/IP Errors
GoldenGate automatically attempts handling of TCP/IP errors
Error handling defaults are stored in the tcperrs file, located in the GoldenGate installation directory/folder
You may customize any of these settings to fit your environment
Error handling includes:
- Error
- Response (ABEND or RETRY)
- Delay (in centiseconds)
- Maximum Retries

Settings for error handling Error Specifies a TCP/IP error for which you are defining a response. Response Controls whether or not GoldenGate tries to connect again after the defined error. Valid values are either RETRY or ABEND. Delay Controls how long GoldenGate waits before attempting to connect again. Max Retries Controls the number of times that GoldenGate attempts to connect again before aborting.


Managing: TCP/IP Errors (contd)


Sample tcperrs file:
# TCP/IP error handling parameters
# Default error response is abend
# Error         Response  Delay (csecs)  Max Retries
ECONNABORTED    RETRY     1000           10
#ECONNREFUSED   ABEND     0              0
ECONNREFUSED    RETRY     1000           12
ECONNRESET      RETRY     500            10
ENETDOWN        RETRY     3000           50
ENETRESET       RETRY     1000           10
ENOBUFS         RETRY     100            60
ENOTCONN        RETRY     100            10
EPIPE           RETRY     500            10
ESHUTDOWN       RETRY     1000           10
ETIMEDOUT       RETRY     1000           10
NODYNPORTS      ABEND     0              0

Changing TCPERRS To alter the instructions or add instructions for new errors, open the file in a text editor and change any of the values in the columns.
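For instance, to make connection resets retry longer, the ECONNRESET line could be edited as follows (the values are illustrative, not recommendations):

```
# Error         Response  Delay (csecs)  Max Retries
ECONNRESET      RETRY     3000           30
```

This would have GoldenGate wait 30 seconds between attempts and retry up to 30 times after an ECONNRESET before abending.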

Reporting and Statistics

Managing: Reporting - Overview


Set up hourly or daily interval reports via parameters:
- REPORT
- REPORTCOUNT
- REPORTROLLOVER

Generate reports on demand via commands SEND [ EXTRACT | REPLICAT ] <group>, REPORT

View process reports and history of reports VIEW REPORT <group> VIEW REPORT <group>[n] VIEW REPORT <filename>

Reports overview Each Extract, Replicat, and Manager process generates a standard report file that shows:
- parameters in use
- table and column mapping
- database information
- runtime messages and errors
You can set up additional reports to be generated at a defined interval by using the Extract and Replicat parameters REPORT, REPORTCOUNT, and REPORTROLLOVER. You can request additional reports by sending the request to running Extract and Replicat processes.
Managing: Reporting Set up with Parameters
Generate interim runtime statistics using: REPORT
- REPORT AT 14:00
- REPORT ON FRIDAY AT 23:00

REPORTCOUNT
- REPORTCOUNT EVERY 1000000 RECORDS
- REPORTCOUNT EVERY 30 MINUTES, RATE
- REPORTCOUNT EVERY 2 HOURS

REPORTROLLOVER
- REPORTROLLOVER AT 01:00

REPORT Use REPORT to specify when Extract or Replicat generates interim runtime statistics in a process report. The statistics are added to the existing report. By default, runtime statistics are displayed at the end of a run unless the process is intentionally killed. By default, reports are only generated when an Extract or Replicat process is stopped. The statistics for REPORT are carried over from the previous report. For example, if the process performed 10 million inserts one day and 20 million the next, and a report is generated at 3:00 each day, then the first report would show the first 10 million inserts, and the second report would show those plus the current day's 20 million inserts, totaling 30 million. To reset the statistics when a new report is generated, use the STATOPTIONS parameter with the RESETREPORTSTATS option. Syntax REPORT {AT <hh:mi> | ON <day> | AT <hh:mi> ON <day>} Where:


AT <hh:mi> generates the report at a specific time of the day. Using AT without ON generates a report at the specified time every day. ON <day> generates the report on a specific day of the week. Valid values are the days of the week in text (e.g. SUNDAY). REPORTCOUNT Use REPORTCOUNT to generate a count of records that have been processed since the Extract or Replicat process started. Results are printed to the report file and to screen. Record counts can be output at scheduled intervals or after a specific number of records. Record counts are carried over from one report to the other. REPORTCOUNT can be used only once in a parameter file. If there are multiple instances of REPORTCOUNT, GoldenGate uses the last one. Syntax REPORTCOUNT [EVERY] <count> {RECORDS | SECONDS | MINUTES | HOURS} [, RATE] <count> is the interval after which to output a count. RECORDS | SECONDS | MINUTES | HOURS is the unit of measure for <count>. RATE reports the number of operations per second and the change in rate, as a measurement of performance. REPORTROLLOVER Use REPORTROLLOVER to define when the current report file is aged and a new one is created. Old reports are renamed in the format of <group name><n>.rpt, where <group name> is the name of the Extract or Replicat group and <n> is a number that gets incremented by one whenever a new file is created, for example: myext0.rpt, myext1.rpt, myext2.rpt, and so forth. Note Report statistics are carried over from one report to the other. To reset the statistics in the new report, use the STATOPTIONS parameter with the RESETREPORTSTATS option. Either the AT or ON option is required. Both options can be used together. Using AT without ON generates a report at the specified time every day. This parameter does not cause new runtime statistics to be written to the report. To generate new runtime statistics to the report, use the SEND EXTRACT or SEND REPLICAT command with the REPORT option. 
To control when runtime statistics are generated to report files, use the REPORT parameter. Syntax REPORTROLLOVER {AT <hh:mi> | ON <day> | AT <hh:mi> ON <day>}
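A hedged sketch of how these reporting parameters might be combined in an Extract or Replicat parameter file (the times and counts are illustrative):

```
-- Hypothetical Extract/Replicat parameter fragment
-- Write interim runtime statistics every day at 23:59
REPORT AT 23:59
-- Print a record count and processing rate every million records
REPORTCOUNT EVERY 1000000 RECORDS, RATE
-- Age the report file daily at 01:00, resetting statistics in the new file
REPORTROLLOVER AT 01:00
STATOPTIONS RESETREPORTSTATS
```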


Managing: Reporting SEND and VIEW REPORT Commands


Report on demand using commands SEND [ EXTRACT | REPLICAT ] {group}, REPORT
- SEND ER *, REPORT
- SEND DB2INIT, REPORT
- SEND REPLICAT ACCT*, REPORT

VIEW REPORT <group>


- VIEW REPORT FINANCE

VIEW REPORT <group>[<n>]


- VIEW REPORT FINANCE 5

VIEW REPORT <filename>


- VIEW REPORT /usr/ggs/reportarchives/finance999.rpt

SEND REPORT Use SEND REPORT to communicate with a running process and generate an interim statistical report that includes the number of inserts, updates, and deletes output since the last report. The request is processed as soon as it is ready to accept commands from users. Syntax: REPORT [HANDLECOLLISIONS [<table spec>] ] HANDLECOLLISIONS shows tables for which HANDLECOLLISIONS has been enabled. <table spec> restricts the output to a specific target table or a group of target tables specified with a standard wildcard (*). VIEW REPORT Use VIEW REPORT to view the process report that is generated by Extract or Replicat. The report lists process parameters, run statistics, error messages, and other diagnostic information. The command displays only the current report. Reports are aged whenever a process starts. Old reports are appended with a sequence number, for example finance0.rpt, finance1.rpt, and so forth. To view old reports, use the [<n>] option. Syntax VIEW REPORT {<group name>[<n>] | <file name>} <group name> The name of the group. The command assumes the report file named <group>.rpt in the GoldenGate dirrpt sub-directory. <n> The number of an old report. Report files are numbered from 0 (the most recent) to 9 (the oldest). <file name> A fully qualified file name, such as c:\ggs\dirrpt\orders.rpt.


Example 1 VIEW REPORT orders3 Example 2 VIEW REPORT c:\ggs\dirrpt\orders.rpt

Managing: Statistics - Overview


Generate statistics on demand: STATS [EXTRACT | REPLICAT] {group}
View latest statistics: STATS <group>, LATEST
Display daily statistics: STATS <group>, DAILY
Other statistics options:
- By table
- Totals only
- Hourly
- Reset statistics
- Report rate
- Report fetch statistics (Extract)
- Report collisions (Replicat)
- STATOPTIONS parameter (report fetch statistics or collisions, reset statistics on report rollover)

Statistics overview Extract and Replicat maintain statistics in memory during normal processing. These statistics can be viewed online with GGSCI by issuing the STATS command. There are many options with STATS, such as resetting the counters, displaying brief totals only, or producing statistics on a per-table basis.


Managing: Statistics STATS Command


STATS [EXTRACT | REPLICAT] {group} or STATS {group}
STATS <group>, <statistic>
  TOTAL | DAILY | HOURLY | LATEST | RESET
STATS <group>, <option>
  TABLE {owner.tablename}
  TOTALSONLY
  REPORTRATE
  REPORTFETCH | NOREPORTFETCH (Extract only)
  REPORTDETAIL | NOREPORTDETAIL (Replicat only)

STATS Command Use STATS REPLICAT or STATS EXTRACT to display statistics for one or more groups. Syntax: STATS EXTRACT | REPLICAT <group name> or just STATS <group name> [, <statistic>] [, TABLE <table>] [, TOTALSONLY <table spec>] [, REPORTRATE <time units>] [, REPORTFETCH | NOREPORTFETCH] [, REPORTDETAIL | NOREPORTDETAIL] [, ... ] <group name> The name of a Replicat group or a wildcard (*) to specify multiple groups. For example, T* shows statistics for all groups whose names begin with T. <statistic> The statistic to be displayed. More than one statistic can be specified by separating each with a comma, for example STATS REPLICAT finance, TOTAL, DAILY. Valid values are: TOTAL Displays totals since process startup. DAILY Displays totals since the start of the current day. HOURLY Displays totals since the start of the current hour. LATEST Displays totals since the last RESET command. RESET Resets the counters in the LATEST statistical field. TABLE <table> Displays statistics only for the specified table or a group of tables specified with a wildcard (*).


TOTALSONLY <table spec> Summarizes the statistics for the specified table or a group of tables specified with a wildcard (*). REPORTRATE <time units> Displays statistics in terms of processing rate rather than values. <time units> valid values are HR, MIN, SEC. REPORTFETCH | NOREPORTFETCH (Extract) Controls whether or not statistics about fetch operations are included in the output. The default is NOREPORTFETCH. REPORTDETAIL | NOREPORTDETAIL (Replicat) Controls whether or not the output includes operations that were not replicated as the result of collision errors. These operations are reported in the regular statistics (inserts, updates, and deletes performed) and also as statistics in the detail display, if enabled. For example, if 10 records were insert operations and they were all ignored due to duplicate keys, the report would indicate that there were 10 inserts and also 10 discards due to collisions. The default is REPORTDETAIL.

Managing: Statistics STATS Command Example


GGSCI > STATS EXTRACT GGSEXT, LATEST, REPORTFETCH
Sending STATS request to EXTRACT GGSEXT...
Start of Statistics at 2006-06-08 11:45:05.
DDL replication statistics (for all trails):
*** Total statistics since extract started ***
        Operations                 3.00
        Mapped operations          3.00
        Unmapped operations        0.00
        Default operations         0.00
        Excluded operations        0.00
Output to ./dirdat/aa:
Extracting from JDADD.EMPLOYEES to JDADD.EMPLOYEES:
*** Latest statistics since 2006-06-08 11:36:55 ***
        Total inserts            176.00
        Total updates              0.00
        Total deletes             40.00
        Total discards             0.00
        Total operations         216.00
Extracting from JDADD.DEPARTMENTS to JDADD.DEPARTMENTS:
*** Latest statistics since 2006-06-08 11:36:55 ***
        No database operations have been performed.

STATS Command Example: The example displays the statistics accumulated since the last RESET command for the Extract group GGSEXT, including fetch statistics (REPORTFETCH).


Managing: Statistics STATOPTIONS Parameter


REPORTDETAIL | NOREPORTDETAIL: Report statistics for collisions (Replicat)
REPORTFETCH | NOREPORTFETCH: Report statistics on row fetching (Extract)
RESETREPORTSTATS | NORESETREPORTSTATS: Resets statistics when a new report file is created by the REPORTROLLOVER parameter

STATOPTIONS Parameter Use STATOPTIONS to specify information to be included in statistical displays generated by the STATS EXTRACT or STATS REPLICAT command. These options also can be enabled as needed as arguments to those commands. Syntax: STATOPTIONS [, REPORTDETAIL | NOREPORTDETAIL] [, REPORTFETCH | NOREPORTFETCH] [, RESETREPORTSTATS | NORESETREPORTSTATS] REPORTDETAIL | NOREPORTDETAIL Valid for Replicat. REPORTDETAIL returns statistics on operations that were not replicated as the result of collision errors. These operations are reported in the regular statistics (inserts, updates, and deletes performed) plus as statistics in the detail display, if enabled. For example, if 10 records were insert operations and they were all ignored due to duplicate keys, the report would indicate that there were 10 inserts and also 10 discards due to collisions. NOREPORTDETAIL turns off reporting of collision statistics. The default is REPORTDETAIL. REPORTFETCH | NOREPORTFETCH Valid for Extract. REPORTFETCH returns statistics on row fetching, such as that triggered by a FETCHCOLS clause or fetches that must be performed when not enough information is in the transaction record. NOREPORTFETCH turns off reporting of fetch statistics. The default is NOREPORTFETCH. RESETREPORTSTATS | NORESETREPORTSTATS Controls whether or not report statistics are reset when a new process report is created. The default of NORESETREPORTSTATS continues the statistics from one report


to another (as the process stops and starts or as the report rolls over based on the REPORTROLLOVER parameter). To reset statistics, use RESETREPORTSTATS.

Monitoring Oracle GoldenGate

Managing: Monitoring - Overview


Setting up operator log alerts for processes stopping and starting: DOWNREPORT, DOWNCRITICAL, UPREPORT
Setting up operator log alerts when lag thresholds are exceeded: LAGREPORT, LAGINFO, LAGCRITICAL
Setting up email alerts for messaging: event text in the error log generates email notification
Setting up email alerts for latency: use a lag alert to notify you when a process has exceeded the threshold

GoldenGate provides proactive messaging for processes that are not running:
- DOWNCRITICAL messages for failed processes
- DOWNREPORT reminders for failed processes
GoldenGate provides proactive messaging for processes that are lagging:
- LAGINFO and LAGCRITICAL control warning and critical error messaging for latency thresholds that have been exceeded
- LAGREPORT controls the frequency of latency monitoring
Director Client can be configured to send automatic email alerts to operators or email distribution groups:
- LATENCY ALERTS sends emails when latency thresholds have been exceeded
- MESSAGE ALERTS sends emails when any particular error message is generated
LAG charts can be displayed to show the history of average latency over a period of time


Managing: Monitoring Processes


Can set Manager parameters for:
DOWNREPORT - Frequency to report down processes
  DOWNREPORTHOURS 1
  DOWNREPORTMINUTES 15
DOWNCRITICAL - Include abended processes in DOWNREPORT (default is not to)
UPREPORT - Frequency to report running processes
  UPREPORTHOURS 1
  UPREPORTMINUTES 10

DOWNREPORT Whenever a process starts or stops, events are generated to the error log, but those messages can easily be overlooked if the log is large. DOWNREPORTMINUTES and DOWNREPORTHOURS set an interval for reporting on terminated processes. Only abended processes are reported as critical unless DOWNCRITICAL is specified. Syntax DOWNREPORTMINUTES <minutes> | DOWNREPORTHOURS <hours> <minutes> The frequency, in minutes, to report processes that are not running. <hours> The frequency, in hours, to report processes that are not running. Example The following generates a report every 30 minutes. DOWNREPORTMINUTES 30 DOWNCRITICAL Specifies that both abended processes and those that have stopped gracefully are marked as critical in the down report. UPREPORT Use UPREPORTMINUTES or UPREPORTHOURS to specify the frequency with which Manager reports Extract and Replicat processes that are running. Every time one of those processes starts or stops, events are generated. Those messages are easily overlooked in the error log because the log can be so large. UPREPORTMINUTES and UPREPORTHOURS report on a periodic basis to ensure that you are aware of the process status.
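A hedged sketch combining these status-reporting parameters in a Manager parameter file (the intervals are illustrative):

```
-- Hypothetical Manager parameter fragment
-- Report stopped processes every 30 minutes, treating graceful stops
-- as critical too, and confirm running processes every hour
DOWNREPORTMINUTES 30
DOWNCRITICAL
UPREPORTHOURS 1
```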


Managing: Monitoring Lag


[Diagram: the primary Extract reads the transaction log on the source database and writes to a local trail; a data pump sends the trail data over the TCP/IP network to the Server Collector, which writes the remote trail; Replicat applies the data to the target database. Manager runs on both systems. Extract lag, pump lag, and Replicat lag are each measured as the difference between the source commit timestamp and the system time at which the process writes; together they make up the end-to-end latency.]
Definitions: Lag This is the Extract lag or Pump lag in the diagram. It is the difference in time between when a change record was processed by Extract (written to the trail) and the timestamp of that record in the data source. Latency This is the Replicat lag in the diagram. It is the difference in time between when a change is made to source data and when that change is reflected in the target data.



Managing: Monitoring Lag (contd)


Can set Manager parameters for:
LAGREPORT - Frequency to check for lag
  LAGREPORTMINUTES <minutes>
  LAGREPORTHOURS <hours>
LAGINFO - Frequency to report lag to the error log
  LAGINFOSECONDS <seconds>
  LAGINFOMINUTES <minutes>
  LAGINFOHOURS <hours>
LAGCRITICAL - Lag threshold that forces a warning message to the error log
  LAGCRITICALSECONDS <seconds>
  LAGCRITICALMINUTES <minutes>
  LAGCRITICALHOURS <hours>

LAGREPORT Manager parameter, LAGREPORTMINUTES or LAGREPORTHOURS, used to specify the interval at which Manager checks for Extract and Replicat lag. The syntax is: LAGREPORTMINUTES <minutes> | LAGREPORTHOURS <hours> <minutes> The frequency, in minutes, to check for lag. <hours> The frequency, in hours, to check for lag. Example LAGREPORTHOURS 1 LAGINFO Manager parameter, LAGINFOSECONDS, LAGINFOMINUTES, or LAGINFOHOURS, used to specify how often to report lag information to the error log. A value of zero (0) forces a message at the frequency specified with the LAGREPORTMINUTES or LAGREPORTHOURS parameter. If the lag is greater than the value specified with the LAGCRITICAL parameter, Manager reports the lag as critical; otherwise, it reports the lag as an informational message. The syntax is: LAGINFOSECONDS <seconds> | LAGINFOMINUTES <minutes> | LAGINFOHOURS <hours> <seconds> The frequency, in seconds, to report lag information. <minutes> The frequency, in minutes, to report lag information. <hours> The frequency, in hours, to report lag information. Example LAGINFOHOURS 1

LAGCRITICAL
The Manager parameter LAGCRITICALSECONDS, LAGCRITICALMINUTES, or LAGCRITICALHOURS specifies a lag threshold that is considered critical, forcing a warning message to the error log when the threshold is reached. This parameter affects Extract and Replicat processes on the local system. The syntax is:

LAGCRITICALSECONDS <seconds> | LAGCRITICALMINUTES <minutes> | LAGCRITICALHOURS <hours>
  <seconds>  Lag threshold, in seconds.
  <minutes>  Lag threshold, in minutes.
  <hours>    Lag threshold, in hours.

Example: LAGCRITICALSECONDS 60
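Taken together, these parameters might appear in a Manager parameter file (mgr.prm) as in the following sketch; the port number and thresholds are illustrative values, not recommendations:

```
PORT 7809
-- Check Extract and Replicat lag every 30 minutes
LAGREPORTMINUTES 30
-- Write an informational lag message to the error log every hour
LAGINFOHOURS 1
-- Treat lag over two minutes as critical and log a warning
LAGCRITICALMINUTES 2
```

Because LAGINFOHOURS is nonzero here, informational lag messages are written hourly; any process whose lag exceeds two minutes is reported as critical instead.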
Managing: Monitoring - Setting Email Alerts in Director
Within Director, you can configure email alerts:
  when specific messages appear in the GoldenGate error log
  when a specific lag threshold has been exceeded

Email Alerts
Director Client can be configured to send automatic email alerts to operators or email distribution groups:
  MESSAGE ALERTS send emails when a particular error message is generated.
  LATENCY ALERTS send emails when latency thresholds have been exceeded.
Lag charts can be displayed to show the history of average latency over a period of time.

What is a System Alert?
A System Alert is a filter you set up in Director on event type and/or text, or on checkpoint lag, for a particular process or group of processes. If the criteria established in the Alert match, an audible warning can be triggered, or an email may be sent to one or more recipients.

Setting up a System Alert


Using the System Alert Setup page, choose to enter a new Alert, or change an existing one, by selecting from the Select an Alert dropdown list. When the desired Alert is displayed, complete the following fields:

Alert Name
A friendly name that you will use to refer to the Alert; it can be anything up to 50 characters.

Alert Type
Choose between the following options; note that some fields on the page change depending on what you select here.
  Process Lag: lets you specify a threshold for checkpoint lag. Should lag go over the threshold, notifications are generated.
  Event Text: lets you specify an event type or text. If an event matches, notifications are generated.

Instance Name
Choose a specific Manager Instance to apply the Alert criteria to, or make the Alert global to all instances.

Process Name
Type the process name to apply the criteria to. You may use a wildcard (*) to make partial matches, for example:
  '*' matches all process names
  'EXT*' matches processes beginning with 'EXT'
  '*SF*' matches all processes with 'SF' anywhere in the name

Criteria
  When Lag goes above: displayed when you select the Process Lag Alert type. Enter the desired lag threshold here.
  When Event Type is: displayed when you select the Event Text Alert type. Select the event types (ERROR, WARNING) to match here.
  When Event Text contains: displayed when you select the Event Text Alert type. Enter the text to match; leave blank or enter '*' to match any text.

Action
  Send eMail to: enter a comma-separated list of email addresses.

Oracle GoldenGate Fundamentals Student Guide

Troubleshooting

Troubleshooting - Resources
GGSCI commands to view:
  Processing status
  Events
  Errors
  Checkpoints

GoldenGate reports and logs:
  Process reports
  Event/Error log
  Discard file
  System logs

Resources for gathering evidence
GoldenGate sends processing information to several locations to help you monitor processes and troubleshoot failures and unexpected results.

GGSCI
The GGSCI command interface provides several helpful commands for troubleshooting. For syntax, view the GGSCI online help by issuing the HELP command, or see the alphabetical reference in the Oracle GoldenGate Reference Guide.

Process Reports
Each Extract, Replicat, and Manager process generates a report file that shows:
  parameters in use
  table and column mapping
  database information
  runtime messages and errors

Event/Error log
The error log is a file that shows:
  a history of GGSCI commands
  processes that started and stopped
  errors that occurred
  informational messages


Discard file
GoldenGate creates a discard file when the DISCARDFILE parameter is used in the Extract or Replicat parameter file and the process has a problem with a record it is processing. The discard file contains column-level details for operations that a process could not handle, including:
  the database error message
  the trail file sequence number
  the relative byte address of the record in the trail
  details of the discarded record

System logs
GoldenGate writes errors that occur at the operating system level to the Event Viewer on Windows or to the syslog on UNIX. On Windows, this feature must be installed. Errors appearing in the system logs also appear in the GoldenGate error log.

Director
Most of the information viewed with GGSCI commands can also be viewed through Director Client and Director Web, GoldenGate's graphical user interfaces. For more information about Director, see the Director online help.
Troubleshooting - Show Status, Events, and Errors
These commands display basic processing status, events, and errors:
  SEND <group>, STATUS shows the current processing status.
  STATS <group> shows statistics about operations processed.
  INFO ALL shows status and lag for all Manager, Extract, and Replicat processes on the system.
  INFO <group>, DETAIL shows process status, data source, checkpoints, lag, working directory, and files containing processing information.

The data source can be a transaction log or a trail/extract file for Extract, or a trail/extract file for Replicat. Sample INFO DETAIL command output (some redo logs purposely omitted due to space constraints):

GGSCI> INFO EXT_CTG1, DETAIL

EXTRACT    EXT_CTG1  Last Started 2005-12-16 11:10  Status ABENDED
Checkpoint Lag       00:00:00 (updated 444:20:28 ago)
Log Read Checkpoint  File C:\ORACLE\ORADATA\ORA10G\REDO01.LOG
                     2005-12-22 23:20:39  Seqno 794, RBA 8919040

Target Extract Trails:

  Remote Trail Name                    Seqno    RBA      Max MB
  C:\GoldenGate\Oracle\dirdat\g1       0        25380    5

  Extract Source                         Begin              End
  C:\ORACLE\ORADATA\ORA10G\REDO01.LOG    2005-12-16 05:55   2005-12-22 23:20
  C:\ORACLE\ORADATA\ORA10G\REDO03.LOG    2005-12-15 15:11   2005-12-16 05:55
  C:\ORACLE\ORADATA\ORA10G\REDO03.LOG    * Initialized *    2005-12-15 15:11
  C:\ORACLE\ORADATA\ORA10G\REDO03.LOG    2005-12-15 14:27   2005-12-15 15:10
  Not Available                          2005-12-13 10:53   2005-12-13 10:53
  Not Available                          * Initialized *    2005-12-13 10:53

Current directory    C:\GoldenGate\Oracle_10.1.10\win_ora101_v8020_019
Report file          C:\GoldenGate\Oracle_10.1.10\win_ora101_v8020_019\dirrpt\EXT_CTG1.rpt
Parameter file       C:\GoldenGate\Oracle_10.1.10\win_ora101_v8020_019\dirprm\EXT_CTG1.prm
Checkpoint file      C:\GoldenGate\Oracle_10.1.10\win_ora101_v8020_019\dirchk\EXT_CTG1.cpe
Process file         C:\GoldenGate\Oracle_10.1.10\win_ora101_v8020_019\dirpcs\EXT_CTG1.pce
Error log            C:\GoldenGate\Oracle_10.1.10\win_ora101_v8020_019\ggserr.log
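The other status commands listed above follow the same pattern. A brief sketch of a GGSCI session, assuming a hypothetical Extract group EXT_CTG1 (substitute your own group names):

```
GGSCI> SEND EXTRACT EXT_CTG1, STATUS
GGSCI> STATS EXTRACT EXT_CTG1
GGSCI> INFO ALL
GGSCI> INFO EXTRACT EXT_CTG1, DETAIL
```

INFO ALL is usually the fastest starting point, since it shows status and lag for every process at once; the group-level commands then narrow the investigation.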


Troubleshooting - Show Checkpoints


INFO <group>, SHOWCH provides detailed checkpoint information.

Extract maintains the following read checkpoints:
  Startup - starting position in the data source (transaction log or trail)
  Recovery - position of the oldest unprocessed transaction in the data source
  Current - position of the last record read in the data source
and one write checkpoint:
  Current - current write position in the trail

Replicat maintains the following read checkpoints:
  Startup - starting position in the trail
  Current - position of the last record read in the trail

Example of an Extract checkpoint report:

GGSCI> INFO EXTRACT JC108XT, SHOWCH

Checkpoint Lag       00:00:00 (updated 00:00:01 ago)
Log Read Checkpoint  File /orarac/oradata/racq/redo01.log
                     2006-06-09 14:16:45  Thread 1, Seqno 47, RBA 68748800
Log Read Checkpoint  File /orarac/oradata/racq/redo04.log
                     2006-06-09 14:16:19  Thread 2, Seqno 24, RBA 65657408

Current Checkpoint Detail:

Read Checkpoint #1
  Oracle RAC Redo Log
  Startup Checkpoint (starting position in data source):
    Thread #: 1
    Sequence #: 47
    RBA: 68548112
    Timestamp: 2006-06-09 13:37:51.000000
    SCN: 0.8439720
    Redo File: /orarac/oradata/racq/redo01.log
  Recovery Checkpoint (position of oldest unprocessed transaction in data source):
    Thread #: 1
    Sequence #: 47
    RBA: 68748304
    Timestamp: 2006-06-09 14:16:45.000000
    SCN: 0.8440969
    Redo File: /orarac/oradata/racq/redo01.log
  Current Checkpoint (position of last record read in the data source):
    Thread #: 1
    Sequence #: 47
    RBA: 68748800
    Timestamp: 2006-06-09 14:16:45.000000
    SCN: 0.8440969
    Redo File: /orarac/oradata/racq/redo01.log

Read Checkpoint #2
  Oracle RAC Redo Log
  Startup Checkpoint (starting position in data source):
    Sequence #: 24
    RBA: 60607504
    Timestamp: 2006-06-09 13:37:50.000000
    SCN: 0.8439719
    Redo File: /orarac/oradata/racq/redo04.log
  Recovery Checkpoint (position of oldest unprocessed transaction in data source):
    Thread #: 2
    Sequence #: 24
    RBA: 65657408
    Timestamp: 2006-06-09 14:16:19.000000
    SCN: 0.8440613
    Redo File: /orarac/oradata/racq/redo04.log
  Current Checkpoint (position of last record read in the data source):
    Thread #: 2
    Sequence #: 24
    RBA: 65657408
    Timestamp: 2006-06-09 14:16:19.000000
    SCN: 0.8440613
    Redo File: /orarac/oradata/racq/redo04.log

Write Checkpoint #1
  GGS Log Trail
  Current Checkpoint (current write position):
    Sequence #: 2
    RBA: 2142224
    Timestamp: 2006-06-09 14:16:50.567638
    Extract Trail: ./dirdat/eh
  Header:
    Version = 2
    Record Source = A
    Type = 6
    # Input Checkpoints = 2
    # Output Checkpoints = 1
  File Information:
    Block Size = 2048
    Max Blocks = 100
    Record Length = 2048
    Current Offset = 0
  Configuration:
    Data Source = 3
    Transaction Integrity = 1
    Task Type = 0
  Status:
    Start Time = 2006-06-09 14:15:14
    Last Update Time = 2006-06-09 14:16:50
    Stop Status = A

Troubleshooting - Recovery

Both Extract and Replicat restart after a failure at their last read checkpoint.
The SEND EXTRACT <group>, STATUS command reports when Extract is recovering.
Checkpoint information is updated during the recovery stage, allowing you to monitor progress with the INFO command.
If an error prevents Replicat from moving forward in the trail, you can restart Replicat after the bad transaction:
  START REPLICAT <group> SKIPTRANSACTION | ATCSN <csn> | AFTERCSN <csn>
To determine the CSN to use, view the Replicat report file with the VIEW REPORT <group> command, or view the trail with the Logdump utility.
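For example, to restart a hypothetical Replicat group REP1 past a failing transaction, any one of the following forms can be used (the group name and CSN shown are illustrative, not values from this course environment):

```
GGSCI> START REPLICAT REP1, SKIPTRANSACTION
GGSCI> START REPLICAT REP1, ATCSN 6488359
GGSCI> START REPLICAT REP1, AFTERCSN 6488359
```

SKIPTRANSACTION skips only the first transaction at the current read position, whereas the CSN options reposition Replicat in the trail; use the CSN reported for the failing transaction, found in the report file or with Logdump.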


Troubleshooting - Process Report


Each Extract, Replicat, and Manager process has its own report file that shows:
  a banner with the startup time
  parameters in use
  table and column mapping
  database and environment information
  runtime messages and errors
The report provides initial clues, such as an invalid parameter, data mapping errors, or database error messages.
View with VIEW REPORT <group> in GGSCI.
Director provides a single click to the next or previous historical reports.

Viewing a process report
View the report file name with INFO <group>, DETAIL in GGSCI. The default location is the dirrpt directory in the GoldenGate home location. Old reports are kept with active reports; you may need to go back to them if the GoldenGate environment is large. Only the last ten reports are kept; the number of reports is not configurable. Old reports are numbered 0 through 9. For a group named EXT_CTG, here is a sample of the report history:

12/15/2005  02:31 PM    5,855  EXT_CTG.rpt     <- current report
12/14/2005  11:58 AM    5,855  EXT_CTG0.rpt    <- next oldest
12/14/2005  11:13 AM    4,952  EXT_CTG1.rpt
12/14/2005  02:14 AM    5,931  EXT_CTG2.rpt
12/13/2005  04:00 PM    5,453  EXT_CTG3.rpt
12/13/2005  03:56 PM    5,435  EXT_CTG4.rpt
12/13/2005  03:54 PM    5,193  EXT_CTG5.rpt
12/12/2005  06:26 PM    5,636  EXT_CTG6.rpt
12/12/2005  06:38 AM    5,193  EXT_CTG7.rpt


Troubleshooting - Event Log (ggserr.log)


The GoldenGate Event Log provides:
  a history of GGSCI commands
  processes that started and stopped
  errors that occurred
  informational messages
It shows the events leading to an error. For example, you might discover that:
  someone stopped a process
  a process failed to make a TCP/IP or database connection
  a process could not open a file
View with:
  a standard text editor or shell command
  the GGSCI command VIEW GGSEVT
  Oracle GoldenGate Director

The log's name is ggserr.log, and it is located in the root GoldenGate directory. You can also locate the file using the INFO EXTRACT <group>, DETAIL command. The location of the ggserr.log file is listed with the other GoldenGate working directories, as shown below:

GGSCI> info extract oraext, detail

EXTRACT    ORAEXT  Last Started 2005-12-28 10:45  Status STOPPED
Checkpoint Lag       00:00:00 (updated 161:55:17 ago)
Log Read Checkpoint  File C:\ORACLE\ORADATA\ORA920\REDO03.LOG
                     2005-12-29 17:55:57  Seqno 34, RBA 104843776
<some contents deliberately omitted>
Current directory    C:\GoldenGate802
Report file          C:\GoldenGate802\dirrpt\ORAEXT.rpt
Parameter file       C:\GoldenGate802\dirprm\ORAEXT.prm
Checkpoint file      C:\GoldenGate802\dirchk\ORAEXT.cpe
Process file         C:\GoldenGate802\dirpcs\ORAEXT.pce
Error log            C:\GoldenGate802\ggserr.log


Troubleshooting - Discard File


The GoldenGate discard file:
  contains column-level details for operations that the process could not handle
  is created when Extract or Replicat has a problem with the record it is processing and the DISCARDFILE <filename> parameter is used in the Extract or Replicat parameter file
  is usually used for Replicat, to log operations that could not be reconstructed or applied
  can help you resolve data mapping issues

Discard file
The location of the discard file is set in either the Extract or Replicat parameter file by using the DISCARDFILE parameter:

DISCARDFILE C:\GoldenGate\dirrpt\discard.txt, <OPTION>

Options are:
  APPEND - adds new content to old content in an existing file
  PURGE - purges an existing file before writing new content
  MEGABYTES <n> - sets the maximum size of the file (the default is 1 MB)

Discard file sample:

Aborting transaction beginning at seqno 12 rba 231498
         error at seqno 12 rba 231498
Problem replicating HR.SALES to HR.SALES
Mapping problem with compressed update record (target format)...
ORDER_ID =
ORDER_QTY = 49
ORDER_DATE = 2005-10-19 14:15:20
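In context, a Replicat parameter file using DISCARDFILE might look like the following sketch; the group name, login credentials, file name, and mapping are hypothetical placeholders:

```
REPLICAT REP1
USERID ggs_user, PASSWORD ggs_pwd
-- Log unprocessable records, keep existing content, cap the file at 10 MB
DISCARDFILE C:\GoldenGate\dirrpt\rep1_discard.txt, APPEND, MEGABYTES 10
MAP HR.SALES, TARGET HR.SALES;
```

With APPEND, discarded records accumulate across restarts, which preserves history for troubleshooting; PURGE would instead start the file fresh each time the process starts.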


Technical Support

My Oracle Support Web Site


Go to My Oracle Support (http://support.oracle.com).
New users must register for an account with a Customer Support Identifier (CSI) and be approved by their Customer User Administrator (CUA).
The customer portal allows you to:
  search Knowledge Documents on known issues
  log and track Service Requests (SRs) for bugs or enhancement requests
  download patches
Enter GoldenGate as the product.

Global Customer Care


Global Customer Care is a highly available expert resource that resolves customers' business-related inquiries while capturing and sharing feedback to enhance the customer relationship.
Call US: 800 223 1711
Other Global Support hot lines are listed at: http://www.oracle.com/support/contact.html
Responsibilities include, but are not limited to:
  access and navigation of Oracle Support websites
  placing software orders
  assisting with CSI questions
  assisting with product or platform issues related to SR logging
  locating and providing published information
  creating software upgrade orders


Oracle Advisors Webcasts


A new way to receive information about your Oracle products and services.
Go to MetaLink Note 553747.1 - Welcome to the Oracle Advisor Webcast Program!
It includes:
  links to the Advisor Webcast page
  instructions for registering and viewing both live and archived webcasts
  the current menu of scheduled webcasts

