
Continuent™ Replicator Guide



www.continuent.com
Copyright © 2008 Continuent
The trademarks, logos, and service marks in this Document are the property of Continuent or other third parties. You are not permitted to use these
Marks without the prior written consent of Continuent or such appropriate third party. Continuent, Tungsten, uni/cluster, m/cluster, p/cluster, uc/
connector, and the Continuent logo are trademarks or registered trademarks of Continuent in the United States, France, Finland and other countries.

All Materials on this Document are (and shall continue to be) owned exclusively by Continuent or other respective third party owners and are
protected under applicable copyrights, patents, trademarks, trade dress and/or other proprietary rights. Under no circumstances will you acquire
any ownership rights or other interest in any Materials by or through your access or use of the Materials. All right, title and interest not expressly
granted is reserved to Continuent.

All rights reserved.



Table of Contents
1 Building, Installing, and Running Replicator
  1.1 Installation Prerequisites
  1.2 Building Replicator from Source
  1.3 Installing Replicator
  1.4 Running Replicator
2 Replication Architecture and Key Components
3 Introduction to Replicator
  3.1 Basic Replication
  3.2 Provisioning Slaves
  3.3 Switching the Master
  3.4 Consistency Checking
  3.5 Monitoring and Management APIs
    3.5.1 JMX/MBean Interface
4 Troubleshooting Replicator
  4.1 Error Handling
  4.2 Dealing with Failed Slaves
  4.3 Dealing with a Failed Master
  4.4 Database Failure
  4.5 Replicator State Machine
5 Command Reference Guide
  5.1 Running the Replicator from the Command Line Interface
  5.2 Running the Replicator as an Operating System Service
  5.3 Controlling a Running Replicator Process
6 Extending the Replicator System

Continuent Replicator Guide - Document issue 1.0 iv



Chapter 1. Building, Installing, and Running Replicator
This chapter explains how to install and run the Replicator. It is assumed that each Replicator instance runs
on a separate database node.

1.1 Installation Prerequisites


Ant 1.7.0 or above is required to build the Replicator.

JDK 1.5 or above is required to run the Replicator.

1.2 Building Replicator from Source


Build the Replicator using Ant. Build instructions are provided in the README file found at the top of the
source tree (the root directory). The build procedure creates a build subdirectory in the root directory with
the following distribution files.

• tungsten-replicator-version.tgz

• tungsten-replicator-version.zip

You can install either archive as the contents are identical.

1.3 Installing Replicator


To install the Replicator, proceed as follows:

1. Copy the distribution archive to the database nodes and unpack it at the location of your choice. The
code examples that follow use this location as the default directory.

On Linux, Solaris, Mac OS X, and other Unix variants, we recommend installing in the /opt/continuent
directory. On Windows, use, for example, the C:\Program Files directory.

Note
If you use Windows and cannot unpack the .zip distribution archive, try installing another file
compression program, such as 7-Zip. You can also use the jar program distributed with the
Java JDK.

2. Prepare the database nodes.

a. Install MySQL on the database nodes.

b. Edit your my.cnf configuration file to enable binary logging and to disable logging of the tungsten
database. Use, for example, the my.cnf configuration snippet below:

[mysqld]
# Master replication settings.
log-bin=mysql-bin
server-id=1
binlog-ignore-db=tungsten

c. Start MySQL.

d. Create a new empty database called tungsten. Use, for example, the following command on
your MySQL client:


mysql> create database tungsten;

3. Dump the master database and upload it to all slaves in the cluster. For example, issue the following
command on the master:

mysqldump -uuser -ppassword -hmaster_host --all-databases > mydump.sql

On the slave:

mysql -uuser -ppassword -hslave_host < mydump.sql

Note
On Debian-based distributions, you may have to copy the password value in /etc/mysql/debian.cnf
from the master to the slave after taking a dump. Otherwise MySQL scripts will
not work.

4. Configure the Replicator instances.

a. In the unpacked distribution, open the conf/replicator.properties configuration file on
each node.

b. See below for example configurations. Pay attention to the following restrictions and instructions:

• replicator.node_id must be a unique name for each Replicator. The host name is a good
value if you have one Replicator per host.

• replicator.thl.remote_uri must contain the host name of the master.

• replicator.dbms_uri must have the correct host, port, login, and password value for the
server.

• The extractor binlog directory must be the same as defined with the log_bin option in the
my.cnf MySQL configuration file.

In some Linux distributions, the MySQL binlog resides in /var/lib/mysql instead of the default
value in conf/replicator.properties (/var/log/mysql).

Master MySQL:
=============
#
# Node Identifier. This must be unique for each node.
#
replicator.node_id=master_host

#
# Transaction History UUID. This parameter is used to uniquely
# identify transaction history.
#
replicator.history_uuid=2a849f26-d08d-4d69-9ee1-65cb5775fa02

#
# Replicator DBMS driver
#
replicator.dbms_driver=com.mysql.jdbc.Driver

#
# Replicator SQL Database URI. Must have administrator rights.
#
replicator.dbms_uri=jdbc:mysql://localhost/?user=root&password=rootpass

#
# Schema to store Replicator metadata
#
replicator.schema=tungsten

replicator.thl.uri=thl://0.0.0.0/
replicator.thl.remote_uri=thl://master_host/

#
# Replicator SQL Event Extractor type. Currently recognized
# values are 'dummy' and 'mysql'.
#
replicator.extractor=mysql
#
# Replicator MySQL Event Extractor binlog directory.
# Notice that in some operating system distributions
# the MySQL binlog resides in /var/lib/mysql.
#
replicator.extractor.mysql.binlog_dir=/var/log/mysql
#
# Replicator MySQL Event Extractor binlog file name pattern.
#
replicator.extractor.mysql.binlog_file_pattern=mysql-bin

#
# Set for events to contain checksums.
# Possible values are 'SHA' or 'MD5' or empty for no checksums
# The master and slave checksum settings must match
#
replicator.event.checksum=md5

#
# How to react on consistency check failure (stop|warn)
#
replicator.applier.consistency_policy=stop

Slave MySQL:
============
#
# Node Identifier. This must be unique for each node.
#
replicator.node_id=slave_1_host

#
# Transaction History UUID. This parameter is used to uniquely
# identify transaction history.
#
replicator.history_uuid=2a849f26-d08d-4d69-9ee1-65cb5775fa02

#
# Replicator DBMS driver
#
replicator.dbms_driver=com.mysql.jdbc.Driver

#
# Replicator SQL Database URI. Must have administrator rights.
#
replicator.dbms_uri=jdbc:mysql://localhost/?user=root&password=rootpass

#
# Schema to store Replicator metadata
#
replicator.schema=tungsten

replicator.thl.uri=thl://0.0.0.0/
replicator.thl.remote_uri=thl://master_host/

#
# Replicator SQL Event Extractor type. Currently recognized values
# are 'dummy' and 'mysql'.
#
replicator.extractor=mysql
#
# Replicator MySQL Event Extractor binlog directory.
# Notice that in some operating system distributions
# the MySQL binlog resides in /var/lib/mysql.
#
replicator.extractor.mysql.binlog_dir=/var/log/mysql
#
# Replicator MySQL Event Extractor binlog file name pattern.
#
replicator.extractor.mysql.binlog_file_pattern=mysql-bin

#
# Set for events to contain checksums.
# Possible values are 'SHA' or 'MD5' or empty for no checksums
# The master and slave checksum settings must match
#
replicator.event.checksum=md5

#
# How to react on consistency check failure (stop|warn)
#
replicator.applier.consistency_policy=stop

If you have followed these instructions so far and have MySQL installed, you should not need to make any
other changes.
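As a quick sanity check before starting the Replicators, the presence of the key settings discussed above can be verified with a short script. This is an illustrative sketch, not part of the distribution; only the property names themselves come from this guide:

```python
# Property names taken from the examples above; the check itself is a
# hypothetical convenience, not part of the Replicator distribution.
REQUIRED_KEYS = [
    "replicator.node_id",
    "replicator.history_uuid",
    "replicator.dbms_driver",
    "replicator.dbms_uri",
    "replicator.thl.remote_uri",
]

def parse_properties(text):
    """Parse simple key=value lines, skipping blanks and # comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

def missing_keys(props):
    """Return the required keys that are absent from the parsed file."""
    return [k for k in REQUIRED_KEYS if k not in props]
```

Running parse_properties over each node's conf/replicator.properties and checking missing_keys catches a forgotten setting before the Replicator is started.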

1.4 Running Replicator


Important
In Linux, Solaris, and Mac OS X, the login used to run the Replicator must have permissions to read
MySQL binlog files. Add the Replicator login to the mysql group, which will allow it to read but not
write to the logs.

The Replicator is run and configured by using shell scripts residing in the bin directory.

In Linux, Solaris, and Mac OS X, use the scripts below:

• trep_start.sh is used to start the Replicator.

• trep_ctl.sh is used to control the Replicator.

In Windows, use the scripts below:

• trep_start.bat is used to start the Replicator.


• trep_ctl.bat is used to control the Replicator.

To start the Replicator in Linux, Solaris, or Mac OS X, proceed as follows:

1. On master and all slaves, start the Replicator process:

bin/trep_start.sh

2. In the master node, command the Replicator to the MASTER state.

a. bin/trep_ctl.sh configure

b. bin/trep_ctl.sh goOnline

c. bin/trep_ctl.sh goMaster

3. In all the slave nodes, command the Replicator to the SLAVE state.

a. bin/trep_ctl.sh configure

b. bin/trep_ctl.sh goOnline

To start the Replicator in Windows, proceed as follows:

1. On master and all slaves, start the Replicator process:

bin/trep_start.bat

2. In the master node, command the Replicator to the MASTER state.

a. bin/trep_ctl.bat configure

b. bin/trep_ctl.bat goOnline

c. bin/trep_ctl.bat goMaster

3. In all the slave nodes, command the Replicator to the SLAVE state.

a. bin/trep_ctl.bat configure

b. bin/trep_ctl.bat goOnline

This is all it takes to start the Tungsten Replicator master and slaves. Your master and slave Replicators
should now be running; verify replication by making some data changes in the master database and
checking that the changes are reflected in the slave databases.

The use of the trep_ctl.sh/trep_ctl.bat commands is documented in Chapter 5, Command Reference Guide.


See also Section 5.2, “Running the Replicator as an Operating System Service”.


Chapter 2. Replication Architecture and Key Components
The Replicator is a process that runs on every host in the cluster; its key responsibilities
are explained in the following chapters. The figure below depicts the replication architecture:

Figure 2.1. Replication Architecture

The components in the figure are:

• Master DBMS - The Database Management System (DBMS) that acts as the master for the replication
system. The master role can change; any DBMS can potentially be elected as the master for the
replication.

• Slave DBMS - The slave DBMS receives replication events from the master DBMS and applies them.
There can be any number of slaves in the replication system.

• Replication Event Extractor - The replication event extractor extracts replication events from the master
DBMS logs. Events are either SQL statements or rows from the replicated database.

• Transaction History Log - The transaction history log provides persistent storage for replication events
and communicates with other transaction history logs in the cluster.

• Replication Event Applier - The replication event applier applies the replication events into the slave
DBMS.

• Node Manager - The node manager manages the Tungsten components running on a master or slave
node, and connects to the Tungsten service manager at the upper level.


Chapter 3. Introduction to Replicator


The basic principles of the Replicator system are explained in the following sections.

3.1 Basic Replication


The Tungsten replication system replicates all changes made in the master database to all the slaves in the
cluster. Replication starts as soon as the master node is commanded to the ONLINE state. It is possible to
stop and start slave nodes and add more slaves to the system on the fly.

Due to the asynchronous nature of the replication, there is some delay before changes made on the master
node will be reflected in the slave nodes. The latency depends on hardware, communication network, and
the SQL load profile and should be measured separately for each installation.

3.2 Provisioning Slaves


You can add more slaves to the system while it is online. However, this is not an automatic process.
To add a slave, proceed as follows:

1. Command the donor slave to the OFFLINE state:

In Linux, Solaris, or Mac OS X:

bin/trep_ctl.sh shutdown

In Windows:

bin/trep_ctl.bat shutdown

2. Take dumps from the tungsten database and your own database(s). Here, mydb is used as an
example:

mysqldump -uuser -ppassword tungsten > tungsten.sql

mysqldump -uuser -ppassword mydb > mydb.sql

3. Command the donor slave back to the ONLINE state:

In Linux, Solaris, or Mac OS X:

bin/trep_ctl.sh goOnline

In Windows:

bin/trep_ctl.bat goOnline

4. Make sure that MySQL is running on the new slave. Load the tungsten and the application database
to the new slave:

mysql -uuser -ppassword < tungsten.sql

mysql -uuser -ppassword < mydb.sql

5. Reset the transaction history by using the following command:

mysql -uuser -ppassword -e"UPDATE tungsten.seqno_trxid_map SET trxid = NULL"

6. Start the Replicator on the new slave and command it to the SLAVE state:

In Linux, Solaris, or Mac OS X:

bin/trep_start.sh&


bin/start_slave.sh

In Windows:

bin/trep_start.bat

bin/start_slave.bat
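The provisioning steps above can be summarized as an ordered command plan. The helper below is hypothetical and purely illustrative; the individual command strings follow the Linux/Solaris/Mac OS X variants shown in the steps:

```python
def provisioning_plan(user, password, databases):
    """Emit, in order, the shell commands for provisioning a new slave:
    donor offline, dump, donor back online, load, reset history, start."""
    dbs = ["tungsten"] + list(databases)
    plan = ["bin/trep_ctl.sh shutdown"]                        # step 1
    plan += [f"mysqldump -u{user} -p{password} {db} > {db}.sql"
             for db in dbs]                                    # step 2
    plan.append("bin/trep_ctl.sh goOnline")                    # step 3
    plan += [f"mysql -u{user} -p{password} < {db}.sql"
             for db in dbs]                                    # step 4
    plan.append(f"mysql -u{user} -p{password} "
                '-e"UPDATE tungsten.seqno_trxid_map SET trxid = NULL"')  # step 5
    plan += ["bin/trep_start.sh &", "bin/start_slave.sh"]      # step 6
    return plan
```

Note that the donor goes back online (step 3) before the dumps are loaded on the new slave; the dumps were taken while the donor was offline, so they form a consistent snapshot.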

3.3 Switching the Master


You can switch the master as follows:

1. Edit the Replicator configuration file (replicator.properties) and set
replicator.thl.remote_uri to point to the new master on all nodes.

2. On the new master, run the command below:

In Linux, Solaris, or Mac OS X:

bin/trep_ctl.sh reconfigure

In Windows:

bin/trep_ctl.bat reconfigure

Wait until the Replicator reaches the SLAVE state. Then, run:

In Linux, Solaris, or Mac OS X:

bin/trep_ctl.sh goOnline

bin/trep_ctl.sh goMaster

In Windows:

bin/trep_ctl.bat goOnline

bin/trep_ctl.bat goMaster

to promote the slave to be the new master.

3. On the remaining slaves, run the command below:

In Linux, Solaris, or Mac OS X:

bin/trep_ctl.sh reconfigure

In Windows:

bin/trep_ctl.bat reconfigure

Wait until the Replicator reaches the SLAVE state.

3.4 Consistency Checking


Currently, the database consistency check is a manual process. This section explains how to check
database consistency. The implementation was inspired by Maatkit (http://www.maatkit.org/).

Upon startup, the Replicator node manager creates a special table in the Replicator metadata schema:

CREATE TABLE consistency (
db char(64) NOT NULL,
tbl char(64) NOT NULL,
id int NOT NULL,
this_crc char(40) NOT NULL, /* Slave checksum */
this_cnt int NOT NULL, /* Slave row count */
master_crc char(40) NULL, /* Master checksum */
master_cnt int NULL, /* Master row count */
ts timestamp NOT NULL,
command text NULL,
PRIMARY KEY (db, tbl, id)
);

The table keeps a log of consistency checks. A consistency check is the result of a specially forged
transaction, which should follow the structure below:

START TRANSACTION;
SET @crc:='', @cnt:=0;
REPLACE INTO tungsten.consistency (id, db, tbl, this_cnt, this_crc)
<SELECT_CLAUSE>;
SET @crc:='', @cnt:=0;
SELECT this_cnt, this_crc INTO @cnt, @crc FROM tungsten.consistency
WHERE db = <schema> AND tbl = <table> AND id = <id>;
UPDATE tungsten.consistency
SET master_cnt = @cnt, master_crc = @crc, command = <REPLACE_STATEMENT>
WHERE db = <schema> AND tbl = <table> AND id = <id>;
COMMIT;

Where

• <table> - the name of the table to be checked.

• <schema> - the name of the schema that contains the table.

• <id> - a unique check identifier for the table. This identifier is used if you want to separately check different
regions of the table or differentiate between checks. The actual value is not important.

• <SELECT_CLAUSE> - a SELECT statement, which provides the values for the row. Its exact form depends
on the table to be checked and the check type. This SELECT statement normally yields the same results
on both master and slave nodes. For example, if a table was created as:

CREATE TABLE my_schema.my_table (
idx int(11) NOT NULL auto_increment,
value int(11) NOT NULL,
PRIMARY KEY (idx)
);

you can check the consistency of the first 100 rows by using a <SELECT_CLAUSE> as follows:

SELECT 1, 'my_schema', 'my_table', COUNT(*) AS cnt,
RIGHT(MAX(@crc := CONCAT(LPAD(@cnt := @cnt + 1, 16, '0'),
MD5(CONCAT(@crc, MD5(CONCAT_WS(idx, value)))))), 32) AS crc
FROM my_schema.my_table WHERE idx > 0 AND idx < 100 LOCK IN SHARE MODE;

The order and number of <SELECT_CLAUSE> outputs must match the REPLACE inputs.

• <REPLACE_STATEMENT> - a concatenation of all transaction statements preceding and including
REPLACE. In this case, the statements would be the first SET statement and the following REPLACE
statement. Notice that the concatenation includes the <SELECT_CLAUSE> as a part of the REPLACE
statement. This parameter is required in row-level replication. In the case of statement-level replication,
this parameter can be an empty string.

For compatibility with databases that do not support the REPLACE command, the REPLACE command can
be substituted by an equivalent DELETE and INSERT combination as follows:

DELETE FROM tungsten.consistency
WHERE db = <schema> AND tbl = <table> AND id = <id>;
INSERT INTO tungsten.consistency (id, db, tbl, this_cnt, this_crc)
<SELECT_CLAUSE>;

For an example of how to script consistency checks on MySQL, please see the mysql_cc_sample.sh
script.
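The rolling checksum built by the sample <SELECT_CLAUSE> above can be approximated in Python. This is an illustrative sketch of the chaining idea only - the SQL sample folds a per-row MD5 into a running MD5 chain and keeps the final 32-character value - not a byte-exact reimplementation of the MySQL expression:

```python
import hashlib

def md5_hex(text):
    """Hex MD5 digest of a string."""
    return hashlib.md5(text.encode()).hexdigest()

def chained_crc(rows):
    """Fold each row's digest into a running MD5 chain, as the sample
    <SELECT_CLAUSE> does. Any change in row content or row order
    changes the final 32-character value."""
    crc = ""
    for row in rows:
        # Per-row digest over the concatenated column values
        # (the SQL sample builds this with CONCAT_WS).
        row_digest = md5_hex("".join(str(col) for col in row))
        crc = md5_hex(crc + row_digest)
    return crc
```

Because each step hashes the previous chain value together with the current row, master and slave produce identical checksums only if they hold identical rows in identical order.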

A consistency check is performed as follows:

1. When the Extractor detects a consistency check transaction in the binlog stream, it creates a special
consistency check event. In the case of row-level replication, it uses information contained in the last
UPDATE statement to recover the original <REPLACE_STATEMENT>, and restores the transaction to
the statement form.

2. On the slave, the Applier receives the consistency check event and applies the transaction in the
event as if it were a normal statement-level SQL event. This causes the slave to perform the same
<SELECT_CLAUSE> as the master, while the master's values are carried by the last UPDATE statement.

Next, the Applier selects the master and slave CRC values from the updated row and compares them.
If the comparison fails, the Applier notifies the node manager of the failure and either stops or
continues working, depending on the replicator.applier.consistency_policy setting, which can
be either stop or warn.
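The Applier-side decision can be summarized in a small sketch. The function name and return values are hypothetical; only the stop/warn policy values and the compared columns come from this guide:

```python
def consistency_action(this_crc, this_cnt, master_crc, master_cnt,
                       policy="stop"):
    """Compare slave (this_*) and master (master_*) checksum columns and
    return the action taken under replicator.applier.consistency_policy."""
    if this_crc == master_crc and this_cnt == master_cnt:
        return "ok"
    # On failure the Applier notifies the node manager, then either
    # stops or keeps working, depending on the configured policy.
    return "stop" if policy == "stop" else "warn"
```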

3.5 Monitoring and Management APIs


The Replicator has the following monitoring and management APIs:

3.5.1 JMX/MBean Interface


The JMX/MBean interface is the management interface between the Replicator and the management sys-
tem. The high-level architecture of the JMX/MBean interface consists of three components:

• The management system provides a user interface to the Replicator states and error notifications.

• The JMX/MBean interface is the management interface between the Replicator and the management
system.

• The Replicator contains the state machine that changes the Replicator state in line with the state change
commands from the JMX/MBean interface. It also sends state change and error notifications to the JMX/
MBean interface.

The figure below depicts the high-level failover architecture:


Figure 3.1. High-Level Failover Architecture

The JMX/MBean interface contains the following methods:

• configure() - This method is used to configure the Replicator. It is only allowed in the OFFLINE state.

• reconfigure() - This method is used to re-configure the Replicator. It only affects parameters
associated with THL connectivity (the remote THL address). This method is allowed in all states
except the MASTER state.

• goOnline() - This method is used to set the Replicator to the ONLINE state.

• goMaster() - This method is used to set the Replicator to the MASTER state. It is only allowed in the
ONLINE state.

• shutdown() - This method is used to set the Replicator to the OFFLINE state.

• progressPoll() - This method is used to get the Replicator status.

The Replicator also sends notifications. These notifications can be received by registering a JMX
notification listener. See Section 4.1, “Error Handling” for more information on errors. The Replicator
sends the following notifications:

• StateChangeNotification - This notification is sent when the Replicator state machine reaches a
new state.

Furthermore, recoverable errors are enclosed in state change notifications.

• ErrorNotification - This notification is sent when one of the Replicator components encounters an
unrecoverable error.


Chapter 4. Troubleshooting Replicator


This chapter deals with Replicator troubleshooting.

4.1 Error Handling


Recoverable error notifications are enclosed in state change notifications to provide atomic notification of
an error and state change for the management system. In the case of an unrecoverable error, the Replicator
waits in the OFFLINE state - instead of immediately shutting down - to avoid ambiguities that may result
when a JMX connection error is detected before an error notification is delivered to the management system.
A Replicator in the OFFLINE state must be restarted before its state can be changed again.

4.2 Dealing with Failed Slaves


If one of the Tungsten slave nodes fails, the master node and other slaves will keep on working without
interruption. In other words, the cluster is functional despite the slave problem.

To recover from a slave node failure, it is essential to first analyze the reason for the failure and fix
any problem(s) that could prevent future slave operation.

Once you have fixed the problem on the slave node, you can rejoin it to the cluster by following
the instructions in Section 3.2, “Provisioning Slaves”.

4.3 Dealing with a Failed Master


This section concentrates on a simple failover scenario, in which a failure of the master database or the
master Replicator process leads to a failover.

To recover, proceed as follows:

1. Check the Replicator status by issuing the command below:

In Linux, Solaris, or Mac OS X:

bin/trep_ctl.sh

In Windows:

bin/trep_ctl.bat

Wait until both slaves end up in the SYNCHRONIZING state.

2. Check the seqno ranges from both slaves by issuing the command below:

In Linux, Solaris, or Mac OS X:

bin/trep_ctl.sh

In Windows:

bin/trep_ctl.bat

Select the slave with the greater max seqno as the new master.

3. Edit the Replicator configuration file replicator.properties and set
replicator.thl.remote_uri to point to the new master on both old slave nodes.

4. On the new master, run the command below:

In Linux, Solaris, or Mac OS X:

bin/trep_ctl.sh reconfigure


In Windows:

bin/trep_ctl.bat reconfigure

Wait until the Replicator reaches the SLAVE state. Then, run:

In Linux, Solaris, or Mac OS X:

bin/trep_ctl.sh goOnline

bin/trep_ctl.sh goMaster

In Windows:

bin/trep_ctl.bat goOnline

bin/trep_ctl.bat goMaster

This promotes the slave to be the new master.

5. On the remaining slave, run the command below:

In Linux, Solaris, or Mac OS X:

bin/trep_ctl.sh reconfigure

In Windows:

bin/trep_ctl.bat reconfigure

Wait until the Replicator reaches the SLAVE state.
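Step 2 above selects the slave with the greatest applied sequence number as the new master. As a sketch (hypothetical helper; the seqno values would come from the trep_ctl status output of each slave):

```python
def choose_new_master(max_seqnos):
    """Given each candidate slave's maximum applied seqno, pick the
    slave that is furthest ahead as the new master."""
    return max(max_seqnos, key=max_seqnos.get)
```

Choosing the furthest-ahead slave minimizes the number of transactions lost with the failed master.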

4.4 Database Failure


If the database fails, the node is considered failed and the procedure for a failed master or a failed slave
is used accordingly.

4.5 Replicator State Machine


The Replicator state machine contains the following states:

• OFFLINE - The Replicator process is running, but none of its subprocesses are.

• SYNCHRONIZING - The Replicator applier subprocess is running and is either connecting to the remote
THL instance or waiting for a reconfiguration that changes the remote THL address and the goOnline()
command that follows it.

• ONLINE - The Replicator applier subprocess is running, has connected successfully to the remote THL
instance and detected that a continuous sequence of SQLEvents can be fetched from the remote THL.

• MASTER - The Replicator extractor is running, extracting a database log, and storing SQLEvents into
the THL.
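The state constraints stated in this guide (configure only in OFFLINE, goMaster only in ONLINE, reconfigure in every state except MASTER) can be sketched as a transition table. The goOnline and shutdown rows are assumptions filled in for illustration; the guide does not enumerate them explicitly:

```python
# States from Section 4.5; allowed commands per state are partly stated
# in this guide, partly assumed (goOnline and shutdown rows are assumptions).
ALLOWED = {
    "configure":   {"OFFLINE"},
    "reconfigure": {"OFFLINE", "SYNCHRONIZING", "ONLINE"},
    "goMaster":    {"ONLINE"},
    "goOnline":    {"OFFLINE", "SYNCHRONIZING"},            # assumption
    "shutdown":    {"SYNCHRONIZING", "ONLINE", "MASTER"},   # assumption
}
RESULT = {"goOnline": "ONLINE", "goMaster": "MASTER", "shutdown": "OFFLINE"}

def apply_command(state, command):
    """Return the next state, or raise if the command is not allowed."""
    if state not in ALLOWED[command]:
        raise ValueError(f"{command} not allowed in state {state}")
    return RESULT.get(command, state)
```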


Chapter 5. Command Reference Guide


The Replicator can be run from the command line interface or as an operating system service. The
following commands are available for the Replicator.

5.1 Running the Replicator from the Command Line Interface


The Replicator bin directory contains scripts that can be used to start and stop the Replicator from the
command line prompt.

Linux, Solaris, and Mac OS X Command Line Interfaces


Use these commands to run the Replicator from the command line prompt in the Linux, Solaris, and Mac
OS X operating systems.

• bin/trep_ctl.sh shutdown

This command is used to stop the Replicator in an orderly manner. Run this command before running
the bin/trep_stop.sh command.

• bin/trep_stop.sh

This command stops the Replicator process if it is running. Run the bin/trep_ctl.sh shutdown command
before this command. After you have issued this command, the master or slave Replicator enters the
OFFLINE state.

• bin/trep_start.sh

This command starts the Replicator process if it is not already running.

Windows Command Line Interface


Use these commands to run the Replicator from the command line prompt in the Windows operating system.

• bin/trep_stop.bat

This command is used to command the master or slave Replicator to enter the OFFLINE state.

• bin/trep_start.bat

This command starts the Replicator process if it is not running.

5.2 Running the Replicator as an Operating System Service


These instructions are only applicable for the Linux, Solaris, and Mac OS X operating systems.

The Replicator includes a service implementation based on the Java Service Wrapper
(http://wrapper.tanukisoftware.org). This allows you to run the Replicator as a service that is protected
against signals and that implements the standard interface used by Unix services. The service
implementation also restarts the Replicator in the event of a crash.

You can adjust the Replicator service configuration by editing the conf/wrapper.properties configu-
ration file. Please read the comments in the file for information on legal settings. For most installations, the
included file should work out of the box.

The Replicator service implementation supports services on 32-bit and 64-bit versions of Linux, and on Mac
OS X platforms.

The Replicator service wrapper binary is called trepsvc. In practice, trepsvc is a replacement
for the trep_start.sh and trep_stop.sh commands. The trepsvc commands are summarized below:


• trepsvc start - This command starts the Replicator service if it is not already running. Logs are written
to trepsvc.log.

• trepsvc status - This command prints out the status of the Replicator service, namely whether it is running
and, if so, under which process ID.

• trepsvc stop - This command stops the Replicator service if it is currently running.

• trepsvc restart - This command restarts the Replicator service, stopping it first if it is currently running.

• trepsvc console - This command runs the Replicator service in a Java console program that allows you
to view log output in a GUI shell.

• trepsvc dump - This command sends a 'kill -quit' signal to the Java VM to force it to write a thread dump
to the log. This command is useful for debugging a stuck process.

5.3 Controlling a Running Replicator Process


The commands in the sections below change the Replicator state.

Controlling a Running Replicator Process in Linux, Solaris, and Mac OS X Operating Systems
The trep_ctl.sh script allows you to submit commands to the Replicator. These commands change the
Replicator state.

• bin/trep_ctl.sh configure [file]

This command uploads the Replicator configuration. The configure command should only be used
initially, when the node is in the OFFLINE state. For later configuration changes, use the reconfigure
command.

The Replicator reads the ../conf/replicator.properties file by default. An alternative property
file can be specified by giving the configuration property file name as an argument.

• bin/trep_ctl.sh reconfigure [file]

This command uploads the Replicator configuration and resets the applying process in the node. The
reconfigure command can be issued in SYNCHRONIZING and SLAVE states.

The Replicator reads the ../conf/replicator.properties file by default. An alternative property
file can be specified by giving the configuration property file name as an argument.

• bin/trep_ctl.sh goMaster

This command is used to command the slave Replicator to enter the MASTER state. Always issue the
goOnline command before the goMaster command.

• bin/trep_ctl.sh goOnline

This command is used to command the master Replicator to enter the ONLINE state.

• bin/trep_ctl.sh

Run the bin/trep_ctl.sh command without arguments to check the Replicator state.

Controlling a Running Replicator Process in the Windows Operating System


The trep_ctl.bat script allows you to submit commands to the Replicator. These commands change
the Replicator state.

• bin/trep_ctl.bat configure [file]


This command uploads the Replicator configuration. The configure command should only be used
initially, when the node is in the OFFLINE state. For later configuration changes, use the reconfigure
command.

The Replicator reads the ../conf/replicator.properties file by default. An alternative property
file can be specified by giving the configuration property file name as an argument.

• bin/trep_ctl.bat reconfigure [file]

This command uploads the Replicator configuration and resets the applying process in the node. The
reconfigure command can be issued in SYNCHRONIZING and SLAVE states.

The Replicator reads the ../conf/replicator.properties file by default. An alternative property
file can be specified by giving the configuration property file name as an argument.

• bin/trep_ctl.bat goMaster

This command is used to command the slave Replicator to enter the MASTER state. Always issue the
goOnline command before the goMaster command.

• bin/trep_ctl.bat goOnline

This command is used to command the master Replicator to enter the ONLINE state.

• bin/trep_ctl.bat

Run the bin/trep_ctl.bat command without arguments to check the Replicator state.


Chapter 6. Extending the Replicator System


The Replicator system can be extended by using adapters for different DBMSs.

To be more specific, replication event extraction is a generic framework that allows adapters for
different DBMSs to plug in. The master node connects to one DBMS only, which means that only one of
the adapters is needed for a particular installation. The adapters convert native DBMS log information into
generic replication events (SQLEvent), which are then propagated to the transaction history log for further
processing.

The diagram below depicts the internal structure of the extraction process and the transaction history log.
In this figure, Oracle is used as the source database.

Figure 6.1. Extraction of Replication Events
