Oracle Streams 10g

==>Set the following initialization parameters, as necessary, at each participating instance: global_names, _job_queue_interval, sga_target, streams_pool_size.

==>Tablespace/User for Streams Administrator queues:
CREATE TABLESPACE &streams_tbs_name DATAFILE '&db_file_directory/&db_file_name' SIZE 100M REUSE AUTOEXTEND ON NEXT 25M MAXSIZE UNLIMITED;
create user STRMADMIN identified by STRMADMIN;
ALTER USER strmadmin DEFAULT TABLESPACE &streams_tbs_name QUOTA UNLIMITED ON &streams_tbs_name;

==>Privileges
GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE, DBA to STRMADMIN;
execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');

==>Separate queues for capture and apply
Configure separate queues for changes that are captured locally and for receiving captured changes from each remote site. This is especially important when configuring bi-directional replication between multiple databases. For example, consider the situation where database db1.net replicates its changes to database db2.net, and database db2.net replicates to db1.net. Each database will maintain 2 queues: one for capturing the changes made locally and another for receiving changes from the other database.

==>Streams and Flash Recovery Area (FRA)
In Oracle 10g and above, configure a separate log archive destination, independent of the Flash Recovery Area, for the Streams capture process for the database. Archive logs in the FRA can be removed automatically on space pressure, even if the Streams capture process still requires them. Do not allow the archive logs for Streams capture to reside solely in the FRA.

==>Archive Logging must be enabled

==>Supplemental logging
If you set the parallelism apply process parameter to a value greater than 1, then you must specify a conditional supplemental log group at the source database for all of the unique and foreign key columns in the tables for which an apply process applies changes. Supplemental logging may be required for other columns in these tables as well, depending on your configuration. Any columns specified in rule-based transformations or used within DML handlers at the target site must be unconditionally logged at the source site. Supplemental logging can be specified at the source either at the database level or for the individual replicated table. In 10gR2, supplemental logging is automatically configured for tables on which primary, unique, or foreign keys are defined when the database object is prepared for Streams capture. The procedures for maintaining Streams and adding rules in the DBMS_STREAMS_ADM package automatically prepare objects for a local Streams capture.
-->Database level logging:
-->Minimal supplemental logging:
SQL> Alter database add supplemental log data;
-->Identification key logging:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL, PRIMARY KEY, UNIQUE, FOREIGN KEY) COLUMNS;
-->Check the current settings:
select SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI, SUPPLEMENTAL_LOG_DATA_FK, SUPPLEMENTAL_LOG_DATA_ALL from v$database;
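Note that the conditional log group required by the parallelism rule above is simply a table-level group created without the ALWAYS keyword; a minimal sketch against the sample HR schema (the group name and column choice are illustrative):

alter table HR.EMPLOYEES add SUPPLEMENTAL LOG GROUP emp_cond_keys (EMAIL, MANAGER_ID);
-- Without ALWAYS the group is conditional: before-images of these columns are
-- logged only when at least one column in the group is updated.

The unconditional table-level variants are shown next.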


-->Table level logging:
alter table HR.EMPLOYEES ADD SUPPLEMENTAL LOG GROUP emp_fulltime (EMPLOYEE_ID, LAST_NAME, DEPARTMENT_ID) ALWAYS;
alter table HR.EMPLOYEES add SUPPLEMENTAL LOG data (PRIMARY KEY, UNIQUE, FOREIGN KEY, ALL) columns;
-->Check supplemental log groups:
Select log_group_name, table_name, decode(always, 'ALWAYS', 'Unconditional', NULL, 'Conditional') ALWAYS from DBA_LOG_GROUPS;
-->Check columns in supplemental log groups:
Select log_group_name, column_name, position from dba_log_group_columns where table_name = 'DEPARTMENTS' and owner = 'HR';
-->Check objects prepared for Streams capture:
SELECT supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_fk FROM dba_capture_prepared_tables
UNION
SELECT supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_fk FROM dba_capture_prepared_schemas
UNION
SELECT supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_fk FROM dba_capture_prepared_database;

==>Configuring Capture
Use the DBMS_STREAMS_ADM.MAINTAIN_* procedures (where * = TABLE, SCHEMA, GLOBAL, TTS) to configure Streams. These procedures minimize the number of steps required to configure Streams processes. Also, it is possible to create rules for non-existent objects, so be sure to check the spelling of each object specified in a rule carefully. CAPTURE requires a rule set with rules. The ADD_GLOBAL_RULES procedure cannot be used to capture DML changes for the entire database; it can, however, be used to capture all DDL changes for the database. The Streams capture process requests a checkpoint after every 10Mb of generated redo; during the checkpoint, the metadata for Streams is maintained if there are active transactions.

==>Implement a Heartbeat Table
To ensure that the applied_scn of the DBA_CAPTURE view is updated periodically, implement a "heart beat" table. A heartbeat table is especially useful for databases that have a low activity rate. Implementing a heartbeat table ensures that there are open transactions occurring regularly within the source database, providing additional opportunities for the metadata to be updated frequently.
-->Submit a job to monitor the heartbeat table:
create table job (a number primary key, b date);
create sequence temp_seq start with 1;
variable jobno number;
begin
dbms_job.submit(:jobno, 'insert into zul.job values (temp_seq.nextval, sysdate); commit;', sysdate, 'sysdate+60/(60*60*24)');
end;
/

==>Propagation Configuration
If the MAINTAIN_* (TABLE, SCHEMA, GLOBAL) procedures are used to configure Streams, queue_to_queue is automatically set to TRUE; these procedures automatically handle queue_to_queue propagation, if possible. The database link for this queue_to_queue propagation must use a TNS service name (or connect name) that specifies the GLOBAL_NAME in the CONNECT_DATA clause of the descriptor.

==>Propagation Restart
Use the procedures START_PROPAGATION and STOP_PROPAGATION from DBMS_PROPAGATION_ADM to enable and disable the propagation schedule. Examples:
exec DBMS_PROPAGATION_ADM.STOP_PROPAGATION('name_of_propagation');
or
exec DBMS_PROPAGATION_ADM.STOP_PROPAGATION('name_of_propagation', force=>true);
exec DBMS_PROPAGATION_ADM.START_PROPAGATION('name_of_propagation');
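To confirm the result of a stop or start, the propagation status can be read from the data dictionary; in 10gR2 the DBA_PROPAGATION view carries a STATUS column (the propagation name below is the same placeholder as above):

select propagation_name, status from dba_propagation
where propagation_name = 'NAME_OF_PROPAGATION';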

======= ========= ============ ============ ==============
==>Streams Table Level Replication Setup Script
Configuring the Script
To run this script, either set your environment so the values below are the same as yours, or replace them in the script with values appropriate to your environment:
STRM1.NET = Global Database name of the Source (capture) Site
STRM2.NET = Global Database name of the Target (apply) Site
STRMADMIN = Streams Administrator with password strmadmin
HR.EMPLOYEES = table to be replicated to the target database
Running the Script
The script assumes that:
- The sample HR schema is installed on the source site
- A user HR_DEMO exists on the destination site
- The target site table is empty
Script
/* Step 1 - Connect as the Streams Administrator in the target site STRM2.NET and create the streams queue */
conn strmadmin/strmadmin@strm2.net
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_name => 'STREAMS_QUEUE',
queue_table => 'STREAMS_QUEUE_TABLE',
queue_user => 'STRMADMIN');
END;
/
/* Step 2 - Connected as the Streams Administrator, create the streams queue and the database link that will be used for propagation at STRM1.NET */
conn strmadmin/strmadmin@strm1.net
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_name => 'STREAMS_QUEUE',
queue_table => 'STREAMS_QUEUE_TABLE',
queue_user => 'STRMADMIN');
END;
/
conn sys/oracle@strm1.net as sysdba
create public database link STRM2.NET using 'strm2.net';
conn strmadmin/strmadmin@strm1.net
create database link STRM2.NET connect to strmadmin identified by strmadmin;
/* Step 3 - Connected to STRM1.NET, create CAPTURE and PROPAGATION rules for HR.EMPLOYEES */
conn strmadmin/strmadmin@strm1.net
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'HR.EMPLOYEES',
streams_name => 'STRMADMIN_PROP',
source_queue_name => 'STRMADMIN.STREAMS_QUEUE',
destination_queue_name => 'STRMADMIN.STREAMS_QUEUE@STRM2.NET',
include_dml => true,
include_ddl => true,
source_database => 'STRM1.NET');
END;
/

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'HR.EMPLOYEES',
streams_type => 'CAPTURE',
streams_name => 'STRMADMIN_CAPTURE',
queue_name => 'STRMADMIN.STREAMS_QUEUE',
include_dml => true,
include_ddl => true,
source_database => 'STRM1.NET');
END;
/
/* Step 4 - Connected as STRMADMIN at STRM2.NET, create APPLY rules for HR.EMPLOYEES */
conn STRMADMIN/STRMADMIN@strm2.net
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'HR.EMPLOYEES',
streams_type => 'APPLY',
streams_name => 'STRMADMIN_APPLY',
queue_name => 'STRMADMIN.STREAMS_QUEUE',
include_dml => true,
include_ddl => true,
source_database => 'STRM1.NET');
END;
/
BEGIN
DBMS_APPLY_ADM.ALTER_APPLY(
apply_name => 'STRMADMIN_APPLY',
apply_user => 'HR');
END;
/
BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'STRMADMIN_APPLY',
parameter => 'disable_on_error',
value => 'n');
END;
/
/* Step 5 - Take an export of the table at STRM1.NET */
exp USERID=SYSTEM/oracle@strm1.net TABLES=EMPLOYEES FILE=hr.dmp LOG=hr_exp.log OBJECT_CONSISTENT=Y STATISTICS=NONE
/* Step 6 - Transfer the export dump file to STRM2.NET and import */
imp USERID=SYSTEM/<password>@strm2.net CONSTRAINTS=Y FULL=Y FILE=hr.dmp IGNORE=Y COMMIT=Y LOG=hr_imp.log STREAMS_INSTANTIATION=Y
/* Step 7 - Start Apply and capture */
conn strmadmin/strmadmin@strm2.net
BEGIN
DBMS_APPLY_ADM.START_APPLY( apply_name => 'STRMADMIN_APPLY');
END;
/
conn strmadmin/strmadmin@strm1.net
BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE( capture_name => 'STRMADMIN_CAPTURE');
END;
/
For bidirectional Streams setup, please run steps 1 through 9 after interchanging Db1 and Db2. Caution should be exercised while setting the instantiation SCN this time, as one may not want to export and import the data. Export option ROWS=N can be used for the instantiation of objects from DB2 --> DB1.
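After Step 7, both processes should show ENABLED; a quick check using the same dictionary views that appear later in this document:

conn strmadmin/strmadmin@strm1.net
select capture_name, status from dba_capture;
conn strmadmin/strmadmin@strm2.net
select apply_name, status from dba_apply;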

Script Output
/* Perform changes on HR.EMPLOYEES on the source site and confirm that these are applied to the table on the destination */
conn hr/hr@strm1.net
insert into hr.EMPLOYEES values (99999, 'TEST', 'TEST', 'TEST@oracle', '1234567', sysdate, 'ST_MAN', null, null, null, null);
commit;
conn hr/hr@strm2.net
select * from employees where employee_id = 99999;
======= ========= ============ ============ ==============
==>How To Setup One-Way SCHEMA Level Streams Replication
Running the Sample Code
To run this script, either set your environment so the values below are the same as yours, or replace them in the script with values appropriate to your environment:
STRM1.NET = Global Database name of the Source (capture) Site
STRM2.NET = Global Database name of the Target (apply) Site
STRMADMIN = Streams Administrator with password strmadmin
HR = Source schema to be replicated - This schema is already installed on the source site
The sample code replicates both DML and DDL. The Streams Administrator (STRMADMIN) has been created as per Note 786528.1, "How to create STRMADMIN user and grant privileges". Check the spool file for errors after you run this script.
/************************* BEGINNING OF SCRIPT ******************************
Run SET ECHO ON and specify the spool file for the script. */
SET ECHO ON
SPOOL stream_oneway.out
/* STEP 1 - Connect as the Streams Administrator in the target site strm2.net and create the streams queue */
connect STRMADMIN/STRMADMIN@STRM2.NET
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'STREAMS_QUEUE_TABLE',
queue_name => 'STREAMS_QUEUE',
queue_user => 'STRMADMIN');
END;
/
/* STEP 2 - Create the streams queue and the database links that will be used for propagation */
connect STRMADMIN/STRMADMIN@STRM1.NET
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'STREAMS_QUEUE_TABLE',
queue_name => 'STREAMS_QUEUE',
queue_user => 'STRMADMIN');
END;
/
--CREATE DATABASE LINK AT SOURCE as SYS
conn sys/&sys_pwd_source@strm1.net as sysdba
create public database link STRM2.NET using 'strm2.net';
--CREATE DATABASE LINK AT SOURCE as STRMADMIN
conn strmadmin/strmadmin@strm1.net
create database link STRM2.NET connect to strmadmin identified by strmadmin;
-- Create the database link at the destination database too
/* STEP 3 - Add apply rules for the Schema at the destination database */
connect STRMADMIN/STRMADMIN@STRM2.NET
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'HR',
streams_type => 'APPLY',
streams_name => 'STREAM_APPLY',
queue_name => 'STRMADMIN.STREAMS_QUEUE',
include_dml => true,
include_ddl => true,
source_database => 'STRM1.NET');
END;
/

/* STEP 4 - Add capture rules for the schema HR at the source database */
CONN STRMADMIN/STRMADMIN@STRM1.NET
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'HR',
streams_type => 'CAPTURE',
streams_name => 'STREAM_CAPTURE',
queue_name => 'STRMADMIN.STREAMS_QUEUE',
include_dml => true,
include_ddl => true,
source_database => 'STRM1.NET');
END;
/
/* STEP 5 - Add propagation rules for the schema HR at the source database. This step will also create a propagation job to the destination database */
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
schema_name => 'HR',
streams_name => 'STREAM_PROPAGATE',
source_queue_name => 'STRMADMIN.STREAMS_QUEUE',
destination_queue_name => 'STRMADMIN.STREAMS_QUEUE@STRM2.NET',
include_dml => true,
include_ddl => true,
source_database => 'STRM1.NET');
END;
/
/* STEP 6 - Export, import and instantiation of tables from Source to Destination Database.
If the objects are not present in the destination database, perform an export of the objects from the source database and import them into the destination database.
Export from the Source Database: Specify the OBJECT_CONSISTENT=Y clause on the export command. By doing this, an export is performed that is consistent for each individual object at a particular system change number (SCN). */
$ exp USERID=SYSTEM/&system_pwd_source@STRM1.NET OWNER=HR FILE=hr.dmp LOG=hr_exp.log OBJECT_CONSISTENT=Y STATISTICS=NONE
/* Import into the Destination Database: Specify the STREAMS_INSTANTIATION=Y clause in the import command. By doing this, the streams metadata is updated with the appropriate information in the destination database corresponding to the SCN that is recorded in the export file. */
$ imp USERID=SYSTEM/&system_pwd_dest@STRM2.NET FULL=Y CONSTRAINTS=Y FILE=hr.dmp IGNORE=Y COMMIT=Y LOG=hr_imp.log STREAMS_INSTANTIATION=Y
/* If the objects are already present in the destination database, there are two ways of instantiating the objects at the destination site:
1. By means of metadata-only export/import: Specify ROWS=N during Export and IGNORE=Y during Import, along with the above import parameters (see the sketch just after this list).
2. By manually instantiating the objects. */
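For option 1, the commands mirror the full export/import above with ROWS=N added; a sketch (the dump file names here are illustrative):

$ exp USERID=SYSTEM/&system_pwd_source@STRM1.NET OWNER=HR ROWS=N FILE=hr_meta.dmp LOG=hr_meta_exp.log OBJECT_CONSISTENT=Y
$ imp USERID=SYSTEM/&system_pwd_dest@STRM2.NET FULL=Y FILE=hr_meta.dmp IGNORE=Y LOG=hr_meta_imp.log STREAMS_INSTANTIATION=Y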

For option 2, get the Instantiation SCN at the source database:
connect STRMADMIN/STRMADMIN@STRM1.NET
set serveroutput on
DECLARE
iscn NUMBER; -- Variable to hold instantiation SCN value
BEGIN
iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_OUTPUT.PUT_LINE('Instantiation SCN is: ' || iscn);
END;
/
/* Instantiate the objects at the destination database with this SCN value. The SET_TABLE_INSTANTIATION_SCN procedure controls which LCRs for a table are to be applied by the apply process. If the commit SCN of an LCR from the source database is less than or equal to this instantiation SCN, then the apply process discards the LCR. Else, the apply process applies the LCR. */
connect STRMADMIN/STRMADMIN@STRM2.NET
BEGIN
DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
SOURCE_SCHEMA_NAME => 'HR',
SOURCE_DATABASE_NAME => 'STRM1.NET',
RECURSIVE => TRUE,
INSTANTIATION_SCN => &iscn);
END;
/
Enter value for iscn: <Provide the value of SCN that you got from the source database above>
/* STEP 7 - Specify an 'APPLY USER' at the destination database. This is the user who would apply all DML and DDL statements. The user specified in the APPLY_USER parameter must have the necessary privileges to perform DML and DDL changes on the apply objects. */
conn strmadmin/strmadmin@strm2.net
BEGIN
DBMS_APPLY_ADM.ALTER_APPLY(
apply_name => 'STREAM_APPLY',
apply_user => 'HR');
END;
/
/* STEP 8 - Set the disable_on_error parameter to 'n' so apply does not abort for every error; then start the Apply process on the destination. */
conn strmadmin/strmadmin@strm2.net
BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'STREAM_APPLY',
parameter => 'disable_on_error',
value => 'n');
END;
/
DECLARE
v_started number;
BEGIN
SELECT decode(status, 'ENABLED', 1, 0) INTO v_started
FROM DBA_APPLY WHERE APPLY_NAME = 'STREAM_APPLY';
if (v_started = 0) then
DBMS_APPLY_ADM.START_APPLY(apply_name => 'STREAM_APPLY');
end if;
END;
/
/* STEP 9 - Set up capture to retain 7 days worth of logminer checkpoint information, then start the Capture process on the source. */
conn strmadmin/strmadmin@strm1.net
begin
DBMS_CAPTURE_ADM.ALTER_CAPTURE(
capture_name => 'STREAM_CAPTURE',
checkpoint_retention_time => 7);
end;
/
BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'STREAM_CAPTURE');
END;
/
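Beyond the ENABLED status in DBA_CAPTURE, the live state of the capture process can be watched through the V$STREAMS_CAPTURE dynamic view; a minimal sketch:

conn strmadmin/strmadmin@strm1.net
select capture_name, state, total_messages_captured from v$streams_capture;

The STATE column shows values such as CAPTURING CHANGES once redo is being mined.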

/* Check the Spool Results
Check the stream_oneway.out spool file to ensure that all actions finished successfully after this script is completed. */
SET ECHO OFF
SPOOL OFF
/*************************** END OF SCRIPT ******************************/
-->Sample Code Output:
/* Perform changes in tables belonging to HR on the source site and check that these are applied on the destination */
conn HR/HR@strm1.net
insert into HR.DEPARTMENTS values (99, 'OTHER', 205, 1700);
commit;
alter table HR.EMPLOYEES add (NEWCOL VARCHAR2(10));
/* Confirm the insert has been done on HR.DEPARTMENTS at the destination and that HR.EMPLOYEES now has a new column */
conn HR/HR@strm2.net
select * from HR.DEPARTMENTS where department_id = 99;
desc HR.EMPLOYEES
======= ========= ============ ============ ==============
==>How to setup Database Level Streams Replication
Script
/* Step 1 - Connect as the Streams Administrator in the target site STRM2.NET and create the streams queue */
conn strmadmin/strmadmin@strm2.net
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_name => 'STREAMS_QUEUE',
queue_table => 'STREAMS_QUEUE_TABLE',
queue_user => 'STRMADMIN');
END;
/
/* Step 2 - Connected as the Streams Administrator, create the streams queue and the database link that will be used for propagation at STRM1.NET */
conn strmadmin/strmadmin@strm1.net
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_name => 'STREAMS_QUEUE',
queue_table => 'STREAMS_QUEUE_TABLE',
queue_user => 'STRMADMIN');
END;
/
conn sys/oracle@strm1.net as sysdba
create public database link STRM2.NET using 'strm2.net';
conn strmadmin/strmadmin@strm1.net
create database link STRM2.NET connect to strmadmin identified by strmadmin;
/* Step 3 - Connected to STRM1.NET, create CAPTURE and PROPAGATION rules */
conn strmadmin/strmadmin@strm1.net
BEGIN
DBMS_STREAMS_ADM.ADD_GLOBAL_PROPAGATION_RULES(
streams_name => 'STRMADMIN_PROP',
source_queue_name => 'STRMADMIN.STREAMS_QUEUE',
destination_queue_name => 'STRMADMIN.STREAMS_QUEUE@STRM2.NET',
include_dml => true,
include_ddl => true,
source_database => 'STRM1.NET');
END;
/

BEGIN
DBMS_STREAMS_ADM.ADD_GLOBAL_RULES(
streams_type => 'CAPTURE',
streams_name => 'STRMADMIN_CAPTURE',
queue_name => 'STRMADMIN.STREAMS_QUEUE',
include_dml => true,
include_ddl => true,
source_database => 'STRM1.NET');
END;
/
/* Step 4 - Connected as STRMADMIN at STRM2.NET, create APPLY rules */
conn STRMADMIN/STRMADMIN@strm2.net
BEGIN
DBMS_STREAMS_ADM.ADD_GLOBAL_RULES(
streams_type => 'APPLY',
streams_name => 'STRMADMIN_APPLY',
queue_name => 'STRMADMIN.STREAMS_QUEUE',
include_dml => true,
include_ddl => true,
source_database => 'STRM1.NET');
END;
/
BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'STRMADMIN_APPLY',
parameter => 'disable_on_error',
value => 'n');
END;
/
/* Step 7 - Take an export of the DB at STRM1.NET */
$ exp USERID=SYSTEM/oracle@strm1.net FULL=Y FILE=hr.dmp LOG=hr_exp.log OBJECT_CONSISTENT=Y STATISTICS=NONE
/* Step 8 - Transfer the export dump file to STRM2.NET and import */
$ imp USERID=SYSTEM/<password>@strm2.net CONSTRAINTS=Y FULL=Y FILE=hr.dmp IGNORE=Y COMMIT=Y LOG=hr_imp.log STREAMS_INSTANTIATION=Y
/* Step 9 - Start Apply and capture */
conn strmadmin/strmadmin@strm2.net
BEGIN
DBMS_APPLY_ADM.START_APPLY( apply_name => 'STRMADMIN_APPLY');
END;
/
conn strmadmin/strmadmin@strm1.net
BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE( capture_name => 'STRMADMIN_CAPTURE');
END;
/
For bidirectional streams setup, please run steps 1 through 9 after interchanging Db1 and Db2. Caution should be exercised while setting the instantiation SCN this time, as one may not want to export and import the data. Export option ROWS=N can be used for the instantiation of objects from DB2 --> DB1.
Script Output
Please perform DML on one of the objects and make sure it is propagated to the other site.
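In the spirit of the table-level test shown earlier, a minimal round trip might look like this (the department row is illustrative):

conn HR/HR@strm1.net
insert into HR.DEPARTMENTS values (98, 'TEST_DEPT', 205, 1700);
commit;
conn HR/HR@strm2.net
select * from HR.DEPARTMENTS where department_id = 98;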

======= ========= ============ ============ ==============
==>How To Configure Streams Real-Time Downstream Environment

--Creating the streams tablespace:
=========================
-- You may create the tablespace on both sides, but most important is the downstream site:
conn /as sysdba
CREATE TABLESPACE streams_tbs DATAFILE 'streams_tbs_01.dbf' SIZE 100M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;

--Creating the streams admin on both sides:
===============================
-- Create the streams admin on both sides:
conn /as sysdba
CREATE USER strmadmin IDENTIFIED BY strmadmin DEFAULT TABLESPACE streams_tbs QUOTA UNLIMITED ON streams_tbs;
GRANT DBA TO strmadmin;
BEGIN
DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
grantee => 'strmadmin',
grant_privileges => true);
END;
/
-- Checking that the streams admin is created:
SELECT * FROM dba_streams_administrator;

--Setting the SYS password:
The SYS password needs to be the same on both source and target.

--Setting Connection between the databases:
Check $TNS_ADMIN to point to the place of tnsnames.ora. Make sure that the service names of both databases exist in the tnsnames.ora files of each other, for the connection to be established successfully between source and downstream sites to send the redo data. Check that the listeners are working and the databases are registered. Make sure that Source and Target can both connect to each other; you may use TNSPING to make sure. If the source database is RAC, check that all nodes have the connection identifiers to connect to the downstream database.

--Set GLOBAL_NAMES = TRUE on both sides:
Alter system set global_names=TRUE scope=BOTH;

--Creating connection between source and downstream:
-- Create dblink from Downstream to Source for administration purposes:
conn strmadmin/strmadmin
create database link ORCL102C.EG.ORACLE.COM connect to strmadmin identified by strmadmin using 'ORCL102C';
select * from dual@ORCL102C.EG.ORACLE.COM;

--Instruct the logmnr to use the streams_tbs tablespace:
-- From downstream site:
conn /as sysdba
exec DBMS_LOGMNR_D.SET_TABLESPACE('streams_tbs');

--Setting parameters for downstream archiving:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=SPFILE;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='LOCATION=/home/oracle/archives/ORCL102D/standby-archives/ VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)' SCOPE=SPFILE;
LOCATION - place where archived logs will be written from the standby redo logs coming from the source site.
VALID_FOR - Specify either (STANDBY_LOGFILE,PRIMARY_ROLE) or (STANDBY_LOGFILE,ALL_ROLES) for this destination.
Note: REDO LOGS ARE STORED IN TWO LOCATIONS: we will get duplicate files on downstream capture because of parameter VALID_FOR=(ONLINE_LOGFILE,ALL_ROLES).
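Because the settings above use SCOPE=SPFILE, they take effect after an instance restart; afterwards the destination state can be verified with the standard V$ARCHIVE_DEST view, for example:

select dest_id, status, destination, error
from v$archive_dest
where dest_id in (1, 2);

A STATUS of VALID and an empty ERROR column indicate the destination is usable.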

-- Specify Source and downstream database for LOG_ARCHIVE_CONFIG, using DB_UNIQUE_NAME of both sites:
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(ORCL102C,ORCL102D)' SCOPE=SPFILE;

--Creating standby redo-logs to receive redo data from Source:
Note:
- The number of standby log file groups must be at least one more than the number of online log file groups on the source database.
- The standby log file size must exactly match (or be larger than) the source database log file size.
-- From the source:
1) Determine the log file size used on the source database:
conn /as sysdba
select THREAD#, GROUP#, BYTES/1024/1024 from V$LOG;
If source is NON-RAC:
==============
For example, if the query from V$LOG showed the following:
THREAD# GROUP# BYTES/1024/1024
---------- ---------- ---------------
1 1 50
1 2 50
1 3 50
The previous output indicates that the source database has three online redo log file groups, each with a log file size of 50 MB. This means that we will need to have 4 standby-log groups of at least 50M each.
-- From the downstream site:
2) Add standby logs:
-- Use the following statements to create the appropriate standby log file groups:
conn /as sysdba
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/home/oracle/archives/ORCL102D/standbylogs/slog4.rdo') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('/home/oracle/archives/ORCL102D/standbylogs/slog5.rdo') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 ('/home/oracle/archives/ORCL102D/standbylogs/slog6.rdo') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 ('/home/oracle/archives/ORCL102D/standbylogs/slog7.rdo') SIZE 50M;
3) Ensure that the standby log file groups were added successfully:
conn /as sysdba
SELECT GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$STANDBY_LOG;
-- Your output should be similar to the following:
GROUP# THREAD# SEQUENCE# ARC STATUS
---------- ---------- ---------- --- ----------
4 0 0 YES UNASSIGNED
5 0 0 YES UNASSIGNED
6 0 0 YES UNASSIGNED
7 0 0 YES UNASSIGNED
If source is RAC:
==========
For example, the source is a 2-Node RAC database. In this case, the output of V$LOG will show something like this:
THREAD# GROUP# BYTES/1024/1024
---------- ---------- ---------------
1 1 50
1 2 50
2 3 50

2 4 50
The previous output indicates that we got two THREADs (instances), each with two redo-log groups, each of size 50M. This means that we will need to have 3 standby-log groups per THREAD, with each group at least 50M.
-- From the downstream site:
2) Add standby logs:
-- Use the following statements to create the appropriate standby log file groups:
conn /as sysdba
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 4 ('/home/oracle/archives/ORCL102D/standbylogs/slog4.rdo') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 5 ('/home/oracle/archives/ORCL102D/standbylogs/slog5.rdo') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 6 ('/home/oracle/archives/ORCL102D/standbylogs/slog6.rdo') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 7 ('/home/oracle/archives/ORCL102D/standbylogs/slog7.rdo') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 8 ('/home/oracle/archives/ORCL102D/standbylogs/slog8.rdo') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 9 ('/home/oracle/archives/ORCL102D/standbylogs/slog9.rdo') SIZE 50M;
3) Ensure that the standby log file groups were added successfully:
conn /as sysdba
SELECT GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$STANDBY_LOG;
-- Your output should be similar to the following:
GROUP# THREAD# SEQUENCE# ARC STATUS
---------- ---------- ---------- --- ----------
4 1 0 YES UNASSIGNED
5 1 0 YES UNASSIGNED
6 1 0 YES UNASSIGNED
7 2 0 YES UNASSIGNED
8 2 0 YES UNASSIGNED
9 2 0 YES UNASSIGNED

Get Downstream database to archive-log mode:
====================================
-- Set the database to archive-log mode:
conn /as sysdba
shutdown immediate
startup mount
alter database archivelog;
alter database open;
-- Set the following parameters for the location and format of local archives:
conn /as sysdba
ALTER SYSTEM SET log_archive_dest_1='LOCATION=/home/oracle/archives/ORCL102D/redo-archives/' SCOPE=SPFILE;
ALTER SYSTEM SET log_archive_dest_state_1 = 'ENABLE' SCOPE=SPFILE;
ALTER SYSTEM SET log_archive_format = 'ORCL102D_%t_%s_%r.arc' SCOPE=SPFILE;
-- Increase the number of archiving processes:
ALTER SYSTEM SET log_archive_max_processes=5 SCOPE=BOTH;

**************************************************
*** Preparing the Source site (ORCL102C) ****
**************************************************
--Enable Shipping of online redo log data from Source to Downstream database:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=SPFILE;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=ORCL102D LGWR SYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCL102D' SCOPE=SPFILE;

SERVICE - Specify the identifier of the downstream database from the tnsnames.ora of the source.
LGWR ASYNC or LGWR SYNC - Specify a redo transport mode. The advantage of specifying LGWR SYNC is that redo data is sent to the downstream database faster than when LGWR ASYNC is specified. You can specify LGWR SYNC for a real-time downstream capture process only.
NOREGISTER - Specify this attribute so that the location of the archived redo log files is not recorded in the downstream database control file.
VALID_FOR - Specify either (ONLINE_LOGFILE,PRIMARY_ROLE) or (ONLINE_LOGFILE,ALL_ROLES).
DB_UNIQUE_NAME - The value of db_unique_name of the downstream database.

-- Specify Source and downstream database for LOG_ARCHIVE_CONFIG, using DB_UNIQUE_NAME of both sites:
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(ORCL102C,ORCL102D)' SCOPE=SPFILE;

-- Set the following parameters for the location and format of local archives:
-- For a single instance source:
conn /as sysdba
ALTER SYSTEM SET log_archive_dest_1='LOCATION=/home/oracle/archives/ORCL102C/redo-archives/' SCOPE=SPFILE;
ALTER SYSTEM SET log_archive_dest_state_1 = 'ENABLE' SCOPE=SPFILE;
ALTER SYSTEM SET log_archive_format = 'ORCL102C_%t_%s_%r.arc' SCOPE=SPFILE;
-- For a RAC source:
conn /as sysdba
ALTER SYSTEM SET log_archive_dest_1='LOCATION=+FLASH/NODE/' SCOPE=SPFILE SID='*';
ALTER SYSTEM SET log_archive_dest_state_1 = 'ENABLE' SCOPE=SPFILE SID='*';
-- From NODE1:
ALTER SYSTEM SET log_archive_format = 'NODE1_%t_%s_%r.arc' SCOPE=SPFILE SID='bless1';
-- From NODE2:
ALTER SYSTEM SET log_archive_format = 'NODE2_%t_%s_%r.arc' SCOPE=SPFILE SID='bless2';

--Get source database to archivelog mode:
-- For a single instance, set the database to archive log mode:
conn /as sysdba
shutdown immediate
startup mount
alter database archivelog;
alter database open;
-- For RAC, shutdown all nodes:
conn /as sysdba
shutdown immediate
-- Then, from one node, set the database to archive log mode:
startup mount
alter database archivelog;
alter database open;
-- Startup the rest of the nodes:
conn /as sysdba
startup

Note:
- Check the alert logs of all the RAC nodes to make sure that there are no errors reported for the log shipping.
- Check in the background_dump_dest directory for traces with the string "lns" and "arc" and make sure they are not reporting any errors for archiving or connection to the downstream database.
- If any errors were shown, do not proceed until you fix these errors.

*********************************************
**** Setting up Streams Replication ****
*********************************************
Creating the replicated schema at Source site, if not already created:
============================================================
conn /as sysdba

drop user mars cascade;
create user mars identified by mars;
grant connect, resource, create table to mars;
conn mars/mars
create table test(id number, name varchar(20));

Creating the streams queue on the downstream site:
========================================
-- Using a single queue is the best practice for queue configuration for real-time downstream. A single combined queue for both capture and apply is preferable, as it eliminates the redundant propagation/queue-to-queue transfer.
conn strmadmin/strmadmin
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'strmadmin.DOWNSTREAM_Q_TABLE',
queue_name => 'strmadmin.DOWNSTREAM_Q',
queue_user => 'STRMADMIN');
END;
/
-- Check the created queue:
select name, queue_table from user_queues;

--Creating the capture process at the downstream site:
-- The capture process accepts redo data implicitly from Source.
conn strmadmin/strmadmin
BEGIN
DBMS_CAPTURE_ADM.CREATE_CAPTURE(
queue_name => 'strmadmin.DOWNSTREAM_Q',
capture_name => 'DOWNSTREAM_CAPTURE',
rule_set_name => NULL,
start_scn => NULL,
source_database => 'ORCL102C.EG.ORACLE.COM',
use_database_link => true,
first_scn => NULL,
logfile_assignment => 'implicit');
END;
/
-- Checking the capture info:
SELECT capture_name, status from dba_capture;
SELECT parameter, value, set_by_user FROM DBA_CAPTURE_PARAMETERS;

--Creating the apply process at the downstream site:
conn strmadmin/strmadmin
BEGIN
DBMS_APPLY_ADM.CREATE_APPLY(
queue_name => 'strmadmin.DOWNSTREAM_Q',
apply_name => 'DOWNSTREAM_APPLY',
apply_captured => TRUE);
END;
/
-- Checking apply info:
SELECT apply_name, status, queue_name FROM DBA_APPLY;
SELECT parameter, value, set_by_user FROM DBA_APPLY_PARAMETERS WHERE apply_name = 'DOWNSTREAM_APPLY';
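Optionally, logminer checkpoint retention can be bounded for this capture too, mirroring the ALTER_CAPTURE call used in the schema-level script above (7 days is the same illustrative value):

begin
DBMS_CAPTURE_ADM.ALTER_CAPTURE(
capture_name => 'DOWNSTREAM_CAPTURE',
checkpoint_retention_time => 7);
end;
/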

Set capture for real-time capturing of changes:
=====================================
-- To be executed from the downstream site:
conn strmadmin/strmadmin
BEGIN
DBMS_CAPTURE_ADM.SET_PARAMETER(
capture_name => 'DOWNSTREAM_CAPTURE',
parameter => 'downstream_real_time_mine',
value => 'y');
END;
/

--Add rules to instruct the capture process what to capture:
conn strmadmin/strmadmin
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'mars',
streams_type => 'capture',
streams_name => 'downstream_capture',
queue_name => 'strmadmin.downstream_q',
include_dml => true,
include_ddl => true,
include_tagged_lcr => false,
source_database => 'ORCL102C.EG.ORACLE.COM',
inclusion_rule => TRUE);
END;
/
-- Check the created rules:
SELECT rule_name, rule_condition FROM DBA_STREAMS_SCHEMA_RULES WHERE streams_name = 'DOWNSTREAM_CAPTURE' AND streams_type = 'CAPTURE';

-- Archive the current log file from the source database.
-- If source is RAC, then do this from one of the nodes:
conn /as sysdba
ALTER SYSTEM ARCHIVE LOG CURRENT;
Note: Archiving the current log file at the source database starts real-time mining of the source database redo log.
-- Now, check that the status of one/more of the standby logs has changed from UNASSIGNED to ACTIVE:
conn /as sysdba
SELECT GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$STANDBY_LOG;

--Instantiating the replicated objects:
There are three ways to instantiate. In our example we want to replicate schema MARS, so here is what we shall do:
1) If using datapump to exp/imp the replicated objects from source to downstream:
-- From a Source sqlplus session:
conn system/oracle
!mkdir /tmp/schema_export
!chmod 777 /tmp/schema_export
create or replace directory schema_export as '/tmp/schema_export';
!expdp system/oracle SCHEMAS=MARS DUMPFILE=schema_export:schema.dmp LOGFILE=schema_export:schema.log
-- Copy the dump file schema.dmp from '/tmp/schema_export' on the source site to '/tmp/schema_import' on the downstream site.
-- From a Downstream sqlplus session:
conn system/oracle
!mkdir /tmp/schema_import
!chmod 777 /tmp/schema_import
create or replace directory schema_import as '/tmp/schema_import';
!impdp system/oracle SCHEMAS=mars DIRECTORY=schema_import DUMPFILE=schema.dmp

2) If using ordinary exp/imp to instantiate:
-- From Source:
exp system/oracle owner=mars file=mars.dump log=mars.log object_consistent=Y
-- Object_consistent must be set to Y.
-- From Downstream:
imp system/oracle file=mars.dump full=y ignore=y STREAMS_INSTANTIATION=Y
Note: When doing STREAMS_INSTANTIATION=Y and having the export done with object_consistent=Y, the instantiation SCN for the apply will be modified to the SCN at the time the export was taken, so the imported data would be consistent with the source.
3) If you want to instantiate manually:
A. Create the replicated objects at the downstream site. You need to create the same objects at the downstream site, and if tables contain data, you will need to copy them by any means, like insert into downstream_table select * from source_table@dblink; this will ensure that data at the target is consistent with data at the source.
B. Run the following from the downstream site:
conn strmadmin/strmadmin
DECLARE
iscn NUMBER; -- Variable to hold instantiation SCN value
BEGIN
-- Get current SCN from Source
iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@ORCL102C.EG.ORACLE.COM;
DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
source_schema_name => 'mars',
source_database_name => 'ORCL102C.EG.ORACLE.COM',
instantiation_scn => iscn,
recursive => TRUE);
END;
/
* You have to make sure that objects between source and downstream are consistent at the time you set the instantiation manually, or you may end up with ORA-01403.

-- After instantiating, check that instantiation is done:
select * from DBA_APPLY_INSTANTIATED_OBJECTS;
select * from DBA_APPLY_INSTANTIATED_SCHEMAS;

--Start the capture process:
conn strmadmin/strmadmin
exec DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'DOWNSTREAM_CAPTURE');
select capture_name, status from dba_capture;

--Start the apply process:
conn strmadmin/strmadmin
exec DBMS_APPLY_ADM.START_APPLY(apply_name => 'DOWNSTREAM_APPLY');
select apply_name, status from dba_apply;

***********************
*** Testing ***
***********************
-- From source:
conn mars/mars
insert into mars.test values (1, 'Test message');
commit;

-- From Downstream:
conn mars/mars
select * from mars.test;
======= ========= ============ ============ ==============
= HealthCheck For Stream:
streams_hc_10GR2.sql
streams_hc_11_2_0_2.sql
Stream_performance_Advisor.sql
Check Apply: Wp_appy.sql
Change Data Capture Health Check: cdc_healthcheck.sql
======= ========= ============ ============ ==============
