oracle10g dataguard

Published by Nst Tnagar on Dec 04, 2012
SQL> ALTER DATABASE FORCE LOGGING;

This statement can take a considerable amount of time to complete, because it waits for all unlogged direct write I/O to finish.
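To confirm the setting took effect, you can query V$DATABASE (a quick supplementary check, not part of the original procedure); the FORCE_LOGGING column should return YES:

```sql
SQL> SELECT FORCE_LOGGING FROM V$DATABASE;
```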

3.1.2 Create a Password File
Create a password file if one does not already exist. Every database in a Data Guard configuration must use a password file, and the password for the SYS user must be identical on every system for redo data transmission to succeed. See Oracle Database Administrator's Guide.
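As a quick sanity check (not part of the original procedure), the V$PWFILE_USERS view lists the users granted SYSDBA or SYSOPER through the password file; SYS should appear on every database in the configuration:

```sql
SQL> SELECT * FROM V$PWFILE_USERS;
```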

3.1.3 Configure a Standby Redo Log
A standby redo log is required for the maximum protection and maximum availability modes and the LGWR ASYNC transport mode is recommended for all databases. Data Guard can recover and apply more redo data from a standby redo log than from archived redo log files alone. You should plan the standby redo log configuration and create all required log groups and group members when you create the standby database. For increased availability, consider multiplexing the standby redo log files, similar to the way that online redo log files are multiplexed. Perform the following steps to configure the standby redo log.

Step 1 Ensure log file sizes are identical on the primary and standby databases.

The size of the current standby redo log files must exactly match the size of the current primary database online redo log files. For example, if the primary database uses two online redo log groups whose log files are 200K, then the standby redo log groups should also have log file sizes of 200K.

Step 2 Determine the appropriate number of standby redo log file groups.

Minimally, the configuration should have one more standby redo log file group than the number of online redo log file groups on the primary database. However, the recommended number of standby redo log file groups is dependent on the number of threads on the primary database. Use the following equation to determine an appropriate number of standby redo log file groups:

(maximum number of logfiles for each thread + 1) * maximum number of threads

Using this equation reduces the likelihood that the primary instance's log writer (LGWR) process will be blocked because a standby redo log file cannot be allocated on the standby database. For example, if the primary database has 2 log files for each thread and 2 threads, then 6 standby redo log file groups are needed on the standby database.

Note: Logical standby databases may require more standby redo log files (or additional ARCn processes) depending on the workload. This is because logical standby databases also write to online redo log files, which take precedence over standby redo log files. Thus, the standby redo log files may not be archived as quickly as the online redo log files. Also, see Section 5.7.3.1.

Step 3 Verify related database parameters and settings.

Verify the values used for the MAXLOGFILES and MAXLOGMEMBERS clauses on the SQL CREATE DATABASE statement will not limit the number of standby redo log file groups and members that you can add. The only way to override the limits specified by the MAXLOGFILES and MAXLOGMEMBERS clauses is to re-create the primary database or control file. See Oracle Database SQL Reference and your operating system specific Oracle documentation for the default and legal values of the MAXLOGFILES and MAXLOGMEMBERS clauses.

Step 4 Create standby redo log file groups.

To create new standby redo log file groups and members, you must have the ALTER DATABASE system privilege. The standby database begins using the newly created standby redo data the next time there is a log switch on the primary database. Example 3-1 and Example 3-2 show how to create a new standby redo log file group using the ALTER DATABASE statement with variations of the ADD STANDBY LOGFILE GROUP clause.

Example 3-1 Adding a Standby Redo Log File Group to a Specific Thread
The following statement adds a new standby redo log file group to a standby database and assigns it to THREAD 5:
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 5
  2> ('/oracle/dbs/log1c.rdo','/oracle/dbs/log2c.rdo') SIZE 500M;

The THREAD clause is required only if you want to add one or more standby redo log file groups to a specific primary database thread. If you do not include the THREAD clause and the configuration uses Real Application Clusters (RAC), Data Guard will automatically assign standby redo log file groups to threads at runtime as they are needed by the various RAC instances.

Example 3-2 Adding a Standby Redo Log File Group to a Specific Group Number
You can also specify a number that identifies the group using the GROUP clause:

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 10
  2> ('/oracle/dbs/log1c.rdo','/oracle/dbs/log2c.rdo') SIZE 500M;

Using group numbers can make administering standby redo log file groups easier. However, the group number must be between 1 and the value of the MAXLOGFILES clause. Do not skip log file group numbers (that is, do not number groups 10, 20, 30, and so on), or you will use additional space in the standby database control file.

Note: Although the standby redo log is only used when the database is running in the standby role, Oracle recommends that you create a standby redo log on the primary database so that the primary database can switch over quickly to the standby role without the need for additional DBA intervention. Consider using Oracle Enterprise Manager to automatically configure the standby redo log on both your primary and standby databases.

Step 5 Verify the standby redo log file groups were created.

To verify the standby redo log file groups are created and running correctly, invoke a log switch on the primary database, and then query either the V$STANDBY_LOG view or the V$LOGFILE view on the standby database once it has been created. For example:

SQL> SELECT GROUP#,THREAD#,SEQUENCE#,ARCHIVED,STATUS FROM V$STANDBY_LOG;

    GROUP#    THREAD#  SEQUENCE# ARC STATUS
---------- ---------- ---------- --- ----------
         3          1         16 NO  ACTIVE
         4          0          0 YES UNASSIGNED
         5          0          0 YES UNASSIGNED

3.1.4 Set Primary Database Initialization Parameters
On the primary database, you define initialization parameters that control redo transport services while the database is in the primary role. There are additional parameters you need to add that control the receipt of the redo data and log apply services when the primary database is transitioned to the standby role.

Example 3-3 shows the primary role initialization parameters that you maintain on the primary database. This example represents a Data Guard configuration with a primary database located in Chicago and one physical standby database located in Boston. The parameters shown in Example 3-3 are valid for the Chicago database when it is running in either the primary or the standby database role. The configuration examples use the names shown in the following table:

Database          DB_UNIQUE_NAME  Oracle Net Service Name
Primary           chicago         chicago
Physical standby  boston          boston

Example 3-3 Primary Database: Primary Role Initialization Parameters
DB_NAME=chicago
DB_UNIQUE_NAME=chicago
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)'
CONTROL_FILES='/arch1/chicago/control1.ctl', '/arch2/chicago/control2.ctl'
LOG_ARCHIVE_DEST_1=
 'LOCATION=/arch1/chicago/
  VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
  DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_2=
 'SERVICE=boston LGWR ASYNC
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
  DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30

These parameters control how redo transport services transmit redo data to the standby system and the archiving of redo data on the local file system. Note that the example specifies the LGWR process and asynchronous (ASYNC) network transmission to transmit redo data on the LOG_ARCHIVE_DEST_2 initialization parameter. These are the recommended settings and require standby redo log files (see Section 3.1.3, "Configure a Standby Redo Log").
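These destinations can also be changed without a restart, because the LOG_ARCHIVE_DEST_n parameters are dynamic. For example, a sketch of setting the remote destination from Example 3-3 with ALTER SYSTEM:

```sql
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2=
  2  'SERVICE=boston LGWR ASYNC
  3   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
  4   DB_UNIQUE_NAME=boston' SCOPE=BOTH;
```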

Example 3-4 shows the additional standby role initialization parameters on the primary database. These parameters take effect when the primary database is transitioned to the standby role.

Example 3-4 Primary Database: Standby Role Initialization Parameters
FAL_SERVER=boston
FAL_CLIENT=chicago
DB_FILE_NAME_CONVERT='boston','chicago'
LOG_FILE_NAME_CONVERT=
 '/arch1/boston/','/arch1/chicago/','/arch2/boston/','/arch2/chicago/'
STANDBY_FILE_MANAGEMENT=AUTO

Specifying the initialization parameters shown in Example 3-4 sets up the primary database to resolve gaps, converts new datafile and log file path names from a new primary database, and archives the incoming redo data when this database is in the standby role. With the initialization parameters for both the primary and standby roles set as described, none of the parameters need to change after a role transition.

The following table provides a brief explanation about each parameter setting shown in Example 3-3 and Example 3-4.

DB_NAME
  Specify an 8-character name. Use the same name for all standby databases.

DB_UNIQUE_NAME
  Specify a unique name for each database. This name stays with the database and does not change, even if the primary and standby databases reverse roles.

LOG_ARCHIVE_CONFIG
  Specify the DG_CONFIG attribute on this parameter to list the DB_UNIQUE_NAME of the primary and standby databases in the Data Guard configuration; this enables the dynamic addition of a standby database to a Data Guard configuration that has a Real Application Clusters primary database running in either maximum protection or maximum availability mode. By default, the LOG_ARCHIVE_CONFIG parameter enables the database to send and receive redo; after a role transition, you may need to specify these settings again using the SEND, NOSEND, RECEIVE, or NORECEIVE keywords.

CONTROL_FILES
  Specify the path name for the control files on the primary database. Example 3-3 shows how to do this for two control files. It is recommended that a second copy of the control file is available so an instance can be easily restarted after copying the good control file to the location of the bad control file.

LOG_ARCHIVE_DEST_n
  Specify where the redo data is to be archived on the primary and standby systems. In Example 3-3:
  • LOG_ARCHIVE_DEST_1 archives redo data generated by the primary database from the local online redo log files to the local archived redo log files in /arch1/chicago/.
  • LOG_ARCHIVE_DEST_2 is valid only for the primary role. This destination transmits redo data to the remote physical standby destination boston.
  Note: If a flash recovery area was configured (with the DB_RECOVERY_FILE_DEST initialization parameter) and you have not explicitly configured a local archiving destination with the LOCATION attribute, Data Guard automatically uses the LOG_ARCHIVE_DEST_10 initialization parameter as the default destination for local archiving. See Section 5.2.3 for more information. Also, see Chapter 14 for complete LOG_ARCHIVE_DEST_n information.

LOG_ARCHIVE_DEST_STATE_n
  Specify ENABLE to allow redo transport services to transmit redo data to the specified destination.

REMOTE_LOGIN_PASSWORDFILE
  Set the same password for SYS on both the primary and standby databases. The recommended setting is either EXCLUSIVE or SHARED.

LOG_ARCHIVE_FORMAT
  Specify the format for the archived redo log files using a thread (%t), sequence number (%s), and resetlogs ID (%r). See Section 5.7.1 for another example.

LOG_ARCHIVE_MAX_PROCESSES=integer
  Specify the maximum number (from 1 to 30) of archiver (ARCn) processes you want Oracle software to invoke initially. The default value is 4. See Section 5.3.1.2 for more information about ARCn processing.

FAL_SERVER
  Specify the Oracle Net service name of the FAL server (typically this is the database running in the primary role). When the Chicago database is running in the standby role, it uses the Boston database as the FAL server from which to fetch (request) missing archived redo log files if Boston is unable to automatically send the missing log files. See Section 5.8.

FAL_CLIENT
  Specify the Oracle Net service name of the Chicago database. The FAL server (Boston) copies missing archived redo log files to the Chicago standby database. See Section 5.8.

DB_FILE_NAME_CONVERT
  Specify the path name and filename location of the primary database datafiles followed by the standby location. This parameter converts the path names of the primary database datafiles to the standby datafile path names. If the standby database is on the same system as the primary database or if the directory structure where the datafiles are located on the standby site is different from the primary site, then this parameter is required. Note that this parameter is used only to convert path names for physical standby databases. Multiple pairs of paths may be specified by this parameter.

LOG_FILE_NAME_CONVERT
  Specify the location of the primary database online redo log files followed by the standby location. This parameter converts the path names of the primary database log files to the path names on the standby database. If the standby database is on the same system as the primary database or if the directory structure where the log files are located on the standby system is different from the primary system, then this parameter is required. Multiple pairs of paths may be specified by this parameter.

STANDBY_FILE_MANAGEMENT
  Set to AUTO so when datafiles are added to or dropped from the primary database, corresponding changes are made automatically to the standby database.

Caution: Review the initialization parameter file for additional parameters that may need to be modified. For example, you may need to modify the dump destination parameters (BACKGROUND_DUMP_DEST, CORE_DUMP_DEST, USER_DUMP_DEST) if the directory location on the standby database is different from those specified on the primary database. In addition, you may have to create directories on the standby system if they do not already exist.

3.1.5 Enable Archiving
If archiving is not enabled, issue the following statements to put the primary database in ARCHIVELOG mode and enable automatic archiving:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;

See Oracle Database Administrator's Guide for information about archiving.

3.2 Step-by-Step Instructions for Creating a Physical Standby Database
This section describes the tasks you perform to create a physical standby database. Table 3-2 provides a checklist of the tasks that you perform to create a physical standby database and the database or databases on which you perform each task. There is also a reference to the section that describes the task in more detail.

Table 3-2 Creating a Physical Standby Database

Task                                                               Database  Reference
Create a Backup Copy of the Primary Database Datafiles             Primary   Section 3.2.1
Create a Control File for the Standby Database                     Primary   Section 3.2.2
Prepare an Initialization Parameter File for the Standby Database  Primary   Section 3.2.3
Copy Files from the Primary System to the Standby System           Primary   Section 3.2.4
Set Up the Environment to Support the Standby Database             Standby   Section 3.2.5
Start the Physical Standby Database                                Standby   Section 3.2.6
Verify the Physical Standby Database Is Performing Properly        Standby   Section 3.2.7

3.2.1 Create a Backup Copy of the Primary Database Datafiles
You can use any backup copy of the primary database to create the physical standby database, as long as you have the necessary archived redo log files to completely recover the database. Oracle recommends that you use the Recovery Manager utility (RMAN). See Oracle High Availability Architecture and Best Practices for backup recommendations and Oracle Database Backup and Recovery Advanced User's Guide to perform an RMAN backup operation.
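As a minimal sketch of such a backup (assuming the database is in ARCHIVELOG mode and RMAN is connected to the primary as target), the entire database and its archived redo logs can be backed up in one command:

```
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
```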

3.2.2 Create a Control File for the Standby Database
If the backup procedure required you to shut down the primary database, issue the following SQL*Plus statement to start the primary database:
SQL> STARTUP MOUNT;

Then, create the control file for the standby database, and open the primary database to user access, as shown in the following example:
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/boston.ctl';
SQL> ALTER DATABASE OPEN;

Note: You cannot use a single control file for both the primary and standby databases.

3.2.3 Prepare an Initialization Parameter File for the Standby Database
Perform the following steps to create a standby initialization parameter file.

Step 1 Copy the primary database parameter file to the standby database.

Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) used by the primary database; a text initialization parameter file can be copied to the standby location and modified. For example:
SQL> CREATE PFILE='/tmp/initboston.ora' FROM SPFILE;

Later, in Section 3.2.5, you will convert this file back to a server parameter file after it is modified to contain the parameter values appropriate for use with the physical standby database.

Step 2 Set initialization parameters on the physical standby database.

Although most of the initialization parameter settings in the text initialization parameter file that you copied from the primary system are also appropriate for the physical standby database, some modifications need to be made. Example 3-5 shows the portion of the standby initialization parameter file where values were modified for the physical standby database. Parameter values that are different from Example 3-3 and Example 3-4 are shown in bold typeface. The parameters shown in Example 3-5 are valid for the Boston database when it is running in either the primary or the standby database role.

Example 3-5 Modifying Initialization Parameters for a Physical Standby Database
. . .
DB_NAME=chicago
DB_UNIQUE_NAME=boston
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)'
CONTROL_FILES='/arch1/boston/control1.ctl', '/arch2/boston/control2.ctl'
DB_FILE_NAME_CONVERT='chicago','boston'
LOG_FILE_NAME_CONVERT=
 '/arch1/chicago/','/arch1/boston/','/arch2/chicago/','/arch2/boston/'
LOG_ARCHIVE_FORMAT=log%t_%s_%r.arc
LOG_ARCHIVE_DEST_1=
 'LOCATION=/arch1/boston/
  VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
  DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_2=
 'SERVICE=chicago LGWR ASYNC
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
  DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
STANDBY_FILE_MANAGEMENT=AUTO
FAL_SERVER=chicago
FAL_CLIENT=boston
. . .

Note that the example assumes the use of the LGWR process to transmit redo data to both the local and remote destinations on the LOG_ARCHIVE_DEST_2 initialization parameter. In addition, ensure the COMPATIBLE initialization parameter is set to the same value on both the primary and standby databases. If the values differ, redo transport services may be unable to transmit redo data from the primary database to the standby databases. In a Data Guard configuration, COMPATIBLE must be set to a minimum of 9.2.0.1.0. However, if you want to take advantage of new Oracle Database 10g features, set the COMPATIBLE parameter to 10.2.0.0 or higher. It is always a good practice to use the SHOW PARAMETERS command to verify that no other parameters need to be changed.

The following table provides a brief explanation about the parameter settings shown in Example 3-5 that have different settings from the primary database.

DB_UNIQUE_NAME
  Specify a unique name for this database. This name stays with the database and does not change even if the primary and standby databases reverse roles.

CONTROL_FILES
  Specify the path name for the control files on the standby database. Example 3-5 shows how to do this for two control files. It is recommended that a second copy of the control file is available so an instance can be easily restarted after copying the good control file to the location of the bad control file.

DB_FILE_NAME_CONVERT
  Specify the path name and filename location of the primary database datafiles followed by the standby location. This parameter converts the path names of the primary database datafiles to the standby datafile path names. If the standby database is on the same system as the primary database or if the directory structure where the datafiles are located on the standby site is different from the primary site, then this parameter is required.

LOG_FILE_NAME_CONVERT
  Specify the location of the primary database online redo log files followed by the standby location. This parameter converts the path names of the primary database log files to the path names on the standby database. If the standby database is on the same system as the primary database or if the directory structure where the log files are located on the standby system is different from the primary system, then this parameter is required.

LOG_ARCHIVE_DEST_n
  Specify where the redo data is to be archived. In Example 3-5:
  • LOG_ARCHIVE_DEST_1 archives redo data received from the primary database to archived redo log files in /arch1/boston/.
  • LOG_ARCHIVE_DEST_2 is currently ignored because this destination is valid only for the primary role. If a switchover occurs and this instance becomes the primary database, then it will transmit redo data to the remote Chicago destination.
  Note: If a flash recovery area was configured (with the DB_RECOVERY_FILE_DEST initialization parameter) and you have not explicitly configured a local archiving destination with the LOCATION attribute, Data Guard automatically uses the LOG_ARCHIVE_DEST_10 initialization parameter as the default destination for local archiving. See Section 5.2.3 for more information. Also, see Chapter 14 for complete information about LOG_ARCHIVE_DEST_n.

FAL_SERVER
  Specify the Oracle Net service name of the FAL server (typically this is the database running in the primary role). When the Boston database is running in the standby role, it uses the Chicago database as the FAL server from which to fetch (request) missing archived redo log files if Chicago is unable to automatically send the missing log files. See Section 5.8.

FAL_CLIENT
  Specify the Oracle Net service name of the Boston database. The FAL server (Chicago) copies missing archived redo log files to the Boston standby database. See Section 5.8.

Caution: Review the initialization parameter file for additional parameters that may need to be modified. For example, you may need to modify the dump destination parameters (BACKGROUND_DUMP_DEST, CORE_DUMP_DEST, USER_DUMP_DEST) if the directory location on the standby database is different from those specified on the primary database. In addition, you may have to create directories on the standby system if they do not already exist.

3.2.4 Copy Files from the Primary System to the Standby System
Use an operating system copy utility to copy the following binary files from the primary system to the standby system:

• Backup datafiles created in Section 3.2.1
• Standby control file created in Section 3.2.2
• Initialization parameter file created in Section 3.2.3

3.2.5 Set Up the Environment to Support the Standby Database
Perform the following steps to create a Windows-based service, create a password file, set up the Oracle Net environment, and create an SPFILE.

Step 1 Create a Windows-based service.

If the standby system is running on a Windows-based system, use the ORADIM utility to create a Windows Service and password file. For example:
WINNT> oradim -NEW -SID boston -INTPWD password -STARTMODE manual

See Oracle Database Platform Guide for Microsoft Windows (32-Bit) for more information about using the ORADIM utility.

Step 2 Create a password file.

On platforms other than Windows, create a password file, and set the password for the SYS user to the same password used by the SYS user on the primary database. The password for the SYS user on every database in a Data Guard configuration must be identical for redo transmission to succeed. See Oracle Database Administrator's Guide.

Step 3 Configure listeners for the primary and standby databases.

On both the primary and standby sites, use Oracle Net Manager to configure a listener for the respective databases. To restart the listeners (to pick up the new definitions), enter the following LSNRCTL utility commands on both the primary and standby systems:
% lsnrctl stop
% lsnrctl start

See Oracle Database Net Services Administrator's Guide.

Step 4 Create Oracle Net service names.

On both the primary and standby systems, use Oracle Net Manager to create a network service name for the primary and standby databases that will be used by redo transport services. The Oracle Net service name must resolve to a connect descriptor that uses the same protocol, host address, port, and service that you specified when you configured the listeners for the primary and standby databases. The connect descriptor must also specify that a dedicated server be used. See the Oracle Database Net Services Administrator's Guide and the Oracle Database Administrator's Guide.

Step 5 Create a server parameter file for the standby database.

On an idle standby database, use the SQL CREATE statement to create a server parameter file for the standby database from the text initialization parameter file that was edited in Step 2. For example:
SQL> CREATE SPFILE FROM PFILE='initboston.ora';
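For illustration only, the Oracle Net service name created in Step 4 might resolve through a tnsnames.ora entry such as the following (the host name standby-host is hypothetical; note the dedicated server requirement):

```
boston =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = boston)
    )
  )
```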

3.2.6 Start the Physical Standby Database
Perform the following steps to start the physical standby database and Redo Apply.

Step 1 Start the physical standby database.

On the standby database, issue the following SQL statement to start and mount the database:
SQL> STARTUP MOUNT;

Step 2 Start Redo Apply.

On the standby database, issue the following command to start Redo Apply:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

The statement includes the DISCONNECT FROM SESSION option so that Redo Apply runs in a background session. See Section 6.3, "Applying Redo Data to Physical Standby Databases" for more information.

Step 3 Test archival operations to the physical standby database.

In this example, the transmission of redo data to the remote standby location does not occur until after a log switch. A log switch occurs, by default, when an online redo log file becomes full. To force a log switch so that redo data is transmitted immediately, use the following ALTER SYSTEM statement on the primary database. For example:
SQL> ALTER SYSTEM SWITCH LOGFILE;
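Because this configuration includes a standby redo log (see Section 3.1.3), Redo Apply can alternatively be started in real-time apply mode, so that redo is applied as it is received rather than only after each log switch:

```sql
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  2  USING CURRENT LOGFILE DISCONNECT FROM SESSION;
```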

3.2.7 Verify the Physical Standby Database Is Performing Properly
Once you create the physical standby database and set up redo transport services, you may want to verify database modifications are being successfully transmitted from the primary database to the standby database. To see that redo data is being received on the standby database, you should first identify the existing archived redo log files on the standby database, force a log switch and archive a few online redo log files on the primary database, and then check the standby database again. The following steps show how to perform these tasks.

Step 1 Identify the existing archived redo log files.

On the standby database, query the V$ARCHIVED_LOG view to identify existing files in the archived redo log. For example:

SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME
  2  FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

 SEQUENCE# FIRST_TIME         NEXT_TIME
---------- ------------------ ------------------
         8 11-JUL-02 17:50:45 11-JUL-02 17:50:53
         9 11-JUL-02 17:50:53 11-JUL-02 17:50:58
        10 11-JUL-02 17:50:58 11-JUL-02 17:51:03

3 rows selected.

Step 2 Force a log switch to archive the current online redo log file.

On the primary database, issue the ALTER SYSTEM SWITCH LOGFILE statement to force a log switch and archive the current online redo log file group:
SQL> ALTER SYSTEM SWITCH LOGFILE;

Step 3 Verify the new redo data was archived on the standby database.

On the standby database, query the V$ARCHIVED_LOG view to verify the redo data was received and archived on the standby database:
SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME
  2  FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

 SEQUENCE# FIRST_TIME         NEXT_TIME
---------- ------------------ ------------------
         8 11-JUL-02 17:50:45 11-JUL-02 17:50:53
         9 11-JUL-02 17:50:53 11-JUL-02 17:50:58
        10 11-JUL-02 17:50:58 11-JUL-02 17:51:03
        11 11-JUL-02 17:51:03 11-JUL-02 18:34:11

4 rows selected.

The archived redo log files are now available to be applied to the physical standby database.

Step 4 Verify new archived redo log files were applied.

On the standby database, query the V$ARCHIVED_LOG view to verify the archived redo log files were applied.

SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG
  2  ORDER BY SEQUENCE#;

SEQUENCE# APP
--------- ---
        8 YES
        9 YES
       10 YES
       11 YES

4 rows selected.

See Section 5.9.1, "Monitoring Log File Archival Information" and Section 8.5.4, "Monitoring Log Apply Services on Physical Standby Databases" to verify redo transport services and log apply services are working correctly.
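Another quick way to watch redo transport and apply activity (a supplementary check, not part of the original steps) is the V$MANAGED_STANDBY view on the standby database:

```sql
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#
  2  FROM V$MANAGED_STANDBY;
```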

3.3 Post-Creation Steps
At this point, the physical standby database is running and can provide the maximum performance level of data protection. The following list describes additional preparations you can take on the physical standby database:

• Upgrade the data protection mode

  The Data Guard configuration is initially set up in the maximum performance mode (the default). See Section 5.6 for information about the data protection modes and how to upgrade or downgrade the current protection mode.

• Enable Flashback Database

  Flashback Database removes the need to re-create the primary database after a failover. Flashback Database enables you to return a database to its state at a time in the recent past much faster than traditional point-in-time recovery, because it does not require restoring datafiles from backup nor the extensive application of redo data. You can enable Flashback Database on the primary database, the standby database, or both. See Section 12.4 and Section 12.5 for scenarios showing how to use Flashback Database in a Data Guard environment. Also, see Oracle Database Backup and Recovery Advanced User's Guide for more information about Flashback Database.
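As a sketch, Flashback Database could be enabled on the primary as follows (this assumes a flash recovery area is already configured; the 1440-minute retention target is an illustrative value, not a recommendation from this document):

```sql
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=1440;
SQL> ALTER DATABASE FLASHBACK ON;
SQL> ALTER DATABASE OPEN;
```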

Tuesday, September 15, 2009

Step by Step Data Guard Setup for Oracle 10g
Steps: 1.) Make sure archive log mode is enabled on your database: SQL> archive log list Database log mode Archive Mode Automatic archival Enabled

Archive destination /opt/app/oracle/oradata/orcl/archive Oldest online log sequence 108 Next log sequence to archive 109 Current log sequence 109 SQL> select name, log_mode from v$database; NAME LOG_MODE --------- -----------ORCL ARCHIVELOG If archive log mode is not enabled. Please enable it using the following link. How to enable archivelog mode in Oracle 11g database 2.) Enable force logging on the database, so that there is no problems with no logging operations in the future. SQL> alter database force logging; Database altered. 3.) Create password file, if you do not have one already. [oracle@APP3 dbs]$ cd $ORACLE_HOME/dbs [oracle@APP3 dbs]$ orapwd file=orapworcl password=oracle force=y [oracle@APP3 dbs]$ ls -lrt orapworcl -rw-r----- 1 oracle oinstall 1536 Sep 14 08:21 orapworcl SQL> select * from v$pwfile_users; USERNAME SYSDB SYSOP ------------------------------ ----- ----SYS TRUE TRUE 4.) Create Standby Redo Logfiles on primary DB. Current logfile: SQL> col member a40 SQL> select a.group#,a.status,a.member,b.bytes/1024/1024 from v$logfile a,v$log b 2 where a.group#=b.group#; GROUP# STATUS MEMBER B.BYTES/1024/1024 ---------- ------- ---------------------------------------- ----------------1 /opt/app/oracle/oradata/orcl/redo01.log 50 2 /opt/app/oracle/oradata/orcl/redo02.log 50 Add standby redo log groups: SQL> alter database add standby logfile group 3 size 50M; Database altered. SQL> alter database add standby logfile group 4 size 50M;

Database altered.

SQL> select * from v$logfile;

GROUP# STATUS  TYPE    MEMBER                                                                    IS_
------ ------- ------- ------------------------------------------------------------------------- ---
     1         ONLINE  /opt/app/oracle/oradata/ORCL/redo01.log                                   NO
     2         ONLINE  /opt/app/oracle/oradata/ORCL/redo02.log                                   NO
     3         STANDBY /opt/app/oracle/flash_recovery_area/ORCL/onlinelog/o1_mf_3_5bvzkzgs_.log  YES
     4         STANDBY /opt/app/oracle/flash_recovery_area/ORCL/onlinelog/o1_mf_4_5bvzl8hf_.log  YES

SQL> select * from v$standby_log;

GROUP# DBID       THREAD# SEQUENCE#    BYTES USED ARC STATUS     FIRST_CHANGE# FIRST_TIM LAST_CHANGE# LAST_TIME
------ ---------- ------- --------- -------- ---- --- ---------- ------------- --------- ------------ ---------
     3 UNASSIGNED       0         0 52428800  512 YES UNASSIGNED             0                      0
     4 UNASSIGNED       0         0 52428800  512 YES UNASSIGNED             0                      0

5.) Check the parameter db_unique_name:

SQL> show parameters unique

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                       string      orcl

6.) Add standby-related entries to the primary database:

SQL> create pfile='/home/oracle/initprim.ora' from spfile;

Sample init.ora from the primary:

orcl.__db_cache_size=2097152000
orcl.__java_pool_size=16777216
orcl.__large_pool_size=16777216
orcl.__shared_pool_size=536870912
orcl.__streams_pool_size=0
*.audit_file_dest='/opt/app/oracle/admin/orcl/adump'
*.background_dump_dest='/opt/app/oracle/admin/orcl/bdump'
*.compatible='10.2.0.3.0'
*.control_files='/opt/app/oracle/oradata/orcl/control01.ctl','/opt/app/oracle/oradata/orcl/control02.ctl','/opt/app/oracle/oradata/orcl/control03.ctl'
*.core_dump_dest='/opt/app/oracle/admin/orcl/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='orcl'
*.db_recovery_file_dest='/opt/app/oracle/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.db_unique_name='orcl'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)'
*.job_queue_processes=10
*.log_archive_dest_1='location=/opt/app/oracle/oradata/orcl/archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=orcl'
*.open_cursors=300
*.pga_aggregate_target=823132160
*.processes=500

*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=2684354560
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/opt/app/oracle/admin/orcl/udump'
db_unique_name=orcl
LOG_ARCHIVE_CONFIG='DG_CONFIG=(orcl,orcl1)'
*.LOG_ARCHIVE_DEST_2='SERVICE=ORCL1 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=orcl1'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=3
DB_FILE_NAME_CONVERT='/u01/oradata/orcl','/opt/app/oracle/oradata/orcl'
LOG_FILE_NAME_CONVERT='/u01/oradata/orcl/archive','/opt/app/oracle/oradata/orcl/archive','/u01/oradata/flash_recovery_area/orcl','/opt/app/oracle/flash_recovery_area/orcl/onlinelog'
FAL_SERVER=orcl1
FAL_CLIENT=orcl

Copy the init.ora and make the necessary changes to the file to be used on the standby side, such as the locations of various files, FAL_SERVER, FAL_CLIENT, and so on.

Sample init.ora on the standby DB:

orcl.__db_cache_size=2097152000
orcl.__java_pool_size=16777216
orcl.__large_pool_size=16777216
orcl.__shared_pool_size=536870912
orcl.__streams_pool_size=0
*.audit_file_dest='/u01/oradata/orcl/adump'
*.background_dump_dest='/u01/oradata/orcl/bdump'
*.compatible='10.2.0.3.0'
*.control_files='/u01/oradata/orcl/control01.ctl','/u01/oradata/orcl/control02.ctl','/u01/oradata/orcl/control03.ctl'
*.core_dump_dest='/u01/oradata/orcl/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='orcl'
*.db_recovery_file_dest='/u01/oradata/flash_recovery_area/orcl'
*.db_recovery_file_dest_size=2147483648
*.db_unique_name='orcl1'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)'
*.job_queue_processes=10
*.log_archive_dest_1='location=/u01/oradata/orcl/archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=orcl'
*.open_cursors=300
*.pga_aggregate_target=823132160
*.processes=500
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=2684354560
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/oradata/orcl/udump'
LOG_ARCHIVE_CONFIG='DG_CONFIG=(orcl,orcl1)'
*.LOG_ARCHIVE_DEST_2='SERVICE=ORCL LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=orcl'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=3
DB_FILE_NAME_CONVERT='/opt/app/oracle/oradata/orcl','/u01/oradata/orcl'
LOG_FILE_NAME_CONVERT='/opt/app/oracle/oradata/orcl/archive','/u01/oradata/orcl/archive','/opt/app/oracle/flash_recovery_area/orcl/onlinelog','/u01/oradata/flash_recovery_area/orcl'
FAL_SERVER=orcl
FAL_CLIENT=orcl1

7.) Shutdown the primary database. Use the newly created pfile to startup nomount the database, then create an spfile for the database. Mount the database and create a standby controlfile. Shutdown the database and take a cold backup of the database, all files, including the redo log files. You can also create a standby DB from a hot backup.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup nomount pfile='/home/oracle/pfileorcl.ora'
ORACLE instance started.

Total System Global Area 2684354560 bytes
Fixed Size                  2086352 bytes
Variable Size             570427952 bytes
Database Buffers         2097152000 bytes
Redo Buffers               14688256 bytes

SQL> create spfile from pfile='/home/oracle/pfileorcl.ora';

File created.

Meanwhile I also received this error:

create spfile from pfile='/home/oracle/pfileregdb.ora'
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [kspsetpao1], [1753], [1700], [*], [user_dump_dest], [33], [], []

Note: this error usually appears when the syntax of the pfile is wrong somewhere; fix the pfile and try again. It worked for me.

SQL> shutdown immediate;
ORA-01507: database not mounted
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 2684354560 bytes
Fixed Size                  2086352 bytes
Variable Size             570427952 bytes
Database Buffers         2097152000 bytes
Redo Buffers               14688256 bytes

Database mounted.
Database opened.

8.) Shutdown the database again and take a cold backup of all files.

9.) Create a standby control file:

SQL> startup mount
ORACLE instance started.

Total System Global Area 2684354560 bytes
Fixed Size                  2086352 bytes
Variable Size             570427952 bytes
Database Buffers         2097152000 bytes
Redo Buffers               14688256 bytes
Database mounted.

-- With the database mounted, create a standby controlfile.
SQL> alter database create standby controlfile as 'standby.ctl';

Database altered.

-- Open the primary read write.
SQL> alter database open;

Database altered.

10.) Transfer all the files from the cold backup from the primary to the standby server. Also copy the password file from primary to standby, and copy the standby controlfile created in step 9 to the right name and location on the standby server. I use SFTP for transferring the files.

11.) Add TNS entries for the primary DB and standby DB on both primary and standby servers, i.e. each server should have an entry for both the primary (orcl) and the standby (orcl1).

12.) Copy the pfile from step 6 for the standby DB. Now try to nomount the standby database with the new pfile.

[oracle@dbtest dbs]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Tue Sep 15 04:57:32 2009
Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup nomount pfile='/home/oracle/oracle/product/10.2.0/db_1/dbs/pfilestbregdb.ora';
ORACLE instance started.

Total System Global Area 1694498816 bytes
Fixed Size                  1219784 bytes
Variable Size             402654008 bytes
Database Buffers         1275068416 bytes
Redo Buffers               15556608 bytes

13.) Create spfile from pfile:

SQL> create spfile from pfile='/home/oracle/oracle/product/10.2.0/db_1/dbs/pfilestbregdb.ora';

File created.

14.) Shutdown the DB and do a startup mount:

SQL> startup mount;

15.) Start the redo apply process:

SQL> alter database recover managed standby database disconnect from session;

OR

SQL> alter database recover managed standby database nodelay disconnect parallel 8;

16.) Verification:

SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;

To check for an archive gap:

SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;

For RAC:

SELECT thread#, low_sequence#, high_sequence# FROM gv$archive_gap;

To stop redo apply:

alter database recover managed standby database cancel;

17.) Check the alert log files and verify that you did not receive any errors.

18.) Switch some logfiles on the primary and check that the same are getting applied to the standby.

On primary:

SQL> alter system switch logfile;

On standby:

SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;

That should be it: your physical standby DB should be working fine.
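V$ARCHIVE_GAP reports ranges of log sequence numbers that never arrived on the standby, per thread. As an illustration only (not Oracle code), the same gap detection over a list of received sequence numbers can be sketched in Python:

```python
def find_gaps(received_sequences):
    # Report (low_sequence#, high_sequence#) ranges missing between the
    # lowest and highest sequence numbers received for one thread --
    # the same shape of answer that V$ARCHIVE_GAP returns.
    s = sorted(set(received_sequences))
    gaps = []
    for prev, cur in zip(s, s[1:]):
        if cur - prev > 1:
            gaps.append((prev + 1, cur - 1))
    return gaps

# Sequences 107-108 never arrived on the standby:
print(find_gaps([105, 106, 109, 110]))  # [(107, 108)]
```

Any gap it reports must be resolved by FAL fetching or by manually registering the missing archived logs before redo apply can continue past it.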

This document explains the step-by-step process of configuring 10g R2 single-instance Data Guard for a single-instance primary on Red Hat Enterprise Linux 32-bit (RHEL3) / CentOS 3.6. Click HERE for the step-by-step process of configuring a RAC standby Data Guard for a RAC primary on Red Hat Linux.

Task List:
- 10g R2 Dataguard Technical Architecture
- Primary DB init parameter
- Standby DB init parameter
- Enable Archiving On Primary DB
- tnsnames.ora/listener.ora configuration
- Creating Standby Redo Logs (SRLs)
- Backup the Primary DB
- Creating the standby controlfile
- Starting and verifying Standby DB
- Testing Realtime Apply

Technical Architecture of DataGuard
Primary Database
  Name: primary
  Service Name: primary
Primary Node
  SID: primary
  Network name (hostname): node1-prv
  ORACLE_BASE: /u01/app/oracle

Standby Database
  Name: stndby
  Service Name: stndby
Standby Node
  SID: stndby
  Network name (hostname): node2-prv
  ORACLE_BASE: /u01/app/oracle

Primary DB init parameter
primary.__db_cache_size=67108864
primary.__java_pool_size=4194304
primary.__large_pool_size=4194304
primary.__shared_pool_size=88080384
primary.__streams_pool_size=0
*.archive_lag_target=0
*.audit_file_dest='/u01/app/oracle/admin/primary/adump'
*.background_dump_dest='/u01/app/oracle/admin/primary/bdump'
*.compatible='10.2.0.1.0'
*.control_files='/u01/app/oracle/oradata/PRIMARY/controlfile/o1_mf_26lg83r9_.ctl','/u01/app/oracle/flash_recovery_area/PRIMARY/controlfile/o1_mf_26lg844c_.ctl'
*.core_dump_dest='/u01/app/oracle/admin/primary/cdump'
*.db_block_size=8192
*.db_create_file_dest='/u01/app/oracle/oradata'
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='primary'
*.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.db_unique_name='primary'
*.dg_broker_start=TRUE
*.dispatchers='(PROTOCOL=TCP) (SERVICE=primaryXDB)'
*.fal_client='primary'
*.fal_server='stndby'
*.job_queue_processes=10
*.log_archive_config='DG_CONFIG=(primary,stndby)'
*.log_archive_dest_1='LOCATION=/u01/app/oracle/oradata/PRIMARY/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=primary'
*.log_archive_dest_2='SERVICE=stndby LGWR ASYNC VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stndby'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='%t_%s_%r.dbf'
primary.log_archive_format='%t_%s_%r.dbf'
*.log_archive_max_processes=2
*.log_archive_min_succeed_dest=1
primary.log_archive_trace=0
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=167772160
primary.standby_archive_dest='/u01/app/oracle/oradata/PRIMARY/arch'
*.standby_file_management='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/app/oracle/admin/primary/udump'
*.local_listener=prim

Standby DB init parameter
stndby.__db_cache_size=75497472
stndby.__java_pool_size=4194304
stndby.__large_pool_size=4194304
stndby.__shared_pool_size=79691776
stndby.__streams_pool_size=0
*.archive_lag_target=0
*.audit_file_dest='/u01/app/oracle/admin/stndby/adump'
*.background_dump_dest='/u01/app/oracle/admin/stndby/bdump'
*.compatible='10.2.0.1.0'
*.control_files='/u01/app/oracle/oradata/STNDBY/controlfile/stndby01.ctl','/u01/app/oracle/flash_recovery_area/STNDBY/controlfile/stndby02.ctl'
*.core_dump_dest='/u01/app/oracle/admin/stndby/cdump'
*.db_block_size=8192
*.db_create_file_dest='/u01/app/oracle/oradata'
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='primary'
*.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.db_unique_name='stndby'
*.dg_broker_start=TRUE
*.dispatchers='(PROTOCOL=TCP) (SERVICE=stndbyXDB)'
*.fal_client='stndby'
*.fal_server='primary'
*.job_queue_processes=10
*.log_archive_config='DG_CONFIG=(stndby,primary)'
*.log_archive_dest_1='LOCATION=/u01/app/oracle/oradata/STNDBY/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=stndby'
*.log_archive_dest_2='SERVICE=primary LGWR ASYNC VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=primary'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='%t_%s_%r.dbf'
*.log_archive_max_processes=2
*.log_archive_trace=0
*.db_file_name_convert='PRIMARY','STNDBY'
*.log_file_name_convert='PRIMARY','STNDBY'
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=167772160
*.standby_archive_dest='/u01/app/oracle/oradata/STNDBY/arch'
*.standby_file_management='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/app/oracle/admin/stndby/udump'
*.local_listener=stnd
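The log_archive_format value '%t_%s_%r.dbf' used above builds each archived log name from the thread# (%t), the log sequence# (%s), and the resetlogs ID (%r). A rough Python sketch of that substitution, using made-up example values, not Oracle's actual implementation:

```python
def archived_log_name(fmt, thread, sequence, resetlogs_id):
    # Emulate LOG_ARCHIVE_FORMAT substitution:
    #   %t = thread#, %s = log sequence#, %r = resetlogs ID
    return (fmt.replace('%t', str(thread))
               .replace('%s', str(sequence))
               .replace('%r', str(resetlogs_id)))

print(archived_log_name('%t_%s_%r.dbf', 1, 42, 658489513))  # 1_42_658489513.dbf
```

Including %r (or the zero-padded %R) in the format is what lets a standby distinguish logs produced before and after an OPEN RESETLOGS on the primary.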

Enabling Archiving on primary DB:
Ensure that the primary is in archive log mode
SQL> shutdown immediate
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;

tnsnames/listener.ora configuration:
# tnsnames.ora Network Configuration File: /u01/app/oracle/product/10.2.0/db10g/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

STNDBY =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2-prv)(PORT = 10521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = STNDBY)
    )
  )

PRIM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-prv)(PORT = 10521)))

PRIMARY =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1-prv)(PORT = 10521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = PRIMARY)
    )
  )

EXTPROC_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
    )
    (CONNECT_DATA =
      (SID = PLSExtProc)
      (PRESENTATION = RO)
    )
  )

Copy the same file to the standby server and adjust it based on the listener.ora file. Also update the listener.ora file so that it listens for the SIDs mentioned in the tnsnames.ora file.
# listener.ora Network Configuration File: /u01/app/oracle/product/10.2.0/db10g/network/admin/listener.ora
# Generated by Oracle configuration tools.

SID_LIST_LISTENER_STBY =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db10g)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (SID_NAME = stndby)
      (GLOBAL_DBNAME = stndby_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db10g)
    )
  )

LISTENER_STBY =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2-prv)(PORT = 10521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
    )
  )

Standby Redo Logs (SRLs) Creation:
In case of OMF: Get the max group# of online redo logs on PRIMARY database
SELECT max (group#) from v$logfile;

Create the standby redo logs on the primary database with the same size as the online redo logs. If the above query returns the value 3, and each logfile is 50M in size (from the query below), then create at least 4 standby redo logs of 50M per thread.

SELECT bytes FROM v$log;

Create the SRLs:
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M

/
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M
/
ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M
/
ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 SIZE 50M
/
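The count used above follows the standard Data Guard recommendation: one more standby redo log group than the number of online redo log groups, per thread. The arithmetic can be sketched as (single instance here, so threads = 1):

```python
def standby_group_count(online_groups_per_thread, threads=1):
    # Recommended minimum number of standby redo log groups:
    # (maximum number of logfiles per thread + 1) * number of threads
    return (online_groups_per_thread + 1) * threads

print(standby_group_count(3))     # 3 online groups -> 4 standby groups
print(standby_group_count(3, 2))  # a 2-thread RAC primary would need 8
```

The extra group per thread gives the RFS process somewhere to write while the archiver is still busy with the previous standby log.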

Backup the primary DB:
Take a cold/hot/RMAN backup of the primary database. I used a cold backup in this case.

SQL> SHUTDOWN IMMEDIATE

Backup the data files, online redo logs and the standby logs (if created) and scp them to the corresponding directory on the standby server. I used the same directory structure as on the primary; the only difference is the name of the directory. For example, on the primary database I have the path /u01/app/oracle/oradata/PRIMARY/datafile, whereas on the standby server I have the path /u01/app/oracle/oradata/STNDBY/datafile. This is why I have used the db_file_name_convert parameter: in the standby init.ora file with the value db_file_name_convert='PRIMARY','STNDBY', and for the primary's standby role with the value db_file_name_convert='STNDBY','PRIMARY'.

Create the Standby Controlfile:

On the primary database:
SQL>STARTUP MOUNT; SQL>ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/stndby01.ctl'; SQL>ALTER DATABASE OPEN;

Copy the stndby01.ctl file to the standby site. I have multiplexed it in the initstndby.ora file, so I scp'ed the same file to both locations mentioned in initstndby.ora. Also, copy the $ORACLE_HOME/dbs/orapwprimary file of the primary to the same location on the standby with the name orapwstndby.

Starting and Verifying the standby DB:
SQL> create spfile from pfile;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

Verify the Standby :

Identify the existing files on the standby
SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

Switch a log on the primary database:
ALTER SYSTEM SWITCH LOGFILE;

Re-run the same SQL to make sure that the logs are received and applied on the standby server. Verify that these logs were applied:
SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

Testing Real-time Apply: On Primary Database, create a table 'test' and insert a record.
CREATE TABLE test (ts DATE);
INSERT INTO test VALUES (sysdate);
COMMIT;

Do not make a log switch: since the LGWR ASYNC option is set up, the redo should be transferred and applied on the standby server in real time. On the STANDBY DB server:
SELECT PROCESS, STATUS, SEQUENCE#, BLOCK#, BLOCKS, DELAY_MINS FROM V$MANAGED_STANDBY;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;
SELECT * FROM test;

You should see the committed transaction. Now place the standby back in managed recovery mode:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

This will take the standby directly from read only mode and place it in managed recovery mode.
THURSDAY, JANUARY 21, 2010

Step by Step, document for creating Physical Standby Database, 10g DATA GUARD

10g Data Guard, Physical Standby Creation, step by step

primary database name: white on rac2 machine

standby database name: black on rac1 machine

Creating a Data Guard Physical Standby environment: General Review

Manually setting up a physical standby database is a simple task when all prerequisites and setup steps are carefully met and executed. In this example I used two hosts that host a RAC database. All RAC pre-install requisites were therefore already in place, and no additional configuration was necessary to implement Data Guard Physical Standby manually.

The Environment:
- 2 Linux servers, Oracle Distribution 2.6.9-55 EL i686 i386 GNU/Linux
- Oracle Database 10g Enterprise Edition Release 10.2.0.1.0
- ssh is configured for user oracle on both nodes
- Oracle Home is on an identical path on both nodes

Implementation notes:

Once you have your primary database up and running, these are the steps to follow on the primary:
1. Enable Forced Logging
2. Create a Password File
3. Configure a Standby Redo Log
4. Enable Archiving
5. Set Primary Database Initialization Parameters

Having followed these steps, to implement the physical standby you need to:
1. Create a Control File for the Standby Database
2. Backup the Primary Database and transfer a copy to the Standby node
3. Prepare an Initialization Parameter File for the Standby Database
4. Configure the listener and tnsnames to support the database on both nodes
5. Set Up the Environment to Support the Standby Database on the standby node
6. Start the Physical Standby Database
7. Verify the Physical Standby Database Is Performing Properly

Step by Step Implementation of a Physical Standby Environment Primary Database Steps Primary Database General View

SQL> archive log list;
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     0
Current log sequence           1

SQL> select name from v$database;

NAME
---------
WHITE

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/white/system01.dbf
/u01/app/oracle/oradata/white/undotbs01.dbf
/u01/app/oracle/oradata/white/sysaux01.dbf
/u01/app/oracle/oradata/white/users01.dbf

SQL> show parameters unique

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                       string      white

Enable Forced Logging

In order to implement a standby database we enable 'Forced Logging'. This option ensures that even if a 'nologging' operation is done, force logging takes precedence and all operations are logged into the redo logs.

SQL> ALTER DATABASE FORCE LOGGING;

Database altered.

Create a Password File

A password file must be created on the primary and copied over to the standby site. The sys password must be identical on both sites. This is a key prerequisite in order to be able to ship and apply archived logs from primary to standby.

[oracle@rac2 ~]$ cd $ORACLE_HOME/dbs [oracle@rac2 dbs]$ orapwd file=orapwwhite password=oracle force=y

SQL> select * from v$pwfile_users;

USERNAME                       SYSDB SYSOP
------------------------------ ----- -----
SYS                            TRUE  TRUE
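On Unix the file produced by orapwd lives in $ORACLE_HOME/dbs and is named orapw followed by the instance SID, so the copy shipped to the standby must be renamed for the standby's SID. A trivial sketch of that naming rule, using this article's example SIDs:

```python
def password_file_name(oracle_sid):
    # orapwd's default naming convention on Unix: orapw<ORACLE_SID>
    return "orapw" + oracle_sid

print(password_file_name("white"))  # orapwwhite (primary side)
print(password_file_name("black"))  # orapwblack (renamed copy on the standby)
```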

Configure a Standby Redo Log

A standby redo log is added to enable the Data Guard Maximum Availability and Maximum Protection modes. It is important to configure the standby redo logs (SRLs) with the same size as the online redo logs. In this example I'm using Oracle Managed Files, which is why I don't need to provide the SRL path and file name. If you are not using OMF, you must pass the fully qualified name.

SQL> select group#, type, member from v$logfile;

GROUP# TYPE    MEMBER
------ ------- --------------------------------------------------
     3 ONLINE  /u01/app/oracle/oradata/white/redo03.log
     2 ONLINE  /u01/app/oracle/oradata/white/redo02.log
     1 ONLINE  /u01/app/oracle/oradata/white/redo01.log

SQL> select bytes from v$log;

     BYTES
----------
  52428800
  52428800
  52428800

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
  2  '/u01/app/oracle/oradata/white/stby04.log' size 50m;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5
  2  '/u01/app/oracle/oradata/white/stby05.log' size 50m;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6
  2  '/u01/app/oracle/oradata/white/stby06.log' size 50m;

Database altered.

SQL> SELECT GROUP#, TYPE, MEMBER FROM V$LOGFILE;

GROUP# TYPE    MEMBER
------ ------- --------------------------------------------------
     3 ONLINE  /u01/app/oracle/oradata/white/redo03.log
     2 ONLINE  /u01/app/oracle/oradata/white/redo02.log
     1 ONLINE  /u01/app/oracle/oradata/white/redo01.log
     4 STANDBY /u01/app/oracle/oradata/white/stby04.log
     5 STANDBY /u01/app/oracle/oradata/white/stby05.log
     6 STANDBY /u01/app/oracle/oradata/white/stby06.log

6 rows selected.

Set Primary Database Initialization Parameters

Data Guard must use an spfile. In order to configure it, we create and set the standby parameters in a regular pfile, and once it is ready we convert it to an spfile. Several init.ora parameters control the behavior of a Data Guard environment. In this example the primary database init.ora is configured so that it can hold both roles, as primary or standby.

SQL> CREATE PFILE FROM SPFILE;

File created.

(or)

SQL> CREATE PFILE='/tmp/initwhite.ora' from spfile;

File created.

Edit the pfile to add the standby parameters, shown at the end of the listing below:

white.__db_cache_size=184549376
white.__java_pool_size=4194304
white.__large_pool_size=4194304
white.__shared_pool_size=88080384
white.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/white/adump'
*.background_dump_dest='/u01/app/oracle/admin/white/bdump'
*.compatible='10.2.0.1.0'
*.control_files='/u01/app/oracle/oradata/white/control01.ctl','/u01/app/oracle/oradata/white/control02.ctl','/u01/app/oracle/oradata/white/control03.ctl'
*.core_dump_dest='/u01/app/oracle/admin/white/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='white'
*.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=whiteXDB)'
*.job_queue_processes=10
*.open_cursors=300
*.pga_aggregate_target=94371840
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=285212672
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/app/oracle/admin/white/udump'
db_unique_name='white'
LOG_ARCHIVE_CONFIG='DG_CONFIG=(white,black)'
LOG_ARCHIVE_DEST_1='LOCATION=/u01/app/oracle/oradata/white/arch/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=white'
LOG_ARCHIVE_DEST_2='SERVICE=black LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=black'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
#Standby role parameters------------------------------------------
fal_server=black
fal_client=white
standby_file_management=auto
db_file_name_convert='/u01/app/oracle/oradata/black/','/u01/app/oracle/oradata/white/'
log_file_name_convert='/u01/app/oracle/oradata/black/','/u01/app/oracle/oradata/white/'
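The two *_name_convert parameters above are pairs of (pattern, replacement) strings: when this database runs in the standby role, file names recorded with the other site's path are rewritten to local paths. A rough illustration of that substitution (the real matching is done by the server at mount and file-creation time, not by user code):

```python
def convert_name(filename, convert_pairs):
    # convert_pairs mirrors DB_FILE_NAME_CONVERT: a flat list of
    # pattern, replacement, pattern, replacement, ...
    # The first matching pattern wins.
    for pattern, replacement in zip(convert_pairs[::2], convert_pairs[1::2]):
        if pattern in filename:
            return filename.replace(pattern, replacement, 1)
    return filename

pairs = ['/u01/app/oracle/oradata/black/', '/u01/app/oracle/oradata/white/']
print(convert_name('/u01/app/oracle/oradata/black/system01.dbf', pairs))
# /u01/app/oracle/oradata/white/system01.dbf
```

This is why the primary's pfile maps black paths to white paths, while the standby's pfile (shown later) maps white paths to black paths.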

Once the new parameter file is ready, we create the spfile from it:

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup nomount pfile=/u01/app/oracle/product/10.2.0/db_1/dbs/initwhite.ora
ORA-16032: parameter LOG_ARCHIVE_DEST_1 destination string cannot be translated
ORA-07286: sksagdi: cannot obtain device information.
Linux Error: 2: No such file or directory

Note: create the archive log destination folder (location) specified in the parameter file, then start the database again.

SQL> startup nomount pfile=/u01/app/oracle/product/10.2.0/db_1/dbs/initwhite.ora
ORACLE instance started.

Total System Global Area  285212672 bytes
Fixed Size                  1218992 bytes
Variable Size              96470608 bytes
Database Buffers          184549376 bytes
Redo Buffers                2973696 bytes

SQL> create spfile from pfile;

File created.

SQL> shutdown immediate; ORA-01507: database not mounted

ORACLE instance shut down.

Enable Archiving

On 10g you can enable archive log mode by mounting the database and executing the archivelog command:

SQL> startup mount
ORACLE instance started.

Total System Global Area  285212672 bytes
Fixed Size                  1218992 bytes
Variable Size              96470608 bytes
Database Buffers          184549376 bytes
Redo Buffers                2973696 bytes
Database mounted.

SQL> alter database archivelog;

Database altered.

SQL> alter database open;

Database altered.

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/oradata/white/arch/
Oldest online log sequence     1
Next log sequence to archive   2
Current log sequence           2

Standby Database Steps

Here I am going to create the standby database using an RMAN backup of the primary database's datafiles, redo logs, and controlfile. Compared with a user-managed backup, RMAN is a more comfortable and flexible method.

Create an RMAN backup which we will use later to create the standby:

[oracle@rac2 ~]$ . oraenv ORACLE_SID = [oracle] ? white [oracle@rac2 ~]$ rman target=/

Recovery Manager: Release 10.2.0.1.0 - Production on Wed Jan 20 18:41:51 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: WHITE (DBID=3603807872)

RMAN> backup full database format '/u01/app/oracle/backup/%d_%U.bckp' plus archivelog format '/u01/app/oracle/backup/%d_%U.bckp';

Next, create a standby controlfile backup via RMAN: RMAN> configure channel device type disk format '/u01/app/oracle/backup/%U';

new RMAN configuration parameters: CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/u01/app/oracle/backup/%U'; new RMAN configuration parameters are successfully stored released channel: ORA_DISK_1

RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY;

RMAN> BACKUP ARCHIVELOG ALL;

In this simple example, I am backing up the primary database to disk; therefore, I must make the backupsets available to the standby host if I want to use them as the basis for my duplicate operation:

[oracle@rac2 ~]$ cd /u01/app/oracle/backup
[oracle@rac2 backup]$ ls -lart
total 636080
drwxrwxr-x 9 oracle oinstall      4096 Jan 20 18:42 ..
-rw-r----- 1 oracle oinstall  50418176 Jan 20 18:43 WHITE_01l3v1uv_1_1.bckp
-rw-r----- 1 oracle oinstall 531472384 Jan 20 18:54 WHITE_02l3v203_1_1.bckp
-rw-r----- 1 oracle oinstall   7143424 Jan 20 18:54 WHITE_03l3v2jf_1_1.bckp
-rw-r----- 1 oracle oinstall   1346560 Jan 20 18:54 WHITE_04l3v2jv_1_1.bckp
-rw-r----- 1 oracle oinstall   7110656 Jan 20 19:19 05l3v41r_1_1
drwxr-xr-x 2 oracle oinstall      4096 Jan 20 19:20 .
-rw-r----- 1 oracle oinstall  53174272 Jan 20 19:21 06l3v448_1_1

[oracle@rac2 backup]$ scp * oracle@rac1:/u01/app/oracle/backup/
05l3v41r_1_1              100% 6944KB   6.8MB/s   00:00
06l3v448_1_1              100%   51MB  16.9MB/s   00:03
WHITE_01l3v1uv_1_1.bckp   100%   48MB   2.7MB/s   00:18
WHITE_02l3v203_1_1.bckp   100%  507MB   1.5MB/s   05:47
WHITE_03l3v2jf_1_1.bckp   100% 6976KB 996.6KB/s   00:07
WHITE_04l3v2jv_1_1.bckp   100% 1315KB   1.3MB/s   00:01

NOTE: The backup folder location must be the same on the primary and standby hosts, e.g. the /u01/app/oracle/backup folder on both.

On the standby node, create the required directories to receive the datafiles:

mkdir -p /u01/app/oracle/oradata/black
mkdir -p /u01/app/oracle/oradata/black/arch
mkdir -p /u01/app/oracle/admin/black
mkdir -p /u01/app/oracle/admin/black/adump
mkdir -p /u01/app/oracle/admin/black/bdump
mkdir -p /u01/app/oracle/admin/black/udump
mkdir -p /u01/app/oracle/flash_recovery_area/WHITE
mkdir -p /u01/app/oracle/flash_recovery_area/WHITE/onlinelog

Prepare an Initialization Parameter File for the Standby Database

Copy from the primary pfile to the standby destination

[oracle@rac2 ~]$ cd /u01/app/oracle/product/10.2.0/db_1/dbs/ [oracle@rac2 dbs]$ scp initwhite.ora oracle@rac1:/tmp/initblack.ora initwhite.ora 100% 1704 1.7KB/s 00:00

Copy and edit the primary init.ora to set it up for the standby role; the standby-specific values appear in the listing below:

black.__db_cache_size=188743680
black.__java_pool_size=4194304
black.__large_pool_size=4194304
black.__shared_pool_size=83886080
black.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/black/adump'
*.background_dump_dest='/u01/app/oracle/admin/black/bdump'
*.compatible='10.2.0.1.0'
*.control_files='/u01/app/oracle/oradata/black/control01.ctl','/u01/app/oracle/oradata/black/control02.ctl','/u01/app/oracle/oradata/black/control03.ctl'
*.core_dump_dest='/u01/app/oracle/admin/black/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='/u01/app/oracle/oradata/white/','/u01/app/oracle/oradata/black/'
*.db_name='white'
*.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.db_unique_name='black'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=blackXDB)'
*.fal_client='black'
*.fal_server='white'
*.job_queue_processes=10
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(white,black)'
*.LOG_ARCHIVE_DEST_1='LOCATION=/u01/app/oracle/oradata/black/arch/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=black'
*.LOG_ARCHIVE_DEST_2='SERVICE=white LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=white'
*.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
*.LOG_ARCHIVE_DEST_STATE_2='ENABLE'
*.LOG_ARCHIVE_FORMAT='%t_%s_%r.arc'
*.LOG_ARCHIVE_MAX_PROCESSES=30
*.log_file_name_convert='/u01/app/oracle/oradata/white/','/u01/app/oracle/oradata/black/'
*.open_cursors=300
*.pga_aggregate_target=94371840
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=285212672
*.standby_file_management='auto'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/app/oracle/admin/black/udump'

Configure the listener and tnsnames to support the database on both nodes

Configure listener.ora on both servers to hold entries for both databases:

# on rac2 machine
LISTENER_VMRACTEST =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac2.localdomain)(PORT = 1521))
    )
  )

SID_LIST_LISTENER_VMRACTEST =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = white)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
      (SID_NAME = white)
    )
  )

# on rac1 machine
LISTENER_VMRACTEST =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac1.localdomain)(PORT = 1521))
    )
  )

SID_LIST_LISTENER_VMRACTEST =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = black)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
      (SID_NAME = black)
    )
  )

Configure tnsnames.ora on both servers to hold entries for both databases:

# on rac2 machine
LISTENER_VMRACTEST =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac2.localdomain)(PORT = 1521))
    )
  )

WHITE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = white)
    )
  )

BLACK =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = black)
    )
  )

# on rac1 machine
LISTENER_VMRACTEST =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac1.localdomain)(PORT = 1521))
    )
  )

WHITE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = white)
    )
  )

BLACK =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = black)
    )
  )

) ) Start the listener and check tnsping on both nodes to both services #on machine rac1 [oracle@rac1 tmp]$ lsnrctl stop LISTENER_VMRACTEST

LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 20-JAN-2010 23:59:41

Copyright (c) 1991, 2005, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=rac1.localdomain)(PORT=1521))) The command completed successfully

[oracle@rac1 tmp]$ lsnrctl start LISTENER_VMRACTEST

LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 21-JAN-2010 00:00:00

Copyright (c) 1991, 2005, Oracle. All rights reserved.

Starting /u01/app/oracle/product/10.2.0/db_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 10.2.0.1.0 - Production
System parameter file is /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/product/10.2.0/db_1/network/log/listener_vmractest.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac1.localdomain)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=rac1.localdomain)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_VMRACTEST
Version                   TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date                21-JAN-2010 00:00:00
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/product/10.2.0/db_1/network/log/listener_vmractest.log

Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac1.localdomain)(PORT=1521)))
Services Summary...
Service "black" has 1 instance(s).
  Instance "black", status UNKNOWN, has 1 handler(s) for this service...
Service "black_DGMGRL" has 1 instance(s).
  Instance "black", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

[oracle@rac1 tmp]$ tnsping black

TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 21-JAN-2010 00:00:21

Copyright (c) 1997, 2005, Oracle. All rights reserved.

Used parameter files: /u01/app/oracle/product/10.2.0/db_1/network/admin/sqlnet.ora

Used TNSNAMES adapter to resolve the alias Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = rac1.localdomain)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = black))) OK (10 msec)

[oracle@rac1 tmp]$ tnsping white

TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 21-JAN-2010 00:00:29

Copyright (c) 1997, 2005, Oracle. All rights reserved.

Used parameter files:

/u01/app/oracle/product/10.2.0/db_1/network/admin/sqlnet.ora

Used TNSNAMES adapter to resolve the alias Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = rac2.localdomain)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = white))) OK (10 msec)

#on rac2 machine [oracle@rac2 dbs]$ lsnrctl stop LISTENER_VMRACTEST

LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 21-JAN-2010 00:22:48

Copyright (c) 1991, 2005, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=rac2.localdomain)(PORT=1521)))
The command completed successfully

[oracle@rac2 dbs]$ lsnrctl start LISTENER_VMRACTEST

LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 21-JAN-2010 00:23:08

Copyright (c) 1991, 2005, Oracle. All rights reserved.

Starting /u01/app/oracle/product/10.2.0/db_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 10.2.0.1.0 - Production
System parameter file is /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/product/10.2.0/db_1/network/log/listener_vmractest.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac2.localdomain)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=rac2.localdomain)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_VMRACTEST
Version                   TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date                21-JAN-2010 00:23:08
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/product/10.2.0/db_1/network/log/listener_vmractest.log

Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac2.localdomain)(PORT=1521)))
Services Summary...
Service "white" has 1 instance(s).
  Instance "white", status UNKNOWN, has 1 handler(s) for this service...
Service "white_DGMGRL" has 1 instance(s).
  Instance "white", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

[oracle@rac2 dbs]$ tnsping white

TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 21-JAN-2010 00:23:14

Copyright (c) 1997, 2005, Oracle. All rights reserved.

Used parameter files: /u01/app/oracle/product/10.2.0/db_1/network/admin/sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = rac2.localdomain)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = white)))
OK (0 msec)

[oracle@rac2 dbs]$ tnsping black

TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 21-JAN-2010 00:23:18

Copyright (c) 1997, 2005, Oracle. All rights reserved.

Used parameter files: /u01/app/oracle/product/10.2.0/db_1/network/admin/sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = rac1.localdomain)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = black)))
OK (10 msec)

Set up the environment to support the standby database on the standby node.

Create a password file for the standby:

[oracle@rac1 ~]$ orapwd file=$ORACLE_HOME/dbs/orapwblack password=oracle

Note: the SYS password must be identical on both the primary and standby databases.
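Once the standby instance is started, you can confirm the password file is actually in use by querying v$pwfile_users on each side; a hedged sketch (run on both primary and standby):

```sql
-- SYS should appear with SYSDBA = TRUE on every instance if the
-- password file is in effect; an empty result means no password file is used.
SELECT username, sysdba, sysoper FROM v$pwfile_users;
```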

Append an entry to oratab:

[oracle@rac1 ~]$ echo "black:/u01/app/oracle/product/10.2.0/db_1:N" >> /etc/oratab
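The plain `echo >>` above appends unconditionally, so re-running it would duplicate the entry. A minimal sketch of an idempotent variant, demonstrated against a throwaway copy at /tmp/oratab.demo (a path chosen for this demo, not the real /etc/oratab):

```shell
# Demo against a throwaway file so we never touch the real /etc/oratab
ORATAB=/tmp/oratab.demo
ENTRY="black:/u01/app/oracle/product/10.2.0/db_1:N"
: > "$ORATAB"                                               # start from an empty demo file
# Append only if no entry for this SID exists yet
grep -q "^black:" "$ORATAB" || echo "$ENTRY" >> "$ORATAB"
grep -q "^black:" "$ORATAB" || echo "$ENTRY" >> "$ORATAB"   # second run is a no-op
cat "$ORATAB"
```

Running the guarded append twice still leaves exactly one `black:` line in the file.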

Startup nomount the standby database

Start the standby instance in NOMOUNT state in preparation for the RMAN duplicate operation, and generate an spfile:

[oracle@rac1 ~]$ . oraenv ORACLE_SID = [whiteowl] ? black [oracle@rac1 ~]$ sqlplus '/as sysdba'

SQL*Plus: Release 10.2.0.1.0 - Production on Thu Jan 21 00:38:03 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup nomount pfile='/tmp/initblack.ora' ORACLE instance started.

Total System Global Area  285212672 bytes
Fixed Size                  1218992 bytes
Variable Size              92276304 bytes
Database Buffers          188743680 bytes
Redo Buffers                2973696 bytes

SQL> create spfile from pfile='/tmp/initblack.ora';

File created.

SQL> shutdown immediate
ORA-01507: database not mounted

ORACLE instance shut down.

SQL> startup nomount
ORACLE instance started.

Total System Global Area  285212672 bytes
Fixed Size                  1218992 bytes
Variable Size              92276304 bytes
Database Buffers          188743680 bytes
Redo Buffers                2973696 bytes

Create the standby database using rman: [oracle@rac1 ~]$ . oraenv ORACLE_SID = [oracle] ? black [oracle@rac1 ~]$ rman target=sys/oracle@white auxiliary=/

Recovery Manager: Release 10.2.0.1.0 - Production on Thu Jan 21 00:43:11 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: WHITE (DBID=3603807872) connected to auxiliary database: WHITE (not mounted)

RMAN> DUPLICATE TARGET DATABASE FOR STANDBY NOFILENAMECHECK;

Start the redo apply: SQL> alter database recover managed standby database disconnect from session;
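Once redo apply is started, you can check that the managed recovery process (MRP) is alive and see which log sequence it is working on; a sketch, to be run on the mounted standby:

```sql
-- MRP0 should show a status such as APPLYING_LOG or WAIT_FOR_LOG;
-- no MRP row means managed recovery is not running.
SELECT process, status, thread#, sequence#
FROM v$managed_standby
WHERE process LIKE 'MRP%';
```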

Test the configuration by generating archive logs from the primary and then querying the standby to see if the logs are being successfully applied.

On the Primary: SQL> alter system switch logfile; SQL> alter system archive log current;

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/oradata/white/arch/
Oldest online log sequence     8
Next log sequence to archive   10
Current log sequence           10

On the Standby:
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/oradata/black/arch/
Oldest online log sequence     8
Next log sequence to archive   0
Current log sequence           10

SQL> SELECT SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

Stop the managed recovery process on the standby:

SQL> alter database recover managed standby database cancel;

Step-by-Step Instructions for Creating a Physical Standby Database using SQL commands

1. General Overview:

The purpose of this document is to provide a step-by-step guide to using Oracle Data Guard, a high-availability mechanism. I spent about seven days investigating a problem I faced during my first Data Guard experience. Both the primary and standby servers run Red Hat Linux, with the same version of the OS.

- Primary DB : 10.2.0.1
- Standby DB : 10.2.0.3
- Host name of primary DB : arcdb01.es.egwn.lan
- Host name of standby DB : x06.d15.lan

I was trying to set up Oracle Data Guard for 10g; both the primary and standby databases were 10gR2. While configuring a physical standby database using SQL commands, I could not get the redo or archived logs onto my standby DB. Checking the v$archive_dest_status view on the primary DB, I found the error below ("ORACLE not available"):

SELECT * FROM v$archive_dest_status;

DEST_ID                  2
DEST_NAME                LOG_ARCHIVE_DEST_2
STATUS                   ERROR
TYPE                     PHYSICAL
DATABASE_MODE            UNKNOWN
RECOVERY_MODE            UNKNOWN
PROTECTION_MODE          MAXIMUM PERFORMANCE
DESTINATION              X06.D15.LAN
STANDBY_LOGFILE_COUNT    0
STANDBY_LOGFILE_ACTIVE   0
ARCHIVED_THREAD#         0
ARCHIVED_SEQ#            0
APPLIED_THREAD#          0
APPLIED_SEQ#             0
ERROR                    ORA-01034: ORACLE not available
SRL                      NO
DB_UNIQUE_NAME           STANDBY
SYNCHRONIZATION_STATUS   CHECK CONFIGURATION
SYNCHRONIZED             NO
Log files were not applied on my standby DB even though I thought all configurations had been completed successfully, until I realized that the origin of my problem was the Oracle version on the two servers. Even though both Oracle servers were running Oracle 10gR2, always make sure they are at the SAME VERSION. I spent a little more than an hour patching my primary DB to 10.2.0.3, and now my Data Guard configuration works perfectly.

2. Step by Step:

In this section we will perform our Data Guard workshop, which outlines the procedure to create a physical standby.

Steps are :
- Create a backup of the primary
- Create a standby control file
- Adjust the pfile of the primary
- Transfer the datafiles to the standby host
- Create the same directories for trace files
- Edit the pfile of the standby
- Mount the standby

Notes :
Both the primary and standby servers run Red Hat Linux (same OS version).
- Primary DB : 10.2.0.3
- Standby DB : 10.2.0.3
- Host name of primary DB : arcdb01.es.egwn.lan
- Host name of standby DB : x06.d15.lan

Step 1 : Setup Listeners

Tnsnames.ora of the primary DB:

[oracle@arcdb01 ~]$ export TNS_ADMIN=$ORACLE_HOME/network/admin
[oracle@arcdb01 ~]$ cat /u01/app/oracle/oracle/product/10.2.0/ARCDB01/network/admin/tnsnames.ora

X06.D15.LAN =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = X06.D15.LAN)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SID = ARCDB01)
      (SERVER = DEDICATED)
    )
  )

ARCDB01.ES.EGWN.LAN =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = ARCDB01.ES.EGWN.LAN)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SID = ARCDB01)
      (SERVER = DEDICATED)
    )
  )

Tnsnames.ora of the standby DB:

[oracle@x06 dbf]$ export TNS_ADMIN=$ORACLE_HOME/network/admin
[oracle@x06 dbf]$ cat $TNS_ADMIN/tnsnames.ora
# tnsnames.ora Network Configuration File: /u01/app/oracle/oracle/product/10.2.0/X06/oracle/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

ARCDB01.ES.EGWN.LAN =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = ARCDB01.ES.EGWN.LAN)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SID = ARCDB01)
      (SERVER = DEDICATED)
    )
  )

X06.D15.LAN =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = X06.D15.LAN)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SID = ARCDB01)
      (SERVER = DEDICATED)
    )
  )

[oracle@x06 dbf]$
Step 2 : check Listeners status

TNSPING the standby DB from the primary DB:

[oracle@arcdb01 ~]$ tnsping x06.d15.lan

TNS Ping Utility for Linux: Version 10.2.0.3.0 - Production on 26-MAR-2010 15:33:12
Copyright (c) 1997, 2005, Oracle. All rights reserved.

Used parameter files:
/u01/app/oracle/oracle/product/10.2.0/ARCDB01/network/admin/sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = X06.D15.LAN)(PORT = 1521))) (CONNECT_DATA = (SID = ARCDB01) (SERVER = DEDICATED)))
OK (0 msec)
[oracle@arcdb01 ~]$

TNSPING the primary DB from the standby DB:

[oracle@x06 bdump]$ tnsping ARCDB01.ES.EGWN.LAN

TNS Ping Utility for Linux: Version 10.2.0.3.0 - Production on 26-MAR-2010 15:34:05
Copyright (c) 1997, 2006, Oracle. All rights reserved.

Used parameter files:
/u01/app/oracle/oracle/product/10.2.0/X06/oracle/network/admin/sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ARCDB01.ES.EGWN.LAN)(PORT = 1521))) (CONNECT_DATA = (SID = ARCDB01) (SERVER = DEDICATED)))
OK (20 msec)
[oracle@x06 bdump]$
Step 3 : Enable archiving and force logging

Because Data Guard depends on redo to maintain the standby, we must ensure that the primary database is in archivelog mode. To place the primary into archivelog mode, perform the following steps:

[oracle@arcdb01 ~]$ sqlplus /nolog

SQL*Plus: Release 10.2.0.3.0 - Production on Mon Mar 22 20:26:19 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.

SQL> connect sys as sysdba
Enter password:
Connected.
SQL> show parameter spfile

NAME      TYPE        VALUE
--------- ----------- -------------------------------------------------------------------
spfile    string      /u01/app/oracle/oracle/product/10.2.0/ARCDB01/dbs/spfileARCDB01.ora

SQL> select force_logging from v$database;

FOR
---
NO

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/flash_recovery_area/ARCDB01/archivelog/
Oldest online log sequence     1418
Next log sequence to archive   1420
Current log sequence           1420

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area  419430400 bytes
Fixed Size                  2021280 bytes
Variable Size             117442656 bytes
Database Buffers          293601280 bytes
Redo Buffers                6365184 bytes
Database mounted.
SQL> alter database archivelog;

Database altered.

SQL> alter database force logging;

Database altered.

SQL> alter database open;

Database altered.

SQL> select log_mode, force_logging from v$database;

LOG_MODE     FOR
------------ ---
ARCHIVELOG   YES
Step 4 : Create a password file

Due to the log transport security and authentication features, it is mandatory that every database in a Data Guard configuration use a password file. In addition, the password for the SYS user must be identical on every system for log transport services to function. If the primary DB does not currently have a password file, create one with the following steps:

[oracle@arcdb01 ~]$ orapwd file=/u00/oracle/product/10.2.0/db_1/dbs/orapwarcdb01 password=orawiss entries=5 force=y

Once the password file is created, you must set the following parameter in the spfile while the database is in nomount state :

alter system set remote_login_passwordfile=exclusive scope=spfile;
Transfer the password file to the standby DB:

[oracle@x06 dbf]$ cd /u01/app/oracle/oracle/product/10.2.0/X06/oracle/dbs/ [oracle@x06 dbs]$ ls hc_X06.dat initdw.ora init.ora lkSTANDBY lkX06 orapwarcdb01 orapwX06 spfileX06.ora [oracle@x06 dbs]$
Step 5 : Configure the primary initialization parameters

We must configure the parameters that control log transport services and log apply services so that the database will operate in either role with no parameter modification. While the database is mounted on a primary controlfile, the standby parameters are not read and do not take effect, so they will not affect the operation of the database while it is in the primary role. The following parameters are placed into a primary standby.ora pfile:

[oracle@arcdb01 ~]$ vi /home/oracle/ADVDB/standby.ora

ARCDB01.__db_cache_size=297795584
ARCDB01.__java_pool_size=4194304
ARCDB01.__large_pool_size=4194304
ARCDB01.__shared_pool_size=100663296
ARCDB01.__streams_pool_size=4194304
*.audit_file_dest='/u01/app/oracle/admin/ARCDB01/adump'
*.background_dump_dest='/u01/app/oracle/admin/ARCDB01/bdump'
*.compatible='10.2.0.1.0'
*.control_file_record_keep_time=8
*.control_files='/u01/app/oracle/oradata/ARCDB01/control01.ctl','/u01/app/oracle/oradata/ARCDB01/control02.ctl','/u01/app/oracle/oradata/ARCDB01/control03.ctl'
*.core_dump_dest='/u01/app/oracle/admin/ARCDB01/cdump'
*.db_block_checking='TRUE'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='ARCDB01'
*.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.db_unique_name='PRIMARY'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=ARCDB01XDB)'
*.FAL_Client='ARCDB01.ES.EGWN.LAN'
*.FAL_Server='X06.D15.LAN'
*.job_queue_processes=10
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
*.log_archive_dest_1='location=/u01/app/oracle/flash_recovery_area/ARCDB01/archivelog/'
*.log_archive_dest_2='Service=X06.D15.LAN LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='DEFER'
*.log_archive_format='%s_arc_ln%r_db%d_%t.arc'
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=200
*.remote_login_passwordfile='EXCLUSIVE'
*.SERVICE_NAMES='PRIMARY'
*.sessions=205
*.sga_max_size=419430400
*.sga_target=419430400
*.Standby_File_Management='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/app/oracle/admin/ARCDB01/udump'
*.utl_file_dir='/home/oracle/my_logminer'

Step 6 : Create a backup of the primary database

A physical standby can be created from either a hot or a cold backup, as long as all the archivelogs necessary to bring the database to a consistent state are available. You can simply use RMAN to back up the primary database:

RMAN> backup database plus archivelog;
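If you want the backup to carry everything the standby creation needs in one pass, the controlfile can also be backed up in standby format; a sketch, not the author's exact commands:

```sql
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY;
```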
Step 7 : Create the primary spfile and standby controlfile

First, restart the primary DB using the pfile created in step 5. Then, with the primary DB in either a mounted or open state, create a standby controlfile with the following syntax:

[oracle@arcdb01 ~]$ vi /home/oracle/ADVDB/standby.ora
[oracle@arcdb01 ~]$ sqlplus /nolog

SQL*Plus: Release 10.2.0.3.0 - Production on Sun Mar 28 14:53:18 2010
Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

SQL> connect sys as sysdba
Enter password:
Connected to an idle instance.
SQL> create spfile from pfile='/home/oracle/ADVDB/standby.ora';

File created.

SQL> startup
ORACLE instance started.

Total System Global Area  419430400 bytes
Fixed Size                  2073248 bytes
Variable Size             113249632 bytes
Database Buffers          297795584 bytes
Redo Buffers                6311936 bytes
Database mounted.

Database opened.
SQL> alter database create standby controlfile as '/home/oracle/ADVDB/standby_x06.ctl';

Database altered.

SQL> create pfile='/home/oracle/ADVDB/standby.ora' from spfile;

File created.
As you can see in the last command, we create a standby pfile from the spfile, just to be sure we have the correct pfile when it is transferred to the standby DB, modified, and then used as the pfile of the standby DB.

Step 7 (continued) : Modify the standby pfile

Remember, in the previous step we created the pfile; it should be transferred to the standby host using, for example, the scp or sftp command. Now we should modify the pfile for the standby DB. Below are the parameters that needed to be modified in our configuration:

ARCDB01.__java_pool_size=4194304
ARCDB01.__large_pool_size=4194304
ARCDB01.__shared_pool_size=100663296
ARCDB01.__streams_pool_size=4194304
*.audit_file_dest='/u01/app/oracle/admin/ARCDB01/adump'
*.background_dump_dest='/u01/app/oracle/admin/ARCDB01/bdump'
*.compatible='10.2.0.1.0'
*.control_file_record_keep_time=8
*.control_files='/home/oracle/dbf/standby_x06.ctl'
*.db_file_name_convert='/u01/app/oracle/oradata/ARCDB01','/home/oracle/dbf','/home/oracle/oradata','/home/oracle/dbf'
*.log_file_name_convert='/u01/app/oracle/oradata/ARCDB01','/home/oracle/dbf','/home/oracle/oradata','/home/oracle/dbf'
*.core_dump_dest='/u01/app/oracle/admin/ARCDB01/cdump'
*.db_block_checking='TRUE'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='ARCDB01'
*.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.db_unique_name='STANDBY'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=ARCDB01XDB)'
*.FAL_Server='ARCDB01.ES.EGWN.LAN'
*.FAL_client='X06.D15.LAN'
*.job_queue_processes=10
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIMARY,STANDBY)'
*.log_archive_dest_1='location=/u01/app/oracle/flash_recovery_area/ARCDB01/archivelog/'
*.log_archive_dest_2='Service=ARCDB01.ES.EGWN.LAN VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='%s_arc_ln%r_db%d_%t.arc'
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=200
*.remote_login_passwordfile='EXCLUSIVE'
*.SERVICE_NAMES='PRIMARY'
*.sessions=205
*.sga_max_size=419430400
*.sga_target=419430400
*.Standby_File_Management='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/app/oracle/admin/ARCDB01/udump'
*.utl_file_dir='/home/oracle/my_logminer'

Please note that other parameters, such as dump destinations, may need to be modified depending on your environment.

Step 8 : Transfer files to the standby host

Using an operating system utility, transfer the files of the primary DB to the standby DB, including:
- The controlfile generated previously (standby_x06.ctl)
- The standby.ora pfile modified in the previous step
- All primary DB datafiles; you can use the steps below.

From the primary DB, connect as a DBA:

SQL> SELECT file_name FROM dba_data_files;

FILE_NAME
------------------------------------------------------------
/u01/app/oracle/oradata/ARCDB01/users01.dbf
/u01/app/oracle/oradata/ARCDB01/sysaux01.dbf
/u01/app/oracle/oradata/ARCDB01/undotbs01.dbf
/u01/app/oracle/oradata/ARCDB01/system01.dbf
/u01/app/oracle/oradata/ARCDB01/rep_for_rman_devexen15.dbf
/u01/app/oracle/oradata/ARCDB01/REP_FOR_BACKRECK_TEST_01.dbf
/u01/app/oracle/oradata/ARCDB01/indx_01.dbf

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

From the standby DB:

Using username "oracle".
Authenticating with public key "imported-openssh-key"
Red Hat Enterprise Linux Server release 5.4 - Linux 2.6.18-164.11.1.el5xen

Four 2.83GHz Intel Pentium Xeon cpus with 4GB RAM -> x06.d15.lan
sftp> get *.dbf
Fetching /u01/app/oracle/oradata/ARCDB01/REP_FOR_BACKRECK_TEST_01.dbf to REP_FOR_BACKRECK_TEST_01.dbf
/u01/app/oracle/oradata/ARCDB01/REP_FOR_BACKRECK_TEST_01.dbf   100% 3315MB  24.0MB/s  02:18
Fetching /u01/app/oracle/oradata/ARCDB01/cp_indx_01.dbf to cp_indx_01.dbf
/u01/app/oracle/oradata/ARCDB01/cp_indx_01.dbf                 100%  600MB  26.1MB/s  00:23
Fetching /u01/app/oracle/oradata/ARCDB01/indx_01.dbf to indx_01.dbf
/u01/app/oracle/oradata/ARCDB01/indx_01.dbf                    100%  600MB  26.1MB/s  00:23
Fetching /u01/app/oracle/oradata/ARCDB01/rep_for_rman_devexen15.dbf to rep_for_rman_devexen15.dbf
/u01/app/oracle/oradata/ARCDB01/rep_for_rman_devexen15.dbf     100%   50MB  25.0MB/s  00:02
Fetching /u01/app/oracle/oradata/ARCDB01/sysaux01.dbf to sysaux01.dbf
/u01/app/oracle/oradata/ARCDB01/sysaux01.dbf                   100%  530MB  21.2MB/s  00:25
Fetching /u01/app/oracle/oradata/ARCDB01/system01.dbf to system01.dbf
/u01/app/oracle/oradata/ARCDB01/system01.dbf                   100%  520MB  27.4MB/s  00:19
Fetching /u01/app/oracle/oradata/ARCDB01/temp01.dbf to temp01.dbf
/u01/app/oracle/oradata/ARCDB01/temp01.dbf                     100%  164MB  23.4MB/s  00:07
Fetching /u01/app/oracle/oradata/ARCDB01/undotbs01.dbf to undotbs01.dbf
/u01/app/oracle/oradata/ARCDB01/undotbs01.dbf                  100%  655MB  26.2MB/s  00:25
Fetching /u01/app/oracle/oradata/ARCDB01/users01.dbf to users01.dbf
/u01/app/oracle/oradata/ARCDB01/users01.dbf                    100% 5128KB   5.0MB/s  00:00
sftp>
Step 9 : Create an spfile for the standby instance From the standby DB :

SQL> shutdown immediate;
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
SQL> create spfile from pfile='/home/oracle/dbf/standby.ora';

File created.

SQL> startup nomount;
ORACLE instance started.

Total System Global Area  419430400 bytes
Fixed Size                  2073248 bytes
Variable Size             134221152 bytes
Database Buffers          276824064 bytes
Redo Buffers                6311936 bytes

SQL> alter database mount standby database;

Database altered.
Step 10 : Begin shipping redo to the standby database

If you remember, earlier we deferred log_archive_dest_2 on the primary until we had the standby mounted. Now it is time to enable that destination and begin shipping redo to the standby.

On the primary, enter the following command :

alter system set log_archive_dest_state_2=enable scope=both;
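After enabling the destination, it is worth confirming immediately that it reports VALID; a sketch of the check used later in this article:

```sql
-- On the primary: dest_id 2 should show STATUS = VALID once redo
-- transport succeeds; an ERROR status means transport is failing.
SELECT dest_id, status, error
FROM v$archive_dest_status
WHERE dest_id = 2;
```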
Sometimes you may face the problem of archive gaps, whereby a range of archived redo log files generated by the primary is missing on the standby. Archive gaps are created whenever the next archived redo log file generated by the primary database is not applied to the standby database. It is usually recommended to increase the LOG_ARCHIVE_MAX_PROCESSES parameter, which controls the number of archiver processes the instance uses, in order to help resolve archive gaps. I recommend increasing the archive max processes parameter on both the primary and standby databases:

alter system set log_archive_max_processes=4 scope=both;
From the primary DB, check the sequence number and the archiving mode by executing the following command:

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/flash_recovery_area/ARCDB01/archivelog/
Oldest online log sequence     1592
Next log sequence to archive   1594
Current log sequence           1594
Next, perform a log switch on the primary and verify that the transmission of the log was successful:

SQL> alter system switch logfile;

System altered.

SQL> SELECT * FROM v$archive_dest WHERE dest_id = 2;

DEST_ID            2
DEST_NAME          LOG_ARCHIVE_DEST_2
STATUS             VALID
BINDING            OPTIONAL
NAME_SPACE         SYSTEM
TARGET             STANDBY
ARCHIVER           LGWR
SCHEDULE           ACTIVE
DESTINATION        X06.D15.LAN
LOG_SEQUENCE       1599
REOPEN_SECS        300
DELAY_MINS         0
MAX_CONNECTIONS    1
NET_TIMEOUT        180
PROCESS            LGWR
REGISTER           YES
FAIL_DATE
FAIL_SEQUENCE      0
FAIL_BLOCK         0
FAILURE_COUNT      0
MAX_FAILURE        0
ERROR
ALTERNATE          NONE
DEPENDENCY         NONE
REMOTE_TEMPLATE    NONE
QUOTA_SIZE         0
QUOTA_USED         0
MOUNTID            0
TRANSMIT_MODE      ASYNCHRONOUS
ASYNC_BLOCKS       61440
AFFIRM             NO
TYPE               PUBLIC
VALID_NOW          YES
VALID_TYPE         ONLINE_LOGFILE
VALID_ROLE         PRIMARY_ROLE
DB_UNIQUE_NAME     STANDBY
VERIFY             NO
If the transmission was successful, the status of the destination should be VALID. If the status is INVALID, investigate the error listed in the ERROR column to correct any issues. Now, let's run a real test on our Data Guard system:

From the primary DB:

Connected to Oracle Database 10g Enterprise Edition Release 10.2.0.3.0
Connected as wissem

SQL> create table test (teste number);

Table created

SQL> insert into test values (1);

1 row inserted

SQL> commit;

Commit complete

From the standby DB:

SQL> select * from dba_users d where d.username = 'WISSEM';

USERNAME   USER_ID  PASSWORD          ACCOUNT_STATUS  DEFAULT_TABLESPACE  TEMPORARY_TABLESPACE  CREATED
---------  -------  ----------------  --------------  ------------------  --------------------  ---------
WISSEM     61       4531384AFBFF9B98  OPEN            USERS               TEMP                  21-AUG-08

(profile DEFAULT, initial resource consumer group DEFAULT_CONSUMER_GROUP)

SQL> select * from wissem.test;
select * from wissem.test
                      *
ERROR at line 1:
ORA-00942: table or view does not exist

Back on the primary DB, perform a log switch:

SQL> alter system switch logfile;

System altered.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/flash_recovery_area/ARCDB01/archivelog/
Oldest online log sequence     1598
Next log sequence to archive   1600
Current log sequence           1600

Check the archive log list from the standby DB:

SQL> alter database recover managed standby database disconnect from session;

Database altered.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/flash_recovery_area/ARCDB01/archivelog/
Oldest online log sequence     1598
Next log sequence to archive   0
Current log sequence           1600

SQL> alter database recover managed standby database cancel;

Database altered.

SQL> alter database open read only;

Database altered.

SQL> select * from wissem.test;

     TESTE
----------
         1
Now, we can see the results of our new table from the standby DB.

Happy dataguard!

Step-by-step instructions on how to create a Physical Standby Database on Windows and UNIX servers, and maintenance tips for databases in a Data Guard environment.

Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/standby databases using both Grid Control and the SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a physical standby database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with other DBAs.

In this example the database version is 10.2.0.3. The primary database and standby database are located on different machines at different sites. The primary database is called PRIM and the standby database is called STAN. I use a Flash Recovery Area and OMF.

I. Before you get started:

1. Make sure the operating system and platform architecture on the primary and standby systems are the same.
2. Install the Oracle database software without the starter database on the standby server, and patch it if necessary. Make sure the same Oracle software release is used on the primary and standby databases, and that the Oracle home paths are identical.
3. Test the standby database creation in a test environment first before working on the production database.

II. On the Primary Database Side:

1. Enable forced logging on your primary database:

SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.

1) To check if a password file already exists, run the following command:

SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:

- On Windows:
$cd %ORACLE_HOME%\database

$ orapwd file=pwdPRIM.ora password=xxxxxxxx force=y
(Note: replace xxxxxxxx with the password for the SYS user.)
- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIM.ora password=xxxxxxxx force=y
(Note: replace xxxxxxxx with your actual password for the SYS user.)

3. Configure a standby redo log.
1) The size of the standby redo log files should match the size of the current primary database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

     BYTES
----------
  52428800
  52428800
  52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;
3) Create standby redo log groups. My primary database had 3 log file groups originally, so I created 3 standby redo log groups using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;
4) To verify that the standby redo log groups were created, run:
SQL> select * from v$standby_log;

4. Enable archiving on the primary. If your primary database is not already in archivelog mode, enable it:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list;

5. Set the primary database initialization parameters. Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) to add the new primary role parameters.

1) Create a pfile from the spfile for the primary database:
- On Windows:
SQL> create pfile='<ORACLE_HOME>\database\pfilePRIM.ora' from spfile;
- On UNIX:
SQL> create pfile='<ORACLE_HOME>/dbs/pfilePRIM.ora' from spfile;
(Note: replace <ORACLE_HOME> with your actual Oracle home path.)

2) Edit pfilePRIM.ora to add the new primary and standby role parameters. (The file paths here are from a Windows system; on a UNIX system, specify the paths accordingly.)

db_name=PRIM
db_unique_name=PRIM
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIM,STAN)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIM\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIM'
LOG_ARCHIVE_DEST_2='SERVICE=STAN LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STAN'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT='%t_%s_%r.arc'
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STAN
FAL_CLIENT=PRIM
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the standby DB datafiles followed by the primary location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STAN\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIM\DATAFILE'
# Specify the location of the standby DB online redo log files followed by the primary location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STAN\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIM\ONLINELOG','F:\Oracle\flash_recovery_area\STAN\ONLINELOG','F:\Oracle\flash_recovery_area\PRIM\ONLINELOG'

6. Create an spfile from the pfile, and restart the primary database using the new spfile. Data Guard must use an SPFILE.
- On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfilePRIM.ora';
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfilePRIM.ora';

-- Restart the primary database using the newly created spfile:
SQL> shutdown immediate;
SQL> startup;
- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfilePRIM.ora';
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfilePRIM.ora';
-- Restart the primary database using the newly created spfile:
SQL> shutdown immediate;
SQL> startup;
(Note: replace <ORACLE_HOME> with your actual Oracle home path.)

III. On the Standby Database Side:

1. Create a copy of the primary database data files on the standby server.
On the primary DB:
SQL> shutdown immediate;
On the standby server (while the primary database is shut down):
1) Create a directory for the data files, for example, on Windows, E:\oracle\product\10.2.0\oradata\STAN\DATAFILE. On UNIX, create the directory accordingly.
2) Copy the data files and temp files over.
3) Create directories (multiplexing) for the online logs, for example, on Windows, E:\oracle\product\10.2.0\oradata\STAN\ONLINELOG and F:\Oracle\flash_recovery_area\STAN\ONLINELOG. On UNIX, create the directories accordingly.
4) Copy the online logs over.

2. Create a control file for the standby database.
On the primary DB, create a control file for the standby to use:
SQL> startup mount;
SQL> alter database create standby controlfile as 'STAN.ctl';
SQL> alter database open;

3. Copy the primary DB pfile to the standby server and rename/edit it.
1) Copy pfilePRIM.ora from the primary server to the standby server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.

2) Rename it to pfileSTAN.ora and modify it as follows. (The file paths here are from a Windows system; on a UNIX system, specify the paths accordingly.)

*.audit_file_dest='E:\oracle\product\10.2.0\admin\STAN\adump'
*.background_dump_dest='E:\oracle\product\10.2.0\admin\STAN\bdump'
*.core_dump_dest='E:\oracle\product\10.2.0\admin\STAN\cdump'
*.user_dump_dest='E:\oracle\product\10.2.0\admin\STAN\udump'
*.compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STAN\CONTROLFILE\STAN.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STAN\CONTROLFILE\STAN.CTL'
db_name='PRIM'
db_unique_name=STAN
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIM,STAN)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STAN\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STAN'
LOG_ARCHIVE_DEST_2='SERVICE=PRIM LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIM'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT='%t_%s_%r.arc'
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIM
FAL_CLIENT=STAN
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the primary DB datafiles followed by the standby location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIM\DATAFILE','E:\oracle\product\10.2.0\oradata\STAN\DATAFILE'
# Specify the location of the primary DB online redo log files followed by the standby location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIM\ONLINELOG','E:\oracle\product\10.2.0\oradata\STAN\ONLINELOG','F:\Oracle\flash_recovery_area\PRIM\ONLINELOG','F:\Oracle\flash_recovery_area\STAN\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO

(Note: not all parameter entries are listed here.)

4. On the standby server, create all required directories for the dump and archived log destinations: create the adump, bdump, cdump and udump directories and the archived log destinations for the standby database.

5. Copy the standby control file 'STAN.ctl' from the primary server to the standby control file destinations.
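The OS-level copy work in steps III.1 and III.5 can be sketched as a small UNIX script. This is only a hedged illustration: the /tmp/dg_demo paths and the stand-in file names are invented for the example, not taken from this article; substitute your real datafile, online log and control file locations.

```shell
# Hypothetical sketch of steps III.1 and III.5: recreate the standby
# directory layout and copy the primary's files into it while the
# primary database is shut down. All paths below are invented for
# the demo; substitute your real locations.
SRC=/tmp/dg_demo/oradata/PRIM   # stand-in for the primary's file locations
DST=/tmp/dg_demo/oradata/STAN   # stand-in for the standby's file locations

# Demo only: fabricate a primary layout so the script runs as-is.
mkdir -p "$SRC/DATAFILE" "$SRC/ONLINELOG"
touch "$SRC/DATAFILE/system01.dbf" "$SRC/ONLINELOG/redo01.log"

# 1) Create the matching directories on the standby side.
mkdir -p "$DST/DATAFILE" "$DST/ONLINELOG"

# 2) Copy the datafiles (and temp files) and the online redo logs over.
cp "$SRC/DATAFILE/"*.dbf "$DST/DATAFILE/"
cp "$SRC/ONLINELOG/"*.log "$DST/ONLINELOG/"

ls "$DST/DATAFILE" "$DST/ONLINELOG"
```

On a real system you would of course point SRC and DST at the actual oradata and flash recovery area directories, and copy between hosts with scp or similar.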

6. Copy the primary password file to the standby server and rename it to pwdSTAN.ora: on Windows copy it to the \database folder, and on UNIX copy it to the /dbs directory, then rename it.

7. On Windows, create a Windows service (optional):
$ oradim -NEW -SID STAN -STARTMODE manual

8. Configure listeners for the primary and standby databases.
1) On the primary system: use Oracle Net Manager to configure a listener for PRIM and STAN, then restart the listener:
$ lsnrctl stop
$ lsnrctl start
2) On the standby server: use Oracle Net Manager to configure a listener for PRIM and STAN, then restart the listener:
$ lsnrctl stop
$ lsnrctl start

9. Create Oracle Net service names.
1) On the primary system: use Oracle Net Manager to create network service names for PRIM and STAN, then check both services with tnsping:
$ tnsping PRIM
$ tnsping STAN
2) On the standby system: use Oracle Net Manager to create network service names for PRIM and STAN, then check both services with tnsping:
$ tnsping PRIM
$ tnsping STAN

10. On the standby server, set up the environment variables to point to the standby database: set ORACLE_HOME and ORACLE_SID.

11. Start the standby database in nomount mode and generate an spfile.
- On Windows:
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfileSTAN.ora';
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfileSTAN.ora';
-- Restart the standby database using the newly created spfile:
SQL> shutdown immediate;
SQL> startup mount;
- On UNIX:
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfileSTAN.ora';
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfileSTAN.ora';

-- Restart the standby database using the newly created spfile:
SQL> shutdown immediate;
SQL> startup mount;
(Note: replace <ORACLE_HOME> with your actual Oracle home path.)

12. Start Redo Apply.
On the standby database, start Redo Apply:
SQL> alter database recover managed standby database disconnect from session;
If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify that the standby database is performing properly.
1) On the standby, run a query:
SQL> select sequence#, first_time, next_time from v$archived_log;
2) On the primary, force a log file switch:
SQL> alter system switch logfile;
3) On the standby, verify that the archived redo log files were applied:
SQL> select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current standby redo log file to be archived, enable real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create additional standby databases, repeat this procedure.

IV. Maintenance:

1. Check the alert log files of the primary and standby databases frequently to monitor database operations in a Data Guard environment.

2. Clean up the archived logs on the primary and standby servers. I scheduled a weekly whole hot database backup of my primary database that also backs up and deletes the archived logs on the primary. For the standby database, I run RMAN once per week to back up and delete the archived logs:
$ rman target /@STAN
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the standby server, I run the following once a month:
RMAN> delete backupset;

3. Password management. The password for the SYS user must be identical on every system for redo data transmission to succeed. If you change the SYS password on the primary database, you must update the password file for the standby database accordingly; otherwise the logs won't be shipped to the standby server.
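For reference, the network service names created with Oracle Net Manager in step III.9 correspond to tnsnames.ora entries roughly like the following. This is a hedged sketch: the host names prim-host and stan-host and port 1521 are assumptions made for illustration, not values from this article; use your actual server addresses and listener ports.

```text
# Hypothetical tnsnames.ora entries, present on both servers
PRIM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = prim-host)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PRIM)
    )
  )

STAN =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = stan-host)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = STAN)
    )
  )
```

With these in place, $ tnsping PRIM and $ tnsping STAN from either server should resolve and reach the corresponding listener.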
