
Contents

1 RMAN
1.1 Categories of Failures
1.2 RMAN Architecture
1.3 Backups, Backup Sets, and Backup Pieces
2 RMAN Commands
2.1 Configuring Persistent Settings for RMAN
2.2 Image Copies
3 RMAN Backup Types
3.1 Full Backup
3.2 Incremental Backup
3.3 Block Change Tracking
4 Scenario for Complete and Incomplete Recovery
4.1 Restore and Recovery of a Whole Database
4.2 Restore and Complete Recovery of Individual Tablespaces
4.3 Restore and Complete Recovery of Datafile
4.4 Incomplete Recovery Scenario
4.5 RMAN SCN-Based
4.6 Restore Point
4.7 RMAN Sequence-Based
4.8 RMAN Time-Based
5 Control File and SPFILE Scenario
5.1 All Control Files Lost
5.2 Loss of SPFILE
5.3 Loss of Control File and SPFILE
6 Recovery Catalog
6.1 Steps for Configuring the Recovery Catalog
7 Block Corruption
7.1 Types of Block Corruption
7.2 Manual Corruption of a Block
7.3 DBVERIFY Utility
7.4 ANALYZE Command
7.5 EXP Command to Detect Corruption
8 Cloning
8.1 User-Managed Cloning
8.2 RMAN Cloning
9 Automatic Memory Management
10 Partitioning
10.1 Introduction to Partitioning
10.2 Advantages of Partitioning
10.3 Partitioning Methods
10.4 IOT - Index-Organized Table
10.5 Advantages of IOT
10.6 Creating Index-Organized Tables
11 Row Chaining and Migration
11.1 Logical and Physical Space Management
11.2 How to Fix Row Chaining or Migration
12 Automatic Workload Repository
12.1 Automatic Database Diagnostic Monitor (ADDM)
13 Automatic Storage Management
13.1 Benefits of ASM
13.2 Limitations
13.3 ASM Architecture
14 Data Guard
14.1 Steps
14.2 Role Transfer (Switchover)
15 Upgrade Oracle Database 10g R2 to 11g R2

1 RMAN

The administrator’s duties are to:

• Protect the database from failure wherever possible

• Increase the Mean-Time-Between-Failures (MTBF)

• Decrease the Mean-Time-To-Recover (MTTR)

• Minimize the loss of data

1.1 Categories of Failures

Failures can generally be divided into the following categories:

Logical Failure

➢ Statement failure
A single database operation (select, insert, update, or delete) fails.

➢ User process failure


A single database session fails.

➢ Network failure
Connectivity to the database is lost.

➢ User error
A user successfully completes an operation, but the operation (such as dropping a table or entering bad data) is incorrect.
➢ Instance failure
The database instance shuts down unexpectedly.

Media failure

➢ One or more of the database files are lost (that is, the files have been deleted or the disk has failed).

Backups can be performed by using:

➢ Recovery Manager
➢ User-managed

Backup strategy may include:

➢ The entire database (whole)


➢ A portion of the database (partial)

Backup type may indicate inclusion of:

➢ All information from all data files (full)


➢ Only information that has changed since some previous backup (incremental)

Backup modes:

➢ Offline (consistent, cold)


➢ Online (inconsistent, hot)

1.2 RMAN Architecture

• Recovery Manager (RMAN) is a utility that can manage all of your Oracle backup and
recovery activities. DBAs are often wary of using RMAN because of its perceived complexity
and its control over performing critical tasks. The traditional backup and recovery methods
are tried-and-true. Thus, when your livelihood depends on your ability to back up and
recover the database, why implement a technology like RMAN? The reason is that RMAN
comes with several benefits:

o Incremental backups that only copy data blocks that have changed since the last
backup.
o Tablespaces are not put in backup mode, thus there is no extra redo log generation
during online backups.
o Detection of corrupt blocks during backups.
o Parallelization of I/O operations.
o Automatic logging of all backup and recovery operations.
o Built-in reporting and listing commands.

➢ RMAN's architecture is a combination of an executable program (the rman utility) and background processes that interact with one or more databases and with I/O devices. There are several key architectural components to be aware of:

• RMAN executable
• Server process
• Channels
• Target database
• Recovery catalog database (optional)
• Media management layer (optional)
• Backups, backup sets, and backup pieces

The following sections describe each of these components.

RMAN Executable

➢ The RMAN executable, usually named rman, is the program that manages all backup and
recovery operations. You interact with the RMAN executable to specify backup and recovery
operations you want to perform.
➢ The executable then interacts with the target database, starts the necessary server processes,
and performs the operations that you requested.
➢ Finally, the RMAN executable records those operations in the target database's control file and
the recovery catalog database, if you have one.

Server Processes

➢ RMAN server processes are background processes, started on the server, used to communicate
between RMAN and the databases. They can also communicate between RMAN and any disk,
tape, or other I/O devices. RMAN server processes do all the real work for a backup or restore
operation, and a typical backup or restore operation results in several server processes being
started.

➢ Server processes are started under the following conditions:

• When you start RMAN and connect to your target database


• When you connect to your catalog -- if you are using a recovery catalog database
• When you allocate and open an I/O channel during a backup or recovery operation

Channels

[Image: Channel Allocation]

➢ A channel is an RMAN server process started when there is a need to communicate with an I/O
device, such as a disk or a tape. A channel is what reads and writes RMAN backup files. Any
time you issue an RMAN allocate channel command, a server process is started on the target
database server. It is through the allocation of channels that you govern I/O characteristics
such as:

• Type of I/O device being read or written to, either a disk or an sbt_tape
• Number of processes simultaneously accessing an I/O device
• Maximum size of files created on I/O devices
• Maximum rate at which database files are read
• Maximum number of files open at a time
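For example, several of these characteristics can be set when the channel is allocated; the channel name, piece size, and rate values below are illustrative only:

RMAN> run{
2> allocate channel d1 type disk
3> maxpiecesize 2g rate 10m;
4> backup database;}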

Target Database

➢ The target database is the database on which RMAN performs backup, restore, and recovery

operations. This is the database that owns the datafiles, control files, and archived redo files
that are backed up, restored, or recovered. Note that RMAN does not back up the online redo
logs of the target database.

Recovery Catalog Database

➢ The recovery catalog database is an optional repository used by RMAN to record information
concerning backup and recovery activities performed on the target. The recovery catalog
consists of three components:

➢ A separate database (distinct from the target database), referred to as the catalog database
➢ A schema within the catalog database
➢ Tables (and supporting objects) within the schema that contain data pertaining to RMAN backup and recovery operations performed on the target

➢ The catalog is typically a database that you build on a different host from your target database. The reason for this is that you don't want a failure on the target host to affect your ability to use the catalog. If both the catalog and target are on the same box, a single media failure can put you in a situation from which you can't recover your target database.

➢ Inside the catalog database is a special schema containing the tables that store information
about RMAN backup and recovery activities. This includes information such as:

• Details about the physical structure of the target database


• A log of backup operations performed on the target database's datafiles, control files, and
archived redo log files
• Stored scripts containing frequently used sequences of RMAN commands

➢ Why is the catalog optional? When RMAN performs any backup operation, it writes
information about that task to the target database's control files. Therefore, RMAN does not
need a catalog to operate. If you choose to implement a recovery catalog database, then
RMAN will store additional information about what was backed up -- often called metadata --
in the catalog.

➢ The primary reason for implementing a catalog is that it enables the greatest flexibility in
backup and recovery scenarios. Using a catalog gives you access to a longer history of backups
and allows you to manage all of your backup and recovery operations from one repository.
Utilizing a catalog makes available to you all the features of RMAN. For reasons such as these,
we recommend using a catalog database.
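For example, connecting to both the target database and a recovery catalog from the command line looks like this (the catalog service name catdb and the rman/rman schema owner here are illustrative, not taken from an earlier example):

[oracle@localhost ~]$ rman target sys/oracle@riaz catalog rman/rman@catdb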

1.3 Backups, Backup Sets, and Backup Pieces

➢ When you issue an RMAN backup command, RMAN creates backup sets, which are logical
groupings of physical files. The physical files that RMAN creates on your backup media are
called backup pieces. When working with RMAN, you need to understand that the following
terms have specific meanings:

➢ Backup: A backup of all or part of your database. This results from issuing an RMAN backup command. A backup consists of one or more backup sets.

➢ Backup set: A logical grouping of backup files -- the backup pieces -- that are created when you issue an RMAN backup command. A backup set is RMAN's name for a collection of files associated with a backup. A backup set is composed of one or more backup pieces.

➢ Backup piece: A physical binary file created by RMAN during a backup. Backup pieces are written to your backup medium, whether to disk or tape. They contain blocks from the target database's datafiles, archived redo log files, and control files.

➢ When RMAN constructs a backup piece from datafiles, there are several rules that it follows:

• A datafile cannot span backup sets.

• A datafile can span backup pieces as long as it stays within one backup set.
• Datafiles and control files can coexist in the same backup sets.
• Archived redo log files are never in the same backup set as datafiles or control files.
• RMAN is the only tool that can operate on backup pieces. If you need to restore a file from an RMAN backup, you must use RMAN to do it; there is no way to manually reconstruct database files from the backup pieces.

Starting RMAN

You must have access to the SYSDBA privilege before you can connect to the target database using
RMAN.

➢ Starting RMAN locally

Example:
[oracle@localhost admin]$ export ORACLE_SID=riaz
[oracle@localhost admin]$ rman target /
Recovery Manager: Release 10.2.0.1.0 - Production on Thu Mar 10 10:45:52 2011
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: RIAZ (DBID=194143921)
RMAN>

➢ Starting RMAN remotely

Example:
[oracle@localhost admin]$ lsnrctl start
[oracle@localhost admin]$ tnsping riaz

Note:

To connect from another server, use the Oracle Net service name for the target database:
In Listener.ora file

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
(SID_NAME = riaz)
)
)

LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.24.0.181)(PORT = 1521))
)
)

In Tnsnames.ora file

riaz =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.24.0.181)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = riaz)
)

)

Need a password file


[oracle@localhost ~]$ orapwd file=orapwriaz password=oracle entries=4 force=y

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE

[oracle@localhost ~]$ export ORACLE_SID=riaz


[oracle@localhost ~]$ rman target sys/oracle@riaz
Recovery Manager: Release 10.2.0.1.0 - Production on Thu Mar 10 10:55:39 2011
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: RIAZ (DBID=194143921)
RMAN>

2 RMAN COMMANDS

➢ Interactive Mode

RMAN> backup database;

➢ Batch Mode

➢ In batch mode, we enter the RMAN commands into a file, save it, and then run that file from the command line.

Example:

[oracle@station14 seth]$ vi fullbkp.rcv


backup database;

[oracle@station14 seth]$ rman target / @fullbkp.rcv


You can store the output in one file.
[oracle@station14 seth]$ rman target / @fullbkp.rcv log f1.log
OR
[oracle@localhost riaz]$ rman target sys/oracle@riaz @full.rcv log new.log append

➢ Stand-Alone

Example:

RMAN> backup
2> format '/u01/app/oracle/oradata/riaz/bkp/df_%d_%s_%p.bkp'
3> tablespace users;

OR

RMAN> backup
2> format '/u01/app/oracle/oradata/riaz/bkp/%U'
3> tablespace users;

OR
RMAN> backup datafile 1;

OR

RMAN> backup
2> format '/u01/app/oracle/oradata/riaz/bkp/%U'
3> database;

➢ Job-Commands

Example:

RMAN> run{
2> allocate channel d1 type disk;
3> backup database;}

2.1 Configuring Persistent Settings for RMAN

➢ Use the RMAN command show to view the current value of one or all of RMAN’s configuration
settings. The show command will let you view the value of a specified RMAN setting. For example,
the following show command displays whether the autobackup of the control file has been
enabled:

RMAN> show controlfile autobackup;

RMAN configuration parameters are:

CONFIGURE CONTROLFILE AUTOBACKUP OFF;

➢ The show all command displays both settings that you have configured and any default settings.
Any default settings will be displayed with a # default at the end of the line. For example, the
following is the output from executing the show all command:

RMAN> show all;


RMAN configuration parameters are:

CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default


CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUPTYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'ZLIB'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO
'C:\ORACLE\PRODUCT\11.1\DB_1\DATABASE\SNCFORCL.ORA'; # default

➢ Configure automatic channels

RMAN> configure DEVICE TYPE DISK PARALLELISM 3;


OR
CONFIGURE DEVICE TYPE DISK PARALLELISM 3 BACKUP TYPE TO BACKUPSET;

RMAN> configure DEVICE TYPE DISK clear;

➢ Specify the backup retention policy

• Retention policy governs how long database backups are retained, and determines how far
into the past you can recover your database.

• Retention policy can be set in terms of a recovery window

Example:
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;

(This command ensures that RMAN retains all backups needed to recover the
database to any point in time in the last 7 days)
The retention policy can also be set as a redundancy value (how many backups of each file must be retained).

RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 3;

(This command ensures that RMAN retains three backups of each datafile.)

RMAN> CONFIGURE RETENTION POLICY CLEAR;
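Backups that no longer satisfy the retention policy are marked obsolete; they can then be listed and removed:

RMAN> report obsolete;
RMAN> delete obsolete;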

➢ Specify the number of backup copies to be created

RMAN> CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 2;

RMAN> CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK clear;

RMAN> CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 2;

RMAN> CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK clear;

➢ Set the default backup type to BACKUPSET or COPY

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET;

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO COPY;

RMAN> CONFIGURE DEVICE TYPE DISK clear;

➢ Limit the size of backup sets

RMAN> CONFIGURE MAXSETSIZE TO 10m;

RMAN> CONFIGURE MAXSETSIZE clear;

Caution:
➢ If a datafile being backed up is bigger than MAXSETSIZE then your backup will fail. Always
ensure that MAXSETSIZE is as large as your largest datafile.
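You can check the size of your largest datafile before setting MAXSETSIZE with a query such as:

SQL> select max(bytes) from v$datafile;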

Exempt a tablespace from backup


RMAN> configure exclude for tablespace uu;
RMAN> show exclude;

RMAN configuration parameters are:
CONFIGURE EXCLUDE FOR TABLESPACE 'UU';
RMAN> backup database noexclude;

Note: The above command will also take the backup of excluded tablespace.

RMAN> configure exclude for tablespace uu clear;

Configure automatic backup of control files

➢ RMAN can be configured to automatically back up the control file and server parameter file
whenever the database structure metadata in the control file changes and whenever a backup
record is added.

➢ The autobackup enables RMAN to recover the database even if the current control file,
catalog, and server parameter file are lost. Because the filename for the autobackup uses a
well-known format, RMAN can search for it without access to a repository, and then restore
the server parameter file.

➢ After you have started the instance with the restored server parameter file, RMAN can restore
the control file from an autobackup.

➢ After you mount the control file, the RMAN repository is available and RMAN can restore the
datafiles and find the archived redo log.
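As a sketch, a restore that relies on autobackups when both the SPFILE and control file are lost might look as follows (the DBID is the one shown in the earlier connection examples; adjust it for your own database):

RMAN> set dbid 194143921;
RMAN> startup force nomount;
RMAN> restore spfile from autobackup;
RMAN> startup force nomount;
RMAN> restore controlfile from autobackup;
RMAN> alter database mount;
RMAN> restore database;
RMAN> recover database;
RMAN> alter database open resetlogs;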

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP on;

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP clear;

➢ Parallelization Of Backup Sets

RMAN> run{
2> allocate channel d1 type disk;
3> allocate channel d2 type disk;
4> backup
5> (datafile 1,2 channel d1)
6> (datafile 3,4 channel d2);}

➢ Compressed Backups

RMAN> backup as
2> compressed backupset database;

RMAN> backup as compressed backupset datafile 1;

RMAN> configure device type disk backup type to compressed backupset;

RMAN> configure device type disk parallelism 2 backup type to compressed backupset;

RMAN> configure device type disk clear;

2.2 Image Copies

➢ Image Copies are duplicates of data or archived log files (similar to simply copying the files by
using OS commands). Backup sets are collections of one or more binary files that contain one
or more data or archived log files. With backup sets, empty data blocks are not stored, thereby
causing backup sets to use less space on the disk or tape. Backup sets can be compressed to
further reduce the space requirements of the backup. Image copies must be backed up to the

disk. Backup sets can be sent either to the disk or directly to the tape.

Example:
RMAN> backup as copy database ;

RMAN> backup as copy


2> datafile 1
3> format '/u01/app/oracle/admin/seth/rbkp/%U';

OR

RMAN> run{
2> allocate channel d1 type disk;
3> copy
4> datafile 1 to '/u01/app/oracle/admin/seth/rbkp/%U';
5> }

➢ Tags for backups

RMAN> backup tag 'month_full_bkp' datafile 1,2,3;

RMAN> run
2> {allocate channel c1 type disk;
3> backup tag 'full_bkp' datafile 1,2;}

➢ Backup Of Archived Log Files

RMAN> backup archivelog all;


RMAN> backup

2> format '/u01/app/oracle/admin/seth/rbkp/%U'
3> archivelog all;

OR

RMAN> run {
2> allocate channel c1 type disk;
3> backup archivelog all;}
RMAN> backup archivelog from sequence=80;

➢ Whole Database backup

RMAN> backup database plus archivelog;

➢ Backup validation with rman :

RMAN> backup validate check logical database archivelog all;

RMAN> backup validate database;

SQL> select * from v$database_block_corruption;

➢ SWITCH Command

The SWITCH command must always be written inside a RUN block. It is equivalent to the ALTER DATABASE RENAME FILE statement.

RMAN> run {
2> set newname for datafile 4 to 'new location';
3> restore datafile 4;
4> switch datafile all;
5> recover datafile 4;}

➢ LIST Command
RMAN>list backupset;

RMAN>list backup of datafile 1;


OR
RMAN> LIST BACKUP OF DATAFILE '/u01/app/oracle/oradata/rash/system01.dbf';

RMAN>list copy of tablespace "SYSTEM";

RMAN>list copy of database archivelog from time ='SYSDATE-1';

➢ List backups of all files in the database.

RMAN> LIST BACKUP OF DATABASE;

List backup sets and copies containing archive logs for a specified range

RMAN> list copy of database archivelog
2> from time ='sysdate-2';

➢ REPORT Command
RMAN>report schema;

This command tells you which datafiles require which type of backup.
(If a tablespace is in NOLOGGING mode, it will not be displayed by this command.)

RMAN>report need backup;

Which backups can be deleted (that is, are obsolete)

RMAN>report obsolete;

RMAN> report unrecoverable;

Note: When a datafile has been changed by an unrecoverable operation, such as a direct load, you must perform a full or incremental backup of the affected datafiles.

➢ Maxcorrupt

RMAN> run {
2> set maxcorrupt for datafile 4 to 2;
3> restore datafile 4;
4> recover datafile 4;}

3 RMAN Backup Types

3.1 Full Backup

➢ A full backup of a database will contain complete backups of all the datafiles.

3.2 Incremental Backup

➢ An incremental backup contains only the changed data blocks in the datafiles. Obviously, then, incremental backups can potentially take a much shorter time than full backups. You can make incremental backups only with the help of RMAN; you can't make incremental backups using user-managed backup techniques.

3.2.1 Level 1 Incremental Backup

RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;

•To perform a cumulative incremental backup, use the following command:

RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;

3.2.2 Differential Backup

A differential incremental backup backs up all blocks changed since the most recent level 0 or level 1 backup; this is the default incremental backup type.

SQL> select FILE#, INCREMENTAL_LEVEL, INCREMENTAL_CHANGE#, DATAFILE_BLOCKS, BLOCKS
2> from v$backup_datafile;

• To perform an incremental backup at level 0, use the following command:

RMAN> backup incremental level 0 database;

• To perform a differential incremental backup, use the following command (level 1 without the CUMULATIVE keyword is differential by default):

RMAN> backup incremental level 1 database;
3.2.3 Cumulative Backup

A cumulative incremental backup backs up all blocks changed since the most recent level 0 backup.

Differential level 1 backup of tablespace udata:


RMAN> backup incremental level 1 tablespace udata;

SQL> SELECT FILE#, INCREMENTAL_LEVEL, INCREMENTAL_CHANGE#, DATAFILE_BLOCKS, BLOCKS
2> FROM v$backup_datafile
3> WHERE file#=5;

RMAN> backup incremental level 1 cumulative tablespace udata;


RMAN> backup incremental level 0 tablespace udata;
RMAN> backup incremental level 1 tablespace udata;
RMAN> backup incremental level 1 cumulative tablespace udata;
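The level 0 and level 1 commands above can be combined into a simple weekly strategy, for example:

Sunday (base backup):
RMAN> backup incremental level 0 database;

Monday through Saturday (differential level 1):
RMAN> backup incremental level 1 database;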

3.3 Block Change Tracking

➢ The entire data file is read during each incremental backup, even if just a very small part of that file has changed since the last incremental backup.

➢ If change tracking is enabled, RMAN uses the change tracking file to identify changed blocks for incremental backups, thus avoiding the need to scan every block in the datafile.

➢ But for a level 0 incremental backup, RMAN still has to read the entire datafile.

➢ The CTWR (change tracking writer) process writes the changes to the change tracking file.

Example:

SQL> alter database enable block change tracking using file
'/u01/app/oracle/oradata/rash/track/trc.f';

Database altered.

SQL> select * from v$block_change_tracking;

NOTE: If the OMF feature is enabled (db_create_file_dest), there is no need to specify the path as given earlier.

SQL> alter database enable block change tracking;

SQL> alter database disable block change tracking;


Database altered.

SQL> select * from v$block_change_tracking;

4 Scenario for Complete and Incomplete Recovery
4.1 Restore and Recovery of a Whole Database

➢ In this scenario, you have a current control file and SPFILE but all datafiles are damaged or lost.
You must restore and recover the whole database.

SQL> shut abort


ORACLE instance shut down.

SQL> startup mount


ORACLE instance started.
SQL> !rman target /

RMAN> restore database;


RMAN> recover database;
RMAN> alter database open;
OR
RMAN> run{
2> restore database;
3> recover database;
4> alter database open;}

4.2 Restore and Complete Recovery of Individual Tablespaces:

➢ In this scenario, the database is open, and some but not all of the datafiles are damaged.

➢ You can restore and recover the damaged tablespace, while leaving the database open so that
the rest of the database remains available.

Note: You don't have to shut down the database; you can perform this recovery while the database is open. If you prefer, you can instead shut down the database and perform the recovery in MOUNT stage.

SQL> !rman target /

RMAN> backup database;


RMAN> exit;
SQL> conn riaz/riaz

SQL> insert into t1 values(60);

SQL> select * from t1;


ID
----------
10
20
30
40
50
60
SQL>commit;

Here you lose the datafile of the udata tablespace:

SQL> conn riaz/riaz


SQL> select * from t1;
select * from t1
*
ERROR at line 1:
ORA-00376: file 5 cannot be read at this time

ORA-01110: data file 5: '/u01/app/oracle/oradata/fdb1/udata01.dbf'

SQL> conn / as sysdba


SQL> alter tablespace udata offline;

alter tablespace udata offline


*
ERROR at line 1:
ORA-01191: file 5 is already offline - cannot do a normal offline
ORA-01110: data file 5: '/u01/app/oracle/oradata/fdb1/udata01.dbf'

SQL> alter tablespace udata offline immediate;


Tablespace altered.

SQL> !rman target /


RMAN> restore tablespace udata;
RMAN> recover tablespace udata;
RMAN> exit

SQL> alter tablespace udata online;


Tablespace altered.

SQL> conn riaz/riaz


SQL> select * from t1;
ID
----------
10
20
30
40

50
60

4.3 Restore and Complete Recovery of Datafile

SQL> !rman target /

RMAN> backup database;


[rashmeet@station253 rashmeet]$ rm -rf users01.dbf

SQL> conn riaz/riaz


SQL> select * from tab;

TNAME TABTYPE CLUSTERID


------------------------------ ------- ----------
T2 TABLE
T1 TABLE

SQL> select * from t1;


select * from t1
*
ERROR at line 1:
ORA-01116: error in opening database file 5
ORA-01110: data file 5: '/u01/app/oracle/oradata/rashmeet/users01.dbf'
ORA-27041: unable to open file
Linux Error: 2: No such file or directory
Additional information: 3

Note: If the data is not in the database buffer cache (DBBC), you will get this error; if the blocks are still cached, the query may succeed.

SQL> conn / as sysdba
Connected.

SQL> alter database datafile 5 offline;


Database altered.

SQL> !rman target /

RMAN> restore datafile 5;


RMAN> recover datafile 5;
SQL> alter database datafile 5 online;
Database altered.

4.4 Incomplete Recovery Scenario

When to perform incomplete recovery

➢ User error (DML/Drop)


➢ Missing of current Redo log file or loss of Archived file

How to perform incomplete Recovery

• RMAN Time-Based
• RMAN SCN-Based/Change-Based (UNTIL SCN clause)
• RMAN Log Sequence-Based

➢ User Managed
• Cancel-Based
• Time-Based
• SCN Based/Change Based (until CHANGE clause)

➢ Cancel-Based

• A current redo log file or group is damaged and is not available for recovery. Mirroring should
prevent the need for this type of recovery.
• An archived redo log file needed for recovery is lost. Frequent backups and multiple archive
destinations should prevent the need for this type of recovery.
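A user-managed cancel-based recovery can be sketched as follows (the prompts shown are illustrative; apply archived logs until the missing or damaged log is requested, then type CANCEL):

SQL> shutdown immediate
SQL> startup mount
SQL> recover database until cancel;
ORA-00279: change ... generated at ... needed for thread 1
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
cancel
SQL> alter database open resetlogs;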

4.5 RMAN SCN BASED

SQL> !rman target /


connected to target database: PRIM (DBID=4026912893)

RMAN> backup database;

SQL> select CURRENT_SCN from v$database;


CURRENT_SCN
-------------------------
420709

SQL> conn riaz/riaz

SQL> drop table employees;

SQL> conn / as sysdba


Connected.

SQL> select CURRENT_SCN from v$database;

CURRENT_SCN
----------------------------
420806

SQL> shut immediate


SQL> startup mount
SQL> !rman target /

RMAN> run{
2> set until scn 420709;
3> restore database;
4> recover database;
5> alter database open resetlogs;}

OR

RMAN> restore database;


RMAN> recover database until scn 420709;
RMAN> alter database open resetlogs;
RMAN> exit

SQL> conn riaz/riaz

SQL> select * from tab;


TNAME TABTYPE CLUSTERID
---------------------------------------------------- ------- ----------
EMPLOYEES TABLE

Note:

➢ With RMAN-managed backups, you specify the system change number (SCN) of the last committed
change to be recovered.
➢ Recovery terminates after all changes up to the specified SCN are committed.
➢ You can optionally use the UNTIL RESTORE POINT syntax and specify an alias for the SCN, called
a restore point.

Note: If you specify the wrong SCN, you may lose data; that is why this is an incomplete recovery.
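If you only know the time of the user error, the SCN can be approximated with the TIMESTAMP_TO_SCN function (the timestamp below is illustrative):

SQL> select timestamp_to_scn(to_timestamp('2011-01-10 11:02:03','YYYY-MM-DD HH24:MI:SS')) from dual;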

4.6 Restore Point


Example of UNTIL RESTORE POINT syntax

RMAN> backup database;

SQL> select * from rash.emp;

ID NAME
---------- ----------------------
10 rash
20 sheetu

2 rows selected.

SQL> create restore point my_data;


Restore point created.

SQL> insert into rash.emp values (30,'ruby');


1 row created.

SQL> insert into rash.emp values (40,'noor');
1 row created.

SQL> commit;
Commit complete.

SQL> drop table rash.emp;


Table dropped.

SQL> shut immediate


SQL> startup mount
SQL> !rman target /
RMAN> run{
2> set until restore point my_data;
3> restore database;
4> recover database;
5> alter database open resetlogs;}

SQL> select * from rash.emp;

ID NAME
---------- ----------------------
10 rash
20 sheetu

Note: Here the newly inserted rows are lost, because the restore point was created before they were inserted.

Restore point information can be viewed in V$RESTORE_POINT.
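For example, to list restore points with their SCNs:

SQL> select name, scn, time from v$restore_point;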

4.7 RMAN SEQUENCE BASED

➢ Use this method when there is a gap in the archived log files, or an archived log file is missing.
➢ Recovery proceeds up to, but does not include, the specified log sequence number.

Example:

SQL> conn / as sysdba


Connected.
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/app/oracle/oradata/prim/af
Oldest online log sequence 6
Next log sequence to archive 8
Current log sequence 8

RMAN> backup database;


SQL> alter system switch logfile;
System altered.

SQL> archive log list;


Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/app/oracle/oradata/prim/af
Oldest online log sequence 8
Next log sequence to archive 10
Current log sequence 10

SQL> conn riaz/riaz


SQL> drop table employees ;
SQL> conn / as sysdba

SQL> alter system switch logfile;


System altered.

SQL> archive log list

Database log mode Archive Mode


Automatic archival Enabled
Archive destination /u01/app/oracle/oradata/prim/af
Oldest online log sequence 11
Next log sequence to archive 13
Current log sequence 13

SQL> shut immediate

SQL> startup mount

SQL> !rman target /


connected to target database: PRIM (DBID=4026912893, not open)

RMAN> run{
2> set until sequence 10 thread 1;
3> restore database;
4> recover database;
5> alter database open resetlogs;}

SQL> conn riaz/riaz


SQL> select * from tab;

TNAME TABTYPE CLUSTERID
-------------------------------------------------- ------- ----------
EMPLOYEES TABLE
COUNTRIES TABLE
REGIONS TABLE
LOCATIONS TABLE
DEPARTMENTS TABLE
JOBS TABLE
JOB_HISTORY TABLE
7 rows selected.

4.8 RMAN TIME-BASED

10:55:40 SQL> !rman target /


connected to target database: PRIM (DBID=4026912893)

RMAN> backup database;


RMAN>exit;

11:01:16 SQL> conn riaz/riaz


11:02:05 SQL> drop table employees cascade constraints;
Table dropped.

11:02:15 SQL> conn / as sysdba


Connected.

11:02:26 SQL> shut immediate;

[rashmeet@station253 admin]$ export NLS_DATE_FORMAT='YYYY-MM-DD:HH24:MI:SS'

[rashmeet@station253 admin]$ export NLS_LANG=american

[rashmeet@station253 admin]$ export ORACLE_SID=rashmeet

[rashmeet@station253 admin]$ sqlplus / as sysdba

11:02:46 SQL> startup mount;

11:02:56 SQL> !rman target /

RMAN> run{
2> set until time = '2011-01-10:11:02:03';
3> restore database;
4> recover database;
5> alter database open resetlogs;}
11:04:40 SQL> conn riaz/riaz
11:06:59 SQL>select * from employees;

5 Controlfile And Spfile Scenario

5.1 All Control Files Lost

➢ When no changes happen in the database

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP on;


SQL> !rman target /
connected to target database: PRIM (DBID=4026912893)
RMAN> backup database;
SQL> !rm -rf /u01/app/oracle/oradata/prim/cf/prim.ctl
SQL> shut immediate
SQL> startup
ORACLE instance started.
Total System Global Area 314572800 bytes
Fixed Size 1219136 bytes
Variable Size 83887552 bytes
Database Buffers 222298112 bytes
Redo Buffers 7168000 bytes

ORA-00205: error in identifying control file, check alert log for more info
SQL> shut immediate
SQL> startup nomount
SQL> !rman target /
connected to target database: prim (not mounted)
RMAN> set DBID=4026912893;
RMAN> restore controlfile from autobackup;
(or: RMAN> restore controlfile from '/backup/file';)
RMAN> alter database mount;
RMAN> recover database;
RMAN> alter database open resetlogs;
5.2 Loss of SPFILE

SQL> !rman target /


RMAN> backup database;
SQL> !rm /u01/app/oracle/product/10.2.0/db_1/dbs/spfileprim.ora
SQL> shut immediate;
SQL> startup nomount
ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/10.2.0/db_1/dbs/initprim.ora'

SQL> !rman target /

➢ Shut down the instance and restart it without mounting. When the SPFILE is not available,
RMAN starts the instance with a dummy parameter file. For example:

RMAN> STARTUP FORCE NOMOUNT;


startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/10.2.0/db_1/dbs/initprim.ora'
starting Oracle instance without parameter file for retrival of spfile

Oracle instance started


Total System Global Area 159383552 bytes
Fixed Size 1218244 bytes
Variable Size 58722620 bytes
Database Buffers 92274688 bytes
Redo Buffers 7168000 bytes

RMAN> set DBID=4026912893;

RMAN> RESTORE SPFILE FROM AUTOBACKUP;

RMAN> shutdown immediate

SQL> startup

5.3 Loss of Control File and SPFILE:

SQL> !rman target /


RMAN> backup database;

SQL> !rm -rf /u01/app/oracle/oradata/pcdb/c1.ctl


SQL> !rm -rf /u01/app/oracle/product/10.2.0/db_1/dbs/spfilepcdb.ora

SQL> shut immediate/shut abort;

SQL> !rman target /

➢ Shut down the instance and restart it without mounting. When the SPFILE is not available, RMAN
starts the instance with a dummy parameter file. For example:
RMAN> set dbid=2391778528;
RMAN> run{
2> startup force nomount;
3> restore SPFILE FROM AUTOBACKUP;
4> shutdown immediate;
5> startup nomount;
6> restore controlfile from autobackup;
7> alter database mount;
8> recover database;
9> alter database open resetlogs;}

6 Recovery Catalog

➢ RMAN repository data is always stored in the controlfile of the target database. But it can also be
stored in a separate database, called a recovery catalog.

➢ A recovery catalog preserves backup information in a separate database, which is useful in the
event of a lost control file. This allows you to store a longer history of backups than what is
possible with a controlfile-based repository.

➢ A single recovery catalog is able to store information for multiple target databases. The recovery
catalog can also hold RMAN stored scripts, which are sequences of RMAN commands for backup
tasks.
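For example, a stored script can be created and executed while connected to the recovery catalog (the script name is illustrative):

RMAN> create script full_backup { backup database plus archivelog; }
RMAN> run { execute script full_backup; }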

6.1 Steps for configuring the recovery catalog

1. Create a separate database for the recovery catalog (orcl).


(my target database is prod)

2.orcl>> create tablespace rman_tbs


datafile '/u01/app/oracle/oradata/orcl/rman_tbs.dbf' size 500m;

3.orcl>> create user rman identified by rman


default tablespace rman_tbs
quota unlimited on rman_tbs;
User created.

orcl>> grant connect,resource to rman;


Grant succeeded.

orcl>> grant recovery_catalog_owner to rman;

Grant succeeded.

orcl>>exit

4. Create a connect identifier for the recovery catalog database (orcl).

[oracle@station223 admin]$ export ORACLE_SID=prod

[oracle@station223 admin]$ rman catalog rman/rman@orcl target prod

Recovery Manager: Release 10.2.0.1.0 - Production on Wed Jan 19 14:56:20 2011


Copyright (c) 1982, 2005, Oracle. All rights reserved.

target database Password:


connected to target database: PROD (DBID=1202588170)
connected to recovery catalog database

5.RMAN> create catalog;

recovery catalog created

6.RMAN> register database;

RMAN> list incarnation;

List of Database Incarnations


DB Key Inc Key DB Name DB ID STATUS Reset SCN Reset Time
------- ------- -------- ---------------- --- ---------- ----------
1 2 PROD 1202588170 CURRENT 1 16-JAN-11

7. Register another database:

[oracle@station223 admin]$ export ORACLE_SID=prim
[oracle@station223 admin]$ rman catalog rman/rman@orcl target prim

RMAN> register database;
RMAN> exit

orcl>> select name, dbid from RC_DATABASE;

NAME       DBID
--------- ----------
PROD      1202588170
PRIM      4026912893

7 Block Corruption

7.1 Types of Block Corruption

➢ Physical Corruption/Media Corruption


➢ Logical Corruption

7.1.1 Physical Corruption/Media Corruption

➢ Typically corruption is caused by faulty hardware or operating system.

➢ Blocks can not be read from the OS.

➢ A checksum mismatch indicates physical corruption. (DBWR calculates a checksum over all bytes
stored in the block and stores it in the header of every data block. When the server process reads
the block, it recalculates the checksum; if the recalculated value differs from the stored one, the
block is physically corrupt.)

7.1.2 Logical Corruption

➢ Logically corrupt blocks are marked corrupt by the Oracle database after it detects the
inconsistency.

➢ The checksum is valid, but the contents of the block are inconsistent.

➢ It happens due to an Oracle internal error.

7.2 Manual Corruption of a Block

SQL> CREATE TABLE corruption_test (id NUMBER);
Table created.

SQL> INSERT INTO corruption_test VALUES(1);


1 row created.

SQL> COMMIT;
Commit complete.

SQL> SELECT * FROM corruption_test;


ID
----------
1

SQL>SELECT header_block
FROM dba_segments WHERE segment_name='CORRUPTION_TEST';

HEADER_BLOCK
------------
61145

SQL> select segment_name, file_id, block_id
       from dba_extents
      where segment_name='CORRUPTION_TEST';

SEGMENT_NAME       FILE_ID   BLOCK_ID
---------------- ---------- ----------
CORRUPTION_TEST          1      61145

SQL> !

[oracle@station223 ~]$ dd of=/u01/app/oracle/oradata/orcl/system01.dbf bs=8192 conv=notrunc
seek=61146 <<EOF
> testing corruption
> EOF
0+1 records in
0+1 records out
19 bytes (19 B) copied, 5.2134e-05 seconds, 364 kB/s
[oracle@station223 ~]$ exit

SQL> ALTER SYSTEM FLUSH BUFFER_CACHE;


System altered.

SQL> SELECT * FROM corruption_test;

SELECT * FROM corruption_test


*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 1, block # 61146)
ORA-01110: data file 1: '/u01/app/oracle/oradata/orcl/system01.dbf'

7.3 DBVERIFY Utility

➢ Works only on data files; redo log files and controlfile cannot be checked.

➢ Can be used while the database is open.

➢ DBVERIFY only checks a block in isolation; it does not know whether the block is part of an
existing object or not.

[oracle@localhost ~]$ dbv file=/u01/app/oracle/oradata/target/data/system01.dbf blocksize=8192

DBVERIFY - Verification complete


Total Pages Examined : 38400
Total Pages Processed (Data) : 10349
Total Pages Failing (Data) : 0
Total Pages Processed (Index): 3210
Total Pages Failing (Index): 0
Total Pages Processed (Other): 1620
Total Pages Processed (Seg) : 0
Total Pages Failing (Seg) : 0
Total Pages Empty : 23220
Total Pages Marked Corrupt : 1
Total Pages Influx :0
Highest block SCN : 992575 (0.992575)

SQL>!rman target /

RMAN> BLOCKRECOVER DATAFILE 1 block 61146;

SQL> SELECT * FROM corruption_test;

ID
----------
1

7.4 ANALYZE Command

➢ Checks the logical corruption.

➢ It generates only reports.

➢ It validates table and index entries.

SQL> analyze table hr.emp validate structure;


Table analyzed.

Information will be written in alert log file

Parameter db_block_checking (false,off/low/medium/full,true)

➢ It performs block checking for all data blocks (it checks each block as data is written into the
block).
➢ It prevents memory and data corruption.
➢ It adds roughly 1% to 10% overhead.
➢ It can be set by the ALTER SESSION or ALTER SYSTEM command.

SQL> alter system set db_block_checking=full;


System altered.

Note: Even if this parameter is set to FALSE/OFF, block checking for the SYSTEM tablespace is always
turned on.

Parameter db_block_checksum (true/false/typical)

➢ Determines the corruption by disks.


➢ DBWR calculates all bytes stored in the block, which is called checksum and that is stored in
the header of every data block when writing it to the disk.
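For example, checksumming can be enabled dynamically:

SQL> alter system set db_block_checksum=true;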

7.5 exp command to detect the corruption

[oracle@localhost ~]$ exp system/system tables=sys.t1


About to export specified tables via Conventional Path ...
Current user changed to SYS
. . exporting table T1
EXP-00056: ORACLE error 1578 encountered
ORA-01578: ORACLE data block corrupted (file # 1, block # 21747)
ORA-01110: data file 1: '/u01/app/oracle/oradata/target/data/system01.dbf'
Export terminated successfully with warnings.

Note: This does not work with the expdp command.


dd  Convert and copy a file
of=FILE  Write to FILE instead of stdout
bs=BYTES
conv=notrunc  Do not truncate the output file
(CONVS convert the file as per the comma separated symbol list)
seek=BLOCKS  Skip BLOCKS obs-sized blocks at start of output

List of currently corrupted database blocks

SQL>desc v$database_block_corruption

SQL> desc v$backup_corruption

SQL> desc v$copy_corruption

Note: To detect corrupt blocks and populate the corruption views, run the following command (it
validates the database without producing a backup).

RMAN> backup validate database;

SQL> select * from v$backup_corruption;

SQL> select * from v$database_block_corruption;


RMAN> blockrecover datafile 1 block 21641;
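If V$DATABASE_BLOCK_CORRUPTION lists several corrupt blocks, they can all be repaired in one step:

RMAN> blockrecover corruption list;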

8 CLONING

8.1 User Managed

1.Get the file path information using below query.

SQL>Select tablespace_name, file_name from dba_data_files order by 1;

2. Parameter file backup

If troy database running on spfile.

SQL> Create pfile='/u01/backup/inittroy.ora' from spfile;

3.Put the database in begin backup mode

SQL> alter database begin backup;

To confirm which tablespaces are in backup mode, use the query below (refer to the Change# and
Time columns):

SQL> Select * from v$backup;

Use OS commands to copy the datafiles belonging to the tablespaces in backup mode to the backup
path (refer to the example below).

]$cp *.dbf /u01/app/oracle/oradata/clone

]$cp *.log /u01/app/oracle/oradata/clone

]$cp arch* /u01/app/oracle/oradata/clone

SQL> alter database end backup;

4.Take the controlfile backup.

SQL> Alter database backup controlfile to trace as '/u01/backup/control01.ora';

5.Backup all your archive log files between the previous backup and the new backup as well.

Clone Database side: (Clone database)

Database Name: Clone

1. Create the appropriate folders (bdump, udump, create, pfile, cdump, oradata) in the corresponding
path and place the backup files in the corresponding folders.

2. Change the init.ora parameter like control file path, dbname, instance name etc...

3. Create the password file using orapwd utility.

(For a database on Windows, you also need to create the service using the oradim utility.)

4. Startup the Database in NOMOUNT stage.

5. Create the control file for cloning database.

• Use the backup controlfile to trace output to generate the CREATE CONTROLFILE script.

• Change the database name and file paths; also, 'REUSE' needs to be changed to 'SET'.

Example:

CREATE CONTROLFILE SET DATABASE "clone" RESETLOGS FORCE LOGGING NOARCHIVELOG


MAXLOGFILES 50
MAXLOGMEMBERS 5
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 453
LOGFILE
GROUP 1 '/U01/oradata/clone/redo01.log' SIZE 200M,
GROUP 2 '/U01/oradata/clone/redo02.log' SIZE 200M,
GROUP 3 '/U01/oradata/clone/redo03.log' SIZE 200M
DATAFILE
'/U01/oradata/clone/system01.dbf',
'/U01/oradata/clone/undotbs01.dbf',
'/U01/oradata/clone/users01.dbf'
CHARACTER SET WE8ISO8859P1;

Note: Run the script at the SQL prompt; the controlfile is then created.

SQL>@createcontrol.sql
6. Recover the database using controlfile.

SQL>recover database using backup controlfile until cancel;

7. Now open the database.

SQL>alter database open resetlogs;

Note: Check the status of the logfiles and datafiles.

If you want to reuse the same controlfile from the production side, then you need to use the REUSE keyword.

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE SET DATABASE "CLONE" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 30
MAXINSTANCES 1
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '/u01/app/oracle/oradata/clone/redo01.log' SIZE 10M,
GROUP 2 '/u01/app/oracle/oradata/clone/redo02.log' SIZE 10M
-- STANDBY LOGFILE
DATAFILE
'/u01/app/oracle/oradata/clone/system01.dbf',
'/u01/app/oracle/oradata/clone/undo01.dbf',
'/u01/app/oracle/oradata/clone/sysaux01.dbf',
'/u01/app/oracle/oradata/clone/users01.dbf'
;

• If you want to create a new controlfile on the clone side, then you need to use only the SET keyword.

STARTUP NOMOUNT
CREATE CONTROLFILE SET DATABASE "CLONE" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 2

MAXDATAFILES 30
MAXINSTANCES 1
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '/u01/app/oracle/oradata/clone/redo01.log' SIZE 10M,
GROUP 2 '/u01/app/oracle/oradata/clone/redo02.log' SIZE 10M
-- STANDBY LOGFILE
DATAFILE
'/u01/app/oracle/oradata/clone/system01.dbf',
'/u01/app/oracle/oradata/clone/undo01.dbf',
'/u01/app/oracle/oradata/clone/sysaux01.dbf',
'/u01/app/oracle/oradata/clone/users01.dbf'
;

8.2 RMAN Cloning

• Steps

Prod-Side

1. Take a backup with RMAN

RMAN> backup database plus archivelog;

SQL> create pfile='initclone.ora' from spfile;

2. Make directory for clone database.

3. Create connection string for both databases.

4. Make sure listener is up

Clone-Side:

5. Edit your pfile for the clone and add two additional parameters if you are using a different
directory structure.

db_file_name_convert='/u01/app/oracle/oradata/prod/','/u01/app/oracle/oradata/clone/'

log_file_name_convert='/u01/app/oracle/oradata/prod/','/u01/app/oracle/oradata/clone/'

6. ]$export ORACLE_SID=clone

]$ sqlplus sys/oracle@clone as sysdba

SQL>startup nomount pfile='/u01/app/oracle/admin/clone/initclone.ora';

SQL>create spfile from pfile='/u01/app/oracle/admin/clone/initclone.ora';

SQL>startup force nomount

Prod-side

]$export ORACLE_SID=prod

]$rman target sys/oracle@prod auxiliary sys/oracle@clone

RMAN> duplicate target database to clone;

9 Automatic Memory Management

➢ How to measure performance

Hit Ratios

• Buffer cache hit ratio > 95% for OLTP


• Library Cache hit ratio > 95% for OLTP
• Dictionary Cache Hit ratio > 90% for OLTP

➢ How to calculate buffer cache hit ratio

• Physical Reads: This statistic indicates the number of data blocks (i.e. tables, indexes, and
rollback segments) read from disk into the Buffer Cache since instance startup.

• Physical Reads Direct: This statistic indicates the number of reads that bypassed the Buffer
Cache because the data blocks were read directly from disk instead. Direct physical reads are
performed intentionally by Oracle when using certain features such as export or Parallel Query,
so they are excluded from the hit ratio calculation.

• Session Logical Reads: This statistic indicates the total number of reads requested for data. This
value includes requests satisfied by access to buffers in memory and requests that caused
physical I/O.

• Physical Reads Direct (LOB): This statistic indicates the number of reads that bypassed the
Buffer Cache because the data blocks were associated with a Large Object (LOB) datatype.

➢ How to measure performance of buffer cache


• Free Buffer Waits These waits occur whenever the Server Process had to wait for Database
Writer to write a dirty buffer to disk (DBWR)

SQL> SELECT event, total_waits
       FROM v$system_event
      WHERE event = 'free buffer waits';

• Buffer Busy Waits These waits occur whenever a buffer requested by user Server Processes is
already in memory, but is in use by another process. These waits can occur for rollback
segment buffers as well as data and index buffers.

SQL> SELECT event, total_waits, average_wait
       FROM v$system_event
      WHERE event = 'buffer busy waits';

➢ How to increase performance of Buffer Cache

• Make Buffer cache bigger (DB_CACHE_SIZE)


• Use multiple buffer pools (keep, recycle)
• Cache tables in memory
• Bypass buffer cache
• Use indexes properly (to avoid FTS)

➢ Make it bigger. (DB_CACHE_SIZE, SGA_MAX_SIZE)

• Set the value of DB_CACHE_SIZE (in bytes). The total size of the Buffer Cache, Shared Pool, and
Redo Log Buffer cannot exceed SGA_MAX_SIZE.

➢ How much of buffer cache is enough

✓ Set DB_CACHE_ADVICE=ON (consumes memory and CPU to collect statistics for providing
advice).

SQL> SELECT name, size_for_estimate, estd_physical_reads, estd_physical_read_factor
       FROM v$db_cache_advice
      WHERE block_size = '8192'
        AND advice_status = 'ON';

➢ Improve Buffer Cache Performance

• How to assign segments to the above pools


✓ ALTER TABLE apps.employee STORAGE (BUFFER_POOL KEEP);
✓ ALTER INDEX apps.employee_first_name_idx STORAGE (BUFFER_POOL KEEP);
✓ ALTER TABLE apps.sales_history STORAGE (BUFFER_POOL RECYCLE);

➢ How to find out which objects are in which pool

SQL> SELECT owner, segment_type, segment_name, buffer_pool


FROM dba_segments

➢ How to find out the sizes of buffer pools.

SQL> SELECT name, block_size, current_size


FROM v$buffer_pool;

➢ Cache tables in Buffer Cache

➢ No matter how many Buffer Pools you decide to use, each one is still managed by an LRU list.

➢ Small tables that require FTS will always be at the LRU end of the LRU list, so they are aged out
of the buffer cache early. To avoid this, cache them.

➢ You can cache a table using three methods

• Table creation

SQL> CREATE TABLE phone_list (
       employee_id  NUMBER,
       phone_number VARCHAR2(11),
       extension    VARCHAR2(4))
     TABLESPACE appl_tab
     STORAGE (INITIAL 50K NEXT 50K PCTINCREASE 0)
     CACHE;

• Alter table

SQL> ALTER TABLE employee CACHE;

• Use hint (caches table only for the query time)

SQL>SELECT /*+ CACHE */ last_name, first_name


FROM employee;

➢ How to decide which tables are already cached

SQL> SELECT owner, table_name
       FROM dba_tables
      WHERE LTRIM(cache) = 'Y';

➢ Bypass the Buffer Cache

• We can bypass the buffer cache using direct path reads and direct-path insert commands.
• exp with DIRECT=Y (direct path export)
• sqlldr with DIRECT=TRUE (direct path loading)
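For example (the user, table, and file names are illustrative):

]$ exp scott/tiger tables=emp file=emp.dmp direct=y

]$ sqlldr scott/tiger control=emp.ctl direct=true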

➢ Additional relevant commands

• alter system flush buffer_cache;


• alter system flush shared_pool;

➢ Initialization Parameters

• DB_BLOCK_SIZE=2048 Size of the database block

• DB_CACHE_SIZE=512M Size of the buffer cache (default buffer pool)

• DB_RECYCLE_CACHE_SIZE=256M Size of recycle part of buffer cache (Should be used


for large tables which are usually read into memory with FTS that are not frequently
queried)
• DB_KEEP_CACHE_SIZE=256M Size of keep part of buffer cache (Should be used for
small tables that are frequently queried)

• DB_FILE_MULTIBLOCK_READ_COUNT=8 The server process will read 8 blocks at a time from
disk into the buffer cache; good for DSS.

• DB_2K_CACHE_SIZE=100M Set this parameter if you have created a tablespace with a


non-default 2K block size

• JAVA_POOL_SIZE Size of java pool

• LARGE_POOL_SIZE Size of large pool

• SHARED_POOL_SIZE  Size of shared pool

• LOG_BUFFER Size of Redo buffer

➢ What is ASMM (Automatic Shared Memory Management)?

➢ For example, in a system that runs large online transaction processing (OLTP) jobs during the
day (requiring a large buffer cache) and runs parallel batch jobs at night (requiring a large value
for the large pool), you would have to simultaneously configure both the buffer cache and the
large pool to accommodate your peak requirements.

➢ With Automatic Shared Memory Management, when the OLTP job runs, the buffer cache grabs
most of the memory to allow for good I/O performance. When the data analysis and
reporting batch job starts up later, the memory is automatically migrated to the large
pool so that it can be used by parallel query operations without producing memory overflow
errors.

➢ How does ASMM work

• Based on workload information, MMAN captures statistics periodically in the


background.

• MMAN uses the different memory advisories.

• Memory is moved to where it is most needed.

• Using an SPFILE is recommended:

• Component sizes saved across shutdowns

• Saved values used to bootstrap component sizes

• Avoids having to relearn optimal values

• Init parameters for ASMM:
SGA_TARGET  Size of the total SGA.
SGA_MAX_SIZE  Maximum size the SGA can grow to using the ALTER SYSTEM SET
SGA_TARGET command.
STATISTICS_LEVEL=TYPICAL  MMAN needs to collect statistics for ASMM to work.

➢ Memory components not managed by ASMM

• LOG_BUFFER
• DB_KEEP_CACHE_SIZE
• DB_RECYCLE_CACHE_SIZE
• STREAMS_POOL_SIZE
• Fixed SGA

➢ When SGA_TARGET is set, the total size of manual SGA size parameters is subtracted from the
SGA_TARGET value, and the balance is given to the auto-tuned SGA components.

Note: STATISTICS_LEVEL init parameter must be set to TYPICAL or ALL for ASMM to work
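A minimal sketch of enabling ASMM (the sizes are illustrative):

SQL> alter system set statistics_level=typical;
SQL> alter system set sga_target=300M;
SQL> alter system set db_cache_size=0;
SQL> alter system set shared_pool_size=0;

Setting the auto-tuned parameters to 0 lets ASMM size those components fully automatically; a nonzero value acts as a minimum.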

SQL> select component, current_size, user_specified_size
       from v$sga_dynamic_components;

➢ Initial recommendations for Database Server

• Server memory 1 GB or less:
• Server Memory × .55 = Memory to All SGAs
• SGA × .45 = Memory to the Shared Pool
• SGA × .45 = Memory to the Database Buffer Cache
• SGA × .10 = Memory to the Redo Log Buffer

• Server memory more than 1 GB:
• Server Memory × .65 = Memory to All SGAs
• SGA × .45 = Memory to the Shared Pool
• SGA × .45 = Memory to the Database Buffer Cache
• SGA × .10 = Memory to the Redo Log Buffer

10 Partitioning

10.1 Introduction to Partitioning

➢ Partitioning addresses key issues in supporting very large tables and indexes by letting you
decompose them into smaller and more manageable pieces called partitions.
➢ SQL queries and DML statements do not need to be modified in order to access partitioned
tables.
➢ However, after partitions are defined, DDL statements can access and manipulate individual
partitions rather than entire tables or indexes.
➢ This is how partitioning can simplify the manageability of large database objects.
➢ Each partition of a table or index must have the same logical attributes, such as column names,
datatypes, and constraints, but each partition can have separate physical attributes such as
pctfree, pctused, and tablespaces.
➢ Partitioning is useful for many different types of applications, particularly applications that
manage large volumes of data.
➢ OLTP systems often benefit from improvements in manageability and availability, while data
warehousing systems benefit from performance and manageability.

Note: All partitions of a partitioned object must reside in tablespaces of a single block size.
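For instance, once partitions exist, maintenance DDL can target a single partition (the table and partition names are illustrative):

SQL> alter table sales truncate partition sales_q1;

SQL> alter table sales drop partition sales_q2;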

10.2 Advantages of Partitioning

➢ Partitioning enables data management operations such as data loads, index creation and
rebuilding, and backup/recovery at the partition level, rather than on the entire table. This
results in significantly reduced times for these operations.
➢ Partitioning improves query performance. In many cases, the results of a query can be
achieved by accessing a subset of partitions, rather than the entire table. For some queries,
this technique (called partition pruning) can provide order-of-magnitude gains in performance.
➢ Partitioning can significantly reduce the impact of scheduled downtime for maintenance

operations.
➢ Partition independence for partition maintenance operations lets you perform concurrent
maintenance operations on different partitions of the same table or index. You can also run
concurrent SELECT and DML operations against partitions that are unaffected by maintenance
operations.
➢ Partitioning increases the availability of mission-critical databases if critical tables and indexes
are divided into partitions to reduce the maintenance windows, recovery times, and impact of
failures.
➢ Partitioning can be implemented without requiring any modifications to your applications. For
example, you could convert a nonpartitioned table to a partitioned table without needing to
modify any of the SELECT statements or DML statements which access that table. You do not
need to rewrite your application code to take advantage of partitioning.

Partition Key

➢ Each row in a partitioned table is unambiguously assigned to a single partition.


➢ The partition key is a set of one or more columns that determines the partition for each row.
➢ Oracle automatically directs insert, update, and delete operations to the appropriate partition
through the use of the partition key.
➢ Consists of an ordered list of 1 to 16 columns.
➢ Cannot contain a LEVEL, ROWID, or MLSLABEL pseudocolumn or a column of type ROWID.
➢ Can contain columns that are NULLable

Partitioned Tables
➢ Tables can be partitioned into up to 1024K-1 separate partitions.
➢ Any table can be partitioned except those tables containing columns with LONG or LONG RAW
datatypes.
➢ You can, however, use tables containing columns with CLOB or BLOB datatypes.

10.3 Partitioning Methods

➢ There are several partitioning methods offered by Oracle Database:

• Range partitioning
• Hash partitioning
• List partitioning
• Composite range-hash partitioning
• Composite range-list partitioning

10.3.1 Range Partitioning

➢ Use range partitioning to map rows to partitions based on ranges of column values.
➢ This type of partitioning is useful when dealing with data that has logical ranges into which it
can be distributed; for example, months of the year.
➢ Performance is best when the data evenly distributes across the range.
➢ If partitioning by range causes partitions to vary dramatically in size because of unequal
distribution, you may want to consider one of the other methods of partitioning.
➢ When creating range partitions, you must specify:

• Partitioning method: range


• Partitioning column(s)

➢ Range partitioning maps data to partitions based on ranges of partition key values that you
establish for each partition. It is the most common type of partitioning and is often used with
dates. For example, you might want to partition sales data into monthly partitions.
➢ When using range partitioning, consider the following rules:

• Each partition has a VALUES LESS THAN clause, which specifies a noninclusive upper
bound for the partitions. Any values of the partition key equal to or higher than this
literal are added to the next higher partition.
• All partitions, except the first, have an implicit lower bound specified by the VALUES
LESS THAN clause on the previous partition.
• A MAXVALUE literal can be defined for the highest partition. MAXVALUE represents a
virtual infinite value that sorts higher than any other possible value for the partition
key, including the null value.

➢ The example below creates a table of four partitions, one for each quarter of sales. The
columns sale_year, sale_month, and sale_day are the partitioning columns, while their values
constitute the partitioning key of a specific row. The VALUES LESS THAN clause determines the
partition bound: rows with partitioning key values that compare less than the ordered list of
values specified by the clause are stored in the partition. Each partition is given a name
(sales_q1, sales_q2, ...), and each partition is contained in a separate tablespace (tsa, tsb, ...).

Example:

SQL> CREATE TABLE sales ( invoice_no NUMBER, sale_year INT NOT NULL,
       sale_month INT NOT NULL, sale_day INT NOT NULL )
     PARTITION BY RANGE (sale_year, sale_month, sale_day)
     ( PARTITION sales_q1 VALUES LESS THAN (1999, 04, 01) TABLESPACE tsa,
       PARTITION sales_q2 VALUES LESS THAN (1999, 07, 01) TABLESPACE tsb,
       PARTITION sales_q3 VALUES LESS THAN (1999, 10, 01) TABLESPACE tsc,
       PARTITION sales_q4 VALUES LESS THAN (2000, 01, 01) TABLESPACE tsd );

➢ A row with sale_year=1999, sale_month=8, and sale_day=1 has a partitioning key of (1999, 8,
1) and would be stored in partition sales_q3.

Example-2

➢ A typical example is given in the following section. The statement creates a table (sales_range)
that is range partitioned on the sales_date field.

SQL> CREATE TABLE sales_range
       (salesman_id NUMBER(5), salesman_name VARCHAR2(30),
        sales_amount NUMBER(10), sales_date DATE)
     PARTITION BY RANGE(sales_date)
     (
      PARTITION sales_jan2000 VALUES LESS THAN(TO_DATE('02/01/2000','MM/DD/YYYY')),
      PARTITION sales_feb2000 VALUES LESS THAN(TO_DATE('03/01/2000','MM/DD/YYYY')),
      PARTITION sales_mar2000 VALUES LESS THAN(TO_DATE('04/01/2000','MM/DD/YYYY')),
      PARTITION sales_apr2000 VALUES LESS THAN(TO_DATE('05/01/2000','MM/DD/YYYY'))
     );

Example-3

SQL> CREATE TABLE students
       (student_id NUMBER(10), degree VARCHAR2(3),
        graduation_date DATE, final_gpa NUMBER)
     PARTITION BY RANGE (graduation_date)
     ( PARTITION students_2000
         VALUES LESS THAN (TO_DATE('01-JUN-2000','DD-MON-YYYY'))
         TABLESPACE users
         STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 5),
       PARTITION students_2001
         VALUES LESS THAN (TO_DATE('01-JUN-2001','DD-MON-YYYY'))
         TABLESPACE users
         STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 3),
       PARTITION students_errors
         VALUES LESS THAN (MAXVALUE)
         TABLESPACE users)
     ENABLE ROW MOVEMENT;

Evaluation:

students_2000:   graduation_date < TO_DATE('01-JUN-2000','DD-MON-YYYY')
students_2001:   graduation_date < TO_DATE('01-JUN-2001','DD-MON-YYYY')
students_errors: graduation_date < MAXVALUE

Points to remember:

➢ A table can have up to 1024K-1 partitions

➢ There cannot be a gap in range partitioning

➢ Bounds are "less than" – not "less than or equal to"

➢ The partitioning key cannot be of type LONG or LONG RAW

➢ Inserting a row above the highest partition bound raises ORA-14400

➢ The partitioning key can be a composite key with up to 16 columns
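The ORA-14400 error above can be reproduced with the sales table from Example 1, whose highest bound is (2000, 01, 01) and which has no MAXVALUE partition. A hedged sketch (the exact error message text varies by Oracle release):

```sql
-- sales has no MAXVALUE partition; its highest bound is (2000, 01, 01),
-- so a row dated March 2000 falls above every partition bound
INSERT INTO sales (invoice_no, sale_year, sale_month, sale_day)
VALUES (101, 2000, 3, 15);
-- raises ORA-14400 because the partition key maps to no partition
```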

SQL> SELECT segment_name, partition_name, extent_id, blocks
       FROM dba_extents WHERE segment_name = 'SALES_RANGE';

SQL> SELECT segment_name, partition_name, extent_id, blocks
       FROM dba_extents WHERE segment_name = 'STUDENTS';

10.3.2 List Partitioning

➢ List partitioning enables you to explicitly control how rows map to partitions. You do this by
specifying a list of discrete values for the partitioning key in the description for each partition.
This is different from range partitioning, where a range of values is associated with a partition
and from hash partitioning, where a hash function controls the row-to-partition mapping. The
advantage of list partitioning is that you can group and organize unordered and unrelated sets
of data in a natural way.

➢ Use list partitioning when you require explicit control over how rows map to partitions. You
can specify a list of discrete values for the partitioning column in the description for each
partition. This is different from range partitioning, where a range of values is associated with a
partition, and from hash partitioning, where the user has no control of the row to partition
mapping.

➢ The list partitioning method is specifically designed for modeling data distributions that follow
discrete values. This cannot be easily done by range or hash partitioning because range
partitioning assumes a natural range of values for the partitioning column, and it is not possible to
group out-of-range values together into partitions.

➢ Hash partitioning allows no control over the distribution of data because the data is distributed
over the various partitions using the system hash function. Again, this makes it impossible to
logically group together discrete values for the partitioning columns into partitions.

➢ Further, list partitioning allows unordered and unrelated sets of data to be grouped and
organized together very naturally.

➢ Unlike the range and hash partitioning methods, multicolumn partitioning is not supported for
list partitioning. If a table is partitioned by list, the partitioning key can consist only of a single
column of the table. Otherwise all columns that can be partitioned by the range or hash

methods can be partitioned by the list partitioning method.

➢ When creating list partitions, you must specify:

• Partitioning method: list


• Partitioning column

➢ Partition descriptions, each specifying a list of literal values (a value list), which are the discrete
values of the partitioning column that qualify a row to be included in the partition.

Example

➢ The following example creates a list-partitioned table, q1_sales_by_region, which is partitioned
by regions consisting of groups of states.

SQL> CREATE TABLE q1_sales_by_region
       (deptno number, deptname varchar2(20),
        quarterly_sales number(10, 2), state varchar2(2))
     PARTITION BY LIST (state)
     (PARTITION q1_northwest VALUES ('OR', 'WA'),
      PARTITION q1_southwest VALUES ('AZ', 'UT', 'NM'),
      PARTITION q1_northeast VALUES ('NY', 'VM', 'NJ'),
      PARTITION q1_southeast VALUES ('FL', 'GA'),
      PARTITION q1_northcentral VALUES ('SD', 'WI'),
      PARTITION q1_southcentral VALUES ('OK', 'TX'));

➢ A row is mapped to a partition by checking whether the value of the partitioning column for a
row matches a value in the value list that describes the partition.

➢ For example, some sample rows are inserted as follows:

(10, 'accounting', 100, 'WA') maps to partition q1_northwest
(20, 'R&D', 150, 'OR') maps to partition q1_northwest
(30, 'sales', 100, 'FL') maps to partition q1_southeast
(40, 'HR', 10, 'TX') maps to partition q1_southcentral
(50, 'systems engineering', 10, 'CA') does not map to any partition in the table and raises an
error.

➢ Unlike range partitioning, with list partitioning, there is no apparent sense of order between
partitions.

➢ You can also specify a default partition into which rows that do not map to any other partition
are mapped. If a default partition were specified in the preceding example, the state CA would
map to that partition.
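A default partition can also be added after the fact. A hedged sketch against the q1_sales_by_region table above (the partition name q1_other is illustrative):

```sql
-- Add a catch-all partition; a row with state 'CA' now maps here
-- instead of raising an error
ALTER TABLE q1_sales_by_region ADD PARTITION q1_other VALUES (DEFAULT);
```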

Example-2

➢ The details of list partitioning can best be described with an example. In this case, let's say you
want to partition a sales table by region. That means grouping states together according to
their geographical location, as in the following example.

SQL> CREATE TABLE sales_list
       (salesman_id NUMBER(5),
        salesman_name VARCHAR2(30),
        sales_state VARCHAR2(20),
        sales_amount NUMBER(10),
        sales_date DATE)
     PARTITION BY LIST(sales_state)
     (
      PARTITION sales_west VALUES('California', 'Hawaii'),
      PARTITION sales_east VALUES ('New York', 'Virginia', 'Florida'),
      PARTITION sales_central VALUES('Texas', 'Illinois'),
      PARTITION sales_other VALUES(DEFAULT)
     );

➢ A row is mapped to a partition by checking whether the value of the partitioning column for a
row falls within the set of values that describes the partition. For example, the rows are
inserted as follows:

(10, 'Jones', 'Hawaii', 100, '05-JAN-2000') maps to partition sales_west


(21, 'Smith', 'Florida', 150, '15-JAN-2000') maps to partition sales_east
(32, 'Lee', 'Colorado', 130, '21-JAN-2000') maps to partition sales_other

➢ Unlike range and hash partitioning, multicolumn partition keys are not supported for list
partitioning. If a table is partitioned by list, the partitioning key can only consist of a single
column of the table.

➢ The DEFAULT partition enables you to avoid specifying all possible values for a list-partitioned
table by using a default partition, so that all rows that do not map to any other partition do not
generate an error.

10.3.3 Hash Partitioning

➢ Hash partitioning enables easy partitioning of data that does not lend itself to range or list
partitioning.

➢ It does this with a simple syntax and is easy to implement. It is a better choice than range
partitioning when:

• You do not know beforehand how much data maps into a given range.
• The sizes of range partitions would differ quite substantially or would be difficult to
balance manually.
• Range partitioning would cause the data to be undesirably clustered.
• Performance features such as parallel DML, partition pruning, and partition-wise joins are
important.

➢ The concepts of splitting, dropping or merging partitions do not apply to hash partitions.
Instead, hash partitions can be added and coalesced.
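The add and coalesce operations read as follows; a hedged sketch, where my_hash_tab is a hypothetical hash-partitioned table:

```sql
-- Adding a hash partition rehashes rows from an existing partition into the new one
ALTER TABLE my_hash_tab ADD PARTITION;

-- Coalescing removes one hash partition and redistributes its rows
-- among the remaining partitions
ALTER TABLE my_hash_tab COALESCE PARTITION;
```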

➢ Use hash partitioning if your data does not easily lend itself to range partitioning, but you
would like to partition for performance and manageability reasons. Hash partitioning provides
a method of evenly distributing data across a specified number of partitions. Rows are mapped
into partitions based on a hash value of the partitioning key. Creating and using hash partitions
gives you a highly tunable method of data placement, because you can influence availability
and performance by spreading these evenly sized partitions across I/O devices (striping).

➢ To create hash partitions you specify the following:

• Partitioning method: hash


• Partitioning column(s)

Example-1

➢ The following example creates a hash-partitioned table. The partitioning column is id, four
partitions are created and assigned system generated names, and they are placed in four
named tablespaces (gear1, gear2, ...).

SQL> CREATE TABLE scubagear (id NUMBER, name VARCHAR2 (60))
PARTITION BY HASH (id)
PARTITIONS 4
STORE IN (gear1, gear2, gear3, gear4);

Example-2

SQL>CREATE TABLE sales_hash (salesman_id NUMBER(5),


salesman_name VARCHAR2(30),
sales_amount NUMBER(10),
week_no NUMBER(2))
PARTITION BY HASH(salesman_id)
PARTITIONS 4
STORE IN (ts1, ts2, ts3, ts4);

➢ The preceding statement creates a table sales_hash, which is hash partitioned on the
salesman_id column. The tablespace names are ts1, ts2, ts3, and ts4. With this syntax, the
partitions are created in a round-robin manner across the specified tablespaces.
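Because the partition names are system generated, you can list them (and confirm the round-robin tablespace placement) from the data dictionary:

```sql
SQL> SELECT partition_name, tablespace_name
       FROM user_tab_partitions
      WHERE table_name = 'SALES_HASH';
```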

10.3.4 Composite Partitioning

➢ Composite partitioning partitions data using the range method, and within each partition,
subpartitions it using the hash or list method. Composite range-hash partitioning provides the
improved manageability of range partitioning and the data placement, striping, and parallelism
advantages of hash partitioning. Composite range-list partitioning provides the manageability
of range partitioning and the explicit control of list partitioning for the subpartitions.

➢ Composite partitioning supports historical operations, such as adding new range partitions, but
also provides higher degrees of parallelism for DML operations and finer granularity of data

placement through subpartitioning.

When to Use Composite Range-List Partitioning

➢ Like the composite range-hash partitioning method, the composite range-list partitioning
method provides for partitioning based on a two level hierarchy. The first level of partitioning
is based on a range of values, as for range partitioning; the second level is based on discrete
values, as for list partitioning. This form of composite partitioning is well suited for historical
data, but lets you further group the rows of data based on unordered or unrelated column
values.

➢ When creating range-list partitions, you specify the following:

• Partitioning method: range


• Partitioning column(s)
• Subpartitioning method: list
• Subpartitioning column

➢ Subpartition descriptions, each specifying a list of literal values (a value list), which are the
discrete values of the subpartitioning column that qualify a row to be included in the
subpartition.

Example-1

➢ The following example illustrates how range-list partitioning might be used. The example tracks
sales data of products by quarters and within each quarter, groups it by specified states.

SQL> CREATE TABLE quarterly_regional_sales
       (deptno number, item_no varchar2(20),
        txn_date date, txn_amount number, state varchar2(2))
     TABLESPACE ts4
     PARTITION BY RANGE (txn_date)
     SUBPARTITION BY LIST (state)
(PARTITION q1_1999 VALUES LESS THAN (TO_DATE('1-APR-1999','DD-MON-YYYY'))
(SUBPARTITION q1_1999_northwest VALUES ('OR', 'WA'),
SUBPARTITION q1_1999_southwest VALUES ('AZ', 'UT', 'NM'),
SUBPARTITION q1_1999_northeast VALUES ('NY', 'VM', 'NJ'),
SUBPARTITION q1_1999_southeast VALUES ('FL', 'GA'),
SUBPARTITION q1_1999_northcentral VALUES ('SD', 'WI'),
SUBPARTITION q1_1999_southcentral VALUES ('OK', 'TX')
),
PARTITION q2_1999 VALUES LESS THAN ( TO_DATE('1-JUL-1999','DD-MON-YYYY'))
(SUBPARTITION q2_1999_northwest VALUES ('OR', 'WA'),
SUBPARTITION q2_1999_southwest VALUES ('AZ', 'UT', 'NM'),
SUBPARTITION q2_1999_northeast VALUES ('NY', 'VM', 'NJ'),
SUBPARTITION q2_1999_southeast VALUES ('FL', 'GA'),
SUBPARTITION q2_1999_northcentral VALUES ('SD', 'WI'),
SUBPARTITION q2_1999_southcentral VALUES ('OK', 'TX')
),
PARTITION q3_1999 VALUES LESS THAN (TO_DATE('1-OCT-1999','DD-MON-YYYY'))
(SUBPARTITION q3_1999_northwest VALUES ('OR', 'WA'),
SUBPARTITION q3_1999_southwest VALUES ('AZ', 'UT', 'NM'),
SUBPARTITION q3_1999_northeast VALUES ('NY', 'VM', 'NJ'),
SUBPARTITION q3_1999_southeast VALUES ('FL', 'GA'),
SUBPARTITION q3_1999_northcentral VALUES ('SD', 'WI'),
SUBPARTITION q3_1999_southcentral VALUES ('OK', 'TX')
),
PARTITION q4_1999 VALUES LESS THAN ( TO_DATE('1-JAN-2000','DD-MON-YYYY'))
(SUBPARTITION q4_1999_northwest VALUES ('OR', 'WA'),
SUBPARTITION q4_1999_southwest VALUES ('AZ', 'UT', 'NM'),
SUBPARTITION q4_1999_northeast VALUES ('NY', 'VM', 'NJ'),

SUBPARTITION q4_1999_southeast VALUES ('FL', 'GA'),
SUBPARTITION q4_1999_northcentral VALUES ('SD', 'WI'),
SUBPARTITION q4_1999_southcentral VALUES ('OK', 'TX')
)
);

➢ A row is mapped to a partition by checking whether the value of the partitioning column for a
row falls within a specific partition range. The row is then mapped to a subpartition within that
partition by identifying the subpartition whose descriptor value list contains a value matching
the subpartition column value.

➢ For example, some sample rows are inserted as follows:

(10, 4532130, '23-Jan-1999', 8934.10, 'WA') maps to subpartition q1_1999_northwest


(20, 5671621, '15-May-1999', 49021.21, 'OR') maps to subpartition q2_1999_northwest
(30, 9977612, '07-Sep-1999', 30987.90, 'FL') maps to subpartition q3_1999_southeast
(40, 9977612, '29-Nov-1999', 67891.45, 'TX') maps to subpartition q4_1999_southcentral
(40, 4532130, '5-Jan-2000', 897231.55, 'TX') does not map to any partition in the table and raises an
error
(50, 5671621, '17-Dec-1999', 76123.35, 'CA') does not map to any subpartition in the table and raises
an error

➢ The partitions of a range-list partitioned table are logical structures only, as their data is stored
in the segments of their subpartitions. The list subpartitions have the same characteristics as
list partitions.

➢ You can specify a default subpartition, just as you specify a default partition for list
partitioning.

Example-2

SQL> CREATE TABLE bimonthly_regional_sales
       (deptno NUMBER, item_no VARCHAR2(20),
        txn_date DATE, txn_amount NUMBER,
        state VARCHAR2(2))
     PARTITION BY RANGE (txn_date)
     SUBPARTITION BY LIST (state)
     SUBPARTITION TEMPLATE(
       SUBPARTITION east VALUES('NY', 'VA', 'FL') TABLESPACE ts1,
       SUBPARTITION west VALUES('CA', 'OR', 'HI') TABLESPACE ts2,
       SUBPARTITION central VALUES('IL', 'TX', 'MO') TABLESPACE ts3)
     (
      PARTITION janfeb_2000 VALUES LESS THAN (TO_DATE('1-MAR-2000','DD-MON-YYYY')),
      PARTITION marapr_2000 VALUES LESS THAN (TO_DATE('1-MAY-2000','DD-MON-YYYY')),
      PARTITION mayjun_2000 VALUES LESS THAN (TO_DATE('1-JUL-2000','DD-MON-YYYY'))
     );

➢ This statement creates a table bimonthly_regional_sales that is range partitioned on the
txn_date field and list subpartitioned on state. When you use a template, Oracle names the
subpartitions by concatenating the partition name, an underscore, and the subpartition name
from the template. Oracle places each subpartition in the tablespace specified in the template.
In the previous statement, janfeb_2000_east is created and placed in tablespace ts1 while
janfeb_2000_central is created and placed in tablespace ts3. In the same manner,
mayjun_2000_east is placed in tablespace ts1 while mayjun_2000_central is placed in
tablespace ts3. The table bimonthly_regional_sales thus has 9 individual subpartitions.

10.3.5 Composite Range-Hash Partitioning

➢ Range-hash partitioning partitions data using the range method, and within each partition,
subpartitions it using the hash method. These composite partitions are ideal for both historical
data and striping, and provide improved manageability of range partitioning and data
placement, as well as the parallelism advantages of hash partitioning.

➢ When creating range-hash partitions, you specify the following:

• Partitioning method: range
• Partitioning column(s)
• Partition descriptions identifying partition bounds
• Subpartitioning method: hash
• Subpartitioning column(s)

➢ The following statement creates a range-hash partitioned table. In this example, three range
partitions are created, each containing eight subpartitions. Because the subpartitions are not
named, system generated names are assigned, but the STORE IN clause distributes them across
the 4 specified tablespaces (ts1, ...,ts4).

Example-1

SQL>CREATE TABLE scubagear (equipno NUMBER, equipname VARCHAR(32), price NUMBER)


PARTITION BY RANGE (equipno) SUBPARTITION BY HASH(equipname)
SUBPARTITIONS 8 STORE IN (ts1, ts2, ts3, ts4)
(PARTITION p1 VALUES LESS THAN (1000),
PARTITION p2 VALUES LESS THAN (2000),
PARTITION p3 VALUES LESS THAN (MAXVALUE));
➢ The partitions of a range-hash partitioned table are logical structures only, as their data is
stored in the segments of their subpartitions. As with partitions, these subpartitions share the
same logical attributes.

➢ Unlike range partitions in a range-partitioned table, the subpartitions cannot have different
physical attributes from the owning partition, although they are not required to reside in the
same tablespace.

Example-2

SQL>CREATE TABLE sales_composite


(salesman_id NUMBER(5),
salesman_name VARCHAR2(30),
sales_amount NUMBER(10),
sales_date DATE)
PARTITION BY RANGE(sales_date)
SUBPARTITION BY HASH(salesman_id)
SUBPARTITION TEMPLATE(
SUBPARTITION sp1 TABLESPACE ts1,
SUBPARTITION sp2 TABLESPACE ts2,
SUBPARTITION sp3 TABLESPACE ts3,
SUBPARTITION sp4 TABLESPACE ts4)
(PARTITION sales_jan2000 VALUES LESS THAN(TO_DATE('02/01/2000','MM/DD/YYYY')),
 PARTITION sales_feb2000 VALUES LESS THAN(TO_DATE('03/01/2000','MM/DD/YYYY')),
 PARTITION sales_mar2000 VALUES LESS THAN(TO_DATE('04/01/2000','MM/DD/YYYY')),
 PARTITION sales_apr2000 VALUES LESS THAN(TO_DATE('05/01/2000','MM/DD/YYYY')),
 PARTITION sales_may2000 VALUES LESS THAN(TO_DATE('06/01/2000','MM/DD/YYYY')));

➢ This statement creates a table sales_composite that is range partitioned on the sales_date field
and hash subpartitioned on salesman_id. When you use a template, Oracle names the
subpartitions by concatenating the partition name, an underscore, and the subpartition name
from the template. Oracle places this subpartition in the tablespace specified in the template.
In the previous statement, sales_jan2000_sp1 is created and placed in tablespace ts1 while
sales_jan2000_sp4 is created and placed in tablespace ts4. In the same manner,
sales_apr2000_sp1 is created and placed in tablespace ts1 while sales_apr2000_sp4 is created
and placed in tablespace ts4.

➢ The following query displays the subpartition names and tablespaces:

SQL> SELECT tablespace_name, partition_name, subpartition_name
       FROM dba_tab_subpartitions
      WHERE table_name = 'SALES_COMPOSITE'
      ORDER BY tablespace_name;

10.4 IOT – Index-Organized Table

➢ An index-organized table has a storage organization that is a variant of a primary B-tree. Unlike
an ordinary (heap-organized) table whose data is stored as an unordered collection (heap),
data for an index-organized table is stored in a B-tree index structure in a primary key sorted
manner. Each leaf block in the index structure stores both the key and nonkey columns.

10.5 Advantages of IOT

The structure of an index-organized table provides the following benefits:

➢ Fast random access on the primary key because an index-only scan is sufficient. And, because
there is no separate table storage area, changes to the table data (such as adding new rows,
updating rows, or deleting rows) result only in updating the index structure.

➢ Fast range access on the primary key because the rows are clustered in primary key order.

➢ Lower storage requirements because duplication of primary keys is avoided. They are not
stored both in the index and underlying table, as is true with heap-organized tables.

➢ Index-organized tables have full table functionality. They support features such as constraints,
triggers, LOB and object columns, partitioning, parallel operations, online reorganization, and
replication. And, they offer these additional features:

• Key compression
• Overflow storage area and specific column placement
• Secondary indexes, including bitmap indexes.

10.6 Creating Index-Organized Tables

➢ You use the CREATE TABLE statement to create index-organized tables, but you must provide
additional information:

➢ An ORGANIZATION INDEX qualifier, which indicates that this is an index-organized table.

➢ A primary key, specified through a column constraint clause (for a single column primary key)
or a table constraint clause (for a multiple-column primary key).

Optionally, you can specify the following:

➢ An OVERFLOW clause, which preserves dense clustering of the B-tree index by storing the row
column values exceeding a specified threshold in a separate overflow data segment.

➢ A PCTTHRESHOLD value, which defines the percentage of space reserved in the index block for
an index-organized table. Any portion of the row that exceeds the specified threshold is stored
in the overflow segment. In other words, the row is broken at a column boundary into two
pieces, a head piece and tail piece. The head piece fits in the specified threshold and is stored
along with the key in the index leaf block. The tail piece is stored in the overflow area as one
or more row pieces. Thus, the index entry contains the key value, the nonkey column values
that fit the specified threshold, and a pointer to the rest of the row.

➢ An INCLUDING clause, which can be used to specify nonkey columns that are to be stored in
the overflow data segment.

Example

SQL> CREATE TABLE admin_docindex
       ( token char(20),
         doc_id NUMBER,
         token_frequency NUMBER,
         token_offsets VARCHAR2(512),
         CONSTRAINT pk_admin_docindex PRIMARY KEY (token, doc_id))
     ORGANIZATION INDEX
     TABLESPACE admin_tbs
     PCTTHRESHOLD 20
     OVERFLOW TABLESPACE admin_tbs2;
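The INCLUDING clause described above can be added to a variant of the same example; a hedged sketch (the table name and column choice are illustrative), in which token_frequency is the last nonkey column kept in the index leaf blocks and everything after it goes to the overflow segment:

```sql
SQL> CREATE TABLE admin_docindex2
       ( token char(20),
         doc_id NUMBER,
         token_frequency NUMBER,
         token_offsets VARCHAR2(512),
         CONSTRAINT pk_admin_docindex2 PRIMARY KEY (token, doc_id))
     ORGANIZATION INDEX
     TABLESPACE admin_tbs
     PCTTHRESHOLD 20
     INCLUDING token_frequency      -- columns after this one go to overflow
     OVERFLOW TABLESPACE admin_tbs2;
```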

11 Row Chaining and Migration

11.1 Logical & Physical Space management

Blocks → Extents → Segments (tables, indexes, partitions) → Tablespaces → Data files

➢ What are PCTUSED and PCTFREE?

Block = header + data

➢ PCTUSED (default 40) – governs inserts (set high for DSS systems – systems with no or few updates)
➢ PCTFREE (default 10) – reserves space for updates (set high for update-intensive systems)

➢ What is Row Chaining?

• The row does not fit in one db_block.
• Caused by huge rows and LOBs
• You cannot fix it

➢ What is Row Migration?

• The row fits in the db_block, but after an update it no longer fits.
• Caused by small rows with not enough PCTFREE to grow after an update
• The row is moved to another block and a pointer is left behind
• After moving to the new block, the row fits in one block
• Study your updates and use the right PCTFREE

Note: Row chaining and migration slows the performance, as Oracle has to read multiple blocks to
fetch one row.

How to find if users are querying chained or migrated rows

Example:

SQL> SELECT 'Chained or Migrated Rows = '||value
       FROM v$sysstat
      WHERE name = 'table fetch continued row';

How to find if a table has migrated or chained rows

➢ First analyze the table so that chain_cnt is populated:

SQL> ANALYZE TABLE employees COMPUTE STATISTICS;

SQL> SELECT owner, table_name, chain_cnt
       FROM dba_tables
      WHERE chain_cnt > 0;

How to find which rows are chained or migrated

➢ Create chained_rows table to capture chained rows (rdbms/utlchain.sql)

SQL> CREATE TABLE chained_rows (
       owner_name        varchar2(30),
       table_name        varchar2(30),
       cluster_name      varchar2(30),
       partition_name    varchar2(30),
       subpartition_name varchar2(30),
       head_rowid        rowid,
       analyze_timestamp date);

➢ ANALYZE TABLE employees LIST CHAINED ROWS into chained_rows;

11.2 How to fix row chaining or migration

1. ALTER TABLE MOVE, followed by index rebuilds

• ALTER TABLE employees MOVE PCTFREE 20 PCTUSED 40
  STORAGE (INITIAL 20K NEXT 40K MINEXTENTS 2
           MAXEXTENTS 20 PCTINCREASE 0);
• ALTER INDEX idx_myobjects REBUILD;

2. Export/import

3. Create table as select

4. delete and reinsert

5. Work only on chained rows

• Analyze the table to capture the chained row IDs in the chained_rows table
• Copy those rows to a temporary table
• Delete the rows from the original table
• Insert the rows from the temporary table back into the original table
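The steps above can be sketched as follows (a hedged sketch against the employees table; temp_chained is a hypothetical staging table):

```sql
-- Step 1: capture chained/migrated row IDs
ANALYZE TABLE employees LIST CHAINED ROWS INTO chained_rows;

-- Step 2: copy the affected rows to a staging table
CREATE TABLE temp_chained AS
  SELECT * FROM employees
   WHERE rowid IN (SELECT head_rowid FROM chained_rows
                    WHERE table_name = 'EMPLOYEES');

-- Step 3: delete them from the original table
DELETE FROM employees
 WHERE rowid IN (SELECT head_rowid FROM chained_rows
                  WHERE table_name = 'EMPLOYEES');

-- Step 4: reinsert them; freshly inserted rows are no longer migrated
INSERT INTO employees SELECT * FROM temp_chained;
COMMIT;
DROP TABLE temp_chained;
```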

Find out number of rows per block

SQL> SELECT dbms_rowid.rowid_block_number(rowid), count(*)
       FROM employees
      GROUP BY dbms_rowid.rowid_block_number(rowid);

How to simulate chaining problem

➢ If your block size is 8 KB, create a table with a row size of more than 8 KB as follows. Note the
use of CHAR and not VARCHAR2 (CHAR columns are blank-padded to their full length).

SQL> CREATE TABLE abc (col1 char(2000),
                       col2 char(2000),
                       col3 char(2000),
                       col4 char(2000),
                       col5 char(2000));

SQL> INSERT INTO abc VALUES ('y', null, null, null, null);    -- un-chained row

SQL> INSERT INTO abc VALUES ('x', 'x', 'x', 'x', 'x');        -- chained row

SQL> UPDATE abc SET col2 = 'x', col3 = 'x' WHERE col1 = 'y';  -- migrated row

12 Automatic Workload Repository (AWR)

➢ The AWR plays the role of the "data warehouse of the database," and it is the basis for
most of Oracle's self-management functionality. The AWR collects and maintains
performance statistics for problem-detection and self-tuning purposes. By default, every
60 minutes the database collects statistical information from the SGA and stores it in the AWR,
in the form of snapshots. Several database components, such as the ADDM and other
management advisors, use the AWR data to detect problems and for tuning the database. Like
the ADDM, the AWR is automatically active upon starting the instance.

Where does AWR collect stats from

➢ Statistics are collected by 2 background processes every 60 minutes:

• MMON – Manageability Monitor
• MMNL – Manageability Monitor Lite

➢ AWR data is stored in the SYSAUX tablespace, in tables owned by SYS (the WRM$_ and
WRH$_ tables).

➢ To activate AWR collection, set the STATISTICS_LEVEL parameter to one of:

✓ BASIC – disables AWR statistics collection
✓ TYPICAL – standard level of statistics collection
✓ ALL – standard level plus SQL execution plans and timed OS statistics

➢ To change the statistics collection interval or retention, execute the following procedure:

SQL> EXEC dbms_workload_repository.modify_snapshot_settings (
       interval  => 60,
       retention => 43200);

43200 minutes is 30 days – the default retention period is 7 days.

➢ To create a snapshot manually, use the CREATE_SNAPSHOT procedure, as follows:

SQL> exec dbms_workload_repository.create_snapshot ();

➢ To drop a range of snapshots from the AWR, execute:

SQL> EXEC dbms_workload_repository.drop_snapshot_range (
       low_snap_id  => 40,
       high_snap_id => 60,
       dbid         => 2210828132);

➢ If you set the snapshot interval to 0, the AWR will stop collecting snapshot data. This
means that the ADDM, the SQL Tuning Advisor, the Undo Advisor, and the Segment Advisor
will all be adversely affected, because they depend on the AWR data.

➢ To create an AWR baseline for comparison execute

SQL> EXEC dbms_workload_repository.create_baseline (
       start_snap_id => 125,
       end_snap_id   => 185,
       baseline_name => 'peak_time_baseline',
       dbid          => 2210828132);

➢ To drop an existing baseline execute

SQL> EXEC dbms_workload_repository.drop_baseline (
       baseline_name => 'peak_time_baseline',
       cascade       => FALSE,
       dbid          => 2210828132);

➢ By setting the CASCADE parameter to TRUE, you can drop the actual snapshots as well.
➢ If your SYSAUX tablespace runs out of space, Oracle will automatically delete the oldest set of
snapshots to make room for new snapshots.
➢ Make sure you don’t confuse the AWR report with the ADDM report that you obtain by
running the addmrpt.sql script.
➢ The ADDM report is also based on the AWR snapshot data, but it highlights both the problems
in the database and the recommendations for resolving them.

What information is in AWR report

➢ The AWR reports include voluminous information, including the following:


➢ Load profile
➢ Top five timed events
➢ Wait events and latch activity
➢ Time-model statistics
➢ Operating system statistics
➢ SQL ordered by elapsed time
➢ Tablespace and file I/O statistics
➢ Buffer pool and PGA statistics and advisories

How to generate an AWR report

SQL> @$ORACLE_HOME/rdbms/admin/awrrpt.sql

Data Dictionary Views for AWR

➢ The DBA_HIST_SNAPSHOT view shows all snapshots saved in the AWR.


➢ The DBA_HIST_WR_CONTROL view displays the settings to control the AWR.
➢ The DBA_HIST_BASELINE view shows all baselines and their beginning and ending snap ID
numbers.
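For example, to see which snapshots are available for reporting (a hedged sketch; the column list is abbreviated):

```sql
SQL> SELECT snap_id, begin_interval_time, end_interval_time
       FROM dba_hist_snapshot
      ORDER BY snap_id;
```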

12.1 Automatic Database Diagnostic Monitor (ADDM)

What are Time-Model Statistics?

➢ Oracle collects statistics regarding how it is spending time on various tasks. These statistics
are called time-model statistics. They are stored in V$SYS_TIME_MODEL (instance-wide) and
V$SESS_TIME_MODEL (per session). The most important of these is the DB time statistic.

SQL> select * from v$sys_time_model;

   STAT_ID STAT_NAME                         VALUE
---------- --------------------------- -----------
4146561234 DB time                       454542121
4157170894 background elapsed time      1366555665
2451517896 background cpu time            20048677
4127043053 sequence load elapsed time       812969
1431595225 parse time elapsed            372226525
2821698184 hard parse elapsed time       141896749
1990024365 sql execute elapsed time      386159975

➢ DB time includes both the wait time and processing time (CPU time), but doesn’t include the
idle time incurred by your processes. For example, if you spend an hour connected to the
database and you’re idle for 58 of those minutes, the DB time is only 2 minutes.
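DB time can be read directly from the time model; since the value is recorded in microseconds, the query below converts it to seconds:

```sql
SQL> SELECT value/1000000 AS db_time_seconds
       FROM v$sys_time_model
      WHERE stat_name = 'DB time';
```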

What is the Goal of ADDM

➢ The basic rationale behind the ADDM is to reduce a key database metric called DB time, which
is the total time (in microseconds) the database spends actually processing users’
requests.

➢ Oracle manages the ADDM with the help of the MMON background process. Each
time the AWR takes a snapshot (every hour by default), the MMON process tells the ADDM to
analyze the interval between the last two AWR snapshots. Thus, by default, the ADDM
automatically runs each time an AWR snapshot is taken.

What problems can ADDM help us with?

• Expensive SQL statements


• I/O performance issues
• Locking and concurrency issues
• Excessive parsing
• Resource bottlenecks, including memory and CPU bottlenecks
• Undersized memory allocation
• Connection management issues, such as excessive logon/logoff activity

How to configure ADDM

➢ Oracle enables the ADDM feature by default, and your only task is to make sure that the
STATISTICS_LEVEL initialization parameter is set to TYPICAL or ALL in order for the AWR to
gather its performance statistics.

How to run ADDM

➢ You can also request that the ADDM analyze past instance performance by examining AWR
snapshot data that falls between any two nonadjacent snapshots. The only requirements
regarding the selection of the AWR snapshots are these:

✓ The snapshots must not contain any errors.


✓ There can’t be a database shutdown between the two snapshots. The AWR
holds only cumulative database statistics, and once you shut down the
database, all the cumulative data will lose its meaning.

How to view ADDM Report

➢ You can view the ADDM analysis reports in three different ways:

• You can use the Oracle-provided addmrpt.sql script (located in the


$ORACLE_HOME/rdbms/admin directory) to create an ad hoc ADDM report for a time
period covered by any pair of snapshots.

• You can use the DBMS_ADVISOR package and create an ADDM report by using the
CREATE_REPORT procedure.

• You can use the OEM to view the performance findings of the stored ADDM reports,
which are proactively created each hour, after the AWR snapshots are taken.
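As a sketch of the DBMS_ADVISOR option, the report text of the most recent ADDM task can be
fetched from the advisor views like this:

SQL> SET LONG 1000000
SQL> SELECT dbms_advisor.get_task_report(task_name)
2 FROM dba_advisor_tasks
3 WHERE task_id = (SELECT MAX(task_id) FROM dba_advisor_tasks
4 WHERE advisor_name = 'ADDM');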

What information is in ADDM Report

• The definition of the performance problem


• The root cause of the performance problem
• Recommendation(s) for fixing the problem
• The rationale for the proposed recommendations

Note: If a problem contributes to inappropriate or excessive DB time, ADDM automatically flags it


as an issue needing attention. If there is a problem in your system, but it doesn’t contribute
significantly to the DB time, ADDM will simply ignore it. Thus, the ADDM is focused on the single
mantra: reduce DB time.

Example

Take a first AWR snapshot by running the following command

SQL> EXECUTE dbms_workload_repository.create_snapshot();
PL/SQL procedure successfully completed.

After some time take another snapshot again


SQL> EXECUTE dbms_workload_repository.create_snapshot();
PL/SQL procedure successfully completed.

Now we will generate an ADDM report from the above two snapshots

SQL> @/u03/app/oracle/rdbms/admin/addmrpt.sql


Current Instance
~~~~~~~~~~~~~~~~
DB Id DB Name Inst Num Instance
----------- ------------ -------- ------------
877170026 FINANCE 1 finance
Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DB Id Inst Num DB Name Instance Host
------------ -------- ------------ ------------ ----
866170026 1 FINANCE finance prod5

Using 866170026 for database Id


Using 1 for instance number
Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed. Pressing <return> without
specifying a number lists all completed snapshots.
Listing the last 3 days of Completed Snapshots
Snap
Instance DB Name Snap Id Snap Started Level
------------ ------------ --------- ------------------ -----
finance FINANCE 3067 22 Jul 2009 05:00 1
3068 22 Jul 2009 06:00 1
3069 22 Jul 2009 07:01 1

3070 22 Jul 2009 08:00 1
3071 22 Jul 2009 09:00 1
3072 22 Jul 2009 10:00 1
3073 22 Jul 2009 11:00 1
3074 22 Jul 2009 12:01 1
3075 22 Jul 2009 13:00 1
3076 22 Jul 2009 14:00 1
Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 3075
Begin Snapshot Id specified: 3075
Enter value for end_snap: 3076
End Snapshot Id specified: 3076
Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is addmrpt_1_3075_3076.txt.
To use this name, press <return> to continue, otherwise enter an alternative.
Enter value for report_name:
Using the report name addmrpt_1_3075_3076.txt
Running the ADDM analysis on the specified pair of snapshots . . .

13 Automatic Storage Management

13.1 Benefits of ASM


➢ Works with Oracle Managed Files (OMF)
➢ Prevents disk fragmentation
➢ Adding disks is straightforward – automatic online disk reorganization
➢ Makes disk space management easier
➢ Improves performance by spreading I/O across disks
➢ High availability is provided by mirroring
➢ No need for third-party logical volume managers (LVMs) such as Veritas or EMC to do
mirroring and striping
➢ Better than an LVM – ASM stripes files, not logical volumes (LVs)
➢ Less dependence on system administrators for disk management
➢ OEM provides a GUI to manage ASM
➢ ASM works with RAC to eliminate the need for an LVM or cluster file system

13.2 Limitations

➢ Cannot use regular OS commands such as ls, cp, or tar on ASM files

➢ Backups must be taken using RMAN
➢ ASM and non-ASM files can co-exist in one database (not recommended)

13.3 ASM Architecture

Extents (physical unit) → Data Files (logical unit) → Disks → Disk Groups

➢ Striping: ASM divides data files into extents and spreads them across the disks in a disk
group.

➢ Mirroring: ASM does not mirror the entire disk, as in RAID, but mirrors database objects.

➢ Auto balancing: if a disk is added to a disk group, data file extents are automatically moved to
the new disk to balance the I/O.

ASM Instance

➢ An ASM instance is a set of processes and an SGA (usually 60 MB to 100 MB), just like a
database instance, but it does not mount a database.

Processes

➢ RBAL (Rebalance Master) – coordinates automatic disk rebalancing I/O activity

➢ ARBn (ASM Rebalancer) – moves extents between disks

➢ An ASM instance has no data files and therefore no data dictionary.

➢ It can be in the NOMOUNT or MOUNT state only. In the MOUNT state, the ASM disk groups are
made available.

➢ SYSOPER → startup, shutdown, mount or dismount disk groups, take disks online/offline,
rebalance disk groups, perform integrity checks on disk groups. Access to V$ASM* views.

➢ SYSDBA → all operations available to SYSOPER, plus more, such as creating and deleting disk
groups and adding disks to disk groups.
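Starting an ASM instance looks just like starting a database instance (the SID shown assumes the
default instance name +ASM):

$ export ORACLE_SID=+ASM
$ sqlplus / as sysdba
SQL> STARTUP MOUNT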

Single Instance and Clustered Environments

➢ Each database server that has database files managed by ASM needs to be running an ASM
instance. A single ASM instance can service one or more single-instance databases on a stand-
alone server.

➢ Each ASM disk group can be shared among all the databases on the server.

➢ In a clustered environment, each node runs an ASM instance, and the ASM instances
communicate with each other on a peer-to-peer basis. This is true for both RAC environments
and non-RAC clustered environments where multiple single-instance databases across multiple
nodes share a clustered pool of storage that is managed by ASM. If a node is part of a Real
Application Clusters (RAC) system, the peer-to-peer communications service is already installed

on that server. If the node is part of a cluster where RAC is not installed, the Oracle
Clusterware, Cluster Ready Services (CRS), must be installed on that node.

Database Instance Processes

➢ ASMB (ASM Background) – handles communication between the database and the ASM
instance. This is a foreground or client process that connects to the ASM instance.

➢ RBAL – opens and closes disks on behalf of database instance.

➢ Database makes initial contact with ASM instance to get information, but then accesses the
files directly.

➢ An ASM instance can't be shut down while database instances that access its data files are running.

➢ When an ASM instance mounts a disk group, it registers the disk group and connect string
with Group Services. The database instance knows the name of the disk group, and can
therefore use it to look up connect information for the correct ASM instance.

ASM File Names

➢ +group/dbname/file_type/tag.file.incarnation

Example: +DATA2/shekhardb/datafile/users.276.1

✓ Group → disk group name
✓ Dbname → database name
✓ File_type → e.g. controlfile, datafile, online_log, archive_log, temp, backupset (for RMAN),
dumpset (for expdp)
✓ Tag → type-specific information about the file
✓ File.incarnation → provides uniqueness

ASM data files are by default 100 MB and autoextensible, with an unlimited maximum size
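From the ASM instance you can list the files stored in the disk groups, for example:

SQL> SELECT group_number, file_number, type, bytes
2 FROM v$asm_file;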

Disk Group Architecture

✓ Extents (physical unit) → Data Files (logical unit) → Disks (could be a partition or a whole disk)
→ Failure Groups → Disk Groups

✓ Logical concepts like table extents, segments, and tablespaces work the same way in ASM
✓ An ASM file is always spread across every disk in an ASM disk group

Striping

✓ Extent size = 1 MB → called coarse striping; used for most database files and good for overall throughput

✓ Extent size = 128 KB → called fine striping; reduces latency for small I/O such as control files and online redo logs

Mirroring

✓ External redundancy → ASM does no mirroring; only one failure group within a disk group (non-critical systems, or systems with hardware RAID)
✓ Normal redundancy → two-way mirroring; two failure groups within a disk group (critical systems)
✓ High redundancy → three-way mirroring; three failure groups within a disk group (highly critical systems)

Balancing

➢ ASM_POWER_LIMIT → this initialization parameter can be set to a lower value to reduce the
I/O load during rebalancing of data files when a disk is added to or dropped from a disk
group. The rebalancing itself is automatic.

Data Dictionary views

➢ V$ASM_DISKGROUP → one row for each disk group. Available in both the database and ASM
instances.
➢ V$ASM_DISK → one row for each disk. Available in both the database and ASM instances.
➢ V$ASM_FILE → one row for each file (data, control, redo, archive, etc.). Available only in the
ASM instance.
➢ V$ASM_OPERATION → one row for each executing long-running operation (add, drop, resize,
rebalance, etc.). Available only in the ASM instance.
➢ V$ASM_CLIENT → one row for each database using a disk group. Available in both the
database and ASM instances.
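For example, to check disk group capacity from either instance type:

SQL> SELECT name, state, type, total_mb, free_mb
2 FROM v$asm_diskgroup;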

ASM & CSS (Cluster Synchronization Service)

➢ CSS must be running before ASM is created or started

$ ps -ef | grep css

➢ You can also check the CSS process by using the crsctl command

$ crsctl check cssd

How to start CSS

1. Log in as the root user.


2. Make sure you add the Oracle home directory to your path, as shown here:

# export PATH=$PATH:/u01/app/oracle/product/10.2.0/bin

3. Run the following command to start the CSS daemon:

# localconfig add
/etc/oracle does not exist. Creating it now.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configuration for local CSS has been initialized
Adding to inittab
Startup will be queued to init within 30+60 seconds.
Checking the status of new Oracle init process...
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
localhost
CSS is active on all nodes
Oracle CSS service is installed and running under
init(1M)
#

4. Now, check for the CSS daemon again:


# crsctl check css
CSS appears healthy

How to Create an ASM Instance

Create init.ora

➢ INSTANCE_TYPE: In an Oracle Database 10g installation, you have two types of Oracle instances:
RDBMS and ASM. RDBMS, of course, refers to normal Oracle databases, and ASM refers to
the new ASM instance. Set the INSTANCE_TYPE parameter to ASM. This will implicitly set the
DB_UNIQUE_NAME parameter to +ASM.

➢ ASM_POWER_LIMIT: This sets the maximum speed of this ASM instance during a disk rebalance
operation, which redistributes the data evenly and balances the I/O load across the disks. The
default is 1, and the range is from 1 to 11 (1 is slowest and 11 is fastest).

➢ ASM_DISKSTRING: This is the location where Oracle should look during a disk-discovery
process. The format of the disk string may vary according to your operating system. You can
specify a list of values as follows; this example limits the ASM discovery to disks whose names
end in s1 and s2 only:

ASM_DISKSTRING = '/dev/rdsk/*s1', '/dev/rdsk/*s2'

➢ ASM_DISKGROUPS: Here you specify the names of the disk groups that you want to mount
automatically at instance startup. The default value for this parameter is NULL.
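Putting these together, a minimal parameter file for an ASM instance might look like this (the
paths and disk group names are illustrative):

INSTANCE_TYPE=ASM
ASM_POWER_LIMIT=1
ASM_DISKSTRING='/dev/rdsk/*'
ASM_DISKGROUPS='DGROUP1,DGROUP2'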

Commands

SQL> CREATE DISKGROUP group1 HIGH REDUNDANCY
2 failgroup group1 disk
3 '/devices/disk1',
4 '/devices/disk2',
5 '/devices/disk3',
6 '/devices/disk4'
7 failgroup group2 disk
8 '/devices/disk5',
9 '/devices/disk6',
10 '/devices/disk7',
11 '/devices/disk8'
12 failgroup group3 disk
13 '/devices/disk9',
14 '/devices/disk10',
15 '/devices/disk11',
16 '/devices/disk12';

SQL> ALTER DISKGROUP group1 ADD DISK
'/devices/disk5' name disk5,
'/devices/disk6' name disk6;

SQL> DROP DISKGROUP group1 INCLUDING CONTENTS;


SQL> ALTER DISKGROUP dgroup1 REBALANCE POWER 5;

How to create an ASM-based RDBMS Database

Init.ora

DB_CREATE_FILE_DEST = '+dgroup1'
DB_RECOVERY_FILE_DEST = '+dgroup2'
DB_RECOVERY_FILE_DEST_SIZE = 100G

SQL> Create database

SQL> CREATE TABLESPACE tbsp1 DATAFILE '+group1';

The asmcmd Command-Line Tool

➢ In Oracle Database 10g Release 2, you can also manage ASM using a command-line tool, which
gives you more flexibility than having to use SQL*Plus or the Database Control. To invoke the
command-line administrative tool, called asmcmd, enter this command (after the ASM
instance is started):

$ asmcmd

ASMCMD>

➢ The command-line tool has about a dozen commands you can use to manage ASM file
systems, and it includes familiar UNIX/Linux commands such as ls and du (which checks ASM
disk usage). To get a complete list of commands, type help at the command prompt
(ASMCMD>). By typing help followed by a command name, you can get details about that command.
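A short sample session (the disk group name is illustrative):

$ asmcmd
ASMCMD> ls
ASMCMD> du DGROUP1
ASMCMD> exit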

How to migrate from a regular DB instance to ASM

Here’s a brief summary of how to use RMAN to migrate a database to ASM:

1. Shut down the database in a consistent mode by using the


SHUTDOWN IMMEDIATE command.

2. Add the DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n parameters, as well as the
new flash recovery area initialization parameters, DB_RECOVERY_FILE_DEST and
DB_RECOVERY_FILE_DEST_SIZE, to your database parameter file so you can use an OMF-based
file system.

3. Delete the CONTROL_FILES parameter from the SPFILE; Oracle will create new control files in
the OMF file destinations by restoring them from the non-ASM database control files.

4. Start the database with STARTUP NOMOUNT command:

RMAN> CONNECT TARGET;


RMAN> STARTUP NOMOUNT;

5. Restore the old control file in the new location, as shown here:

RMAN> RESTORE CONTROLFILE from '/u01/oradata/con1.ctl';

6. Mount the database:

RMAN> ALTER DATABASE MOUNT;

7. The following command will copy your database files into an ASM disk group:

RMAN> BACKUP AS COPY DATABASE FORMAT '+dgroup1';

8. Use the SWITCH command to switch all data files into the ASM disk group dgroup1:
RMAN> SWITCH DATABASE TO COPY;
At this point, all data files will be converted to ASM type. You still have your original data file copies
on disk, which you can use to restore your database if necessary.

9. Open the database with the following command:

RMAN> ALTER DATABASE OPEN;

10. For each redo log member, use the following command to move it to the ASM system:

RMAN> SQL "alter database rename file ''/u01/test/log1'' to ''+dgroup1''";

11. Archive the current online redo logs, and delete the old non-ASM redo logs. Since RMAN doesn’t
migrate temp files, you must manually create a temporary tablespace using the CREATE TEMPORARY
TABLESPACE statement. You’ll now have an ASM-based file system. You still have your old non-ASM
files as backups in the RMAN catalog.
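The temporary tablespace mentioned in step 11 can be created along these lines (the name and
size are illustrative):

SQL> CREATE TEMPORARY TABLESPACE temp TEMPFILE '+dgroup1' SIZE 500M;
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;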

14 Data Guard

Introduction to Oracle Data Guard

➢ Oracle Data Guard ensures high availability, data protection, and disaster recovery for
enterprise data.

➢ Data Guard provides a comprehensive set of services that create, maintain, manage, and
monitor one or more standby databases to enable production Oracle databases to survive
disasters and data corruptions.

➢ Data Guard maintains these standby databases as transactionally consistent copies of the
production database.

➢ Then if the production database becomes unavailable because of a planned or an unplanned


outage, Data Guard can switch any standby database to the production role, minimizing the
downtime associated with the outage.

➢ Data Guard can be used with traditional backup and restore operations to provide a high
level of data protection and data availability.

Data Guard Configurations

➢ A Data Guard configuration consists of one production database and one or more standby
databases.

➢ The databases in a Data Guard configuration are connected by Oracle Net and may be
dispersed geographically.

➢ There are no restrictions on where the databases are located, provided they can communicate
with each other.

For Example

➢ You can have a standby database on the same system as the production database, along with
two standby databases on other systems at remote locations.

➢ You can manage primary and standby databases using the SQL command-line interface or the
Data Guard broker interfaces, including a command-line interface (DGMGRL) and a graphical
user interface that is integrated into Oracle Enterprise Manager.

Primary Database

➢ A Data Guard configuration contains one production database, also referred to as the primary
database, that functions in the primary role. This is the database that is accessed by most of
your applications.

➢ The primary database can be either a single-instance Oracle database or an Oracle Real
Application Clusters database.

Standby Databases

➢ A standby database is a transactionally consistent copy of the primary database. Using a
backup copy of the primary database, you can create up to nine standby databases and
incorporate them in a Data Guard configuration.

➢ Once created, Data Guard automatically maintains each standby database by transmitting redo
data from the primary database and then applying the redo to the standby database.

➢ Similar to a primary database, a standby database can be either a single-instance Oracle

database or an Oracle Real Application Clusters database. A standby database can be either a
physical standby database or a logical standby database.

Physical Standby Database

➢ Provides a physically identical copy of the primary database, with on-disk database structures
that are identical to the primary database on a block-for-block basis. The database schema,
including indexes, is the same.

➢ A physical standby database is kept synchronized with the primary database through Redo
Apply, which recovers the redo data received from the primary database and applies the redo
to the physical standby database.

➢ A physical standby database can be used for business purposes other than disaster recovery on
a limited basis.

Logical Standby Database

➢ Contains the same logical information as the production database, although the physical
organization and structure of the data can be different.

➢ The logical standby database is kept synchronized with the primary database through SQL
Apply, which transforms the data in the redo received from the primary database into SQL
statements and then executes the SQL statements on the standby database.

➢ A logical standby database can be used for other business purposes in addition to disaster
recovery requirements. This allows users to access a logical standby database for queries and
reporting purposes at any time.

➢ Also, using a logical standby database, you can upgrade Oracle Database software and patch

sets with almost no downtime. Thus, a logical standby database can be used concurrently for
data protection, reporting, and database upgrades.

Data Guard Services

➢ Data Guard manages the transmission of redo data, the application of redo data, and changes
to the database roles through three sets of services:

(1)Redo Transport Services

Control the automated transfer of redo data from the production database to one or more archival
destinations.

(2)Log Apply Services

➢ Apply redo data on the standby database to maintain transactional synchronization with the
primary database.

➢ Redo data can be applied either from archived redo log files or, if real-time apply is enabled,
directly from the standby redo log files as they are being filled, without requiring the redo data
to be archived first at the standby database.

(3)Role Transitions

➢ Change the role of a database from a standby database to a primary database, or from a
primary database to a standby database using either a switchover or a failover operation.

(1)Redo Transport Services

➢ Redo transport services control the automated transfer of redo data from the production

database to one or more archival destinations.

Redo transport services perform the following tasks-:

➢ Transmit redo data from the primary system to the standby systems in the configuration.

➢ Manage the process of resolving any gaps in the archived redo log files due to a network
failure.

➢ Enforce the database protection modes.

➢ Automatically detect missing or corrupted archived redo log files on a standby system and
automatically retrieve replacement archived redo log files from the primary database or
another standby database.

(2)Log Apply Services

➢ The redo data transmitted from the primary database is written on the standby system into
standby redo log files, if configured, and then archived into archived redo log files.

➢ Log apply services automatically apply the redo data on the standby database to maintain
consistency with the primary database.

➢ It also allows read-only access to the data.

➢ The main difference between physical and logical standby databases is the manner in which
log apply services apply the archived redo data:

➢ For physical standby databases, Data Guard uses Redo Apply technology, which applies redo
data on the standby database using standard recovery techniques of an Oracle database.

➢ For logical standby databases, Data Guard uses SQL Apply technology, which first transforms
the received redo data into SQL statements and then executes the generated SQL statements
on the logical standby database.
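Standby redo log files, which log apply services read from when real-time apply is enabled, can be
created on the standby with a statement such as (the path and size are illustrative):

SQL> ALTER DATABASE ADD STANDBY LOGFILE
2 ('/u01/app/oracle/oradata/stan/rf/srl01.log') SIZE 10M;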

(3)Role Transitions

➢ An Oracle database operates in one of two roles: primary or standby.

➢ Using Data Guard, you can change the role of a database using either a switchover or a failover
operation.

Switchover

➢ A switchover is a role reversal between the primary database and one of its standby
databases. A switchover ensures no data loss. It is typically done for planned maintenance of
the primary system.

➢ During a switchover, the primary database transitions to a standby role, and the standby
database transitions to the primary role. The transition occurs without having to re-create
either database.
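A switchover is driven by SQL statements such as the following (a sketch; the standby must have
received and applied all redo before the second statement is issued):

PRIM> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY;

STAN> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;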

Failover

➢ A failover is performed when the primary database becomes unavailable. It is done only in the
event of a catastrophic failure of the primary database, and it results in a transition of a
standby database to the primary role.

➢ The database administrator can configure Data Guard to ensure no data loss.

➢ Role transitions are invoked manually using SQL statements.

➢ On the remote destination, the remote file server (RFS) process writes the redo data into a
standby redo log file, which is then archived into an archived redo log file.

➢ Log apply services use Redo Apply (the MRP process) or SQL Apply (the LSP process) to apply
the redo to the standby database.

Data Guard Protection Modes

➢ In some situations, a business cannot afford to lose data.

➢ In other situations, the availability of the database may be more important than the loss of
data. Some applications require maximum database performance and can tolerate some small
amount of data loss.

There are three distinct modes of data protection.

➢ Maximum Protection

➢ Maximum Availability

➢ Maximum Performance

Maximum Protection

➢ This protection mode ensures that no data loss will occur if the primary database fails.

➢ To provide this level of protection, the redo data needed to recover each transaction must be
written to both the local online redo log and to the standby redo log on at least one standby
database before the transaction commits. To guarantee this, the primary database shuts down
if a fault prevents it from writing its redo stream to at least one standby database.

Maximum Availability

➢ This protection mode provides the highest level of data protection that is possible without
compromising the availability of the primary database.

➢ Like maximum protection mode, a transaction will not commit until the redo needed to
recover that transaction is written to the local online redo log and to the standby redo log of at
least one transactionally consistent standby database.

➢ Unlike maximum protection mode, the primary database does not shut down if a fault
prevents it from writing its redo stream to a standby. Instead, the primary database operates in
maximum performance mode until the fault is corrected, and all gaps in redo log files are
resolved.

➢ When all gaps are resolved, the primary database automatically resumes operating in
maximum availability mode.

➢ This mode ensures that no data loss will occur if the primary database fails, but only if a
second fault does not prevent a complete set of redo data from being sent from the primary
database to at least one standby database.

Maximum Performance

➢ This protection mode (the default) provides the highest level of data protection that is possible
without affecting the performance of the primary database.

➢ This is accomplished by allowing a transaction to commit as soon as the redo data needed to
recover that transaction is written to the local online redo log.

➢ The maximum protection and maximum availability modes require that standby redo log files

are configured on at least one standby database in the configuration.

➢ All three protection modes require that specific log transport attributes be specified on the
LOG_ARCHIVE_DEST_n initialization parameter to send redo data to at least one standby
database.
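The protection mode is set on the primary database and can be verified afterwards, for example:

SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;

SQL> SELECT protection_mode FROM v$database;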

Data Guard Benefits

➢ Disaster recovery, data protection, and high availability

➢ Complete data protection

➢ Efficient use of system resources

➢ Flexibility in data protection to balance availability against performance requirements

➢ Automatic gap detection and resolution

➢ Automatic role transitions

14.1 Steps

Primary-Side
Add the parameters in init.ora
background_dump_dest='/u01/app/oracle/admin/prim4/bdump'
compatible='10.2.0'
control_files='/u01/app/oracle/oradata/prim4/cf/c1.ctl'
db_file_name_convert='/u01/app/oracle/oradata/stan4/df','/u01/app/oracle/oradata/prim4/df'
db_name='prim4'

db_unique_name='prim4'
log_archive_config='dg_config=(prim4,stan4)'
fal_client='prim4'
fal_server='stan4'
log_archive_dest_1='LOCATION=/u01/app/oracle/oradata/prim4/af valid_for=(all_logfiles,all_roles)
db_unique_name=prim4'
log_archive_dest_2='SERVICE=stan4 valid_for=(online_logfiles,primary_role)
db_unique_name=stan4'
log_archive_dest_state_1='ENABLE'
log_archive_dest_state_2='ENABLE'
log_file_name_convert='/u01/app/oracle/oradata/stan4/rf','/u01/app/oracle/oradata/prim4/rf'
sga_max_size=300m
sga_target=250m
standby_file_management='auto'
undo_management='auto'
undo_tablespace='undotbs'
user_dump_dest='/u01/app/oracle/admin/prim4/udump'
remote_login_passwordfile=exclusive

Listener File For PrimaryDatabase

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = prim)
(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
(SID_NAME = prim)
)
(SID_DESC =

(GLOBAL_DBNAME = stan)
(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
(SID_NAME = stan)
)
)

LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.24.0.180)(PORT = 1521))
)
)

Tnsnames File For Primary DB

prim =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.24.0.180)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = prim)
)
)

stan =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.24.0.181)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)

(SERVICE_NAME = stan)
)
)

Listener File For Standby

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = stan)
(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
)
(SID_DESC =
(SID_NAME = prim)
(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
)
)

LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.24.0.181)(PORT = 1521))
)
)

Tnsnames File For Standby

stan =

(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.24.0.181)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = stan)
)
)

prim =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.24.0.180)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = prim)
)
)

SQL> startup pfile='/u01/app/oracle/admin/prim/pfile/initprim.ora';

SQL> create spfile from pfile='/u01/app/oracle/admin/prim/pfile/initprim.ora';

SQL> startup

SQL> show parameter spfile

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
spfile string /u01/app/oracle/product/10.2.0/db_1/dbs/spfileprim.ora

SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled

SQL> alter database force logging;

SQL> select force_logging from v$database;

FOR
---
YES

[oracle@station31 ~]$ cd $ORACLE_HOME

[oracle@station31 db_1]$ cd dbs

[oracle@station31 dbs]$ pwd

/u01/app/oracle/product/10.2.0/db_1/dbs

[oracle@station31 dbs]$ orapwd file=orapwprim password=oracle entries=4 force=y

[oracle@station31 dbs]$ ls

SQL> startup nomount

SQL> alter system set

remote_login_passwordfile=exclusive scope=spfile;

System altered.

SQL> show parameter remote_login_passwordfile

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
remote_login_passwordfile string EXCLUSIVE

SQL> shut immediate

SQL> startup

SQL> shut immediate

SQL> startup mount

SQL> alter database create standby controlfile as 'stan.ctl';

SQL> shut immediate

Copy all the data files and redo log files from prim to stan

[oracle@station31 data]$ scp -r * oracle@172.24.0.181:/u01/app/oracle/oradata/stan/data


sysaux01.dbf 100% 250MB 11.4MB/s 00:22
system01.dbf 100% 300MB 11.5MB/s 00:26
temp01.dbf 100% 50MB 10.0MB/s 00:05
undotbs.dbf 100% 50MB 12.5MB/s 00:04

users01.dbf 100% 100MB 12.5MB/s 00:08

[oracle@station31 redo]$ scp -r * oracle@172.24.0.181:/u01/app/oracle/oradata/stan/redo

oracle@172.24.0.181's password:
redo01.log 100% 10MB 10.0MB/s 00:01
redo02.log 100% 10MB 10.0MB/s 00:01

Copy the standby control file from prim to stan db

[oracle@station31 dbs]$ scp -r stan.ctl oracle@172.24.0.181:/u01/app/oracle/oradata/stan/ctl

oracle@172.24.0.181's password:

stan.ctl 100% 5936KB 5.8MB/s 00:01

Copy the password file from prim to stan db and rename it


[oracle@station31 dbs]$ scp -r orapwprim

oracle@172.24.0.181:/u01/app/oracle/product/10.2.0/db_1/dbs/orapwstan

oracle@172.24.0.181's password:

orapwprim 100% 1536 1.5KB/s 00:00

Copy the pfile from prim to stan db, rename it, and edit it as shown below

background_dump_dest='/u01/app/oracle/admin/stan4/bdump'
compatible='10.2.0'

control_files='/u01/app/oracle/oradata/stan4/cf/stan4.ctl'
db_file_name_convert='/u01/app/oracle/oradata/prim4/df','/u01/app/oracle/oradata/stan4/df'
db_name='prim4'
db_unique_name='stan4'
fal_client='stan4'
fal_server='prim4'
log_archive_config='dg_config=(prim4,stan4)'
log_archive_dest_1='LOCATION=/u01/app/oracle/oradata/stan4/af valid_for=(all_logfiles,all_roles)
db_unique_name=stan4'
log_archive_dest_2='SERVICE=prim4 valid_for=(online_logfiles,primary_role)
db_unique_name=prim4'
log_archive_dest_state_1='ENABLE'
log_archive_dest_state_2='ENABLE'
log_file_name_convert='/u01/app/oracle/oradata/prim4/rf','/u01/app/oracle/oradata/stan4/rf'
remote_login_passwordfile='exclusive'
sga_max_size=300m
sga_target=250m
standby_file_management='auto'
undo_management='auto'
undo_tablespace='undotbs'
user_dump_dest='/u01/app/oracle/admin/stan4/udump'

[oracle@station31 dbs]$ scp -r initprim.ora oracle@172.24.0.181:/u01/app/oracle/admin/stan/pfile

oracle@172.24.0.181's password:

initprim.ora 100% 969 1.0KB/s 00:00

[oracle@localhost pfile]$ cp initprim.ora initstan.ora

Standby Side

[oracle@localhost pfile]$ export ORACLE_SID=stan


[oracle@localhost pfile]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Thu Mar 3 12:05:01 2011


Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to an idle instance.

STAN> startup nomount pfile='/u01/app/oracle/admin/stan/pfile/initstan.ora';

ORACLE instance started.

Total System Global Area 314572800 bytes


Fixed Size 1219136 bytes
Variable Size 96470464 bytes
Database Buffers 209715200 bytes
Redo Buffers 7168000 bytes

STAN> create spfile from pfile='/u01/app/oracle/admin/stan/pfile/initstan.ora';


File created.

STAN> shut immediate

STAN> startup mount

STAN>> archive log list

Database log mode Archive Mode

Automatic archival Enabled
Archive destination /u01/app/oracle/oradata/stan/af
Oldest online log sequence 65
Next log sequence to archive 66
Current log sequence 66

PRIM>> archive log list


Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/app/oracle/oradata/prim/af
Oldest online log sequence 65
Next log sequence to archive 66
Current log sequence 66

STAN>>select SEQUENCE#,FIRST_TIME, NEXT_TIME, ARCHIVED,APPLIED


2 from v$archived_log;

no rows selected

STAN>>alter database recover managed standby database disconnect from session;
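
With redo apply started, the managed recovery process can be confirmed from the standby; a quick check (output varies from run to run):

```sql
STAN>> select process, status, sequence# from v$managed_standby;
```

An MRP0 row with a status such as WAIT_FOR_LOG or APPLYING_LOG indicates that managed recovery is active.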

PRIM>>alter system switch logfile;


System altered.

PRIM>>archive log list


Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/app/oracle/oradata/prim/af
Oldest online log sequence 67
Next log sequence to archive 68

Current log sequence 68

STAN> alter database recover managed standby database cancel;


Database altered.

STAN> select recovery_mode from v$archive_dest_status


2 where dest_id=2;

RECOVERY_MODE
-----------------------
MANAGED

STAN>> select SEQUENCE#,FIRST_TIME, NEXT_TIME, ARCHIVED,APPLIED


2 from v$archived_log;

SEQUENCE# FIRST_TIM NEXT_TIME ARC APP
---------- --------- --------- --- ---
61 03-MAR-11 03-MAR-11 YES NO
62 03-MAR-11 03-MAR-11 YES NO
63 03-MAR-11 03-MAR-11 YES NO
64 03-MAR-11 03-MAR-11 YES NO
65 03-MAR-11 03-MAR-11 YES NO
66 03-MAR-11 03-MAR-11 YES YES
67 03-MAR-11 03-MAR-11 YES YES
68 03-MAR-11 03-MAR-11 YES YES
69 03-MAR-11 03-MAR-11 YES YES
70 03-MAR-11 03-MAR-11 YES YES

10 rows selected.

PRIM>>archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/app/oracle/oradata/prim/af
Oldest online log sequence 70
Next log sequence to archive 71
Current log sequence 71

PRIM>>create user rash identified by rash;


User created.

PRIM>>grant connect,resource to rash;


Grant succeeded.

PRIM>>conn rash/rash
Connected.

PRIM>>create table emp (id number, name varchar2(22));


Table created.

PRIM>>insert into emp values (&id,'&n');

Enter value for id: 10


Enter value for n: rash
old 1: insert into emp values (&id,'&n')
new 1: insert into emp values (10,'rash')

1 row created.

PRIM>>/

Enter value for id: 20
Enter value for n: riaz
old 1: insert into emp values (&id,'&n')
new 1: insert into emp values (20,'riaz')

1 row created.
PRIM>>commit;
Commit complete.

PRIM>>select * from emp;

ID NAME
---------- ----------------------
10 rash
20 riaz

PRIM>>conn / as sysdba
Connected.

PRIM>>archive log list


Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/app/oracle/oradata/prim/af
Oldest online log sequence 70
Next log sequence to archive 71
Current log sequence 71

PRIM>>alter system switch logfile;
System altered.

PRIM>>/

System altered.

STAN>>select SEQUENCE#,FIRST_TIME, NEXT_TIME, ARCHIVED,APPLIED


2* from v$archived_log

SEQUENCE# FIRST_TIM NEXT_TIME ARC APP


---------- --------- --------- --- ---
71 03-MAR-11 03-MAR-11 YES YES
72 03-MAR-11 03-MAR-11 YES YES

STAN>>alter database recover managed standby database cancel;


Database altered.

STAN>>alter database open read only;

STAN>>select username,default_tablespace from dba_users;

USERNAME DEFAULT_TABLESPACE
------------------------------ ------------------------------
OUTLN SYSTEM
SYS SYSTEM
SYSTEM SYSTEM
RASH USERS
KAJAL USERS

DBSNMP SYSAUX
TSMSYS USERS
DIP USERS

8 rows selected.

STAN> select dest_id, status , destination from v$archive_dest


2 where dest_id in(1,2);

   DEST_ID STATUS    DESTINATION
---------- --------- ------------------------------------
         1 VALID     /u01/app/oracle/oradata/stan/af
         2 VALID     prim

PRIM>>select dest_id, status , destination from v$archive_dest


2 where dest_id in(1,2);

STAN>>select instance_name from v$instance;

PRIM>>select instance_name from v$instance;

14.2 Role Transfer (Switchover)

PRIM>>select DATABASE_ROLE ,switchover_status from v$database

DATABASE_ROLE SWITCHOVER_STATUS
-------------------------------------------------------------------------

PRIMARY TO STANDBY

Note: The primary database should be in open mode.

PRIM>>alter database commit to switchover to physical standby;


Database altered.

PRIM>>shut immediate

Note: The standby database must be either mounted with redo apply active or open for read-only access.

STAN>>select DATABASE_ROLE ,switchover_status from v$database;

STAN>>alter database commit to switchover to primary;


Database altered.

PRIM>>startup mount

STAN>>select DATABASE_ROLE ,switchover_status from v$database;

DATABASE_ROLE SWITCHOVER_STATUS
-------------------------------------------------------------------------------------
PRIMARY TO STANDBY

PRIM>>select DATABASE_ROLE ,switchover_status from v$database;

DATABASE_ROLE SWITCHOVER_STATUS
------------------------------------------------------------------------

PHYSICAL STANDBY TO PRIMARY

PRIM>>select dest_id, status , destination from v$archive_dest


2 where dest_id in(1,2);

STAN>> select SEQUENCE#,FIRST_TIME, NEXT_TIME, ARCHIVED,APPLIED


2 from v$archived_log;

STAN> select recovery_mode from v$archive_dest_status


2 where dest_id=2;

RECOVERY_MODE
--------------------------------
MANAGED

SQL>alter database recover managed standby database cancel;

SQL>alter database recover managed standby database disconnect from session;
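
The role transfer above condenses into one sequence (a summary of this section's steps, not additional commands; prompts reflect the roles before the switch):

```sql
-- Old primary (must be open; SWITCHOVER_STATUS shows TO STANDBY):
PRIM>> alter database commit to switchover to physical standby;
PRIM>> shutdown immediate
PRIM>> startup mount

-- Old standby (mounted under redo apply, or open read only):
STAN>> alter database commit to switchover to primary;
-- ...then restart/open the new primary, and restart redo apply
-- on the new standby, as shown above.
```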

15 Upgrade Oracle Database 10g R2 to 11g R2

A command-line test upgrade of Oracle Database 10gR2 to 11gR2. Before starting, make sure the database version is 10.2.0.2 or higher.

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
oradb

SQL> select * from v$version;


BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0
PL/SQL Release 10.2.0.5.0 - Production
CORE 10.2.0.5.0 Production
TNS for Linux: Version 10.2.0.5.0 - Production
NLSRTL Version 10.2.0.5.0 - Production
- Run the pre-upgrade information script (utlu112i.sql) to check the database:

SQL> @utlu112i.sql
Oracle Database 11.2 Pre-Upgrade Information Tool 11-23-2010 10:03:59
Script Version: 11.2.0.2.0 Build: 001
.
**********************************************************************
Database:
**********************************************************************
--> name: ORADB
--> version: 10.2.0.5.0

--> compatible: 10.2.0.5.0
--> blocksize: 8192
--> platform: Linux x86 64-bit
--> timezone file: V4

**********************************************************************
Tablespaces: [make adjustments in the current environment]
**********************************************************************
--> SYSTEM tablespace is adequate for the upgrade.
.... minimum required size: 693 MB
--> UNDOTBS1 tablespace is adequate for the upgrade.
.... minimum required size: 467 MB
--> SYSAUX tablespace is adequate for the upgrade.
.... minimum required size: 481 MB
--> TEMP tablespace is adequate for the upgrade.
.... minimum required size: 61 MB
.
**********************************************************************
Flashback: OFF
**********************************************************************
**********************************************************************
Update Parameters: [Update Oracle Database 11.2 init.ora or spfile]
Note: Pre-upgrade tool was run on a lower version 64-bit database.
**********************************************************************
--> If Target Oracle is 32-Bit, refer here for Update Parameters:
--> No update parameter changes are required.
--> If Target Oracle is 64-Bit, refer here for Update Parameters:
--> No update parameter changes are required.
.
**********************************************************************

Renamed Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
--> No renamed parameters found. No changes are required.

**********************************************************************
Obsolete/Deprecated Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
--> background_dump_dest 11.1 DEPRECATED replaced by "diagnostic_dest"
--> user_dump_dest 11.1 DEPRECATED replaced by "diagnostic_dest"
.
**********************************************************************
Components: [The following database components will be upgraded or installed]
**********************************************************************
--> Oracle Catalog Views [upgrade] VALID
--> Oracle Packages and Types [upgrade] VALID
--> JServer JAVA Virtual Machine [upgrade] VALID
--> Oracle XDK for Java [upgrade] VALID
--> Oracle Workspace Manager [upgrade] VALID
--> OLAP Analytic Workspace [upgrade] VALID
--> OLAP Catalog [upgrade] VALID
--> EM Repository [upgrade] VALID
--> Oracle Text [upgrade] VALID
--> Oracle XML Database [upgrade] VALID
--> Oracle Java Packages [upgrade] VALID
--> Oracle interMedia [upgrade] VALID
--> Spatial [upgrade] VALID
--> Data Mining [upgrade] VALID
--> Expression Filter [upgrade] VALID
--> Rule Manager [upgrade] VALID
--> Oracle OLAP API [upgrade] VALID

**********************************************************************
Miscellaneous Warnings
**********************************************************************
WARNING:  Database is using a timezone file older than version 14.
.... After the release migration, it is recommended that DBMS_DST package
.... be used to upgrade the 10.2.0.5.0 database timezone version
.... to the latest version which comes with the new release.

WARNING: EM Database Control Repository exists in the database.


.... Direct downgrade of EM Database Control is not supported. Refer to the
.... Upgrade Guide for instructions to save the EM data prior to upgrade.

WARNING:  Your recycle bin is turned on and currently contains no objects.


.... Because it is REQUIRED that the recycle bin be empty prior to upgrading
.... and your recycle bin is turned on, you may need to execute the command:
PURGE DBA_RECYCLEBIN
.... prior to executing your upgrade to confirm the recycle bin is empty.

**********************************************************************
Recommendations
**********************************************************************
--> Oracle recommends gathering dictionary statistics prior to upgrading the database.
--> To gather dictionary statistics execute the following command while connected as SYSDBA:
EXECUTE dbms_stats.gather_dictionary_stats;
**********************************************************************
--> Oracle recommends reviewing any defined events prior to upgrading.
--> To view existing non-default events execute the following commands while connected AS SYSDBA:

Events:

SELECT (translate(value,chr(13)||chr(10),' ')) FROM sys.v$parameter2


WHERE UPPER(name) ='EVENT' AND isdefault='FALSE'

Trace Events:

SELECT (translate(value,chr(13)||chr(10),' ')) from sys.v$parameter2


WHERE UPPER(name) = '_TRACE_EVENTS' AND isdefault='FALSE'
Changes will need to be made in the init.ora or spfile.
**********************************************************************
- To speed up the upgrade, truncate the AUD$ table and gather dictionary statistics:
SQL> truncate table SYS.AUD$ drop storage;
Table truncated.

SQL> exec DBMS_STATS.GATHER_DICTIONARY_STATS;

Start the upgrade:

SQL> shutdown immediate;

Copy the spfile and the password file from the old home to the new home:


$ cp /u01/app/oracle/product/10.2.0/db_1/dbs/*oradb*
/u01/app/oracle/product/11.2.0/dbhome_2/dbs/

Use the new home to start up in upgrade mode:


$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_2
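
The single export above is the minimum; in practice the SID and PATH should point at the new home as well. A fuller sketch (paths and SID taken from this lab; adjust for your installation):

```shell
# Point the environment at the new 11.2 home before connecting
# (paths and SID from this lab; adjust for your installation).
export ORACLE_SID=oradb
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_2
export PATH=$ORACLE_HOME/bin:$PATH

# Sanity check: the variables now point at the new home.
echo "$ORACLE_HOME"
```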
SQL> connect / as sysdba
Connected to an idle instance.

SQL> startup upgrade;
ORACLE instance started.

SQL> spool upgrade.log


SQL> set echo on
SQL> set termout on

SQL> @?/rdbms/admin/catupgrd.sql
SQL> spool off

Check the spool file for errors, then start up again:


SQL> startup
Database opened.

➢ Run the post-upgrade script (only necessary when upgrading from 10.1 or later):

SQL> @?/rdbms/admin/catuppst.sql

➢ Gather fixed object statistics:

SQL> exec dbms_stats.gather_fixed_objects_stats;


PL/SQL procedure successfully completed.

➢ Recompile invalid objects:
SQL> @?/rdbms/admin/utlrp.sql

➢ During recompilation, check the number of invalid objects:
SQL> SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);
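
Querying the base table obj$ works, but the same count is usually taken from the documented dictionary view; an equivalent check would be:

```sql
SQL> SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';
```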

➢ Run the post-upgrade status tool:


SQL> @?/rdbms/admin/utlu112s.sql

Oracle Database 11.2 Post-Upgrade Status Tool 11-23-2010 15:40:48
...
Total Upgrade Time: 04:00:52
➢ Compare invalid objects before and after the upgrade:
SQL> @?/rdbms/admin/utluiobj.sql

➢ Create a pfile from the spfile and modify parameters as needed:
SQL> create pfile='/tmp/pfile' from spfile;
File created.

➢ Adjust the time zone data:
SQL> startup upgrade
Database opened.
SQL> exec dbms_dst.begin_upgrade(new_version => 11);
PL/SQL procedure successfully completed.
SQL> shutdown immediate;
SQL> startup
Database opened.
SQL> set serveroutput on;
SQL> declare num_of_failures number;
begin
dbms_dst.upgrade_database(num_of_failures);
dbms_output.put_line(num_of_failures);
dbms_dst.end_upgrade(num_of_failures);
dbms_output.put_line(num_of_failures);
end;
/
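
Once dbms_dst.end_upgrade completes, the active timezone file version can be verified (a suggested check; v$timezone_file is available from 11g onward):

```sql
SQL> select * from v$timezone_file;
```

The VERSION column should now report 14, resolving the "timezone file older than version 14" warning from the pre-upgrade tool.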

➢ Check the Oracle version after the upgrade:
SQL> select instance_name from v$instance;
INSTANCE_NAME

----------------
oradb
SQL> select * from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
Finally, copy and check the listener configuration and other network files from the old home to the new home.
