1 RMAN
1.1 Categories of Failures
1.2 RMAN Architecture
1.3 Backups, Backup Sets, and Backup Pieces
2 RMAN Commands
2.1 Configuring Persistent Settings for RMAN
2.2 Image Copies
3 RMAN Backup Types
3.1 Full Backup
3.2 Incremental Backup
3.3 Block Change Tracking
4 Scenario for Complete and Incomplete Recovery
4.1 Restore and Recovery of a Whole Database
4.2 Restore and Complete Recovery of Individual Tablespaces
4.3 Restore and Complete Recovery of Datafile
4.4 Incomplete Recovery Scenario
4.5 RMAN SCN-BASED
4.6 Restore Point
4.7 RMAN SEQUENCE-BASED
4.8 RMAN TIME-BASED
5 Controlfile And Spfile Scenario
5.1 All Control Files Lost
5.2 Loss of SPFILE
5.3 Loss of Control File and SPFILE
6 Recovery Catalog
6.1 Steps for configuring the recovery catalog
7 Block Corruption
7.1 Types of Block Corruption
7.2 Manual Corruption of a Block
7.3 DBVERIFY Utility
7.4 Analyze CMD
7.5 Exp command to detect the corruption
8 CLONING
8.1 User-Managed Cloning
8.2 RMAN Cloning
9 Automatic Memory Management
10 Partitioning
10.1 Introduction to Partitioning
10.2 Advantages of Partitioning
10.3 Partitioning Methods
10.4 Index-Organized Tables (IOT)
10.5 Advantages of IOT
10.6 Creating Index-Organized Tables
11 Row Chaining and Migration
11.1 Logical & Physical Space Management
11.2 How to fix row chaining or migration
12 Automatic Workload Repository
12.1 Automatic Database Diagnostic Monitor (ADDM)
13 Automatic Storage Management
13.1 Benefits of ASM
13.2 Limitations
13.3 ASM Architecture
14 Data-Guard
14.1 Steps
14.2 Role Transfer (Switchover)
15 Upgrade Oracle database 10g R2 to 11g R2
1 RMAN
1.1 Categories of Failures
Logical Failure
➢ Statement failure
A single database operation (select, insert, update, or delete) fails.
➢ Network failure
Connectivity to the database is lost.
➢ User error
A user successfully completes an operation, but the operation (dropping a table or entering bad
data) is incorrect.
➢ Instance failure
The database instance shuts down unexpectedly.
Media Failure
➢ One or more of the database files are lost (that is, the files have been deleted or the disk has
failed).
Backup modes:
➢ Recovery Manager (RMAN)
➢ User-managed
1.2 RMAN Architecture
• Recovery Manager (RMAN) is a utility that can manage all of your Oracle backup and
recovery activities. DBAs are often wary of using RMAN because of its perceived complexity
and its control over performing critical tasks. The traditional backup and recovery methods
are tried-and-true. Thus, when your livelihood depends on your ability to back up and
recover the database, why implement a technology like RMAN? The reason is that RMAN
comes with several benefits:
o Incremental backups that only copy data blocks that have changed since the last
backup.
o Tablespaces are not put in backup mode, thus there is no extra redo log generation
during online backups.
o Detection of corrupt blocks during backups.
o Parallelization of I/O operations.
o Automatic logging of all backup and recovery operations.
o Built-in reporting and listing commands.
• RMAN executable
• Server process
• Channels
• Target database
• Recovery catalog database (optional)
• Media management layer (optional)
• Backups, backup sets, and backup pieces
The following sections describe each of these components.
RMAN Executable
➢ The RMAN executable, usually named rman, is the program that manages all backup and
recovery operations. You interact with the RMAN executable to specify backup and recovery
operations you want to perform.
➢ The executable then interacts with the target database, starts the necessary server processes,
and performs the operations that you requested.
➢ Finally, the RMAN executable records those operations in the target database's control file and
the recovery catalog database, if you have one.
Server Processes
➢ RMAN server processes are background processes, started on the server, used to communicate
between RMAN and the databases. They can also communicate between RMAN and any disk,
tape, or other I/O devices. RMAN server processes do all the real work for a backup or restore
operation, and a typical backup or restore operation results in several server processes being
started.
Channels
➢ A channel is an RMAN server process started when there is a need to communicate with an I/O
device, such as a disk or a tape. A channel is what reads and writes RMAN backup files. Any
time you issue an RMAN allocate channel command, a server process is started on the target
database server. It is through the allocation of channels that you govern I/O characteristics
such as:
• Type of I/O device being read or written to, either a disk or an sbt_tape
• Number of processes simultaneously accessing an I/O device
• Maximum size of files created on I/O devices
• Maximum rate at which database files are read
• Maximum number of files open at a time
Target Database
➢ The target database is the database on which RMAN performs backup, restore, and recovery
operations. This is the database that owns the datafiles, control files, and archived redo files
that are backed up, restored, or recovered. Note that RMAN does not back up the online redo
logs of the target database.
Recovery Catalog Database
➢ The recovery catalog database is an optional repository used by RMAN to record information
concerning backup and recovery activities performed on the target. The recovery catalog
consists of three components:
➢ A separate database referred to as the catalog database (from the target database)
➢ A schema within the catalog database
➢ Tables (and supporting objects) within the schema that contain data pertaining to RMAN
backup and recovery operations performed on the target. The catalog is typically a database
that you build on a different host from your target database.
➢ The reason for this is that you don't want a failure on the target host to affect your ability to
use the catalog. If both the catalog and target are on the same box, a single media failure
can put you in a situation from which you can't recover your target database.
➢ Inside the catalog database is a special schema containing the tables that store information
about RMAN backup and recovery activities, such as the backup sets and pieces created, the
datafiles and archived redo logs backed up, and any stored scripts.
➢ Why is the catalog optional? When RMAN performs any backup operation, it writes
information about that task to the target database's control files. Therefore, RMAN does not
need a catalog to operate. If you choose to implement a recovery catalog database, then
RMAN will store additional information about what was backed up -- often called metadata --
in the catalog.
➢ The primary reason for implementing a catalog is that it enables the greatest flexibility in
backup and recovery scenarios. Using a catalog gives you access to a longer history of backups
and allows you to manage all of your backup and recovery operations from one repository.
Utilizing a catalog makes available to you all the features of RMAN. For reasons such as these,
we recommend using a catalog database.
1.3 Backups, Backup Sets, and Backup Pieces
➢ When you issue an RMAN backup command, RMAN creates backup sets, which are logical
groupings of physical files. The physical files that RMAN creates on your backup media are
called backup pieces. When working with RMAN, you need to understand that the following
terms have specific meanings:
➢ Backup: a backup of all or part of your database. This results from issuing an RMAN backup
command. A backup consists of one or more backup sets.
➢ Backup set: a logical grouping of backup files -- the backup pieces -- that are created when you
issue an RMAN backup command. A backup set is RMAN's name for a collection of files associated
with a backup. A backup set is composed of one or more backup pieces.
➢ Backup piece: a physical binary file created by RMAN during a backup. Backup pieces are written
to your backup medium, whether to disk or tape. They contain blocks from the target database's
datafiles, archived redo log files, and control files.
➢ When RMAN constructs a backup piece from datafiles, there are several rules that it follows:
• A datafile can span backup pieces as long as it stays within one backup set.
• Datafiles and control files can coexist in the same backup sets.
• Archived redo log files are never in the same backup set as datafiles or control files.
• RMAN is the only tool that can operate on backup pieces. If you need to restore a file from
an RMAN backup, you must use RMAN to do it. There's no way for you to manually
reconstruct database files from the backup pieces. You must use RMAN to restore files from
a backup piece.
Starting RMAN
You must have access to the SYSDBA privilege before you can connect to the target database using
RMAN.
Example:
[oracle@localhost admin]$ export ORACLE_SID=riaz
[oracle@localhost admin]$ rman target /
Recovery Manager: Release 10.2.0.1.0 - Production on Thu Mar 10 10:45:52 2011
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: RIAZ (DBID=194143921)
RMAN>
Example:
[oracle@localhost admin]$ lsnrctl start
[oracle@localhost admin]$ tnsping riaz
Note:-
To connect from another server, use the Oracle Net service name for the target database:
In Listener.ora file
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
(SID_NAME = riaz)
)
)
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.24.0.181)(PORT = 1521))
)
)
In Tnsnames.ora file
riaz =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.24.0.181)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = riaz)
)
)
2 RMAN COMMANDS
➢ Interactive Mode
➢ Batch Mode
➢ In Batch Mode, we enter the RMAN commands into a file, save it, and then run that file from the
command line.
Example:
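A minimal sketch of batch mode (the script file name full_backup.rcv is an assumption):
[oracle@localhost admin]$ cat full_backup.rcv
run {
allocate channel d1 type disk;
backup database;
}
[oracle@localhost admin]$ rman target / @full_backup.rcv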
➢ Stand-Alone
Example:
RMAN> backup
2> format '/u01/app/oracle/oradata/riaz/bkp/df_%d_%s_%p.bkp'
3> tablespace users;
OR
RMAN> backup
2> format '/u01/app/oracle/oradata/riaz/bkp/%U'
3> tablespace users;
OR
RMAN> backup datafile 1;
OR
RMAN> backup
2> format '/u01/app/oracle/oradata/riaz/bkp/%U'
3> database;
➢ Job-Commands
Example:
RMAN> run{
2> allocate channel d1 type disk;
3> backup database;}
2.1 Configuring Persistent Settings for RMAN
➢ Use the RMAN command show to view the current value of one or all of RMAN's configuration
settings. The show command will let you view the value of a specified RMAN setting. For example,
the following show command displays whether the autobackup of the control file has been
enabled:
RMAN> show controlfile autobackup;
RMAN configuration parameters are:
➢ The show all command displays both settings that you have configured and any default settings.
Any default settings will be displayed with a # default at the end of the line. For example, the
following is the output from executing the show all command:
➢ Configure automatic channels
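A sketch of configuring automatic channels (the format path and the degree of parallelism are assumptions):
RMAN> configure default device type to disk;
RMAN> configure device type disk parallelism 2;
RMAN> configure channel device type disk format '/u01/app/oracle/oradata/riaz/bkp/%U';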
• Retention policy governs how long database backups are retained, and determines how far
into the past you can recover your database.
Example:
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
(This command ensures that RMAN retains all backups needed to recover the
database to any point in time in the last 7 days)
➢ Alternatively, the retention policy can be set to a redundancy value (how many backups of each
file must be retained):
RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 3;
(This command ensures that RMAN retains three backups of each datafile.)
RMAN> CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 2;
RMAN> CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK clear;
RMAN> CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK clear;
Caution:
➢ If a datafile being backed up is bigger than MAXSETSIZE, your backup will fail. Always
ensure that MAXSETSIZE is at least as large as your largest datafile.
➢ You can exclude a tablespace from whole-database backups:
RMAN> CONFIGURE EXCLUDE FOR TABLESPACE 'UU';
RMAN> backup database noexclude;
Note: the NOEXCLUDE option makes the backup include the excluded tablespace as well.
➢ RMAN can be configured to automatically back up the control file and server parameter file
whenever the database structure metadata in the control file changes and whenever a backup
record is added.
➢ The autobackup enables RMAN to recover the database even if the current control file,
catalog, and server parameter file are lost. Because the filename for the autobackup uses a
well-known format, RMAN can search for it without access to a repository, and then restore
the server parameter file.
➢ After you have started the instance with the restored server parameter file, RMAN can restore
the control file from an autobackup.
➢ After you mount the control file, the RMAN repository is available and RMAN can restore the
datafiles and find the archived redo log.
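A sketch of enabling the autobackup (the format string is an assumption):
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u01/app/oracle/oradata/riaz/bkp/%F';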
➢ Parallelization Of Backup Sets
RMAN> run{
2> allocate channel d1 type disk;
3> allocate channel d2 type disk;
4> backup
5> (datafile 1,2 channel d1)
6> (datafile 3,4 channel d2);}
➢ Compressed Backups
RMAN> backup as
2> compressed backupset database;
RMAN> configure device type disk parallelism 2 backup type to compressed backupset;
2.2 Image Copies
➢ Image Copies are duplicates of data or archived log files (similar to simply copying the files by
using OS commands). Backup sets are collections of one or more binary files that contain one
or more data or archived log files. With backup sets, empty data blocks are not stored, thereby
causing backup sets to use less space on the disk or tape. Backup sets can be compressed to
further reduce the space requirements of the backup. Image copies must be backed up to
disk. Backup sets can be sent either to disk or directly to tape.
Example:
RMAN> backup as copy database ;
OR
RMAN> run{
2> allocate channel d1 type disk;
3> copy
4> datafile 1 to '/u01/app/oracle/admin/seth/rbkp/%U';
5> }
RMAN> run
2> {allocate channel c1 type disk;
3> backup tag 'fuul_bkp' datafile 1,2;}
RMAN> backup
2> format '/u01/app/oracle/admin/seth/rbkp/%U'
3> archivelog all;
OR
RMAN> run {
2> allocate channel c1 type disk;
3> backup archivelog all;}
RMAN> backup archivelog from sequence=80;
➢ SWITCH Command
It is always written inside a RUN block. This command is equivalent to ALTER DATABASE RENAME FILE.
➢ LIST Command
RMAN>list backupset;
List backup sets and copies containing archive logs for a specified range
➢ REPORT Command
RMAN>report schema;
This command tells you which datafiles require which type of backup.
(If a tablespace is in NOLOGGING mode, it will not be displayed by this command.)
RMAN>report obsolete;
Note: when a datafile has been changed by an unrecoverable operation, such as a direct-path load,
you must perform a full or incremental backup of the affected datafiles.
➢ Maxcorrupt
RMAN> run {
2> restore datafile 4;
3> set maxcorrupt for datafile 4 to 2;
4> recover datafile 4;}
3 RMAN Backup Types
3.1 Full Backup
➢ A full backup of a database will contain complete backups of all the datafiles.
3.2 Incremental Backup
➢ Incremental backups contain only the changed data blocks in the datafiles. Obviously, then,
incremental backups can potentially take a much shorter time than full backups. You can make
incremental backups only with the help of RMAN; you cannot make incremental backups using
user-managed backup techniques.
A level 1 incremental backup can be either differential (the default) or cumulative.
RMAN> backup incremental level 0 database;
3.3 Block Change Tracking
➢ Without change tracking, the entire datafile is read during each incremental backup, even if just
a very small part of that file has changed since the last incremental backup.
➢ If change tracking is enabled, RMAN uses the change tracking file to identify the changed blocks
for an incremental backup, thus avoiding the need to scan every block in the datafile.
➢ For a level 0 incremental backup, however, RMAN still has to read the entire datafile.
➢ The CTWR (change tracking writer) background process writes the changes to the change tracking file.
Example:
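A sketch of enabling block change tracking (the file path is an assumption):
SQL> alter database enable block change tracking using file '/u01/app/oracle/oradata/riaz/rman_change_track.f';
SQL> select status, filename from v$block_change_tracking;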
NOTE: if the OMF feature is enabled (DB_CREATE_FILE_DEST is set), there is no need to give the path
as shown above.
4 Scenario for Complete and Incomplete Recovery
4.1 Restore and Recovery of a Whole Database
➢ In this scenario, you have a current control file and SPFILE but all datafiles are damaged or lost.
You must restore and recover the whole database.
4.2 Restore and Complete Recovery of Individual Tablespaces
➢ In this scenario, the database is open, and some but not all of the datafiles are damaged.
➢ You can restore and recover the damaged tablespace, while leaving the database open so that
the rest of the database remains available.
Note: you do not have to shut down the database; you can perform the recovery while it is open.
Alternatively, you can shut down the database and perform the recovery in the MOUNT stage.
ORA-01110: data file 5: '/u01/app/oracle/oradata/fdb1/udata01.dbf'
SQL> conn / as sysdba
Connected.
4.4 Incomplete Recovery Scenario
➢ RMAN-Managed
• RMAN Time-Based
• RMAN SCN-Based / Change-Based (UNTIL SCN clause)
• RMAN Log Sequence-Based
➢ User Managed
• Cancel-Based
• Time-Based
• SCN Based/Change Based (until CHANGE clause)
➢ Cancel-Based
• A current redo log file or group is damaged and is not available for recovery. Mirroring should
prevent the need for this type of recovery.
• An archived redo log file needed for recovery is lost. Frequent backups and multiple archive
destinations should prevent the need for this type of recovery.
4.5 RMAN SCN-BASED
SQL> select current_scn from v$database;
CURRENT_SCN
----------------------------
420806
RMAN> run{
2> set until scn 420709;
3> restore database;
4> recover database;
5> alter database open resetlogs;}
Note:--
➢ With RMAN-managed backups, you specify the system change number (SCN) of the last committed
change to be recovered.
➢ Recovery terminates after all changes up to the specified SCN are committed.
➢ You can optionally use the UNTIL RESTORE POINT syntax and specify an alias for the SCN, called
a restore point.
Note: if you specify the wrong SCN, you may lose data; that is why this is an incomplete recovery.
4.6 Restore Point
ID NAME
---------- ----------------------
10 rash
20 sheetu
2 rows selected.
SQL> insert into rash.emp values (40,'noor');
1 row created.
SQL> commit;
Commit complete.
ID NAME
---------- ----------------------
10 rash
20 sheetu
Note: here I lost the newly inserted row, because the restore point was created before the insert.
4.7 RMAN SEQUENCE BASED
➢ Use this when there is a gap in the archived log files, or an archived log file is missing.
➢ Recovery proceeds up to, but does not include, the specified log sequence number.
Example:
RMAN> run{
2> set until sequence 10 thread 1;
3> restore database;
4> recover database;
5> alter database open resetlogs;}
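To choose the UNTIL SEQUENCE value, you can first check which archived logs are available; a sketch:
RMAN> list archivelog all;
SQL> select sequence#, name from v$archived_log order by sequence#;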
TNAME TABTYPE CLUSTERID
-------------------------------------------------- ------- ----------
EMPLOYEES TABLE
COUNTRIES TABLE
REGIONS TABLE
LOCATIONS TABLE
DEPARTMENTS TABLE
JOBS TABLE
JOB_HISTORY TABLE
7 rows selected.
4.8 RMAN TIME-BASED
[rashmeet@station253 admin]$ export NLS_LANG=american
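The time string used below is in a non-default format, so the session usually also needs NLS_DATE_FORMAT set to match (a sketch; the exact format mask is an assumption based on the string used below):
[rashmeet@station253 admin]$ export NLS_DATE_FORMAT='yyyy-mm-dd:hh24:mi:ss'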
RMAN> run{
2> set until time = '2011-01-10:11:02:03';
3> restore database;
4> recover database;
5> alter database open resetlogs;}
11:04:40 SQL> conn riaz/riaz
11:06:59 SQL>select * from employees;
5 Controlfile And Spfile Scenario
5.1 All Control Files Lost
ORA-00205: error in identifying control file, check alert log for more info
SQL> shut immediate
SQL> startup nomount
SQL> !rman target /
connected to target database: prim (not mounted)
RMAN> set DBID=4026912893;
RMAN> restore controlfile from autobackup;
(or: RMAN> restore controlfile from '/backup/file';)
RMAN> alter database mount;
RMAN> recover database;
RMAN> alter database open resetlogs;
5.2 Loss of SPFILE
➢ Shut down the instance and restart it without mounting. When the SPFILE is not available,
RMAN starts the instance with a dummy parameter file. For example:
RMAN> RESTORE SPFILE FROM AUTOBACKUP;
SQL> startup
5.3 Loss of Control File and SPFILE
➢ Shut down the instance and restart it without mounting. When the SPFILE is not available, RMAN
starts the instance with a dummy parameter file. For example:
RMAN> set dbid=2391778528;
RMAN> run{
2> startup force nomount;
3> restore SPFILE FROM AUTOBACKUP;
4> shutdown immediate;
5> startup nomount;
6> restore controlfile from autobackup;
8> alter database mount;
9> recover database;
10> alter database open resetlogs;}
6 Recovery Catalog
➢ RMAN repository data is always stored in the control file of the target database. But it can also be
stored in a separate database, called a recovery catalog.
➢ A recovery catalog preserves backup information in a separate database, which is useful in the
event of a lost control file. This allows you to store a longer history of backups than what is
possible with a control file-based repository.
➢ A single recovery catalog is able to store information for multiple target databases. The recovery
catalog can also hold RMAN stored scripts, which are sequences of RMAN commands for backup
tasks.
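6.1 Steps for configuring the recovery catalog
A typical sequence for creating the catalog, sketched here with assumed names (catalog owner rman/rman in database orcl, tablespace rman_ts):
SQL> create tablespace rman_ts datafile '/u01/app/oracle/oradata/orcl/rman_ts01.dbf' size 100m;
SQL> create user rman identified by rman default tablespace rman_ts quota unlimited on rman_ts;
SQL> grant recovery_catalog_owner to rman;
[oracle@station223 admin]$ rman catalog rman/rman@orcl
RMAN> create catalog;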
Grant succeeded.
orcl>>exit
7. Register another database
oracle@station223 admin]$ export ORACLE_SID=prim
[oracle@station223 admin]$ rman catalog rman/rman@orcl target prim
RMAN> register database;
exit
NAME DBID
-------------------------------------
PROD 1202588170
PRIM 4026912893
7 Block Corruption
7.1 Types of Block Corruption
➢ Physical corruption: DBWR calculates a checksum over the bytes stored in the block and stores it
in the header of every data block. When the server process reads the block, it recalculates the
checksum; if the recalculated checksum does not match (is invalid), the block is physically
corrupt.
➢ Logical corruption: logically corrupt blocks are marked corrupt by the Oracle database after it
detects an inconsistency within the block.
7.2 Manual Corruption of a Block
SQL> CREATE TABLE corruption_test (id NUMBER);
Table created.
SQL> COMMIT;
Commit complete.
SQL>SELECT header_block
FROM dba_segments WHERE segment_name='CORRUPTION_TEST';
HEADER_BLOCK
------------
61145
SQL> !
[oracle@station223 ~]$ dd of=/u01/app/oracle/oradata/orcl/system01.dbf bs=8192 conv=notrunc
seek=61146 <<EOF
> testing corruption
> EOF
0+1 records in
0+1 records out
19 bytes (19 B) copied, 5.2134e-05 seconds, 364 kB/s
[oracle@station223 ~]$ exit
7.3 DBVERIFY Utility
➢ Works only on data files; redo log files and control files cannot be checked.
➢ DBVERIFY only checks a block in isolation; it does not know whether the block is part of an
existing object or not.
oracle@localhost ~]$ dbv file=/u01/app/oracle/oradata/target/data/system01.dbf blocksize=8192
SQL>!rman target /
ID
----------
1
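A sketch of checking for corruption with RMAN, which records any corrupt blocks it finds in V$DATABASE_BLOCK_CORRUPTION:
RMAN> backup validate check logical database;
SQL> select * from v$database_block_corruption;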
➢ It generates only reports.
➢ It performs block checking for all data blocks (it checks a block whenever data is written into
the block).
➢ It prevents memory and data corruption.
➢ It increases the overhead (load) by 1 to 10%.
➢ It can be set with the ALTER SESSION or ALTER SYSTEM command.
Note: even if DB_BLOCK_CHECKING is set to FALSE/OFF, block checking for the SYSTEM tablespace is
always turned on.
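A sketch of enabling it dynamically:
SQL> alter system set db_block_checking = true;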
7.5 exp command to detect the corruption
SQL>desc v$database_block_corruption
SQL> select * from v$backup_corruption;
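A sketch of using export to detect corruption (the user and file names are assumptions); exp raises an ORA-01578 error when it reads a corrupted block:
[oracle@station223 ~]$ exp system/oracle file=/tmp/full.dmp full=y log=/tmp/full_exp.log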
8 CLONING
8.1 User-Managed Cloning
Use OS commands to copy the datafiles of the tablespaces placed in BEGIN BACKUP mode, and place
the copies in the backup path. (Refer to the example below.)
]$cp arch* /u01/app/oracle/oradata/clone
5. Back up all archive log files generated between the previous backup and the new backup as well.
1. Create the appropriate folders in the corresponding path and place the backup files in the
corresponding folders (bdump, udump, create, pfile, cdump, oradata).
2. Change the init.ora parameters such as the control file path, db_name, instance_name, etc.
• Change the database name and file paths; also, 'REUSE' needs to be changed to 'SET'.
Example:
SQL>@createcotrol.sql
6. Recover the database using controlfile.
SQL>alter database open resetlogs;
• If you want to reuse the same control file of the production side, you need to use the REUSE keyword.
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE SET DATABASE "CLONE" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 30
MAXINSTANCES 1
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '/u01/app/oracle/oradata/clone/redo01.log' SIZE 10M,
GROUP 2 '/u01/app/oracle/oradata/clone/redo02.log' SIZE 10M
-- STANDBY LOGFILE
DATAFILE
'/u01/app/oracle/oradata/clone/system01.dbf',
'/u01/app/oracle/oradata/clone/undo01.dbf',
'/u01/app/oracle/oradata/clone/sysaux01.dbf',
'/u01/app/oracle/oradata/clone/users01.dbf'
;
• If you want to create a new control file on the clone side, you need to use the SET keyword only.
STARTUP NOMOUNT
CREATE CONTROLFILE SET DATABASE "CLONE" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 30
MAXINSTANCES 1
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '/u01/app/oracle/oradata/clone/redo01.log' SIZE 10M,
GROUP 2 '/u01/app/oracle/oradata/clone/redo02.log' SIZE 10M
-- STANDBY LOGFILE
DATAFILE
'/u01/app/oracle/oradata/clone/system01.dbf',
'/u01/app/oracle/oradata/clone/undo01.dbf',
'/u01/app/oracle/oradata/clone/sysaux01.dbf',
'/u01/app/oracle/oradata/clone/users01.dbf'
;
8.2 RMAN Cloning
• Steps
Prod-Side
4. Make sure listener is up
Clone-Side:
5. Edit your pfile for the clone and add two additional parameters if you are using a different
directory structure.
db_file_name_convert='/u01/app/oracle/oradata/prod/','/u01/app/oracle/oradata/clone/'
log_file_name_convert='/u01/app/oracle/oradata/prod/','/u01/app/oracle/oradata/clone/'
6. ]$export ORACLE_SID=clone
Prod-side
]$export ORACLE_SID=prod
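The remaining steps can be sketched as follows, assuming the clone instance is started in NOMOUNT and Oracle Net connectivity to the production database exists (connect strings are assumptions):
]$ rman target sys/oracle@prod auxiliary /
RMAN> duplicate target database to clone;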
9 Automatic Memory Management
Hit Ratios
• Physical Reads: This statistic indicates the number of data blocks (i.e. tables, indexes, and
rollback segments) read from disk into the Buffer Cache since instance startup.
• Physical Reads Direct: This statistic indicates the number of reads that bypassed the Buffer
Cache because the data blocks were read directly from disk instead. Direct physical
reads are done intentionally by Oracle when using certain features such as export or Parallel
Query, so they are excluded when calculating the hit ratio.
• Session Logical Reads This statistic indicates total number of reads requested for data. This
value includes requests satisfied by access to buffers in memory and requests that caused
physical I/O.
• Physical Reads Direct (LOB): This statistic indicates the number of reads that bypassed the
Buffer Cache because the data blocks were associated with a Large Object (LOB) datatype.
• Buffer Busy Waits These waits occur whenever a buffer requested by user Server Processes is
already in memory, but is in use by another process. These waits can occur for rollback
segment buffers as well as data and index buffers.
• Set the value of DB_CACHE_SIZE (in bytes). The total size of the Buffer Cache, Shared Pool, and
Redo Log Buffer cannot exceed SGA_MAX_SIZE.
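Putting the statistics above together, the buffer cache hit ratio is commonly computed as 1 - (physical reads - physical reads direct - physical reads direct (lob)) / session logical reads; a sketch of querying it from V$SYSSTAT:
SQL> select 1 - (phy.value - dir.value - lob.value) / log.value "Buffer Cache Hit Ratio"
     from v$sysstat phy, v$sysstat dir, v$sysstat lob, v$sysstat log
     where phy.name = 'physical reads'
     and dir.name = 'physical reads direct'
     and lob.name = 'physical reads direct (lob)'
     and log.name = 'session logical reads';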
➢ How much of the buffer cache is enough?
✓ Set DB_CACHE_ADVICE=ON (this consumes memory and CPU to collect the statistics needed for
providing advice).
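A sketch of reading the advice once statistics have accumulated (the 8K block size is an assumption):
SQL> select size_for_estimate, estd_physical_reads
     from v$db_cache_advice
     where name = 'DEFAULT' and block_size = 8192;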
➢ No matter how many Buffer Pools you decide to use, each one is still managed by an LRU list.
➢ Small tables that require full table scans (FTS) will always be at the LRU end of the LRU list;
therefore they would be aged out of the buffer cache earlier. To avoid this, cache them (as shown
in the sketch below) at:
• Table creation
• Alter table
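A sketch of both options (table and pool names are assumptions):
SQL> create table lookup_codes (code number, descr varchar2(30)) cache;
SQL> alter table lookup_codes cache;
SQL> alter table lookup_codes storage (buffer_pool keep);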
➢ Bypass the Buffer Cache
• We can bypass the buffer cache using direct path reads and direct-path insert commands.
• DIRECT=TRUE in export (direct path reading)
• DIRECT=TRUE in sqlldr (direct path loading)
➢ Initialization Parameters
• LARGE_POOL_SIZE: size of the large pool
➢ For example, in a system that runs large online transaction processing (OLTP) jobs during the
day (requiring a large buffer cache) and runs parallel batch jobs at night (requiring a large value
for the large pool), you would have to simultaneously configure both the buffer cache and
the large pool to accommodate your peak requirements.
➢ With Automatic Shared Memory Management, when the OLTP job runs, the buffer cache grabs
most of the memory to allow for good I/O performance. When the data analysis and
reporting batch job starts up later, the memory is automatically migrated to the large
pool so that it can be used by parallel query operations without producing memory overflow
errors.
• Using an SPFILE is recommended.
• Init parameters for ASMM:
• SGA_TARGET: size of the total SGA.
• SGA_MAX_SIZE: the maximum size the SGA can grow to when SGA_TARGET is increased with the
ALTER SYSTEM SET SGA_TARGET command.
• STATISTICS_LEVEL=TYPICAL: MMAN needs to collect statistics for ASMM to work.
• LOG_BUFFER
• DB_KEEP_CACHE_SIZE
• DB_RECYCLE_CACHE_SIZE
• STREAMS_POOL_SIZE
• Fixed SGA
➢ When SGA_TARGET is set, the total size of manual SGA size parameters is subtracted from the
SGA_TARGET value, and the balance is given to the auto-tuned SGA components.
Note: STATISTICS_LEVEL init parameter must be set to TYPICAL or ALL for ASMM to work
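A sketch of enabling ASMM (the sizes are assumptions):
SQL> alter system set sga_max_size = 1g scope=spfile;
SQL> alter system set sga_target = 800m;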
10 Partitioning
10.1 Introduction to Partitioning
➢ Partitioning addresses key issues in supporting very large tables and indexes by letting you
decompose them into smaller and more manageable pieces called partitions.
➢ SQL queries and DML statements do not need to be modified in order to access partitioned
tables.
➢ However, after partitions are defined, DDL statements can access and manipulate individual
partitions rather than entire tables or indexes.
➢ This is how partitioning can simplify the manageability of large database objects.
➢ Each partition of a table or index must have the same logical attributes, such as column names,
datatypes, and constraints, but each partition can have separate physical attributes such as
pctfree, pctused, and tablespaces.
➢ Partitioning is useful for many different types of applications, particularly applications that
manage large volumes of data.
➢ OLTP systems often benefit from improvements in manageability and availability, while data
warehousing systems benefit from performance and manageability.
Note-: All partitions of a partitioned object must reside in tablespaces of a single block size.
10.2 Advantages of Partitioning
➢ Partitioning enables data management operations such as data loads, index creation and
rebuilding, and backup/recovery at the partition level, rather than on the entire table. This
results in significantly reduced times for these operations.
➢ Partitioning improves query performance. In many cases, the results of a query can be
achieved by accessing a subset of partitions, rather than the entire table. For some queries,
this technique (called partition pruning) can provide order-of-magnitude gains in performance.
➢ Partitioning can significantly reduce the impact of scheduled downtime for maintenance
operations.
➢ Partition independence for partition maintenance operations lets you perform concurrent
maintenance operations on different partitions of the same table or index. You can also run
concurrent SELECT and DML operations against partitions that are unaffected by maintenance
operations.
➢ Partitioning increases the availability of mission-critical databases if critical tables and indexes
are divided into partitions to reduce the maintenance windows, recovery times, and impact of
failures.
➢ Partitioning can be implemented without requiring any modifications to your applications. For
example, you could convert a nonpartitioned table to a partitioned table without needing to
modify any of the SELECT statements or DML statements which access that table. You do not
need to rewrite your application code to take advantage of partitioning.
Partition Key
➢ Each row in a partitioned table is unambiguously assigned to a single partition based on the
partition key, the set of one or more columns that determines the partition in which the row is stored.
Partitioned Tables
➢ Tables can be partitioned into up to 1024K-1 separate partitions.
➢ Any table can be partitioned except those tables containing columns with LONG or LONG RAW
datatypes.
➢ You can, however, use tables containing columns with CLOB or BLOB datatypes.
10.3 Partitioning Methods
• Range partitioning
• Hash partitioning
• List partitioning
• Composite range-hash partitioning
• Composite range-list partitioning
10.3.1 Range Partitioning
➢ Use range partitioning to map rows to partitions based on ranges of column values.
➢ This type of partitioning is useful when dealing with data that has logical ranges into which it
can be distributed; for example, months of the year.
➢ Performance is best when the data evenly distributes across the range.
➢ If partitioning by range causes partitions to vary dramatically in size because of unequal
distribution, you may want to consider one of the other methods of partitioning.
➢ When creating range partitions, you must specify the partitioning method (range), the
partitioning column(s), and the partition descriptions identifying the partition bounds.
➢ Range partitioning maps data to partitions based on ranges of partition key values that you
establish for each partition. It is the most common type of partitioning and is often used with
dates. For example, you might want to partition sales data into monthly partitions.
➢ When using range partitioning, consider the following rules:
• Each partition has a VALUES LESS THAN clause, which specifies a noninclusive upper
bound for the partitions. Any values of the partition key equal to or higher than this
literal are added to the next higher partition.
• All partitions, except the first, have an implicit lower bound specified by the VALUES
LESS THAN clause on the previous partition.
• A MAXVALUE literal can be defined for the highest partition. MAXVALUE represents a
virtual infinite value that sorts higher than any other possible value for the partition
key, including the null value.
➢ The example below creates a table of four partitions, one for each quarter of sales. The
columns sale_year, sale_month, and sale_day are the partitioning columns, while their values
constitute the partitioning key of a specific row. The VALUES LESS THAN clause determines the
partition bound: rows with partitioning key values that compare less than the ordered list of
values specified by the clause are stored in the partition. Each partition is given a name
(sales_q1, sales_q2, ...), and each partition is contained in a separate tablespace (tsa, tsb, ...).
Example:
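A sketch consistent with the description above (column datatypes and tablespace names are assumptions):
SQL> CREATE TABLE sales
  (invoice_no NUMBER,
   sale_year INT NOT NULL,
   sale_month INT NOT NULL,
   sale_day INT NOT NULL)
PARTITION BY RANGE (sale_year, sale_month, sale_day)
  (PARTITION sales_q1 VALUES LESS THAN (1999, 04, 01) TABLESPACE tsa,
   PARTITION sales_q2 VALUES LESS THAN (1999, 07, 01) TABLESPACE tsb,
   PARTITION sales_q3 VALUES LESS THAN (1999, 10, 01) TABLESPACE tsc,
   PARTITION sales_q4 VALUES LESS THAN (2000, 01, 01) TABLESPACE tsd);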
➢ A row with sale_year=1999, sale_month=8, and sale_day=1 has a partitioning key of (1999, 8,
1) and would be stored in partition sales_q3.
Example:2
➢ A typical example is given in the following section. The statement creates a table (sales_range)
that is range partitioned on the sales_date field.
Example-3
VALUES LESS THAN (MAXVALUE)
TABLESPACE users)
ENABLE ROW MOVEMENT;
Evaluation:
Points to remember-:
10.3.2 List Partitioning
➢ List partitioning enables you to explicitly control how rows map to partitions. You do this by
specifying a list of discrete values for the partitioning key in the description for each partition.
This is different from range partitioning, where a range of values is associated with a partition
and from hash partitioning, where a hash function controls the row-to-partition mapping. The
advantage of list partitioning is that you can group and organize unordered and unrelated sets
of data in a natural way.
➢ Use list partitioning when you require explicit control over how rows map to partitions. You
can specify a list of discrete values for the partitioning column in the description for each
partition. This is different from range partitioning, where a range of values is associated with a
partition, and from hash partitioning, where the user has no control of the row to partition
mapping.
➢ The list partitioning method is specifically designed for modeling data distributions that follow
discrete values. This cannot be easily done by range or hash partitioning because: Range
partitioning assumes a natural range of values for the partitioning column. It is not possible to
group together out-of-range values into partitions.
➢ Hash partitioning allows no control over the distribution of data because the data is distributed
over the various partitions using the system hash function. Again, this makes it impossible to
logically group together discrete values for the partitioning columns into partitions.
➢ Further, list partitioning allows unordered and unrelated sets of data to be grouped and
organized together very naturally.
➢ Unlike the range and hash partitioning methods, multicolumn partitioning is not supported for
list partitioning. If a table is partitioned by list, the partitioning key can consist only of a single
column of the table. Otherwise all columns that can be partitioned by the range or hash
methods can be partitioned by the list partitioning method.
➢ Partition descriptions, each specifying a list of literal values (a value list), which are the discrete
values of the partitioning column that qualify a row to be included in the partition.
Example
➢ A row is mapped to a partition by checking whether the value of the partitioning column for a
row matches a value in the value list that describes the partition.
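A sketch consistent with the row mappings listed below (the table name, column definitions, and value lists are assumptions chosen to match those mappings):
SQL> CREATE TABLE q1_sales_by_region
  (deptno NUMBER,
   deptname VARCHAR2(20),
   quarterly_sales NUMBER(10,2),
   state VARCHAR2(2))
PARTITION BY LIST (state)
  (PARTITION q1_northwest VALUES ('OR', 'WA'),
   PARTITION q1_southwest VALUES ('AZ', 'UT', 'NM', 'TX'),
   PARTITION q1_southeast VALUES ('FL', 'GA'),
   PARTITION q1_northeast VALUES ('NY', 'VA', 'NJ'));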
(20, 'R&D', 150, 'OR') maps to partition q1_northwest
(30, 'sales', 100, 'FL') maps to partition q1_southeast
(40, 'HR', 10, 'TX') maps to partition q1_southwest
(50, 'systems engineering', 10, 'CA') does not map to any partition in the table and raises an
error.
➢ Unlike range partitioning, with list partitioning, there is no apparent sense of order between
partitions.
➢ You can also specify a default partition into which rows that do not map to any other partition
are mapped. If a default partition were specified in the preceding example, the state CA would
map to that partition.
Example-2
➢ The details of list partitioning can best be described with an example. In this case, let's say you
want to partition a sales table by region. That means grouping states together according to
their geographical location, as in the following example.
PARTITION sales_other VALUES(DEFAULT)
)
);
➢ A row is mapped to a partition by checking whether the value of the partitioning column for a
row falls within the set of values that describes the partition. For example, the rows are
inserted as follows:
➢ Unlike range and hash partitioning, multicolumn partition keys are not supported for list
partitioning. If a table is partitioned by list, the partitioning key can only consist of a single
column of the table.
➢ The DEFAULT partition enables you to avoid specifying all possible values for a list-partitioned
table by using a default partition, so that all rows that do not map to any other partition do not
generate an error.
10.3.3 Hash Partitioning
➢ Hash partitioning enables easy partitioning of data that does not lend itself to range or list
partitioning.
➢ It does this with a simple syntax and is easy to implement. It is a better choice than range
partitioning when:
• You do not know beforehand how much data maps into a given range.
• The sizes of range partitions would differ quite substantially or would be difficult to
balance manually.
• Performance features such as parallel DML, partition pruning, and partition-wise joins are
important.
➢ The concepts of splitting, dropping or merging partitions do not apply to hash partitions.
Instead, hash partitions can be added and coalesced.
➢ Use hash partitioning if your data does not easily lend itself to range partitioning, but you
would like to partition for performance and manageability reasons. Hash partitioning provides
a method of evenly distributing data across a specified number of partitions. Rows are mapped
into partitions based on a hash value of the partitioning key. Creating and using hash partitions
gives you a highly tunable method of data placement, because you can influence availability
and performance by spreading these evenly sized partitions across I/O devices (striping).
Example-1
➢ The following example creates a hash-partitioned table. The partitioning column is id, four
partitions are created and assigned system generated names, and they are placed in four
named tablespaces (gear1, gear2, ...).
SQL> CREATE TABLE scubagear (id NUMBER, name VARCHAR2 (60))
PARTITION BY HASH (id)
PARTITIONS 4
STORE IN (gear1, gear2, gear3, gear4);
Example-2
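A sketch consistent with the description that follows (column datatypes are assumptions):
SQL> CREATE TABLE sales_hash
  (salesman_id NUMBER(5),
   salesman_name VARCHAR2(30),
   sales_amount NUMBER(10),
   week_no NUMBER(2))
PARTITION BY HASH (salesman_id)
PARTITIONS 4
STORE IN (ts1, ts2, ts3, ts4);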
➢ The preceding statement creates a table sales_hash, which is hash partitioned on salesman_id
field. The tablespace names are ts1, ts2, ts3, and ts4. With this syntax, we ensure that we
create the partitions in a round-robin manner across the specified tablespaces.
➢ Composite partitioning partitions data using the range method, and within each partition,
subpartitions it using the hash or list method. Composite range-hash partitioning provides the
improved manageability of range partitioning and the data placement, striping, and parallelism
advantages of hash partitioning. Composite range-list partitioning provides the manageability
of range partitioning and the explicit control of list partitioning for the subpartitions.
➢ Composite partitioning supports historical operations, such as adding new range partitions, but
also provides higher degrees of parallelism for DML operations and finer granularity of data
placement through subpartitioning.
➢ Like the composite range-hash partitioning method, the composite range-list partitioning
method provides for partitioning based on a two level hierarchy. The first level of partitioning
is based on a range of values, as for range partitioning; the second level is based on discrete
values, as for list partitioning. This form of composite partitioning is well suited for historical
data, but lets you further group the rows of data based on unordered or unrelated column
values.
➢ Subpartition descriptions, each specifying a list of literal values (a value list), which are the
discrete values of the subpartitioning column that qualify a row to be included in the
subpartition.
Example-1
➢ The following example illustrates how range-list partitioning might be used. The example tracks
sales data of products by quarters and within each quarter, groups it by specified states.
SQL> CREATE TABLE quarterly_regional_sales  -- opening lines reconstructed; table and column names assumed
  (deptno NUMBER, item_no VARCHAR2(20),
   txn_date DATE, txn_amount NUMBER,
   state VARCHAR2(2))
PARTITION BY RANGE (txn_date)
SUBPARTITION BY LIST (state)
(PARTITION q1_1999 VALUES LESS THAN (TO_DATE('1-APR-1999','DD-MON-YYYY'))
(SUBPARTITION q1_1999_northwest VALUES ('OR', 'WA'),
SUBPARTITION q1_1999_southwest VALUES ('AZ', 'UT', 'NM'),
SUBPARTITION q1_1999_northeast VALUES ('NY', 'VM', 'NJ'),
SUBPARTITION q1_1999_southeast VALUES ('FL', 'GA'),
SUBPARTITION q1_1999_northcentral VALUES ('SD', 'WI'),
SUBPARTITION q1_1999_southcentral VALUES ('OK', 'TX')
),
PARTITION q2_1999 VALUES LESS THAN ( TO_DATE('1-JUL-1999','DD-MON-YYYY'))
(SUBPARTITION q2_1999_northwest VALUES ('OR', 'WA'),
SUBPARTITION q2_1999_southwest VALUES ('AZ', 'UT', 'NM'),
SUBPARTITION q2_1999_northeast VALUES ('NY', 'VM', 'NJ'),
SUBPARTITION q2_1999_southeast VALUES ('FL', 'GA'),
SUBPARTITION q2_1999_northcentral VALUES ('SD', 'WI'),
SUBPARTITION q2_1999_southcentral VALUES ('OK', 'TX')
),
PARTITION q3_1999 VALUES LESS THAN (TO_DATE('1-OCT-1999','DD-MON-YYYY'))
(SUBPARTITION q3_1999_northwest VALUES ('OR', 'WA'),
SUBPARTITION q3_1999_southwest VALUES ('AZ', 'UT', 'NM'),
SUBPARTITION q3_1999_northeast VALUES ('NY', 'VM', 'NJ'),
SUBPARTITION q3_1999_southeast VALUES ('FL', 'GA'),
SUBPARTITION q3_1999_northcentral VALUES ('SD', 'WI'),
SUBPARTITION q3_1999_southcentral VALUES ('OK', 'TX')
),
PARTITION q4_1999 VALUES LESS THAN ( TO_DATE('1-JAN-2000','DD-MON-YYYY'))
(SUBPARTITION q4_1999_northwest VALUES ('OR', 'WA'),
SUBPARTITION q4_1999_southwest VALUES ('AZ', 'UT', 'NM'),
SUBPARTITION q4_1999_northeast VALUES ('NY', 'VM', 'NJ'),
SUBPARTITION q4_1999_southeast VALUES ('FL', 'GA'),
SUBPARTITION q4_1999_northcentral VALUES ('SD', 'WI'),
SUBPARTITION q4_1999_southcentral VALUES ('OK', 'TX')
)
);
➢ A row is mapped to a partition by checking whether the value of the partitioning column for a
row falls within a specific partition range. The row is then mapped to a subpartition within that
partition by identifying the subpartition whose descriptor value list contains a value matching
the subpartition column value.
➢ The partitions of a range-list partitioned table are logical structures only, as their data is stored
in the segments of their subpartitions. The list subpartitions have the same characteristics as
list partitions.
➢ You can specify a default subpartition, just as you specify a default partition for list
partitioning.
Example-2
SQL>CREATE TABLE bimonthly_regional_sales
(deptno NUMBER, item_no VARCHAR2(20),
txn_date DATE, txn_amount NUMBER,
state VARCHAR2(2))
PARTITION BY RANGE (txn_date)
SUBPARTITION BY LIST (state)
SUBPARTITION TEMPLATE(
SUBPARTITION east VALUES('NY', 'VA', 'FL') TABLESPACE ts1,
SUBPARTITION west VALUES('CA', 'OR', 'HI') TABLESPACE ts2,
SUBPARTITION central VALUES('IL', 'TX', 'MO') TABLESPACE ts3)
(
PARTITION janfeb_2000 VALUES LESS THAN (TO_DATE('1-MAR-2000','DD-MON-YYYY')),
PARTITION marapr_2000 VALUES LESS THAN (TO_DATE('1-MAY-2000','DD-MON-YYYY')),
PARTITION mayjun_2000 VALUES LESS THAN (TO_DATE('1-JUL-2000','DD-MON-YYYY'))
);
10.3.5 Composite Range-Hash Partitioning
➢ Range-hash partitioning partitions data using the range method, and within each partition,
subpartitions it using the hash method. These composite partitions are ideal for both historical
data and striping, and provide improved manageability of range partitioning and data
placement, as well as the parallelism.
➢ The following statement creates a range-hash partitioned table. In this example, three range
partitions are created, each containing eight subpartitions. Because the subpartitions are not
named, system generated names are assigned, but the STORE IN clause distributes them across
the 4 specified tablespaces (ts1, ...,ts4).
Example-1
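A sketch consistent with that description (the table name, column definitions, and range bounds are assumptions):
SQL> CREATE TABLE scubagear_rh
  (equipno NUMBER,
   equipname VARCHAR2(32),
   price NUMBER)
PARTITION BY RANGE (equipno)
SUBPARTITION BY HASH (equipname)
SUBPARTITIONS 8 STORE IN (ts1, ts2, ts3, ts4)
  (PARTITION p1 VALUES LESS THAN (1000),
   PARTITION p2 VALUES LESS THAN (2000),
   PARTITION p3 VALUES LESS THAN (MAXVALUE));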
➢ Unlike range partitions in a range-partitioned table, the subpartitions cannot have different
physical attributes from the owning partition, although they are not required to reside in the
same tablespace.
Example-2
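A sketch consistent with the description that follows (column definitions and date bounds are assumptions):
SQL> CREATE TABLE sales_composite
  (salesman_id NUMBER(5),
   salesman_name VARCHAR2(30),
   sales_amount NUMBER(10),
   sales_date DATE)
PARTITION BY RANGE (sales_date)
SUBPARTITION BY HASH (salesman_id)
SUBPARTITION TEMPLATE (
  SUBPARTITION sp1 TABLESPACE ts1,
  SUBPARTITION sp2 TABLESPACE ts2,
  SUBPARTITION sp3 TABLESPACE ts3,
  SUBPARTITION sp4 TABLESPACE ts4)
 (PARTITION sales_jan2000 VALUES LESS THAN (TO_DATE('02/01/2000','MM/DD/YYYY')),
  PARTITION sales_feb2000 VALUES LESS THAN (TO_DATE('03/01/2000','MM/DD/YYYY')),
  PARTITION sales_mar2000 VALUES LESS THAN (TO_DATE('04/01/2000','MM/DD/YYYY')),
  PARTITION sales_apr2000 VALUES LESS THAN (TO_DATE('05/01/2000','MM/DD/YYYY')));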
➢ This statement creates a table sales_composite that is range partitioned on the sales_date field
and hash subpartitioned on salesman_id. When you use a template, Oracle names the
subpartitions by concatenating the partition name, an underscore, and the subpartition name
from the template. Oracle places this subpartition in the tablespace specified in the template.
In the previous statement, sales_jan2000_sp1 is created and placed in tablespace ts1 while
sales_jan2000_sp4 is created and placed in tablespace ts4. In the same manner,
sales_apr2000_sp1 is created and placed in tablespace ts1 while sales_apr2000_sp4 is created
and placed in tablespace ts4.
10.4 Index-Organized Tables (IOT)
➢ An index-organized table has a storage organization that is a variant of a primary B-tree. Unlike
an ordinary (heap-organized) table whose data is stored as an unordered collection (heap),
data for an index-organized table is stored in a B-tree index structure in a primary key sorted
manner. Each leaf block in the index structure stores both the key and nonkey columns.
10.5 Advantages of IOT
➢ Fast random access on the primary key because an index-only scan is sufficient. And, because
there is no separate table storage area, changes to the table data (such as adding new rows,
updating rows, or deleting rows) result only in updating the index structure.
➢ Fast range access on the primary key because the rows are clustered in primary key order.
➢ Lower storage requirements because duplication of primary keys is avoided. They are not
stored both in the index and underlying table, as is true with heap-organized tables.
➢ Index-organized tables have full table functionality. They support features such as constraints,
triggers, LOB and object columns, partitioning, parallel operations, online reorganization, and
replication. And, they offer these additional features:
• Key compression
• Overflow storage area and specific column placement
• Secondary indexes, including bitmap indexes.
10.6 Creating Index-Organized Tables
➢ You use the CREATE TABLE statement to create index-organized tables, but you must provide
additional information:
➢ A primary key, specified through a column constraint clause (for a single column primary key)
or a table constraint clause (for a multiple-column primary key).
➢ An OVERFLOW clause, which preserves dense clustering of the B-tree index by storing the row
column values exceeding a specified threshold in a separate overflow data segment.
➢ A PCTTHRESHOLD value, which defines the percentage of space reserved in the index block for
an index-organized table. Any portion of the row that exceeds the specified threshold is stored
in the overflow segment. In other words, the row is broken at a column boundary into two
pieces, a head piece and tail piece. The head piece fits in the specified threshold and is stored
along with the key in the index leaf block. The tail piece is stored in the overflow area as one
or more row pieces. Thus, the index entry contains the key value, the nonkey column values
that fit the specified threshold, and a pointer to the rest of the row.
➢ An INCLUDING clause, which can be used to specify nonkey columns that are to be stored in
the overflow data segment.
Example
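A sketch that uses the clauses described above (table, column, and tablespace names are assumptions):
SQL> CREATE TABLE admin_docindex
  (token CHAR(20),
   doc_id NUMBER,
   token_frequency NUMBER,
   token_offsets VARCHAR2(2000),
   CONSTRAINT pk_admin_docindex PRIMARY KEY (token, doc_id))
ORGANIZATION INDEX
TABLESPACE admin_tbs
PCTTHRESHOLD 20
INCLUDING token_frequency
OVERFLOW TABLESPACE admin_tbs2;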
11 Row Chaining and Migration
11.1 Logical & Physical Space Management
➢ PCTUSED (default 40) governs inserts (set it high for DSS systems – systems with no or few updates).
➢ PCTFREE (default 10) governs updates (set it high for update-intensive systems).
Row migration:
• The row fits in the database block, but after an update it no longer fits.
• Small rows - not enough PCTFREE left for the row to grow after the update.
• The row is moved to another block and a pointer is left behind.
• After moving to the new block, the row fits in that one block.
• Study your updates and use the right PCTFREE.
Note: Row chaining and migration slows the performance, as Oracle has to read multiple blocks to
fetch one row.
11.2 How to fix row chaining or migration
➢ ANALYZE TABLE employees LIST CHAINED ROWS into chained_rows;
2. Export/import
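A common fix for migrated rows, sketched here (the table and index names are assumptions):
SQL> alter table employees move;
SQL> alter index emp_pk rebuild;
SQL> analyze table employees list chained rows into chained_rows;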
How to simulate the chaining problem
➢ If your block size is 8k, create a table with a row size of more than 8k, as follows (see the sketch
below). Note the use of CHAR and not VARCHAR2.
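A sketch of such a table (five CHAR(2000) columns give a maximum row size of about 10K, larger than an 8K block):
SQL> create table abc
  (col1 char(2000),
   col2 char(2000),
   col3 char(2000),
   col4 char(2000),
   col5 char(2000));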
SQL> Insert into abc values('y', null, null, null, null);   -- un-chained row
SQL> Insert into abc values('x', 'x', 'x', 'x', 'x');       -- chained row
SQL> Update abc set col2='x', col3='x' where col1='y';      -- migrated row
12 Automatic Workload Repository
➢ The AWR plays the role of the "data warehouse of the database," and it is the basis for
most of Oracle's self-management functionality. The AWR collects and maintains
performance statistics for problem-detection and self-tuning purposes. By default, every
60 minutes the database collects statistical information from the SGA and stores it in the AWR,
in the form of snapshots. Several database components, such as the ADDM and other management
advisors, use the AWR data to detect problems and for tuning the database. Like the ADDM, the AWR
is automatically active upon starting the instance.
➢ Statistics are collected by two background processes (MMON and MMNL) every 60 minutes.
➢ To create a snapshot manually, use the CREATE_SNAPSHOT procedure (an example appears later in
this section); to drop a range of snapshots, use the DROP_SNAPSHOT_RANGE procedure, as follows:
SQL>DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE (
low_snap_id => 40,
high_snap_id => 60,
dbid => 2210828132);
➢ If you set the snapshot interval to 0, the AWR will stop collecting snapshot data. This
means that the ADDM, the SQL Tuning Advisor, the Undo Advisor, and the Segment Advisor
will all be adversely affected, because they depend on the AWR data.
SQL> DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE(
START_SNAP_ID => 125,
END_SNAP_ID => 185,
BASELINE_NAME => 'peak_time baseline',
DBID => 2210828132);
SQL> DBMS_WORKLOAD_REPOSITORY.DROP_BASELINE
(BASELINE_NAME => 'peak_time_baseline',
CASCADE => FALSE,
DBID => 2210828132);
➢ By setting the CASCADE parameter to TRUE, you can drop the actual snapshots as well.
➢ If your Sysaux tablespace runs out of space, Oracle will automatically delete the oldest set of
snapshots to make room for new snapshots.
➢ Make sure you don’t confuse the AWR report with the ADDM report that you obtain by
running the addmrpt.sql script.
➢ The ADDM report is also based on the AWR snapshot data, but it highlights both the problems
in the database and the recommendations for resolving them.
SQL> @$ORACLE_HOME/rdbms/admin/awrrpt.sql
12.1 Automatic Database Diagnostic Monitor (ADDM)
➢ Oracle collects statistics regarding how it is spending time on various tasks. This information is
called "Time-Model Statistics" and is stored in V$SYS_TIME_MODEL and V$SESS_TIME_MODEL.
The most important of these statistics is DB time.
➢ DB time includes both the wait time and processing time (CPU time), but doesn’t include the
idle time incurred by your processes. For example, if you spend an hour connected to the
database and you’re idle for 58 of those minutes, the DB time is only 2 minutes.
➢ The basic rationale behind the ADDM is to reduce a key database metric called DB time, which
is the total time (in microseconds) the database spends actually processing users’
requests.
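You can see the DB time and DB CPU figures yourself by querying the time-model view mentioned above; the conversion to seconds is only for readability:
SQL> SELECT stat_name, ROUND(value / 1000000) AS seconds
     FROM   v$sys_time_model
     WHERE  stat_name IN ('DB time', 'DB CPU');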
➢ Oracle manages the ADDM with the help of the new MMON background process. Each time the AWR takes a snapshot (every hour by default), the MMON process tells the ADDM to analyze the interval between the last two AWR snapshots. Thus, by default, the ADDM automatically runs each time an AWR snapshot is taken.
What problems ADDM can help us with
➢ Oracle enables the ADDM feature by default, and your only task is to make sure that the
STATISTICS_LEVEL initialization parameter is set to TYPICAL or ALL in order for the AWR to
gather its performance Statistics.
➢ You can also request that the ADDM analyze past instance performance by examining AWR
snapshot data that falls between any two nonadjacent snapshots. The only requirements
regarding the selection of the AWR snapshots are these:
How to view ADDM Report
➢ You can view the ADDM analysis reports in three different ways:
• You can use the DBMS_ADVISOR package and create an ADDM report by using the CREATE_REPORT procedure.
• You can use the OEM to view the performance findings of the stored ADDM reports, which are proactively created each hour after the AWR snapshots are taken.
• You can run the addmrpt.sql script from $ORACLE_HOME/rdbms/admin, as in the example that follows.
Example
SQL> EXECUTE dbms_workload_repository.create_snapshot();
PL/SQL procedure successfully completed.
Now we will generate an ADDM report from the above two snapshots
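The prompts and the snapshot listing that follow are produced by the addmrpt.sql script mentioned earlier; the invocation itself is not shown on this page, but it would be run as follows:
SQL> @$ORACLE_HOME/rdbms/admin/addmrpt.sql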
3070 22 Jul 2009 08:00 1
3071 22 Jul 2009 09:00 1
3072 22 Jul 2009 10:00 1
3073 22 Jul 2009 11:00 1
3074 22 Jul 2009 12:01 1
3075 22 Jul 2009 13:00 1
3076 22 Jul 2009 14:00 1
Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 3075
Begin Snapshot Id specified: 3075
Enter value for end_snap: 3076
End Snapshot Id specified: 3076
Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is addmrpt_1_3075_3076.txt.
To use this name, press <return> to continue, otherwise enter an alternative.
Enter value for report_name:
Using the report name addmrpt_1_3075_3076.txt
Running the ADDM analysis on the specified pair of snapshots . . .
13 Automatic Storage Management
13.2 Limitations
13.3 ASM Architecture
➢ Striping: ASM divides data files into extents and spreads them evenly across the disks of a disk group.
➢ Mirroring: ASM does not mirror an entire disk, as in RAID; it mirrors database objects (extents).
➢ Auto-balancing: if a disk is added to a disk group, data is automatically moved to the new disk to balance the I/O.
ASM Instance
➢ An ASM instance is a set of processes and an SGA (usually 60 MB to 100 MB), just like a database instance, but it does not mount a database.
Processes
➢ An ASM instance does not have any data files and therefore has no data dictionary.
➢ It can be in NOMOUNT or MOUNT state only; in MOUNT state the ASM disk groups are made available.
➢ SYSOPER can start up and shut down the instance, mount or dismount a disk group, take a disk group online or offline, rebalance a disk group, perform integrity checks on a disk group, and access the V$ASM_* views.
➢ SYSDBA can perform all operations available to SYSOPER, plus more, such as creating and dropping disk groups and adding disks to disk groups.
➢ Each database server that has database files managed by ASM needs to be running an ASM
instance. A single ASM instance can service one or more single-instance databases on a stand-
alone server.
➢ Each ASM disk group can be shared among all the databases on the server.
➢ In a clustered environment, each node runs an ASM instance, and the ASM instances
communicate with each other on a peer-to-peer basis. This is true for both RAC environments
and non-RAC clustered environments where multiple single-instance databases across multiple
nodes share a clustered pool of storage that is managed by ASM. If a node is part of a Real
Application Clusters (RAC) system, the peer-to-peer communications service is already installed
on that server. If the node is part of a cluster where RAC is not installed, the Oracle
Clusterware, Cluster Ready Services (CRS), must be installed on that node.
➢ ASMB (ASM Background): handles communication between the database instance and the ASM instance. It is a foreground (client) process that connects to the ASM instance.
➢ Database makes initial contact with ASM instance to get information, but then accesses the
files directly.
➢ ASM instance can’t be shutdown while database instances accessing data files are running.
➢ When an ASM instance mounts a disk group, it registers the disk group and its connect string with Group Services. The database instance knows the name of the disk group and can therefore use it to look up the connect information for the correct ASM instance.
➢ +group/dbname/file_type/tag.file.incarnation
Example: +DATA2/shekhardb/datafile/users.276.1
✓ file_type identifies the kind of file (for example, datafile in the name above, or dumpset for Data Pump export files).
✓ tag carries type-specific information about the file.
✓ file.incarnation provides uniqueness.
✓ Storage hierarchy: disks (a whole disk or a partition) are grouped into failure groups, failure groups make up disk groups, and ASM extents (the physical unit) back the data files (the logical unit).
✓ Logical concepts such as table extents, segments, and tablespaces work the same way in ASM.
✓ An ASM file is always spread across every disk in its ASM disk group.
Striping
Mirroring
✓ External redundancy: no ASM mirroring; only one failure group within a disk group (non-critical systems).
✓ Normal redundancy: two-way mirroring; requires at least two failure groups within a disk group (critical systems).
✓ High redundancy: three-way mirroring; requires at least three failure groups within a disk group (highly critical systems).
Balancing
Data Dictionary views
➢ Before creating an ASM instance on a non-RAC server, configure the Cluster Synchronization Services (CSS) daemon by running localconfig add as root:
# export PATH=$PATH:/u01/app/oracle/product/10.2.0/bin
# localconfig add
/etc/oracle does not exist. Creating it now.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configuration for local CSS has been initialized
Adding to inittab
Startup will be queued to init within 30+60 seconds.
Checking the status of new Oracle init process...
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
localhost
CSS is active on all nodes
Oracle CSS service is installed and running under
init(1M)
#
Create init.ora
➢ INSTANCE_TYPE: In Oracle Database 10g you have two types of Oracle instances: RDBMS and ASM. RDBMS, of course, refers to normal Oracle databases, and ASM refers to the new ASM instance type. Set the INSTANCE_TYPE parameter to ASM. This will implicitly set the DB_UNIQUE_NAME parameter to +ASM.
➢ ASM_POWER_LIMIT: This is the maximum speed of this ASM instance during a rebalance disk
operation. This operation redistributes the data files evenly and balances I/O load across the
disks. The default is 1 and the range is from 1 to 11 (1 is slowest and 11 is fastest).
➢ ASM_DISKSTRING: This is the location where Oracle should look during the disk-discovery process. The format of the disk string may vary according to your operating system. You can specify a list of values; for instance, a string that limits ASM discovery to disks whose names end in s1 or s2 only (see the sample parameter file after this list).
➢ ASM_DISKGROUPS: Here you specify the names of any disk groups that you want to mount automatically at instance startup; the default value for this parameter is NULL.
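Putting the four parameters above together, a minimal sample parameter file for the ASM instance could look like the following; the disk paths and disk group names are assumptions:
# init+ASM.ora (sample)
instance_type   = asm
asm_power_limit = 1
# discover only disks whose names end in s1 or s2
asm_diskstring  = '/devices/*s1', '/devices/*s2'
# disk groups to mount automatically at startup
asm_diskgroups  = 'DGROUP1', 'DGROUP2'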
Commands
5 '/devices/disk3',
6 '/devices/disk4',
7 failgroup group2 disk
8 '/devices/disk5',
9 '/devices/disk6',
10 '/devices/disk7',
11 '/devices/disk8',
12 failgroup group3 disk
13 '/devices/disk9',
14 '/devices/disk10',
15 '/devices/disk11',
16 '/devices/disk12';
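Because the opening lines of the statement above fall on the previous page, here is a complete, minimal sketch of the CREATE DISKGROUP syntax; the disk group name, redundancy level, and disk paths are assumptions:
SQL> CREATE DISKGROUP dgroup1 NORMAL REDUNDANCY
       FAILGROUP group1 DISK '/devices/diska1', '/devices/diska2'
       FAILGROUP group2 DISK '/devices/diskb1', '/devices/diskb2';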
Init.ora
DB_CREATE_FILE_DEST = '+dgroup1'
DB_RECOVERY_FILE_DEST = '+dgroup2'
DB_RECOVERY_FILE_DEST_SIZE = 100G
SQL> CREATE TABLESPACE tbsp1 DATAFILE '+dgroup1';
➢ In Oracle Database 10g Release 2, you can also manage ASM using a command-line tool, which
gives you more flexibility than having to use SQL*Plus or the Database Control. To invoke the
command-line administrative tool, called asmcmd, enter this command (after the ASM
instance is started):
$ asmcmd
ASMCMD>
➢ The command-line tool has about a dozen commands you can use to manage ASM file systems, and it includes familiar UNIX/Linux-style commands such as ls and du (which reports ASM disk usage). To get a complete list of commands, type help at the ASMCMD> prompt. By typing help followed by a command name, you can get details about that command.
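For example (the disk group and directory names below follow the ASM file-name example given earlier and are assumptions):
ASMCMD> ls                              -- list mounted disk groups
ASMCMD> du +DATA2                       -- space used in the DATA2 disk group
ASMCMD> ls +DATA2/shekhardb/DATAFILE    -- list the data files of the shekhardb database
ASMCMD> help ls                         -- details about the ls command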
Migrating a database to ASM
Add the OMF parameters DB_CREATE_FILE_DEST, DB_RECOVERY_FILE_DEST, and DB_RECOVERY_FILE_DEST_SIZE to your database parameter file so you can use an OMF-based file system.
3. Delete the control file parameter from the SPFILE, since Oracle will create new control files in
the OMF file destinations by restoring them from the non-ASM database control files.
5. Restore the old control file in the new location, as shown here:
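A hedged sketch of the restore command referred to in step 5; the path of the old (non-ASM) control file is an assumption:
RMAN> RESTORE CONTROLFILE FROM '/u01/app/oracle/oradata/prod/control01.ctl';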
7. The following command will copy your database files into an ASM disk group:
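A hedged sketch of the copy command referred to in step 7, assuming the target disk group is +dgroup1 as in the parameter file shown earlier:
RMAN> BACKUP AS COPY DATABASE FORMAT '+dgroup1';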
8. Use the SWITCH command to switch all data files into the ASM disk group dgroup1:
RMAN> SWITCH DATABASE TO COPY;
At this point, all data files will be converted to ASM type. You still have your original data file copies
on disk, which you can use to restore your database if necessary.
RMAN> ALTER DATABASE OPEN;
10. For each redo log member, use the following command to move it to the ASM system:
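A hedged sketch of moving one redo log member into ASM for step 10; the group number and the old member path are assumptions (switch the log first if the group is current):
SQL> ALTER DATABASE ADD LOGFILE MEMBER '+dgroup1' TO GROUP 1;
SQL> ALTER DATABASE DROP LOGFILE MEMBER '/u01/app/oracle/oradata/prod/redo01.log';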
11. Archive the current online redo logs, and delete the old non-ASM redo logs. Since RMAN doesn’t
migrate temp files, you must manually create a temporary tablespace using the CREATE TEMPORARY
TABLESPACE statement. You’ll now have an ASM-based file system. You still have your old non-ASM
files as backups in the RMAN catalog.
14 Data-Guard
➢ Oracle Data Guard ensures high availability, data protection, and disaster recovery for
enterprise data.
➢ Data Guard provides a comprehensive set of services that create, maintain, manage, and monitor one or more standby databases to enable production Oracle databases to survive disasters and data corruptions.
➢ Data Guard maintains these standby databases as transactionally consistent copies of the production database.
➢ Data Guard can be used with traditional backup and restore operations to provide a high level of data protection and data availability.
➢ A Data Guard configuration consists of one production database and one or more standby
databases.
➢ The databases in a Data Guard configuration are connected by Oracle Net and may be
dispersed geographically.
➢ There are no restrictions on where the databases are located, provided they can communicate
with each other.
For Example
➢ You can have a standby database on the same system as the production database, along with two standby databases on other systems at remote locations.
➢ You can manage primary and standby databases using the SQL command-line interface or the Data Guard broker interfaces, including a command-line interface (DGMGRL) and a graphical user interface that is integrated into Oracle Enterprise Manager.
Primary Database
➢ A Data Guard configuration contains one production database, also referred to as the primary database, which functions in the primary role. This is the database that is accessed by most of your applications.
➢ The primary database can be either a single-instance Oracle database or an Oracle Real
Application Clusters database.
Standby Databases
➢ Once created, Data Guard automatically maintains each standby database by transmitting redo
data from the primary database and then applying the redo to the standby database.
➢ Like the primary database, a standby database can be either a single-instance Oracle database or an Oracle Real Application Clusters database. A standby database can be either a physical standby database or a logical standby database.
Physical Standby Database
➢ Provides a physically identical copy of the primary database, with on-disk database structures that are identical to the primary database on a block-for-block basis. The database schema, including indexes, is the same.
➢ A physical standby database is kept synchronized with the primary database through Redo Apply, which recovers the redo data received from the primary database and applies it to the physical standby database.
➢ A physical standby database can be used for business purposes other than disaster recovery on
a limited basis.
Logical Standby Database
➢ Contains the same logical information as the production database, although the physical organization and structure of the data can be different.
➢ The logical standby database is kept synchronized with the primary database through SQL Apply, which transforms the data in the redo received from the primary database into SQL statements and then executes those statements on the standby database.
➢ A logical standby database can be used for other business purposes in addition to disaster
recovery requirements. This allows users to access a logical standby database for queries and
reporting purposes at any time.
➢ Also, using a logical standby database, you can upgrade Oracle Database software and patch
sets with almost no downtime. Thus, a logical standby database can be used concurrently for
data protection, reporting, and database upgrades.
➢ Data Guard provides three sets of services that manage the transmission of redo data, the application of redo data, and changes to the database roles:
(1) Redo Transport Services
➢ Control the automated transfer of redo data from the production database to one or more archival destinations.
(2) Log Apply Services
➢ Apply redo data on the standby database to maintain transactional synchronization with the primary database.
➢ Redo data can be applied either from archived redo log files or, if real-time apply is enabled, directly from the standby redo log files as they are being filled, without requiring the redo data to be archived first at the standby database.
(3) Role Transitions
➢ Change the role of a database from a standby database to a primary database, or from a
primary database to a standby database using either a switchover or a failover operation.
➢ Redo transport services control the automated transfer of redo data from the production
database to one or more archival destinations.
➢ Transmit redo data from the primary system to the standby systems in the configuration.
➢ Manage the process of resolving any gaps in the archived redo log files due to a network
failure.
➢ Automatically detect missing or corrupted archived redo log files on a standby system and
automatically retrieve replacement archived redo log files from the primary database or
another standby database.
➢ The redo data transmitted from the primary database is written on the standby system into
standby redo log files, if configured, and then archived into archived redo log files.
➢ Log apply services automatically apply the redo data on the standby database to maintain
consistency with the primary database.
➢ The main difference between physical and logical standby databases is the manner in which
log apply services apply the archived redo data:
➢ For physical standby databases, Data Guard uses Redo Apply technology, which applies redo
data on the standby database using standard recovery techniques of an Oracle database.
➢ For logical standby databases, Data Guard uses SQL Apply technology, which first transforms the received redo data into SQL statements and then executes the generated SQL statements on the logical standby database.
(3) Role Transitions
➢ Using Data Guard, you can change the role of a database using either a switchover or a failover
operation.
Switchover
➢ A switchover is a role reversal between the primary database and one of its standby databases. A switchover ensures no data loss. It is typically done for planned maintenance of the primary system.
➢ During a switchover, the primary database transitions to a standby role, and the standby database transitions to the primary role. The transition occurs without having to re-create either database.
Failover
➢ A failover is performed when the primary database becomes unavailable. Failover is performed only in the event of a catastrophic failure of the primary database, and it results in the transition of a standby database to the primary role.
➢ The database administrator can configure Data Guard to ensure no data loss.
➢ The role transitions are invoked manually using SQL statements or the Data Guard broker interfaces.
➢ On the remote destination, the remote file server (RFS) process writes the redo data into a standby redo log file, which is then archived into an archived redo log file.
➢ Log apply services use Redo Apply (the MRP process) or SQL Apply (the LSP process) to apply the redo to the standby database.
➢ In some situations, avoiding any data loss is paramount; in other situations, the availability of the database may be more important than the loss of a small amount of data. Some applications require maximum database performance and can tolerate a small amount of data loss. To balance these needs, Data Guard provides three protection modes:
➢ Maximum Protection
➢ Maximum Availability
➢ Maximum Performance
Maximum Protection
➢ This protection mode ensures that no data loss will occur if the primary database fails.
➢ To provide this level of protection, the redo data needed to recover each transaction must be
written to both the local online redo log and to the standby redo log on at least one standby
database before the transaction commits.
Maximum Availability
➢ This protection mode provides the highest level of data protection that is possible without
compromising the availability of the primary database.
➢ Like maximum protection mode, a transaction will not commit until the redo needed to
recover that transaction is written to the local online redo log and to the standby redo log of at
least one transactionally consistent standby database.
➢ Unlike maximum protection mode, the primary database does not shut down if a fault prevents it from writing its redo data to a remote standby redo log. Instead, the primary database operates in maximum performance mode until the fault is corrected and all gaps in the redo log files are resolved.
➢ When all gaps are resolved, the primary database automatically resumes operating in
maximum availability mode.
➢ This mode ensures that no data loss will occur if the primary database fails, but only if a
second fault does not prevent a complete set of redo data from being sent from the primary
database to at least one standby database.
Maximum Performance
➢ This protection mode (the default) provides the highest level of data protection that is possible
without affecting the performance of the primary database.
➢ This is accomplished by allowing a transaction to commit as soon as the redo data needed to
recover that transaction is written to the local online redo log.
➢ The maximum protection and maximum availability modes require that standby redo log files
are configured on at least one standby database in the configuration.
➢ All three protection modes require that specific log transport attributes be specified on the
LOG_ARCHIVE_DEST_n initialization parameter to send redo data to at least one standby
database.
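As an illustration, a hedged sketch of switching the configuration to maximum availability; the service and DB_UNIQUE_NAME stan4 match the parameter files later in this section, and SYNC/AFFIRM are the transport attributes this mode requires:
SQL> ALTER SYSTEM SET log_archive_dest_2='SERVICE=stan4 SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stan4' SCOPE=BOTH;
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;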
14.1 Steps
Primary Side
Add the following parameters to the init.ora file:
background_dump_dest='/u01/app/oracle/admin/prim4/bdump'
compatible='10.2.0'
control_files='/u01/app/oracle/oradata/prim4/cf/c1.ctl'
db_file_name_convert='/u01/app/oracle/oradata/stan4/df','/u01/app/oracle/oradata/prim4/df'
db_name='prim4'
db_unique_name='prim4'
log_archive_config='dg_config=(prim4,stan4)'
fal_client='prim4'
fal_server='stan4'
log_archive_dest_1='LOCATION=/u01/app/oracle/oradata/prim4/af valid_for=(all_logfiles,all_roles)
db_unique_name=prim4'
log_archive_dest_2='SERVICE=stan4 valid_for=(online_logfiles,primary_role)
db_unique_name=stan4'
log_archive_dest_state_1='ENABLE'
log_archive_dest_state_2='ENABLE'
log_file_name_convert='/u01/app/oracle/oradata/stan4/rf','/u01/app/oracle/oradata/prim4/rf'
sga_max_size=300m
sga_target=250m
standby_file_management='auto'
undo_management='auto'
undo_tablespace='undotbs'
user_dump_dest='/u01/app/oracle/admin/prim4/udump'
remote_login_passwordfile=exclusive
Configure the listener.ora on the primary host (172.24.0.180):
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = prim)
(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
(SID_NAME = prim)
)
(SID_DESC =
(GLOBAL_DBNAME = stan)
(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
(SID_NAME = stan)
)
)
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.24.0.180)(PORT = 1521))
)
)
Add tnsnames.ora entries for both databases on the primary host:
prim =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.24.0.180)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = prim)
)
)
stan =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.24.0.181)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = stan)
)
)
Configure the listener.ora on the standby host (172.24.0.181):
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = stan)
(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
)
(SID_DESC =
(SID_NAME = prim)
(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
)
)
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.24.0.181)(PORT = 1521))
)
)
Add tnsnames.ora entries for both databases on the standby host:
stan =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.24.0.181)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = stan)
)
)
prim =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.24.0.180)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = prim)
)
)
SQL> startup
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
SQL> select force_logging from v$database;
FOR
---
YES
/u01/app/oracle/product/10.2.0/db_1/dbs
[oracle@station31 dbs]$ ls
SQL> alter system set remote_login_passwordfile=exclusive scope=spfile;
System altered.
SQL> startup
usres01.dbf 100% 100MB 12.5MB/s 00:08
oracle@172.24.0.181's password:
redo01.log 100% 10MB 10.0MB/s 00:01
redo02.log 100% 10MB 10.0MB/s 00:01
oracle@172.24.0.181's password:
oracle@172.24.0.181:/u01/app/oracle/product/10.2.0/db_1/dbs/orapwstan
oracle@172.24.0.181's password:
Copy the pfile from the primary database to the standby server, rename it, and edit it as follows:
background_dump_dest='/u01/app/oracle/admin/stan4/bdump'
compatible='10.2.0'
control_files='/u01/app/oracle/oradata/stan4/cf/stan4.ctl'
db_file_name_convert='/u01/app/oracle/oradata/prim4/df','/u01/app/oracle/oradata/stan4/df'
db_name='prim4'
db_unique_name='stan4'
fal_client='stan4'
fal_server='prim4'
log_archive_config='dg_config=(prim4,stan4)'
log_archive_dest_1='LOCATION=/u01/app/oracle/oradata/stan4/af valid_for=(all_logfiles,all_roles)
db_unique_name=stan4'
log_archive_dest_2='SERVICE=prim4 valid_for=(online_logfiles,primary_role)
db_unique_name=prim4'
log_archive_dest_state_1='ENABLE'
log_archive_dest_state_2='ENABLE'
log_file_name_convert='/u01/app/oracle/oradata/prim4/rf','/u01/app/oracle/oradata/stan4/rf'
remote_login_passwordfile='exclusive'
sga_max_size=300m
sga_target=250m
standby_file_management='auto'
undo_management='auto'
undo_tablespace='undotbs'
user_dump_dest='/u01/app/oracle/admin/stan4/udump'
oracle@172.24.0.181's password:
Standby Side
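The standby-side commands themselves are not shown on this page; a hedged sketch of the usual sequence, with the control file path as an assumption (the standby control file is created on the primary and copied across first), is:
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/stan4.ctl';   -- run on the primary
SQL> STARTUP NOMOUNT;                                                 -- on the standby
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;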
Automatic archival Enabled
Archive destination /u01/app/oracle/oradata/stan/af
Oldest online log sequence 65
Next log sequence to archive 66
Current log sequence 66
no rows selected
Current log sequence 68
RECOVERY_MODE
-----------------------
MANAGED
10 rows selected.
PRIM>>archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/app/oracle/oradata/prim/af
Oldest online log sequence 70
Next log sequence to archive 71
Current log sequence 71
PRIM>>conn rash/rash
Connected.
1 row created.
PRIM>>/
Enter value for id: 20
Enter value for n: riaz
old 1: insert into emp values (&id,'&n')
new 1: insert into emp values (20,'riaz')
1 row created.
PRIM>>commit;
Commit complete.
ID NAME
---------- ----------------------
10 rash
20 riaz
PRIM>>conn / as sysdba
Connected.
PRIM>>alter system switch logfile;
System altered.
PRIM>>/
System altered.
USERNAME DEFAULT_TABLESPACE
------------------------------ ------------------------------
OUTLN SYSTEM
SYS SYSTEM
SYSTEM SYSTEM
RASH USERS
KAJAL USERS
DBSNMP SYSAUX
TSMSYS USERS
DIP USERS
8 rows selected.
DEST_ID STATUS DESTINATION
------- ------ --------------------------------
      1 VALID  /u01/app/oracle/oradata/stan/af
      2 VALID  prim
DATABASE_ROLE SWITCHOVER_STATUS
-------------------------------------------------------------------------
PRIMARY TO STANDBY
PRIM>>shut immediate
Note: Standby database either mounted in redo apply mode or open for read only access.
PRIM>>startup mount
DATABASE_ROLE SWITCHOVER_STATUS
-------------------------------------------------------------------------------------
PRIMARY TO STANDBY
DATABASE_ROLE SWITCHOVER_STATUS
------------------------------------------------------------------------
PHYSICAL STANDBY TO PRIMARY
RECOVERY_MODE
--------------------------------
MANAGED
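The switchover commands behind the status output above are not shown; a hedged sketch of the usual sequence is:
-- On the primary (SWITCHOVER_STATUS shows TO STANDBY):
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
-- On the old standby (SWITCHOVER_STATUS shows TO PRIMARY):
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
SQL> ALTER DATABASE OPEN;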
15 Upgrade Oracle database 10g R2 to 11g R2
This is a test of upgrading Oracle Database 10gR2 to 11gR2 from the command line. Make sure the source database is at version 10.2.0.2 or higher before starting.
SQL> select instance_name from v$instance;
INSTANCE_NAME
----------------
oradb
SQL> @utlu112i.sql
Oracle Database 11.2 Pre-Upgrade Information Tool 11-23-2010 10:03:59
Script Version: 11.2.0.2.0 Build: 001
.
**********************************************************************
Database:
**********************************************************************
✓ name: ORADB
✓ version: 10.2.0.5.0
✓ compatible: 10.2.0.5.0
✓ blocksize: 8192
✓ platform: Linux x86 64-bit
✓ timezone file: V4
**********************************************************************
Tablespaces: [make adjustments in the current environment]
**********************************************************************
✓ SYSTEM tablespace is adequate for the upgrade.
✓ minimum required size: 693 MB
✓ UNDOTBS1 tablespace is adequate for the upgrade.
✓ minimum required size: 467 MB
✓ SYSAUX tablespace is adequate for the upgrade.
✓ minimum required size: 481 MB
✓ TEMP tablespace is adequate for the upgrade.
✓ minimum required size: 61 MB
.
**********************************************************************
Flashback: OFF
**********************************************************************
**********************************************************************
Update Parameters: [Update Oracle Database 11.2 init.ora or spfile]
Note: Pre-upgrade tool was run on a lower version 64-bit database.
**********************************************************************
✓ If Target Oracle is 32-Bit, refer here for Update Parameters:
✓ No update parameter changes are required.
✓ If Target Oracle is 64-Bit, refer here for Update Parameters:
✓ No update parameter changes are required.
.
**********************************************************************
Renamed Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
✓ No renamed parameters found. No changes are required.
**********************************************************************
Obsolete/Deprecated Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
✓ background_dump_dest 11.1 DEPRECATED replaced by "diagnostic_dest"
✓ user_dump_dest 11.1 DEPRECATED replaced by "diagnostic_dest"
.
**********************************************************************
Components: [The following database components will be upgraded or installed]
**********************************************************************
✓ Oracle Catalog Views [upgrade] VALID
✓ Oracle Packages and Types [upgrade] VALID
✓ JServer JAVA Virtual Machine [upgrade] VALID
✓ Oracle XDK for Java [upgrade] VALID
✓ Oracle Workspace Manager [upgrade] VALID
✓ OLAP Analytic Workspace [upgrade] VALID
✓ OLAP Catalog [upgrade] VALID
✓ EM Repository [upgrade] VALID
✓ Oracle Text [upgrade] VALID
✓ Oracle XML Database [upgrade] VALID
✓ Oracle Java Packages [upgrade] VALID
✓ Oracle interMedia [upgrade] VALID
✓ Spatial [upgrade] VALID
✓ Data Mining [upgrade] VALID
✓ Expression Filter [upgrade] VALID
✓ Rule Manager [upgrade] VALID
✓ Oracle OLAP API [upgrade] VALID
**********************************************************************
Miscellaneous Warnings
**********************************************************************
WARNING: Database is using a timezone file older than version 14.
.... After the release migration, it is recommended that DBMS_DST package
.... be used to upgrade the 10.2.0.5.0 database timezone version
.... to the latest version which comes with the new release.
**********************************************************************
Recommendations
**********************************************************************
✓ Oracle recommends gathering dictionary statistics prior to upgrading the database.
✓ To gather dictionary statistics execute the following command while connected as SYSDBA:
EXECUTE dbms_stats.gather_dictionary_stats;
**********************************************************************
✓ Oracle recommends reviewing any defined events prior to upgrading.
✓ To view existing non-default events execute the following commands while connected AS
SYSDBA:
Events:
Trace Events:
Start to upgrade:
SQL> startup upgrade;
ORACLE instance started.
SQL> @?/rdbms/admin/catupgrd.sql
SQL> spool off
➢ This is the post-upgrade script; it is only necessary when upgrading from release 10.1 or later:
SQL> @?/rdbms/admin/catuppst.sql
➢ Recompile
SQL> @?/rdbms/admin/utlrp.sql
➢ During recompilation: check number of invalid objects
SQL> SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);
Oracle Database 11.2 Post-Upgrade Status Tool 11-23-2010 15:40:48
...
Total Upgrade Time: 04:00:52
Compare invalid objects scripts
SQL> @?/rdbms/admin/utluiobj.sql
Create pfile from spfile and modify some parameters
SQL> create pfile='/tmp/pfile' from spfile;
File created.
Adjust time zone data
SQL> startup upgrade
Database opened.
SQL> exec dbms_dst.begin_upgrade(new_version => 11);
PL/SQL procedure successfully completed.
SQL> shutdown immediate;
SQL> startup
Database opened.
SQL> set serveroutput on;
SQL> declare num_of_failures number;
begin
dbms_dst.upgrade_database(num_of_failures);
dbms_output.put_line(num_of_failures);
dbms_dst.end_upgrade(num_of_failures);
dbms_output.put_line(num_of_failures);
end;
/
Check
-- Check Oracle Version
SQL> select instance_name from v$instance;
INSTANCE_NAME
----------------
oradb
SQL> select * from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
Then copy and check the listener.ora, tnsnames.ora, and other configuration files from the old Oracle home to the new home.