
RMAN Incremental Backups

RMAN incremental backups back up only those datafile blocks that have changed since a specified previous backup.
We can make incremental backups of databases, individual tablespaces, or datafiles.
The primary reasons for making incremental backups part of your strategy are:

- For use in a strategy based on incrementally updated backups, where these incremental backups are used to periodically roll forward an image copy of the database

- To reduce the amount of time needed for daily backups

- To save network bandwidth when backing up over a network

- To be able to recover changes to objects created with the NOLOGGING option. For example, direct load inserts do not create redo log entries, so their changes cannot be reproduced with media recovery. They do, however, change data blocks, and so are captured by incremental backups.

- To reduce backup sizes for NOARCHIVELOG databases. Instead of making a whole database backup every time, you can make incremental backups.
As with full backups, if you are in ARCHIVELOG mode, you can make incremental backups if the database is
open; if the database is in NOARCHIVELOG mode, then you can only make incremental backups after a
consistent shutdown.
One effective strategy is to make incremental backups to disk, and then back up the resulting backup sets to a
media manager with BACKUP AS BACKUPSET. The incremental backups are generally smaller than full
backups, which limits the space required to store them until they are moved to tape. Then, when the incremental
backups on disk are backed up to tape, it is more likely that tape streaming can be sustained because all blocks
of the incremental backup are copied to tape. There is no possibility of delay due to time required for RMAN to
locate changed blocks in the datafiles.
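As a hedged sketch of that strategy (assuming an SBT media-manager channel is already configured; the tag name is illustrative, not from the original text):

RMAN> BACKUP DEVICE TYPE DISK INCREMENTAL LEVEL 1 DATABASE TAG 'daily_incr';
RMAN> BACKUP DEVICE TYPE sbt BACKUPSET ALL;

The first command writes the small daily incremental to disk; the second streams the accumulated backup sets to tape in one continuous pass.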
RMAN backups can be classified in these ways:

Full or incremental

Open or closed

Consistent or inconsistent

Note that these backup classifications apply only to datafile backups. Backups of other files, such as archivelogs
and control files, always include the complete file and are never inconsistent.
Backup Type: Definition

Full: A backup of a datafile that includes every allocated block in the file being backed up. A full backup of a datafile can be an image copy, in which case every data block is backed up. It can also be stored in a backup set, in which case datafile blocks not in use may be skipped. A full backup cannot be part of an incremental backup strategy; that is, it cannot be the parent for a subsequent incremental backup.

Incremental: An incremental backup is either a level 0 backup, which includes every block in the file except blocks compressed out because they have never been used, or a level 1 backup, which includes only those blocks that have been changed since the parent backup was taken. A level 0 incremental backup is physically identical to a full backup. The only difference is that the level 0 backup is recorded as an incremental backup in the RMAN repository, so it can be used as the parent for a level 1 backup.

Open: A backup of online, read/write datafiles when the database is open.

Closed: A backup of any part of the target database when it is mounted but not open. Closed backups can be consistent or inconsistent.

Consistent: A backup taken when the database is mounted (but not open) after a normal shutdown. The checkpoint SCNs in the datafile headers match the header information in the control file. None of the datafiles has changes beyond its checkpoint. Consistent backups can be restored without recovery. Note: If you restore a consistent backup and open the database in read/write mode without recovery, transactions after the backup are lost. You still need to perform an OPEN RESETLOGS.

Inconsistent: A backup of any part of the target database when it is open, or when a crash occurred or SHUTDOWN ABORT was run prior to mounting. An inconsistent backup requires recovery to become consistent.
The goal of an incremental backup is to back up only those data blocks that have changed since a previous
backup. You can use RMAN to create incremental backups of datafiles, tablespaces, or the whole database.
During media recovery, RMAN examines the restored files to determine whether it can recover them with an
incremental backup. If it has a choice, then RMAN always chooses incremental backups over archived logs, as
applying changes at a block level is faster than reapplying individual changes.
RMAN does not need to restore a base incremental backup of a datafile in order to apply incremental backups to
the datafile during recovery. For example, you can restore non-incremental image copies of the datafiles in the
database, and RMAN can recover them with incremental backups.
Incremental backups allow faster daily backups, use less network bandwidth when backing up over a network,
and provide better performance when tape I/O bandwidth limits backup performance. They also allow recovery of
database changes not reflected in the redo logs, such as direct load inserts. Finally, incremental backups can be
used to back up NOARCHIVELOG databases, and are smaller than complete copies of the database (though
they still require a clean database shutdown).
One effective strategy is to make incremental backups to disk (as image copies), and then back up these image
copies to a media manager with BACKUP AS BACKUPSET. Then, you do not have the problem of keeping the
tape streaming that sometimes occurs when making incremental backups directly to tape. Because incremental
backups are not as big as full backups, you can create them on disk more easily.
Incremental Backup Algorithm
Each data block in a datafile contains a system change number (SCN), which is the SCN at which the most
recent change was made to the block. During an incremental backup, RMAN reads the SCN of each data block
in the input file and compares it to the checkpoint SCN of the parent incremental backup. If the SCN in the input
data block is greater than or equal to the checkpoint SCN of the parent, then RMAN copies the block.
Note that if you enable the block change tracking feature, RMAN can refer to the change tracking file to identify
changed blocks in datafiles without scanning the full contents of the datafile. Once enabled, block change
tracking does not alter how you take or use incremental backups, other than offering increased performance.
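For example, block change tracking can be enabled and verified as follows (the file location is illustrative):

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/app/oracle/rman_change_track.f';
SQL> SELECT status, filename FROM v$block_change_tracking;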
Level 0 and Level 1 Incremental Backups
Incremental backups can be either level 0 or level 1. A level 0 incremental backup, which is the base for
subsequent incremental backups, copies all blocks containing data, backing the datafile up into a backup set just
as a full backup would. The only difference between a level 0 incremental backup and a full backup is that a full
backup is never included in an incremental strategy.
A level 1 incremental backup can be either of the following types:

- A differential backup, which backs up all blocks changed after the most recent incremental backup at level 1 or 0

- A cumulative backup, which backs up all blocks changed after the most recent incremental backup at level 0

Incremental backups are differential by default.
Note:
Cumulative backups are preferable to differential backups when recovery time is more important than disk space,
because during recovery each differential backup must be applied in succession. Use cumulative incremental
backups instead of differential, if enough disk space is available to store cumulative incremental backups.
The size of the backup file depends primarily on the number of blocks modified and the incremental backup level.
Differential Incremental Backups
In a differential level 1 backup, RMAN backs up all blocks that have changed since the most recent cumulative or
differential incremental backup, whether at level 1 or level 0. RMAN determines which level 1 backup occurred
most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all
blocks changed since the level 0 backup.
The following command performs a level 1 differential incremental backup of the database:
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;

If no level 0 backup is available, then the behavior depends upon the compatibility mode setting. If compatibility is
>= 10.0.0, RMAN copies all blocks changed since the file was created, and stores the results as a level 1 backup;
in other words, the parent SCN for the incremental backup is the file creation SCN. If compatibility is
< 10.0.0, RMAN generates a level 0 backup of the file contents at the time of the backup, to be consistent with the
behavior in previous releases.

In the example shown in the figure above, the following occurs:

Sunday
An incremental level 0 backup backs up all blocks that have ever been in use in this database.

Monday - Saturday

On each day from Monday through Saturday, a differential incremental level 1 backup backs up all blocks that
have changed since the most recent incremental backup at level 1 or 0. The Monday backup copies blocks
changed since the Sunday level 0 backup, the Tuesday backup copies blocks changed since the Monday level 1
backup, and so forth.

The cycle is repeated for the next week.
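In practice, each run in such a cycle is often combined with an archived log backup. A minimal sketch (standard RMAN syntax; scheduling is left to cron or Oracle Scheduler):

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;                     # Sunday
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE PLUS ARCHIVELOG;     # Monday through Saturday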

Cumulative Incremental Backups
In a cumulative level 1 backup, RMAN backs up all blocks changed since the most recent level 0 incremental
backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need
one incremental backup from any particular level. Cumulative backups require more space and time than
differential backups, however, because they duplicate the work done by previous backups at the same level.
The following command performs a cumulative level 1 incremental backup of the database:
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE; # blocks changed since level 0

In the example shown in the figure, the following occurs:

Sunday
An incremental level 0 backup backs up all blocks that have ever been in use in this database.

Monday - Saturday
A cumulative incremental level 1 backup copies all blocks changed since the most recent level 0 backup.
Because the most recent level 0 backup was created on Sunday, the level 1 backup on each day Monday
through Saturday backs up all blocks changed since the Sunday backup.

The cycle is repeated for the next week.

Basic Incremental Backup Strategy
Choose a backup scheme according to an acceptable MTTR (mean time to recover). For example, you can
implement a three-level backup scheme so that a full or level 0 backup is taken monthly, a cumulative level 1 is
taken weekly, and a differential level 1 is taken daily. In this scheme, you never have to apply more than a day's
worth of redo for complete recovery.

When deciding how often to take full or level 0 backups, a good rule of thumb is to take a new level 0 whenever
50% or more of the data has changed. If the rate of change to your database is predictable, then you can observe
the size of your incremental backups to determine when a new level 0 is appropriate. For example, if you only
create level 1 cumulative backups, then when the most recent level 1 backup is about half of the size of the base
level 0 backup, take a new level 0. The following query displays the number of blocks written to a backup set for
each datafile with at least 50% of its blocks backed up:

SELECT FILE#, INCREMENTAL_LEVEL, COMPLETION_TIME, BLOCKS, DATAFILE_BLOCKS
FROM V$BACKUP_DATAFILE
WHERE INCREMENTAL_LEVEL > 0 AND BLOCKS / DATAFILE_BLOCKS > .5
ORDER BY COMPLETION_TIME;

Compare the number of blocks in differential or cumulative backups to a base level 0 backup.

Making Incremental Backups: BACKUP INCREMENTAL

After starting RMAN, run the BACKUP INCREMENTAL command at the RMAN prompt. This example makes a
level 0 incremental backup of the database:
BACKUP INCREMENTAL LEVEL 0 DATABASE;

This example makes a differential level 1 backup of the SYSTEM tablespace and datafile tools01.dbf. It will only
back up those data blocks changed since the most recent level 1 or level 0 backup:
BACKUP INCREMENTAL LEVEL 1 TABLESPACE SYSTEM DATAFILE 'ora_home/oradata/trgt/tools01.dbf';

This example makes a cumulative level 1 backup of the tablespace users, backing up all blocks changed since
the most recent level 0 backup:
BACKUP INCREMENTAL LEVEL = 1 CUMULATIVE TABLESPACE users;

Incrementally Updated Backups: Rolling Forward Image Copy Backups

A backup strategy based on incrementally updated backups can help minimize the time required for media
recovery of your database. Oracle's Incrementally Updated Backups feature lets you avoid the overhead of taking
full image copy backups of datafiles, while providing the same recovery advantages as image copy backups.

At the beginning of a backup strategy, RMAN creates an image copy backup of the datafile. Then, at regular
intervals, such as daily, level 1 incremental backups are taken and applied to the image copy backup, rolling it
forward to the point in time when the level 1 incremental was created. During restore and recovery of the
database, RMAN can restore from this incrementally updated copy and then apply changes from the redo log,
with the same results as restoring the database from a full backup taken at the SCN of the most recently applied
incremental level 1 backup. For example, if you run scripts to implement this strategy daily, then at recovery time,
you never have more than one day of redo to apply.
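The rolling-forward strategy is typically scripted with tagged copies; the following is a minimal sketch (the tag name is illustrative):

RUN {
  RECOVER COPY OF DATABASE WITH TAG 'incr_upd';
  BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_upd' DATABASE;
}

On the first run, the BACKUP command creates a level 0 image copy; on later runs it creates level 1 incrementals, which the RECOVER COPY command applies to roll the copy forward.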

Oracle 11g New Features

Oracle Scheduler
- E-mail Notification: Oracle Database 11g Release 2 (11.2) users can now get e-mail notifications on any job activity.
- File Watcher: File watcher enables jobs to be triggered when a file arrives on a given machine.

Datapump
- New options in Datapump: DATA_OPTIONS, REMAP_DATA, REMAP_TABLE, ENCRYPTION, ENCRYPTION_MODE, ENCRYPTION_ALGORITHM, PARTITION_OPTIONS, TRANSPORTABLE export/import, REUSE_DUMPFILES.
- Compression: in Oracle 10g, only metadata can be compressed. From 11g, both data & metadata can be compressed. The dumpfile will be uncompressed automatically before importing.
- Encryption: the dumpfile can be encrypted while creating. This encryption occurs on the entire dumpfile, not just on the encrypted columns as it was in Oracle Database 10g.
- Masking: when we import data from production to test or development instances, we have to make sure sensitive data such as credit card details are obfuscated/remapped (altered in such a way that they are not identifiable). Data Pump enables us to do that by creating a masking function and then using it (REMAP_DATA) during import.
- In Datapump import, we can specify how the partitions should transform by using PARTITION_OPTIONS.
- REUSE_DUMPFILES is a new option in Datapump export interactive mode.

Other changes
- DDL wait option: Oracle will automatically wait for the specified time period during DDL operations and will try to run the DDL again. SQL> ALTER SYSTEM/SESSION SET DDL_LOCK_TIMEOUT = n;
- New DIAGNOSTIC_DEST parameter as replacement for BACKGROUND_DUMP_DEST, CORE_DUMP_DEST and USER_DUMP_DEST. It defaults to $ORACLE_BASE/diag/.
- From Oracle 11g, the server parameter file (spfile) is in a new format that is compliant with Oracle Hardware Assisted Resilient Data (HARD).
- The SID clause in the "alter system reset" command is optional: SQL> alter system reset parameter-name [SID='instance-name'];
- Parameter (p) file & server parameter (sp) file can be created from memory: SQL> create pfile[=location] from memory; SQL> create spfile[=location] from memory;
- From Oracle 11g, we have two alert log files. One is the traditional alert_SID.log (in DIAGNOSTIC_DEST/trace) and the other one is a log.xml file (in DIAGNOSTIC_DEST/alert). The xml file gives a lot more information than the traditional alert log file. If log.xml reaches 10MB in size, it will be renamed and a new alert log file is created. log.xml can be accessed from the ADR command line: ADRCI> show alert
- We can have logging information for DDL operations in the alert log files. This is not enabled by default; we must change the new parameter to TRUE: SQL> ALTER SYSTEM SET enable_ddl_logging=TRUE SCOPE=BOTH;
- From 11g, we can create extended statistics, not only on single columns but also (i) on expressions of values and (ii) on multiple columns (column groups).
- We can define statistics to be pending, which means newly gathered statistics will not be published or used by the optimizer, giving us an opportunity to test the new statistics before we publish them.

Data Guard improvements
- Oracle Active Data Guard: standby databases can now simultaneously be in read and recovery mode, so use it for running reports 24x7.
- Snapshot standby database: a physical standby database can be temporarily converted into an updateable one, called a snapshot standby database.
- We can compress the redo data that goes to the standby server, by setting compression=enable.
- When transferring redo data to standby, if the standby does not respond in time, the log transferring service will wait for the specified timeout value (set by net_timeout=n) and then give up.
- New package and procedure, DBMS_DG.INITIATE_FS_FAILOVER, introduced to programmatically initiate a failover.
- Logical standby databases now support XML and CLOB datatypes, as well as transparent data encryption.
- From Oracle 11g, logical standby provides support for DBMS_SCHEDULER.
- Offload: complete database and fast incremental backups; incremental backup on a readable physical standby.
- Online upgrades: test on standby and roll to primary.
- Creation of a physical standby has become easier.

Other improvements
- LogMiner can be accessed from Oracle Enterprise Manager.
- Analytic Workspace Manager (AWM): a tool to manage OLAP objects in the database.
- Default value for audit_trail is DB. By default some system privileges will be audited.
- Users with default passwords can be found in DBA_USERS_WITH_DEFPWD.
- Hash values of the passwords in DBA_USERS (and in ALL_USERS and USER_USERS) will be blank. If you want to see the value, query USER$.
- From Oracle 11g, we can control archive log deletion by setting the log_auto_delete initialization parameter to TRUE. The log_auto_delete parameter must be coupled with the log_auto_del_retention_target parameter to specify the number of minutes an archivelog is maintained until it is purged. Default is 24 hours (1440 minutes).
- Table level control of the CBO statistics refresh threshold:
SQL> exec dbms_stats.set_table_prefs('HR', 'EMP', 'STALE_PERCENT', '20');
- Flashback Data Archive: flashback will make use of flashback logs explicitly created for that table in the FRA (Flash/Fast Recovery Area), and will not use undo. Flashback data archives can be defined on any table/tablespace, and can be purged at regular intervals automatically. Flashback data archives are written by a dedicated background process called FBDA, so there is less impact on performance.

SecureFiles
SecureFiles provide faster access to unstructured data than normal file systems, and provide the benefits of both LOBs and external files. For example, write access to SecureFiles is faster than a standard Linux file system, while read access is about the same. SecureFiles can be encrypted for security, de-duplicated and compressed for more efficient storage, cached (or not) for faster access (or to save the buffer cache space), and logged at several levels to reduce the mean time to recover (MTTR) after a crash.
To create SecureFiles: (i) the initialization parameter db_securefile should be set to PERMITTED (the default value); (ii) the tablespace where we are creating the securefile should be Automatic Segment Space Management (ASSM) enabled (the default mode in Oracle Database 11g).
create table table-name (... lob-column lob-type ...)
lob (lob-column) store as securefile
[deduplicate] [compress high/low] [encrypt using 'encryption-algorithm'] [cache/nocache] [logging/nologging];

Real Application Testing (RAT)
Real Application Testing (RAT) will make decision making easier in migration, upgrades, patching, initialization parameter changes, object changes, hardware replacements, and operating system changes and moving to a RAC environment. RAT consists of two components:
- Database Replay: capture the production workload and replay it on a different (standby/test/development) environment. Capture the activities from the source database in the form of capture files in a capture directory, transfer these files to the target box, and replay the process on the target database.
- SQL Performance Analyzer (SPA): identifies SQL execution plan changes and performance regressions. SPA allows us to get results of some specific SQL or an entire SQL workload against various types of changes, such as initialization parameter changes, optimizer statistics refresh, and database upgrades, and then produces a comparison report to help us assess their impact. Accessible through Oracle Enterprise Manager or the dbms_sqlpa package.

Other features
- Automatic Diagnostic Repository (ADR): automated capture of fault diagnostics for faster fault resolution. The location of the files depends on the DIAGNOSTIC_DEST parameter.
- Repair advisors to guide DBAs through the fault diagnosis and resolution process.
- Health Monitor (HM) utility: an automation of the dbms_repair corruption detection utility. Checkers: DB Structure Integrity Checker, Data Block Integrity Checker, Redo Integrity Checker, Undo Segment Integrity Checker, Transaction Integrity Checker, Dictionary Integrity Checker. This can be managed from Database Control or the command line. For the command line, execute $ ./adrci
- hangman utility: hangman (Hang Manager) utility to detect database bottlenecks.
- Real-time SQL Monitoring: allows us to see the different metrics of the SQL being executed in real time. The stats are exposed through V$SQL_MONITOR, which is refreshed every second.
- Online application upgrades and hot patching. Feature-based patching is also available.
- 11g SQL Access Advisor provides recommendations with respect to the entire workload, including considering the cost of creating and maintaining access structures.
- Temporary tablespace or its tempfile can be shrunk, up to a specified size:
SQL> alter tablespace temp-tbs shrink space;
SQL> alter tablespace temp-tbs shrink space keep n{K|M|G|T|P|E};
SQL> alter tablespace temp-tbs shrink tempfile '.../temp03.dbf';
We can check free temp space in the new view DBA_TEMP_FREE_SPACE.
- From 11g, while creating global temporary tables, we can specify TEMPORARY tablespaces.
- New binary XML datatype, a new XML index & better XQuery support.
- "Duality" between SQL and XML: users can embed XML within PL/SQL and vice versa.
- Query rewriting will occur more frequently and for remote tables also.
- The dbms_stats package has several new procedures to aid in supplementing histogram data: dbms_stats.create_extended_stats, dbms_stats.show_extended_stats_name and dbms_stats.drop_extended_stats. The state of these extended histograms can be seen in the user_tab_col_statistics view.
- New package DBMS_ADDM introduced in 11g.
- Oracle 11g introduced a server-side connection pool called Database Resident Connection Pool (DRCP).
- SQL Developer is installed with the database server software (all editions). The Windows SQL*Plus GUI is deprecated.
- APEX (Oracle Application Express), formerly known as HTML DB, is shipped with the DB.

Desupported features (11.1.0)
The following features are desupported/deprecated in Oracle Database 11g Release 1:
- Oracle export utility (exp). Imp is still supported for backwards compatibility.
- Oracle Enterprise Manager Java console.
- Windows SQL*Plus GUI & iSQLPlus will not be shipped anymore. Use SQL Developer instead.
- copy command is deprecated.

What's New in Oracle 10g
The following new features were introduced with Oracle 10g:

Oracle 10g Release 1 (10.1.0) - January 2004
- Grid computing: an extension of the clustering feature (Real Application Clusters).
- Oracle Enterprise Manager (OEM) became browser based. Through any browser we can access data of a database in Oracle Enterprise Manager Database Control; Grid Control is used for accessing/managing multiple instances.
- Automated Storage Management (ASM). RBAL and ARBx are the new background processes related to ASM.
- Automatic Database Diagnostic Monitor (ADDM).
- Active Session History (ASH).
- Automatic Workload Repository (AWR).
- Manageability improvements (self-tuning features).
- Performance and scalability improvements.
- Datapump: faster data movement with expdp and impdp, the successor to normal exp/imp.
- SYSAUX tablespace has been introduced as an auxiliary to SYSTEM, as a LOCALly managed tablespace.
- Ability to rename tablespaces (except SYSTEM and SYSAUX), whether permanent or temporary, using the following command:
SQL> ALTER TABLESPACE oldname RENAME TO newname;
- Ability to transport tablespaces across platforms (e.g. Solaris to HP-UX, Windows to Linux). If ENDIAN formats are different we have to use RMAN.
- Ability to UNDROP (Flashback Drop) a table using a recycle bin.
- Flashback operations available at row, transaction, table or database level.
- NID utility has been introduced to change the database name and id.
- New 'drop database' statement, which will delete the datafiles and redolog files mentioned in the control file, and will delete the SP file also:
SQL> STARTUP MOUNT EXCLUSIVE RESTRICT
SQL> DROP DATABASE;
- In Oracle 10g, an undo tablespace can guarantee the retention of unexpired undo extents:
SQL> CREATE UNDO TABLESPACE ... RETENTION GUARANTEE;
SQL> ALTER TABLESPACE UNDO_TS RETENTION GUARANTEE;

- Rules-Based Optimizer (RBO) is desupported (not deprecated).
- Temporary tablespace groups, to group multiple temporary tablespaces into a single group.
- DBA can specify a default tablespace for the database.
- Support for bigfile tablespaces, up to 8EB (Exabytes) in size.
- Auditing: FGA (Fine-grained auditing) now supports DML statements in addition to selects.
- Data Guard: the ability to prepare the primary database and logical standby for a switchover, thus reducing the time to complete the switchover.
On primary: ALTER DATABASE PREPARE TO SWITCHOVER TO LOGICAL STANDBY;
On logical standby: ALTER DATABASE PREPARE TO SWITCHOVER TO PRIMARY;
- New packages DBMS_SCHEDULER, DBMS_FILE_TRANSFER, DBMS_ADVISOR, DBMS_MONITOR, DBMS_WORKLOAD_REPOSITORY:
o DBMS_SCHEDULER: by using this package we can create jobs, programs, schedules and job classes, which can call OS utilities and programs, not just PL/SQL program units like the DBMS_JOB package.
o DBMS_FILE_TRANSFER: package to transfer files.
o DBMS_MONITOR: to enable end-to-end tracing (tracing is done not only by session, but by client identifier).
o DBMS_ADVISOR: will help in working with several advisors.
- New background processes in Oracle 10g:
o Memory Manager (maximum 1) MMAN: dynamically adjusts the sizes of the SGA components like DB cache, large pool, shared pool and java pool. It is a new process added to Oracle 10g as part of automatic shared memory management.
o Memory Monitor (maximum 1) MMON: monitors the SGA and performs various manageability related background tasks, to aid ADDM, ASH and AWR. It's also used to update statistics and provide a heart beat mechanism.
o Memory Monitor Light (maximum 1) MMNL: useful for AWR.
o Change Tracking Writer (maximum 1) CTWR: will be useful in RMAN block change tracking.
o Re-Balance RBAL: the ASM related process that performs rebalancing of disk resources controlled by ASM.
o Actual Rebalance ARBx: ARBx is configured by ASM_POWER_LIMIT.
o ASMB: this process is used to provide information to and from the cluster synchronization services used by ASM to manage the disk resources.
- New memory structure in the SGA, the Streams pool (streams_pool_size parameter), useful for datapump activities & streams replication.
- From Oracle Database 10g, a new init parameter, sga_target, changes the value of the SGA dynamically. This is called Automatic Shared Memory Management (ASMM). It includes the buffer cache, shared pool, java pool and large pool; it doesn't include the log buffer, streams pool and the buffer pools for nonstandard block sizes and the non-default ones for KEEP or RECYCLE (see the example below):
SGA_TARGET = DB_CACHE_SIZE + SHARED_POOL_SIZE + JAVA_POOL_SIZE + LARGE_POOL_SIZE
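As a hedged illustration (the size value is arbitrary), ASMM can be enabled by setting SGA_TARGET, and the auto-tuned component sizes can then be observed:

SQL> ALTER SYSTEM SET SGA_TARGET = 800M;
SQL> SELECT component, current_size FROM v$sga_dynamic_components;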

New features in RMAN:
o Managing recovery related files with flash/fast recovery area.
o Optimized incremental backups using block change tracking (faster incremental backups) using a file (named the block change tracking file). CTWR (Change Tracking Writer) is the background process responsible for tracking the blocks.
o Reducing the time and overhead of full backups with incrementally updated backups.
o Comprehensive backup job tracking and administration with Enterprise Manager.
o Backup set binary compression. New compression algorithm BZIP2 brought in.
o Cross-platform tablespace conversion.
o Automated Tablespace Point-in-Time Recovery.
o Automatic channel failover on backup & restore.
o Ability to preview the backups required to perform a restore operation:
RMAN> restore database preview;
RMAN> restore tablespace tbs1 preview [summary];

- Virtual Private Database (VPD) has grown into a very powerful feature with the ability to support a variety of requirements, such as masking columns selectively based on the policy and applying the policy only when certain columns are accessed. The performance of the policy can also be increased through multiple types of policy by exploiting the nature of the application, making the feature applicable to multiple situations.

SQL*Plus enhancements
1. The default SQL> prompt can be changed by setting the below parameters in $ORACLE_HOME/sqlplus/admin/glogin.sql (see the example after this list):
- _connect_identifier (will prompt DBNAME>)
- _date (will prompt DATE>)
- _editor
- _o_version
- _o_release
- _privilege (will prompt AS SYSDBA> or AS SYSOPER> or AS SYSASM>)
- _sqlplus_release
- _user (will prompt USERNAME>)
2. The login.sql file is not only executed at SQL*Plus startup time, but also at connect time as well. So the SQL prompt will be changed after the connect command.
3. Now we can login as SYSDBA without the quotation marks: sqlplus / as sysdba (as well as the old sqlplus "/ as sysdba" or sqlplus '/ as sysdba'). This enhancement not only means we have two fewer characters to type, but provides some additional benefits such as not requiring escape characters in operating systems such as Unix.
4. From 10g, the spool command can append to an existing file:
SQL> spool result.log append
5. 10g allows us to append statements to saved files:
SQL> save myscripts          (Query1 saved)
SQL> save myscripts append   (Query2 appended to the file)
6. The describe command can give a description of rules and rule sets.
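For instance, adding a line like the following to glogin.sql (illustrative; the variables above are predefined by SQL*Plus) changes the prompt to show the user and connect identifier:

set sqlprompt "_user'@'_connect_identifier> "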

- No need of the ALTER TABLE ... MONITORING command. From 10g, statistics are collected automatically if STATISTICS_LEVEL is set to TYPICAL or ALL.
- Statistics can be collected for the SYS schema, data dictionary objects and fixed objects (x$ tables).
- We can now shrink segments, tables and indexes to reclaim free blocks, provided that Automatic Segment Space Management (ASSM) is enabled in the tablespace:
SQL> alter table table-name shrink space;
- Complete refresh of materialized views will do a delete instead of a truncate, with ATOMIC_REFRESH set to TRUE.
- Introduced Advisors:
o SQL Access Advisor
o SQL Tune Advisor
o Memory Advisor
o Undo Advisor
o Segment Advisor
o MTTR (Mean Time To Recover) Advisor

Oracle 10g Release 2 (10.2.0) - September 2005
- New asmcmd utility for managing ASM storage.
- Async COMMITs.
- Transparent Data Encryption.
- Passwords for DB Links are encrypted.
- Fast Start Failover for Data Guard was introduced in Oracle 10g R2.
- The CONNECT role can now only connect (CREATE privileges are removed).
Before 10g R2:
SQL> select PRIVILEGE from role_sys_privs where ROLE='CONNECT';

PRIVILEGE
----------------------------------------
CREATE VIEW
CREATE TABLE
ALTER SESSION
CREATE CLUSTER
CREATE SESSION
CREATE SYNONYM
CREATE SEQUENCE
CREATE DATABASE LINK

From 10g R2:
SYS> select PRIVILEGE from role_sys_privs where ROLE='CONNECT';

PRIVILEGE
----------------------------------------
CREATE SESSION

Undo Tablespace/Undo Management in Oracle

What Is Undo and Why?
Oracle Database has a method of maintaining information that is used to rollback or undo changes to the database. Oracle Database keeps records of actions of transactions, before they are committed, and Oracle needs this information to rollback or undo the changes to the database. These records are called rollback or undo records.

These records are used to:
- Rollback transactions: when a ROLLBACK statement is issued, undo records are used to undo changes that were made to the database by the uncommitted transaction.
- Recover the database: during database recovery, undo records are used to undo any uncommitted changes applied from the redo log to the data files.
- Provide read consistency: undo records provide read consistency by maintaining the before image of the data for users who are accessing the data at the same time that another user is changing it.
- Analyze data as of an earlier point in time by using Flashback Query.
- Recover from logical corruptions using Flashback features.

Until Oracle 8i, Oracle used rollback segments to manage the undo data. Oracle9i introduced automatic undo management, which allows the DBA to exert more control on how long undo information is retained, simplifies undo space management, and also eliminates the complexity of managing rollback segments. Space for undo segments is dynamically allocated, consumed, freed, and reused, all under the control of Oracle Database, rather than by the DBA. From Oracle 9i, the rollback segments method is referred to as "Manual Undo Management Mode" and the new undo tablespaces method as "Automatic Undo Management Mode".

Notes:
- Although both rollback segments and undo tablespaces are supported, both modes cannot be used in the same database instance, although for migration purposes it is possible, for example, to create undo tablespaces in a database that is using rollback segments, or to drop rollback segments in a database that is using undo tablespaces. Oracle strongly recommends that you use an undo tablespace to manage undo rather than rollback segments.
- The system rollback segment exists in both modes.
- When operating in automatic undo management mode, any manual undo management SQL statements and initialization parameters are ignored and no error message will be issued; e.g. ALTER ROLLBACK SEGMENT statements will be ignored.

Automatic Undo Management

UNDO_MANAGEMENT
The following initialization parameter setting causes the STARTUP command to start an instance in automatic undo management mode:
UNDO_MANAGEMENT = AUTO
The default value for this parameter is MANUAL, i.e. manual undo management mode.
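To verify which mode an instance is running in, for example:

SQL> show parameter undo_management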

UNDO_TABLESPACE
UNDO_TABLESPACE is an optional dynamic parameter, which can be changed online, specifying the name of an undo tablespace to use. When the instance starts up, the database automatically selects for use the first available undo tablespace. If there is no undo tablespace available, the instance starts, but uses the SYSTEM rollback segment for undo. This is not recommended, and an alert message is written to the alert log file to warn that the system is running without an undo tablespace. An ORA-01552 error is issued for any attempts to write non-SYSTEM related undo to the SYSTEM rollback segment.

If the database contains multiple undo tablespaces, you can optionally specify at startup that you want an Oracle Database instance to use a specific undo tablespace. This is done by setting the UNDO_TABLESPACE initialization parameter:
UNDO_TABLESPACE = undotbs
In this case, if you have not already created the undo tablespace, the STARTUP command will fail. The UNDO_TABLESPACE parameter can be used to assign a specific undo tablespace to an instance in an Oracle Real Application Clusters (RAC) environment.

The default undo tablespace is created at database creation, or an undo tablespace can be created explicitly.

To find out the undo tablespaces in the database:
SQL> select tablespace_name, contents from dba_tablespaces where contents = 'UNDO';

To find out the current undo tablespace:
SQL> show parameter undo_tablespace
(OR)
SQL> select VALUE from v$parameter where NAME='undo_tablespace';

UNDO_RETENTION
Committed undo information normally is lost when its undo space is overwritten by a newer transaction. However, for consistent read purposes, long-running queries sometimes require old undo information for undoing changes and producing older images of data blocks. The success of several Flashback features can also depend upon older undo information.

The default value for the UNDO_RETENTION parameter is 900. Retention is specified in units of seconds. This value specifies the amount of time undo is kept in the tablespace. The system retains undo for at least the time specified in this parameter, but this can only be honored if the current undo tablespace has enough space. If an active transaction requires undo space and the undo tablespace does not have available space, then the system starts reusing unexpired undo space (if retention is not guaranteed). This action can potentially cause some queries to fail with the ORA-01555 "snapshot too old" error message.

You can set the UNDO_RETENTION in the parameter file:
UNDO_RETENTION = 1800
You can change the UNDO_RETENTION value at any time using:
SQL> ALTER SYSTEM SET UNDO_RETENTION = 2400;
The effect of the UNDO_RETENTION parameter is immediate.

From 10g, Oracle Database automatically tunes undo retention by collecting database use statistics and estimating undo capacity needs for the successful completion of the queries. You can set a low threshold value for the UNDO_RETENTION parameter so that the system retains the undo for at least the time specified in the parameter, provided that the current undo tablespace has enough space. Under space constraint conditions, the system may retain undo for a shorter duration than that specified by the low threshold value in order to allow DML operations to succeed. Automatic tuning of undo retention is not supported for LOBs; the RETENTION value for LOB columns is set to the value of the UNDO_RETENTION parameter.

The amount of time for which undo is retained for the current undo tablespace can be obtained by querying the TUNED_UNDORETENTION column of the V$UNDOSTAT dynamic performance view:
SQL> select tuned_undoretention from v$undostat;

UNDO_RETENTION applies to both committed and uncommitted transactions, since the flashback query feature introduced in Oracle needs this information to create a read consistent copy of the data in the past.

UNDO_SUPRESS_ERRORS
In case your code has alter transaction commands that perform manual undo management operations, set this to true to suppress the errors generated when manual undo management SQL operations are issued in automatic undo management mode. The default is:
UNDO_SUPRESS_ERRORS = false

Retention Guarantee
Oracle Database 10g lets you guarantee undo retention. When you enable this option, the database never overwrites unexpired undo data, i.e. undo data whose age is less than the undo retention period. This option is disabled by default, which means that the database can overwrite the unexpired undo data in order to avoid failure of DML operations if there is not enough free space left in the undo tablespace.

You enable the guarantee option by specifying the RETENTION GUARANTEE clause for the undo tablespace when it is created by either the CREATE DATABASE or CREATE UNDO TABLESPACE statement, or you can later specify this clause in an ALTER TABLESPACE statement. You do not guarantee that unexpired undo is preserved if you specify the RETENTION NOGUARANTEE clause.

In order to guarantee the success of queries even at the price of compromising the success of DML operations, you can enable retention guarantee. This option must be used with caution, because it can cause DML operations to fail if the undo tablespace is not big enough. However, with proper settings, long-running queries can complete without risk of receiving the ORA-01555 "snapshot too old" error message, and you can guarantee a time window in which the execution of Flashback features will succeed. A typical use of the guarantee option is when you want to ensure deterministic and predictable behavior of Flashback Query by guaranteeing the availability of the required undo data.

You can use the DBA_TABLESPACES view to determine the RETENTION setting for the undo tablespace. A column named RETENTION will contain a value of GUARANTEE, NOGUARANTEE, or NOT APPLY (used for tablespaces other than the undo tablespace).
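For example, the current setting can be checked with:

SQL> select tablespace_name, retention from dba_tablespaces where contents = 'UNDO';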

Size of Undo Tablespace
You can size the undo tablespace appropriately either by using automatic extension of the undo tablespace or by manually estimating the space.

Oracle Database supports automatic extension of the undo tablespace to facilitate capacity planning of the undo tablespace in the production environment. When the system is first running in the production environment, you may be unsure of the space requirements of the undo tablespace. In this case, you can enable automatic extension for datafiles of the undo tablespace so that they automatically increase in size when more space is needed. By combining automatic extension of the undo tablespace with automatically tuned undo retention, you can ensure that long-running queries will succeed by guaranteeing the undo required for such queries.

After the system has stabilized and you are more familiar with undo space requirements, Oracle recommends that you set the maximum size of the tablespace to be slightly (10%) more than the current size of the undo tablespace. If you have decided on a fixed-size undo tablespace, the Undo Advisor can help you estimate needed capacity.

Undo Advisor
Oracle Database provides an Undo Advisor that provides advice on and helps automate the establishment of your undo environment. You can access the Undo Advisor through Enterprise Manager or through the DBMS_ADVISOR package.

The Undo Advisor relies for its analysis on data collected in the Automatic Workload Repository (AWR). The analysis is based on AWR snapshots, which you must specify by setting the parameters START_SNAPSHOT and END_SNAPSHOT. An adjustment to the collection interval and retention period for AWR statistics can affect the precision and the type of recommendations the advisor produces.

You activate the Undo Advisor by creating an undo advisor task through the advisor framework. The following example creates an undo advisor task to evaluate the undo tablespace. The name of the advisor is 'Undo Advisor'. In this example, the START_SNAPSHOT is "1" and the END_SNAPSHOT is "2".

DECLARE
  tid   NUMBER;
  tname VARCHAR2(30);
  oid   NUMBER;
BEGIN
  DBMS_ADVISOR.CREATE_TASK('Undo Advisor', tid, tname, 'Undo Advisor Task');
  DBMS_ADVISOR.CREATE_OBJECT(tname, 'UNDO_TBS', null, null, null, 'null', oid);
  DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'TARGET_OBJECTS', oid);
  DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'START_SNAPSHOT', 1);
  DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'END_SNAPSHOT', 2);
  DBMS_ADVISOR.execute_task(tname);
END;
/

Once you have created the advisor task, you can view the output and recommendations in the Automatic Database Diagnostic Monitor (ADDM) in Enterprise Manager. This information is also available in the DBA_ADVISOR_* data dictionary views.

Calculating space requirements for Undo tablespace
You can calculate space requirements manually using the following formula:
Undo Space = UNDO_RETENTION in seconds * undo blocks for each second + overhead
where:
* Undo Space is the number of undo blocks
* overhead is the small overhead for metadata, based on extent and file size (DB_BLOCK_SIZE)

As an example, if UNDO_RETENTION is set to 2 hours, and the transaction rate (UPS) is 200 undo blocks for each second, with a 4K block size, the required undo space is computed as follows:
(2 * 3600 * 200 * 4K) = 5.8GB

Such a computation can be performed by using information in the V$UNDOSTAT view. In the steady state, you can query the view to obtain the transaction rate. The overhead figure can also be obtained from the view.
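As a sketch, the same estimate can be computed from V$UNDOSTAT directly (this assumes a representative, steady-state sampling window; it is an illustrative query following the formula above, not an official one):

SQL> SELECT (UR * (UPS * DBS)) AS undo_bytes
       FROM (SELECT value AS UR FROM v$parameter WHERE name = 'undo_retention'),
            (SELECT MAX(undoblks / ((end_time - begin_time) * 86400)) AS UPS FROM v$undostat),
            (SELECT value AS DBS FROM v$parameter WHERE name = 'db_block_size');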

Managing Undo Tablespaces

Creating Undo Tablespace
There are two methods of creating an undo tablespace. The first method creates the undo tablespace when the CREATE DATABASE statement is issued. This occurs when you are creating a new database, and the instance is started in automatic undo management mode (UNDO_MANAGEMENT = AUTO). The second method is used with an existing database. It uses the CREATE UNDO TABLESPACE statement.

The following statement illustrates using the UNDO TABLESPACE clause in a CREATE DATABASE statement. The undo tablespace is named undotbs_01 and one datafile is allocated for it:
SQL> CREATE DATABASE ... UNDO TABLESPACE undotbs_01 DATAFILE '/path/undo01.dbf' SIZE 2M REUSE AUTOEXTEND ON;
If the undo tablespace cannot be created successfully during CREATE DATABASE, the entire operation fails.

The CREATE UNDO TABLESPACE statement is the same as the CREATE TABLESPACE statement, but the UNDO keyword is specified. The database determines most of the attributes of the undo tablespace, but you can specify the DATAFILE clause. This example creates the undotbs_02 undo tablespace:
SQL> CREATE UNDO TABLESPACE undotbs_02 DATAFILE '/path/undo02.dbf' SIZE 2M REUSE AUTOEXTEND ON RETENTION NOGUARANTEE;

You can create more than one undo tablespace, but only one of them can be active at any one time. Oracle Database also enables you to create a single-file undo tablespace. You cannot create database objects in an undo tablespace; it is reserved for system-managed undo data.

Altering Undo Tablespace
Undo tablespaces are altered using the ALTER TABLESPACE statement. However, since most aspects of undo tablespaces are system managed, you need only be concerned with the following actions:
- Adding a datafile:
SQL> ALTER TABLESPACE undotbs ADD DATAFILE '/path/undo0102.dbf' AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED;
- Renaming a datafile:

SQL> ALTER DATABASE RENAME FILE 'old_full_path' TO 'new_full_path';
- Resizing a datafile:
SQL> ALTER DATABASE DATAFILE 'data_file_name|data_file_number' RESIZE nK|M|G|T|P|E;
- Making a datafile online or offline:
SQL> ALTER TABLESPACE undotbs offline;
SQL> ALTER TABLESPACE undotbs online;
- Beginning or ending an open backup on a datafile
- Enabling and disabling undo retention guarantee:
SQL> ALTER TABLESPACE undotbs RETENTION GUARANTEE;
SQL> ALTER TABLESPACE undotbs RETENTION NOGUARANTEE;

If an undo tablespace runs out of space, or you want to prevent it from doing so, you can add more files to it or resize existing datafiles. However, when resizing the undo tablespace you may encounter the ORA-03297 error: file contains used data beyond the requested RESIZE value. This means that some undo information is still stored above the datafile size we want to set. We can check the most highly used block, to find the minimum size to which we can resize a particular datafile, by querying the dba_free_space view. Another way to set the undo tablespace to the size that we want is to create another undo tablespace, set it as the default one, take the old one offline, and then just drop the big old tablespace.

Dropping Undo Tablespace
Use the DROP TABLESPACE statement to drop an undo tablespace:
SQL> DROP TABLESPACE undotbs_01;
An undo tablespace can only be dropped if it is not currently used by any instance. If the undo tablespace contains any outstanding transactions (e.g. a transaction died but has not yet been recovered), the DROP TABLESPACE statement fails. However, since DROP TABLESPACE drops an undo tablespace even if it contains unexpired undo information (within the retention period), you must be careful not to drop an undo tablespace if its undo information is needed by some existing queries. DROP TABLESPACE for undo tablespaces behaves like DROP TABLESPACE ... INCLUDING CONTENTS. All contents of the undo tablespace are removed.

Switching Undo Tablespaces
You can switch from using one undo tablespace to another. Because the UNDO_TABLESPACE initialization parameter is a dynamic parameter, the ALTER SYSTEM SET statement can be used to assign a new undo tablespace:
SQL> ALTER SYSTEM SET UNDO_TABLESPACE = undotbs_02;
Assuming undotbs_01 is the current undo tablespace, after this command successfully executes, the instance uses undotbs_02 in place of undotbs_01 as its undo tablespace.

If any of the following conditions exist for the tablespace being switched to, an error is reported and no switching occurs:
- The tablespace does not exist
- The tablespace is not an undo tablespace
- The tablespace is already being used by another instance (in a RAC environment)

The database is online while the switch operation is performed, and user transactions can be executed while this command is being executed. When the switch operation completes successfully, all transactions started after the switch operation began are assigned to transaction tables in the new undo tablespace.

The switch operation does not wait for transactions in the old undo tablespace to commit. If there are any pending transactions in the old undo tablespace, the old undo tablespace enters into a PENDING OFFLINE mode. In this mode, existing transactions can continue to execute, but undo records for new user transactions cannot be stored in this undo tablespace. An undo tablespace can exist in this PENDING OFFLINE mode, even after the switch operation completes successfully. A PENDING OFFLINE undo tablespace cannot be used by another instance, nor can it be dropped. Eventually, after all active transactions have committed, the undo tablespace automatically goes from the PENDING OFFLINE mode to the OFFLINE mode. From then on, the undo tablespace is available for other instances (in a RAC environment).

If the parameter value for UNDO_TABLESPACE is set to '' (two single quotes), then the current undo tablespace is switched out and the next available undo tablespace is switched in. Use this statement with care, because if there is no undo tablespace available, the SYSTEM rollback segment is used. This causes an ORA-01552 error to be issued for any attempts to write non-SYSTEM related undo to the SYSTEM rollback segment. The following example unassigns the current undo tablespace:
SQL> ALTER SYSTEM SET UNDO_TABLESPACE = '';

Establishing User Quotas for Undo Space
The Oracle Database Resource Manager can be used to establish user quotas for undo space. The Database Resource Manager directive UNDO_POOL allows DBAs to limit the amount of undo space consumed by a group of users (resource consumer group). You can specify an undo pool for each consumer group. An undo pool controls the amount of total undo that can be generated by a consumer group. When the total undo generated by a consumer group exceeds its undo limit, the current UPDATE transaction generating the undo is terminated. No other members of the consumer group can perform further updates until undo space is freed from the pool. When no UNDO_POOL directive is explicitly defined, users are allowed unlimited undo space.

Monitoring Undo Tablespaces
Oracle Database also provides proactive help in managing tablespace disk space use by alerting you when tablespaces run low on available space. In addition to the proactive undo space alerts, Oracle Database also provides alerts if your system has long-running queries that cause SNAPSHOT TOO OLD errors. To prevent excessive alerts, the long query alert is issued at most once every 24 hours. When the alert is generated, you can check the Undo Advisor Page of Enterprise Manager to get more information about the undo tablespace.
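For a quick look at recent undo activity, for example:

SQL> select begin_time, end_time, undoblks, txncount, maxquerylen
       from v$undostat
      where rownum <= 5;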

The following dynamic performance views are useful for obtaining space information about the undo tablespace:

V$UNDOSTAT: Contains statistics for monitoring and tuning undo space. Use this view to help estimate the amount of undo space required for the current workload. Oracle uses this view information to tune undo usage in the system.
V$ROLLSTAT: For automatic undo management mode, the information reflects the behavior of the undo segments in the undo tablespace.
V$TRANSACTION: Contains undo segment information.
DBA_UNDO_EXTENTS: Shows the status and size of each extent in the undo tablespace.
WRH$_UNDOSTAT: Contains statistical snapshots of V$UNDOSTAT information.
WRH$_ROLLSTAT: Contains statistical snapshots of V$ROLLSTAT information.

The V$UNDOSTAT view is useful for monitoring the effects of transaction execution on undo space in the current instance. Statistics are available for undo space consumption, transaction concurrency, the tuning of undo retention, and the length and SQL ID of long-running queries in the instance. Each row in the view contains statistics collected for a 10-minute interval, marked by the BEGIN_TIME and END_TIME columns; each column represents the data collected for the particular statistic in that time interval. The rows are in descending order by the BEGIN_TIME column value, and the first row contains statistics for the (partial) current time period. The view contains a total of 1008 rows, spanning a 7 day cycle.

Flashback Features
Oracle Database includes several features that are based upon undo information and that allow administrators and users to access database information from a previous point in time. These features are part of the overall flashback strategy incorporated into the database and include:
- Flashback Query
- Flashback Versions Query
- Flashback Transaction Query
- Flashback Table
- Flashback Database

The retention period for undo information is an important factor for the successful execution of flashback features. It determines how far back in time a database version can be established. We must choose an undo retention interval that is long enough to enable users to construct a snapshot of the database for the oldest version of the database that they are interested in. For example, if an application requires that a version of the database be available reflecting its content 12 hours previously, then UNDO_RETENTION must be set to 43200.
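As a simple illustration of an undo-based flashback feature, a Flashback Query reads a table as of a past time (this assumes sufficient undo retention; the table name is illustrative):

SQL> select * from emp as of timestamp (systimestamp - interval '10' minute);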

Migration to Automatic Undo Management
If you are still using rollback segments to manage undo space, Oracle strongly recommends that you migrate your database to automatic undo management. From 10g, Oracle Database provides a function that provides information on how to size your new undo tablespace based on the configuration and usage of the rollback segments in your system. DBA privileges are required to execute this function:

set serveroutput on
DECLARE
  utbsize_in_MB NUMBER;
BEGIN
  utbsize_in_MB := DBMS_UNDO_ADV.RBU_MIGRATION;
  dbms_output.put_line(utbsize_in_MB||'MB');
END;
/

The function returns the undo tablespace size in MBs.

To find out the undo segments in the database:
SQL> select segment_name, tablespace_name from dba_rollback_segs;

Best Practices for Undo Tablespace/Undo Management in Oracle
The following list of recommendations will help you manage your undo space to best advantage:
- You need not set a value for the UNDO_RETENTION parameter unless your system has flashback or LOB retention requirements.
- Allow 10 to 20% extra space in your undo tablespace to provide for some fluctuation in your workload.
- Set the warning and critical alert thresholds for the undo tablespace alert properly.
- To tune SQL queries or to check on runaway queries, use the value of the SQLID column provided in the long query alert or in the V$UNDOSTAT or WRH$_UNDOSTAT views to retrieve the SQL text and other details on the SQL from the V$SQL view.

Transportable Tablespaces (TTS) in Oracle
We can use the transportable tablespaces feature to copy/move a subset of data (a set of user tablespaces) from an Oracle database and plug it in to another Oracle database.

With Oracle 8i, Oracle introduced transportable tablespace (TTS) technology that moves tablespaces between databases. Oracle 8i supports tablespace transportation between databases that run on the same OS platform and use the same database block size.

With Oracle 9i, TTS technology was enhanced to support tablespace transportation between databases on platforms of the same type, but using different block sizes.

With Oracle 10g, TTS technology was further enhanced to support transportation of tablespaces between databases running on different OS platforms (e.g. Windows to Linux, Solaris to HP-UX) which have the same ENDIAN format. If ENDIAN formats are different you have to use RMAN (e.g. Windows to Solaris, Tru64 to AIX). From this version we can also transport the whole database; this is called Transportable Database.

From Oracle 11g, we can transport a single partition of a tablespace between databases.

You can query the V$TRANSPORTABLE_PLATFORM view to see all the platforms that are supported, and to determine their platform names and IDs and their endian format:
SQL> select * from v$transportable_platform order by platform_id;

PLATFORM_ID PLATFORM_NAME                      ENDIAN_FORMAT
----------- ---------------------------------- -------------
          1 Solaris[tm] OE (32-bit)            Big
          2 Solaris[tm] OE (64-bit)            Big
          3 HP-UX (64-bit)                     Big
          4 HP-UX IA (64-bit)                  Big
          5 HP Tru64 UNIX                      Little
          6 AIX-Based Systems (64-bit)         Big
          7 Microsoft Windows IA (32-bit)      Little
          8 Microsoft Windows IA (64-bit)      Little
          9 IBM zSeries Based Linux            Big
         10 Linux IA (32-bit)                  Little
         11 Linux IA (64-bit)                  Little
         12 Microsoft Windows x86 64-bit       Little
         13 Linux x86 64-bit                   Little
         15 HP Open VMS                        Little
         16 Apple Mac OS                       Big
         17 Solaris Operating System (x86)     Little
         18 IBM Power Based Linux              Big
         19 HP IA Open VMS                     Little
         20 Solaris Operating System (x86-64)  Little
         21 Apple Mac OS (x86-64)              Little (from Oracle 11g R2)

You can also query the V$TRANSPORTABLE_PLATFORM view to see all the platforms that are supported, and to determine their platform names and IDs and their endian formats.

SQL> select * from v$transportable_platform order by platform_id;

PLATFORM_ID PLATFORM_NAME                      ENDIAN_FORMAT
----------- ---------------------------------- -------------
          1 Solaris[tm] OE (32-bit)            Big
          2 Solaris[tm] OE (64-bit)            Big
          3 HP-UX (64-bit)                     Big
          4 HP-UX IA (64-bit)                  Big
          5 HP Tru64 UNIX                      Little
          6 AIX-Based Systems (64-bit)         Big
          7 Microsoft Windows IA (32-bit)      Little
          8 Microsoft Windows IA (64-bit)      Little
          9 IBM zSeries Based Linux            Big
         10 Linux IA (32-bit)                  Little
         11 Linux IA (64-bit)                  Little
         12 Microsoft Windows x86 64-bit       Little
         13 Linux x86 64-bit                   Little
         15 HP Open VMS                        Little
         16 Apple Mac OS                       Big
         17 Solaris Operating System (x86)     Little
         18 IBM Power Based Linux              Big
         19 HP IA Open VMS                     Little
         20 Solaris Operating System (x86-64)  Little
         21 Apple Mac OS (x86-64)              Little   (from Oracle 11g R2)

To find out your platform name and its endian format:
SQL> select tp.PLATFORM_NAME, tp.ENDIAN_FORMAT
     from v$database d, v$transportable_platform tp
     where d.platform_name = tp.platform_name;

PLATFORM_NAME            ENDIAN_FORMAT
------------------------ -------------
Solaris[tm] OE (64-bit)  Big

Transporting tablespaces is particularly useful for:
(i) Updating data from production to development and test instances.
(ii) Updating data from OLTP systems to data warehouse systems.
(iii) Taking pieces of data out of the database for various reasons (archiving, moving to other databases etc).
(iv) Performing tablespace point-in-time-recovery (TSPITR).

Moving data using transportable tablespaces can be much faster than performing either an export/import or unload/load of the same data, because transporting a tablespace only requires copying of datafiles and integrating the tablespace structural information. You can also use transportable tablespaces to move both table and index data, thereby avoiding the index rebuilds you would have to perform when importing or loading table data.

Limitations/Restrictions
Following are the limitations/restrictions of transportable tablespaces:
 System, sysaux, undo and temporary tablespaces cannot be transported.
 The source and target database must use the same character set and national character set.
 You cannot transport a tablespace to a target database in which a tablespace with the same name already exists. However, you can rename either the tablespace to be transported or the destination tablespace before the transport operation.
 Transportable tablespaces do not support materialized views/replication and function-based indexes.
 Binary_Float and Binary_Double datatypes (new in Oracle 10g) are not supported.
 If Automatic Storage Management (ASM) is used with either the source or destination database, you must use RMAN to transport/convert the tablespace.

In Oracle 8i, there were three restrictions with TTS:
First, both databases must have the same block size. Oracle 9i removes this restriction.
Second, the source and target database must be on the same hardware platform. For example, we can transport tablespaces between Sun Solaris databases, or we can transport tablespaces between Windows NT databases; however, you cannot transport a tablespace from a Sun Solaris database to a Windows NT database. Oracle 10g removes this restriction.
Third, you cannot rename the tablespace. Oracle 10g also makes available a command to rename tablespaces.
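A quick illustration of the 10g rename command mentioned above (tablespace names are illustrative):

SQL> ALTER TABLESPACE tbs RENAME TO tbs_new;

Renaming either the source or the destination tablespace this way avoids the name-clash restriction before a transport.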

At Source: Validating the Self-Contained Property
TTS requires all the tablespaces which we are moving to be self contained. This means that the segments within the migration tablespace set cannot have dependencies on a segment in a tablespace outside the transportable tablespace set. This can be checked using the DBMS_TTS.TRANSPORT_SET_CHECK procedure.

SQL> exec DBMS_TTS.TRANSPORT_SET_CHECK('tbs', TRUE);
SQL> exec DBMS_TTS.TRANSPORT_SET_CHECK('tbs1, tbs2, tbs3', FALSE, TRUE);

SQL> SELECT * FROM transport_set_violations;

No rows should be displayed. If the set is not self contained, you should either remove the dependencies by dropping/moving them, or include the tablespaces of the segments on which the migration set depends into the TTS set.

Put Tablespaces in READ ONLY Mode
We will perform a physical file system copy of the tablespace datafiles. In order for those datafiles not to require recovery, they need to be consistent during the copy activity. So put all the tablespaces in READ ONLY mode:
SQL> alter tablespace tbs-name read only;

Export the Metadata
Export the metadata of the tablespace set. If the tablespace set being transported is not self-contained, then the export will fail.

exp FILE=/path/dump-file.dmp LOG=/path/tts_exp.log TABLESPACES=tbs-names TRANSPORT_TABLESPACE=y STATISTICS=none
(or)
expdp DUMPFILE=tts.dmp LOGFILE=tts_exp.log DIRECTORY=exp_dir TRANSPORT_TABLESPACES=tts TRANSPORT_FULL_CHECK=y

For transporting a partition:
expdp DUMPFILE=tts_partition.dmp LOGFILE=tts_partition_exp.log DIRECTORY=exp_dir TRANSPORTABLE=always TABLES=trans_table:partition_Q1

Copying Datafiles and Export File
Copy the datafiles and the export file to the target server. Then make all the tablespaces at the source READ WRITE again:
SQL> alter tablespace tbs-name read write;

You can drop the tablespaces at the source, if you don't want them:
SQL> drop tablespace tbs-name including contents;

At Target: Import the Export File

imp FILE=/path/dump-file.dmp LOG=/path/tts_imp.log FROMUSER=user-name TOUSER=user-name TTS_OWNERS=user-name TABLESPACES=tbs-name TRANSPORT_TABLESPACE=y DATAFILES=/path/tbs-name.dbf
(or)
impdp DUMPFILE=tts.dmp LOGFILE=tts_imp.log DIRECTORY=exp_dir TRANSPORT_DATAFILES='/path/tts.dbf'

For transporting a partition:
impdp DUMPFILE=tts_partition.dmp LOGFILE=tts_partition_imp.log DIRECTORY=exp_dir REMAP_SCHEMA=master:scott TRANSPORT_DATAFILES='/path/tts_part.dbf' PARTITION_OPTIONS=departition
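The expdp/impdp examples above assume a Data Pump directory object named exp_dir exists on each server. A sketch of creating it (path and grantee are illustrative):

SQL> CREATE OR REPLACE DIRECTORY exp_dir AS '/path';
SQL> GRANT READ, WRITE ON DIRECTORY exp_dir TO scott;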

Finally we have to switch the new tablespaces into read write mode:
SQL> alter tablespace tbs-name read write;

Related Packages
DBMS_TTS
DBMS_EXTENDED_TTS_CHECKS

TRANSPORT TABLESPACE Using RMAN
Create transportable tablespace sets from backup for one or more tablespaces:

RMAN> TRANSPORT TABLESPACE exam
      TABLESPACE DESTINATION '/disk1/trans'
      AUXILIARY DESTINATION '/disk1/aux'
      DATAPUMP DIRECTORY dpdir
      DUMP FILE 'dmpfile.dmp'
      IMPORT SCRIPT 'impscript.sql'
      EXPORT LOG 'explog.log';

RMAN> TRANSPORT TABLESPACE example, tools
      TABLESPACE DESTINATION '/disk1/trans'
      AUXILIARY DESTINATION '/disk1/aux'
      UNTIL TIME 'SYSDATE-15/1440';

Using Transportable Tablespaces with a Physical Standby Database
We can use the Oracle transportable tablespaces feature to move a subset of an Oracle database and plug it in to another Oracle database, essentially moving tablespaces between the databases. To move or copy a set of tablespaces into a primary database when a physical standby is being used, perform the following steps:
1. Generate a transportable tablespace set that consists of datafiles for the set of tablespaces being transported and an export file containing structural information for the set of tablespaces.
2. Transport the tablespace set:
a. Copy the datafiles and the export file to the primary database.
b. Copy the datafiles to the standby database.
The datafiles must be copied into a directory defined by the DB_FILE_NAME_CONVERT initialization parameter. If DB_FILE_NAME_CONVERT is not defined, then issue the ALTER DATABASE RENAME FILE statement to modify the standby control file after the redo data containing the transportable tablespace has been applied and the apply has failed. The STANDBY_FILE_MANAGEMENT initialization parameter must be set to AUTO.
3. Plug in the tablespace.
Invoke the Data Pump utility to plug the set of tablespaces into the primary database. Redo data will be generated and applied at the standby site to plug the tablespace into the standby database.
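A minimal sketch of the standby-side settings referenced in step 2 (the paths are illustrative; DB_FILE_NAME_CONVERT is a static parameter, so it takes effect after a restart):

SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT = AUTO;
SQL> ALTER SYSTEM SET DB_FILE_NAME_CONVERT = '/primary/path/','/standby/path/' SCOPE=SPFILE;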

Temporary Tablespace Groups in Oracle
Oracle 10g introduced the concept of temporary tablespace groups. This allows grouping multiple temporary tablespaces into a single group and assigning a user this group of tablespaces instead of a single temporary tablespace. A tablespace group lets you assign multiple temporary tablespaces to a single user and increases the addressability of temporary tablespaces.

A temporary tablespace group has the following properties:
 It contains one or more temporary tablespaces (there is no upper limit).
 It contains only temporary tablespaces.
 It is not explicitly created; there is no CREATE TABLESPACE GROUP statement. It is implicitly created during the creation of a temporary tablespace with the CREATE TEMPORARY TABLESPACE command by specifying the TABLESPACE GROUP clause. It is created implicitly when the first temporary tablespace is assigned to it, and is deleted when the last temporary tablespace is removed from the group.

Temporary Tablespace Group Benefits
 It allows multiple default temporary tablespaces to be specified at the database level.
 It allows a single SQL operation to use multiple temporary tablespaces for sorting.
 It allows the user to use multiple temporary tablespaces in different sessions at the same time.
 Reduced contention when multiple temporary tablespaces are defined.
 Finer granularity, so you can distribute operations across temporary tablespaces.

The following statement creates temporary tablespace temp as a member of the temp_grp tablespace group. If the tablespace group does not already exist, then Oracle Database creates it during execution of this statement:
SQL> CREATE TEMPORARY TABLESPACE temp TEMPFILE 'temp01.dbf' SIZE 5M AUTOEXTEND ON TABLESPACE GROUP temp_grp;

Adding a temporary tablespace to a temporary tablespace group:
SQL> ALTER TABLESPACE temp TABLESPACE GROUP temp_grp;

Removing a temporary tablespace from a temporary tablespace group:
SQL> ALTER TABLESPACE temp TABLESPACE GROUP '';

Assigning a temporary tablespace group to a user (same as assigning a temporary tablespace to a user):
SQL> ALTER USER scott TEMPORARY TABLESPACE temp_grp;

Assigning a temporary tablespace group as the default temporary tablespace:
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp_grp;

To see the tablespaces in the temporary tablespace group:
SQL> select * from DBA_TABLESPACE_GROUPS;

Related Views:
DBA_TABLESPACE_GROUPS
DBA_TEMP_FILES
V$TEMPFILE
V$TEMPSTAT
V$TEMP_SPACE_HEADER
V$TEMPSEG_USAGE
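To confirm what a given user was assigned (a group name appears in this column just like an ordinary tablespace name), a quick check against DBA_USERS:

SQL> SELECT username, temporary_tablespace FROM dba_users WHERE username = 'SCOTT';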

Temporary Tablespace in Oracle
Temporary tablespaces are used to manage space for database sort and joining operations and for storing global temporary tables. For joining two large tables or sorting a bigger result set, Oracle cannot do it in memory using SORT_AREA_SIZE in the PGA (Program Global Area); space will be allocated in a temporary tablespace for doing these types of operations. Other SQL operations that might require disk sorting are: CREATE INDEX, ANALYZE, SELECT DISTINCT, ORDER BY, GROUP BY, UNION, INTERSECT, MINUS, Sort-Merge joins, etc. Note that a temporary tablespace cannot contain permanent objects and therefore doesn't need to be backed up. A temporary tablespace contains schema objects only for the duration of a session.

Creating Temporary Tablespace
Oracle introduced temporary tablespaces in Oracle 7.3, and provides various ways of creating them:
Prior to Oracle 7.3  - CREATE TABLESPACE temp DATAFILE ...;
Oracle 7.3 & 8.0     - CREATE TABLESPACE temp DATAFILE ... TEMPORARY;
Oracle 8i and above  - CREATE TEMPORARY TABLESPACE temp TEMPFILE ...;

Examples:
SQL> CREATE TABLESPACE TEMPTBS DATAFILE '/path/temp.dbf' SIZE 2048M AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED LOGGING DEFAULT NOCOMPRESS ONLINE EXTENT MANAGEMENT DICTIONARY;

SQL> CREATE TABLESPACE TEMPTBS DATAFILE '/path/temp.dbf' SIZE 2048M AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED LOGGING DEFAULT NOCOMPRESS ONLINE TEMPORARY EXTENT MANAGEMENT DICTIONARY;

SQL> CREATE TEMPORARY TABLESPACE TEMPTBS TEMPFILE '/path/temp.dbf' SIZE 1000M AUTOEXTEND ON NEXT 8K MAXSIZE 1500M EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M BLOCKSIZE 8K;

SQL> CREATE TEMPORARY TABLESPACE TEMPTBS2 TEMPFILE '/path/temp2.dbf' AUTOEXTEND OFF EXTENT MANAGEMENT LOCAL BLOCKSIZE 2K;

Example using OMF (Oracle Managed Files):
SQL> CREATE TEMPORARY TABLESPACE temp;

The MAXSIZE clause will default to UNLIMITED if no value is specified. All extents of temporary tablespaces are the same size, so the UNIFORM keyword is optional; if UNIFORM is not defined it will default to 1 MB.

From Oracle 9i, we can also specify a default temporary tablespace when we create a database, using the DEFAULT TEMPORARY TABLESPACE extension to the CREATE DATABASE statement:
SQL> CREATE DATABASE oracular ... DEFAULT TEMPORARY TABLESPACE temp_ts ... ;

Restrictions:
(1) We cannot specify nonstandard block sizes for a temporary tablespace, or if you intend to assign this tablespace as the temporary tablespace for any users.
(2) We cannot specify FORCE LOGGING for an undo or temporary tablespace.
(3) We cannot specify AUTOALLOCATE for a temporary tablespace.

Tempfiles (Temporary Datafiles)
Unlike normal datafiles, tempfiles are not fully allocated. When you create a tempfile, Oracle only writes to the header and last block of the file. This is why it is much quicker to create a tempfile than to create a normal datafile. Tempfiles are not recorded in the database's control file. This implies that you can just recreate them whenever you restore the database, or after deleting them by accident. You can have different tempfile configurations between primary and standby databases in a dataguard environment.
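A quick way to see the tempfiles the instance knows about (remember they appear in V$TEMPFILE, not V$DATAFILE):

SQL> SELECT file#, name, bytes/1024/1024 AS size_mb, status FROM v$tempfile;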

Default Temporary Tablespaces
From Oracle 9i, we can define a default temporary tablespace at database creation time, using the DEFAULT TEMPORARY TABLESPACE extension to the CREATE DATABASE statement, or by issuing an ALTER DATABASE statement:
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;

Using this feature, a temporary tablespace is automatically assigned to users. By default, the default temporary tablespace is SYSTEM. Each database can be assigned one and only one default temporary tablespace.

Locally managed temporary tablespaces have temporary datafiles (tempfiles), which are similar to ordinary datafiles except:
 You cannot create a tempfile with the ALTER DATABASE statement.
 You cannot rename a tempfile or set it to read-only.
 Tempfiles are always set to NOLOGGING mode.
 When you create or resize tempfiles, they are not always guaranteed allocation of disk space for the file size specified. On certain file systems (like UNIX) disk blocks are allocated not at file creation or resizing, but before the blocks are accessed. Note: This arrangement enables fast tempfile creation and resizing; however, the disk could run out of space later when the tempfiles are accessed.
 Tempfile information is shown in the dictionary view DBA_TEMP_FILES and the dynamic performance view V$TEMPFILE.

Use the following statement to add a tempfile to a temporary tablespace:
SQL> ALTER TABLESPACE temp ADD TEMPFILE '/path/temp01.dbf' SIZE 512m AUTOEXTEND ON NEXT 250m MAXSIZE UNLIMITED;

Except for adding a tempfile, you cannot use the ALTER TABLESPACE statement for a locally managed temporary tablespace (operations like rename, set to read only, recover, etc. will fail).

One can remove a tempfile from a database; however, one cannot remove datafiles from a tablespace until you drop the entire tablespace. Look at this example:
SQL> alter database tempfile 'tempfile_name' drop including datafiles;  //If the file was created as tempfile
SQL> alter database datafile 'tempfile_name' drop;                      //If the file was created as datafile

If you remove all tempfiles from a temporary tablespace, you may encounter error ORA-25153: Temporary Tablespace is Empty.

Dropping a temp tablespace:
SQL> drop tablespace temp_tbs;
SQL> drop tablespace temp_tbs including contents and datafiles;

The following restrictions apply to default temporary tablespaces:
- DEFAULT TEMPORARY TABLESPACE must be of type TEMPORARY.
- DEFAULT TEMPORARY TABLESPACE cannot be taken off-line.
- DEFAULT TEMPORARY TABLESPACE cannot be dropped until you create another one.

All new users that are not explicitly assigned a TEMPORARY TABLESPACE will get the default temporary tablespace as their TEMPORARY TABLESPACE. Also, when you assign a TEMPORARY tablespace to a user, Oracle will not change this value the next time you change the default temporary tablespace for the database. The DBA should assign a temporary tablespace to each user in the database to prevent them from allocating sort space in the SYSTEM tablespace. This can be done with one of the following commands:
SQL> CREATE USER scott ... TEMPORARY TABLESPACE temp;
SQL> ALTER USER scott TEMPORARY TABLESPACE temp;

To change a user account to use a non-default temporary tablespace:
SQL> ALTER USER user1 TEMPORARY TABLESPACE temp_tbs;

To see the default temporary tablespace for a database, execute the following query:
SQL> select PROPERTY_NAME, PROPERTY_VALUE from database_properties where property_name like '%TEMP%';

Performance Considerations
Some performance considerations for temporary tablespaces:
o Always use temporary tablespaces instead of permanent content tablespaces for sorting and joining (no logging, and one large sort segment is used, reducing recursive SQL and ST space management enqueue contention).
o Always use TEMPFILE instead of DATAFILE (reduces backup and recovery time).
o Ensure that you create your temporary tablespaces as locally managed instead of dictionary managed (i.e. use the sort space bitmap instead of sys.fet$ and sys.uet$ for allocating space).
o Stripe your temporary tablespaces over multiple disks to alleviate possible disk contention and to speed up operations (user processes can read/write to it directly).
o The UNIFORM SIZE must be a multiple of the SORT_AREA_SIZE parameter.

Monitoring Temporary Tablespaces
Unlike datafiles, tempfiles are not listed in V$DATAFILE and DBA_DATA_FILES. Use V$TEMPFILE and DBA_TEMP_FILES instead.

SQL> SELECT tablespace_name, file_name, bytes FROM dba_temp_files WHERE tablespace_name = 'TEMP';

TABLESPACE_NAME FILE_NAME           BYTES
--------------- ------------------- --------------
TEMP            /.../temp01.dbf     11,175,650,000
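To see which sessions are consuming temp space right now, a sketch joining V$SESSION to V$SORT_USAGE (exposed as V$TEMPSEG_USAGE in later releases) is handy:

SQL> SELECT s.sid, s.username, u.tablespace, u.segtype, u.blocks
     FROM v$session s, v$sort_usage u
     WHERE s.saddr = u.session_addr
     ORDER BY u.blocks DESC;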

One can monitor the temporary segments (free and used space in sort segments) from V$SORT_SEGMENT and V$SORT_USAGE.

SQL> select file#, name, round(bytes/(1024*1024),2) "SIZE IN MB's" from v$tempfile;

DBA_FREE_SPACE does not record free space for temporary tablespaces. Use DBA_TEMP_FREE_SPACE or V$TEMP_SPACE_HEADER instead.

SQL> select BYTES_USED, BYTES_FREE from V$TEMP_SPACE_HEADER;

BYTES_USED BYTES_FREE
---------- ----------
4214226944   80740352

From 11g, we can check free temp space in the new view DBA_TEMP_FREE_SPACE:
SQL> select TABLESPACE_NAME from DBA_TEMP_FREE_SPACE;

TABLESPACE_NAME
------------------------------
TEMPTBS

With this script we can monitor the actual space used in a temporary tablespace and see the HWM (High Water Mark) of the temporary tablespace. The below script reports temporary tablespace usage (the script was created for Oracle9i Database) and is designed to run when there is only one temporary tablespace in the database:

SQL> select sum(u.blocks * blk.block_size)/1024/1024 "MB in sort segments",
            (hwm.max * blk.block_size)/1024/1024 "MB High Water Mark"
     from v$sort_usage u,
          (select block_size from dba_tablespaces where contents = 'TEMPORARY') blk,
          (select segblk#+blocks max from v$sort_usage
           where segblk# = (select max(segblk#) from v$sort_usage)) hwm
     group by hwm.max * blk.block_size/1024/1024;

Resizing a tempfile:
SQL> alter database tempfile 'temp-name' resize integer K|M|G|T|P|E;
SQL> alter database tempfile '/path/temp01.dbf' resize 1000M;

Resizing a temporary tablespace:
SQL> alter tablespace temptbs resize 1000M;

Renaming a (temporary) tablespace, from Oracle 10g:
SQL> alter tablespace temp rename to temp2;

In Oracle 11g, a temporary tablespace or its tempfiles can be shrunk. Shrinking frees as much space as possible while maintaining the other attributes of the tablespace or temp files. The optional KEEP clause defines a minimum size for the tablespace or temp file, up to the specified size:
SQL> alter tablespace temp-tbs shrink space;
SQL> alter tablespace temp-tbs shrink space keep n{K|M|G|T|P|E};
SQL> alter tablespace temp-tbs shrink tempfile 'tempfile-name';
SQL> alter tablespace temp-tbs shrink tempfile 'tempfile-name' keep n{K|M|G|T|P|E};

How to reclaim used space

Several methods exist to reclaim the space used by a larger than normal temporary tablespace:
(1) Restarting the database, if possible.
(2) The method that exists for all releases of Oracle: drop the large tempfile (which will drop the tempfile from the data dictionary and the OS file system).
(3) If you are using Oracle9i or higher, simply drop and recreate the temporary tablespace back to its original (or another reasonable) size.

From 11g, we can specify TEMPORARY tablespaces while creating global temporary tables.

Related Views:
DBA_TEMP_FILES
DBA_DATA_FILES
DBA_TABLESPACES
DBA_TEMP_FREE_SPACE (Oracle 11g)
V$TEMPFILE
V$TEMP_SPACE_HEADER
V$TEMPORARY_LOBS
V$TEMPSTAT
V$TEMPSEG_USAGE

Statspack in Oracle
Statspack was introduced in Oracle8i. Statspack is a performance monitoring, diagnosis and reporting utility provided by Oracle. The Statspack package is a set of SQL, PL/SQL, and SQL*Plus scripts that allow the collection, automation, storage, and viewing of performance data. Statspack stores the performance statistics permanently in Oracle tables, which can later be used for reporting and analysis. The data collected can be analyzed using Statspack reports, which include an instance health and load summary page, high resource SQL statements, and the traditional wait events and initialization parameters.

Statspack is a diagnosis tool for instance-wide performance problems; it also supports application tuning activities by providing data which identifies high-load SQL statements. Statspack can be used both proactively, to monitor the changing load on a system, and reactively, to investigate a performance problem.

Although AWR and ADDM (introduced in Oracle 10g, as Statspack's successor) provide better statistics than STATSPACK, users that are not licensed to use the Enterprise Manager Diagnostic Pack should continue to use Statspack.

Statspack versus UTLBSTAT/UTLESTAT
The BSTAT-ESTAT utilities capture information directly from Oracle's in-memory structures and then compare the information from two snapshots in order to produce an elapsed-time report showing the activity of the database. If we look inside utlbstat.sql and utlestat.sql, we see the SQL that samples directly from the v$ views, e.g.:

SQL> insert into stats$begin_stats select * from v$sysstat;
SQL> insert into stats$end_stats select * from v$sysstat;

Statspack provides improved UTLBSTAT/UTLESTAT functionality, and improves on those performance scripts in the following ways:
 Statspack collects more data, including high resource SQL (and the optimizer execution plans for those statements).
 Statspack pre-calculates many ratios useful when performance tuning, such as cache hit ratios and per transaction and per second statistics (many of these ratios must be calculated manually when using BSTAT/ESTAT).
 Permanent tables owned by PERFSTAT store performance statistics; instead of creating/dropping tables each time, data is inserted into the pre-existing tables. This makes historical data comparisons easier.

 Statspack separates the data collection from the report generation: data is collected when a 'snapshot' is taken; viewing the data collected is in the hands of the performance engineer when they run the performance report.
 Data collection is easy to automate using either DBMS_JOB or an OS utility.

NOTE: If you choose to run BSTAT/ESTAT in conjunction with statspack, do not run both as the same user, as there is a name conflict with the STATS$WAITSTAT table.

Installing and Configuring Statspack
$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus "/as sysdba" @spcreate.sql

You will be prompted for the PERFSTAT user's password, default tablespace, and temporary tablespace. This will create the PERFSTAT user, the statspack objects in it, and the STATSPACK package.
NOTE: The default tablespace or temporary tablespace must not be SYSTEM for the PERFSTAT user.

The SPCREATE.SQL script runs the following scripts:
 SPCUSR.SQL: Creates PERFSTAT user and grants privileges
 SPCTAB.SQL: Creates STATSPACK tables
 SPCPKG.SQL: Creates STATSPACK package

Check the log files (created in the present directory) - spcusr.lis, spctab.lis and spcpkg.lis - and ensure that no errors were encountered during the installation.

To install statspack in batch mode, you must assign values to the SQL*Plus variables that specify the default and temporary tablespaces before running SPCREATE.SQL:
 DEFAULT_TABLESPACE: For the default tablespace
 TEMPORARY_TABLESPACE: For the temporary tablespace
 PERFSTAT_PASSWORD: For the PERFSTAT user password

$ sqlplus "/as sysdba"
SQL> define default_tablespace='STATS'
SQL> define temporary_tablespace='TEMP_TBS'
SQL> define perfstat_password='perfstat'
SQL> @?/rdbms/admin/spcreate

When SPCREATE.SQL is run this way, it does not prompt for the information provided by the variables.

Taking snapshots of the database
Each snapshot taken is identified by a snapshot ID, which is a unique number generated at the time the snapshot is taken. Each time a new collection is taken, a new SNAP_ID is generated. The SNAP_ID, along with the database identifier (DBID) and instance number (INSTANCE_NUMBER), comprise the unique key for a snapshot. Use of this unique combination allows storage of multiple instances of an Oracle Real Application Clusters (RAC) database in the same tables.

When a snapshot is executed, the STATSPACK software samples from the in-memory structures inside the SGA and transfers the values into the corresponding STATSPACK tables. Taking such a snapshot stores the current values of the performance statistics in the statspack tables. This snapshot can be used as a baseline for comparison with another snapshot taken at a later time.

$ sqlplus perfstat/perfstat
SQL> exec statspack.snap;

Scheduling Snapshot Gathering
There are three methods to automate/schedule the gathering of statspack snapshots/statistics:
 The DBMS_JOB procedure, to schedule snapshots (you must set the initialization parameter JOB_QUEUE_PROCESSES to greater than 0):

BEGIN
  SYS.DBMS_JOB.ISUBMIT(
    job       => 999,
    what      => 'statspack.snap;',
    next_date => to_date('17/08/2009 18:00:00','dd/mm/yyyy hh24:mi:ss'),
    interval  => 'trunc(SYSDATE+1/24,''HH'')',
    no_parse  => FALSE);
END;
/

 SPAUTO.SQL - this script can be customized and executed to schedule a dbms_job to automate the collection of statspack snapshots.
 Use an OS utility, such as cron.

Statspack reporting
The information captured by a STATSPACK snapshot contains accumulated values: the information in the v$ views is collected from database startup time, with values accumulating until the instance is shut down. In order to get a meaningful elapsed-time report, you must run a STATSPACK report that compares two snapshots. Note that in most cases there is a direct correspondence between the v$ view in the SGA and the corresponding STATSPACK table; e.g. the stats$sysstat table is similar to the v$sysstat view. Remember to set timed_statistics to true for the instance; Statspack will then include important timing information in the data it collects.

Note: In a RAC environment, you must connect to the instance for which you want to collect data.

SQL> select name, snap_id, to_char(snap_time,'DD.MM.YYYY:HH24:MI:SS') "Date/Time"
     from stats$snapshot, v$database;

After snapshots are taken, you can generate performance reports:
SQL> connect perfstat/perfstat
SQL> @?/rdbms/admin/spreport.sql

When the report is run, you are prompted for the following:
 The beginning snapshot ID
 The ending snapshot ID
 The name of the report text file to be created

It is not correct to specify begin and end snapshots where the begin snapshot and end snapshot were taken from

different instance startups; the instance must not have been shut down between the times that the begin and end snapshots were taken. This is necessary because the database's dynamic performance tables, which statspack queries to gather the data, reside in memory; shutting down the Oracle database resets the values in the performance tables to 0. Because statspack subtracts the begin-snapshot statistics from the end-snapshot statistics, the end snapshot would have smaller values than the begin snapshot; in other words, the resulting output would be invalid, and the report shows an appropriate error to indicate this.

To get the list of snapshots:
SQL> select SNAP_ID, SNAP_TIME from STATS$SNAPSHOT;

To run the report without being prompted, assign values to the SQL*Plus variables that specify the begin snap ID, the end snap ID, and the report name before running SPREPORT.SQL. The variables are:
 BEGIN_SNAP: Specifies the begin snapshot ID
 END_SNAP: Specifies the end snapshot ID
 REPORT_NAME: Specifies the report output name

SQL> connect perfstat
SQL> define begin_snap=1
SQL> define end_snap=2
SQL> define report_name=batch_run
SQL> @?/rdbms/admin/spreport

When SPREPORT.SQL is run this way, it does not prompt for the information provided by the variables.

The statspack package includes two reports:
 SPREPORT.SQL is a general instance health report that covers all aspects of instance performance. It calculates and prints ratios and differences for all statistics between the two snapshots, similar to the BSTAT/ESTAT report.
 After examining the instance report, run the SQL report, SPREPSQL.SQL, on a single SQL statement (identified by its hash value). The SQL report only reports on data relating to that single SQL statement.

Adjusting Statspack collection level & threshold
Statspack has two types of collection options: level and threshold. The level parameter controls the type of data collected from Oracle, while the threshold parameter acts as a filter for the collection of SQL statements into the stats$sql_summary table: data is captured on any SQL statements that breach the specified thresholds.

SQL> SELECT * FROM stats$level_description ORDER BY snap_level;

Level 0           Captures general statistics, including rollback segment, row cache, SGA, buffer pool statistics, system events, background events, session events, system statistics, wait statistics, lock statistics, and latch information.
Level 5 (default) Includes capturing high resource usage SQL statements, along with all data captured by lower levels.

Level 6           Includes capturing SQL plan and SQL plan usage information for high resource usage SQL statements, along with all data captured by lower levels.
Level 7           Captures segment level statistics, including logical and physical reads, row lock, ITL and buffer busy waits, along with all data captured by lower levels.
Level 10          Includes capturing parent & child latch statistics, along with all data captured by lower levels.

You can change the default parameters used for taking snapshots so that they are tailored to the instance's workload. To temporarily use a snapshot level or threshold that is different from the instance's default snapshot values, specify the required threshold or snapshot level when taking the snapshot. This value is used only for the immediate snapshot taken; any subsequent snapshots use the preexisting values in the STATS$STATSPACK_PARAMETER table. For example, to take a single level 6 snapshot:
SQL> EXECUTE STATSPACK.SNAP(i_snap_level=>6);

You can save a new value as the instance's default in either of two ways:
1) Change the default level of a snapshot with the STATSPACK.SNAP function. Simply use the appropriate parameter and the new value along with i_modify_parameter=>'true', which makes the level permanent for all future snapshots:
SQL> EXEC STATSPACK.SNAP(i_snap_level=>8, i_modify_parameter=>'true');
Setting the I_MODIFY_PARAMETER value to TRUE saves the new thresholds in the STATS$STATSPACK_PARAMETER table, and these thresholds are used for all subsequent snapshots. If I_MODIFY_PARAMETER was set to FALSE or omitted, the new parameter values are not saved; only the snapshot taken at that point uses the specified values.
2) Change the defaults immediately without taking a snapshot, using the STATSPACK.MODIFY_STATSPACK_PARAMETER procedure. This procedure changes the values permanently, but does not take a snapshot. For example, the following statement changes the snapshot level to 10 and modifies the SQL thresholds for BUFFER_GETS and DISK_READS:
SQL> EXECUTE STATSPACK.MODIFY_STATSPACK_PARAMETER(i_snap_level=>10, i_buffer_gets_th=>10000, i_disk_reads_th=>1000);

Snapshot level and threshold information used by the package is stored in the STATS$STATSPACK_PARAMETER table.

Creating the Execution Plan of an SQL
When you examine the instance report, if you find high-load SQL statements that you want to examine more closely, or if you have identified one or more problematic SQL statements, you may want to check the execution plan. The SQL statement to be reported on is identified by a hash value, which is a numerical representation of the statement's SQL text. The hash value for each statement is displayed in the SQL sections of the instance report. The SQL report, SPREPSQL.SQL, displays statistics, the complete SQL text, and (if the level is more than six) information on any SQL plan(s) associated with that statement.

$ sqlplus perfstat/perfstat

SQL> @?/rdbms/admin/sprepsql.sql

The SPREPSQL.SQL report prompts you for the following:
 Beginning snapshot ID
 Ending snapshot ID
 Hash value for the SQL statement
 Name of the report text file to be created

The SPREPSQL.SQL script can also run in batch mode. To run the report without being prompted, assign values to the SQL*Plus variables that specify the begin snap ID, the end snap ID, the hash value, and the report name before running the SPREPSQL.SQL script. The variables are:
 BEGIN_SNAP: specifies the begin snapshot ID
 END_SNAP: specifies the end snapshot ID
 HASH_VALUE: specifies the hash value
 REPORT_NAME: specifies the report output name

SQL> connect perfstat
SQL> define begin_snap=66
SQL> define end_snap=68
SQL> define hash_value=2342342385
SQL> define report_name=sql_report
SQL> @?/rdbms/admin/sprepsql

When SPREPSQL.SQL is run this way, it does not prompt for the information provided by the variables.

Gathering session statistics
If you want to gather session statistics and wait events for a particular session (in addition to the instance statistics and wait events), specify the session ID in the call to statspack, e.g.:
SQL> exec statspack.snap(i_session_id=>333);
The statistics gathered for the session include session statistics, session events, and lock activity. The default behavior is to not gather session level statistics.

Purging Statspack Data
Purge unnecessary data from the PERFSTAT schema using the SPPURGE.SQL script. This deletes snapshots that fall between the begin and end snapshot IDs you specify. All snapshots that fall within this range are purged, because all data relating to each snapshot ID will be deleted.

SQL> connect perfstat/perfstat
SQL> @?/rdbms/admin/sppurge

When you run SPPURGE.SQL, it displays the instance to which you are connected and the available snapshots. It then prompts you for the low snap ID and high snap ID.

Purging can require the use of a large rollback segment. You can avoid rollback segment extension errors in one of two ways:
 Specify a smaller range of snapshot IDs to purge.
 Explicitly use a large rollback segment, by executing the SET TRANSACTION USE ROLLBACK SEGMENT statement before running the SPPURGE.SQL script.
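A sketch of the second option (the rollback segment name rbs_big is illustrative and must already exist):

SQL> connect perfstat/perfstat
SQL> SET TRANSACTION USE ROLLBACK SEGMENT rbs_big;
SQL> @?/rdbms/admin/sppurge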

Running SPPURGE.SQL in batch mode: assign values to the SQL*Plus variables that specify the low and high snapshot IDs before running the script.
SQL> connect perfstat/perfstat
SQL> define losnapid=1
SQL> define hisnapid=2
SQL> @?/rdbms/admin/sppurge
When SPPURGE.SQL is run this way, it does not prompt for the information provided by the variables.
Note: It is better to export the schema as a backup before running this script, either using your own export parameters or those provided in SPUEXP.PAR.

Truncating Statspack Data
To truncate all performance data and gathered statistics data indiscriminately, use SPTRUNC.SQL.
SQL> connect perfstat/perfstat
SQL> @?/rdbms/admin/sptrunc.sql
Note: It is better to export the schema as a backup before running this script, either using your own export parameters or those provided in SPUEXP.PAR.

Uninstalling Statspack from the Oracle Database
If you want to remove the STATSPACK user, tables and package:
$ sqlplus "/as sysdba"
SQL> @?/rdbms/admin/spdrop.sql
This script will drop the statspack objects and the PERFSTAT user. The SPDROP.SQL script calls the following scripts:
 SPDTAB.SQL - drops tables and public synonyms
 SPDUSR.SQL - drops the user
Check the output files produced, SPDTAB.LIS & SPDUSR.LIS, in the present directory, to ensure that the package was completely uninstalled.

Problems in using Statspack
Statspack reporting suffers from the following problems:
1) Some statistics may only be reported on COMPLETION of a query. For example, if a query runs for 12 hours, its processing won't be reported during any of the snapshots taken while the query was busy executing.
2) If queries are aged out of the shared pool, the stats from V$SQL are reset. This can throw off the delta calculations and even make them negative. For example, query A has 10,000 buffer_gets at snapshot 1, but at snapshot #2 it has been aged out of the pool and reloaded and now shows only 1,000 buffer_gets. So, when you run spreport.sql from snapshot 1 to 2, you'll get 1,000-10,000 = -9,000 for this query.

Oracle Statspack Scripts
Installation and Uninstallation
The statspack installation and removal scripts must be run as a user with the SYSDBA privilege.
 SPCREATE.SQL: Installs the statspack user (PERFSTAT), its objects and grants privileges, by calling the following scripts:
o SPCUSR.SQL: Creates the statspack user (PERFSTAT), its objects and grants privileges.
o SPCTAB.SQL: Creates statspack tables (run as PERFSTAT).

o SPCPKG.SQL: Creates the statspack package (run as PERFSTAT).
 SPDROP.SQL: Uninstalls statspack from the database, by calling the following scripts:
o SPDTAB.SQL: Drops statspack tables.
o SPDUSR.SQL: Drops the statspack user (PERFSTAT).

Reporting and Automation
The statspack reporting and automation scripts must be run as the PERFSTAT user.
 SPREPORT.SQL: StatsPack REPort. Generates a statspack report on differences between values recorded in two snapshots. This calls SPREPINS.SQL.
 SPREPINS.SQL: StatsPack REPort INStance. Generates a statspack report for the database and instance specified. This calls SPREPCON.SQL.
 SPREPSQL.SQL: StatsPack REPort SQL. Generates a statspack SQL report for the specific SQL hash value specified. This calls SPRSQINS.SQL.
 SPRSQINS.SQL: StatsPack Report SQl INStance. Statspack SQL report to show resource usage, SQL text and any SQL plans. This calls SPREPCON.SQL.
 SPREPCON.SQL: StatsPack REPort CONfiguration. Allows configuration of certain aspects of the instance report.
 SPAUTO.SQL: Automates statspack statistics collection. The script can be customized and executed to schedule a dbms_job to automate the collection of STATSPACK snapshots.

Performance Data Maintenance
The statspack data maintenance scripts must be run as the PERFSTAT user.
 SPPURGE.SQL: Purges a limited range of snapshot IDs for a given database instance.
 SPTRUNC.SQL: Truncates all performance data in statspack tables. Caution: Do not use this script unless you want to remove all data in the schema you are using. You can choose to export the data as a backup before using this script.
 SPUEXP.PAR: An export parameter file supplied for exporting the whole PERFSTAT user:
$ exp file=spuexp.dmp log=spuexp.log compress=y grants=y indexes=y rows=y constraints=y owner=PERFSTAT consistent=y

Upgrading Statspack
The statspack upgrade scripts must be run as a user with the SYSDBA privilege.
 SPUP817.SQL: Upgrading statspack from 8.1.6 to 8.1.7.
 SPUP90.SQL: Upgrading statspack from 8.1.7 to 9.0.
 SPUP92.SQL: Upgrading statspack to the 9.2 schema.
 SPUP10.SQL: Upgrading statspack to the 10.1 schema.
 SPUP102.SQL: Upgrading statspack to the 10.2 schema.
 SPUP11.SQL: Upgrading statspack to the 11 schema.
NOTE:
1. Backup the existing schema before running the upgrade scripts.
2. Upgrade scripts should only be run once.
3. Downgrade scripts are not provided.

Statspack Documentation

The SPDOC.TXT file in the $ORACLE_HOME/rdbms/admin directory contains instructions and documentation on the statspack package.

Tables in the PERFSTAT schema
There are 28 tables in Oracle8i, 68 tables in Oracle 10g, and 73 tables in Oracle 11g. The tables in Oracle 11g:
STATS$BG_EVENT_SUMMARY
STATS$BUFFER_POOL_STATISTICS
STATS$BUFFERED_QUEUES
STATS$BUFFERED_SUBSCRIBERS
STATS$CR_BLOCK_SERVER
STATS$CURRENT_BLOCK_SERVER
STATS$DATABASE_INSTANCE
STATS$DB_CACHE_ADVICE
STATS$DLM_MISC
STATS$DYNAMIC_REMASTER_STATS
STATS$ENQUEUE_STATISTICS
STATS$EVENT_HISTOGRAM
STATS$FILE_HISTOGRAM
STATS$FILESTATXS
STATS$IDLE_EVENT
STATS$INSTANCE_CACHE_TRANSFER
STATS$INSTANCE_RECOVERY
STATS$INTERCONNECT_PINGS
STATS$IOSTAT_FUNCTION
STATS$IOSTAT_FUNCTION_NAME
STATS$JAVA_POOL_ADVICE
STATS$LATCH
STATS$LATCH_CHILDREN
STATS$LATCH_MISSES_SUMMARY
STATS$LATCH_PARENT
STATS$LEVEL_DESCRIPTION
STATS$LIBRARYCACHE
STATS$MEMORY_DYNAMIC_COMPS
STATS$MEMORY_RESIZE_OPS
STATS$MEMORY_TARGET_ADVICE
STATS$MUTEX_SLEEP
STATS$OSSTAT
STATS$OSSTATNAME
STATS$PARAMETER
STATS$PGA_TARGET_ADVICE
STATS$PGASTAT
STATS$PROCESS_MEMORY_ROLLUP
STATS$PROCESS_ROLLUP
STATS$PROPAGATION_RECEIVER
STATS$PROPAGATION_SENDER
STATS$RESOURCE_LIMIT
STATS$ROLLSTAT

STATS$ROWCACHE_SUMMARY
STATS$RULE_SET
STATS$SEG_STAT
STATS$SEG_STAT_OBJ
STATS$SESS_TIME_MODEL
STATS$SESSION_EVENT
STATS$SESSTAT
STATS$SGA
STATS$SGA_TARGET_ADVICE
STATS$SGASTAT
STATS$SHARED_POOL_ADVICE
STATS$SNAPSHOT
STATS$SQL_PLAN
STATS$SQL_PLAN_USAGE
STATS$SQL_STATISTICS
STATS$SQL_SUMMARY
STATS$SQL_WORKAREA_HISTOGRAM
STATS$SQLTEXT
STATS$STATSPACK_PARAMETER
STATS$STREAMS_APPLY_SUM
STATS$STREAMS_CAPTURE
STATS$STREAMS_POOL_ADVICE
STATS$SYS_TIME_MODEL
STATS$SYSSTAT
STATS$SYSTEM_EVENT
STATS$TEMP_SQLSTATS
STATS$TEMPSTATXS
STATS$THREAD
STATS$TIME_MODEL_STATNAME
STATS$UNDOSTAT
STATS$WAITSTAT

Oracle Statistics
Whenever a valid SQL statement is processed, Oracle has to decide how to retrieve the necessary data; for example, a select might be done by table scan or by using indexes. This decision can be made using one of two methods:
 Rule Based Optimizer (RBO) - This method is used if the server has no internal statistics relating to the objects referenced by the statement. This method is no longer favoured by Oracle and was desupported from Oracle 10g.
 Cost Based Optimizer (CBO) - This method is used if internal statistics are present. The CBO checks several possible execution plans and selects the one with the lowest cost, where cost relates to system resources.

Since Oracle 8i the Cost Based Optimizer (CBO) is the preferred optimizer for Oracle. Oracle can do things in several different ways, and it does the figuring automatically, using the Cost Based Optimizer. The CBO uses statistics - a variety of counts, averages, and other numbers - to figure out the best way to do things. The DBA's job is to make sure the numbers are good enough for that optimizer to work properly. Oracle statistics tell us the size of the tables, the distribution of values within columns, and other important information so that SQL statements will always generate the best execution plans. If new objects are created, or the amount of data in the database changes, the statistics will no longer represent the real state of the database, and the CBO decision process may be seriously impaired.

The term Oracle statistics may refer to historical performance statistics that are kept in STATSPACK or AWR, but the more common use of the term is about the Oracle optimizer metadata statistics, which provide the cost-based SQL optimizer with information about the nature of the tables.

These statistics, which are created for the purposes of query optimization and are stored in the data dictionary, should not be confused with the performance statistics visible through V$ views. Optimizer statistics are a collection of data that describe more details about the database and the objects in the database; these statistics are used by the query optimizer to choose the best execution plan for each SQL statement. Optimizer statistics include the following:

 Table statistics
o Number of rows
o Number of blocks
o Average row length
 Column statistics
o Number of distinct values (NDV) in column
o Number of nulls in column
o Data distribution (histogram)
 Index statistics
o Number of leaf blocks
o Levels
o Clustering factor
 System statistics
o I/O performance and utilization
o CPU performance and utilization

The optimizer statistics are stored in the data dictionary. They can be viewed using data dictionary views (an example query follows below). Only statistics stored in the dictionary itself have an impact on the cost-based optimizer.

The optimizer is influenced by the following factors:
 OPTIMIZER_MODE in the initialization file
 Statistics in the data dictionary
 Hints

OPTIMIZER_MODE can have the following values: CHOOSE, ALL_ROWS, FIRST_ROWS, RULE.

If we provide Oracle with good statistics about the schema, the CBO will almost always generate an optimal execution plan, based on the optimizer_mode. The areas of schema analysis include:
 Object statistics - Statistics for all tables, partitions, IOTs, etc. should be sampled with a deep and statistically valid sample size.
 Critical columns - Those columns that are regularly referenced in SQL statements:
o Heavily skewed columns - Histograms on these columns help the CBO properly choose between an index range scan and a full table scan.
o Foreign key columns - For n-way table joins, the CBO needs to determine the optimal table join order, and knowing the cardinality of the intermediate result sets is critical.
 External statistics - From Oracle9i release 2 we can collect the external cpu_cost and io_cost metrics; Oracle will sample the CPU cost and I/O cost during statistics collection and use this information to determine the optimal execution plan. External statistics are most useful for SQL running in the all_rows optimizer mode.
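For example, the table-level statistics listed above can be inspected in DBA_TABLES (a quick sketch; the schema name is illustrative):

SQL> SELECT table_name, num_rows, blocks, avg_row_len, last_analyzed
     FROM dba_tables
     WHERE owner = 'SCOTT';

A NULL last_analyzed here is a simple sign that an object has never had statistics gathered.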

Creating statistics
In order to make good use of the CBO, we need to create statistics for the data in the database. There are several options to create statistics: statistics are maintained automatically by Oracle, or we can maintain the optimizer statistics manually using the DBMS_STATS package. The DBMS_STATS package provides procedures for managing statistics: we can save and restore copies of statistics; we can export statistics from one system and import them into another system (for example, you could export statistics from a production system to a test system); and we can lock statistics to prevent them from changing. The ability to save and re-use schema statistics is important for:
 Bi-Modal databases - Many databases get huge benefits from using two sets of stats: one for OLTP (daytime), and another for batch (evening jobs).
 Test databases - Many Oracle professionals export their production statistics into the development instances so that the test execution plans more closely resemble the production database.
For data warehouses and databases using the all_rows optimizer_mode, from Oracle9i release 2 we can also collect the external cpu_cost and io_cost metrics.

When statistics are updated for a database object, Oracle invalidates any currently parsed SQL statements that access the object. The next time such a statement executes, the statement is re-parsed and the optimizer automatically chooses a new execution plan based on the new statistics. Distributed statements accessing objects with new statistics on remote databases are not invalidated; the new statistics take effect the next time the SQL statement is parsed. Because the objects in a database can be constantly changing, statistics must be regularly updated so that they accurately describe these database objects.

Automatic Statistics Gathering
The recommended approach to gathering statistics is to allow Oracle to automatically gather the statistics. Oracle gathers statistics on all database objects automatically and maintains those statistics in a regularly-scheduled maintenance job. Automated statistics collection eliminates many of the manual tasks associated with managing the query optimizer, and significantly reduces the chances of getting poor execution plans because of missing or stale statistics.

GATHER_STATS_JOB
Optimizer statistics are automatically gathered with the job GATHER_STATS_JOB. This job gathers statistics on all objects in the database which have missing or stale statistics. It is created automatically at database creation time and is managed by the Scheduler, which runs it when the maintenance window is opened. By default, the maintenance window opens every night from 10 P.M. to 6 A.M. and all day on weekends.

The GATHER_STATS_JOB job gathers optimizer statistics by calling the DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC procedure. GATHER_DATABASE_STATS_JOB_PROC is an internal procedure, but it operates in a very similar fashion to the DBMS_STATS.GATHER_DATABASE_STATS procedure using the GATHER AUTO option. The primary difference is that GATHER_DATABASE_STATS_JOB_PROC prioritizes the database objects that require statistics, so that those objects which most need updated statistics are processed first. This ensures that the most-needed statistics are gathered before the maintenance window closes; the remaining objects are then processed in the next maintenance window.

The stop_on_window_close attribute controls whether the GATHER_STATS_JOB continues when the maintenance window closes. The default setting for the stop_on_window_close attribute is TRUE, causing the Scheduler to terminate GATHER_STATS_JOB when the maintenance window closes.

Enabling Automatic Statistics Gathering
The GATHER_DATABASE_STATS_JOB_PROC procedure collects statistics on database objects when the object has no previously gathered statistics or the existing statistics are stale because the underlying object has been modified significantly (more than 10% of the rows). By default. This job gathers statistics on all objects in the database which have missing statistics and stale statistics. and all day on weekends. This job is created automatically at database creation time and is managed by the Scheduler.

it is sufficient to analyze external tables when the corresponding file changes.'EMP').DISABLE('GATHER_STATS_JOB').  DBMS_STATS. For external tables. When Oracle encounters a table with no statistics.  END.  /  The statistics on these tables can be set to values that represent the typical state of the table. We should gather statistics on the table when the tables have a representative number of rows. there are two approaches: The statistics on these tables can be set to NULL. / Automatic statistics gathering relies on the modification monitoring feature. If this feature is disabled.'EMP'). Because the automatic statistics gathering runs during an overnight batch window. preferably as part of the same script or job that is running the bulk load. We can verify that the job exists by viewing the DBA_SCHEDULER_JOBS view: SQL> SELECT * FROM DBA_SCHEDULER_JOBS WHERE JOB_NAME = 'GATHER_STATS_JOB'.LOCK_TABLE_STATS('SCOTT'. and then lock the statistics. Because data manipulation is not allowed against external tables.DELETE_TABLE_STATS('SCOTT'. GATHER_DATABASE_STATS. However. then disable the GATHER_STATS_JOB as follows: BEGIN DBMS_SCHEDULER. The statistics can set to NULL by deleting and then locking the statistics:  BEGIN  DBMS_STATS. This feature is enabled when the STATISTICS_LEVEL parameter is set to TYPICAL (default) or ALL.  Objects which are the target of large bulk loads which add 10% or more to the object's total size. and automatic statistics gathering processing. Oracle dynamically gathers the necessary statistics as part of query optimization. When to Use Manual Statistics Automatic statistics gathering should be sufficient for most database objects which are being modified at a moderate speed. because any statistics generated on the table during the overnight batch window may not be the most appropriate statistics for the daytime workload. you can collect statistics on an individual external table using GATHER_TABLE_STATS. the statistics-gathering procedures should be run on those tables immediately following the load process. This is more effective than the GATHER_STATS_JOB. END. and this parameter should be set to a value of 2 or higher. there are cases where automatic statistics gathering may not be adequate.Automatic statistics gathering is enabled by default when a database is created. Sampling on external tables is not supported so the ESTIMATE_PERCENT option should be explicitly set to NULL. This dynamic sampling feature is controlled by the OPTIMIZER_DYNAMIC_SAMPLING parameter. There are typically two types of such objects:  Volatile tables that are being deleted or truncated and rebuilt during the course of the day. . or when a database is upgraded from an earlier database release. the statistics on tables which are significantly modified during the day may become stale. However.For tables which are being bulk-loaded. The default value is 2. In this case statistics need to be manually gathered. statistics are not collected during GATHER_SCHEMA_STATS. then the automatic statistics gathering job is not able to detect stale statistics. In situations when you want to disable automatic statistics gathering. If the monitoring feature is disabled by setting STATISTICS_LEVEL to BASIC. For  highly volatile tables. automatic statistics gathering cannot detect stale statistics.

the DBMS_STATS package in the PL/SQL Packages and Types reference has taken over the statistics functions. l_job). Over the past few releases. or estimated based on a specific number of rows. In some cases. then you need to manually collect statistics in all schemas. old versions of statistics are saved automatically for future restoring. or a percentage of rows. Scheduling Stats Scheduling the gathering of statistics using DBMS_JOB is the easiest way to make sure they are always up to date: SET DECLARE l_job BEGIN DBMS_JOB. The preferred tool for collecting statistics used to be the ANALYZE command. such as the dynamic performance tables.'.put_line('Job: END. the DBMS_STATS package provides procedures for locking the statistics for a table or schema.remove(X).'SYSDATE COMMIT. Statistics on fixed objects. DBMS_OUTPUT. job to be removed. If the data in the database changes regularly. however to obtain faster and better statistics use . The ANALYZE command is available for all versions of Oracle. The statistics can be computed exactly. index or cluster. The above code sets up a job to gather statistics for SCOTT for the current time every day. Existing EXEC COMMIT. 1'). statistics gathering should be done when database has representative activity. such as highly volatile tables. Whenever statistics in dictionary are modified. Analyze Statement The ANALYZE statement can be used to gather statistics for a specific table.submit(l_job. need to be manually collected using GATHER_FIXED_OBJECTS_STATS procedure. we may want to prevent any new statistics from being gathered on a table or schema by the DBMS_STATS_JOB process. you also need to gather statistics regularly to ensure that the statistics accurately represent characteristics of your database objects. including system schemas. / SERVEROUTPUT ON NUMBER.Another area in which statistics need to be manually gathered is the system statistics. and left the ANALYZE command with more mundane 'health check' work like analyzing chained rows. + ' || END. SYSDATE. Where jobs 'X' is can the number be of the removed using: DBMS_JOB.gather_schema_stats(''SCOTT''). 'BEGIN DBMS_STATS. We can list the current jobs on the server using the DBA_JOBS and DBA_JOBS_RUNNING views. These statistics are not automatically gathered. In those cases. Statistics can be restored using RESTORE procedures of DBMS_STATS package. Fixed objects record current database activity. Manual Statistics Gathering If you choose not to use automatic statistics gathering.

and in 8i and above DBMS_STATS. import.ANALYZE_DATABASE('ESTIMATE'. export. The DBMS_STATS package collects a broader.ANALYZE_SCHEMA('SCOTT'. more accurate set of statistics.ESTIMATE_PERCENT=>25). This PL/SQL package is also used to modify.ANALYZE_SCHEMA you can gather all the statistics for all the tables.ESTIMATE_PERCENT=>15).the procedures supplied .ANALYZE_SCHEMA.'DELETE').GATHER_SCHEMA_STATS. COLUMNS.in 7. PERCENT. EXEC DBMS_UTILITY. clusters and indexes of a schema. EXEC DBMS_UTILITY.ANALYZE_SCHEMA('SCOTT'. SIZE 10. CASCADE.4 and 8. Oracle list a number of benefits to using it including parallel execution. and gathers statistics more efficiently.ANALYZE_SCHEMA('SCOTT'. and .ESTIMATE_ROWS=>100). STATISTICS. We may continue to use ANALYZE statement to for other purposes not related to optimizer statistics collection:  To use the VALIDATE or LIST CHAINED ROWS clauses  To collect information on free list blocks  To sample a number (rather than a percentage) of rows DBMS_UTILITY The DBMS_UTILITY package can be used to gather statistics for a whole schema or database. STATISTICS. view. 500 15 ALL STRUCTURE VALIDATE STRUCTURE cluster. options options options sal DELETE DELETE VALIDATE emp_pk emp_custs VALIDATE or statistics statistics statistics ESTIMATE STATISTICS SAMPLE STATISTICS SAMPLE STATISTICS FOR emp emp LIST table. EXEC DBMS_UTILITY. EXEC DBMS_UTILITY.3.ANALYZE_DATABASE('ESTIMATE'.0 DBMS_UTILITY. The analyze Syntax: ANALYZE ANALYZE ANALYZE ANALYZE ANALYZE ANALYZE ANALYZE ANALYZE ANALYZE ANALYZE ANALYZE ANALYZE ANALYZE table can table index cluster TABLE be used to create tableName indexName clusterName statistics TABLE emp emp emp emp ESTIMATE ESTIMATE ESTIMATE TABLE INDEX ANALYZE ANALYZE ANALYZE TABLE INDEX CLUSTER ANALYZE ANALYZE TABLE TABLE index VALIDATE CHAINED REF ROWS STATISTICS. With DBMS_UTILITY. COMPUTE FOR COLUMNS COMPUTE COMPUTE emp emp_pk emp 1 {compute|estimate|delete} {compute|estimate|delete} {compute|estimate|delete} TABLE emp emp COMPUTE STATISTICS TABLE emp PARTITION (p1) INDEX emp_pk TABLE TABLE TABLE for STATISTICS.ANALYZE_DATABASE('COMPUTE').EXEC DBMS_UTILITY. long term storage of statistics and transfer of statistics between servers. Note: Do not use the COMPUTE and ESTIMATE clauses of ANALYZE statement to collect optimizer statistics. STRUCTURE.'ESTIMATE'.'ESTIMATE'. DBMS_STATS The DBMS_STATS package was introduced in Oracle 8i and is Oracles preferred method of gathering object statistics. ROWS. INTO UPDATE.ANALYZE_SCHEMA('SCOTT'. EXEC DBMS_UTILITY. CASCADE. Both methods follow the same format as the ANALYZE statement: EXEC DBMS_UTILITY. STATISTICS.ESTIMATE_ROWS=>100). These clauses are supported solely for backward compatibility and may be removed in a future release. cr.'COMPUTE'). STATISTICS.

DBMS_STATS
The DBMS_STATS package was introduced in Oracle 8i and is Oracle's preferred method of gathering object statistics. Oracle lists a number of benefits to using it, including parallel execution, long term storage of statistics, and transfer of statistics between servers. The DBMS_STATS package collects a broader, more accurate set of statistics, and gathers statistics more efficiently. This PL/SQL package is also used to modify, view, export, import, and delete statistics.
The DBMS_STATS package can gather statistics on tables and indexes, as well as individual columns and partitions of tables. It does not gather cluster statistics; however, we can use DBMS_STATS to gather statistics on the individual tables instead of the whole cluster. When we generate statistics for a table, column, or index, if the data dictionary already contains statistics for the object, then Oracle updates the existing statistics. The older statistics are saved and can be restored later if necessary. It follows a similar format to the other methods:

EXEC DBMS_STATS.GATHER_DATABASE_STATS;
EXEC DBMS_STATS.GATHER_DATABASE_STATS(ESTIMATE_PERCENT=>20);
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SCOTT');
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SCOTT',ESTIMATE_PERCENT=>15);
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(OWNNAME=>'MRT');
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(OWNNAME=>'"DWH"',ESTIMATE_PERCENT=>10);
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(OWNNAME=>'PERFSTAT',OPTIONS=>'GATHER AUTO');
EXEC DBMS_STATS.GATHER_TABLE_STATS('SCOTT','EMP');
EXEC DBMS_STATS.GATHER_TABLE_STATS('SCOTT','EMP',ESTIMATE_PERCENT=>15);
EXEC DBMS_STATS.GATHER_TABLE_STATS('SCOTT','EMP',CASCADE=>TRUE);
EXEC DBMS_STATS.GATHER_INDEX_STATS('SCOTT','EMP_PK');
EXEC DBMS_STATS.GATHER_INDEX_STATS('SCOTT','EMP_PK',ESTIMATE_PERCENT=>15);

The full parameter list of the schema-level procedure is: DBMS_STATS.GATHER_SCHEMA_STATS(ownname, estimate_percent, block_sample, method_opt, degree, granularity, cascade, stattab, statid, options, statown, no_invalidate, gather_temp, gather_fixed).

This package also gives us the ability to delete statistics:

EXEC DBMS_STATS.DELETE_DATABASE_STATS;
EXEC DBMS_STATS.DELETE_SCHEMA_STATS('SCOTT');
EXEC DBMS_STATS.DELETE_TABLE_STATS('SCOTT','EMP');
EXEC DBMS_STATS.DELETE_INDEX_STATS('SCOTT','EMP_PK');
EXEC DBMS_STATS.DELETE_PENDING_STATS('SH','SALES');

When gathering statistics on system schemas, we can use the procedure DBMS_STATS.GATHER_DICTIONARY_STATS. This procedure gathers statistics for all system schemas, including SYS and SYSTEM, and other optional schemas, such as CTXSYS and DRSYS.

Procedures in the DBMS_STATS package for gathering statistics on database objects:

Procedure                 Collects
GATHER_INDEX_STATS        Index statistics
GATHER_TABLE_STATS        Table, column, and index statistics
GATHER_SCHEMA_STATS       Statistics for all objects in a schema
GATHER_DICTIONARY_STATS   Statistics for all dictionary objects
GATHER_DATABASE_STATS     Statistics for all objects in a database

Statistics Gathering Using Sampling
The statistics-gathering operations can utilize sampling to estimate statistics. Sampling is an important technique for gathering statistics. Gathering statistics without sampling requires full table scans and sorts of entire tables.

Sampling minimizes the resources necessary to gather statistics. Sampling is specified using the ESTIMATE_PERCENT argument to the DBMS_STATS procedures. While the sampling percentage can be set to any value, Oracle Corporation recommends setting the ESTIMATE_PERCENT parameter of the DBMS_STATS gathering procedures to DBMS_STATS.AUTO_SAMPLE_SIZE to maximize performance gains while achieving necessary statistical accuracy. AUTO_SAMPLE_SIZE lets Oracle determine the best sample size necessary for good statistics, based on the statistical property of the object. Because each type of statistics has different requirements, the size of the actual sample taken may not be the same across the table, columns, or indexes. When the ESTIMATE_PERCENT parameter is manually specified, the DBMS_STATS gathering procedures may automatically increase the sampling percentage if the specified percentage did not produce a large enough sample. This ensures the stability of the estimated values by reducing fluctuations.

EXEC DBMS_STATS.GATHER_SCHEMA_STATS(OWNNAME=>'SCOTT', ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE);

Parallel Statistics Gathering
The statistics-gathering operations can run either serially or in parallel. The degree of parallelism can be specified with the DEGREE argument to the DBMS_STATS gathering procedures. Parallel statistics gathering can be used in conjunction with sampling. Oracle recommends setting the DEGREE parameter to DBMS_STATS.AUTO_DEGREE. This setting allows Oracle to choose an appropriate degree of parallelism based on the size of the object and the settings for the parallel-related init.ora parameters. Note that certain types of index statistics are not gathered in parallel, including cluster indexes, domain indexes, and bitmap join indexes.

EXEC DBMS_STATS.GATHER_SCHEMA_STATS(OWNNAME=>'SCOTT', ESTIMATE_PERCENT=>25, DEGREE=>7);
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(OWNNAME=>'SCOTT', ESTIMATE_PERCENT=>5, DEGREE=>6);

Statistics on Partitioned Objects
For partitioned tables and indexes, DBMS_STATS can gather separate statistics for each partition, as well as global statistics for the entire table or index. Similarly, for composite partitioning, DBMS_STATS can gather separate statistics for subpartitions, partitions, and the entire table or index. The type of partitioning statistics to be gathered is specified in the GRANULARITY argument to the DBMS_STATS gathering procedures. Depending on the SQL statement being optimized, the optimizer can choose to use either the partition (or subpartition) statistics or the global statistics. Both types of statistics are important for most applications, and Oracle recommends setting the GRANULARITY parameter to AUTO to gather both types of partition statistics.

Column Statistics and Histograms
When gathering statistics on a table, DBMS_STATS gathers information about the data distribution of the columns within the table. The most basic information about the data distribution is the maximum value and minimum value of the column. However, this level of statistics may be insufficient for the optimizer's needs if the data within the column is skewed. For skewed data distributions, histograms can also be created as part of the column statistics to describe the data distribution of a given column. Histograms are specified using the METHOD_OPT argument of the DBMS_STATS gathering procedures. Oracle recommends setting METHOD_OPT to FOR ALL COLUMNS SIZE AUTO. With this setting, Oracle automatically determines which columns require histograms and the number of buckets (size) of each histogram. You can also manually specify which columns should have histograms and the size of each histogram.

EXEC DBMS_STATS.GATHER_TABLE_STATS('SH', 'SALES', METHOD_OPT=>'FOR ALL COLUMNS SIZE AUTO', ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE, NO_INVALIDATE=>FALSE);
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SCOTT', METHOD_OPT=>'FOR COLUMNS (empno, deptno)');
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>'DWH', TABNAME=>'SALES', ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE, METHOD_OPT=>'FOR ALL COLUMNS SIZE AUTO');
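For instance, a minimal sketch of manually requesting a histogram on a single skewed column; the column name prod_id and the bucket count 50 are illustrative choices, not values from the text above:

EXEC DBMS_STATS.GATHER_TABLE_STATS('SH', 'SALES', METHOD_OPT=>'FOR COLUMNS SIZE 50 prod_id');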

You should gather new column statistics on a table after creating a function-based index, to allow Oracle to collect column statistics equivalent information for the expression. This is done by calling the statistics-gathering procedure with the METHOD_OPT argument set to FOR ALL HIDDEN COLUMNS.

EXEC DBMS_STATS.GATHER_TABLE_STATS('SH', 'SALES', METHOD_OPT=>'FOR COLUMNS (sal+comm)');

User-defined Statistics
You can create user-defined optimizer statistics to support user-defined indexes and functions. When you associate a statistics type with a column or domain index, Oracle calls the statistics collection method in the statistics type whenever statistics are gathered for database objects.

When to Gather Statistics
When gathering statistics manually, we not only need to determine how to gather statistics, but also when and how often to gather new statistics. For an application in which tables are being incrementally modified, we may only need to gather new statistics every week or every month. The simplest way to gather statistics in these environments is to use a script or job scheduling tool to regularly run the GATHER_SCHEMA_STATS and GATHER_DATABASE_STATS procedures. The frequency of collection intervals should balance the task of providing accurate statistics for the optimizer against the processing overhead incurred by the statistics collection process.
For tables which are being substantially modified in batch operations, such as with bulk loads, statistics should be gathered on those tables as part of the batch operation. The DBMS_STATS procedure should be called as soon as the load operation completes.
Note: If you need to remove all rows from a table when using DBMS_STATS, use TRUNCATE instead of dropping and re-creating the same table. When a table is dropped, workload information used by the auto-histogram gathering feature and saved statistics history used by the RESTORE_*_STATS procedures will be lost. Without this data, these features will not function properly.

Determining Stale Statistics
Statistics must be regularly gathered on database objects as those database objects are modified over time. In order to determine whether or not a given database object needs new database statistics, Oracle provides a table monitoring facility. This monitoring is enabled by default when STATISTICS_LEVEL is set to TYPICAL or ALL. Monitoring tracks the approximate number of INSERTs, UPDATEs, and DELETEs for that table, as well as whether the table has been truncated, since the last time statistics were gathered. If a monitored table has been modified more than 10%, then these statistics are considered stale and gathered again. The GATHER_DATABASE_STATS or GATHER_SCHEMA_STATS procedures gather new statistics for tables with stale statistics when the OPTIONS parameter is set to GATHER STALE or GATHER AUTO.
The information about changes to tables can be viewed in the USER_TAB_MODIFICATIONS view. Following a data modification, there may be a few minutes delay while Oracle propagates the information to this view. Use the DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO procedure to immediately reflect the outstanding monitored information kept in memory.

Monitoring can be enabled or disabled at the table, schema, or database level:

-- Table level
ALTER TABLE emp MONITORING;
ALTER TABLE emp NOMONITORING;
-- Schema level
EXEC DBMS_STATS.alter_schema_tab_monitoring('SCOTT', TRUE);
EXEC DBMS_STATS.alter_schema_tab_monitoring('SCOTT', FALSE);
-- Database level
EXEC DBMS_STATS.alter_database_tab_monitoring(TRUE);
EXEC DBMS_STATS.alter_database_tab_monitoring(FALSE);
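To see what the monitoring facility has recorded so far, flush the in-memory figures and query the modifications view; a small sketch:

EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

SELECT table_name, inserts, updates, deletes, truncated
FROM   user_tab_modifications;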

For partitioned tables, there are often cases in which only a single partition is modified. In those cases, statistics can be gathered only on those partitions rather than gathering statistics for the entire table. However, gathering global statistics for the partitioned table may still be necessary.

Transferring Statistics between Databases
It is possible to transfer statistics between servers, allowing consistent execution plans between servers with varying amounts of data. Statistics can be exported and imported from the data dictionary to user-owned tables. This enables you to create multiple versions of statistics for the same schema. It also enables you to copy statistics from one database to another database. You may want to do this to copy the statistics from a production database to a scaled-down test database. It can be very handy to use production statistics on a development database, so that we can forecast the optimizer behaviour.
Note that the optimizer does not use statistics stored in a user-owned table. The only statistics used by the optimizer are the statistics stored in the data dictionary. In order to have the optimizer use the statistics in user-owned tables, you must import those statistics into the data dictionary using the statistics import procedures.
In order to move statistics from one database to another, you must first export the statistics on the first database, then copy the statistics table to the second database (using the EXP and IMP utilities or other mechanisms), and finally import the statistics into the second database.
Note: Exporting and importing statistics is a distinct concept from the EXP and IMP utilities of the database; the DBMS_STATS export and import procedures do not utilize EXP and IMP dump files. The EXP and IMP utilities themselves export and import optimizer statistics from the database along with the table. One exception is that statistics are not exported with the data if a table has columns with system-generated names.
Before exporting statistics, you first need to create a table for holding the statistics. This statistics table is created using the procedure DBMS_STATS.CREATE_STAT_TABLE. After this table is created, you can export statistics from the data dictionary into your statistics table using the DBMS_STATS.EXPORT_*_STATS procedures. The statistics can then be imported using the DBMS_STATS.IMPORT_*_STATS procedures.

In the following example the statistics for the APPSCHEMA user are collected into a new table, STATS_TAB, which is owned by DBASCHEMA:

1. Create the statistics table.

SQL> EXEC DBMS_STATS.CREATE_STAT_TABLE('DBASCHEMA','STATS_TAB');

The general form is DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCHEMA_NAME', stat_tab => 'STATS_TABLE', tblspace => 'STATS_TABLESPACE').

2. Export statistics to the statistics table.

EXEC DBMS_STATS.EXPORT_SCHEMA_STATS('APPSCHEMA','STATS_TAB',NULL,'DBASCHEMA');
(or)
EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(OWNNAME=>'APPSCHEMA',STATTAB=>'STAT_TAB',STATID=>'030610',STATOWN=>'DBASCHEMA');
(or)
EXEC DBMS_STATS.EXPORT_SCHEMA_STATS('ORIGINAL_SCHEMA','STATS_TABLE',NULL,'STATS_TABLE_OWNER');

3. Copy the statistics table to the other server using any one of the below methods.

SQL*Plus copy:
SQL> insert into dbaschema.stats_tab select * from dbaschema.stats_tab@source;

Export/Import:
exp tables=dbaschema.stats_tab file=stats.dmp log=stats_exp.log
imp file=stats.dmp log=stats_imp.log tables=dbaschema.stats_tab

Datapump:
expdp directory=dpump_dir dumpfile=stats.dmp logfile=stats_exp.log tables=dbaschema.stats_tab
impdp directory=dpump_dir dumpfile=stats.dmp logfile=stats_imp.log

4. Import statistics into the data dictionary.

EXEC DBMS_STATS.IMPORT_SCHEMA_STATS('APPSCHEMA','STATS_TAB',NULL,'DBASCHEMA');
(or)
EXEC DBMS_STATS.IMPORT_SCHEMA_STATS(OWNNAME=>'APPSCHEMA',STATTAB=>'STAT_TAB',STATID=>'030610',STATOWN=>'DBASCHEMA');
(or)
EXEC DBMS_STATS.IMPORT_SCHEMA_STATS('NEW_SCHEMA','STATS_TABLE',NULL,'SYSTEM');

5. Drop the statistics table (optional step).

SQL> EXEC DBMS_STATS.DROP_STAT_TABLE('DBASCHEMA','STATS_TAB');
(or)
SQL> EXEC DBMS_STATS.DROP_STAT_TABLE('SYSTEM','STATS_TABLE');

Getting top-quality stats
Because Oracle9i schema statistics work best with external system load, we like to schedule a valid sample (using dbms_stats.auto_sample_size) during regular working hours. For example, here we refresh statistics using the "auto" option, which works with the table monitoring facility to only re-analyze those Oracle tables that have experienced more than a 10% change in row content:

begin
   dbms_stats.gather_schema_stats(
      ownname          => 'SCOTT',
      estimate_percent => dbms_stats.auto_sample_size,
      method_opt       => 'for all columns size auto',
      degree           => 7);
end;
/

Optimizer Hints
ALL_ROWS, FIRST_ROWS, FIRST_n_ROWS, APPEND, FULL, INDEX, DYNAMIC_SAMPLING, BYPASS_RECURSIVE_CHECK

Examples:
SELECT /*+ ALL_ROWS */ empid, last_name, sal FROM emp;
SELECT /*+ FIRST_ROWS */ * FROM emp;
SELECT /*+ FIRST_ROWS(20) */ * FROM emp;
SELECT /*+ FIRST_ROWS(100) */ empid, last_name, sal FROM emp;

System Statistics
System statistics describe the system's hardware characteristics, such as I/O and CPU performance and utilization, to the query optimizer. When choosing an execution plan, the optimizer estimates the I/O and CPU resources required for each query. System statistics enable the query optimizer to more accurately estimate I/O and CPU costs, enabling the query optimizer to choose a better execution plan.

When Oracle gathers system statistics, it analyzes system activity in a specified time period (workload statistics)
or simulates a workload (noworkload statistics). The statistics are collected using the
DBMS_STATS.GATHER_SYSTEM_STATS procedure. Oracle highly recommends that you gather system
statistics.
Note: You must have DBA privileges or GATHER_SYSTEM_STATISTICS role to update dictionary system
statistics.
EXEC DBMS_STATS.GATHER_SYSTEM_STATS(interval=>720, stattab=>'mystats', statid=>'OLTP');
EXEC DBMS_STATS.IMPORT_SYSTEM_STATS('mystats', 'OLTP');

Unlike table, index, or column statistics, Oracle does not invalidate already parsed SQL statements when system statistics get updated. All new SQL statements are parsed using the new statistics.
The gathering options tailor the collected statistics to the physical database and its workload: when workload system statistics are gathered, noworkload system statistics will be ignored. Noworkload system statistics are initialized to default values at the first database startup.
Workload Statistics
Workload statistics, introduced in Oracle 9i, gather single and multiblock read times, mbrc, CPU speed
(cpuspeed), maximum system throughput, and average slave throughput. The sreadtim, mreadtim, and mbrc are
computed by comparing the number of physical sequential and random reads between two points in time from
the beginning to the end of a workload. These values are implemented through counters that change when the
buffer cache completes synchronous read requests. Since the counters are in the buffer cache, they include not
only I/O delays, but also waits related to latch contention and task switching. Workload statistics thus depend on
the activity the system had during the workload window. If the system is I/O bound (in terms of both latch contention and I/O throughput), this will be reflected in the statistics and will therefore promote a less I/O-intensive plan after the statistics are used. Furthermore, workload statistics gathering does not generate additional overhead.
In Oracle release 9.2, maximum I/O throughput and average slave throughput were added to set a lower limit for a full table scan (FTS). To gather workload statistics, either:
 Run the dbms_stats.gather_system_stats('start') procedure at the beginning of the workload window, then the dbms_stats.gather_system_stats('stop') procedure at the end of the workload window (see the sketch below), or
 Run dbms_stats.gather_system_stats('interval', interval=>N), where N is the number of minutes after which statistics gathering will be stopped automatically.
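A minimal sketch of the start/stop approach; the length of the window is up to you and should cover representative activity:

EXEC DBMS_STATS.GATHER_SYSTEM_STATS('start');
-- let the representative workload run, e.g. for a few peak hours
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('stop');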
To delete system statistics, run dbms_stats.delete_system_stats(). Workload statistics will be deleted and reset to the default noworkload statistics.
Noworkload Statistics
Noworkload statistics consist of I/O transfer speed, I/O seek time, and CPU speed (cpuspeednw). The major
difference between workload statistics and noworkload statistics lies in the gathering method.
Noworkload statistics gather data by submitting random reads against all data files, while workload statistics uses
counters updated when database activity occurs. ioseektim represents the time it takes to position the disk head
to read data. Its value usually varies from 5 ms to 15 ms, depending on disk rotation speed and the disk or RAID
specification. The I/O transfer speed represents the speed at which one operating system process can read data
from the I/O subsystem. Its value varies greatly, from a few MBs per second to hundreds of MBs per second.
Oracle uses relatively conservative default settings for I/O transfer speed.
In Oracle 10g, Oracle uses noworkload statistics and the CPU cost model by default. The values of noworkload statistics are initialized to defaults at the first instance startup:

ioseektim  = 10 ms
iotrfspeed = 4096 bytes/ms
cpuspeednw = gathered value; varies based on the system
If workload statistics are gathered, noworkload statistics will be ignored and Oracle will use workload statistics
instead. To gather noworkload statistics, run dbms_stats.gather_system_stats() with no arguments. There will be
an overhead on the I/O system during the gathering process of noworkload statistics. The gathering process may
take from a few seconds to several minutes, depending on I/O performance and database size.
The information is analyzed and verified for consistency. In some cases, the value of a noworkload statistic may remain at its default value. In such cases, repeat the statistics gathering process, or set the value manually to match the I/O system's specifications by using the dbms_stats.set_system_stats procedure.
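A hedged sketch of gathering, inspecting, and manually adjusting noworkload statistics follows; the value 12 for ioseektim is purely illustrative, and querying sys.aux_stats$ requires appropriate privileges:

EXEC DBMS_STATS.GATHER_SYSTEM_STATS;  -- with no arguments this gathers noworkload statistics

SELECT pname, pval1 FROM sys.aux_stats$ WHERE sname = 'SYSSTATS_MAIN';

EXEC DBMS_STATS.SET_SYSTEM_STATS('ioseektim', 12);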
Managing Statistics
Restoring Previous Versions of Statistics
Whenever statistics in dictionary are modified, old versions of statistics are saved automatically for future
restoring. Statistics can be restored using RESTORE procedures of DBMS_STATS package. These procedures
use a time stamp as an argument and restore statistics as of that time stamp. This is useful in case newly collected statistics lead to sub-optimal execution plans and the administrator wants to revert to the previous set of statistics. There are dictionary views that display the time of statistics modifications. These views are useful in determining the time stamp to be used for statistics restoration.
 The catalog view DBA_OPTSTAT_OPERATIONS contains a history of statistics operations performed at the schema and database level using DBMS_STATS.
 The *_TAB_STATS_HISTORY views (ALL, DBA, or USER) contain a history of table statistics modifications.
The old statistics are purged automatically at regular intervals based on the statistics history retention setting and
the time of the recent analysis of the system. Retention is configurable using the
ALTER_STATS_HISTORY_RETENTION procedure of DBMS_STATS. The default value is 31 days, which
means that you can restore the optimizer statistics to any time in the last 31 days.
Automatic purging is enabled when the STATISTICS_LEVEL parameter is set to TYPICAL or ALL. If automatic purging is disabled, the old versions of statistics need to be purged manually using the PURGE_STATS procedure.
The other DBMS_STATS procedures related to restoring and purging statistics include:
 PURGE_STATS: This procedure can be used to manually purge old versions beyond a time stamp.
 GET_STATS_HISTORY_RETENTION: This function can be used to get the current statistics history retention value.
 GET_STATS_HISTORY_AVAILABILITY: This function gets the oldest time stamp where statistics history is available. Users cannot restore statistics to a time stamp older than the oldest time stamp.
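Putting these together, a minimal sketch of adjusting retention, restoring a schema to yesterday's statistics, and purging history older than 15 days; the numbers are arbitrary:

EXEC DBMS_STATS.ALTER_STATS_HISTORY_RETENTION(45);
EXEC DBMS_STATS.RESTORE_SCHEMA_STATS(ownname => 'SCOTT', as_of_timestamp => SYSTIMESTAMP - INTERVAL '1' DAY);
EXEC DBMS_STATS.PURGE_STATS(SYSTIMESTAMP - INTERVAL '15' DAY);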
When restoring previous versions of statistics, the following limitations apply:

 RESTORE procedures cannot restore user-defined statistics.
 Old versions of statistics are not stored when the ANALYZE command has been used for collecting statistics.
Note: If you need to remove all rows from a table when using DBMS_STATS, use TRUNCATE instead of dropping and re-creating the same table. When a table is dropped, workload information used by the auto-histogram gathering feature and saved statistics history used by the RESTORE_*_STATS procedures will be lost. Without this data, these features will not function properly.
Restoring Statistics versus Importing or Exporting Statistics
The functionality for restoring statistics is similar in some respects to the functionality of importing and exporting statistics. In general, you should use the restore capability when:
 You want to recover older versions of the statistics. For example, to restore the optimizer behaviour to an earlier date.
 You want the database to manage the retention and purging of statistics histories.
You should use the EXPORT/IMPORT_*_STATS procedures when:

 You want to experiment with multiple sets of statistics and change the values back and forth.
 You want to move the statistics from one database to another database. For example, moving statistics from a production system to a test system.
 You want to preserve a known set of statistics for a longer period of time than the desired retention date for restoring statistics.

Locking Statistics for a Table or Schema
Statistics for a table or schema can be locked. Once statistics are locked, no modifications can be made to those
statistics until the statistics have been unlocked. These locking procedures are useful in a static environment in
which you want to guarantee that the statistics never change.
The DBMS_STATS package provides two procedures for locking and two procedures for unlocking statistics:
 LOCK_SCHEMA_STATS
 LOCK_TABLE_STATS
 UNLOCK_SCHEMA_STATS
 UNLOCK_TABLE_STATS

EXEC DBMS_STATS.LOCK_SCHEMA_STATS('AP');
EXEC DBMS_STATS.UNLOCK_SCHEMA_STATS('AP');
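The same can be done per table, and the lock is visible in the dictionary; a small sketch, in which the schema and table names are illustrative:

EXEC DBMS_STATS.LOCK_TABLE_STATS('SCOTT', 'EMP');

SELECT table_name, stattype_locked
FROM   dba_tab_statistics
WHERE  owner = 'SCOTT' AND table_name = 'EMP';

EXEC DBMS_STATS.UNLOCK_TABLE_STATS('SCOTT', 'EMP');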

Setting Statistics
We can set table, column, index, and system statistics using the SET_*_STATISTICS procedures. Setting statistics in this manner is not recommended, because inaccurate or inconsistent statistics can lead to poor performance.
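For completeness, a hedged sketch of manually setting table statistics; all figures here are invented for illustration:

EXEC DBMS_STATS.SET_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP', numrows => 1000000, numblks => 10000, avgrlen => 120);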
Dynamic Sampling
The purpose of dynamic sampling is to improve server performance by determining more accurate estimates for
predicate selectivity and statistics for tables and indexes. The statistics for tables and indexes include table block
counts, applicable index block counts, table cardinalities, and relevant join column statistics. These more accurate estimates allow the optimizer to produce better performing plans.

You can use dynamic sampling to:

 Estimate single-table predicate selectivities when collected statistics cannot be used or are likely to lead to significant errors in estimation.
 Estimate statistics for tables and relevant indexes without statistics.
 Estimate statistics for tables and relevant indexes whose statistics are too out of date to trust.

This dynamic sampling feature is controlled by the OPTIMIZER_DYNAMIC_SAMPLING parameter. For dynamic
sampling to automatically gather the necessary statistics, this parameter should be set to a value of 2 (the default) or
higher.
The primary performance attribute is compile time. Oracle determines at compile time whether a query would
benefit from dynamic sampling. If so, a recursive SQL statement is issued to scan a small random sample of the
table's blocks, and to apply the relevant single table predicates to estimate predicate selectivities. The sample
cardinality can also be used, in some cases, to estimate table cardinality. Any relevant column and index

statistics are also collected. Depending on the value of the OPTIMIZER_DYNAMIC_SAMPLING initialization parameter, a certain number of blocks are read by the dynamic sampling query.
For a query that normally completes quickly (in less than a few seconds), we will not want to incur the cost of
dynamic sampling. However, dynamic sampling can be beneficial under any of the following conditions:
 A better plan can be found using dynamic sampling.
 The sampling time is a small fraction of total execution time for the query.
 The query will be executed many times.

Dynamic sampling can be applied to a subset of a single table's predicates and combined with standard selectivity estimates of predicates for which dynamic sampling is not done.
We control dynamic sampling with the OPTIMIZER_DYNAMIC_SAMPLING parameter, which can be set to a value from 0 to 10. The default is 2.

 A value of 0 means dynamic sampling will not be done.
 Increasing the value of the parameter results in more aggressive application of dynamic sampling, in terms of both the type of tables sampled (analyzed or unanalyzed) and the amount of I/O spent on sampling.
Dynamic sampling is repeatable if no rows have been inserted, deleted, or updated in the table being sampled.
The parameter OPTIMIZER_FEATURES_ENABLE turns off dynamic sampling if set to a version prior to 9.2.0.
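Dynamic sampling can also be requested per statement or per session; a minimal sketch, in which the level 4 and the table alias are illustrative:

SELECT /*+ DYNAMIC_SAMPLING(e 4) */ * FROM emp e WHERE sal > 3000;

ALTER SESSION SET optimizer_dynamic_sampling = 4;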
Dynamic Sampling Levels
The sampling levels are as follows if the dynamic sampling level used is from a cursor hint or from the OPTIMIZER_DYNAMIC_SAMPLING initialization parameter:

 Level 0: Do not use dynamic sampling.
 Level 1: Sample all tables that have not been analyzed if the following criteria are met: (1) there is at least one unanalyzed table in the query; (2) this unanalyzed table is joined to another table or appears in a subquery or non-mergeable view; (3) this unanalyzed table has no indexes; (4) this unanalyzed table has more blocks than the number of blocks that would be used for dynamic sampling of this table. The number of blocks sampled is the default number of dynamic sampling blocks (32).
 Level 2: Apply dynamic sampling to all unanalyzed tables. The number of blocks sampled is two times the default number of dynamic sampling blocks.
 Level 3: Apply dynamic sampling to all tables that meet Level 2 criteria, plus all tables for which standard selectivity estimation used a guess for some predicate that is a potential dynamic sampling predicate. The number of blocks sampled is the default number of dynamic sampling blocks. For unanalyzed tables, the number of blocks sampled is two times the default number of dynamic sampling blocks.
 Level 4: Apply dynamic sampling to all tables that meet Level 3 criteria, plus all tables that have single-table predicates that reference 2 or more columns. The number of blocks sampled is the default number of dynamic sampling blocks. For unanalyzed tables, the number of blocks sampled is two times the default number of dynamic sampling blocks.
 Levels 5, 6, 7, 8, and 9: Apply dynamic sampling to all tables that meet the previous level criteria using 2, 4, 8, 32, or 128 times the default number of dynamic sampling blocks respectively.
 Level 10: Apply dynamic sampling to all tables that meet the Level 9 criteria using all blocks in the table.

The sampling levels are as follows if the dynamic sampling level for a table is set using the DYNAMIC_SAMPLING optimizer hint:
 Level 0: Do not use dynamic sampling.
 Level 1: The number of blocks sampled is the default number of dynamic sampling blocks (32).
 Levels 2, 3, 4, 5, 6, 7, 8, and 9: The number of blocks sampled is 2, 4, 8, 16, 32, 64, 128, or 256 times the default number of dynamic sampling blocks respectively.
 Level 10: Read all blocks in the table.

Handling Missing Statistics
When Oracle encounters a table with missing statistics, Oracle dynamically gathers the necessary statistics needed by the optimizer. However, for certain types of tables Oracle does not perform dynamic sampling. These include remote tables and external tables. In those cases, and also when dynamic sampling has been disabled, the optimizer uses default values for its statistics.

Default table values when statistics are missing:

Table Statistic            Default Value Used by Optimizer
Cardinality                num_of_blocks * (block_size - cache_layer) / avg_row_len
Average row length         100 bytes
Number of blocks           100 or actual value based on the extent map
Remote cardinality         2000 rows
Remote average row length  100 bytes

Default index values when statistics are missing:

Index Statistic    Default Value Used by Optimizer
Levels             1
Leaf blocks        25
Leaf blocks/key    1
Data blocks/key    1
Distinct keys      100
Clustering factor  800

Viewing Statistics
Statistics on Tables, Indexes and Columns
Statistics on tables, indexes, and columns are stored in the data dictionary. To view statistics in the data dictionary, query the appropriate data dictionary view (USER, ALL, or DBA). These views include the following (each DBA_* view listed also has ALL_* and USER_* counterparts):

 DBA_TABLES
 DBA_OBJECT_TABLES
 DBA_TAB_STATISTICS
 DBA_TAB_COL_STATISTICS
 DBA_TAB_HISTOGRAMS
 DBA_INDEXES
 DBA_IND_STATISTICS
 DBA_CLUSTERS
 DBA_TAB_PARTITIONS
 DBA_TAB_SUBPARTITIONS
 DBA_IND_PARTITIONS
 DBA_IND_SUBPARTITIONS
 DBA_PART_COL_STATISTICS
 DBA_PART_HISTOGRAMS
 DBA_SUBPART_COL_STATISTICS
 DBA_SUBPART_HISTOGRAMS

Viewing Histograms
Column statistics may be stored as histograms. These histograms provide accurate estimates of the distribution
of column data. Histograms provide improved selectivity estimates in the presence of data skew, resulting in optimal execution plans with non-uniform data distributions. Oracle uses two types of histograms for column statistics: height-balanced histograms and frequency histograms. The type of histogram is stored in the HISTOGRAM column of the *TAB_COL_STATISTICS views (USER and DBA). This column can have values of HEIGHT BALANCED, FREQUENCY, or NONE.

Height-Balanced Histograms
In a height-balanced histogram, the column values are divided into bands so that each band contains approximately the same number of rows. The useful information that the histogram provides is where in the range of values the endpoints fall. Height-balanced histograms can be viewed using the *TAB_HISTOGRAMS tables.

Example for viewing height-balanced histogram statistics:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'OE', TABNAME => 'INVENTORIES',
    METHOD_OPT => 'FOR COLUMNS SIZE 10 QUANTITY_ON_HAND');
END;
/

SELECT COLUMN_NAME, NUM_DISTINCT, NUM_BUCKETS, HISTOGRAM
FROM   USER_TAB_COL_STATISTICS
WHERE  TABLE_NAME = 'INVENTORIES' AND COLUMN_NAME = 'QUANTITY_ON_HAND';

COLUMN_NAME          NUM_DISTINCT NUM_BUCKETS HISTOGRAM
-------------------- ------------ ----------- ---------------
QUANTITY_ON_HAND              237          10 HEIGHT BALANCED

SELECT ENDPOINT_NUMBER, ENDPOINT_VALUE
FROM   USER_HISTOGRAMS
WHERE  TABLE_NAME = 'INVENTORIES' AND COLUMN_NAME = 'QUANTITY_ON_HAND'
ORDER  BY ENDPOINT_NUMBER;

ENDPOINT_NUMBER ENDPOINT_VALUE
--------------- --------------
              0              0
              1             27
              2             42
              3             57
              4             74
              5             98
              6            123
              7            149
              8            175
              9            202
             10            353

In the query output, one row corresponds to one bucket in the histogram.

Frequency Histograms
In a frequency histogram, each value of the column corresponds to a single bucket of the histogram. Each bucket contains the number of occurrences of that single value. Frequency histograms are automatically created instead of height-balanced histograms when the number of distinct values is less than or equal to the number of histogram buckets specified. Frequency histograms can be viewed using the *TAB_HISTOGRAMS tables.

Example for viewing frequency histogram statistics:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'OE', TABNAME => 'INVENTORIES',
    METHOD_OPT => 'FOR COLUMNS SIZE 20 WAREHOUSE_ID');
END;
/

SELECT COLUMN_NAME, NUM_DISTINCT, NUM_BUCKETS, HISTOGRAM
FROM   USER_TAB_COL_STATISTICS
WHERE  TABLE_NAME = 'INVENTORIES' AND COLUMN_NAME = 'WAREHOUSE_ID';

COLUMN_NAME          NUM_DISTINCT NUM_BUCKETS HISTOGRAM
-------------------- ------------ ----------- ---------
WAREHOUSE_ID                    9           9 FREQUENCY

SELECT ENDPOINT_NUMBER, ENDPOINT_VALUE
FROM   USER_HISTOGRAMS
WHERE  TABLE_NAME = 'INVENTORIES' AND COLUMN_NAME = 'WAREHOUSE_ID'
ORDER  BY ENDPOINT_NUMBER;

ENDPOINT_NUMBER ENDPOINT_VALUE
--------------- --------------
             36              1
            213              2
            261              3
            370              4
            484              5
            692              6
            798              7
            984              8
           1112              9

Issues
 Exclude dataload tables from your regular stats gathering, unless you know they will be full at the time that stats are gathered.
 Gathering stats for the SYS schema can make the system run slower, not faster.
 Gathering statistics can be very resource intensive for the server, so avoid peak workload times or gather stale stats only.
 Even if scheduled, it may be necessary to gather fresh statistics after database maintenance or large data loads.
 Statistics store information about item counts and relative frequencies: things that will let the optimizer "guess" at how many rows will match a given criteria. When it guesses wrong, the optimizer can pick a very suboptimal query plan.
 If a table goes from 1 row to 200 rows, that's a significant change. When a table goes from 100,000 rows to 150,000 rows, that's not a terribly significant change. When a table goes from 1000 rows all with identical values in commonly-queried column X to 1000 rows with nearly unique values in column X, that's a significant change.

Startup/Shutdown Options in Oracle Database
The SYSDBA or SYSOPER (or SYSASM in an ASM instance) system privilege is required to issue the STARTUP and SHUTDOWN commands.

Startup options

STARTUP [FORCE] [RESTRICT] [NOMOUNT] [MIGRATE] [QUIET] [PFILE=file_name | SPFILE=file_name]
        [MOUNT [EXCLUSIVE] database_name |
         OPEN [READ {ONLY | WRITE [RECOVER]} | RECOVER] database_name]

STARTUP
STARTUP OPEN
STARTUP NOMOUNT
STARTUP MOUNT (or STARTUP MOUNT EXCLUSIVE or STARTUP MOUNT SHARED)
STARTUP OPEN database_name
STARTUP PFILE='/path/'
STARTUP SPFILE='/path/'
STARTUP PARALLEL
STARTUP RESTRICT
STARTUP RESTRICT MOUNT [PFILE='/path/']
STARTUP UPGRADE
STARTUP DOWNGRADE
STARTUP MIGRATE
STARTUP FORCE (= SHUTDOWN IMMEDIATE + STARTUP)
STARTUP FORCE PFILE='/path/'
STARTUP FORCE RESTRICT PFILE='/path/' OPEN [database_name]
STARTUP OPEN READ ONLY
STARTUP OPEN READ WRITE
STARTUP OPEN READ WRITE RECOVER
STARTUP OPEN RECOVER

NOMOUNT -- Background processes are started upon reading the parameter file (initSID.ora) or server parameter file (spfileSID.ora) at $ORACLE_HOME/dbs, and the shared memory and semaphores are allocated.
MOUNT -- Control files are read and opened.
OPEN -- Datafiles and redolog files are opened.

Shutdown options

SHUTDOWN {NORMAL | TRANSACTIONAL [LOCAL] | IMMEDIATE | ABORT}

SHU
SHUT
SHUTDOWN
SHUTDOWN NORMAL
SHUTDOWN TRANSACTIONAL
SHUTDOWN TRANSACTIONAL LOCAL   -- in RAC environment
SHUTDOWN IMMEDIATE
SHUTDOWN ABORT

Misc

alter database mount;
alter database mount standby database;
alter database mount exclusive;
alter database open;
alter database open read only;
alter database open read write;
alter database close;
alter database dismount;

SQL*Loader in Oracle
SQL*Loader (sqlldr) is the utility to use for high performance data loads. SQL*Loader is a bulk loader utility used for moving data from external files into the Oracle database. It has a powerful data parsing engine which puts little limitation on the format of the data in the datafile. The data can be loaded from any flat file and inserted into the Oracle database. SQL*Loader supports various load formats, selective loading, and multi-table loads. SQL*Loader reads the data file(s) and a description of the data which is defined in the control file.

The Control File
The SQL*Loader control file, a flat text file, contains information that describes how the data will be loaded. It contains the table name, column data types, field delimiters, conditions to load, records to skip, bad file name, discard file name, and SQL functions to be applied, and it may contain data or the infile name. Using this information and any additional specified parameters (either on the command line or in the PARFILE), SQL*Loader loads the data into the database. During processing, SQL*Loader writes messages to the log file, bad rows to the bad file, and discarded rows to the discard file.

The control file can have three sections:
1. The first section contains session-wide information, e.g.:
* global options such as bindsize, rows, records to skip, etc.
* INFILE clauses to specify where the input data is located
* data character set specification
2. The second section consists of one or more "INTO TABLE" blocks. Each of these blocks contains information about the table into which the data is to be loaded, such as the table name and the columns of the table.
3. The third section is optional and, if present, contains input data.

Some control file syntax considerations are:
* The syntax is free-format (statements can extend over multiple lines).
* It is case insensitive; however, strings enclosed in single or double quotation marks are taken literally, including case.
* In control file syntax, comments extend from the two hyphens (--), which mark the beginning of the comment, to the end of the line. Note that the optional third section of the control file is interpreted as data rather than as control file syntax; consequently, comments in this section are not supported.
* Certain words have special meaning to SQL*Loader and are therefore reserved. If a particular literal or a database object name (column name, table name, etc.) is also a reserved word (keyword), it must be enclosed in single or double quotation marks.

Options in SQL*Loader while loading the data:
(a) INSERT: Specifies that you are loading into an empty table. SQL*Loader will abort the load if the table contains data to start with. This is the default.
(b) APPEND: Use this if you want to load the data into a table which already contains rows.
(c) REPLACE: Specifies that you want to replace the data in the table before loading. SQL*Loader will 'DELETE' all the existing records and replace them with the new ones.
(d) TRUNCATE: This is the same as 'REPLACE', but SQL*Loader will use the 'TRUNCATE' command instead of the 'DELETE' command.

This sample control file will load an external data file containing delimited data:

load data
 infile 'c:\data\emp.csv'
 into table emp                  -- here INSERT is the default
 fields terminated by "," optionally enclosed by '"'
 (empno, empname, sal, deptno)

The emp.csv file may look like this:

10001,"Scott Tiger",1000,40
10002,"Frank Naude",5000,20

Another sample control file, with in-line data formatted as fixed length records. "infile *" means the data is within the control file; otherwise we've to specify the file name and location. The trick is to specify "*" as the name of the data file, and use BEGINDATA to start the data section in the control file:

load data
 infile *
 replace
 into table departments
 (dept     position (02:05) char(4),
  deptname position (08:27) char(20))
begindata
COSC  COMPUTER SCIENCE
ENGL  ENGLISH LITERATURE
MATH  MATHEMATICS
POLY  POLITICAL SCIENCE

Loading variable length (delimited) data
In the first example we will see how delimited (variable length) data can be loaded into Oracle.

Example 1:
LOAD DATA
INFILE *
CONTINUEIF THIS (1) = '*'
INTO TABLE delimited_data
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(data1 "UPPER(:data1)",
 data2 "TRIM(:data2)",
 data3 "DECODE(:data2, 'hello', 'goodbye', :data1)")
BEGINDATA
*11111,AAAAAAAAAA,hello
*22222,BBBBBBBBBB,Testttt

NOTE: The default data type in SQL*Loader is CHAR(255). To load character fields longer than 255 characters, code the type and length in your control file. By doing this, Oracle will allocate a bigger buffer to hold the entire column, thus eliminating potential "Field in data file exceeds maximum length" errors, e.g.:

resume char(4000),

Example 2:
LOAD DATA
INFILE 'table.dat'
INTO TABLE table_name
FIELDS TERMINATED BY ','
TRAILING NULLCOLS
(COL1 DECIMAL EXTERNAL NULLIF (COL1=BLANKS),
 COL2 DECIMAL EXTERNAL NULLIF (COL2=BLANKS),
 COL3 CHAR NULLIF (COL3=BLANKS),
 COL4 CHAR NULLIF (COL4=BLANKS),
 COL5 CHAR NULLIF (COL5=BLANKS),
 COL6 DATE "MM-DD-YYYY" NULLIF (COL6=BLANKS))

Numeric data should be specified as type 'external'; otherwise, it is read as characters rather than as binary data. Decimal numbers need not contain a decimal point; they are assumed to be integers if not specified. The standard format for date fields is DD-MON-YY.

Loading fixed length (positional) data
The control file can also specify that records are in fixed format. A file is in fixed record format when all records in a datafile are the same length. The control file specifies the specific starting and ending byte location of each field. This format is harder to create and less flexible but can yield performance benefits.

Example 1:
LOAD DATA
INFILE *
INTO TABLE positional_data
(data1 POSITION(1:5),
 data2 POSITION(6:15))
BEGINDATA
11111AAAAAAAAAA
22222BBBBBBBBBB

In the above example, position(01:05) will give the 1st to the 5th character (11111 and 22222).

Example 2:
LOAD DATA
INFILE 'table.dat'
INTO TABLE table_name
(COL1 POSITION(1:4)     INTEGER EXTERNAL,
 COL2 POSITION(6:9)     INTEGER EXTERNAL,
 COL3 POSITION(11:46)   CHAR,
 COL4 POSITION(48:83)   CHAR,
 COL5 POSITION(85:120)  CHAR,
 COL6 POSITION(122:130) DATE "MMDDYYYY")
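Assuming one of the control files above has been saved as emp.ctl (the file names here are illustrative), a typical invocation looks like:

$ sqlldr scott/tiger control=emp.ctl log=emp.log bad=emp.bad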
Options
Usage: sqlldr keyword=value [,keyword=value,...]

Invoke the utility without arguments to get a list of available parameters:

$ sqlldr

To check which options are available in any release of SQL*Loader use this command:

$ sqlldr help=y

Look at the following example:

$ sqlldr username/password@server control=loader.ctl

SQL*Loader provides the following options, which can be specified either on the command line or within a parameter file:

userid – The Oracle username and password.
control – The name of the control file. This file specifies the format of the data to be loaded.
log – The name of the file used by SQL*Loader to log results. The log file contains information about the SQL*Loader execution, and should be viewed after each SQL*Loader job is completed. Especially interesting is the summary information at the bottom of the log, including CPU time and elapsed time. It has details like the no. of lines read, no. of lines loaded, no. of bad lines, no. of rejected lines (full data will be in the discard file), and the actual time taken to load the data.
bad – A file that is created when at least one record from the input file is rejected. The rejected data records are placed in this file. A record could be rejected for many reasons, including a non-unique key or a required column being null.
data – The name of the file that contains the data to load.
discard – The name of the file that contains the discarded rows. Discarded rows are those that fail the WHEN clause condition when selectively loading records.
discardmax – [ALL] The maximum number of discards to allow.
skip – [0] Allows the skipping of the specified number of logical records.
load – [ALL] The number of logical records to load.
errors – [50] The number of errors to allow on the load. SQL*Loader tolerates this many errors (50 by default); after this limit, it aborts the loading and rolls back the already inserted records.
rows – [64] The number of rows to load before a commit is issued (in conventional path); that is, the number of rows to read from the data file before saving the data in the datafiles. Committing less frequently (a higher value of ROWS) will improve the performance of SQL*Loader.
bindsize – [256000] The size of the conventional path bind array in bytes. A larger bindsize will improve the performance of SQL*Loader.
silent – Suppress messages/errors during data load. A value of ALL will suppress all load messages. Other

options include DISCARDS, ERRORS, FEEDBACK, HEADER, and PARTITIONS.
direct – [FALSE] Specifies whether to use a direct path load or conventional. A direct path load (DIRECT=TRUE) will load faster than conventional.
parallel – [FALSE] Do a parallel load. Available with direct path data loads only; this option allows multiple SQL*Loader jobs to execute concurrently and will improve the performance.
file – Used only with parallel loads. This parameter specifies the file to allocate extents from.
skip_unusable_indexes – [FALSE] Determines whether SQL*Loader skips the building of indexes or index partitions that are in an unusable state.
skip_index_maintenance – [FALSE] Stops index maintenance for direct path loads only. Does not maintain indexes; marks affected indexes as unusable.
commit_discontinued – [FALSE] Commit loaded rows when the load is discontinued. (This is from Oracle 11g release 2.)
readsize – [1048576] The size of the read buffer used by SQL*Loader when reading data from the input file. This value should match that of bindsize.
external_table – [NOT_USED] Determines whether or not any data will be loaded using external tables. The other valid options include GENERATE_ONLY and EXECUTE. This is from 10g.
columnarrayrows – [5000] Specifies the number of rows to allocate for direct path column arrays.
streamsize – [256000] Specifies the size of the direct path stream buffer in bytes.
multithreading – Use multithreading in direct path. The default is TRUE on multiple CPU systems and FALSE on single CPU systems.
resumable – [FALSE] Enables and disables resumable space allocation. When "TRUE", the parameters resumable_name and resumable_timeout are utilized.
resumable_name – User defined string that helps identify a resumable statement that has been suspended. This parameter is ignored unless resumable = TRUE.
resumable_timeout – [7200 seconds] The time period in which an error must be fixed. This parameter is ignored unless resumable = TRUE.
date_cache – [1000] Size (in entries) of the date conversion cache.
no_index_errors – [FALSE] Abort the load on any index errors. (This is from Oracle 11g release 2.)
parfile – The name of the file that contains the parameter options for SQL*Loader.
_synchro – [Y] Internal testing.
_display_exitcode – Display exit code for SQL*Loader execution.

_parallel_lob_load – Allow direct path parallel load of lobs. This is from Oracle 11g.
_testing_ncs_to_clob – Test non character scalar to character lob conversion. This is from Oracle 11g.
_testing_server_ca_rows – Test with non default direct path server column array rows. This is from Oracle 11g.
_testing_server_max_rp_ccnt – Test with non default direct path max row piece columns. This is from Oracle 11g.
_testing_server_slot_size – Test with non default direct path server slot buffer size. This is from Oracle 11g.
_trace_events – Enable tracing during run by specifying events and levels (SQLLDR_LOWEST, ...). This is from Oracle 11g.

Note: values within parentheses are the default values.

PLEASE NOTE: Command-line parameters may be specified either by position or by keywords. An example of the former case is 'sqlldr scott/tiger foo'; an example of the latter is 'sqlldr control=foo userid=scott/tiger'. One may specify parameters by position before but not after parameters specified by keywords. For example, 'sqlldr scott/tiger control=foo logfile=log' is allowed, but 'sqlldr scott/tiger control=foo log' is not, even though the position of the parameter 'log' is correct.

Miscellaneous
1. Oracle does not supply any data unload utilities (like a "SQL*Unloader") to get the data from the database. You can use SQL*Plus to select and format your data and then spool it to a file, or you have to use a third party tool.
2. To load MS-Excel data into Oracle, open the MS-Excel spreadsheet and save it as a CSV (Comma Separated Values) file. This file can now be copied to the Oracle machine and loaded using the SQL*Loader utility.
3. Skipping header records while loading: we can skip unwanted header records or continue an interrupted load (e.g. after running out of space) by specifying the "SKIP=n" keyword. "n" specifies the number of logical rows to skip. However, this only applies to the conventional load path (and not to direct path loads). Look at this example:

OPTIONS (SKIP=5)
LOAD DATA
INFILE *
INTO TABLE load_positional_data
(data1 POSITION(1:5),
 data2 POSITION(6:15))
BEGINDATA
11111AAAAAAAAAA
22222BBBBBBBBBB

$ sqlldr userid=id/passwd control=control_file_name.ctl skip=4

If you are continuing a multiple table direct path load, you may need to use the CONTINUE_LOAD clause instead of the SKIP parameter. CONTINUE_LOAD allows you to specify a different number of rows to skip for each of the tables you are loading.
4. Modifying data as the database gets loaded: data can be modified as it loads into the Oracle Database. One can also populate columns with static or derived values. Here are some examples:

LOAD DATA
INFILE *
INTO TABLE modified_data
(rec_no      "my_db_sequence.nextval",
 region      CONSTANT '31',
 time_loaded "to_char(SYSDATE, 'HH24:MI')",
 data1       POSITION(1:5)   ":data1/100",
 data2       POSITION(6:15)  "upper(:data2)",
 data3       POSITION(16:22) "to_date(:data3, 'YYMMDD')")
BEGINDATA
11111AAAAAAAAAA991201
22222BBBBBBBBBB990112

LOAD DATA
INFILE 'mail_orders.txt'
BADFILE 'bad_orders.txt'
APPEND
INTO TABLE mailing_list
FIELDS TERMINATED BY ","
(addr,
 city,
 state,
 zipcode,
 mailing_addr  "decode(:mailing_addr, null, :addr, :mailing_addr)",
 mailing_city  "decode(:mailing_city, null, :city, :mailing_city)",
 mailing_state,
 move_date     "substr(:move_date, 3, 2) || substr(:move_date, 7, 2)")

5. Loading from multiple input files: one can load from multiple input files provided they use the same record format, by repeating the INFILE clause. Here is an example:

LOAD DATA
INFILE file1.dat
INFILE file2.dat
INFILE file3.dat
APPEND
INTO TABLE emp
(empno  POSITION(1:4)   INTEGER EXTERNAL,
 ename  POSITION(6:15)  CHAR,
 deptno POSITION(17:18) CHAR,
 mgr    POSITION(20:23) INTEGER EXTERNAL)

6. Loading into multiple tables: one can also specify multiple "INTO TABLE" clauses in the SQL*Loader control file to load into multiple tables. Look at the following example:

LOAD DATA
INFILE *
INTO TABLE tab1 WHEN tab = 'tab1'
(tab  FILLER CHAR(4),
 col1 INTEGER)
INTO TABLE tab2 WHEN tab = 'tab2'
(tab  FILLER POSITION(1:4),
 col1 INTEGER)
BEGINDATA
tab1|1
tab1|2
tab2|2
tab3|3

The "tab" field is marked as FILLER as we don't want to load it. Note the use of "POSITION" on the second routing value (tab = 'tab2'). By default, field scanning doesn't start over from the beginning of the record for new INTO TABLE clauses; instead, scanning continues where it left off. POSITION is needed to reset the pointer to the beginning of the record again.

Another example:

LOAD DATA
INFILE 'mydata.dat'
REPLACE
INTO TABLE emp WHEN empno != ' '
(empno  POSITION(1:4)   INTEGER EXTERNAL,
 ename  POSITION(6:15)  CHAR,
 deptno POSITION(17:18) CHAR,
 mgr    POSITION(20:23) INTEGER EXTERNAL)
INTO TABLE proj WHEN projno != ' '
(projno POSITION(25:27) INTEGER EXTERNAL,
 empno  POSITION(1:4)   INTEGER EXTERNAL)

7. In SQL*Loader, one cannot COMMIT only at the end of the load file, but by setting the ROWS parameter to a large value, committing can be reduced. Make sure you have big rollback segments ready when you use a high value for ROWS.

8. Selectively loading filtered records
Look at this example; (01) is the first character, (30:37) are characters 30 to 37:

LOAD DATA
INFILE 'mydata.dat'
BADFILE 'mydata.bad'
DISCARDFILE 'mydata.dis'
APPEND
INTO TABLE my_selective_table
WHEN (01) <> 'H' and (01) <> 'T' and (30:37) = '20031217'
(region      CONSTANT '31',
 service_key POSITION(01:11) INTEGER EXTERNAL,
 call_b_no   POSITION(12:29) CHAR)

NOTE: SQL*Loader does not allow the use of OR in the WHEN clause. You can only use AND as in the example above! To work around this problem, code multiple "INTO TABLE ... WHEN" clauses. Here is an example:

LOAD DATA
INFILE 'mydata.dat'
BADFILE 'mydata.bad'
DISCARDFILE 'mydata.dis'
APPEND
INTO TABLE my_selective_table
WHEN (01) <> 'H' and (01) <> 'T'
(region      CONSTANT '31',
 service_key POSITION(01:11) INTEGER EXTERNAL,
 call_b_no   POSITION(12:29) CHAR)
INTO TABLE my_selective_table
WHEN (30:37) = '20031217'
(region      CONSTANT '31',
 service_key POSITION(01:11) INTEGER EXTERNAL,
 call_b_no   POSITION(12:29) CHAR)

9. Skipping certain columns while loading data
One cannot use POSITION(x:y) with delimited data. Luckily, from Oracle 8i one can specify FILLER columns. FILLER columns are used to skip columns/fields in the load file, ignoring fields that one does not want. Look at this example:

LOAD DATA
TRUNCATE
INTO TABLE T1
FIELDS TERMINATED BY ','
(field1,
 field2 FILLER,
 field3)

BOUNDFILLER (available with Oracle 9i and above) can be used if the skipped column's value will be required later again. Here is an example:

LOAD DATA
INFILE *
TRUNCATE
INTO TABLE sometable
FIELDS TERMINATED BY "," trailing nullcols
(c1,
 field2 BOUNDFILLER,
 field3 BOUNDFILLER,
 field4 BOUNDFILLER,
 field5 BOUNDFILLER,
 c2 ":field2 || :field3",
 c3 ":field4 + :field5")

10. Loading images, sound clips and documents
SQL*Loader can load data from a "primary data file", an SDF (Secondary Data File - for loading nested tables and VARRAYs) or a LOBFILE. The LOBFILE method provides an easy way to load documents, photos, images and audio clips into BLOB and CLOB columns. Given the following table:

CREATE TABLE image_table (
 image_id   NUMBER(5),
 file_name  VARCHAR2(30),
 image_data BLOB);

Control File:

LOAD DATA
INFILE *
INTO TABLE image_table
REPLACE
FIELDS TERMINATED BY ','
(image_id   INTEGER(5),
 file_name  CHAR(30),
 image_data LOBFILE (file_name) TERMINATED BY EOF)
BEGINDATA
001,image1.gif
002,image2.jpg
003,image3.bmp

11. Loading EBCDIC data

SQL*Loader is character set aware (you can specify the character set of the data). The following example shows the SQL*Loader control file to load a fixed length EBCDIC record into the Oracle Database. Specify the character set WE8EBCDIC500 for the EBCDIC data:

LOAD DATA
CHARACTERSET WE8EBCDIC500
INFILE data.ebc "fix 86 buffers 1024"
BADFILE 'data.bad'
DISCARDFILE 'data.dsc'
REPLACE
INTO TABLE temp_data
(field1 POSITION(1:4)   INTEGER EXTERNAL,
 field2 POSITION(5:6)   INTEGER EXTERNAL,
 field3 POSITION(7:12)  INTEGER EXTERNAL,
 field4 POSITION(13:42) CHAR,
 field5 POSITION(43:72) CHAR,
 field6 POSITION(73:73) INTEGER EXTERNAL)

12. Reading multiple rows per record
If the data is in fixed format, the number of rows of data to be read for each record can be specified using the concatenate clause, e.g. concatenate 3 reads 3 rows of data for every record. The data rows are literally concatenated together, so that positions 81 to 160 are used to specify column positions for data in the second row (assuming that the record length of the file is 80). You should also specify the record length (240 in this case) with a reclen clause when concatenating data, that is: reclen 240

When using fixed format data, the continuation character may be in the last column of the data record. For example, continueif last = '+' specifies that if the last non-blank character in the line is '+', the next line is a continuation of the current line. This method does not work with free format data because the continuation character is read as the value of the next field. The continueif clause may specify more than one character, e.g. continueif this (1:4) = 'two' specifies that if the first four characters of the current line are 'two', the next line is a continuation of this line. In this case the first four columns of each record are assumed to contain only a continuation indicator and are not read as data.

Common errors in SQL*Loader are:
(i) Foreign key is not found.
(ii) The length of the column in the infile/datafile may be bigger than the target table column size.
(iii) Mismatching of datatypes.

How to improve SQL*Loader Performance
SQL*Loader is flexible and offers many options that should be considered to maximize the speed of data loads.

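To make the multi-row mechanics concrete, here is a minimal control-file sketch following the description above; the file, table and column names are hypothetical, and the reclen clause is written exactly as the text prescribes:

LOAD DATA
INFILE 'data.dat' "fix 80"
CONCATENATE 3
RECLEN 240
INTO TABLE t_multi
( col1 POSITION(1:10)    CHAR,  -- field taken from the first physical row
  col2 POSITION(81:90)   CHAR,  -- field taken from the second physical row (positions 81-160)
  col3 POSITION(161:170) CHAR ) -- field taken from the third physical row (positions 161-240)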
Common errors in SQL*Loader are: (i) a foreign key is not found; (ii) the length of a column in the infile/datafile is bigger than the target table column size; (iii) mismatching of datatypes.

How to improve SQL*Loader Performance

SQL*Loader is flexible and offers many options that should be considered to maximize the speed of data loads.

1. Use Direct Path Loads. The conventional path loader essentially loads the data by using standard INSERT statements. The direct path loader (DIRECT=TRUE) loads directly into the Oracle datafiles and creates blocks in Oracle database block format. This will effectively bypass most of the RDBMS processing, and the fact that SQL is not being issued makes the entire process much less taxing on the database. There are certain cases in which direct path loads cannot be used (clustered tables). To prepare the database for direct path loads, the script $ORACLE_HOME/rdbms/admin/catldr.sql must be executed (no need to run this if you ran catalog.sql at the time of database creation).

Several restrictions are associated with direct path loading:
• Tables and associated indexes will be locked during the load.
• SQL*Net access is available only under limited circumstances.
• Clustered tables cannot be loaded.
• Loaded data will not be replicated.
• Constraints that depend on other tables are disabled during the load and applied when the load is completed.
• SQL functions are not available, and their use will slow performance. One cannot always use SQL strings for column processing in the control file (something like this will probably fail: col1 date "ddmonyyyy" "substr(:period,1,9)").

Differences between direct path load and conventional path load:

Direct Path Load                                              | Conventional Path Load
The direct path loader (DIRECT=TRUE) bypasses much of the     | The conventional path loader essentially loads the
overhead involved and loads directly into the Oracle          | data by using standard INSERT statements
datafiles                                                     |
Loads directly to datafile                                    | Loads via buffers
No redolog                                                    | Redolog will be generated
Will not use Oracle instance and RAM                          | Will use Oracle instance and RAM
Can't disable constraints and indexes                         | Can disable constraints and indexes
---                                                           | Default buffer size is 256KB (bindsize) or 64 rows
Can do parallel load                                          | Can't do parallel load
Will use streams (default stream buffer size - 256K)         | Will not use streams
Will use multithreading                                       | Will not use multithreading
---                                                           | Data can be modified as it loads into the Oracle
                                                              | Database; can also populate columns with static
                                                              | or derived values
Can use UNRECOVERABLE option                                  | Can't use UNRECOVERABLE option

2. Disable Indexes and Constraints. For conventional data loads only, the disabling of indexes and constraints can greatly enhance the performance of SQL*Loader; maintaining them for every inserted row will significantly slow down load times, even with ROWS set to a high value.

3. External Table Load. An external table load creates an external table for data in a datafile and executes INSERT statements to insert the data from the datafile into the target table (see the sketch after this item). Advantages over conventional and direct path loads:
(i) An external table load attempts to load datafiles in parallel, if the datafile is big enough.
(ii) An external table load allows modification of the data being loaded by using SQL and PL/SQL functions as part of the INSERT statement that is used to create the external table.
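The following is a minimal, hedged sketch of an external table load; the directory, file, table and column names are illustrative only and not from the original text:

SQL> CREATE DIRECTORY load_dir AS '/data/loads';

SQL> CREATE TABLE emp_ext (
       empno NUMBER(4),
       ename VARCHAR2(10) )
     ORGANIZATION EXTERNAL (
       TYPE ORACLE_LOADER
       DEFAULT DIRECTORY load_dir
       ACCESS PARAMETERS (
         RECORDS DELIMITED BY NEWLINE
         FIELDS TERMINATED BY ',' )
       LOCATION ('emp.dat') )
     PARALLEL 4;

-- the data can be transformed with SQL functions as it is inserted
SQL> INSERT /*+ APPEND */ INTO emp (empno, ename)
     SELECT empno, UPPER(ename) FROM emp_ext;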
4. Use ROWS=n to commit less frequently. For conventional data loads only, the rows parameter specifies the number of rows per commit. Issuing fewer commits will enhance performance.

5. Use a Larger Bind Array. For conventional data loads only, larger bind arrays limit the number of calls to the database and increase performance. The size of the bind array is specified using the bindsize parameter. The bind array's size is equivalent to the number of rows it contains (rows=) times the maximum length of each row.

6. Use Parallel Loads. Available with direct path data loads only, this option allows multiple SQL*Loader jobs to execute concurrently:
$ sqlldr control=first.ctl parallel=true direct=true
$ sqlldr control=second.ctl parallel=true direct=true

7. Use Fixed Width Data. Fixed width data format saves Oracle some processing when parsing the data.

8. Disable Archiving During Load. While this may not be feasible in certain environments, disabling database archiving can increase performance considerably.

9. Use unrecoverable. The UNRECOVERABLE option (unrecoverable load data) disables the writing of the data to the redo logs. This option is available for direct path loads only, and the savings can be tremendous, depending on the type of data and number of rows.

Benchmarking

The following benchmark tests were performed with the various SQL*Loader options. The table was truncated after each test.

SQL*Loader Option                             Elapsed Time (Seconds)   Time Reduction
direct=false rows=64                          135                      -
direct=false bindsize=512000 rows=10000       92                       32%
direct=false bindsize=512000 rows=10000
  (database in noarchivelog mode)             85                       37%
direct=true                                   47                       65%
direct=true unrecoverable                     41                       70%
direct=true unrecoverable fixed width data    41                       70%

The results above indicate that conventional path loads take the longest; however, the bindsize and rows parameters can aid the performance under these loads. The test involving the conventional load didn't come close to the performance of the direct path load with the unrecoverable option specified. It is also worth noting that the fastest import time achieved (earlier) was 67 seconds, compared to 41 for SQL*Loader direct path - a 39% reduction in execution time. This proves that SQL*Loader can load the same data faster than import. These tests did not compensate for indexes; all database load operations will execute faster when indexes are disabled.
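Tying tips 4 and 5 together, here is a hedged command-line sketch (credentials and control file name are hypothetical) that enlarges the bind array to 512000 bytes and commits every 10000 rows on a conventional path load, mirroring the parameter combination used in the benchmark above:

$ sqlldr userid=scott/tiger control=load_emp.ctl bindsize=512000 rows=10000 direct=false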
Rollback Segments

ROLLBACK SEGMENTS in Oracle

In order to support the rollback facility in the oracle database, oracle takes the help of rollback segments. Rollback segments basically hold the before image, or undo data, or uncommitted data of a particular transaction; once the transaction is over, the blocks in that rollback segment can help any other transaction.

Why rollback segments?
o Undo the changes when a transaction is rolled back.
o Ensure read consistency (other transactions do not see uncommitted changes made to the database).
o Recover the database to a consistent state in case of failures.

A rollback segment is just like any other table segment or index segment: it consists of extents, demands space, and gets created in a tablespace. A rollback segment also has its own storage parameters; apart from the regular storage parameters, rollback segments can also be defined with OPTIMAL.

At the time of database creation, oracle by default creates a rollback segment named SYSTEM in the system tablespace, and it is ONLINE. This rollback segment can't be brought OFFLINE, since Oracle needs it as long as the DB is up and running; it can't be dropped either.

When a transaction is going on against a segment which is in a non system tablespace, Oracle needs a rollback segment which is also in a non system tablespace. In order to perform any DML operation against a table which is in a non system tablespace ('emp' in the 'user' tablespace, say), oracle requires a rollback segment from a non system tablespace. This is the reason we create a separate tablespace just for rollback segments.

There are two types:
a) Private rollback segments (for a single instance database)
b) Public rollback segments (for RAC or Oracle Parallel Server)

Only the DBA can create rollback segments (SYS is the owner), and they are not accessible to ordinary users. The rules in creating RBS are:
1. We have to have at least 2 as MINEXTENTS.
2. We can't define PCTINCREASE for RBS (not even 0).
We better create these rollback segments in a separate tablespace where no tables or indexes exist, and we should prefer to create different rollback segments in different tablespaces.

SQL> CREATE [PUBLIC] ROLLBACK SEGMENT rbs-name [TABLESPACE tbs-name]
     STORAGE (INITIAL 20K NEXT 40K MINEXTENTS 2 MAXEXTENTS 50);

Though we have created rollback segments, we have to bring them ONLINE, either by using init.ora or by using a SQL statement.
In order to utilize/enable rollback segments, have a parameter in init.ora:
ROLLBACK_SEGMENTS = R1,R2,R3

There is another way to bring any rollback segment ONLINE, by the DBA:
SQL> ALTER ROLLBACK SEGMENT rbs-name ONLINE;
Similarly we can also make it OFFLINE:
SQL> ALTER ROLLBACK SEGMENT rbs-name OFFLINE;

To execute the CREATE ROLLBACK SEGMENT and ALTER ROLLBACK SEGMENT commands, UNDO_MANAGEMENT must not be set, or must be set to MANUAL.

Points to ponder:
1. One rollback segment can have multiple transactions, but it is good to have not more than 4 transactions per rollback segment. We can limit the maximum transactions a rollback segment can support; to limit it to 10, set the init.ora parameter TRANSACTIONS_PER_ROLLBACK_SEGMENT=10.
2. One transaction can only take place in one rollback segment. If there is any space problem the transaction has to fail; it cannot switch over to another rollback segment, and Oracle rolls back the transaction.
3. The number of rollback segments needed in the database is decided by the concurrent DML activity of users (the number of transactions). The maximum number of rollback segments can be defined in init.ora by the MAX_ROLLBACK_SEGMENTS parameter (until 9i).
4. The assignment of a rollback segment to a transaction is done using a load balancing method (with respect to the number of transactions, but not the size of transactions). A user can request oracle for a particular rollback segment for his transaction:
SQL> SET TRANSACTION USE ROLLBACK SEGMENT rbs-name;
5. Oracle strongly recommends having smaller rollback segments. In a production database environment, we have to design different types of rollback segments to help different types of transactions. Usually in the day hours we have smaller transactions (data entry operations) by the end users, and in the night we perform processing (batch jobs) - a clear example is a SQL procedure updating tables and committing at the end of the transaction.
6. The DBA should constantly observe the HWM (High Water Mark) line for each rollback segment.

If we are having problems like "Read Inconsistencies" or "Snapshot Too Old", we can do these things:
- Increase the size of the rollback segment (so that the "wrapping" issue may not occur so frequently).
- Decrease the "Commit" frequency, so that "blocks" can't be overwritten while they still belong to an "open" DML.
7. The DBA should define the OPTIMAL value for a rollback segment. By default, a rollback segment will not come back to its original size right after the transaction is finished; rather, it waits until another transaction wants to use it, then it becomes smaller and again starts growing if necessary. Otherwise, if the rollback segment becomes big, it stays at that size, which is unwanted (as Oracle recommends smaller RBS). With OPTIMAL it comes back to some reasonable size after growing while helping a transaction.
8. If you bounce the database, a rollback segment will be offline unless you added the rollback segment name to the parameter file:
ROLLBACK_SEGMENTS = r1, r2, r3, r4

The biggest issue for a DBA is maintaining the rollback segments, especially in a high-activity environment. A typical scenario: one user continuously updating one table, whereas another user is continuously trying to retrieve from that same table.

ORA-1555 error (Snapshot too old error)
$ oerr ora 1555
01555, 00000, "snapshot too old: rollback segment number seg-number with name rbs-name too small"
// *Cause: rollback records needed by a reader for consistent read are overwritten by other writers.
// *Action: If in automatic undo management mode, increase undo_retention setting and undo tablespace size. Otherwise, use larger/more rollback segments and avoid long running queries.

This is nothing but a failure in Oracle: our transaction couldn't find sufficient space to hold all the before image blocks. The reasons are:
1. The RBS is too small to carry the entire transaction. Say our transaction grabbed one rollback segment and some other transaction also grabbed the same rollback segment; our transaction then couldn't find sufficient space.
2. The RBS has already reached MAXEXTENTS, i.e. the rollback segment can't grow anymore because it has already grabbed its MAXEXTENTS, although the tablespace has some free space to offer. Or you have defined the NEXT EXTENT size wrongly, and thus it has reached its MAXEXTENTS so quickly.
3. Tablespace limitation: the tablespace itself has no more free space to offer.

Operations on rollback segments

Shrinking a rollback segment (it cannot shrink to less than two extents):
SQL> ALTER ROLLBACK SEGMENT rbs-name SHRINK [TO int {K|M}];

Managing storage options of a rollback segment (you cannot change the values of INITIAL and MINEXTENTS):
SQL> ALTER ROLLBACK SEGMENT rbs-name STORAGE storage-options;

Dropping a rollback segment (you can drop only non-system and offline rollback segments):
SQL> DROP ROLLBACK SEGMENT rbs-name;

Related Views

DBA_SEGMENTS --> Here you can see all the rollback segments, regardless of whether they are offline or online (without the status online/offline).
SQL> select SEGMENT_NAME, TABLESPACE_NAME from DBA_SEGMENTS where SEGMENT_TYPE='ROLLBACK';
DBA_ROLLBACK_SEGS --> Here you can see the RBS names and the status of each RBS. Possible statuses are: ONLINE, OFFLINE, PENDING OFFLINE.
SQL> select SEGMENT_NAME, TABLESPACE_NAME, STATUS from DBA_ROLLBACK_SEGS;

V$ROLLNAME --> You can see the USN and the name of each RBS which is ONLINE.

V$ROLLSTAT --> You can see, by USN, the complete details of each online RBS, like:
1. Size of the rollback segment
2. Optimal size of the RBS
3. Whether it is carrying any transactions or not
4. What the High-Water-Mark of the RBS is
5. Wrap information, and much more info about each RBS

From Oracle 10g, you should use UNDO segments instead of rollback segments.
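To monitor the details exposed by the two views just described, a small join query on standard dictionary columns can be used (a minimal sketch):

SQL> select n.name, s.extents, s.rssize, s.optsize, s.hwmsize, s.xacts, s.wraps
     from v$rollname n, v$rollstat s
     where n.usn = s.usn;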
RMAN in Oracle

Oracle Recovery Manager (RMAN)

Recovery Manager (RMAN) is an Oracle provided (free) utility for backing-up, restoring and recovering Oracle databases. RMAN ships with the Oracle database and doesn't require a separate installation. The RMAN executable is located in the $ORACLE_HOME/bin directory. RMAN is a Pro*C application that translates commands to a PL/SQL interface through RPC (Remote Procedure Call). The PL/SQL calls are statically linked into the Oracle kernel and do not require the database to be opened (mapped from the ?/rdbms/admin/recover.bsq file).

RMAN was introduced in Oracle8 and has since been enhanced in Oracle 9i, Oracle 10g and Oracle 11g.

Benefits of RMAN. Some of the benefits provided by RMAN include:
- Backups are faster and use less tape (RMAN will skip empty blocks)
- Less database archiving while the database is being backed-up
- RMAN checks the database for block corruptions
- Automated restores from the catalog
- Files are written out in parallel instead of sequentially

The RMAN environment consists of the utilities and databases that play a role in backing up our data. At a minimum, the environment for RMAN must include the following:
- The target database to be backed up.
- The RMAN client (the rman executable and recover.bsq), which interprets backup and recovery commands, directs server sessions to execute those commands, and records our backup and recovery activity in the target database control file.

Some environments will also use these optional components:
- A recovery catalog database, a separate database schema used to record RMAN activity against one or more target databases (this is optional, but highly recommended).
- A flash recovery area, a disk location in which the database can store and manage files related to backup and recovery (called the fast recovery area from 11g Release 2).
- Media management software, required for RMAN to interface with backup devices such as tape drives.
- The large pool (LARGE_POOL_SIZE) is used for RMAN.

RMAN can be operated from Oracle Enterprise Manager, or from the command line. Here are the command line arguments:

Argument     Value           Description
target       quoted-string   connect-string for target database
catalog      quoted-string   connect-string for recovery catalog
nocatalog    none            if specified, then no recovery catalog
cmdfile      quoted-string   name of input command file
log          quoted-string   name of output message log file
trace        quoted-string   name of output debugging message log file
append       none            log is opened in append mode
debug        optional-args   activate debugging
msgno        none            show RMAN-nnnn prefix for all messages
send         quoted-string   send a command to the media manager
pipe         string          building block for pipe names
timeout      integer         number of seconds to wait for pipe input
checksyntax  none            check the command file for syntax errors

$ rman
$ rman TARGET SYS/pwd@target
$ rman TARGET SYS/pwd@target NOCATALOG
$ rman TARGET SYS/pwd@target CATALOG rman/pwd@cat
$ rman TARGET=SYS/pwd@target CATALOG=rman/pwd@cat
$ rman TARGET SYS/pwd@target LOG $ORACLE_HOME/dbs/log/rman_log.log APPEND
$ rman TARGET / CATALOG rman/pwd@cat
$ rman TARGET / CATALOG rman/pwd@cat CMDFILE cmdfile.rcv LOG outfile.txt
$ rman CATALOG rman/pwd@cat
$ rman @/my_dir/my_commands.txt

Using recovery catalog

One (base) recovery catalog can manage multiple target databases. All the target databases should be registered with the catalog. Start by creating a database schema (usually named rman) in the catalog database. Assign an appropriate tablespace to it and grant it the recovery_catalog_owner role:

$ sqlplus "/as sysdba"
SQL> create user rman identified by rman default tablespace rmants quota unlimited on rmants;
SQL> grant resource, recovery_catalog_owner to rman;

No need to grant the connect role explicitly, because the recovery_catalog_owner role has it.

Log in to the catalog database with rman and create the catalog:
$ rman catalog rman/rman
RMAN> create catalog;
RMAN> exit;

Now you can continue by registering your databases in the catalog:
$ rman catalog rman/rman@cat target system/manager@tgt
RMAN> register database;
Using virtual private catalog

A virtual private catalog is a set of synonyms and views that enable user access to a subset of a base recovery catalog. The owner of the base recovery catalog can GRANT or REVOKE restricted access to the catalog to other database users. Each restricted user has full read/write access to his own metadata, which is called a virtual private catalog. The RMAN metadata is stored in the schema of the virtual private catalog owner. The owner of the base recovery catalog controls what each virtual catalog user can access.

$ sqlplus "/as sysdba"
SQL> create user vpc identified by vpc default tablespace rmants quota unlimited on rmants;
SQL> grant resource, recovery_catalog_owner to vpc;

Log in to the catalog database with rman and grant the catalog to vpc:
$ rman catalog rman/rman
RMAN> GRANT CATALOG FOR DATABASE target_db TO vpc;
RMAN> exit;

Log in to the catalog database with vpc and create the virtual private catalog:
$ rman catalog vpc/vpc
RMAN> CREATE VIRTUAL CATALOG;
RMAN> exit;

$ sqlplus vpc/vpc
SQL> exec rman.DBMS_RCVCAT.CREATE_VIRTUAL_CATALOG;

Recovery Manager commands

ADVISE FAILURE - Will display repair options for the specified failures. 11g R1 command.
ALLOCATE - Establish a channel, which is a connection between RMAN and a database instance.
ALTER DATABASE - Mount or open a database.
BACKUP - Backup database, tablespaces, datafiles, archive logs, control files, spfile.
BLOCKRECOVER - Will recover the corrupted blocks.
CATALOG - Add information about file copies and user-managed backups to the catalog repository.
CHANGE - Update the status of a backup in the RMAN repository.
CONFIGURE - To change RMAN settings.
CONNECT - Establish a connection between RMAN and a target, auxiliary, or recovery catalog database.
CONVERT - Convert datafile formats for transporting tablespaces and databases across platforms.
CREATE CATALOG - Create the base/virtual recovery catalog.
CREATE SCRIPT - Create a stored script and store it in the recovery catalog.
CROSSCHECK - Check whether backup items still exist or not.
DELETE - Delete backups from disk or tape.
DELETE SCRIPT - Delete a stored script from the recovery catalog.
DROP CATALOG - Remove the base/virtual recovery catalog.
DROP DATABASE - Delete the target database from disk and unregister it.
DUPLICATE - Use backups of the target database to create a duplicate database that we can use for testing purposes or to create a standby database.
EXECUTE SCRIPT - Run an RMAN stored script.
EXIT or QUIT - Exit/quit the RMAN console.
FLASHBACK DATABASE - Return the database to its state at a previous time or SCN.
GRANT - Grant privileges to a recovery catalog user.
HOST - Invoke an operating system command-line subshell from within RMAN or run a specific operating system command.
IMPORT CATALOG - Import the metadata from one recovery catalog into another recovery catalog. 11g R1 command.
LIST - List backups and copies.
PRINT SCRIPT - Display a stored script.
RECOVER - Apply redo logs or incremental backups to a restored backup set in order to recover it to a specified time.
REGISTER - Register the target database in the recovery catalog.
RELEASE CHANNEL - Release a channel that was allocated.
REPAIR FAILURE - Will repair database failures identified by the Data Recovery Advisor. 11g R1 command.
REPLACE SCRIPT - Replace an existing script stored in the recovery catalog. If the script does not exist, then REPLACE SCRIPT creates it.
REPORT - Report backup status: database, files, backups.
RESET DATABASE - Inform RMAN that the SQL statement ALTER DATABASE OPEN RESETLOGS has been executed and that a new incarnation of the target database has been created, or reset the target database to a prior incarnation.
RESTORE - Restore files from RMAN backup.
RESYNC CATALOG - Perform a full resynchronization, which creates a snapshot control file and then copies any new or changed information from that snapshot control file to the recovery catalog.
REVOKE - Revoke privileges from a recovery catalog user.
RUN - To run a set of RMAN commands; only some RMAN commands are valid inside a RUN block.
SEND - Send a vendor-specific quoted string to one or more specific channels.
SET - Settings for the current RMAN session.
SHOW - Display the current configuration.
SHUTDOWN - Shutdown the database.
SPOOL - To direct RMAN output to a log file.
SQL - Execute a PL/SQL procedure or SQL statement (not SELECT).
STARTUP - Startup the database.
SWITCH - Specify that a datafile copy is now the current datafile, that is, the datafile pointed to by the control file.
TRANSPORT TABLESPACE - Create transportable tablespace sets from backup for one or more tablespaces.
UNREGISTER - Unregister a database from the recovery catalog.
UPGRADE CATALOG - Upgrade the recovery catalog schema from an older version to the version required by the RMAN executable.
VALIDATE - To validate backups. 11g R1 command.

A channel is a connection (session) from RMAN to the target database. All RMAN commands are executed through channels; these connections are used to perform the desired operations.

Flash/Fast Recovery Area (FRA)

The flash recovery area is a disk location in which the database can store and manage files related to backup and recovery. To set the flash recovery area location and size, use DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE.
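A hedged example of configuring the FRA with these two parameters (the size must be set before the destination; the values and path are illustrative only):

SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 10G SCOPE=BOTH;
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST = '/u01/app/oracle/fra' SCOPE=BOTH;
SQL> SELECT name, space_limit, space_used FROM V$RECOVERY_FILE_DEST;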
RMAN new features in Oracle 10g
- Managing recovery related files with the flash recovery area.
- Optimized incremental backups using block change tracking (faster incremental backups) using a file (named the block change tracking file). CTWR (Change Tracking Writer) is the background process responsible for tracking the blocks (see the example after this feature list).
- Reducing the time and overhead of full backups with incrementally updated backups.
- Comprehensive backup job tracking and administration with Enterprise Manager.
- Backup set binary compression: new compression algorithm BZIP2 brought in.
- Automated Tablespace Point-in-Time Recovery.
- Automatic channel failover on backup & restore.
- Cross-platform tablespace conversion.
- Catalogs can be merged/moved/imported from one database to another.
- Ability to preview the backups required to perform a restore operation:
RMAN> RESTORE DATABASE PREVIEW;
RMAN> RESTORE TABLESPACE tbs1 PREVIEW;

RMAN new features in Oracle 11g Release 1
- Multisection backups of the same file: RMAN can backup or restore a single file in parallel by dividing the work among multiple channels. Each channel backs up one file section, which is a contiguous range of blocks. This speeds up overall backup and restore performance, particularly for bigfile tablespaces, in which a datafile can be sized upwards of several hundred GB to TBs.
- Fast backup compression: in addition to the Oracle Database 10g backup compression algorithm (BZIP2), RMAN now supports the ZLIB algorithm, which offers 40% better performance, with a trade-off of no more than 20% lower compression ratio, versus BZIP2.
RMAN> CONFIGURE COMPRESSION ALGORITHM 'ZLIB';
- Data Recovery Advisor (DRA): quickly identify the root cause of failures; auto fix or present recovery options to the DBA. New commands in RMAN:
RMAN> LIST FAILURE;
RMAN> LIST FAILURE errnumber DETAIL;
RMAN> ADVISE FAILURE;
RMAN> REPAIR FAILURE;
RMAN> REPAIR FAILURE PREVIEW;
- VALIDATE command (checks for corrupted blocks):
RMAN> VALIDATE DATABASE;
- Virtual Private Catalog: a recovery catalog administrator can grant visibility of a subset of registered databases in the catalog to specific RMAN users.
RMAN> CREATE VIRTUAL CATALOG;
RMAN> GRANT CATALOG FOR DATABASE db-name TO user-name;
- Will backup uncommitted undo only, not committed undo.
- Recovery will make use of flashback logs in the FRA (Flash Recovery Area).

RMAN new features in Oracle 11g Release 2
- New clauses and format options for the SET NEWNAME command: a single SET NEWNAME command can be applied to all files in a database or tablespace.
SET NEWNAME FOR DATABASE TO format;
SET NEWNAME FOR TABLESPACE tsname TO format;
New format identifiers are as follows:
%U - unique identifier: data_D-%d_I-%I_TS-%N_FNO-%f
%b - UNIX base name of the original datafile name. For example, if the original datafile name was $ORACLE_HOME/data/tbs_01.f, then %b is tbs_01.f.
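As promised in the block change tracking bullet above, here is a short example of enabling the feature; the file location is arbitrary, and V$BLOCK_CHANGE_TRACKING confirms the status:

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/app/oracle/bct/change_trk.f';
SQL> SELECT status, filename FROM V$BLOCK_CHANGE_TRACKING;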
RMAN related views

Control File V$ View          | Recovery Catalog View          | Describes
V$ARCHIVED_LOG                | RC_ARCHIVED_LOG                | Archived and unarchived redo logs
V$BACKUP_DATAFILE             | RC_BACKUP_CONTROLFILE          | Control files in backup sets
V$BACKUP_CORRUPTION           | RC_BACKUP_CORRUPTION           | Corrupt block ranges in datafile backups
V$BACKUP_DATAFILE             | RC_BACKUP_DATAFILE             | Datafiles in backup sets
V$BACKUP_FILES                | RC_BACKUP_FILES                | RMAN backups and copies in the repository
V$BACKUP_PIECE                | RC_BACKUP_PIECE                | Backup pieces
V$BACKUP_REDOLOG              | RC_BACKUP_REDOLOG              | Archived logs in backups
V$BACKUP_SET                  | RC_BACKUP_SET                  | Backup sets
V$BACKUP_SPFILE               | RC_BACKUP_SPFILE               | Server parameter files in backup sets
V$DATAFILE_COPY               | RC_CONTROLFILE_COPY            | Control file copies on disk
V$COPY_CORRUPTION             | RC_COPY_CORRUPTION             | Information about datafile copy corruptions
V$DATABASE                    | RC_DATABASE                    | Databases registered in the recovery catalog (RC_DATABASE) or information about the currently mounted database (V$DATABASE)
V$DATABASE_BLOCK_CORRUPTION   | RC_DATABASE_BLOCK_CORRUPTION   | Database blocks marked as corrupt in the most recent RMAN backup or copy
V$DATABASE_INCARNATION        | RC_DATABASE_INCARNATION        | All database incarnations registered in the catalog
V$DATAFILE                    | RC_DATAFILE                    | All datafiles registered in the recovery catalog
V$DATAFILE_COPY               | RC_DATAFILE_COPY               | Datafile image copies
V$LOG_HISTORY                 | RC_LOG_HISTORY                 | Historical information about online redo logs
V$OFFLINE_RANGE               | RC_OFFLINE_RANGE               | Offline ranges for datafiles
V$PROXY_ARCHIVEDLOG           | RC_PROXY_ARCHIVEDLOG           | Archived log backups created by proxy copy
V$PROXY_CONTROLFILE           | RC_PROXY_CONTROLFILE           | Control file backups created by proxy copy
V$PROXY_DATAFILE              | RC_PROXY_DATAFILE              | Datafile backups created by proxy copy
V$LOG and V$LOGFILE           | RC_REDO_LOG                    | Online redo logs for all incarnations of the database since the last catalog resynchronization
V$THREAD                      | RC_REDO_THREAD                 | All redo threads for all incarnations of the database since the last catalog resynchronization
V$RESTORE_POINT               | RC_RESTORE_POINT               | All restore points for all incarnations of the database since the last catalog resynchronization
-                             | RC_RESYNC                      | Recovery catalog resynchronizations
V$RMAN_CONFIGURATION          | RC_RMAN_CONFIGURATION          | RMAN persistent configuration settings
V$RMAN_OUTPUT                 | RC_RMAN_OUTPUT                 | Output from RMAN commands for use in Enterprise Manager
V$RMAN_STATUS                 | RC_RMAN_STATUS                 | Historical status information about RMAN operations
V$TABLESPACE                  | RC_TABLESPACE                  | All tablespaces registered in the recovery catalog, all dropped tablespaces, and tablespaces that belong to old incarnations
V$TEMPFILE                    | RC_TEMPFILE                    | All tempfiles registered in the recovery catalog
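For example, recent RMAN activity can be checked from the target database with a simple query against one of the views above (a minimal sketch using standard columns):

SQL> SELECT operation, status, start_time, end_time
     FROM v$rman_status
     ORDER BY start_time;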
RMAN related Packages
DBMS_RCVCAT
DBMS_RCVMAN
DBMS_BACKUP_RESTORE

RMAN (Recovery Manager) Commands in Oracle

rman command
Start RMAN from the OS command line.

rman [ TARGET [=] ['] connectStringSpec [']
    | CATALOG [=] ['] connectStringSpec [']
    | NOCATALOG
    | CMDFILE [=] ['] filename [']
    | LOG [=] ['] filename ['] [APPEND] ... ]

connectStringSpec::= ['] [userid][/[password]][@net_service_name] [']

$ rman
$ rman NOCATALOG
$ rman TARGET SYS/pwd@target
$ rman TARGET SYS/pwd@target NOCATALOG
$ rman TARGET SYS/pwd@target LOG $ORACLE_HOME/dbs/my_log.log APPEND
$ rman CATALOG rman/pwd@catdb
$ rman TARGET=SYS/pwd@target CATALOG=rman/pwd@cat
$ rman TARGET / CATALOG rman/rman@cat
$ rman TARGET / SCRIPT dwh LOG /tmp/dwh.log
$ rman TARGET / CATALOG rman/pwd@cat CMDFILE cmdfile.rcv LOG outfile.txt
$ rman TARGET / CATALOG rman/pwd@cat DEBUG TRACE trace.log
$ rman TARGET SYS/pwd@prod CATALOG rman/rman@rcat @'/oracle/dbs/whole.rcv'
$ rman TARGET user/pwd CMDFILE=takefulldb.cmd
$ rman TARGET / @backup_db.rman
$ rman @/my_dir/my_commands.txt
$ rman @backup_ts_generic.rman "/tmp" USERS
$ rman CMDFILE=backup_ts_users.rman "/tmp" $1
$ rman CHECKSYNTAX @'/tmp/backup_db.cmd'
$ rman MSGNO
$ rman PIPE newpipe TARGET / TIMEOUT 90
$ rman | tee rman.log
$ rman help=yes

@ (at sign) command
Run a command file.
RMAN> @backup_db.rman
RMAN> @/my_dir/my_command_file.txt
RMAN> @/tmp/bkup_db.rman whole_db
RMAN> RUN {@backup_db.rman}

@@ (double at sign) command
Run a command file in the same directory as another command file that is currently running. The @@ command differs from the @ command only when run from within a command file.
@@takefulldb.cmd

CONNECT command
Establish a connection between RMAN and a target, auxiliary, or recovery catalog database.
RMAN> CONNECT TARGET;
RMAN> CONNECT TARGET /
RMAN> CONNECT TARGET sys@tgt;
RMAN> CONNECT TARGET sys/pwd@tgt;
RMAN> CONNECT CATALOG rman@catdb;
RMAN> CONNECT CATALOG rman/pwd@catdb;
RMAN> CONNECT AUXILIARY /
RMAN> CONNECT AUXILIARY rman@auxdb;
RMAN> CONNECT AUXILIARY rman/pwd@auxdb;

CREATE CATALOG command
Create the Oracle schema for the recovery catalog.
RMAN> CREATE CATALOG;
RMAN> CREATE CATALOG TABLESPACE rmants;
BACKUPS. RMAN> UPGRADE CATALOG. command catalog. CATALOG.Oracle 11g R1 DROP Remove Oracle schema RMAN> CATALOG from the recovery DROP R1 command catalog. NOPROMPT. NOPROMPT. prod4. RMAN> IMPORT CATALOG rman/rman@catdb1 DB_NAME=prod1 NO UNREGISTER. 11g R1 bckop2.DBMS_RCVCAT. tbs-name.Oracle 11g SQL> EXEC rman. -. RMAN> IMPORT CATALOG cat@srcdb DB_NAME=prod2. REGISTER Register RMAN> RMAN> RMAN> the target REGISTER database REGISTER REGISTER CATALOG in the recovery TABLESPACE UNREGISTER Unregister a Oracle database from the recovery RMAN> UNREGISTER RMAN> UNREGISTER DATABASE RMAN> UNREGISTER DATABASE RMAN> UNREGISTER DATABASE prod2 RMAN> UNREGISTER DB_UNIQUE_NAME RMAN> UNREGISTER DB_UNIQUE_NAME prod1 RMAN> UNREGISTER DB_UNIQUE_NAME prod2 INCLUDING RMAN> UNREGISTER DB_UNIQUE_NAME prod3 INCLUDING BACKUPS GRANT Grant RMAN> RMAN> privileges GRANT CATALOG GRANT to a FOR DATABASE REGISTER recovery prod1 TO DATABASE vpc1.Oracle TO command catalog. catalog -. -. NOPROMPT. DATABASE. NOPROMPT. RMAN> IMPORT CATALOG rcat@inst DBID=2871507123. .Oracle 11g R1 RMAN> SQL "EXEC catown. RMAN> IMPORT CATALOG cat@srcdb DB_NAME=prod3. prod2. command user. RMAN> IMPORT CATALOG rman/oracle@catdb1 NO UNREGISTER. prod1. RMAN> RESYNC CATALOG FROM DB_UNIQUE_NAME ALL. CATALOG.RMAN> CREATE VIRTUAL CATALOG. RMAN> RESYNC CATALOG. IMPORT CATALOG command Import the metadata from one recovery catalog into another recovery catalog. RMAN> IMPORT CATALOG cat@srcdb. 61738563. RMAN> IMPORT CATALOG cat@srcdb DBID=1844750987.CREATE_VIRTUAL_CATALOG. DATABASE.DBMS_RCVCAT. RMAN> RESYNC CATALOG FROM DB_UNIQUE_NAME prod_db. -. which creates a snapshot control file and then copies any new or changed information from that snapshot control file to the recovery catalog. RESYNC CATALOG command Perform a full resynchronization.CREATE_VIRTUAL_CATALOG". UPGRADE CATALOG command Upgrade the recovery catalog schema from an older version to the version required by the RMAN executable.

REVOKE command
Revoke privileges from a recovery catalog user. 11g R1 command.
RMAN> REVOKE CATALOG FOR DATABASE prod1 FROM vpc1;
RMAN> REVOKE REGISTER DATABASE FROM bckop;
RMAN> REVOKE RECOVERY_CATALOG_OWNER FROM rmanop3;

RESET DATABASE command
Inform RMAN that the SQL statement ALTER DATABASE OPEN RESETLOGS has been executed and that a new incarnation of the target database has been created, or reset the target database to a prior incarnation.
RMAN> RESET DATABASE TO INCARNATION 3;

STARTUP command
Startup the target database. This command is equivalent to the SQL*Plus STARTUP command.
RMAN> STARTUP;
RMAN> STARTUP NOMOUNT;
RMAN> STARTUP MOUNT;
RMAN> STARTUP PFILE='/u01/app/oracle/admin/pfile/initsid.ora';
RMAN> STARTUP FORCE;
RMAN> STARTUP FORCE NOMOUNT;
RMAN> STARTUP FORCE MOUNT DBA PFILE=/tmp/inittrgt.ora;
RMAN> STARTUP FORCE DBA;
RMAN> STARTUP FORCE DBA PFILE=c:\Oracle\Admin\pfile\init.ora;
RMAN> STARTUP AUXILIARY NOMOUNT;

SHUTDOWN command
Shutdown the target database. This command is equivalent to the SQL*Plus SHUTDOWN command.
RMAN> SHUTDOWN;
RMAN> SHUTDOWN NORMAL;
RMAN> SHUTDOWN IMMEDIATE;
RMAN> SHUTDOWN TRANSACTIONAL;
RMAN> SHUTDOWN ABORT;

ALTER DATABASE command
Mount or open a database.
RMAN> ALTER DATABASE MOUNT;
RMAN> ALTER DATABASE OPEN;
RMAN> ALTER DATABASE OPEN RESETLOGS;

SHOW command
Display the current configuration settings.

SHOW { RETENTION POLICY | BACKUP OPTIMIZATION | [DEFAULT] DEVICE TYPE
     | CONTROLFILE AUTOBACKUP [FORMAT] | [AUXILIARY] CHANNEL [FOR DEVICE TYPE deviceSpecifier]
     | DATAFILE BACKUP COPIES | ARCHIVELOG {BACKUP COPIES | DELETION POLICY}
     | MAXSETSIZE | ENCRYPTION {ALGORITHM | FOR [DATABASE | TABLESPACE]}
     | COMPRESSION ALGORITHM | SNAPSHOT CONTROLFILE NAME | AUXNAME | EXCLUDE | ALL }
     [FOR DB_UNIQUE_NAME {'db_unique_name' | ALL}];
RMAN> SHOW ALL;
RMAN> SHOW ALL FOR DB_UNIQUE_NAME ALL;
RMAN> SHOW ALL FOR DB_UNIQUE_NAME 'STANDBY';
RMAN> SHOW RETENTION POLICY;
RMAN> SHOW RETENTION POLICY FOR DB_UNIQUE_NAME ALL;
RMAN> SHOW DEVICE TYPE;
RMAN> SHOW DEVICE TYPE FOR DB_UNIQUE_NAME prod3;
RMAN> SHOW DEFAULT DEVICE TYPE;
RMAN> SHOW CHANNEL;
RMAN> SHOW BACKUP OPTIMIZATION;
RMAN> SHOW CONTROLFILE AUTOBACKUP;
RMAN> SHOW MAXSETSIZE;
RMAN> SHOW ENCRYPTION ALGORITHM;
RMAN> SHOW COMPRESSION ALGORITHM;
RMAN> SHOW SNAPSHOT CONTROLFILE NAME;

CONFIGURE command
To configure persistent RMAN settings. These settings apply to all RMAN sessions until explicitly changed or disabled. The default configuration is:

CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE SBT TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE SBT TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE; # default -- Oracle 11g R2
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '.../dbs/snapcf_sid.f'; # default

Format identifiers used in FORMAT strings:
%F = combines the DBID, day, month, year and sequence number
%U = a unique identifier; for a backup piece it means %u_%p_%c
%u = an eight-character name derived from the backup set number and the time the backup set was created
%p = the piece number within the backup set
%c = the copy number of the backup piece

CONFIGURE ARCHIVELOG DELETION POLICY
  {CLEAR | TO {APPLIED ON [ALL] STANDBY
             | BACKED UP integer TIMES TO DEVICE TYPE deviceSpecifier
             | NONE
             | SHIPPED TO [ALL] STANDBY}
        [{APPLIED ON [ALL] STANDBY | BACKED UP integer TIMES TO DEVICE TYPE deviceSpecifier | NONE | SHIPPED TO [ALL] STANDBY}]...}
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO SHIPPED TO ALL STANDBY.. RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 3. RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO sbt. sbt. RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'cf%F'. integer } [K 'channel_parms' 'format_string'].. RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK CLEAR. RMAN> CONFIGURE RETENTION POLICY CLEAR. }. RMAN> CONFIGURE DEFAULT DEVICE TYPE TO RMAN> CONFIGURE DEFAULT DEVICE TYPE TO RMAN> CONFIGURE DEVICE TYPE sbt PARALLELISM RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 4. RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY.. connectStringSpec::= ['] [userid] { TO deviceSpec | CLEAR } { PARALLELISM integer | CLEAR } DEVICE TYPE deviceSpec {allocOperandList|CLEAR} PARMS [=] [=] integer | [=] 'format_string' RATE [=] [/[password]] [. --11g RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO NONE. | M | G] [@net_service_name] ['] backupConf::= {RETENTION POLICY {TO {RECOVERY WINDOW OF integer DAYS | REDUNDANCY [=] integer | NONE } | CLEAR } | MAXSETSIZE {TO {integer [K | M | G]| UNLIMITED} | CLEAR } | {ARCHIVELOG | DATAFILE} BACKUP COPIES FOR DEVICE TYPE deviceSpec {TO integer | CLEAR} | BACKUP OPTIMIZATION {ON | OFF | CLEAR} | EXCLUDE FOR TABLESPACE tablespace_name [CLEAR] } cfauConf::== CONTROLFILE AUTOBACKUP {ON | OFF | CLEAR | FORMAT FOR DEVICE TYPE deviceSpec {TO 'format string'|CLEAR}} RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON. RMAN> CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 2. 3. RMAN> CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 2. RMAN> CONFIGURE ARCHIVELOG DELETION POLICY CLEAR. RMAN> CONFIGURE CONTROLFILE AUTOBACKUP OFF. ... RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY. RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 3 BACKUP TYPE TO BACKUPSET.[{APPLIED ON [ALL] STANDBY | BACKED UP integer TIMES TO DEVICE TYPE deviceSpecifier | NONE | SHIPPED TO [ALL] STANDBY}] … } deviceConf::= { DEFAULT | DEVICE | [AUXILIARY] } DEVICE TYPE TYPE deviceSpec CHANNEL [integer] allocOperandList::= { | FORMAT | { MAXPIECESIZE . DISK. RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS. RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO SHIPPED TO STANDBY. RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '+BACKUP'. RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 3 TIMES TO disk..

RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/tmp/%U';
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT 'C:\backup\df%t_s%s_s%p';
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT CLEAR;
RMAN> CONFIGURE CHANNEL 2 DEVICE TYPE DISK FORMAT '/backup/db_%s%d_%p';
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK DEBUG 5;
RMAN> CONFIGURE CHANNEL DEVICE TYPE sbt;
RMAN> CONFIGURE CHANNEL DEVICE TYPE sbt CLEAR;
RMAN> CONFIGURE CHANNEL DEVICE TYPE sbt FORMAT 'bkup_%U';
RMAN> CONFIGURE CHANNEL DEVICE TYPE sbt PARMS 'ENV=(NSR_SERVER=bksrv1)';
RMAN> CONFIGURE CHANNEL DEVICE TYPE sbt PARMS='ENV=mml_env_settings';
RMAN> CONFIGURE CHANNEL DEVICE TYPE sbt PARMS 'BLKSIZE=1048576';
RMAN> CONFIGURE CHANNEL DEVICE TYPE sbt MAXPIECESIZE 1G;
RMAN> CONFIGURE CHANNEL 2 DEVICE TYPE sbt CONNECT 'SYS/pwd@node2' PARMS 'ENV=(NSR_SERVER=bksrv2)';

RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;
RMAN> CONFIGURE COMPRESSION ALGORITHM 'ZLIB';   -- Oracle 11g R1
RMAN> CONFIGURE COMPRESSION ALGORITHM 'BZIP2';
RMAN> CONFIGURE COMPRESSION ALGORITHM 'BASIC';  -- 11g R2, corresponds to BZIP2
RMAN> CONFIGURE COMPRESSION ALGORITHM 'LOW';    -- 11g R2, corresponds to LZO
RMAN> CONFIGURE COMPRESSION ALGORITHM 'MEDIUM'; -- 11g R2, corresponds to ZLIB
RMAN> CONFIGURE COMPRESSION ALGORITHM 'HIGH';   -- 11g R2, corresponds to unmodified BZIP2

RMAN> CONFIGURE DB_UNIQUE_NAME 'standby' CONNECT IDENTIFIER 'standby_cs'; -- Oracle 11g R2
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO DISK FOR DB_UNIQUE_NAME 'standby';
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO DISK FOR DB_UNIQUE_NAME ALL;
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO SBT FOR DB_UNIQUE_NAME po;

RMAN> CONFIGURE BACKUP OPTIMIZATION ON;
RMAN> CONFIGURE BACKUP OPTIMIZATION OFF;
RMAN> CONFIGURE MAXSETSIZE TO 100M;
RMAN> CONFIGURE MAXSETSIZE TO UNLIMITED;
RMAN> CONFIGURE EXCLUDE FOR TABLESPACE example;
RMAN> CONFIGURE EXCLUDE CLEAR;
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/dev/sda';
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/ocfs/oradata/snapcf';
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/backup/snapcf_%d.f';
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+FRA/snap/snapcf_%d.f';
RMAN> CONFIGURE AUXNAME FOR DATAFILE 4 TO '/oracle/auxfiles/aux_4.f';
RMAN> CONFIGURE AUXNAME FOR DATAFILE 2 CLEAR;

Related syntax:
CONFIGURE SNAPSHOT CONTROLFILE NAME {TO 'filename' | CLEAR};
CONFIGURE AUXNAME FOR DATAFILE datafileSpec {TO 'filename' | CLEAR};

SET command
Set the value of various attributes that affect RMAN behaviour for the duration of a RUN block or a session.

SET { set_rman_option [;] | set_run_option; }

set_rman_option::=
{ ECHO {ON | OFF}
| CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE deviceSpec TO 'frmt_string'
| DBID [=] integer }

set_run_option::=
{ NEWNAME FOR DATAFILE datafileSpec TO {'filename' | NEW}
| ARCHIVELOG DESTINATION TO 'log_archive_dest'
| untilClause
| COMMAND ID TO 'string'
| CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE deviceSpec TO 'frmt_string' }

ECHO - Controls whether RMAN commands are displayed in the message log.
DBID - A unique 32-bit identification number computed when the database is created. RMAN displays the DBID upon connection to the target database. We can obtain the DBID by querying V$DATABASE or RC_DATABASE.
RMAN> SET ECHO ON;
RMAN> SET ECHO OFF;
RMAN> SET DBID 591329635;
RMAN> SET DBID=4240978820;
RMAN> SET COMMAND ID TO 'rman';
RMAN> SET MAXCORRUPT FOR DATAFILE 13 TO 200;
RMAN> SET MAXCORRUPT FOR DATABASE TO 2;
RMAN> SET BACKUP COPIES = 2;
RMAN> SET NEWNAME FOR DATAFILE 1 TO '/oradata/system01.dbf';
RMAN> SET NEWNAME FOR DATAFILE '/disk7/tbs11.f' TO '/disk9/tbs11.f';
RMAN> SET NEWNAME FOR TEMPFILE 1 TO '/newdisk/dbs/temp1.f';
RMAN> SET NEWNAME FOR TABLESPACE users TO '/oradata2/%U';
RMAN> SET NEWNAME FOR DATABASE TO '/oradata1/%b';
RMAN> SET ARCHIVELOG DESTINATION TO '/oracle/temp_restore';
RMAN> SET UNTIL TIME '04-23-2010:23:50:04';
RMAN> SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'cf_%F.bak';
RMAN> SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE sbt TO 'cf_%F';
RMAN> SET COMPRESSION ALGORITHM 'LOW';
RMAN> SET COMPRESSION ALGORITHM 'LOW' OPTIMIZE FOR LOAD FALSE;
RMAN> SET COMPRESSION ALGORITHM 'MEDIUM';
RMAN> SET COMPRESSION ALGORITHM 'HIGH';

NEWNAME FOR DATAFILE - The default name for all subsequent RESTORE or SWITCH commands that affect the specified datafile.
MAXCORRUPT FOR DATAFILE - A limit on the number of previously undetected physical block corruptions that Oracle will allow in the datafile(s).
AUTOLOCATE - Force RMAN to automatically discover which nodes of an Oracle Real Application Clusters configuration contain the backups that you want to restore.
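Since SET NEWNAME is a run option, it only takes effect inside a RUN block together with RESTORE and SWITCH. A hedged sketch (datafile number, path and tablespace are hypothetical; it assumes the tablespace has been taken offline first):

RMAN> RUN {
  SET NEWNAME FOR DATAFILE 4 TO '/newdisk/oradata/users01.dbf';
  RESTORE TABLESPACE users;
  SWITCH DATAFILE ALL;      # repoint the control file to the restored copy
  RECOVER TABLESPACE users;
}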
BACKUP command
Backs up Oracle database files, copies of database files, archived logs, or backup sets. The command takes these forms:

BACKUP FULL Options
BACKUP FULL AS (COPY | BACKUPSET) Options
BACKUP INCREMENTAL LEVEL [=] integer Options
BACKUP INCREMENTAL LEVEL [=] integer AS (COPY | BACKUPSET) Options
BACKUP AS (COPY | BACKUPSET) Options
BACKUP AS (COPY | BACKUPSET) (FULL | INCREMENTAL LEVEL [=] integer) Options

Options::= [backupOperand [backupOperand]...] backupSpec [backupSpec]...
           [PLUS ARCHIVELOG [backupSpecOperand [backupSpecOperand]...]]

backupOperand::=
{ FORMAT [=] 'format_string' [, 'format_string']...
| CHANNEL ['] channel_id [']
| CUMULATIVE
| MAXSETSIZE [=] integer [K | M | G]
| TAG [=] ['] tag_name [']
| keepOption
| SKIP {OFFLINE | READONLY | INACCESSIBLE}
| NOT BACKED UP [SINCE TIME [=] 'date_string']
| COPIES [=] integer
| DEVICE TYPE deviceSpecifier
| VALIDATE }...

backupSpec::= [(]
{ BACKUPSET {{ALL | completedTimeSpec} | primary_key [, primary_key]...}
| COPY OF {DATABASE | TABLESPACE [']tablespace_name['] [, [']tablespace_name[']]... | DATAFILE datafileSpec [, datafileSpec]...}
| DATAFILECOPY 'filename' [, 'filename']...
| DATAFILECOPY FROM TAG [=] ['] tag_name ['] [, ['] tag_name [']]...
| DATAFILECOPY {ALL | LIKE 'string_pattern'}
| TABLESPACE [']tablespace_name['] [, [']tablespace_name[']]...
| DATAFILE datafileSpec [, datafileSpec]...
| DATABASE
| archivelogRecordSpecifier
| CURRENT CONTROLFILE [FOR STANDBY]
| CONTROLFILECOPY 'filename'
| SPFILE }
[backupSpecOperand [backupSpecOperand]...] [)]

backupSpecOperand::=
{ FORMAT [=] 'format_string' [, 'format_string']...
| CHANNEL ['] channel_id [']
| CUMULATIVE
| MAXSETSIZE [=] integer [K | M | G]
| TAG [=] ['] tag_name [']
| keepOption
| SKIP {OFFLINE | READONLY | INACCESSIBLE}
| NOT BACKED UP [SINCE TIME [=] 'date_string' | integer TIMES]
| DELETE [ALL] INPUT }

Whole database backups:
RMAN> BACKUP DATABASE;
RMAN> BACKUP DATABASE TAG 'weekly_full_db_bkup';
RMAN> BACKUP DATABASE TAG='test backup';
RMAN> BACKUP DATABASE COMMENT='full backup';
RMAN> BACKUP DATABASE MAXSETSIZE 10M;
RMAN> BACKUP MAXSETSIZE 500M DATABASE PLUS ARCHIVELOG;
RMAN> BACKUP DURATION 00:30 DATABASE;
RMAN> BACKUP DURATION 00:45 MINIMIZE LOAD DATABASE;
RMAN> BACKUP DURATION 00:60 MINIMIZE TIME DATABASE;
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
RMAN> BACKUP DATABASE SKIP INACCESSIBLE;
RMAN> BACKUP DATABASE SKIP READONLY;
RMAN> BACKUP DATABASE SKIP OFFLINE;
RMAN> BACKUP DATABASE SKIP READONLY SKIP OFFLINE SKIP INACCESSIBLE;
RMAN> BACKUP DATABASE NOEXCLUDE;
RMAN> BACKUP DATABASE NOEXCLUDE KEEP FOREVER TAG='abc';
RMAN> BACKUP DATABASE KEEP FOREVER;
RMAN> BACKUP DATABASE KEEP UNTIL TIME='SYSDATE+30';
RMAN> BACKUP DATABASE UNTIL 'SYSDATE+365' NOLOGS;
RMAN> BACKUP DATABASE NOT BACKED UP;
RMAN> BACKUP DATABASE NOT BACKED UP SINCE TIME='SYSDATE-3';
RMAN> BACKUP NOT BACKED UP SINCE TIME 'SYSDATE-10' MAXSETSIZE 500M DATABASE PLUS ARCHIVELOG;
RMAN> BACKUP DATABASE COPIES=2;
RMAN> BACKUP DATABASE FORMAT '/disk1/backups/db_%U.bck' TAG quarterly KEEP UNTIL TIME 'SYSDATE+365' RESTORE POINT Q1FY12;
RMAN> BACKUP DATABASE FORCE; -- backs up read only datafiles also
RMAN> BACKUP CHECK LOGICAL DATABASE;
RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;
RMAN> BACKUP VALIDATE DATABASE;
RMAN> BACKUP VALIDATE DATABASE ARCHIVELOG ALL;
RMAN> BACKUP DEVICE TYPE DISK DATABASE;
RMAN> BACKUP DEVICE TYPE sbt DATABASE PLUS ARCHIVELOG;
RMAN> BACKUP DEVICE TYPE sbt DATAFILECOPY FROM TAG 'latest' FORMAT 'df%f_%d';
RMAN> BACKUP DEVICE TYPE sbt ARCHIVELOG LIKE '/disk%arc%' DELETE ALL INPUT;
RMAN> BACKUP DEVICE TYPE sbt BACKUPSET COMPLETED BEFORE 'SYSDATE-14' DELETE INPUT;
Archived redo log backups:
RMAN> BACKUP ARCHIVELOG ALL;
RMAN> BACKUP ARCHIVELOG ALL DELETE INPUT;
RMAN> BACKUP ARCHIVELOG ALL FROM SEQUENCE 1200 DELETE ALL INPUT;
RMAN> BACKUP ARCHIVELOG FROM SEQUENCE 100;
RMAN> BACKUP ARCHIVELOG FROM SEQUENCE 123 DELETE ALL INPUT;
RMAN> BACKUP ARCHIVELOG FROM SEQUENCE 999 DELETE INPUT;
RMAN> BACKUP ARCHIVELOG FROM SEQUENCE 21531 UNTIL SEQUENCE 21590 FORMAT '/tmp/archive_backup.bkp';
RMAN> BACKUP ARCHIVELOG FROM TIME 'SYSDATE-3';
RMAN> BACKUP ARCHIVELOG COMPLETION TIME BETWEEN 'SYSDATE-28' AND 'SYSDATE-7';
RMAN> BACKUP ARCHIVELOG LIKE '/arch%' DELETE ALL INPUT;
RMAN> BACKUP ARCHIVELOG NOT BACKED UP 2 TIMES;
RMAN> BACKUP FORMAT='AL_%d/%t/%s/%p' ARCHIVELOG LIKE '%arc_dest%';

Tablespace backups:
RMAN> BACKUP TABLESPACE test;
RMAN> BACKUP TABLESPACE system, users, tools;
RMAN> BACKUP TABLESPACE 4;
RMAN> BACKUP TABLESPACE gld PLUS ARCHIVELOG;
RMAN> BACKUP TABLESPACE invd INCLUDE CURRENT CONTROLFILE;
RMAN> BACKUP TABLESPACE appsd INCLUDE CURRENT CONTROLFILE PLUS ARCHIVELOG;
RMAN> BACKUP TABLESPACE dwh SECTION SIZE 100M;
RMAN> BACKUP SECTION SIZE 250M TABLESPACE datamart;

Datafile backups:
RMAN> BACKUP DATAFILE 3;
RMAN> BACKUP DATAFILE 1, 2, 14;
RMAN> BACKUP DATAFILE '/u01/data/...';
RMAN> BACKUP DATAFILE 1 PLUS ARCHIVELOG;
RMAN> BACKUP SECTION SIZE 500M DATAFILE 6;
RMAN> BACKUP AS BACKUPSET DATAFILE '$ORACLE_HOME/oradata/users01.dbf', '$ORACLE_HOME/oradata/tools01.dbf';

Control file and spfile backups:
RMAN> BACKUP CURRENT CONTROLFILE;
OR
RMAN> SQL "ALTER DATABASE BACKUP CONTROLFILE TO ''/u01/.../bkctl.ctl''";
RMAN> BACKUP CURRENT CONTROLFILE TO '/backup/cntrlfile.copy';
RMAN> BACKUP CONTROLFILECOPY '/u10/backup/control.bak';
RMAN> BACKUP SPFILE;
RMAN> BACKUP DEVICE TYPE sbt SPFILE;
RMAN> BACKUP DEVICE TYPE sbt DATAFILECOPY ALL;
RMAN> BACKUP RECOVERY FILES;

Backup set backups:
RMAN> BACKUP BACKUPSET ALL;
RMAN> BACKUP BACKUPSET ALL FORMAT = '/u01/.../backup_%u.bkp';
RMAN> BACKUP BACKUPSET COMPLETED BEFORE 'SYSDATE-3' DELETE INPUT;
RMAN> BACKUP COPIES 2 DEVICE TYPE sbt BACKUPSET ALL;
RMAN> BACKUP AS COMPRESSED BACKUPSET;
RMAN> BACKUP AS COMPRESSED BACKUPSET DEVICE TYPE DISK COPIES 2 DATABASE FORMAT '/disk1/db_%U', '/disk2/db_%U';
RMAN> BACKUP AS COMPRESSED BACKUPSET INCREMENTAL FROM SCN 4111140000000 DATABASE TAG 'RMAN_RECOVERY';
RMAN> BACKUP AS BACKUPSET DATAFILECOPY ALL;
RMAN> BACKUP AS BACKUPSET DATAFILECOPY ALL NODUPLICATES;
RMAN> BACKUP KEEP FOREVER FORMAT '?/dbs/%U_longterm.cpy' TAG longterm_bck DATAFILE 1 DATAFILE 2;
Image copy backups:
RMAN> BACKUP AS COPY DATABASE;
RMAN> BACKUP AS COPY COPY OF DATABASE FROM TAG 'test' CHECK LOGICAL TAG 'duptest';
RMAN> BACKUP AS COPY TABLESPACE 8;
RMAN> BACKUP AS COPY TABLESPACE test;
RMAN> BACKUP AS COPY TABLESPACE system, tools, users, undotbs;
RMAN> BACKUP AS COPY DATAFILE 1;
RMAN> BACKUP AS COPY DATAFILE 2 FORMAT '/disk2/df2.cpy' TAG my_tag;
RMAN> BACKUP AS COPY DATAFILECOPY 'bar' FORMAT 'foobar';
RMAN> BACKUP AS COPY DATAFILECOPY '/disk2/df2.cpy' FORMAT '/disk1/df2.cpy';
RMAN> BACKUP AS COPY ARCHIVELOG ALL;
RMAN> BACKUP AS COPY CURRENT CONTROLFILE;
RMAN> BACKUP AS COPY CURRENT CONTROLFILE FORMAT '/...';
RMAN> BACKUP AS COPY KEEP FOREVER NOLOGS CURRENT CONTROLFILE FORMAT '?/oradata/cf_longterm.cpy',
      DATAFILE 1 FORMAT '?/oradata/df1_longterm.cpy', DATAFILE 2 FORMAT '?/oradata/df2_longterm.cpy';
RMAN> BACKUP AS COPY CURRENT CONTROLFILE FOR STANDBY AUXILIARY FORMAT '+DATA/crd/data1/control01.ctl';
RMAN> BACKUP AS COPY REUSE TARGETFILE '/u01/oracle/11.2.0.2/dbs/orapwcrd' AUXILIARY FORMAT '/u01/oracle/11.2.0.2/dbs/orapwcrd';

Incremental backups:
RMAN> BACKUP INCREMENTAL LEVEL=0 DATABASE;
RMAN> BACKUP INCREMENTAL LEVEL=0 DATABASE PLUS ARCHIVELOG;
RMAN> BACKUP INCREMENTAL LEVEL=1 DATABASE;
RMAN> BACKUP INCREMENTAL LEVEL=2 DATABASE;
RMAN> BACKUP INCREMENTAL LEVEL 2 DATABASE;
RMAN> BACKUP INCREMENTAL LEVEL 2 CUMULATIVE DATABASE;
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE SKIP INACCESSIBLE DATABASE;
RMAN> BACKUP INCREMENTAL LEVEL = n TABLESPACE/DATAFILE ...;
RMAN> BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr' DATABASE;
RMAN> BACKUP DEVICE TYPE DISK INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'oltp' DATABASE;
RMAN> BACKUP DEVICE TYPE DISK INCREMENTAL FROM SCN 351986 DATABASE FORMAT '/tmp/incr_standby_%U';
RMAN> BACKUP INCREMENTAL FROM SCN 629184 DATAFILE 5 FORMAT '/tmp/ForStandby_%U' TAG 'FORSTANDBY';
RMAN> BACKUP BLOCKS ALL CHECK LOGICAL VALIDATE DATAFILE 1398;
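The FOR RECOVER OF COPY examples above are the building blocks of the "incrementally updated backups" strategy mentioned in the 10g features: run nightly, the same two commands first roll yesterday's level 1 changes into the image copy and then take a new level 1. A hedged sketch of the nightly job (the tag name is arbitrary):

RMAN> RUN {
  RECOVER COPY OF DATABASE WITH TAG 'oltp';
  BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'oltp' DATABASE;
}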
LIST command
Produce a detailed listing of backup sets or copies.

LIST { INCARNATION [OF DATABASE [[']database_name[']]]
     | [EXPIRED] { listObjectSpec [maintQualifier]... | RECOVERABLE [untilClause] } }

listObjectSpec::= { BACKUP [OF listObjectList] [listBackupOption] | COPY [OF listObjectList] | archivelogRecordSpecifier }

listObjectList::= [ DATAFILE datafileSpec [, datafileSpec]...
                  | TABLESPACE [']tablespace_name['] [, [']tablespace_name[']]...
                  | ARCHIVELOG archivelogRecordSpecifier
                  | DATABASE [SKIP TABLESPACE [']tablespace_name['] [, [']tablespace_name[']]...]
                  | CONTROLFILE | SPFILE ]...

listBackupOption::= [ BY BACKUP [VERBOSE] | SUMMARY | BY {BACKUP SUMMARY | FILE} ]

RMAN> LIST INCARNATION;
RMAN> LIST INCARNATION OF DATABASE;
RMAN> LIST INCARNATION OF DATABASE vis;
RMAN> LIST DB_UNIQUE_NAME ALL;
RMAN> LIST DB_UNIQUE_NAME OF DATABASE;
RMAN> LIST BACKUP;
RMAN> LIST BACKUP SUMMARY;
RMAN> LIST BACKUP BY FILE;
RMAN> LIST BACKUP RECOVERABLE;
RMAN> LIST BACKUP OF DATABASE;
RMAN> LIST BACKUP OF DATABASE BY BACKUP;
RMAN> LIST BACKUP OF DATAFILE 11 SUMMARY;
RMAN> LIST BACKUP OF DATAFILE 65;
RMAN> LIST BACKUP OF TABLESPACE test SUMMARY;
RMAN> LIST BACKUP OF CONTROLFILE;
RMAN> LIST BACKUP OF ARCHIVELOG FROM SEQUENCE 2222;
RMAN> LIST BACKUP OF ARCHIVELOG FROM TIME 'sysdate-1';
RMAN> LIST BACKUP OF ARCHIVELOG ALL COMPLETED BEFORE 'sysdate-2';
RMAN> LIST EXPIRED BACKUP;
RMAN> LIST EXPIRED BACKUP OF ARCHIVELOG ALL SUMMARY;
RMAN> LIST BACKUPSET SUMMARY;
RMAN> LIST BACKUPSET 109;
RMAN> LIST BACKUPSET OF DATAFILE 1;
RMAN> LIST ARCHIVELOG;
RMAN> LIST ARCHIVELOG ALL;
RMAN> LIST ARCHIVELOG ALL LIKE '%5515%';
RMAN> LIST COPY;
RMAN> LIST COPY OF DATABASE ARCHIVELOG ALL;
RMAN> LIST COPY OF TABLESPACE appl_idx;
RMAN> LIST COPY OF DATAFILE 11, 60, 98;
RMAN> LIST EXPIRED COPY;
RMAN> LIST CONTROLFILECOPY "/tmp/cntrlfile.copy";
RMAN> LIST RESTORE POINT ALL;
RMAN> LIST SCRIPT NAMES;
RMAN> LIST ALL SCRIPT NAMES;
RMAN> LIST GLOBAL SCRIPT NAMES;
RMAN> LIST FAILURE;            -- 11g R1
RMAN> LIST FAILURE 420 DETAIL; -- 11g R1

REPORT command
Report backup status: database, files, backups. Perform detailed analyses of the content of the recovery catalog.

REPORT { {NEED BACKUP [{INCREMENTAL | DAYS} [=] integer | REDUNDANCY [=] integer | RECOVERY WINDOW OF integer DAYS] | UNRECOVERABLE} reportObject
       | SCHEMA [atClause]
       | OBSOLETE [obsOperandList] }
       [DEVICE TYPE deviceSpecifier [, deviceSpecifier]...]

reportObject::= [ DATAFILE datafileSpec [, datafileSpec]...
                | TABLESPACE [']tablespace_name['] [, [']tablespace_name[']]...
                | DATABASE [SKIP TABLESPACE [']tablespace_name['] [, [']tablespace_name[']]...] ]

atClause::= { AT TIME [=] 'date_string' | AT SCN [=] integer | AT SEQUENCE [=] integer THREAD [=] integer }

obsOperandList::= [ REDUNDANCY [=] integer | RECOVERY WINDOW OF integer DAYS | ORPHAN ]
RMAN> REPORT SCHEMA;
RMAN> REPORT SCHEMA AT TIME 'sysdate-20/1440';
RMAN> REPORT NEED BACKUP;
RMAN> REPORT NEED BACKUP DAYS=5;
RMAN> REPORT NEED BACKUP REDUNDANCY=3;
RMAN> REPORT NEED BACKUP RECOVERY WINDOW OF 7 DAYS;
RMAN> REPORT NEED BACKUP INCREMENTAL 1;
RMAN> REPORT NEED BACKUP DATABASE;
RMAN> REPORT OBSOLETE;
RMAN> REPORT UNRECOVERABLE;

CHANGE command
Update the status of a backup in the RMAN repository. Mark a backup piece, image copy, or archived redo log as having the status UNAVAILABLE or AVAILABLE; remove the repository record for a backup or copy; override the retention policy for a backup or copy; update the recovery catalog with the DB_UNIQUE_NAME for the target database.

CHANGE {BACKUP | COPY} [OF listObjList] [maintQualifier]...
       {AVAILABLE | UNAVAILABLE | UNCATALOG | keepOption}
       [DEVICE TYPE deviceSpecifier [, deviceSpecifier]...];
CHANGE archivelogRecordSpecifier {AVAILABLE | UNAVAILABLE | UNCATALOG | keepOption}
       [DEVICE TYPE deviceSpecifier [, deviceSpecifier]...];
CHANGE recordSpec [DEVICE TYPE deviceSpecifier [, deviceSpecifier]...]
       {AVAILABLE | UNAVAILABLE | UNCATALOG | keepOption};

listObjList::= [ DATAFILE datafileSpec [, datafileSpec]...
               | TABLESPACE [']tablespace_name['] [, [']tablespace_name[']]...
               | ARCHIVELOG archivelogRecordSpecifier
               | DATABASE [SKIP TABLESPACE [']tablespace_name['] [, [']tablespace_name[']]...]
               | CONTROLFILE | SPFILE ]...

recordSpec::= { {BACKUPPIECE | PROXY} {'media_handle' [, 'media_handle']... | primary_key [, primary_key]... | TAG [=] ['] tag_name [']}
              | BACKUPSET primary_key [, primary_key]...
              | {CONTROLFILECOPY | DATAFILECOPY} {primary_key [, primary_key]... | 'filename' [, 'filename']... | TAG [=] ['] tag_name ['] [, ['] tag_name [']]...}
              | ARCHIVELOG {primary_key [, primary_key]... | 'filename' [, 'filename']...} }

RMAN> CHANGE BACKUPSET 666 KEEP FOREVER NOLOGS;
RMAN> CHANGE BACKUPSET 431 KEEP FOREVER;
RMAN> CHANGE BACKUPSET 100 NOKEEP;
RMAN> CHANGE BACKUPSET 123 UNAVAILABLE;
RMAN> CHANGE BACKUPSET 121, 122, 127, 203, 300 UNAVAILABLE;
RMAN> CHANGE BACKUP OF DATABASE TAG='abc' UNAVAILABLE;
RMAN> CHANGE BACKUP OF DATABASE DEVICE TYPE DISK UNAVAILABLE;
RMAN> CHANGE BACKUP OF SPFILE COMPLETED BEFORE 'SYSDATE-3' UNAVAILABLE;
RMAN> CHANGE BACKUP TAG 'consistent_db_bkup' KEEP FOREVER;
RMAN> CHANGE BACKUP TAG 'consistent_db_bkup' KEEP FOREVER NOLOGS;
RMAN> CHANGE BACKUP TAG 'consistent_db_bkup' DATABASE KEEP FOREVER;
RMAN> CHANGE BACKUP TAG 'consistent_db_bkup' NOKEEP;
RMAN> CHANGE COPY OF DATABASE CONTROLFILE NOKEEP;
RMAN> CHANGE ARCHIVELOG ALL UNCATALOG;
RMAN> CHANGE CONTROLFILECOPY '/tmp/cf.cpy' UNCATALOG;
RMAN> CHANGE FAILURE 5 PRIORITY LOW;
RMAN> CHANGE DB_UNIQUE_NAME FROM rdbms4 TO rdbms_dev;
RMAN> CHANGE BACKUP FOR DB_UNIQUE_NAME standby1 RESET DB_UNIQUE_NAME;
RMAN> CHANGE BACKUP FOR DB_UNIQUE_NAME standby3 RESET DB_UNIQUE_NAME TO standby2;

CROSSCHECK command
Check whether files managed by RMAN, such as archived logs, datafile copies, and backup pieces, still exist on disk or tape.

CROSSCHECK { {BACKUP [OF listObjList] | COPY [OF listObjList] | archivelogRecordSpecifier} [maintQualifier [maintQualifier]...]
           | recordSpec [DEVICE TYPE deviceSpecifier [, deviceSpecifier]...] };

RMAN> CROSSCHECK BACKUP;
RMAN> CROSSCHECK BACKUP OF DATABASE;
RMAN> CROSSCHECK BACKUP TAG='full db';
RMAN> CROSSCHECK BACKUP COMPLETED BETWEEN 'SYSDATE-7' AND 'SYSDATE-1';
RMAN> CROSSCHECK BACKUP COMPLETED BETWEEN '01-JAN-10' AND '14-FEB-10';
RMAN> CROSSCHECK BACKUP DEVICE TYPE sbt COMPLETED BETWEEN '01-AUG-09' AND '31-DEC-09';
RMAN> CROSSCHECK BACKUP DEVICE TYPE DISK COMPLETED BETWEEN '01-JAN-10' AND '23-MAR-10';
RMAN> CROSSCHECK BACKUP OF DATAFILE 4 COMPLETED AFTER 'SYSDATE-14';
RMAN> CROSSCHECK BACKUP OF DATAFILE "?/oradata/dwh/system01.dbf" COMPLETED AFTER 'SYSDATE-14';
RMAN> CROSSCHECK BACKUP OF DATAFILE 9;
RMAN> CROSSCHECK BACKUP OF TABLESPACE warehouse;
RMAN> CROSSCHECK BACKUP OF TABLESPACE userd COMPLETED BEFORE 'SYSDATE-14';
RMAN> CROSSCHECK BACKUP OF TABLESPACES gld, invd;
RMAN> CROSSCHECK BACKUP OF CONTROLFILE;
RMAN> CROSSCHECK BACKUP OF SPFILE;
RMAN> CROSSCHECK BACKUP OF ARCHIVELOG ALL;
RMAN> CROSSCHECK BACKUP OF ARCHIVELOG ALL SUMMARY;
RMAN> CROSSCHECK COPY;
RMAN> CROSSCHECK COPY OF DATABASE;
RMAN> CROSSCHECK DATAFILECOPY 113, 114, 115;
RMAN> CROSSCHECK CONTROLFILECOPY '/tmp/control01.ctl';
RMAN> CROSSCHECK ARCHIVELOG ALL;
RMAN> CROSSCHECK BACKUPSET 1338, 1339, 1340;
RMAN> CROSSCHECK BACKUPPIECE TAG = 'nightly_backup';
RMAN> CROSSCHECK PROXY 789;

SQL command
Execute a SQL statement from within Recovery Manager.
SQL [CHANNEL 'channel_id'] 'command';
RMAN> SQL 'ALTER TABLESPACE users ONLINE';
RMAN> SQL "ALTER SYSTEM ARCHIVE LOG CURRENT";
RMAN> SQL "ALTER SYSTEM SWITCH LOGFILE";
RMAN> SQL "ALTER DATABASE BACKUP CONTROLFILE TO TRACE";
RMAN> SQL "ALTER TABLESPACE users ADD DATAFILE ''/disk1/ora/users02.dbf'' SIZE 100K AUTOEXTEND ON NEXT 10K MAXSIZE 100K";
RMAN> SQL 'ALTER DATABASE DATAFILE 64 OFFLINE';

RESTORE command
Restore files from backup sets or from disk copies to the default or a new location.

RESTORE [(] restoreObject [restoreSpecOperand [restoreSpecOperand]...] [)]...;

restoreObject::=
{CONTROLFILE [TO 'filename']
| DATABASE [SKIP [FOREVER] TABLESPACE [']tablespace_name['] [, [']tablespace_name[']]...]
| DATAFILE datafileSpec [, datafileSpec]...
| TABLESPACE [']tablespace_name['] [, [']tablespace_name[']]...
| archivelogRecordSpecifier
| SPFILE [TO [PFILE] 'filename']}

restoreSpecOperand::=
{CHANNEL ['] channel_id ['] | PARMS [=] 'channel_parms' | FROM {BACKUPSET | DATAFILECOPY} | untilClause | FROM TAG [=] ['] tag_name ['] | VALIDATE | DEVICE TYPE deviceSpecifier [, deviceSpecifier]... | FROM AUTOBACKUP [{MAXSEQ | MAXDAYS} [=] integer]}

RMAN> RESTORE DATABASE;
RMAN> RESTORE DATABASE PREVIEW;
RMAN> RESTORE DATABASE PREVIEW SUMMARY;
RMAN> RESTORE DATABASE SKIP TABLESPACE temp;
RMAN> RESTORE DATABASE UNTIL SCN 154876;
RMAN> RESTORE DATABASE VALIDATE;

RMAN> RESTORE TABLESPACE dwh1, dwh2;
RMAN> RESTORE TABLESPACE tbs1 PREVIEW;
RMAN> RESTORE TABLESPACE users VALIDATE;
RMAN> RESTORE DATAFILE 23;
RMAN> RESTORE DATAFILE 12 PREVIEW;
RMAN> RESTORE DATAFILE 45 VALIDATE;
RMAN> RESTORE CONTROLFILE;
RMAN> RESTORE CONTROLFILE VALIDATE;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
RMAN> RESTORE CONTROLFILE TO '/tmp/autobkp.dbf' FROM AUTOBACKUP MAXSEQ 20 MAXDAYS 150;
RMAN> RESTORE CONTROLFILE FROM '/u01/control01.ctl';
RMAN> RESTORE CONTROLFILE FROM TAG 'monday_cf_backup';
RMAN> RESTORE STANDBY CONTROLFILE FROM TAG 'forstandby';
RMAN> RESTORE CLONE CONTROLFILE TO '+DATA/pcrd/data2/control02.ctl' FROM '+DATA/pcrd/data1/control01.ctl';
RMAN> RESTORE SPFILE;
RMAN> RESTORE SPFILE FROM AUTOBACKUP;
RMAN> RESTORE ARCHIVELOG ALL VALIDATE;
RMAN> RESTORE ARCHIVELOG ALL PREVIEW;
RMAN> RESTORE ARCHIVELOG ALL PREVIEW RECALL;
RMAN> RESTORE ARCHIVELOG ALL DEVICE TYPE sbt PREVIEW;
RMAN> RESTORE ARCHIVELOG LOW LOGSEQ 78311 HIGH LOGSEQ 78340 THREAD 1 ALL;
RMAN> RESTORE ARCHIVELOG FROM LOGSEQ=21531 UNTIL LOGSEQ=21590;

Restore the control file (to all locations specified in the parameter file), then restore the database, using that control file:
STARTUP NOMOUNT;
RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE sbt;
  RESTORE CONTROLFILE;
  ALTER DATABASE MOUNT;
  RESTORE DATABASE;
}

RECOVER command
Perform media recovery from RMAN backups and copies. Apply redo log files and incremental backups to datafiles or data blocks restored from backup or datafile copies, to update them to a specified time.

RECOVER recoverObject [DEVICE TYPE deviceSpecifier [, deviceSpecifier]...] [recoverOptionList];

recoverObject::=
{DATABASE [untilClause] [SKIP [FOREVER] TABLESPACE [']tablespace_name['] [, [']tablespace_name[']]...]
| TABLESPACE [']tablespace_name['] [, [']tablespace_name[']]...
| DATAFILE datafileSpec [, datafileSpec]...}

recoverOptionList::=
{DELETE ARCHIVELOG [MAXSIZE {integer [K | M | G]}] | CHECK READONLY | NOREDO | FROM TAG [=] ['] tag_name [']}...

RMAN> RECOVER DATABASE;
RMAN> RECOVER DATABASE NOREDO;

RMAN> RECOVER DATABASE SKIP TABLESPACE temp;
RMAN> RECOVER DATABASE SKIP FOREVER TABLESPACE exam;
RMAN> RECOVER DATABASE UNTIL SCN 154876;
RMAN> RECOVER TABLESPACE users;
RMAN> RECOVER TABLESPACE dwh DELETE ARCHIVELOG MAXSIZE 2M;
RMAN> RECOVER DATAFILE 33;
RMAN> RECOVER DATAFILE 3 BLOCK 116 DATAFILE 4 BLOCK 10;
RMAN> RECOVER DATAFILE 2 BLOCK 204 DATAFILE 9 BLOCK 109 FROM TAG=sundaynight;
RMAN> RECOVER DATAFILECOPY '/disk1/img.df' UNTIL TIME 'SYSDATE-7';
RMAN> RECOVER COPY OF DATABASE WITH TAG 'incr';
RMAN> RECOVER COPY OF DATABASE WITH TAG 'upd' UNTIL TIME 'SYSDATE - 7';
RMAN> RECOVER CORRUPTION LIST;
Restore and recover the whole database:
RMAN> STARTUP FORCE MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;

Restore and recover a tablespace:
RMAN> SQL 'ALTER TABLESPACE users OFFLINE';
RMAN> RESTORE TABLESPACE users;
RMAN> RECOVER TABLESPACE users;
RMAN> SQL 'ALTER TABLESPACE users ONLINE';

Restore and recover a datafile:
RMAN> SQL 'ALTER DATABASE DATAFILE 64 OFFLINE';
RMAN> RESTORE DATAFILE 64;
RMAN> RECOVER DATAFILE 64;
RMAN> SQL 'ALTER DATABASE DATAFILE 64 ONLINE';

Steps for media recovery:
1. Mount or open the Oracle database. Mount the database when performing whole database recovery, or open the database when performing online tablespace/datafile recovery.
2. To perform incomplete recovery, use the SET UNTIL command to specify the time, SCN, or log sequence number at which recovery terminates. Alternatively, specify the UNTIL clause on the RESTORE and RECOVER commands.
3. Restore the necessary files with the RESTORE command.
4. Recover the datafiles with the RECOVER command.
5. Place the database in its normal state. For example, open it or bring recovered tablespaces/datafiles online.
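As a minimal sketch of these steps (the log sequence number and the use of a disk channel are illustrative, not taken from a real system), an incomplete recovery session might look like this:

RMAN> STARTUP FORCE MOUNT;
RMAN> RUN {
        SET UNTIL SEQUENCE 1234 THREAD 1;
        RESTORE DATABASE;
        RECOVER DATABASE;
      }
RMAN> ALTER DATABASE OPEN RESETLOGS;

Because the recovery is incomplete, the database must be opened with RESETLOGS.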

DELETE command
Delete backups and copies, remove references to them from the recovery catalog, and update their control file records to status DELETED.

DELETE [FORCE] [NOPROMPT]
{[EXPIRED]
  {{BACKUP [OF listObjectList] | COPY [OF listObjectList] | archivelogRecordSpecifier} [maintQualifier [maintQualifier]...]
  | recordSpec [DEVICE TYPE deviceSpecifier [, deviceSpecifier]...]}
| OBSOLETE [REDUNDANCY [=] integer | RECOVERY WINDOW OF integer DAYS | ORPHAN] [DEVICE TYPE deviceSpecifier [, deviceSpecifier]...]};

recordSpec::=
{{BACKUPPIECE | PROXY} {'media_handle' [, 'media_handle']... | primary_key [, primary_key]... | TAG [=] ['] tag_name [']}
| BACKUPSET primary_key [, primary_key]...
| {CONTROLFILECOPY | DATAFILECOPY} {{primary_key [, primary_key]... | 'filename' [, 'filename']...} | TAG [=] ['] tag_name ['] [, ['] tag_name [']]...}
| ARCHIVELOG {primary_key [, primary_key]... | 'filename' [, 'filename']...}}

listObjectList::=
[DATAFILE datafileSpec [, datafileSpec]...
| TABLESPACE [']tablespace_name['] [, [']tablespace_name[']]...
| DATABASE [SKIP TABLESPACE [']tablespace_name['] [, [']tablespace_name[']]...]
| archivelogRecordSpecifier
| CONTROLFILE
| SPFILE]...

RMAN> DELETE OBSOLETE;
RMAN> DELETE NOPROMPT OBSOLETE;
RMAN> DELETE NOPROMPT OBSOLETE RECOVERY WINDOW OF 7 DAYS;
RMAN> DELETE EXPIRED BACKUP;
RMAN> DELETE EXPIRED BACKUP DEVICE TYPE sbt;
RMAN> DELETE BACKUP OF DATABASE LIKE '/tmp%';
RMAN> DELETE NOPROMPT EXPIRED BACKUP OF TABLESPACE userd COMPLETED BEFORE 'SYSDATE-14';
RMAN> DELETE BACKUP OF SPFILE TABLESPACE users DEVICE TYPE SBT;
RMAN> DELETE ARCHIVELOG ALL;
RMAN> DELETE ARCHIVELOG ALL COMPLETED BEFORE 'sysdate-2';
RMAN> DELETE ARCHIVELOG ALL BACKED UP 2 TIMES TO DEVICE TYPE SBT;
RMAN> DELETE ARCHIVELOG ALL LIKE '%755153075%';
RMAN> DELETE ARCHIVELOG UNTIL SEQUENCE=79228;
RMAN> DELETE FORCE ARCHIVELOG ALL COMPLETED BEFORE 'sysdate-1.5';
RMAN> DELETE FORCE ARCHIVELOG UNTIL SEQUENCE=16364;
RMAN> DELETE NOPROMPT ARCHIVELOG UNTIL SEQUENCE = 7300;
RMAN> DELETE EXPIRED ARCHIVELOG ALL;
RMAN> DELETE NOPROMPT EXPIRED ARCHIVELOG ALL;
RMAN> DELETE BACKUPSET 101, 102, 103;
RMAN> DELETE NOPROMPT BACKUPSET TAG weekly_bkup;
RMAN> DELETE FORCE NOPROMPT BACKUPSET TAG weekly_bkup;
RMAN> DELETE DATAFILECOPY "+DG_DATA/db/datafile/system.259.699468079";
RMAN> DELETE CONTROLFILECOPY '/tmp/cntrlfile.copy';
RMAN> DELETE BACKUP DEVICE TYPE SBT;
RMAN> DELETE BACKUP DEVICE TYPE DISK;
RMAN> DELETE COPY;
RMAN> DELETE EXPIRED COPY;
RMAN> DELETE COPY TAG 'lastest';

DROP DATABASE command
Delete the target database from disk and unregister it.
RMAN> DROP DATABASE;
RMAN> DROP DATABASE NOPROMPT;
RMAN> DROP DATABASE INCLUDING BACKUPS;
RMAN> DROP DATABASE INCLUDING BACKUPS NOPROMPT;
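DROP DATABASE only works when the database is mounted in restricted mode; a typical (illustrative) sequence is:

RMAN> STARTUP FORCE MOUNT;
RMAN> SQL 'ALTER SYSTEM ENABLE RESTRICTED SESSION';
RMAN> DROP DATABASE INCLUDING BACKUPS NOPROMPT;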

DUPLICATE command
Use backups of the target database to create a duplicate database that we can use for testing purposes or to create a standby database.
RMAN> DUPLICATE TARGET DATABASE;
RMAN> DUPLICATE TARGET DATABASE TO dwhdb;
RMAN> DUPLICATE TARGET DATABASE TO test PFILE=/u01/apps/db/inittest.ora;
RMAN> DUPLICATE TARGET DATABASE TO devdb NOFILENAMECHECK;
RMAN> DUPLICATE DATABASE 'prod' DBID 139525561 TO 'dupdb' NOFILENAMECHECK;
RMAN> DUPLICATE DATABASE TO "cscp" NOFILENAMECHECK BACKUP LOCATION '/apps/oracle/backup';
RMAN> DUPLICATE TARGET DATABASE TO dup FROM ACTIVE DATABASE NOFILENAMECHECK PASSWORD FILE SPFILE;
RMAN> DUPLICATE TARGET DATABASE TO dupdb
LOGFILE GROUP 1 ('?/dbs/dupdb_log_1_1.f','?/dbs/dupdb_log_1_2.f') SIZE 200K,
GROUP 2 ('?/dbs/dupdb_log_2_1.f','?/dbs/dupdb_log_2_2.f') SIZE 200K REUSE;
RMAN> DUPLICATE TARGET DATABASE TO dup FOR STANDBY FROM ACTIVE DATABASE PASSWORD FILE
SPFILE PARAMETER_VALUE_CONVERT '/disk1', '/disk2'
SET DB_FILE_NAME_CONVERT '/disk1','/disk2'
SET LOG_FILE_NAME_CONVERT '/disk1','/disk2'
SET SGA_MAX_SIZE 200M
SET SGA_TARGET 125M;
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY NOFILENAMECHECK;
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE;
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE NOFILENAMECHECK;
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE
SPFILE PARAMETER_VALUE_CONVERT '/stg/','/muc/'
SET "DB_UNIQUE_NAME"="muc"
SET LOG_FILE_NAME_CONVERT '/stg/','/muc/'
SET DB_FILE_NAME_CONVERT '/stg/','/muc/'
DORECOVER;
RMAN> DUPLICATE DATABASE TO newdb
UNTIL RESTORE POINT firstquart12
DB_FILE_NAME_CONVERT='/u01/prod1/dbfiles/','/u01/newdb/dbfiles'
PFILE = '/u01/newdb/admin/init.ora';
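DUPLICATE requires RMAN to be connected to both the target database and an auxiliary instance, which must already be started NOMOUNT with a minimal parameter file. A minimal sketch, with illustrative connect strings:

$ rman TARGET sys/pwd@prod AUXILIARY sys/pwd@dupdb
RMAN> DUPLICATE TARGET DATABASE TO dupdb FROM ACTIVE DATABASE NOFILENAMECHECK;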

SWITCH command
Specify that a datafile copy is now the current datafile, i.e. the datafile pointed to by the control file. This command is equivalent to the SQL statement ALTER DATABASE RENAME FILE as it applies to datafiles.
RMAN> SWITCH DATABASE TO COPY;
RMAN> SWITCH TABLESPACE users TO COPY;
RMAN> SWITCH DATAFILE ALL;
RMAN> SWITCH DATAFILE '/disk1/tols.dbf' TO DATAFILECOPY '/disk2/tols.copy';
RMAN> SWITCH DATAFILE "+DG_OLD/db/datafile/sysaux.260.699468081" TO COPY;
RMAN> SWITCH TEMPFILE 1;
RMAN> SWITCH TEMPFILE 1 TO '/newdisk/dbs/temp1.f';
RMAN> SWITCH TEMPFILE ALL;
RMAN> SWITCH CLONE DATAFILE ALL;
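A common use of SWITCH is to make the database use an existing image copy immediately instead of waiting for a restore. A minimal sketch (the file number is illustrative):

RMAN> SQL 'ALTER DATABASE DATAFILE 4 OFFLINE';
RMAN> SWITCH DATAFILE 4 TO COPY;
RMAN> RECOVER DATAFILE 4;
RMAN> SQL 'ALTER DATABASE DATAFILE 4 ONLINE';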

CATALOG command
Add information about file copies and user-managed backups to the catalog repository.
RMAN> CATALOG DATAFILECOPY '/disk1/old_datafiles/01_10_2009/users01.dbf';
RMAN> CATALOG DATAFILECOPY '/disk2/backup/users01.bkp' LEVEL 0;
RMAN> CATALOG CONTROLFILECOPY '/disk3/backup/cf_copy.bkp';
RMAN> CATALOG ARCHIVELOG '/disk1/arch1_731.dbf', '/disk1/arch1_732.dbf';
RMAN> CATALOG BACKUPPIECE '/disk1/c-874220581-20090428-01';
RMAN> CATALOG LIKE '/backup';
RMAN> CATALOG START WITH '/fs2/arch';
RMAN> CATALOG START WITH '/disk2/archlog' NOPROMPT;
RMAN> CATALOG START WITH '+dg2';
RMAN> CATALOG RECOVERY AREA;
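For example, a user-managed copy made at the OS level can be brought under RMAN control and then verified (paths illustrative):

$ cp /u01/oradata/prod/users01.dbf /backup/users01.dbf
RMAN> CATALOG DATAFILECOPY '/backup/users01.dbf';
RMAN> LIST DATAFILECOPY '/backup/users01.dbf';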

ALLOCATE CHANNEL command
Establish a channel, which is a connection between RMAN and a database instance.
RMAN> ALLOCATE CHANNEL c1 DEVICE TYPE sbt;
RMAN> ALLOCATE CHANNEL ch DEVICE TYPE DISK FORMAT 'C:\ORACLEBKP\DB_U%';
RMAN> ALLOCATE CHANNEL t1 DEVICE TYPE DISK CONNECT 'sys/pwd@bkp1';
RMAN> ALLOCATE CHANNEL c1 DEVICE TYPE sbt PARMS 'ENV=(OB_MEDIA_FAMILY=wholedb_mf)';
RMAN> ALLOCATE CHANNEL t1 DEVICE TYPE sbt PARMS 'ENV=(OB_DEVICE_1=tape1, OB_DEVICE_2=tape3)';
RMAN> ALLOCATE CHANNEL t1 TYPE 'sbt_tape' PARMS='SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so.1';
RMAN> ALLOCATE CHANNEL t1 TYPE 'sbt_tape' SEND "NB_ORA_CLIENT=CLIENT_MACHINE_NAME";
RMAN> ALLOCATE CHANNEL 'dev1' TYPE 'sbt_tape' PARMS 'ENV=(OB2BARTYPE=ORACLE8, OB2APPNAME=ORCL, OB2BARLIST=MACHINENAME_ORCL_ARCHLOGS)';
RMAN> ALLOCATE CHANNEL y1 TYPE DISK RATE 70M;
RMAN> ALLOCATE AUXILIARY CHANNEL ac1 TYPE DISK;
RMAN> ALLOCATE AUXILIARY CHANNEL ac2 DEVICE TYPE sbt;

ALLOCATE CHANNEL FOR MAINTENANCE - allocate a channel in preparation for issuing maintenance commands such as DELETE.
RMAN> ALLOCATE CHANNEL FOR MAINTENANCE DEVICE TYPE DISK;
RMAN> ALLOCATE CHANNEL FOR MAINTENANCE DEVICE TYPE DISK FORMAT "/disk2/%U";
RMAN> ALLOCATE CHANNEL FOR MAINTENANCE DEVICE TYPE DISK CONNECT '@test1';
RMAN> ALLOCATE CHANNEL FOR MAINTENANCE DEVICE TYPE sbt;
RMAN> ALLOCATE CHANNEL FOR MAINTENANCE DEVICE TYPE sbt PARMS 'SBT_LIBRARY=/usr/local/oracle/backup/lib/libobk.so, ENV=(OB_DEVICE_1=tape2)';
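Manually allocated channels live only for the duration of the RUN block (or until released). A minimal sketch pairing ALLOCATE with RELEASE:

RMAN> RUN {
        ALLOCATE CHANNEL d1 DEVICE TYPE DISK FORMAT '/backup/%U';
        BACKUP TABLESPACE users;
        RELEASE CHANNEL d1;
      }

The explicit RELEASE is optional here, since channels are released automatically when the RUN block ends.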

RELEASE CHANNEL command
Release a channel that was allocated with an ALLOCATE CHANNEL or ALLOCATE CHANNEL FOR MAINTENANCE command.
RMAN> RELEASE CHANNEL;
RMAN> RELEASE CHANNEL c1;

BLOCKRECOVER command
Recover corrupted data blocks.
RMAN> BLOCKRECOVER CORRUPTION LIST;
RMAN> BLOCKRECOVER DATAFILE 8 BLOCK 22;
RMAN> BLOCKRECOVER DATAFILE 7 BLOCK 233,235 DATAFILE 4 BLOCK 101;
RMAN> BLOCKRECOVER DATAFILE 2 BLOCK 12,13 DATAFILE 3 BLOCK 5,98,99 DATAFILE 4 BLOCK 19;
RMAN> BLOCKRECOVER DATAFILE 3 BLOCK 2,4,5 TABLESPACE sales DBA 4194405,4194412 FROM
DATAFILECOPY;
RMAN> BLOCKRECOVER TABLESPACE dwh DBA 4194404,4194405 FROM TAG "weekly";
RMAN> BLOCKRECOVER TABLESPACE dwh DBA 94404 RESTORE UNTIL TIME 'SYSDATE-2';
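BLOCKRECOVER CORRUPTION LIST works from the corrupt blocks recorded in V$DATABASE_BLOCK_CORRUPTION (populated by commands such as BACKUP VALIDATE or VALIDATE). A typical sketch:

SQL> SELECT file#, block#, blocks FROM v$database_block_corruption;
RMAN> BLOCKRECOVER CORRUPTION LIST;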

ADVISE FAILURE command (From Oracle 11g R1)
Display repair options for one or more failures recorded in the automated diagnostic repository.
RMAN> ADVISE FAILURE;
RMAN> ADVISE FAILURE 555, 242;
RMAN> ADVISE FAILURE ALL;
RMAN> ADVISE FAILURE CRITICAL;
RMAN> ADVISE FAILURE HIGH;
RMAN> ADVISE FAILURE LOW;
RMAN> ADVISE FAILURE HIGH EXCLUDE FAILURE 625;

REPAIR FAILURE command (From Oracle 11g R1)
Repair one or more failures recorded in the automated diagnostic repository.
RMAN> REPAIR FAILURE;
RMAN> REPAIR FAILURE PREVIEW;
RMAN> REPAIR FAILURE NOPROMPT;

RMAN> REPAIR FAILURE USING ADVISE OPTION integer;

VALIDATE command
Examine a backup set and report whether its data is intact. RMAN scans all of the backup pieces in the specified backup sets and looks at the checksums to verify that the contents can be successfully restored.
RMAN> VALIDATE BACKUPSET 218;
RMAN> VALIDATE BACKUPSET 3871, 3890;
RMAN> VALIDATE DATAFILE 2;
RMAN> VALIDATE DATAFILE 4;
RMAN> VALIDATE DATAFILE 4 BLOCK 56;
RMAN> VALIDATE DATAFILE 8 SECTION SIZE = 200M;
RMAN> VALIDATE CURRENT CONTROLFILE;
RMAN> VALIDATE SPFILE;
RMAN> VALIDATE TABLESPACE dwh;
RMAN> VALIDATE COPY OF TABLESPACE dwh;
RMAN> VALIDATE COPY OF DATABASE;
-- 11g R1
RMAN> VALIDATE DATABASE;
RMAN> VALIDATE CHECK LOGICAL DATABASE;
RMAN> VALIDATE SKIP INACCESSIBLE DATABASE;
RMAN> VALIDATE RECOVERY FILES;
RMAN> VALIDATE RECOVERY AREA;
RMAN> VALIDATE DB_RECOVERY_FILE_DEST;

SPOOL command
Write RMAN output to a log file.
RMAN> SPOOL LOG TO '/tmp/spool.log';
RMAN> SPOOL LOG TO '/tmp/backup.log' APPEND;
RMAN> SPOOL LOG OFF;

RUN command
Execute a sequence of one or more RMAN commands, which are one or more statements executed within the braces of RUN.
RMAN> run { ALLOCATE CHANNEL c1 TYPE DISK FORMAT '/orabak/%U';
            BACKUP TABLESPACE users; }
RMAN> run { ALLOCATE CHANNEL dev1 DEVICE TYPE DISK FORMAT '/fs1/%U';
            ALLOCATE CHANNEL dev2 DEVICE TYPE DISK FORMAT '/fs2/%U';
            BACKUP (TABLESPACE system,fin FILESPERSET 20) (DATAFILE 2,3,8,6); }
RMAN> run { ALLOCATE CHANNEL c1 TYPE DISK FORMAT '&1/%U';
            BACKUP TABLESPACE &2; }

CREATE SCRIPT command
Create a stored script and store it in the recovery catalog.
RMAN> CREATE SCRIPT backup_whole
COMMENT "backup whole database and archived redo log files"
{BACKUP INCREMENTAL LEVEL 0 TAG backup_whole FORMAT "/disk2/backup/%U" DATABASE PLUS ARCHIVELOG;}

RMAN> CREATE SCRIPT backup_ts_users
COMMENT 'tablespace users backup'
{ALLOCATE CHANNEL c1 TYPE DISK FORMAT 'c:\temp\%U';
BACKUP TABLESPACE users;}
RMAN> CREATE SCRIPT df {BACKUP DATAFILE &1 TAG &2.1 FORMAT '/disk1/&3_%U';}
RMAN> CREATE SCRIPT backup_ts_users FROM FILE 'backup_ts_users.rman';
RMAN> CREATE GLOBAL SCRIPT backup_db COMMENT "back up any database from the recovery catalog, with logs" {BACKUP DATABASE;}
RMAN> CREATE GLOBAL SCRIPT gl_backup_db {BACKUP DATABASE PLUS ARCHIVELOG;}

REPLACE SCRIPT command
Replace an existing script stored in the recovery catalog. If the script does not exist, then REPLACE SCRIPT creates it.
RMAN> REPLACE SCRIPT backup_db {BACKUP DATABASE PLUS ARCHIVELOG;}
RMAN> REPLACE SCRIPT df {BACKUP DATAFILE &1 TAG &2.1 FORMAT '&3_%U';}
RMAN> REPLACE GLOBAL SCRIPT backup_db {BACKUP DATABASE PLUS ARCHIVELOG;}
RMAN> REPLACE GLOBAL SCRIPT gl_full_bkp FROM FILE '/tmp/script_file.txt';

PRINT SCRIPT command
Display a stored script.
RMAN> PRINT SCRIPT backup_db;
RMAN> PRINT GLOBAL SCRIPT global_backup_db;
RMAN> PRINT GLOBAL SCRIPT gl_backup_db TO FILE "/tmp/gl_backupdb.rman";

EXECUTE SCRIPT command
Run an RMAN stored script.
RMAN> RUN {EXECUTE SCRIPT backup_whole;}
RMAN> RUN {EXECUTE SCRIPT backup_ts_any USING 'example';}
RMAN> RUN {EXECUTE SCRIPT backup_df USING 3 test_backup df3;}
RMAN> RUN {EXECUTE GLOBAL SCRIPT global_backup_db;}

DELETE SCRIPT command
Delete a stored script from the recovery catalog.
RMAN> DELETE SCRIPT backup_db;
RMAN> DELETE GLOBAL SCRIPT global_backup_db;

FLASHBACK DATABASE command
Return the database to its state at a previous time or SCN.
RMAN> FLASHBACK DATABASE TO SCN 411010;
RMAN> FLASHBACK DATABASE TO RESTORE POINT 'before_update';

TRANSPORT TABLESPACE command
Create transportable tablespace sets from backup for one or more tablespaces.
RMAN> TRANSPORT TABLESPACE example, tools TABLESPACE DESTINATION '/disk1/trans' AUXILIARY DESTINATION '/disk1/aux' UNTIL TIME 'SYSDATE-15/1440';
RMAN> TRANSPORT TABLESPACE exam TABLESPACE DESTINATION '/disk1/trans' AUXILIARY DESTINATION '/disk1/aux' DATAPUMP DIRECTORY dpdir DUMP FILE 'dmpfile.dmp' IMPORT SCRIPT 'impscript.sql' EXPORT LOG 'explog.log';

CONVERT command
Convert datafile formats for transporting tablespaces and databases across platforms.
RMAN> CONVERT TABLESPACE tbs_2 FORMAT '/tmp/tbs_2_%U.df';
RMAN> CONVERT TABLESPACE fin, hr TO PLATFORM 'Solaris[tm] OE (32-bit)' FORMAT '/tmp/transport_to_solaris/%U';
RMAN> CONVERT DATAFILE '/disk1/oracle/dbs/tbs_f1.df', '/disk1/oracle/dbs/ax1.f' TO PLATFORM 'Solaris[tm] OE (32-bit)';
RMAN> CONVERT DATAFILE '/disk1/oracle/dbs/tbs_f1.f' FORMAT '+DATAFILE';
RMAN> CONVERT DATAFILE '/tmp/from_solaris/fin/fin01.dbf', '/tmp/from_solaris/fin/fin02.dbf', '/tmp/from_solaris/hr/hr01.dbf', '/tmp/from_solaris/hr/hr02.dbf'
FROM PLATFORM 'Solaris[tm] OE (64-bit)'
DB_FILE_NAME_CONVERT '/tmp/from_solaris/fin','/disk2/orahome/dbs/fin','/tmp/from_solaris/hr','/disk2/orahome/dbs/hr';
RMAN> CONVERT DATAFILE '/u01/oradata/datafile/system.dbf' FROM PLATFORM 'Linux x86 64-bit' FORMAT '+DATA/system.dbf';
RMAN> CONVERT DATAFILE '/tmp/PSMN.dbf' TO PLATFORM='Solaris Operating System (x86-64)' FROM PLATFORM='Solaris[tm] OE (64-bit)' FORMAT '/tmp/test/%N.dbf' DB_FILE_NAME_CONVERT='/ui/prod/oracle/oradata/SEARCHP/data/','/tmp/test';
RMAN> CONVERT DATABASE NEW DATABASE 'prodwin' TRANSPORT SCRIPT '/tmp/convertdb/transportscript' TO PLATFORM 'Microsoft Windows IA (32-bit)' DB_FILE_NAME_CONVERT '/disk1/oracle/dbs','/tmp/convertdb';
RMAN> CONVERT DATABASE ON DESTINATION PLATFORM CONVERT SCRIPT '/tmp/convertdb/convertscript.rman' TRANSPORT SCRIPT '/tmp/convertdb/transportscript.sql' NEW DATABASE 'prodwin' FORMAT '/tmp/convertdb/%U';
RMAN> CONVERT DATABASE ON DESTINATION PLATFORM CONVERT SCRIPT '/tmp/convert_newdb.rman' TRANSPORT SCRIPT '/tmp/transport_newdb.sql' NEW DATABASE 'prodaix' DB_FILE_NAME_CONVERT '/u01/oradata/datafile','+DATA';

HOST command
Invoke an operating system command-line subshell from within RMAN or run a specific operating system command.
RMAN> HOST;
RMAN> HOST '/bin/mv $ORACLE_HOME/dbs/*.arc /disk2/archlog/';
RMAN> HOST 'ls -lt /disk2/*';

SEND command
Send a vendor-specific quoted string to one or more specific channels.
RMAN> SEND 'OB_DEVICE tape1';

EXIT or QUIT command
Exit the RMAN console.
RMAN> exit;
RMAN> quit;

Oracle Remote Diagnostic Agent (RDA)
RDA (Remote Diagnostic Agent) is a utility - a set of shell scripts (or) a PERL script - that can be downloaded from Metalink, to collect diagnostics information from an Oracle database and its environment (RAC, ASM, Exadata). This utility is focused at collecting information that will aid in problem diagnosis. It's not only a great tool for troubleshooting but also very helpful for documenting an Oracle environment.
When logging a call, Oracle Support will often request that you install the RDA utility, run it and upload the output to Metalink for analysis.

RDA offers lots of reporting options. Once installed and run, you have to answer some questions and send it off to gather information about your environment. You can run it on just about any version of the Database or Oracle Applications or Operating System, and it is smart enough to figure out where to go and what to gather. It is relatively unobtrusive and provides easy to read results. As a result you will get a lot of TXT and HTML files.
If you pull up the RDA__START.htm located in the RDA_Output directory, you can browse through information about your database, server, applications tier, Java, forms and just about anything else you ever wanted to know. And it's all nicely formatted in HTML with drill-down links.
The simplest way of reviewing the output files is to launch a web browser on the same machine where rda.sh has run and open the file RDA__START.htm.
To download the RDA bundle, FTP it to the database box and unzip it. For more options, read the README_UNIX.txt or README_WINDOWS.txt in the installation directory, or run rda.sh (or) perl rda.pl and answer the questions.
To verify whether the RDA installation is successful: $ rda.sh -cv

Examples:
./rda.sh
./rda.sh -vdt
./rda.sh -T oraddc
./rda.sh -vT ora600:`grep -i ora-00600 $B_D_D |cut -f1 -d:`
./rda.sh -p DBM
./rda.sh -p Exadata_Assessment
./rda.sh -p Exadata_FailedDrives
./rda.sh -p Exadata_IbSwitch
./rda.sh -p Exadata_SickCell
./rda.sh -p Exadata_NetworkCabling
./rda.sh -f -y -e RPT_GROUP='XD',SQL_SYSDBA=1,SQL_LOGIN='/',ORACLE_SID=$ORACLE_SID,ORACLE_HOME=`grep -v -e ^[#.*+] /etc/oratab | grep $ORACLE_SID |cut -f2 -d:`,ASM_ORACLE_SID=`grep ASM /etc/oratab |cut -f1 -d:`,ASM_ORACLE_HOME=`grep ASM /etc/oratab |cut -f2 -d:` -p Exadata_Assessment
./rda.sh -f -y -e RPT_GROUP='XD2',SQL_SYSDBA=1,SQL_LOGIN='/',ORACLE_SID=$ORACLE_SID,ORACLE_HOME=`grep -v -e ^[#.*+] /etc/oratab | grep $ORACLE_SID |cut -f2 -d:`,ASM_ORACLE_SID=`grep ASM /etc/oratab |cut -f1 -d:`,ASM_ORACLE_HOME=`grep ASM /etc/oratab |cut -f2 -d:` -p Exadata_Assessment
./rda.sh -f -y -e RPT_GROUP='XD2',SQL_SYSDBA=1,SQL_LOGIN='/',ORACLE_SID=$ORACLE_SID,ORACLE_HOME=`grep -v -e ^[#.*+] /etc/oratab | grep $ORACLE_SID |cut -f2 -d:`,ASM_ORACLE_SID=`grep ASM /etc/oratab |cut -f1 -d:`,ASM_ORACLE_HOME=`grep ASM /etc/oratab |cut -f2 -d:` -T oraddc

Recycle bin
Recycle bin was introduced in Oracle 10g.
Recycle bin is actually a data dictionary table containing information about dropped objects. When objects are dropped, the database does not immediately delete the object & reclaim the space associated with the object. Instead, it places the object and any dependent objects in the recycle bin, which is similar to deleting a file/folder from Windows/Macintosh. You can then restore the object, its data and its dependent objects from the recycle bin. This eliminates the need to perform a point-in-time recovery operation. The FLASHBACK DROP and the FLASHBACK TABLE features place the object in the recycle bin after removing the object from the database.
When objects are dropped, the objects are not moved from the tablespace they were in earlier; the recycle bin is merely a logical structure that catalogs the dropped objects. Dropped objects are renamed

as follows:
BIN$unique_id$version
where:
unique_id is a 26-character globally unique identifier for this object, which makes the recycle bin name unique across all databases
version is a version number assigned by the database

Use the following command to see the recycle bin contents:
SQL> SELECT * FROM RECYCLEBIN;
or
SQL> SHOW RECYCLEBIN
ORIGINAL NAME   RECYCLEBIN NAME                  OBJECT TYPE   DROP TIME
-------------   ------------------------------   -----------   -------------------
ATTACHMENT      BIN$Wk/N7nbuC2DgRAAAd7F0UA==$0   TABLE         2008-10-28:11:46:55
This shows the original name of the table, as well as the new name in the bin. Note that users can see only objects that they own in the recycle bin.

SQL> DROP TABLE attachment;
SQL> SELECT * FROM TAB;
TNAME                            TABTYPE   CLUSTERID
------------------------------   -------   ---------
BIN$Wk/N7nbuC2DgRAAAd7F0UA==$0   TABLE
The deleted table has been renamed with a system name; it is not physically dropped.

The recyclebin is enabled, by default, from Oracle 10g. But you can turn it on or off with the RECYCLEBIN initialization parameter, at the system or session level.
SQL> SHOW PARAMETER RECYCLEBIN
SQL> ALTER SYSTEM/SESSION SET RECYCLEBIN=ON/OFF SCOPE=BOTH;
When the recycle bin is enabled, dropped tables and their dependent objects are placed in the recycle bin.

You can query objects that are in the recycle bin, just as you can query other objects.
SQL> SELECT * FROM "BIN$W1PPyhVRSbuv6g+V69OgRQ==$0";

To get back the deleted table and its contents:
SQL> FLASHBACK TABLE table-name/bin-name TO BEFORE DROP [RENAME TO new-name];
The un-drop feature brings the table back to its original name, but not the associated objects like indexes and triggers, which are left with the recycled names. These old names must be retrieved manually and then applied to the flashed-back table. Sources such as views and procedures defined on the table are not recompiled and remain in the invalid state.
Note: When a table is retrieved from the recycle bin, all the dependent objects for the table that are in the recycle bin are retrieved with it. They cannot be retrieved separately.

Managing Recycle Bin in Oracle
If the tables are not really dropped in this process, they still occupy the space there. The recycle bin is merely a logical structure; placing tables in the recycle bin does not free up space in the original tablespace. To free the space, you need to purge the bin using:
SQL> PURGE RECYCLEBIN;
But what if you want to drop the table completely, without needing a flashback feature? In that case, you can drop it permanently using:
SQL> DROP TABLE table-name PURGE;
This command will not rename the table to the recycle bin name; rather, it will be deleted permanently, as it would have been before Oracle 10g. This is similar to SHIFT+DELETE in Windows.

What happens when the dropped objects take up all of the space in a tablespace? When a tablespace is completely filled up with recycle bin data such that the datafiles have to extend to make room for more data, the tablespace is said to be under "space pressure". In that scenario, objects are automatically purged from the recycle bin in a first-in-first-out manner. The dependent objects (such as indexes) are removed before a table is removed.
Similarly, space pressure can occur with user quotas as defined for a particular tablespace. The tablespace may have enough free space, but the user may be running out of his or her allotted portion of it. In such situations, Oracle automatically purges objects belonging to that user in that tablespace.

In addition, there are several ways you can manually control the recycle bin.
If you want to purge the specific table from the recyclebin after its drop, you could issue:
SQL> PURGE TABLE table-name;
or using its recycle bin name:
SQL> PURGE TABLE "BIN$Wk/N7nbuC2DgRCBAd7F0UA==$0";
This command will remove the table and all dependent objects such as indexes, constraints, and so on from the recycle bin, saving some space.
If you want to permanently drop an index from the recycle bin, you can do so using:
SQL> PURGE INDEX index-name;
This will remove the index only, leaving the copy of the table in the recycle bin.

Sometimes it might be useful to purge at a higher level. For instance, you may want to purge all the objects in the recycle bin in a tablespace. You would issue:
SQL> PURGE TABLESPACE tablespace-name;
You may want to purge only the recycle bin for a particular user in that tablespace. This approach could come handy in data warehouse environments where users create and drop many transient tables. You could modify the command above to limit the purge to a specific user only:
SQL> PURGE TABLESPACE tablespace-name USER user-name;
The PURGE TABLESPACE command only removes recyclebin segments belonging to the currently connected user, so it may not remove all the recyclebin segments in the tablespace. You can determine which users have recyclebin segments in a target tablespace using the following query:
SQL> SELECT DISTINCT owner FROM dba_recyclebin WHERE ts_name = 'tablespace-name';
You can then use the above PURGE TABLESPACE command to purge the segments for each of the users.

A normal user, such as SCOTT, could clear his own recycle bin with:
SQL> PURGE RECYCLEBIN;
A DBA can purge all the objects in any recycle bin using:
SQL> PURGE DBA_RECYCLEBIN;
The PURGE DBA_RECYCLEBIN command can be used only if you have SYSDBA system privileges. It removes all objects from the recycle bin, regardless of user.
If space limitations force Oracle to start purging objects from the recyclebin, it purges indexes first.

A few types of dependent objects are not handled like the simple index above:
o Bitmap join indexes are not put in the recyclebin when their base table is DROPped, and not retrieved when the table is restored with FLASHBACK DROP.
o The same goes for materialized view logs; when you drop a table, all mview logs defined on that table are permanently dropped, not put in the recyclebin.
o Referential integrity constraints that reference another table are lost when the table is put in the recyclebin and then restored. The constraint names are also not retrievable from the view, and have to be renamed from other sources.

When you drop a tablespace including its contents, the objects in the tablespace are not placed in the recycle bin

and the database purges any entries in the recycle bin for objects located in the tablespace.
The database also purges any recycle bin entries for objects in a tablespace when you drop the tablespace, not
including contents, and the tablespace is otherwise empty. Likewise:

When you drop a user, any objects belonging to the user are not placed in the recycle bin and any
objects in the recycle bin are purged.
When you drop a cluster, its member tables are not placed in the recycle bin and any former member
tables in the recycle bin are purged.

When you drop a type, any dependent objects such as subtypes are not placed in the recycle bin and
any former dependent objects in the recycle bin are purged.

When the recycle bin is disabled, dropped tables and their dependent objects are not placed in the recycle bin;
they are just dropped, and you must use other means to recover them (such as recovering from backup).
Disabling the recycle bin does not purge or otherwise affect objects already in the recycle bin.
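A minimal end-to-end illustration (the table name test_rb is hypothetical):

SQL> CREATE TABLE test_rb (id NUMBER);
SQL> DROP TABLE test_rb;
SQL> SELECT original_name, object_name FROM user_recyclebin;
SQL> FLASHBACK TABLE test_rb TO BEFORE DROP;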
Related Views
RECYCLEBIN$ (base table)
DBA_RECYCLEBIN
USER_RECYCLEBIN
RECYCLEBIN (synonym for USER_RECYCLEBIN)

Profiles in Oracle
Profiles were introduced in Oracle 8.
Profiles are a set of resource limits, used to limit the system resources a user can use. They allow us to regulate the amount of resources used by each database user by creating and assigning profiles to them.
Whenever you create a database, one default profile will be created and assigned to all the users you have created. The name of the default profile is DEFAULT.
Kernel Resources

sessions_per_user -- Maximum concurrent sessions allowed for a user.

cpu_per_session -- Maximum CPU time limit per session, in hundredth of a second.

cpu_per_call -- Maximum CPU time limit per call, in hundredth of a second. Call being parsed, executed
and fetched.

connect_time -- Maximum connect time per session, in minutes.

idle_time -- Maximum idle time before user is disconnected, in minutes.

logical_reads_per_session -- Maximum blocks read per session.

logical_reads_per_call -- Maximum blocks read per call.

private_sga -- Maximum amount of private space in SGA.

composite_limit -- Sum of cpu_per_session, connect_time, logical_reads_per_session and private_sga.

In order to enforce the above kernel resource limits, the init parameter resource_limit must be set to true.
SQL> ALTER SYSTEM SET resource_limit=TRUE SCOPE=BOTH;
Password limiting functionality is not affected by this parameter.

Password Resources

failed_login_attempts -- Maximum number of failed login attempts before the account is locked.
Query to count failed login attempts:
SQL> SELECT name, lcount FROM USER$ WHERE lcount <> 0;

password_life_time -- Maximum time a password is valid.

password_reuse_max -- Minimum number of different passwords before a password can be reused.

password_reuse_time -- Minimum number of days before a password can be reused.

password_lock_time -- Number of days an account is locked after failed login attempts.

password_grace_time -- The number of days after the grace period begins, during which a warning is issued and login is allowed. If the password is not changed during the grace period, the password expires.

password_verify_function -- This function will verify the complexity of passwords.

Notes:
If PASSWORD_REUSE_TIME is set to an integer value, PASSWORD_REUSE_MAX must be set to
UNLIMITED and vice versa.

If PASSWORD_REUSE_MAX=DEFAULT and PASSWORD_REUSE_TIME is set to UNLIMITED, then
Oracle uses the PASSWORD_REUSE_MAX value defined in the DEFAULT profile and vice versa.

If both PASSWORD_REUSE_TIME and PASSWORD_REUSE_MAX are set to DEFAULT, then Oracle
uses whichever value is defined in the DEFAULT profile.

The system resource limits can be enforced at the session level or at the call level or both.
If a session exceeds one of these limits, Oracle will terminate the session. If there is a logoff trigger, it won't be
executed.
In order to track password limits, Oracle stores the history of passwords for a user in USER_HISTORY$.
Creating Profiles
In Oracle, the default cost assigned to a resource is unlimited. By setting resource limits, you can prevent users from performing operations that will tie up the system and prevent other users from performing operations. You can use resource limits for security to ensure that users log off the system and do not leave sessions connected for long periods of time.
Syntax for the CREATE and ALTER commands:
CREATE/ALTER PROFILE profile-name LIMIT
[SESSIONS_PER_USER value|UNLIMITED|DEFAULT]
[CPU_PER_SESSION value|UNLIMITED|DEFAULT]
[CPU_PER_CALL value|UNLIMITED|DEFAULT]
[CONNECT_TIME value|UNLIMITED|DEFAULT]
[IDLE_TIME value|UNLIMITED|DEFAULT]
[LOGICAL_READS_PER_SESSION value|UNLIMITED|DEFAULT]
[LOGICAL_READS_PER_CALL value|UNLIMITED|DEFAULT]
[COMPOSITE_LIMIT value|UNLIMITED|DEFAULT]
[PRIVATE_SGA value[K|M]|UNLIMITED|DEFAULT]
[FAILED_LOGIN_ATTEMPTS expr|UNLIMITED|DEFAULT]
[PASSWORD_LIFE_TIME expr|UNLIMITED|DEFAULT]
[PASSWORD_REUSE_TIME expr|UNLIMITED|DEFAULT]
[PASSWORD_REUSE_MAX expr|UNLIMITED|DEFAULT]
[PASSWORD_LOCK_TIME expr|UNLIMITED|DEFAULT]
[PASSWORD_GRACE_TIME expr|UNLIMITED|DEFAULT]
[PASSWORD_VERIFY_FUNCTION function_name|NULL|DEFAULT]

e.g:
SQL> CREATE PROFILE onsite LIMIT
PASSWORD_LIFE_TIME 45
PASSWORD_GRACE_TIME 12
PASSWORD_REUSE_TIME 3
PASSWORD_REUSE_MAX 5
FAILED_LOGIN_ATTEMPTS 4
PASSWORD_LOCK_TIME 2
CPU_PER_CALL 5000
PRIVATE_SGA 250K
LOGICAL_READS_PER_CALL 2000;

Following is the create profile statement for DEFAULT profile (with all default values):
SQL> CREATE PROFILE "DEFAULT" LIMIT
CPU_PER_SESSION UNLIMITED
CPU_PER_CALL UNLIMITED
CONNECT_TIME UNLIMITED
IDLE_TIME UNLIMITED
SESSIONS_PER_USER UNLIMITED
LOGICAL_READS_PER_SESSION UNLIMITED
LOGICAL_READS_PER_CALL UNLIMITED
PRIVATE_SGA UNLIMITED
COMPOSITE_LIMIT UNLIMITED
PASSWORD_LIFE_TIME 180
PASSWORD_GRACE_TIME 7
PASSWORD_REUSE_MAX UNLIMITED
PASSWORD_REUSE_TIME UNLIMITED
PASSWORD_LOCK_TIME 1
FAILED_LOGIN_ATTEMPTS 10
PASSWORD_VERIFY_FUNCTION NULL;

SQL> ALTER PROFILE onsite LIMIT FAILED_LOGIN_ATTEMPTS 3;

Assigning Profiles
By default, when you create a user, they are assigned to the DEFAULT profile. If you don't want that:
CREATE USER user-name IDENTIFIED BY password PROFILE profile-name;
e.g. SQL> CREATE USER satya IDENTIFIED BY satya PROFILE onsite;

You can alter an existing user's profile with:
ALTER USER user-name PROFILE profile-name;
e.g. SQL> ALTER USER surya PROFILE offshore;

Dropping Profiles
Syntax for dropping a profile, without dropping its users:
DROP PROFILE profile-name;
e.g. SQL> DROP PROFILE onsite;
Syntax for dropping a profile with the CASCADE clause; the profile is revoked from all users who have it (they are reassigned the DEFAULT profile):
DROP PROFILE profile-name CASCADE;
e.g. SQL> DROP PROFILE offshore CASCADE;
Password Verify Function in Oracle
This will verify passwords for length, content and complexity. The function requires the old and new passwords, so password changes cannot be done with ALTER USER. Password changes should be performed with the SQL*Plus PASSWORD command or through a stored procedure that requires the correct inputs.
CREATE/ALTER PROFILE profile-name LIMIT PASSWORD_VERIFY_FUNCTION function-name;
It's possible to restrict a password's format by creating a PL/SQL function that validates passwords. It'll check for minimum width, letters, numbers, or mixed case, or verify that the password isn't a variation of the username.
If you want to remove this password verify function, assign the NULL value to PASSWORD_VERIFY_FUNCTION.
ALTER PROFILE profile-name LIMIT PASSWORD_VERIFY_FUNCTION NULL;
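As a minimal sketch of such a function (the name my_verify_fn is hypothetical; the function must be created in the SYS schema and must use this three-argument signature):

SQL> CREATE OR REPLACE FUNCTION my_verify_fn
       (username VARCHAR2, password VARCHAR2, old_password VARCHAR2)
     RETURN BOOLEAN IS
     BEGIN
       -- reject passwords shorter than 8 characters
       IF LENGTH(password) < 8 THEN
         raise_application_error(-20001, 'password too short');
       END IF;
       -- reject passwords that match the username
       IF UPPER(password) = UPPER(username) THEN
         raise_application_error(-20002, 'password same as user name');
       END IF;
       RETURN TRUE;
     END;
     /
SQL> ALTER PROFILE onsite LIMIT PASSWORD_VERIFY_FUNCTION my_verify_fn;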
Related Views
profile$
profname$
DBA_PROFILES
RESOURCE_COST (shows the unit cost associated with each resource)

USER_RESOURCE_LIMITS (each user can find information on his resources and limits)
Password file (orapwd utility) in Oracle
Oracle password file stores passwords for users with administrative privileges.
If the DBA wants to start up an Oracle instance there must be a way for Oracle to authenticate the DBA.
Obviously, DBA password cannot be stored in the database, because Oracle cannot access the database before
the instance is started up. Therefore, the authentication of the DBA must happen outside of the database. There
are two distinct mechanisms to authenticate the DBA:
(i) Using the password file or
(ii) Through the operating system (groups). Any OS user under dba group, can login as SYSDBA.
The default location for the password file is:
$ORACLE_HOME/dbs/orapw$ORACLE_SID on Unix, %ORACLE_HOME%\database\PWD%ORACLE_SID%.ora on Windows.
REMOTE_LOGIN_PASSWORDFILE
The init parameter REMOTE_LOGIN_PASSWORDFILE specifies if a password file is used to authenticate the
Oracle DBA or not. If it is set either to SHARED or EXCLUSIVE, a password file will be used.
REMOTE_LOGIN_PASSWORDFILE is a static initialization parameter and therefore cannot be changed without
bouncing the database.
Following are the valid values for REMOTE_LOGIN_PASSWORDFILE:
NONE - Oracle ignores the password file if it exists i.e. no privileged connections are allowed over non secure
connections. If REMOTE_LOGIN_PASSWORDFILE is set to EXCLUSIVE or SHARED and the password file is
missing, this is equivalent to setting REMOTE_LOGIN_PASSWORDFILE to NONE.
EXCLUSIVE (default) - Password file is exclusively used by only one (instance of the) database. Any user can be
added to the password file. Only an EXCLUSIVE file can be modified. EXCLUSIVE password file enables you to
add, modify, and delete users. It also enables you to change the SYS password with the ALTER USER
command.
SHARED - The password file is shared among databases. A SHARED password file can be used by multiple
databases running on the same server, or multiple instances of an Oracle Real Application Clusters
(RAC) database. However, the only user that can be added/authenticated is SYS.
A SHARED password file cannot be modified i.e. you cannot add users to a SHARED password file. Any attempt
to do so or to change the password of SYS or other users with the SYSDBA or SYSOPER or SYSASM (this is
from Oracle 11g) privileges generates an error. All users needing SYSDBA or SYSOPER or SYSASM system
privileges must be added to the password file when REMOTE_LOGIN_PASSWORDFILE is set to EXCLUSIVE.
After all users are added, you can change REMOTE_LOGIN_PASSWORDFILE to SHARED.
This option is useful if you are administering multiple databases or a RAC database.
Whether a password file is SHARED or EXCLUSIVE is also stored in the password file. After its creation, the state is
SHARED. The state can be changed by setting REMOTE_LOGIN_PASSWORDFILE and starting the database
i.e. the database overwrites the state in the password file when it is started up.
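To check which mode is in effect, and that password file authentication works from a remote client (the connect string is illustrative):

SQL> SHOW PARAMETER remote_login_passwordfile
$ sqlplus sys/sys_password@proddb AS SYSDBA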
ORAPWD
You can create a password file using orapwd utility. For some Operating systems, you can create this file as part
of standard installation.
Users are added to the password file when they are granted the SYSDBA or SYSOPER or SYSASM privilege.
The Oracle orapwd utility assists the DBA while granting SYSDBA, SYSOPER and SYSASM privileges to other
users. By default, SYS is the only user that has SYSDBA and SYSOPER privileges. Creating a password file, via
orapwd, enables remote users to connect with administrative privileges.
$ orapwd file=password_file_name [password=the_password] [entries=n] [force=Y|N] [ignorecase=Y|N] [nosysdba=Y|N]

Examples:
$ orapwd file=orapwSID password=sys_password force=y nosysdba=y
$ orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=secret
$ orapwd file=orapwprod entries=30 force=y
C:\> orapwd file=%ORACLE_HOME%\database\PWD%ORACLE_SID%.ora password=2012 entries=20
C:\> orapwd file=D:\oracle11g\product\11.1.0\db_1\database\pwdsfs.ora password=id entries=6 force=y
$ orapwd file=orapwPRODB3 password=abc123 entries=10 ignorecase=n
$ orapwd file=orapwprodb password=oracle1 ignorecase=y

There are no spaces permitted around the equal-to (=).
The following describe the orapwd command line arguments.

FILE
Name to assign to the password file, which will hold the password information. You must supply the complete path; if you supply only the filename, the file is written to the current directory. The contents are encrypted and are unreadable. This argument is mandatory.
The filenames allowed for the password file are OS specific. Some operating systems require the password file to adhere to a specific format and be located in a specific directory. Other operating systems allow the use of environment variables to specify the name and location of the password file.
If you are running multiple instances of Oracle Database using Oracle Real Application Clusters (RAC), the environment variable for each instance should point to the same password file.
It is critically important to secure the password file.

PASSWORD
This is the password the privileged users should enter while connecting as SYSDBA or SYSOPER or SYSASM.

ENTRIES
Entries specify the maximum number of distinct SYSDBA, SYSOPER and SYSASM users that can be stored in the password file. This argument specifies the number of entries that you require the password file to accept.
The actual number of allowable entries can be higher than the number of users, because the orapwd utility continues to assign password entries until an OS block is filled. For example, if your OS block size is 512 bytes, it holds four password entries. The number of password entries allocated is always a multiple of four.
Entries can be reused as users are added to and removed from the password file. When you exceed the allocated number of password entries, you must create a new password file. To avoid this necessity, allocate a number of entries that is larger than you think you will ever need.

FORCE
(Optional) If Y, permits overwriting an existing password file. An error will be returned if a password file of the same name already exists and this argument is omitted or set to N.

IGNORECASE
(Optional) If Y, passwords are treated as case-insensitive, i.e. case is ignored when comparing the password that the user supplies during login with the password in the password file.

NOSYSDBA
(Optional) For Oracle Data Vault installations.

Granting SYSDBA or SYSOPER or SYSASM privileges
Use the V$PWFILE_USERS view to see the users who have been granted SYSDBA or SYSOPER or SYSASM system privileges for a database.
SQL> select * from v$pwfile_users;
USERNAME SYSDBA SYSOPER SYSASM
-------- ------ ------- ------
SYS      TRUE   TRUE    FALSE

The columns displayed by the view V$PWFILE_USERS are:
Column    Description
USERNAME  This column contains the name of the user that is recognized by the password file.
SYSDBA    If the value of this column is TRUE, then the user can log on with SYSDBA system privilege.
SYSOPER   If the value of this column is TRUE, then the user can log on with SYSOPER system privilege.
SYSASM    If the value of this column is TRUE, then the user can log on with SYSASM system privilege.

If orapwd has not yet been executed or the password file is not available, attempting to grant SYSDBA or SYSOPER or SYSASM privileges will result in the following error:
SQL> grant sysdba to satya;
ORA-01994: GRANT failed: cannot add users to public password file

If your server is using an EXCLUSIVE password file, use the GRANT statement to grant the SYSDBA or SYSOPER or SYSASM system privilege to a user, as shown in the following example:
SQL> grant sysdba to satya;
SQL> select * from v$pwfile_users;
USERNAME SYSDBA SYSOPER SYSASM
-------- ------ ------- ------
SYS      TRUE   TRUE    FALSE
SATYA    TRUE   FALSE   FALSE

SQL> grant sysoper to satya;
SQL> select * from v$pwfile_users;
USERNAME SYSDBA SYSOPER SYSASM
-------- ------ ------- ------
SYS      TRUE   TRUE    FALSE
SATYA    TRUE   TRUE    FALSE

SQL> grant sysasm to satya;
SQL> select * from v$pwfile_users;
USERNAME SYSDBA SYSOPER SYSASM
-------- ------ ------- ------
SYS      TRUE   TRUE    FALSE
SATYA    TRUE   TRUE    TRUE

Use the REVOKE statement to revoke the SYSDBA or SYSOPER or SYSASM system privilege from a user, as shown in the following example:
SQL> revoke sysoper from satya;
SQL> select * from v$pwfile_users;
USERNAME SYSDBA SYSOPER SYSASM
-------- ------ ------- ------
SYS      TRUE   TRUE    FALSE
SATYA    TRUE   FALSE   TRUE

A user's name remains in the password file only as long as that user has at least one of these three privileges. If you revoke all 3 privileges, Oracle removes the user from the password file.
When you grant SYSDBA or SYSOPER or SYSASM privileges to a user, that user's name and privilege information are added to the password file. Only a user currently connected as SYSDBA can grant or revoke another user's SYSDBA or SYSOPER or SYSASM system privileges.
Because SYSDBA, SYSOPER and SYSASM are the most powerful database privileges, the WITH ADMIN OPTION is not used in the GRANT statement. That is, the grantee cannot in turn grant the SYSDBA or SYSOPER or SYSASM privilege to another user.
These privileges cannot be granted to roles, because roles are available only after database startup.
If the server does not have an EXCLUSIVE password file (i.e. if the initialization parameter REMOTE_LOGIN_PASSWORDFILE is NONE or SHARED, or the password file is missing), Oracle issues an error if you attempt to grant these privileges.
If you receive the file full error (ORA-01996) when you try to grant SYSDBA or SYSOPER or SYSASM system privileges to a user, you must create a larger password file and regrant the privileges to the users.

Removing Password File
If you determine that you no longer require a password file to authenticate users, you can delete the password file and then optionally reset the REMOTE_LOGIN_PASSWORDFILE initialization parameter to NONE. After you remove this file, only those users who can be authenticated by the OS can perform SYSDBA or SYSOPER or SYSASM database administration operations.

ADRCI Commands in Oracle
ADRCI - Automatic Diagnostic Repository (ADR) Command Interpreter, from Oracle 11g.

$ adrci [-HELP] [SCRIPT=script_filename] [EXEC="command [; command]..."]
$ adrci
$ adrci -help
$ adrci script=adrci_script.adrci
$ adrci exec="show alert"
$ adrci exec="show home; show tracefile"
$ adrci exec="begin backup; cp -R log /tmp; end backup"
$ adrci exec="dde show available actions"
$ adrci exec='export incident -p "incident_id>120"'

adrci> HELP [COMMAND]
adrci> help
HELP [topic]
Available Topics:
CREATE REPORT
ECHO
EXIT
HELP
HOST

IPS
PURGE
RUN
SET BASE
SET BROWSER
SET CONTROL
SET ECHO
SET EDITOR
SET HOMES | HOME | HOMEPATH
SET TERMOUT
SHOW ALERT
SHOW BASE
SHOW CONTROL
SHOW HM_RUN
SHOW HOMES | HOME | HOMEPATH
SHOW INCDIR
SHOW INCIDENT
SHOW PROBLEM
SHOW REPORT
SHOW TRACEFILE
SPOOL

adrci> help set browser
adrci> help ips get remote keys
adrci> help merge file
adrci> help register incident file

adrci> CREATE REPORT report_type report_id
adrci> create report hm_run hm_run_6     - Health Monitor report

adrci> ECHO quoted_string
adrci> echo "Hello world"

adrci> EXIT
adrci> exit
adrci> QUIT
adrci> quit

adrci> HOST ["host_command_string"]
adrci> host
adrci> host "ls -l *.sh"
adrci> host "vi tailalert.adrci"

Incident Packaging Service (IPS)
adrci> help ips
HELP IPS [topic]
Available Topics:
ADD
ADD FILE
ADD NEW INCIDENTS
CHECK REMOTE KEYS
COPY IN FILE
COPY OUT FILE
CREATE PACKAGE
DELETE PACKAGE
FINALIZE PACKAGE
GENERATE PACKAGE
GET MANIFEST
GET METADATA
GET REMOTE KEYS
PACK

adrci> IPS ADD [INCIDENT inc_id | PROBLEM prob_id | PROBLEMKEY problem_key | SECONDS secs | TIME start_time TO end_time] PACKAGE pkg_id
adrci> ips add incident 22 package 33
adrci> ips add problem 3 package 1
adrci> ips add seconds 60 package 9
adrci> ips add time '2011-10-01 10:00:00.00 -07:00' to '2011-10-01 23:00:00.00 -07:00' package 7

adrci> IPS ADD FILE filespec PACKAGE package_id
adrci> ips add file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/alert_orcl.log package 99

adrci> IPS ADD NEW INCIDENTS PACKAGE package_id
adrci> ips add new incidents package 321

adrci> IPS CHECK REMOTE KEYS FILE file_spec
adrci> ips check remote keys file /tmp/key_file.txt

adrci> IPS COPY IN FILE filename [TO new_name] [OVERWRITE] PACKAGE pkgid [INCIDENT incid]
adrci> ips copy in file /home/sona/trace/orcl_ora_63175.trc to ADR_HOME/trace/orcl_ora_63175.trc package 11 incident 6
adrci> ips copy in file /tmp/key_file.txt to new_file.txt overwrite package 12

adrci> IPS COPY OUT FILE source TO target [OVERWRITE]
adrci> ips copy out file ADR_HOME/trace/orcl_ora_63175.trc to /home/sona/trace/orcl_ora_63175.trc
adrci> ips copy out file ADR_HOME/trace/orcl_ora_11595.trc to /tmp/orcl_ora_11595.trc overwrite

adrci> IPS CREATE PACKAGE [INCIDENT inc_id | PROBLEM problem_id | PROBLEMKEY problem_key | SECONDS secs | TIME start_time TO end_time] [CORRELATE BASIC|TYPICAL|ALL]
adrci> ips create package
adrci> ips create package incident 111
adrci> ips create package incident 222 correlate basic
adrci> ips create package problem 6
adrci> ips create package problemkey '?'
adrci> ips create package seconds 333
adrci> ips create package seconds 444 correlate all
adrci> ips create package time '2011-10-01 00:00:00 -05:30' to '2011-10-02 23.59.59 -05:30'

adrci> IPS DELETE PACKAGE pkg_id
adrci> ips delete package 22

adrci> IPS FINALIZE PACKAGE pkg_id
adrci> ips finalize package 33

adrci> IPS GENERATE PACKAGE pkg_id [IN location] [COMPLETE|INCREMENTAL]
adrci> ips generate package 4 in /tmp
adrci> ips generate package 14 incremental
adrci> ips generate package 345 complete

adrci> IPS GET MANIFEST FROM FILE filename

adrci> ips get manifest from file /home/satya/ORA600_20101006135346_COM_1.zip

adrci> IPS GET METADATA [FROM FILE filename | FROM ADR]
adrci> ips get metadata from file /home/satya/ORA603_20101006235311_INC_1.zip
adrci> ips get metadata from adr

adrci> IPS GET REMOTE KEYS FILE file_spec PACKAGE package_id
adrci> ips get remote keys file /tmp/key_file.txt package 12

adrci> IPS PACK [INCIDENT inc_id | PROBLEM prob_id | PROBLEMKEY prob_key | SECONDS secs | TIME start_time TO end_time] [CORRELATE {BASIC|TYPICAL|ALL}] [IN path]
adrci> ips pack
adrci> ips pack incident 41
adrci> ips pack incident 24001 in /tmp
adrci> ips pack problem 5
adrci> ips pack problemkey ORA 4031
adrci> ips pack seconds 60 correlate all
adrci> ips pack time '2011-12-31 23:59:59.00 -07:00' to '2012-01-01 01:01:01.00 -07:00'

adrci> IPS REMOVE [INCIDENT inc_id | PROBLEM prob_id | PROBLEMKEY prob_key] PACKAGE pkg_id
adrci> ips remove incident 2 package 7
adrci> ips remove problem 4 package 8

adrci> IPS REMOVE FILE file_name PACKAGE pkg_id
adrci> ips remove file ADR_HOME/trace/orcl_ora_3579.trc package 8

adrci> IPS SET CONFIGURATION parameter_id value (parameter_id = 1 to 23)
adrci> ips set configuration 4 21

adrci> IPS SHOW CONFIGURATION [parameter_id] (parameter_id = 1 to 23)
adrci> ips show configuration
adrci> ips show configuration 21

adrci> IPS SHOW FILES PACKAGE pkg_id
adrci> ips show files package 9

adrci> IPS SHOW INCIDENTS PACKAGE pkg_id
adrci> ips show incidents package 333

adrci> IPS SHOW PACKAGE [package_id] [BASIC | BRIEF | DETAIL]
adrci> ips show package
adrci> ips show package 12 detail
adrci> ips show package 999 basic

adrci> IPS UNPACK FILE file_name [INTO path]
adrci> ips unpack file /home/oracle/ORA4030_20111008145306_COM_1.zip into /tmp/newadr
adrci> ips unpack file /home/oracle/ORA4030_20111008145306_INC_1.zip

adrci> IPS UNPACK PACKAGE pkg_name [INTO path]
adrci> ips unpack package IPSPKG_20111029010503 into /tmp/newadr

adrci> IPS USE REMOTE KEYS FILE file_spec PACKAGE package_id
adrci> ips use remote keys file /tmp/key_file.txt package 12

adrci> PURGE [[-i id1 | start_id end_id] | [-age mins [-type ALERT|INCIDENT|TRACE|CDUMP|HM|UTSCDMP]]]
adrci> purge
adrci> purge -i 123 456
adrci> purge -age 600 -type incident
adrci> purge -age 720 -type alert

adrci> RUN script_name
adrci> @script_name
adrci> @@script_name
adrci> run adrscr9
adrci> @adrscr.scr

adrci> SET BASE base_string
adrci> set base /u01/app/oracle/product
adrci> set base .
adrci> set base -product client
adrci> set base -product adrci

adrci> SET BROWSER browser_program
adrci> set browser firefox
adrci> set browser mozilla

adrci> SET CONTROL (SHORTP_POLICY = [720|value] | LONGP_POLICY = [8760|value])
adrci> set control (SHORTP_POLICY = 360)
adrci> set control (LONGP_POLICY = 4380)

adrci> SET ECHO ON|OFF
adrci> set echo on
adrci> set echo off

adrci> SET EDITOR editor_program (default is vi)
adrci> set editor vi
adrci> set editor xemacs

adrci> SET HOME home_location
adrci> set home diag\rdbms\orabase\orabase

adrci> SET HOMES home_location1, home_location2 [, ...]
adrci> set homes diag\rdbms\orabase\orabase, diag\tnslsnr\testdb\listener

adrci> SET HOMEPATH homepath_str1 [homepath_str2] [...]
adrci> set homepath diag\dbms\orabase\orabase
adrci> set homepath diag/rdbms/dwh3/dwh31 diag/rdbms/dwh3/dwh32
adrci> set homepath diag/rdbms/orcl/orcl4

adrci> SET TERMOUT ON|OFF
adrci> set termout on
adrci> set termout off

adrci> SHOW ALERT [-p predicate_string] [-term] [[-tail [num] [-f]] | [-file alert_file_name]]
The fields in the predicate are:
ORIGINATING_TIMESTAMP timestamp
NORMALIZED_TIMESTAMP timestamp
ORGANIZATION_ID text(65)
COMPONENT_ID text(65)
HOST_ID text(65)
HOST_ADDRESS text(17)
MESSAGE_TYPE number
MESSAGE_LEVEL number
MESSAGE_ID text(65)

like Unix command tail -200 adrci> show alert -tail -f -like Unix command tail –f adrci> show alert -tail 20 -f adrci> show alert -term adrci> show alert -P "MESSAGE_TEXT LIKE '%ORA-%'" -To list all the "ORA-" errors adrci> SHOW adrci> adrci> adrci> BASE show show adrci> SHOW adrci> adrci> SHOW HM_RUN [-p predicate_string] The fields can appear in the predicate are: RUN_ID number RUN_NAME text(31) CHECK_NAME text(31) NAME_ID number MODE number START_TIME timestamp RESUME_TIME timestamp END_TIME timestamp MODIFIED_TIME timestamp TIMEOUT number FLAGS number STATUS number SRC_INCIDENT_ID number NUM_INCIDENTS number ERR_NUMBER number REPORT_FILE bfile adrci> adrci> show [-product show base base -product -product CONTROL control show show hm_run product_name] base client adrci -p hm_run "run_id=293" .MESSAGE_GROUP text(65) CLIENT_ID text(65) MODULE_ID text(65) PROCESS_ID text(33) THREAD_ID text(65) USER_ID text(65) INSTANCE_ID text(65) DETAILED_LOCATION text(161) UPSTREAM_COMP_ID text(101) DOWNSTREAM_COMP_ID text(101) EXECUTION_CONTEXT_ID text(101) EXECUTION_CONTEXT_SEQUENCE number ERROR_INSTANCE_ID number ERROR_INSTANCE_SEQUENCE number MESSAGE_TEXT text(2049) MESSAGE_ARGUMENTS text(129) SUPPLEMENTAL_ATTRIBUTES text(129) SUPPLEMENTAL_DETAILS text(129) PROBLEM_KEY text(65) adrci> show alert -it will open alert in vi editor adrci> show alert -tail -like Unix command tail adrci> show alert -tail 200 -.

.] adrci> adrci> adrci> adrci> adrci> adrci> adrci> show show show show show show show tracefile tracefile tracefile tracefile tracefile tracefile %mmon% %smon% -path tracefile -t -rt %pmon% alert%log -rt /home/satya/temp ...] [-path path1 path2 ....adrci> SHOW adrci> HOME home show adrci> SHOW HOMES [-ALL adrci> show homes adrci> show homes -all adrci> show homes -base /temp adrci> show homes rdbms | adrci> SHOW adrci> -base base_str | homepath_str1 INCDIR [id | id_low show show show incdir incdir ] HOMEPATH homepath show adrci> SHOW adrci> adrci> adrci> .] [-rt | -t] [-i inc1 inc2 . 3801 id_high] incdir 317 3804 adrci> SHOW INCIDENT [-p predicate_string] [-mode BASIC|BRIEF|DETAIL] [-last50|num|-all] [-orderby (field1.012579 +00:00'" adrci> show incident -p "problem_key='ORA 600 [ksmnfy2]'" adrci> show incident -p "problem_key='ORA 700 [kfnReleaseASM1]'" -mode basic -last -all adrci> SHOW PROBLEM [-p predicate_string] [-last 50|num|-all] [-orderby (field1.. ..) [ASC|DSC]] The field names that users can specify in the predicate are: PROBLEM_ID number PROBLEM_KEY text(550) FIRST_INCIDENT number FIRSTINC_TIME timestamp LAST_INCIDENT number LASTINC_TIME timestamp IMPACT1 number IMPACT2 number IMPACT3 number IMPACT4 number SERVICE_REQUEST text(64) BUG_NUMBER text(64) adrci> show problem adrci> show problem -all adrci> show problem -p "problem_id=44" adrci> show problem -p "problem_key='ORA 600 [krfw_switch_4]'" adrci> SHOW adrci> REPORT show report report_type hm_run run_name hm_run_9 adrci> SHOW TRACEFILE [file1 file2 ... field2. .) [ASC|DSC]] adrci> show incident adrci> show incident -mode brief adrci> show incident -mode basic -p "incident_id=905" adrci> show incident -mode detail adrci> show incident -mode detail -p "incident_id=33" adrci> show incident -p "CREATE_TIME > '2011-09-18 21:35:25.. field2..

adrci> show tracefile -i 1 4
adrci> show tracefile -i 916
adrci> show tracefile -path diag/rdbms/orabase/orabase

adrci> SPOOL full_file_name [APPEND|OFF]
adrci> spool myfile.ado
adrci> spool /home/satya/myalert.txt
adrci> spool c:\dwh\alrt.log append
adrci> spool off                       - Turn off the spooling
adrci> spool                           - Check the current spooling status

adrci> HELP EXTENDED
adrci> help extended
HELP [topic]
Available Topics:
BEGIN BACKUP
CD
CREATE STAGING XMLSCHEMA
CREATE VIEW
DDE
DEFINE
DELETE
DESCRIBE
DROP VIEW
END BACKUP
INSERT
LIST DEFINE
MERGE ALERT
MERGE FILE
MIGRATE
QUERY
REPAIR
SELECT
SET COLUMN
SHOW CATALOG
SHOW DUMP
SHOW SECTION
SHOW TRACE
SHOW TRACEMAP
SWEEP
UNDEFINE
UPDATE
VIEW

adrci> BEGIN BACKUP
adrci> begin backup

adrci> CREATE STAGING XMLSCHEMA [arguments]
adrci> create staging xmlschema

adrci> CREATE [OR REPLACE] [PUBLIC | PRIVATE] VIEW viewname [(alias)] AS select_stmt
adrci> create view my_incident as select incident_id from incident
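The SPOOL command pairs naturally with any SHOW command for capturing diagnostics to a file. A small sketch using commands shown above; the file name is illustrative:

adrci> spool /tmp/incident_report.txt
adrci> show incident -mode brief
adrci> spool off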

Diagnostic Data Extractor (DDE)

adrci> help DDE
HELP DDE [topic]
Available Topics:
CREATE INCIDENT
EXECUTE ACTION
SET INCIDENT
SET PARAMETER
SHOW ACTIONS
SHOW AVAILABLE ACTIONS

adrci> DDE CREATE INCIDENT TYPE type
adrci> dde create incident type wrong_results

adrci> DDE EXECUTE ACTION [INCIDENT incident_id] ACTIONNAME action_name INVOCATION invocation_id
adrci> dde execute action incident 12 actionname LSNRCTL_STATUS invocation 1
adrci> dde execute action actionname LSNRCTL_STATUS invocation 1

adrci> DDE SET INCIDENT incident_id
adrci> dde set incident 867

adrci> DDE SET PARAMETER [INCIDENT incident_id] ACTIONNAME action_name INVOCATION invocation_id POSITION position value
adrci> dde set parameter incident 12 actionname LSNRCTL_STATUS invocation 1 position 1 service

adrci> DDE SHOW ACTIONS [INCIDENT incident_id]
adrci> dde show actions
adrci> dde show actions incident 222

adrci> DDE SHOW AVAILABLE ACTIONS
adrci> dde show available actions

adrci> DEFINE variable_name variable_value
adrci> define spool_file 'my_spool_file'
adrci> define alertLog "alert_prod.log"

adrci> DELETE FROM relation [WHERE predicate_string]
adrci> delete from incident where "incident_id > 99"

adrci> DESCRIBE relation_name [-SURROGATE] [-HOMEPATH homepath_str]
adrci> describe incident
adrci> describe problem -surrogate
adrci> describe hm_run
adrci> describe DDE_USER_INCIDENT_TYPE
adrci> describe DDE_USER_ACTION

adrci> DROP VIEW viewname
adrci> drop view my_incident

adrci> END BACKUP
adrci> end backup

adrci> INSERT INTO relation (field_list) VALUES (values_list)
adrci> insert into incident(create_time) values (systimestamp)

adrci> LIST DEFINE
adrci> list define

adrci> MERGE ALERT [(projection_list)] [-t ref_time_str beg_sec end_sec] [-tdiff|-tfull] [-term] [-plain]
adrci> merge alert
adrci> merge alert (fct ti)
adrci> merge alert -tfull -term
adrci> merge alert -t "2012-05-25 09:50:20.442132 -05:30" 10 10 -term

adrci> MERGE FILE [(projection_list)] [file1 file2 ...] [-t ref_time_str beg_sec end_sec | -i incid [beg_sec end_sec]] [-tdiff|-tfull] [-alert] [-insert_incident] [-term] [-plain]
adrci> merge file (pid fct) orcl_m000_8544.trc -alert -term
adrci> merge file -i 1 -alert -tfull
adrci> merge file orcl_m000_8544.trc -i 1 600 10 -alert
adrci> merge file -t "2012-04-25 22:32:40.442132 -05:30" 10 10 -term

adrci> MIGRATE [RELATION name | SCHEMA] [-DOWNGRADE | -RECOVER]
adrci> migrate schema
adrci> migrate relation incident
adrci> migrate relation hm_run -recover
adrci> migrate relation problem -downgrade

adrci> QUERY [(field1, field2, ...)] relation_name [-p predicate_string] [-orderby (field1, field2, ...) [ASC|DSC]]
adrci> query (incident_id, create_time) incident -p "incident_id > 111"
adrci> query (problem_key) problem -p "PROBLEM_KEY like '%700%'"

adrci> REPAIR [RELATION | SCHEMA] name
adrci> repair relation incident
adrci> repair relation hm_run

adrci> SELECT [* | field1, field2, ...] FROM relation_name [WHERE predicate_string] [ORDER BY field1, ... [ASC|DSC|DESC]] [GROUP BY field1, ...] [HAVING having_predicate]
adrci> select incident_id, create_time from incident where "incident_id > 90"
adrci> select * from problem where "PROBLEM_KEY like '%7445%'"

adrci> SET COLUMN TEXT num
adrci> set column text 32

adrci> SHOW CATALOG
adrci> show catalog

adrci> SHOW DUMP dump_name [(projection_list)] <[file1 file2 ...] | [[file1 file2 ...] -i inc1 inc2 ...] | [[file1 file2 ...] -path path1 path2 ...]> [-plain] [-term]
adrci> show dump error_stack orcl_ora_27483.trc
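The SELECT syntax above also accepts ORDER BY. A hypothetical example listing recent incidents, newest first:

adrci> select incident_id, create_time from incident where "incident_id > 90" order by create_time DSC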

"PRODUCT_ID".] | [[file1 file2 . adrci> view foo..trc -xp "/dump[nm='Data block']" adrci> SWEEP ALL | id1 | id1 id2 | -RESWEEP -FORCE adrci> sweep all adrci> sweep 123 adrci> sweep 41 100 adrci> sweep 91 200 -resweep adrci> sweep 16 600 -force adrci> UNDEFINE variable_name adrci> undefine spool_file adrci> undefine alertLog adrci> UPDATE relation SET {field1=val.] .] -i inc1 inc2 .trc adrci> show tracemap -level 3 -i 145 152 adrci> show tracemap orcl_ora_27483. li) orcl_ora_27483. fi.. nm) orcl_ora_27483.. "PRODUCT_TYPE"..] | [[file1 file2 ...... adrci> CREATE INCIDENT keyname=key_value [keyname=key_value .] | [[file1 file2 ...trc foo1... .] | [[file1 file2 ......trc adrci> SHOW TRACE [(projection_list)] <[file1 file2 . and "INSTANCE_ID"... } [WHERE predicate_string] adrci> update incident set create_time = systimestamp where "incident_id > 6" adrci> VIEW file1 file2 ..adrci> SHOW SECTION section_name [(projection_list)] <[file1 file2 .] -path path1 path2 .] .....log adrci> show trace ~alertLog adrci> SHOW TRACEMAP [(projection_list)] <[file1 file2 ..] -path path1 path2 ...trc -xp "/dump[nm='Data block']" -xr /* adrci> show trace alert_dwh.]> [-plain] [-term] adrci> show section sql_exec orcl_ora_27483.] | [[file1 file2 ...] -i inc1 inc2 .] -i inc1 inc2 .]> [-level level_num] [-xp "path_pred_string"] [-term] adrci> show tracemap (nm) adrci> show tracemap (co.] | [[file1 file2 .]> [-plain] [-xp "path_pred_string"] [-xr display_path] [-term] adrci> show trace (co..trc adrci> view alert.....] -path path1 path2 .log adrci> HELP HIDDEN adrci> help hidden HELP [topic] Available Topics: CREATE HOME CREATE INCIDENT REGISTER INCIDENT FILE adrci> CREATE HOME keyname=key_value [keyname=key_value adrci> create home base=/tmp product_type=rdbms product_id=prod1 instance_id=inst1 Mandatory parameters are "BASE"...trc adrci> show trace (id.. co) -i 145 152 adrci> show trace orcl_ora_27483.

"ERROR_MESSAGE". RAID10). adrci> REGISTER INCIDENT FILE keyname=key_value adrci> register incident file filename=ora_pmon_12345. . and it is very efficient. This provides direct I/O to the file and performance is comparable with that provided by raw devices. which are managed by ASM. Assemble and build logical volume groups and determine if striping or mirroring is also necessary. 2. Zeroing on a specific disk configuration from among the multiple possibilities requires careful planning and analysis.exp adrci> IMPORT file_name adrci> import incident_99. which makes it more efficient. Oracle uses a scaled down Oracle instance to simulate a file structure on it where none exists. to simplify Oracle data management. The ASM functionality can be used in combination with existing raw and cooked file systems. Oracle creates a separate instance for this purpose. allowing the DBA to see the individual files and to move them. Build a file system on the logical volumes created by the logical volume manager. introduced Automatic Storage Management(ASM). and Oracle manages its blocks directly. ASM provided a foundation for highly efficient storage management with kernelized asynchronous I/O. Setting up storage takes a significant amount of time during most database installations. ASM is recommended file system for RAC and single instance ASM for storing database files.imp adrci> import hm_run_456. and file systems.. and to provide a platform for file sharing in RAC and Grid computing. intimate knowledge of storage technology.] [-dir] incident dir_name] Automatic Storage Management Oracle Database 10g Release 1. along with OMF and manually managed files. 3. adrci> EXPORT relation_name [-p predicate_string] adrci> export adrci> export incident -p “incident_id>80″ adrci> export problem -p "problem_id>123" -overwrite adrci> export hm_run -p "run_id-456" -file hm_run_456. Confirm that storage is recognized at the OS level and determine the level of redundancy protection that might already be provided (hardware RAID. striping. File system storage is flexible. Before ASM. and most important.trc incident_id=1 "FILENAME" and "INCIDENT_ID" are mandatory keys. Raw disk storage is such a manageability nightmare that few DBAs use it. "ERROR_FACILITY". Automatic Storage Management (ASM) simplifies administration of Oracle related files by allowing the administrator to reference diskgroups rather than hundreds of individual disks and files. The ASM functionality is an extension of the Oracle Managed Files (OMF) functionality that also includes striping and mirroring to provide balanced and secure storage. to enforce the SAME (Stripe And Mirror Everywhere.adrci> create incident problem_key="ORA-00600: [Memory corruption]" error_facility=ORA error_number=600 error_message="ORA-00600: [Memory corruption]" Mandatory keynames are "PROBLEM_KEY". "ERROR_NUMBER". The metadata enables the Recovery Manager (RMAN) to backup and restore Oracle files easily within it. copy them.. to bypass the OS overhead. by recording all the metadata. It's raw disk storage managed by Oracle. and back them up easily. Automatic Storage Management (ASM) is a new type of file system. Automatic Storage Management (ASM) will take physical disk partitions and manages their contents in a way that efficiently supports the files needed to create an Oracle database. but it also incurs overhead. Raw disk storage has no file directories on it. ASM is the middle ground. called external redundancy in ASM). 
4. Set the ownership and privileges so that the Oracle process can open, read, and write to the devices.
5. Create a database on that file system, while taking care to create special files such as redo logs, temporary tablespaces, and undo tablespaces in non-RAID locations, if possible.

All the above tasks (logical file system building, volume management, striping, mirroring, and file system management) are done to serve the Oracle database, and Oracle database offers some techniques of its own to simplify or enhance the process. ASM lets DBAs execute many of the above tasks completely within the Oracle framework, and it includes volume management functionality similar to that of a generic logical volume manager.

ASM provides the following functionality/features:
- A new instance type, ASM, is introduced in 10g.
- Manages groups of disks, called diskgroups.
- Manages disk redundancy within a diskgroup.
- Provides near-optimal I/O balancing without any manual tuning; I/O is spread evenly across all disks of a diskgroup.
- Enables management of database objects without specifying mount points and filenames.

- Supports large files.
- Disks can be dynamically added to any diskgroup.
- A disk can be a partial or full disk, or a LUN from the RG; we must be careful while choosing disks for a diskgroup.
- Use of an ASM diskgroup is very simple, e.g. in CREATE TABLESPACE.
- When combined with OMF, increases manageability.
- Replacement for CFS (Cluster File System); also useful for non-RAC databases.
- Only RMAN can be used with ASM.
- Enterprise Manager can also be used for administering diskgroups.
- ASM instance has no data dictionary.
- ASM cannot maintain empty directories ("delete input" has issues); create a dummy directory, otherwise it will not work.
- Introduces three additional Oracle background processes: RBAL, ARBx and ASMB.
  - RBAL (Re-Balance) - RBAL is the ASM related process that performs rebalancing of disk resources controlled by ASM.
  - ARBx (Actual Rebalance) - ARBx performs the actual rebalance work and is configured by ASM_POWER_LIMIT.
  - ASMB - This ASMB process is used to provide information to and from the cluster synchronization services used by ASM to manage the disk resources. It's also used to update statistics and provide a heartbeat mechanism.

You can store the following file types in ASM diskgroups:
o Datafiles
o Control files
o Online redo logs
o Archive logs
o Flashback logs
o SPFILEs
o RMAN backups
o Temporary datafiles
o Datafile copies
o Disaster recovery configurations
o Change tracking bitmaps
o Datapump dumpsets

In summary, using ASM you can transform a bunch of disks into a highly scalable and high-performance file system/volume manager, using nothing more than what comes with the Oracle database software at no extra cost, and you don't need to be an expert in disk, volume manager, or file system management.

The advantages of ASM are:
- Disk Addition - Adding a disk is very easy. No downtime is required and file extents are redistributed automatically.
- I/O Distribution - I/O is spread over all the available disks automatically, without manual intervention, reducing the chances of a hot spot.
- Stripe Width - Striping can be fine grained as in redolog files (128K for faster transfer rate) and coarse for datafiles (1MB for transfer of a large number of blocks at one time).
- Buffering - The ASM file system is not buffered, making it direct I/O capable by design.
- Kernelized Asynchronous I/O - There is no special setup necessary to enable kernelized asynchronous I/O, without using raw or third-party file systems such as Veritas Quick I/O.
- Mirroring - Software mirroring can be set up easily, if hardware mirroring is not available.

ASM Instance
The ASM functionality is controlled by an ASM instance. This is a special instance, not a database where users can create objects; it has just the memory structures and as such is very small and lightweight. It is similar to other file systems in that it must be running for ASM to work and can't be modified by the user, with important differences: it's not a general-purpose file system for storing user files and it's not buffered. The ASM instance and database instances have to be present on the same server, and one ASM instance can serve a number of Oracle databases. You should start the instance up when the server is booted, i.e. it should be started before the database instances, and it should be one of the last things stopped when the server is shut down. From 11.2.0, we can use ASMCMD to start and stop the ASM instances.

With ASM, the feature will group a set of physical disks into a logical entity known as a diskgroup. A diskgroup is analogous to a striped and optionally mirrored file system. A diskgroup offers the advantage of direct access to this space as a raw device, yet provides the convenience and flexibility of a file system. With ASM you don't have to create anything on the OS side. All the metadata about the disks are stored in the diskgroups themselves, making them as self-describing as possible.

Logical volume managers typically use a function, such as hashing, to map the logical address of the blocks to the physical blocks. This computation uses CPU cycles, and when a new disk is added, this typical striping function requires each bit of the entire data set to be relocated. In contrast, ASM uses its special instance to address the mapping of the file extents to the physical disk blocks. This design, in addition to being fast in locating the file extents, helps while adding or removing disks because the locations of file extents need not be coordinated.

The ASM instance has its own set of v$ views and init.ora parameters. The initialization parameters that are specific to an ASM instance are:
- INSTANCE_TYPE - Set to ASM. The default is RDBMS.
- ASM_DISKSTRING - Specifies a value that can be used to limit the disks considered for discovery. The default value is NULL, allowing all suitable disks to be considered. Changing the parameter to a value which prevents the discovery of already mounted disks results in an error.
Altering the default value may improve the speed of diskgroup mount time and the speed of adding a disk to a diskgroup.
- ASM_DISKGROUPS - The list of diskgroups that should be mounted by an ASM instance during instance startup, or by the ALTER DISKGROUP ALL MOUNT statement. ASM configuration changes are automatically reflected in this parameter.
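Putting the instance-specific parameters together, a minimal ASM pfile might look as follows. This is a sketch only; the disk string and diskgroup names are illustrative:

INSTANCE_TYPE = ASM
ASM_DISKSTRING = '/dev/sd*'          # limit discovery to these devices (illustrative)
ASM_DISKGROUPS = 'DG_DATA, DG_FRA'   # mounted automatically at startup
ASM_POWER_LIMIT = 1                  # default rebalance power (described below)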

- ASM_POWER_LIMIT - The maximum power for a rebalancing operation on an ASM instance. The valid values range from 1 (default) to 11; a value of 0 disables rebalancing. The higher the limit, the more resources are allocated, resulting in faster rebalancing operations. This value is also used as the default when the POWER clause is omitted from a rebalance operation.
- ASM_PREFERRED_READ_FAILURE_GROUPS - This initialization parameter value (default is NULL) is a comma-delimited list of strings that specifies the failure groups that should be preferentially read by the given instance. This parameter is generally used only for clustered ASM instances and its value can be different on different nodes. This is from Oracle 11g.
- DB_UNIQUE_NAME - Specifies a globally unique name for the database. This defaults to +ASM but must be altered if you intend to run multiple ASM instances.

Once an ASM instance is present, diskgroups can be used for the following parameters in database instances (INSTANCE_TYPE=RDBMS) to allow ASM file creation:
- CONTROL_FILES
- DB_CREATE_FILE_DEST
- DB_CREATE_ONLINE_LOG_DEST_n
- DB_RECOVERY_FILE_DEST
- LOG_ARCHIVE_DEST_n
- LOG_ARCHIVE_DEST
- STANDBY_ARCHIVE_DEST

Startup of ASM Instances
ASM instances are started and stopped in a similar way to normal database instances. The options for the STARTUP command are:
- NOMOUNT - Starts the ASM instance without mounting any diskgroups.
- MOUNT - Starts the ASM instance and mounts the diskgroups specified by the ASM_DISKGROUPS parameter.
- OPEN - An ASM instance does not have an open stage.
- FORCE - Performs a SHUTDOWN ABORT before restarting the ASM instance.

To create an ASM instance, first create a pfile, init+ASM.ora, in the /tmp directory, containing the following parameter:
INSTANCE_TYPE = ASM

Next, connect to the idle instance.
$ export ORACLE_SID=+ASM
$ sqlplus "/as sysdba"

Create a spfile using the contents of the init+ASM.ora file.
SQL> CREATE SPFILE FROM PFILE='/tmp/init+ASM.ora';

SQL> startup nomount
ASM instance started
Total System Global Area  130023424 bytes
Fixed Size                  2028368 bytes
Variable Size             102829232 bytes
ASM Cache                  25165824 bytes

The ASM instance is now ready to use for creating and mounting diskgroups.

ASMCMD equivalent for this command is startup (11g R2 command).

Shutdown of ASM Instances
The options for the SHUTDOWN command are:
- NORMAL - The ASM instance waits for all connected ASM instances and SQL sessions to exit, then shuts down.
- IMMEDIATE - The ASM instance waits for any SQL transactions to complete, then shuts down. It doesn't wait for sessions to exit.
- TRANSACTIONAL - Same as IMMEDIATE.
- ABORT - The ASM instance shuts down instantly.
ASMCMD equivalent for this command is shutdown (11g R2 command).

ASM Diskgroups
The main components of ASM are diskgroups, each of which comprises several physical disks that are controlled as a single unit. The physical disks are known as ASM disks, while the files that reside on the disks are known as ASM files. The locations and names for the files are controlled by ASM, but user-friendly aliases and directory structures can be defined for ease of reference. Diskgroup is the terminology used for the logical structure which holds the database files; each diskgroup consists of disks/raw devices where the files are actually stored. Any ASM file (and its redundant copy) is completely contained within a single diskgroup. A diskgroup might contain files belonging to several databases, and a single database can use files from multiple diskgroups.

While creating a diskgroup, we have to specify an ASM diskgroup type based on one of the following three redundancy levels:
- Normal redundancy - for 2-way mirroring, requiring two failure groups. When ASM allocates an extent for a normal redundancy file, ASM allocates a primary copy and a secondary copy, and chooses the disk on which to store the secondary copy in a different failure group than the primary copy.
- High redundancy - for 3-way mirroring, requiring three failure groups; in this case the extent is mirrored across 3 disks.
- External redundancy - to not use ASM mirroring. This is used if you are using hardware mirroring or a third party redundancy mechanism like RAID or storage arrays.

ASM is supposed to stripe the data and also mirror the data (if using Normal or High redundancy), so this can be used as an alternative for RAID (Redundant Array of Inexpensive Disks) 0+1 solutions. Note that we cannot modify the redundancy of a diskgroup once it has been created. To alter it, we will be required to create a new diskgroup and move the files to it; this can also be done by restoring a full backup on the new diskgroup.

In the initial release of Oracle 10g, ASM diskgroups were a black box: we had to manipulate ASM diskgroups with SQL statements while logged in to the special ASM instance that manages the diskgroups. In Oracle 10g Release 2, Oracle introduced a new command line tool called ASMCMD that lets you look inside ASM diskgroups. Now you can do many tasks from the command line.

Failure groups are defined within a diskgroup to support the required level of redundancy. They contain the mirrored ASM extents and must contain different disks, preferably on separate disk controllers. For two-way mirroring we would expect a diskgroup to contain two failure groups, so individual files are written to two locations; the primary copy will be on one failure group and the secondary copy on another (with a third copy on yet another for high redundancy). For three-way mirroring we would expect a diskgroup to contain three failure groups, so individual files are written to three locations.

Creating diskgroups
Failure groups and preferred names for disks can be defined in the CREATE DISKGROUP statement. If the NAME clause is omitted, the disks are given a system generated name like "disk_group_1_0001". The FORCE option can be used to move a disk from another diskgroup into this one.

SQL> CREATE DISKGROUP dg_asm_data NORMAL REDUNDANCY
       FAILGROUP failure_group_1 DISK '/devices/diska1' NAME diska1, '/devices/diska2' NAME diska2
       FAILGROUP failure_group_2 DISK '/devices/diskb1' NAME diskb1, '/devices/diskb2' NAME diskb2;

SQL> CREATE DISKGROUP dg_asm_fra HIGH REDUNDANCY
       FAILGROUP failure_group_1 DISK '/devices/diska1' NAME diska1, '/devices/diska2' NAME diska2
       FAILGROUP failure_group_2 DISK '/devices/diskb1' NAME diskb1, '/devices/diskb2' NAME diskb2
       FAILGROUP failure_group_3 DISK '/devices/diskc1' NAME diskc1, '/devices/diskc2' NAME diskc2;

SQL> CREATE DISKGROUP dskgrp1 NORMAL REDUNDANCY
       FAILGROUP failgrp1 DISK '/dev/d1','/dev/d2'
       FAILGROUP failgrp2 DISK '/dev/d3','/dev/d4';

Although it may appear as such, d3 and d4 are not mirrors of d1 and d2. Rather, ASM uses all the disks to create a fault-tolerant system. For example, a file on the diskgroup might be created in d1 with a copy maintained on d4, a second file may be created on d3 with its copy on d2, and so on. Failure of a specific disk allows a copy on another disk, so that the operation can continue. If there is no hardware based redundancy, ASM can be set up to create a special set of disks called a failgroup in the diskgroup to provide that redundancy; you could lose the controller for both disks d1 and d2, and ASM would mirror copies of the extents across the failure groups to maintain data integrity.

SQL> CREATE DISKGROUP dg_grp1 EXTERNAL REDUNDANCY DISK '/dev/d1','/dev/d2','/dev/d3','/dev/d4';

In the above command, the database will create a diskgroup named dg_grp1 with the physical disks named /dev/d1, /dev/d2, /dev/d3 and /dev/d4. We have also specified the clause EXTERNAL REDUNDANCY, which indicates that the failure of a disk will bring down the diskgroup. This is usually the case when the redundancy is provided by the hardware, such as mirroring. Instead of giving disks separately, we can also specify disk names with wildcards in the DISK clause, as DISK '/dev/d*'.

SQL> CREATE DISKGROUP archdg NORMAL REDUNDANCY
       FAILGROUP fg1 DISK '/devices/diska1','/devices/diska2','/devices/diska3','/devices/diska4'
       FAILGROUP fg2 DISK '/devices/diskb1','/devices/diskb2','/devices/diskb3','/devices/diskb4'
       ATTRIBUTE 'au_size'='4M'; (11g R1 command)

SQL> CREATE DISKGROUP dg2 EXTERNAL REDUNDANCY DISK '/dev/sde1' ATTRIBUTE 'au_size' = '32M'; (11g R1 command)

SQL> CREATE DISKGROUP dg1 DISK '/dev/raw/*'
       ATTRIBUTE 'compatible.rdbms' = '11.1', 'compatible.asm' = '11.2', 'compatible.advm' = '11.2'; (11g R2 command)

ASMCMD equivalent for this command is mkdg (11g R2 command).

Listing diskgroups
To find out all the diskgroups:
SQL> SELECT * FROM V$ASM_DISKGROUP;
ASMCMD equivalent for this command is lsdg.

Dropping diskgroups
Diskgroups can be deleted using the DROP DISKGROUP statement.
SQL> DROP DISKGROUP disk_group_1 INCLUDING CONTENTS;
SQL> DROP DISKGROUP disk_group_1 FORCE INCLUDING CONTENTS; (11g R1 command)
ASMCMD equivalent for this command is dropdg (11g R2 command).

Altering diskgroups
Disks can be added or removed from diskgroups using the ALTER DISKGROUP statement. Remember that the wildcard "*" can be used to reference disks, so long as the resulting string does not match a disk already used by an existing diskgroup.

Adding disks
We may have to add additional disks into the diskgroup to accommodate growing demand.
SQL> ALTER DISKGROUP dskgrp1 ADD DISK '/dev/d5';
SQL> ALTER DISKGROUP dg1 ADD DISK '/devices/disk*3', '/devices/disk*4';
ASMCMD equivalent for this command is chdg (11g R2 command).
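Adding or dropping a disk triggers an automatic rebalance, whose progress can be watched from the ASM instance with the V$ASM_OPERATION view described later. A small sketch:

SQL> SELECT group_number, operation, state, power, est_minutes FROM V$ASM_OPERATION;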

Listing disks
The following command shows all the disks managed by the ASM instance for all the client databases.
SQL> SELECT * FROM V$ASM_DISK;
ASMCMD equivalent for this command is lsdsk (11g R1 command).

Listing client databases
The following command shows all the database instances connected to the ASM instance.
SQL> SELECT * FROM V$ASM_CLIENT;
ASMCMD equivalent for this command is lsct.

Dropping disks
We can remove a disk from a diskgroup.
SQL> ALTER DISKGROUP dg4 DROP DISK diska4;

Undropping disks
The UNDROP DISKS clause of the ALTER DISKGROUP statement allows pending disk drops to be undone. It will not revert drops that have completed, or disk drops associated with the dropping of a diskgroup.
SQL> ALTER DISKGROUP disk_group_1 UNDROP DISKS;

Resizing disks
Disks can be resized using the RESIZE clause of the ALTER DISKGROUP statement. The statement can be used to resize individual disks, all disks in a failure group or all disks in the diskgroup. If the SIZE clause is omitted, the disks are resized to the size of the disk returned by the OS.
SQL> ALTER DISKGROUP dg_data_1 RESIZE DISK diska1 SIZE 150G;

Resizing all disks in a failure group
SQL> ALTER DISKGROUP dg_data_1 RESIZE DISKS IN FAILGROUP fg_1 SIZE 50G;

Resizing all disks in a diskgroup
SQL> ALTER DISKGROUP dg_data_1 RESIZE ALL SIZE 100G;

Online disks
SQL> ALTER DISKGROUP data ONLINE DISK 'disk_0000', 'disk_0001';
SQL> ALTER DISKGROUP data ONLINE DISKS IN FAILGROUP 'fg_99';
SQL> ALTER DISKGROUP data ONLINE ALL;
ASMCMD equivalent for this command is online (11gR2 command).

Offline disks
SQL> ALTER DISKGROUP data OFFLINE DISK 'disk_0000', 'disk_0001';
SQL> ALTER DISKGROUP data OFFLINE DISKS IN FAILGROUP 'fg_99';
SQL> ALTER DISKGROUP data OFFLINE DISK d1_0001 DROP AFTER 30m;
ASMCMD equivalent for this command is offline (11gR2 command).

Mounting diskgroups
Diskgroups are mounted at ASM instance startup and unmounted at ASM instance shutdown. Manual mounting and dismounting can be accomplished using the ALTER DISKGROUP statement as below.
SQL> ALTER DISKGROUP ALL MOUNT;
SQL> ALTER DISKGROUP dg_data2 MOUNT;
SQL> ALTER DISKGROUP dg_data2 MOUNT RESTRICTED; (11gR1 command)
ASMCMD equivalent for this command is mount (11gR2 command).

Dismounting diskgroups
SQL> ALTER DISKGROUP ALL DISMOUNT;
SQL> ALTER DISKGROUP dg_fra DISMOUNT;
ASMCMD equivalent for this command is umount (11gR2 command).

Changing attributes
SQL> ALTER DISKGROUP data3 SET ATTRIBUTE 'compatible.asm' = '11.2'; (11gR1 command)
SQL> ALTER DISKGROUP data3 SET ATTRIBUTE 'compatible.rdbms' = '11.1'; (11gR1 command)
SQL> ALTER DISKGROUP data3 SET ATTRIBUTE 'disk_repair_time' = '4.5h'; (11gR1 command)
ASMCMD equivalent for this command is setattr (11gR2 command).

Listing attributes
SQL> SELECT * FROM V$ASM_ATTRIBUTE;
ASMCMD equivalent for this command is lsattr (11gR2 command).

Rebalancing
Diskgroups can be rebalanced manually using the REBALANCE clause of the ALTER DISKGROUP statement. If the POWER clause is omitted, the ASM_POWER_LIMIT parameter value is used. Rebalancing is only needed when the speed of the automatic rebalancing is not appropriate.

SQL> ALTER DISKGROUP disk_group_1 REBALANCE POWER 6;
ASMCMD equivalent for this command is rebal (11gR2 command).

Listing IO statistics of a diskgroup
SQL> SELECT * FROM V$ASM_DISK_IOSTAT;
ASMCMD equivalent for this command is iostat (11gR2 command).

Directories
As in other file systems, an ASM directory is a container for files, and an ASM directory can be part of a tree structure of other directories. The fully qualified filename represents a hierarchy of directories in which the plus sign (+) represents the root directory. In each diskgroup, ASM automatically creates a directory hierarchy that corresponds to the structure of the fully qualified filenames in the diskgroup. The directories in this hierarchy are known as system-generated directories. A directory hierarchy can be defined using the ALTER DISKGROUP statement to support ASM file aliasing.

Creating a directory
SQL> ALTER DISKGROUP dg_1 ADD DIRECTORY '+dg_1/my_dir';
ASMCMD equivalent for this command is mkdir.

Renaming a directory
SQL> ALTER DISKGROUP dg_1 RENAME DIRECTORY '+dg_1/my_dir' TO '+dg_1/my_dir_2';

Deleting a directory
SQL> ALTER DISKGROUP dg_1 DROP DIRECTORY '+dg_1/my_dir_2' FORCE;
ASMCMD equivalent for this command is rm.

Until 11.1, not all of the above commands could be performed with ASMCMD; from 11.2.0, we can.

Files
Every file created in ASM gets a system-generated filename, known as the fully qualified filename; this is the same as a complete path name in a local file system. A fully qualified filename is an example of an absolute path to a file. An absolute path refers to the full path of a file or directory: it begins with a plus sign (+) followed by a diskgroup name, followed by subsequent directories in the directory tree, and includes directories until the file or directory is reached. A relative path includes only the part of the filename or directory name that is not part of the current directory; that is, the path to the file or directory is relative to the current directory.

There are several ways to reference ASM files. Some forms are used during creation and some for referencing ASM files. The forms of the ASM filenames are:

Filename Type: Format
- Fully Qualified ASM Filename: +dgroup/dbname/file_type/file_type_tag.file.incarnation
- Numeric ASM Filename: +dgroup.file.incarnation
- Alias ASM Filenames: +dgroup/directory/filename
- Alias ASM Filename with Template: +dgroup(template)/alias

- Incomplete ASM Filename: +dgroup
- Incomplete ASM Filename with Template: +dgroup(template)

ASM generates filenames according to the following scheme:
+diskGroupName/databaseName/fileType/fileTypeTag.fileNumber.incarnation
e.g.: +dgroup2/crm/CONTROLFILE/Current.256.541956473
      +dg_fra/hrms/DATAFILE/users.309.621906475

We can add aliases or other directories to a user-created directory. ASM does not place system-generated files into user-created directories; it places them only in system-generated directories.

Aliases
Aliases allow you to reference ASM files using user-friendly names, rather than the fully qualified ASM filenames.

Creating an alias, using the fully qualified filename
SQL> ALTER DISKGROUP dg_3 ADD ALIAS '+dg_3/my_dir/users.dbf' FOR '+dg_3/mydb/datafile/users.392.555341999';

Creating an alias, using the numeric form filename
SQL> ALTER DISKGROUP dg_3 ADD ALIAS '+dg_3/my_dir/my_file.dbf' FOR '+dg_3.317.265390671';
ASMCMD equivalent for this command is mkalias.

Renaming an alias
SQL> ALTER DISKGROUP dg_3 RENAME ALIAS '+dg_3/my_dir/my_file.dbf' TO '+dg_3/my_dir/my_file2.dbf';

Deleting an alias
SQL> ALTER DISKGROUP dg_3 DELETE ALIAS '+dg_3/my_dir/my_file.dbf';
ASMCMD equivalent for this command is rmalias.

Attempting to drop a system alias results in an error.

Dropping Files
Files are not deleted automatically if they are created using aliases, as they are not Oracle Managed Files (OMF), or if a recovery is done to a point-in-time before the file was created. For these circumstances it is necessary to manually delete the files, as shown below.

Dropping file using an alias
SQL> ALTER DISKGROUP dg_2 DROP FILE '+dg_2/my_dir/my_file.dbf';

Dropping file using a numeric form filename
SQL> ALTER DISKGROUP dg_2 DROP FILE '+dg_2.321.123456789';

Dropping file using a fully qualified filename
SQL> ALTER DISKGROUP dg_2 DROP FILE '+dg_2/mydb/datafile/my_ts.292.333222555';
ASMCMD equivalent for this command is rm.

Templates

Templates are named groups of attributes that can be applied to the files within a diskgroup. The level of redundancy and the granularity of the striping can be controlled using templates. Default templates are provided for each file type stored by ASM, but additional templates can be defined as needed. Available attributes are:
- UNPROTECTED - No mirroring or striping regardless of the redundancy setting.
- MIRROR - Two-way mirroring for normal redundancy and three-way mirroring for high redundancy. The MIRROR and FINE attributes cannot be set for external redundancy.
- COARSE - Specifies lower granularity for striping.
- FINE - Specifies higher granularity for striping.

Creating a template
SQL> ALTER DISKGROUP dg_4 ADD TEMPLATE mf_template ATTRIBUTES (MIRROR FINE);
ASMCMD equivalent for this command is mktmpl (11gR2 command).

Modifying a template
SQL> ALTER DISKGROUP dg_4 ALTER TEMPLATE c_template ATTRIBUTES (COARSE);
ASMCMD equivalent for this command is chtmpl (11gR2 command).

Dropping a template
SQL> ALTER DISKGROUP dg_4 DROP TEMPLATE u_template;
ASMCMD equivalent for this command is rmtmpl (11gR2 command).

Listing templates
SQL> SELECT * FROM V$ASM_TEMPLATE;
ASMCMD equivalent for this command is lstmpl (11gR2 command).
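Once defined, a template is applied by embedding its name in the ASM filename at creation time (the "+dgroup(template)" form shown earlier). A sketch using the template created above; the tablespace name is illustrative:

SQL> CREATE TABLESPACE fine_ts DATAFILE '+dg_4(mf_template)' SIZE 100M;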

Checking Metadata
The internal consistency of diskgroup metadata can be checked in a number of ways using the CHECK clause of the ALTER DISKGROUP statement.

Checking metadata for a specific file
SQL> ALTER DISKGROUP dg_5 CHECK FILE '+dg_5/my_dir/my_file.dbf';

Checking metadata for a specific disk in the diskgroup
SQL> ALTER DISKGROUP dg_5 CHECK DISK diska1;

Checking metadata for a specific failure group in the diskgroup
SQL> ALTER DISKGROUP dg_5 CHECK FAILGROUP failure_group_1;

Checking metadata for all disks in the diskgroup
SQL> ALTER DISKGROUP dg_5 CHECK ALL;
SQL> ALTER DISKGROUP dg_5 CHECK;
SQL> ALTER DISKGROUP dg_5 CHECK REPAIR;
SQL> ALTER DISKGROUP dg_5 CHECK NOREPAIR;
ASMCMD equivalent for this command is chkdg (11gR2 command).

User Management
From Oracle 11g release 2, we can create ASM users and usergroups and manipulate the permissions and ownership of files.

Creating an ASM user
SQL> ALTER DISKGROUP data_dg ADD USER 'oracle1';
ASMCMD equivalent for this command is mkusr (11gR2 command).

Listing ASM users
To find out the list of ASM users:
SQL> SELECT * FROM V$ASM_USER;
ASMCMD equivalent for this command is lsusr (11gR2 command).

Creating an ASM usergroup
SQL> ALTER DISKGROUP data_dg ADD USERGROUP 'grp1';
SQL> ALTER DISKGROUP data_dg ADD USERGROUP 'grp2' WITH MEMBER 'oracle1','oracle2';
ASMCMD equivalent for this command is mkgrp (11gR2 command).

Listing ASM usergroups
To find out the list of ASM usergroups:
SQL> SELECT * FROM V$ASM_USERGROUP;
ASMCMD equivalent for this command is lsgrp (11gR2 command).

Listing usergroups to which a user belongs
SQL> SELECT * FROM V$ASM_USERGROUP_MEMBER;
ASMCMD equivalent for this command is groups (11gR2 command).

Dropping an ASM usergroup
SQL> ALTER DISKGROUP data_dg DROP USERGROUP 'grp1';
ASMCMD equivalent for this command is rmgrp (11gR2 command).

Modifying (adding/deleting ASM users to/from) an ASM usergroup
SQL> ALTER DISKGROUP data_dg MODIFY USERGROUP 'grp2' ADD MEMBER 'oracle3';
SQL> ALTER DISKGROUP data_dg MODIFY USERGROUP 'grp2' DROP MEMBER 'oracle3';
ASMCMD equivalent for this command is grpmod (11gR2 command).

Dropping an ASM user
SQL> ALTER DISKGROUP data_dg DROP USER 'oracle1';
ASMCMD equivalent for this command is rmusr (11gR2 command).

Modifying permissions for a file
SQL> ALTER DISKGROUP data_dg SET PERMISSION OWNER=read write, GROUP=read only, OTHER=none FOR FILE '+data_dg/controlfile.f';
ASMCMD equivalent for this command is chmod (11gR2 command).

Modifying ownership of a file
SQL> ALTER DISKGROUP data_dg SET OWNERSHIP OWNER='oracle1', GROUP='grp1' FOR FILE '+data_dg/controlfile.f';
ASMCMD equivalent for this command is chown (11gR2 command).

Volume Management
From 11g release 2, we can create Oracle ASM Dynamic Volume Manager (Oracle ADVM) volumes in diskgroups. The volume device associated with the dynamic volume can then be used to host an Oracle ACFS file system.

Creating a volume
SQL> ALTER DISKGROUP data_dg ADD VOLUME volume1 SIZE 20G;
ASMCMD equivalent for this command is volcreate (11gR2 command).

Listing the volumes information
To find out the volumes information:
SQL> SELECT * FROM V$ASM_VOLUME;
ASMCMD equivalent for this command is volinfo (11gR2 command).

Listing the volumes statistics
To find out the volumes statistics:
SQL> SELECT * FROM V$ASM_VOLUME_STAT;
ASMCMD equivalent for this command is volstat (11gR2 command).

Resizing a volume
SQL> ALTER DISKGROUP data_dg RESIZE VOLUME volume1 SIZE 25G;
ASMCMD equivalent for this command is volresize (11gR2 command).

Dropping a volume
SQL> ALTER DISKGROUP fra_dg DROP VOLUME volume1;
ASMCMD equivalent for this command is voldelete (11gR2 command).

Disabling volumes
SQL> ALTER DISKGROUP data_dg DISABLE VOLUME volume1;
SQL> ALTER DISKGROUP redo_dg DISABLE VOLUME ALL;
ASMCMD equivalent for this command is voldisable (11gR2 command).

Enabling a volume
SQL> ALTER DISKGROUP arch_dg ENABLE VOLUME volume1;
ASMCMD equivalent for this command is volenable (11gR2 command).

Setting usage of a volume
SQL> ALTER DISKGROUP asm_dg_data MODIFY VOLUME volume1 USAGE 'acfs';
ASMCMD equivalent for this command is volset (11gR2 command).

Misc
Listing the current operations
SQL> SELECT * FROM V$ASM_OPERATION;
ASMCMD equivalent for this command is lsop (11gR2 command).

Creating Tablespaces
Now create a tablespace in the main database using a datafile in the ASM-enabled storage. Note how the diskgroup is used as a virtual file system. This approach is useful not only for datafiles, but for other types of Oracle files as well. Everything related to an Oracle database can be created in an ASM diskgroup, including controlfiles, datafiles, logfiles etc. ASM filenames can be used in place of conventional filenames for most Oracle file types. For example, the following commands create new tablespaces with datafiles in ASM diskgroups.

SQL> CREATE TABLESPACE my_ts DATAFILE '+disk_group_1' SIZE 100M AUTOEXTEND ON;
SQL> CREATE TABLESPACE user_data DATAFILE '+dskgrp1/user_data_01' SIZE 1024M;

For instance, we can create online redo log files as:
LOGFILE GROUP 1 ('+dskgrp1/redo/group_1.258.659723485',
                 '+dskgrp2/redo/group_1.258.659723485') SIZE 50M, ...

Archived log destinations can also be set to a diskgroup (see the sketch after this section). Backup is another great use of ASM: you can set up a bunch of inexpensive disks to create the recovery area of a database, which can be used by RMAN to create backup datafiles and archived log files.

ASM supports files created by and read by the Oracle database only; it is not a replacement for a general-purpose file system. Until Oracle 11g release 1, we cannot store binaries or flat files, and we cannot use ASM for storing the voting disk and OCR. This is due to the fact that Clusterware starts before the ASM instance and must be able to access these files, which is not possible if you are storing them on ASM; you will have to use raw devices or OCFS or any other shared storage. But from 11g release 2, we can store ALL files on ASM.

Can we see the files stored in the ASM instance using standard Unix commands? No, you cannot see the files using standard Unix commands like ls. You need to use the utility called asmcmd to do this. Oracle 10g release 2 introduces asmcmd, which makes administration very easy.
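For example, the recovery area and archived log destination of a database instance can be pointed at a diskgroup. A sketch, assuming a diskgroup named dg_fra; note that DB_RECOVERY_FILE_DEST_SIZE must be set before the recovery area destination:

SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 100G;
SQL> ALTER SYSTEM SET db_recovery_file_dest = '+dg_fra';
SQL> ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=+dg_fra';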

$ asmcmd
ASMCMD>

ASMLIB is the support library for ASM. ASMLIB allows an Oracle database using ASM more efficient and capable access to the diskgroups. The purpose of ASMLIB is to provide an alternative interface to identify and access block devices. The ASMLIB API enables storage and OS vendors to supply extended storage-related features.

Migrating to ASM using RMAN
The following method shows how a database can be migrated to ASM from a disk based backup:
1) Shutdown the database.
SQL> SHUTDOWN IMMEDIATE
2) Modify the parameter file of the database: set the DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n parameters to the relevant ASM diskgroups (see the sketch after this list).
3) Remove the CONTROL_FILES parameter from the spfile, so the control files will be moved to the DB_CREATE_* destination and the spfile gets updated automatically. If you are using a pfile, the CONTROL_FILES parameter must be set to the appropriate ASM files or aliases.
4) Start the database in nomount mode.
RMAN> STARTUP NOMOUNT
5) Restore the controlfile into the new location from the old location.
RMAN> RESTORE CONTROLFILE FROM 'old_control_file_name';
6) Mount the database.
RMAN> ALTER DATABASE MOUNT;
7) Copy the database into the ASM diskgroup.
RMAN> BACKUP AS COPY DATABASE FORMAT '+disk_group';
8) Switch all datafiles to the new ASM location.
RMAN> SWITCH DATABASE TO COPY;
9) Open the database.
RMAN> ALTER DATABASE OPEN;
10) Create new redo logs in ASM and delete the old ones.
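A sketch for step 2, assuming diskgroups named +dg_data and +dg_fra (the names are illustrative):

SQL> ALTER SYSTEM SET db_create_file_dest = '+dg_data' SCOPE=spfile;
SQL> ALTER SYSTEM SET db_create_online_log_dest_1 = '+dg_fra' SCOPE=spfile;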

ASM New features in Oracle 11g release 1
- Support for rolling upgrades.
- Faster Mirror Resync: fast mirror resync after temporary connectivity loss. ASM drops disks if they remain offline for more than 3.6 hours; the diskgroup's default time limit is altered by changing the DISK_REPAIR_TIME attribute with a unit of minutes (M/m) or hours (H/h).
SQL> alter diskgroup dg-name set attribute 'disk_repair_time'='4.5h';
- Automatic bad block detection and repair.
- We can maintain version compatibilities at diskgroup level.
SQL> alter diskgroup dg-name set attribute 'compatible.asm'='11.1';
SQL> alter diskgroup dg-name set attribute 'compatible.rdbms'='11.1';
- Supports variable extent (allocation unit) sizes. The total number of extents in the shared pool will be significantly reduced, with improved performance.
SQL> create diskgroup ... attribute 'au_size' = 'number-of-bytes';
- ASM Preferred Mirror Read or Preferred Read Failure Groups: the ASM_PREFERRED_READ_FAILURE_GROUPS parameter is set to the preferred failure groups for each node.
- New SYSASM role (like SYSDBA, SYSOPER) & OSASM OS group (like OSDBA, OSOPER) to manage the ASM instance only. This will separate storage administration from database administration.
$ sqlplus "/as sysasm" or $ asmcmd -a sysasm
- Can mount the diskgroup in restricted mode, to rebalance faster.
SQL> alter diskgroup dg-name mount restricted;
- We can drop a diskgroup forcefully.
SQL> drop diskgroup dg-name force including contents;
- New commands in ASMCMD:
  o cp - to copy between ASM and a local or remote destination.
  o lsdsk - to list (check) disks.
  o md_backup - to backup metadata.
  o md_restore - to restore metadata.
  o remap - to repair a range of physical blocks on disk.
- We can execute OS commands at the asmcmd prompt by using !, in the same way we do at the SQL prompt.

ASM New features in Oracle 11g release 2
- ASM Configuration Assistant (ASMCA) is a new tool to install and configure ASM.
- ASM Cluster File System (ACFS) provides support for files such as Oracle binaries, Clusterware binaries, report files, trace files, alert logs, external files, and other application datafiles. ACFS can be managed by ACFSUTIL, OEM, ASMCA, ASMCMD and the SQL command interface.
- ACFS Snapshots are read-only, on-line, space efficient, point in time copies of an ACFS file system. ACFS snapshots can be used to recover from inadvertent modification or deletion of files from a file system.
- ASM Dynamic Volume Manager (ADVM) provides volume management services and a standard device driver interface to its clients (ACFS, ext3, OCFS2 and third party file systems).
- ASM can hold and manage the OCR (Oracle Cluster Registry) file and voting file.
- ASM diskgroups can be renamed, by using the renamedg command.
- The ASMCMD utility can do:
  o startup and shutdown of ASM instances
  o managing diskgroups (create, mount, alter, drop)
  o user management
  o template management
  o volume management
  o file access control (like OS: ugo and rwx)

ASM Views
The ASM configuration can be viewed using the V$ASM_% views, which contain information depending on whether they are queried from the ASM instance, or a dependant database instance.

V$ASM_ALIAS - In ASM instance: displays a row for each alias present in every diskgroup mounted by the ASM instance. In DB instance: returns no rows.

V$ASM_ATTRIBUTE (11gR2) - Displays attributes of diskgroups.
V$ASM_CLIENT - In ASM instance: displays a row for each database instance using a diskgroup managed by the ASM instance. In DB instance: displays a row for the ASM instance if the database has open ASM files.
V$ASM_DISK or V$ASM_DISK_STAT - In ASM instance: displays a row for each disk discovered by the ASM instance, including disks which are not part of any diskgroup. In DB instance: displays a row for each disk in diskgroups in use by the database instance.
V$ASM_DISK_IOSTAT (11gR2) - Displays IO statistics of disks.
V$ASM_DISKGROUP or V$ASM_DISKGROUP_STAT - In ASM instance: displays a row for each diskgroup discovered by the ASM instance. In DB instance: displays a row for each diskgroup mounted by the local ASM instance.
V$ASM_FILE - In ASM instance: displays a row for each file, for each diskgroup mounted by the ASM instance. In DB instance: displays no rows.
V$ASM_FILESYSTEM (11gR2) - In ASM instance: displays a row for each filesystem, for each diskgroup mounted by the ASM instance. In DB instance: displays no rows.
V$ASM_OPERATION - In ASM instance: displays a row for each long running operation executing in the ASM instance. In DB instance: displays no rows.
V$ASM_TEMPLATE - Displays a row for each template present in each diskgroup mounted by the ASM instance (in both ASM and DB instances).
V$ASM_USER (11gR2) - Displays a row for each ASM user.
V$ASM_USERGROUP (11gR2) - Displays a row for each ASM usergroup.
V$ASM_USERGROUP_MEMBER (11gR2) - Displays ASM usergroups and their members.
V$ASM_VOLUME or V$ASM_VOLUME_STAT (11gR2) - Displays a row for each volume.

An ASM backup can be taken by spooling the output of the ASM views to a text file:

SPOOL asm_views.log
SET ECHO ON
SELECT * FROM V$ASM_ALIAS;
SELECT * FROM V$ASM_ATTRIBUTE;
SELECT * FROM V$ASM_CLIENT;
SELECT * FROM V$ASM_DISK;
SELECT * FROM V$ASM_DISK_STAT;
SELECT * FROM V$ASM_DISK_IOSTAT;
SELECT * FROM V$ASM_DISKGROUP;
SELECT * FROM V$ASM_DISKGROUP_STAT;
SELECT * FROM V$ASM_FILE;
SELECT * FROM V$ASM_FILESYSTEM;
SELECT * FROM V$ASM_OPERATION;
SELECT * FROM V$ASM_TEMPLATE;
SELECT * FROM V$ASM_USER;
SELECT * FROM V$ASM_USERGROUP;
SELECT * FROM V$ASM_USERGROUP_MEMBER;
SELECT * FROM V$ASM_VOLUME;
SELECT * FROM V$ASM_VOLUME_STAT;
SPOOL OFF
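Besides spooling the views, from 11g the diskgroup metadata itself can be dumped with the ASMCMD md_backup command listed among the new features above. A sketch in the 11gR2 form of the command; the backup file name and diskgroup name are illustrative, and the syntax varies between releases:

ASMCMD> md_backup /tmp/dg_data_md.bkp -G dg_data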

We can't see the files stored in the ASM instance using standard UNIX commands like ls; ASM diskgroups are not visible outside the database for regular file system administration tasks such as copying and creating directories. We need to use asmcmd, a powerful and easy to use command line tool, introduced with Oracle Database 10g Release 2.

asmcmd utility in Oracle Automatic Storage Management (ASM)
In Oracle Database 10g Release 1, managing ASM through SQL interfaces posed a challenge for administrators who were not very familiar with SQL and preferred a more conventional command line interface. From Oracle Database 10g Release 2, we have the option to manage ASM files by using the ASMCMD command line utility. The idea of this tool is to make administering the ASM files similar to administering standard OS files: the asmcmd command line interface is very similar to standard UNIX/Linux commands and supports all common Linux commands, but it only manages files at the OS level. Also from Oracle Database 10g Release 2, we can transfer files from ASM to locations outside of the diskgroups via FTP and through a web browser using HTTP. ASMCMD is included in the installation of the Oracle Database software (from 10g Release 2), so no separate setup is required.

ASMCMD is used to (from 11.2.0):
- start/stop ASM instances
- create, alter or drop diskgroups
- mount or dismount diskgroups
- list the contents, statistics and attributes of diskgroups, and the files on them
- backup and restore the metadata of diskgroups
- create and remove directories, templates and aliases
- manage volumes
- make the disks/failure groups online or offline
- rebalance the diskgroups
- repair physical blocks
- copy files between the diskgroups and the OS
- backup and restore the SP file
- add/remove/modify/list users from the password file
- add/remove/modify/list templates
- manipulate the diskstring
- create/modify/remove ASM users and groups
- change ASM file permissions, owners and groups
- display space utilization
- perform searches

ASMCMD has equivalent commands for all the SQL commands that can be performed through SQL*Plus.

Invoking asmcmd

To start using ASMCMD, the environment variables ORACLE_HOME and ORACLE_SID must be set to the ASM instance. The default value of the ASM SID for a single instance database is +ASM. In Real Application Clusters (RAC) environments, the default value of the ASM SID on any node is +ASMnode# (+ASM1, +ASM2, ...). You must have ASM configured on the machine and started, and the ASM diskgroups mounted. Ensure that $ORACLE_HOME/bin is in the PATH environment variable. You must log in as a user that has SYSASM or SYSDBA privileges through OS authentication; from 11g, the SYSASM privilege is preferred & default.

$ export ORACLE_SID=+ASM

To enter interactive mode, type asmcmd, which brings up the ASM command prompt.
$ asmcmd
ASMCMD>

To run a specific ASMCMD command non-interactively, we can type:
$ asmcmd command arguments

We can specify the -p option with the asmcmd command to include the current directory in the ASMCMD prompt, and we can navigate the hierarchy of both system-generated directories and user-created directories with the cd command.
$ asmcmd -p
ASMCMD [+] > cd dgroup1/hrms
ASMCMD [+dgroup1/hrms] >

We can specify the -a option to choose the type of connection, either SYSASM or SYSDBA. This is from 11g.
$ asmcmd -a sysasm or asmcmd -a sysdba

We can specify the -V option when starting asmcmd to display the asmcmd version number.
$ asmcmd -V
asmcmd version 11.1.0.6.0

We can specify the -v option when starting asmcmd to display additional information.
$ asmcmd -v

ASM Filenames & Directories
Pathnames within ASMCMD can use either the forward slash (/) as in UNIX or the backward slash (\) as in Windows; they're interchangeable. Filenames are not case sensitive, but are case retentive: when we run an ASMCMD command that accepts a filename or directory name as an argument, ASMCMD retains the case of the name that you entered. We can use the name as either an absolute path or a relative path. Also, we can use either the UNIX wildcard "*" or "%" to match any string in a pathname. We can also create our own directories as subdirectories of the system-generated directories, using the ALTER DISKGROUP command or with the ASMCMD mkdir command; those directories can have subdirectories. The fully qualified filename represents a hierarchy of directories in which the plus sign (+) represents the root directory. An absolute path refers to the full path of a file or directory; it begins with a plus sign (+) followed by a diskgroup name, followed by subsequent directories in the directory tree. The absolute path includes

" in place of a directory name. To run ASMCMD in interactive mode: 1.f => +dgroup2/hrms/CONTROLFILE/Current. Aliases are similar to symbolic links in UNIX flavors.256. The following rm command uses an absolute path for the filename: ASMCMD [+] > rm +dgroup1/hrms/datafile/users.267.555341999 A relative path includes only the part of the filename or directory name that is not part of the current directory. each alias is listed with the system-generated file to which the alias refers. These two wildcard characters behave identically.f +dgroup1/crm/ctl1. That is. ASMCMD in Non Interactive Mode . ASMCMD [+dgroup1/hrms/DATAFILE] > ls -l undotbs1. Using an absolute path enables the command to access the file or directory regardless of where the current directory is set. ASM's auto generated names can be a bit strange. A fully qualified filename is an example of an absolute path to a file.557429239 Paths to directories can also be relative and we can use the pseudo-directories ". Alias As in UNIX. where we are prompted to enter ASMCMD commands. so creating aliases makes working with ASM files with ASMCMD easier.directories until the file or directory is reached..f If you run the ASMCMD ls (list directory) with the -l flag.280.541956473 We can run the ASMCMD utility in either interactive or non interactive mode. The wildcard characters * and % match zero or more characters anywhere within an absolute or relative path. We can create aliases at the diskgroup level or in any system-generated or user-created subdirectory. Aliases simplify ASM filename administration. Continue entering ASMCMD commands. ASMCMD in Interactive Mode The interactive mode of the ASMCMD utility provides an environment like shell or SQL*Plus. we can create alias names for files listed in the diskgroup. Enter the following at the OS command prompt: $ asmcmd Oracle displays an ASMCMD command prompt as follows: ASMCMD> 2. The command runs and displays its output. The following are examples of aliases: +dgroup1/ctl1.f +dgroup1/mydir/ctl1." and ". Enter an ASMCMD command and press Enter. ctl1. the path to the file or directory is relative to the current directory. We can create aliases with an ALTER DISKGROUP command or with the mkalias ASMCMD command. 3. An alias has at a minimum the diskgroup name as part of its complete path. which saves typing of the full directory or file name. and then ASMCMD prompts for the next command. Aliases are user-friendly filenames that are references or pointers to system-generated filenames. Enter the command exit to exit ASMCMD.

ASMCMD in Non Interactive Mode

In non-interactive mode, you run a single ASMCMD command by including the command and its arguments on the command line that invokes ASMCMD; ASMCMD runs the command, generates output, and then exits. The non-interactive mode is especially useful for running scripts. To run ASMCMD in non-interactive mode, at the command prompt enter the following:

$ asmcmd command arguments

$ asmcmd ls -l
State    Type    Rebal  Unbal  Name
MOUNTED  NORMAL  N      N      DG_GROUP1/
MOUNTED  NORMAL  N      N      DG_GROUP2/

$ asmcmd lsdg ASM_DG_FRA

$ asmcmd mkdir +data/hrms/archives

$ asmcmd md_restore -b /u01/app/oracle/BACKUP/asm_md_backup -t nodg
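Because each invocation returns to the shell, non-interactive mode drops naturally into scheduled scripts. A minimal sketch, assuming a Grid home of /u01/app/grid and an instance named +ASM (both hypothetical):

#!/bin/sh
# Nightly ASM space report (illustrative only)
export ORACLE_HOME=/u01/app/grid
export ORACLE_SID=+ASM
export PATH=$ORACLE_HOME/bin:$PATH
asmcmd lsdg  > /tmp/asm_space_report.txt
asmcmd lsct >> /tmp/asm_space_report.txt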

Oracle 10g (Release 2) commands

cd command
Changes to a specified directory.
cd dir_name
dir_name may be specified as either an absolute path or a relative path, including the "." and ".." pseudo-directories, and may contain wildcards. We can go up or down the hierarchy of the current directory tree by providing a directory argument to the cd command.

ASMCMD [+dgroup2/crm] > cd +dgroup1/hrms
ASMCMD [+dgroup1/hrms] > cd DATAFILE
ASMCMD [+dgroup1/hrms/DATAFILE] > cd ..
ASMCMD [+] > cd +dgroup1/sample/C*

If a wildcard pattern matches only one directory, then cd changes the directory to that destination. If the wildcard pattern matches multiple directories, then ASMCMD does not change the directory but instead returns an error.

pwd command
Displays the absolute path of the current directory.
pwd
ASMCMD> pwd

help command
Displays the syntax of a command and a description of the command parameters.
help [command]   or   ? [command]
If you do not specify a value for command, then the help command lists all of the ASMCMD commands and general information about using the ASMCMD utility.

ASMCMD> help lsct
ASMCMD> ? mkgrp

du command
Displays the total space used for files in the specified directory and in the entire directory tree under the directory. This command is similar to the du -s command on UNIX flavors.
du [-H] [dir_name]
The -H flag suppresses column headings from the output. dir_name can contain wildcard characters. If you do not specify dir_name, then information about the current directory is displayed.

Used_MB - the total space used in the directory; this value does not include mirroring.
Mirror_used_MB - this value includes mirroring. For example, if a normal redundancy diskgroup contains 100 MB of data, then, assuming that each file in the diskgroup is 2-way mirrored, Used_MB is 100 MB and Mirror_used_MB is 200 MB.

ASMCMD [+dgroup1/prod] > du
Used_MB  Mirror_used_MB
1251     2507

ASMCMD> du TEMPFILE
Used_MB  Mirror_used_MB
24582    24582

In this case, Used_MB and Mirror_used_MB are the same, because the diskgroup is not mirrored.

ASMCMD [+ASM_DG_DATA/CRM] > du -H DATAFILE/
98203  98203

find command
Displays the absolute paths of all occurrences of the specified name pattern (which can include wildcards) in a specified directory and its subdirectories.
find [-t type] dir_name name_pattern
find [--type type] dir_name name_pattern   (Oracle 11g R2 syntax)
This command searches the specified directory and all subdirectories under it in the directory tree for the supplied name_pattern. The value that you use for name_pattern can be a directory name or a filename. type can be one of the type values from the TYPE column of V$ASM_FILE: CONTROLFILE, DATAFILE, ONLINELOG, ARCHIVELOG, TEMPFILE, BACKUPSET, PARAMETERFILE, DATAGUARDCONFIG, FLASHBACK, CHANGETRACKING, DUMPSET, AUTOBACKUP, XTRANSPORT.

ASMCMD> find +dgroup1 undo*
+dgroup1/crm/DATAFILE/UNDOTBS1.258.555341963
+dgroup1/crm/DATAFILE/UNDOTBS1.272.557429239

ASMCMD> find -t CONTROLFILE +dg_data/hrms *
+dg_data/hrms/CONTROLFILE/Current.260.555342185
+dg_data/hrms/CONTROLFILE/Current.261.555342183

ASMCMD [+] > find --type CONTROLFILE +data/devdb *
+data/devdb/CONTROLFILE/Current.256.691577263

ls command
Lists the contents of an ASM directory, the attributes of the specified file, or the names and attributes of all diskgroups, from V$ASM_DISKGROUP_STAT (default) or V$ASM_DISKGROUP.
ls [-lsdrtLacgH] [name]
ls [-lsdtLacgH] [--reverse] [--permission] [pattern]   (11g R2 syntax)
name can be a filename or directory name and can include wildcards; pattern restricts the output to matching names, with duplicates removed. Directories are listed with a trailing forward slash (/) to distinguish them from files. If name is a directory name, then ASMCMD lists the contents of the directory and, depending on flag settings, also lists information about each directory member. If name is a filename, then ASMCMD lists the file and, depending on the flag settings, the attributes of the specified file. If you specify all of the flags, then the command shows a union of their attributes.

Flag Description
(none) Displays only filenames and directory names.
-l Displays extended file information, including striping and redundancy information and whether the file was system-generated (indicated by Y under the SYS column) or user-created (as in the case of an alias, indicated by N under the SYS column).
-s Displays file space information.
-d If the value for the name argument is a directory, then ASMCMD displays information about that directory, rather than the directory contents. Typically used with another flag, such as the -l flag.
-r or --reverse Reverses the sort order of the listing.
-t Sorts the listing by timestamp.
-L If the value for the name argument is an alias, then ASMCMD displays information about the file that it references. Typically used with another flag, such as the -l flag.
-a For each listed file, displays the absolute path of the alias that references it, if any.
-c Selects from V$ASM_DISKGROUP, or GV$ASM_DISKGROUP if the -g flag is also specified.
-g Selects from GV$ASM_DISKGROUP_STAT, or GV$ASM_DISKGROUP if the -c flag is also specified. GV$ASM_DISKGROUP.INST_ID is included in the output.
-H Suppresses column headings.
--permission Shows the permissions of a file (V$ASM_FILE.permission, V$ASM_FILE.usergroup, V$ASM_FILE.owner, V$ASM_ALIAS.name).

Note that not all possible file attributes or diskgroup attributes are included. To see the complete set of column values for a file or a diskgroup, query the V$ASM_FILE and V$ASM_DISKGROUP views.

ASMCMD [+dgroup1/sample/DATAFILE] > ls
EXAMPLE.269.555341963
SYSAUX.257.555341961
SYSTEM.256.555341961
UNDOTBS1.258.555341963
UNDOTBS1.272.557429239
USERS.259.555341961

ASMCMD [+dgroup1/sample/DATAFILE] > ls -l
Type      Redund  Striped  Time             Sys  Name
DATAFILE  MIRROR  COARSE   APR 18 19:16:07  Y    EXAMPLE.269.555341963
DATAFILE  MIRROR  COARSE   MAY 09 22:01:28  Y    SYSAUX.257.555341961
DATAFILE  MIRROR  COARSE   APR 19 19:16:24  Y    SYSTEM.256.555341961
DATAFILE  MIRROR  COARSE   MAY 04 17:27:34  Y    UNDOTBS1.258.555341963
DATAFILE  MIRROR  COARSE   MAY 05 12:28:42  Y    UNDOTBS1.272.557429239
DATAFILE  MIRROR  COARSE   APR 18 19:16:07  Y    USERS.259.555341961

ASMCMD [+dgroup1/sample/DATAFILE] > ls -lt
Type      Redund  Striped  Time             Sys  Name
DATAFILE  MIRROR  COARSE   MAY 09 22:01:28  Y    SYSAUX.257.555341961
DATAFILE  MIRROR  COARSE   MAY 05 12:28:42  Y    UNDOTBS1.272.557429239
DATAFILE  MIRROR  COARSE   MAY 04 17:27:34  Y    UNDOTBS1.258.555341963
DATAFILE  MIRROR  COARSE   APR 19 19:16:24  Y    SYSTEM.256.555341961
DATAFILE  MIRROR  COARSE   APR 18 19:16:07  Y    EXAMPLE.269.555341963
DATAFILE  MIRROR  COARSE   APR 18 19:16:07  Y    USERS.259.555341961

ASMCMD [+dgroup1/sample/DATAFILE] > ls -l undo*
Type      Redund  Striped  Time             Sys  Name
DATAFILE  MIRROR  COARSE   MAY 04 17:27:34  Y    UNDOTBS1.258.555341963
DATAFILE  MIRROR  COARSE   MAY 05 12:28:42  Y    UNDOTBS1.272.557429239

ASMCMD [+dgroup1/sample/DATAFILE] > ls -s
Block_Size  Blocks  Bytes      Space      Name
8192        12801   104865792  214958080  EXAMPLE.269.555341963
8192        48641   398467072  802160640  SYSAUX.257.555341961
8192        61441   503324672  101187584  SYSTEM.256.555341961
8192        6401    52436992   110100480  UNDOTBS1.258.555341963
8192        12801   104865792  214958080  UNDOTBS1.272.557429239
8192        641     5251072    12582912   USERS.259.555341961

ASMCMD [+] > ls --permission +data/prod/datafile
User  Group  Permission  Name
             rw-rw-rw-   EXAMPLE.265.691577295
             rw-rw-rw-   SYSAUX.257.691577149
             rw-rw-rw-   SYSTEM.256.691577151
             rw-rw-rw-   UNDOTBS1.258.691577151
             rw-rw-rw-   USERS.259.691577149

ASMCMD [+dgroup1] > ls +dgroup1/sample
CONTROLFILE/
DATAFILE/
ONLINELOG/
PARAMETERFILE/
TEMPFILE/
spfilesample.ora

ASMCMD [+dgroup1] > ls -l +dgroup1/sample
Type  Redund  Striped  Time  Sys  Name
                             Y    CONTROLFILE/
                             Y    DATAFILE/
                             Y    ONLINELOG/
                             Y    PARAMETERFILE/
                             Y    TEMPFILE/
                             N    spfilesample.ora => +dgroup1/sample/PARAMETERFILE/spfile.270.555342443

ASMCMD [+dgroup1] > ls -r +dgroup1/sample
spfilesample.ora
TEMPFILE/
PARAMETERFILE/
ONLINELOG/
DATAFILE/
CONTROLFILE/

ASMCMD [+dgroup1] > ls -lL example_df2.f
Type      Redund  Striped  Time          Sys  Name
DATAFILE  MIRROR  COARSE   APR 27 11:04  N    example_df2.f => +dgroup1/sample/DATAFILE/EXAMPLE.271.556715087

ASMCMD [+dgroup1] > ls -a +dgroup1/sample/DATAFILE/EXAMPLE.271.556715087
+dgroup1/example_df2.f => EXAMPLE.271.556715087

ASMCMD [+dgroup1] > ls -lH +dgroup1/sample/PARAMETERFILE
PARAMETERFILE  MIRROR  COARSE  MAY 04 21:48  Y  spfile.270.555342443

If you enter ls +, then the command returns information about all diskgroups, including information about whether the diskgroups are mounted.

ASMCMD [+dgroup1] > ls -l +
State    Type    Rebal  Unbal  Name
MOUNTED  NORMAL  N      N      DGROUP1/
MOUNTED  HIGH    N      N      DGROUP2/
MOUNTED  EXTERN  N      N      DGROUP3/

ASMCMD [+USERDG2/prod] > ls -l
Type  Redund  Striped  Time  Sys  Name
                             Y    CONTROLFILE/
                             Y    DATAFILE/
                             Y    ONLINELOG/
                             Y    TEMPFILE/
                             N    control01.ctl => +USERDG2/prod/CONTROLFILE/Current.260.573852215
                             N    control02.ctl => +USERDG2/prod/CONTROLFILE/Current.261.573852215
                             N    control03.ctl => +USERDG2/prod/CONTROLFILE/Current.262.573852215
                             N    example01.dbf => +USERDG2/prod/DATAFILE/UNKNOWN.267.573852295
                             N    redo01.log => +USERDG2/prod/ONLINELOG/group_1.263.573852243
                             N    redo02.log => +USERDG2/prod/ONLINELOG/group_2.264.573852249
                             N    redo03.log => +USERDG2/prod/ONLINELOG/group_3.265.573852255
                             N    sysaux01.dbf => +USERDG2/prod/DATAFILE/SYSAUX.257.573852113
                             N    system01.dbf => +USERDG2/prod/DATAFILE/SYSTEM.256.573852115
                             N    temp01.dbf => +USERDG2/prod/TEMPFILE/TEMP.268.573852277
                             N    undotbs01.dbf => +USERDG2/prod/DATAFILE/UNDOTBS1.258.573852115
                             N    users01.dbf => +USERDG2/prod/DATAFILE/USERS.259.573852115

The Sys column, immediately to the left of the Name column, shows whether the file or directory was created by the ASM system. Because entries such as the CONTROLFILE directory and the aliases are not real files, the file attributes, such as size, free space, and redundancy, shown in the first few columns of the output are null.

The following examples illustrate the use of wildcards:

ASMCMD> ls +dgroup1/mydir1/d*
data1.f
dummy.f

ASMCMD> ls +dgroup1/sample/*
+dgroup1/sample/CONTROLFILE/:
Current.260.555342185
Current.261.555342183

+dgroup1/sample/DATAFILE/:
EXAMPLE.269.555341963
SYSAUX.257.555341961
SYSTEM.256.555341961
UNDOTBS1.258.555341963
USERS.259.555341961

+dgroup1/sample/ONLINELOG/:
group_1.262.555342191
group_1.263.555342201
group_2.264.555342195
group_2.266.555342197

+dgroup1/sample/PARAMETERFILE/:
spfile.270.555342443

+dgroup1/sample/TEMPFILE/:
TEMP.268.555342229

lsct command

Lists information about current ASM clients, from V$ASM_CLIENT. A client, such as a database or Oracle ASM Dynamic Volume Manager (Oracle ADVM), uses diskgroups that are managed by the ASM instance to which ASMCMD is currently connected. An ASM instance serves as a storage container; it is not a database by itself. Other databases use the space in its diskgroups for datafiles, control files, and so on.
lsct [-gH] [disk_group]
If a diskgroup is specified, then information about only that diskgroup is listed.

Flag Description
(none) Displays information about current ASM clients from V$ASM_CLIENT.
-g Selects from GV$ASM_CLIENT. GV$ASM_CLIENT.INST_ID is included in the output.
-H Suppresses column headings.

How do you know how many databases are using an ASM instance?

ASMCMD [+DG1_FRA] > lsct
DB_Name  Status     Software_Version  Compatible_version  Instance_Name
PROD     CONNECTED  10.2.0.1.0        10.2.0.1.0          PROD
REP      CONNECTED  10.2.0.2.0        10.2.0.2.0          REP

ASMCMD [+] > lsct flash   (in 11g)
DB_Name  Status     Software_Version  Compatible_version  Instance_Name  Group_Name
TESTDB   CONNECTED  11.2.0.1.0        11.2.0.0.0          TESTDB         FLASH

SQL equivalent for lsct command is:
SQL> SELECT * FROM V$ASM_CLIENT.

lsdg command
Lists all diskgroups and their attributes from V$ASM_DISKGROUP_STAT (default) or V$ASM_DISKGROUP. The output also includes notification of any current rebalance operation for a diskgroup; the REBAL column of GV$ASM_OPERATION is included in the output. If a diskgroup is specified, then lsdg returns information about only that diskgroup.
lsdg [-gcH] [disk_group]
lsdg [-gH] [--discovery] [pattern]   (11.2.0 syntax)

Flag Description
(none) Displays all the diskgroup attributes.
-c or --discovery Selects from V$ASM_DISKGROUP, or GV$ASM_DISKGROUP if the -g flag is also specified. This --discovery option is ignored if the ASM instance is version 10.1 or earlier.
-g Selects from GV$ASM_DISKGROUP_STAT, or GV$ASM_DISKGROUP if the -c or --discovery flag is also specified. GV$ASM_DISKGROUP.INST_ID is included in the output.
-H Suppresses column headings.
pattern Returns only information about the specified diskgroup or diskgroups that match the supplied pattern.

To see the complete set of attributes for a diskgroup, use the V$ASM_DISKGROUP_STAT or V$ASM_DISKGROUP views.

Attribute Name Description
Name Diskgroup name.
State State of the diskgroup (BROKEN, CONNECTED, DISMOUNTED, MOUNTED, QUIESCING, and UNKNOWN).
Type Diskgroup redundancy (NORMAL, HIGH, EXTERN).
Rebal Y if a rebalance operation is in progress.
Sector Sector size in bytes.
Block Block size in bytes.
AU Allocation unit size in bytes.
Total_MB Size of the diskgroup in MB.
Free_MB Free space in the diskgroup in MB, without redundancy, from V$ASM_DISKGROUP.
Req_mir_free_MB Amount of space that must be available in the diskgroup to restore full redundancy after the most severe failure that can be tolerated by the diskgroup. This is the REQUIRED_MIRROR_FREE_MB column from V$ASM_DISKGROUP.
Usable_file_MB Amount of free space, adjusted for mirroring, that is available for new files, from V$ASM_DISKGROUP.
Offline_disks Number of offline disks in the diskgroup. Offline disks are eventually dropped.
Voting_files Specifies whether the diskgroup contains voting files (Y or N).

ASMCMD [+] > lsdg dgroup2
State    Type    Rebal  Sector  Block  AU       Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Name
MOUNTED  NORMAL  N      512     4096   1048576  206       78       0                39              0              dgroup2

The following example lists the attributes of the dg_data diskgroup (in 11.2.0):

ASMCMD [+] > lsdg dg_data
State    Type    Rebal  Sector  Block  AU       Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N      512     4096   4194304  12288     8835     1117             3859            0              N             DG_DATA

SQL equivalents for lsdg command are:
SQL> SELECT * FROM V$ASM_DISKGROUP.
SQL> SELECT * FROM V$ASM_DISKGROUP_STAT.

mkalias command
Creates an alias for the specified system-generated filename.
mkalias file alias
alias must be in the same diskgroup as the system-generated file. Only one alias is permitted for each ASM file.

ASMCMD [+dgroup1/sample/DATAFILE] > mkalias SYSAUX.257.555341961 sysaux.f

SQL equivalent for mkalias command is:
SQL> ALTER DISKGROUP dg_name ADD ALIAS user_alias FOR file.

mkdir command
Creates ASM directories under the current directory. The current directory can be created by the system or by the user. You cannot create a directory at the root (+) level, that is, a diskgroup.
mkdir dir_name [dir_name ...]

ASMCMD [+dgroup1] > mkdir subdir1 subdir2

SQL equivalent for mkdir command is:
SQL> ALTER DISKGROUP dg_name ADD DIRECTORY dir_name [, dir_name ...].

rm command
Deletes specified ASM files and directories.
rm [-rf] name [name] ...
If name is a file or alias (and can contain wildcard characters), then the rm command can delete the file or alias only if it is not currently in use by a client database. If name is a directory, then the rm command can delete it only if it is empty (unless the -r flag is used) and it is not a system-generated directory. If name is an alias, then the rm command deletes both the alias and the file to which the alias refers. To delete only an alias and retain the file that the alias references, use the rmalias command. When deleting, either the system-generated file or the alias must be present in the directory in which you run the rm command.

Flag Description
-f Force. Removes the file without user interaction. If you use a wildcard or the -r flag, then the rm command prompts you to confirm the deletion before proceeding, unless you specify the -f flag.
-r Recursive. Removes sub-directories also. This enables you to delete a nonempty directory, including all files and directories in it and in the entire directory tree underneath it.

If you use a wildcard, the rm command deletes all of the matches except nonempty directories, unless you use the -r flag. For example, if we have an alias +dg1/dir1/file.alias => +dg/orcl/DATAFILE/System.256.146589651, then running the rm -r +dg1/dir1 command removes +dg1/dir1/file.alias as well as +dg/orcl/DATAFILE/System.256.146589651.

ASMCMD [+dgroup1/sample/DATAFILE] > rm alias293.f
ASMCMD> rm -rf +dg/orcl/DATAFILE
ASMCMD> rm -rf fradg/*

SQL equivalents for rm command are:
SQL> ALTER DISKGROUP dg_name DROP FILE ...
SQL> ALTER DISKGROUP dg_name DROP DIRECTORY ...

rmalias command
Deletes the specified aliases, retaining the files that the aliases reference.
rmalias [-r] alias [alias] ...
To recursively delete, use the -r flag. This enables you to delete all of the aliases in the current directory and in the entire directory tree beneath the current directory. If any user-created directories become empty as a result of deleting aliases, they are also deleted. Files and directories created by the system are not deleted.

ASMCMD [+dgroup1/orcl/DATAFILE] > rmalias sysaux.f
ASMCMD> rmalias -r +dgroup1/orcl/ARCHIVES

SQL equivalent for rmalias command is:
SQL> ALTER DISKGROUP dg_name DELETE ALIAS user_alias.

exit command
Exits ASMCMD and returns control to the OS command prompt.
exit
ASMCMD> exit

Oracle 11g Release1 commands

cp command
Used to copy files between ASM diskgroups on local instances, and to and from remote instances. We can also use this command to copy files from ASM diskgroups to the OS.
cp [-ifr] [\@connect_identifier:]src_fname [\@connect_identifier:]tgt_fname
cp [-ifr] [\@connect_identifier:]src_fnameN [src_fnameN+1 ...] [\@connect_identifier:]tgt_directory

Flag Description
-i Interactive. Prompt before copying a file or overwriting.
-f Force. If an existing destination file exists, remove it and try again without user interaction.
-r Recursive. Copy sub-directories recursively.
src_fname(s) Source file name to copy from. Enter either the fully qualified file name, or the ASM alias, if the files exist in an ASM diskgroup.
tgt_fname A user alias for the created target file name or alias directory name.
tgt_directory A target alias directory within an ASM diskgroup. The target directory must exist, otherwise the file copy returns an error.

The connect_identifier parameter is not required for a local instance copy, which is the default. In the case of a remote instance copy, we need to specify the connect_identifier, and ASM prompts for a password in a non-echoing prompt. The connect_identifier is in the form of:
user_name@host_name[.port_number].SID
The user_name, host_name, and SID are required; the default port number is 1521. The local ASM instance must be either the source or the target; the file copy cannot be between remote instances.

The format of copied files is portable between Little-Endian and Big-Endian systems, if the files exist in an ASM diskgroup: ASM automatically converts the format when it writes the files. For copying non-ASM files from or to an ASM diskgroup, you can copy the files to a different endian platform and then use one of the commonly used utilities to convert the file.

ASMCMD [+] > cp +DG1/vdb.ctf1 /backups/vdb.ctf1
copying file(s)...
source +DG1/vdb.ctf1
target /backups/vdb.ctf1
copy committed.

ASMCMD [+DG3/prod/DATAFILE] > cp warehouse.dbf /tmp
copying file(s)...
source +DG3/prod/DATAFILE/warehouse.dbf
target /tmp/warehouse.dbf
copy committed.

ASMCMD [+] > cp +DATA/db11g/datafile/users.273.661514191 /tmp/users.dbf
ASMCMD [+] > cp +DATA1/db11g/datafile/users.273.661514191 +DATA2/db11g/datafile/users.dbf

md_backup command
Creates a backup file containing metadata for one or more diskgroups. Here AMBR stands for ASM Managed Backup Recovery.
md_backup [-b location_of_backup] [-g dgname [-g dgname ...]]
md_backup [-b location_of_backup] [-g dgname[,dgname ...]]

Flag Description
-b Specifies the location in which you want to store the intermediate backup file.
-g Specifies the diskgroup name that needs to be backed up.

By default, all the mounted diskgroups are included in the backup file, which is saved in the current working directory. If the name of the backup file is not specified, ASM names the file AMBR_BACKUP_INTERMEDIATE_FILE.

This example backs up all of the mounted diskgroups and creates the backup in the current working directory:
ASMCMD> md_backup

The following example creates a backup of diskgroups asmdsk1 and asmdsk2. The backup is saved in the /tmp/dgbackup100221 file:
ASMCMD> md_backup -b /tmp/dgbackup100221 -g admdsk1 -g asmdsk2
ASMCMD> md_backup -b /u01/backup/backup.txt -g dg_fra,dg_data

md_restore command
This command restores a diskgroup metadata backup.
md_restore -b backup_file [-li] [-t (full)|nodg|newdg] [-f sql_script_file] [-g 'dg_name,dg_name,...'] [-o 'old_dg_name:new_dg_name,...']
md_restore backup_file [--silent] [--full|--nodg|--newdg -o 'old_diskgroup:new_diskgroup [,...]'] [-S sql_script_file] [-G 'diskgroup [,diskgroup ...]']   (11g R2 syntax)

Flag Description
-b Reads the metadata information from backup_file.
-i or --silent If md_restore encounters an error, it will stop. Specifying this flag ignores any errors.
-l Prints the messages to a file.
-t Specifies the type of diskgroup to be created:
   full - Create diskgroup and restore metadata.
   nodg - Restore metadata only.
   newdg - Create diskgroup with a different name and restore metadata; -o is required to rename.
-f or -S Write SQL commands to sql_script_file instead of executing them.
-g or -G Select the diskgroups to be restored. If no diskgroups are defined, then all diskgroups will be restored.
-o Rename diskgroup old_dg_name to new_dg_name.

ASMCMD> md_restore -b /tmp/backup.txt -t full -g data

This example restores the diskgroup asmdsk1 completely:
ASMCMD> md_restore -t full -g asmdsk1 -i backup_file

This example takes an existing diskgroup asmdsk6 and restores its metadata:
ASMCMD> md_restore -t nodg -g asmdsk6 -i backup_file

This example restores diskgroup asmdsk1 completely, but the new diskgroup that is created is called asmdsk2:
ASMCMD> md_restore -t newdg -o 'asmdsk1:asmdsk2' -i backup_file

This example restores from the backup file after applying the overrides defined in the file override.txt:
ASMCMD> md_restore -t newdg -of override.txt -i backup_file

This example restores the diskgroup data from the backup script and creates a copy:
ASMCMD [+] > md_restore --full -G data --silent /tmp/dgbackup20090714

This example takes an existing diskgroup data and restores its metadata:
ASMCMD [+] > md_restore --nodg -G data --silent /tmp/dgbackup20090714

This example restores diskgroup data completely, but the new diskgroup that is created is called data2:
ASMCMD [+] > md_restore --newdg -o 'data:data2' --silent /tmp/dgbackup20090714

This example restores from the backup file after applying the overrides defined in the override.sql script file:
ASMCMD [+] > md_restore -S override.sql --silent /tmp/dgbackup20090714

ASMCMD> md_restore -b dg7.backup -t full -f cr8_dg7.sql
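Note that md_backup and md_restore handle diskgroup metadata only; the file contents themselves still have to be protected separately, for example with RMAN backups. A typical round trip, sketched with hypothetical names:

ASMCMD> md_backup -b /tmp/dg_data_md.bkp -g dg_data
(the diskgroup is later lost or dropped)
ASMCMD> md_restore -b /tmp/dg_data_md.bkp -t full -g dg_data
(then restore the files into the re-created diskgroup from RMAN backups)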

lsdsk command
Lists the disks that are visible to ASM, using V$ASM_DISK_STAT (default) or V$ASM_DISK.
lsdsk [-ksptcgHI] [-d diskgroup_name] [pattern]
lsdsk [-kptgMHI] [-G diskgroup] [--discovery] [--statistics] [--member|--candidate] [pattern]   (11g R2 syntax)

Flag Description
(none) Displays the PATH column of V$ASM_DISK.
-k Displays the TOTAL_MB, FREE_MB, OS_MB, NAME, FAILGROUP, LIBRARY, LABEL, UDID, PRODUCT, REDUNDANCY, and PATH columns of V$ASM_DISK.
-s or --statistics Displays the READS, WRITES, READ_ERRS, WRITE_ERRS, READ_TIME, WRITE_TIME, BYTES_READ, BYTES_WRITTEN, and PATH columns of V$ASM_DISK.
-p Displays the GROUP_NUMBER, DISK_NUMBER, INCARNATION, MOUNT_STATUS, HEADER_STATUS, MODE_STATUS, STATE, and PATH columns of V$ASM_DISK.
-t Displays the CREATE_DATE, MOUNT_DATE, REPAIR_TIMER, and PATH columns of V$ASM_DISK.
-g Selects from GV$ASM_DISK_STAT, or GV$ASM_DISK if the -c flag is also specified. GV$ASM_DISK.INST_ID is included in the output.
-c Selects from V$ASM_DISK, or GV$ASM_DISK if the -g flag is also specified.
--discovery Selects from V$ASM_DISK, or from GV$ASM_DISK if the -g flag is also specified. This option is always enabled if the Oracle ASM instance is version 10.1 or earlier. This flag is disregarded if lsdsk is running in non-connected mode.
-H Suppresses column headings.
-I Scans disk headers for information rather than extracting the information from an ASM instance. This option forces the non-connected mode.
-d or -G Restricts results to only those disks that belong to the group specified by diskgroup_name.
-M Displays the disks that are visible to some but not all active instances. These are disks that, if included in a diskgroup, cause the mount of that diskgroup to fail on the instances where the disks are not visible.
--member Restricts results to only disks having membership status equal to MEMBER.
--candidate Restricts results to only disks having membership status equal to CANDIDATE.
pattern Restricts the output to only disks that match the pattern specified.

The k, p, s, and t flags modify how much information is displayed for each disk. If any combination of the flags is specified, then the output shows the union of the attributes associated with each flag.

This command can run in connected or non-connected mode. The connected mode is always attempted first. In connected mode, ASMCMD uses dynamic views to retrieve disk information. In non-connected mode, ASMCMD scans disk headers to retrieve disk information, using an ASM disk string to restrict the discovery set; the -I option forces the non-connected mode. Non-connected mode is not supported on Windows.

ASMCMD> lsdsk -k -d ASM_DG_DATA *_001
ASMCMD> lsdsk -sp -d ASM_DG_FRA *_001
ASMCMD> lsdsk -Ik
ASMCMD> lsdsk -t -d ASM_DG_IDX *_001
ASMCMD> lsdsk -Ct
ASMCMD> lsdsk -d DG_DATA -k
ASMCMD> lsdsk -g -t -d DATA1 *_001

The first and second examples list information about disks in the data diskgroup:

ASMCMD [+] > lsdsk -G data
Path
/devices/diska1
/devices/diska2
/devices/diskb1
/devices/diskb2

ASMCMD [+] > lsdsk -G data /devices/diska*
Path
/devices/diska1
/devices/diska2
/devices/diska3

ASMCMD [+] > lsdsk -t -G data
Create_Date  Mount_Date  Repair_Timer  Path
13-JUL-09    13-JUL-09   0             /devices/diska1
13-JUL-09    13-JUL-09   0             /devices/diska2
13-JUL-09    13-JUL-09   0             /devices/diskb1
13-JUL-09    13-JUL-09   0             /devices/diskb2

The third example lists information about candidate disks:

ASMCMD [+] > lsdsk --candidate -p
Group_Num  Disk_Num  Incarn      Mount_Stat  Header_Stat  Mode_Stat  State   Path
0          5         2105454171  CLOSED      CANDIDATE    ONLINE     NORMAL  /devices/diske1
0          25        2105454191  CLOSED      CANDIDATE    ONLINE     NORMAL  /devices/diske2
0          18        2105454184  CLOSED      CANDIDATE    ONLINE     NORMAL  /devices/diske3

SQL equivalents for lsdsk command are:
SQL> SELECT * FROM V$ASM_DISK.
SQL> SELECT * FROM V$ASM_DISK_STAT.

remap command
Repairs a range of physical blocks on a disk. The remap command only repairs blocks that have read disk I/O errors; it does not repair blocks that contain corrupted contents, whether those blocks can be read or not. The command assumes a physical block size of 512 bytes and supports all allocation unit sizes (1MB to 64MB). It reads the blocks from a good copy of an ASM mirror and rewrites them to an alternate location on disk, if the blocks in the original location cannot be read properly.
remap diskgroup_name disk_name block_range

Flag Description
diskgroup_name Name of the diskgroup in which a disk must be repaired.
disk_name Name of the disk that must be repaired.
block_range Range of physical blocks to repair, in the format: starting_number-ending_number

The following example repairs blocks 4500 through 5599 for disk DATA_001 in diskgroup DISK_GRP_DATA:
ASMCMD> remap DISK_GRP_DATA DATA_001 4500-5599

The following example repairs blocks 7200 through 8899 for disk largedisk_2 in diskgroup DISK_GRP_FRA:
ASMCMD> remap DISK_GRP_FRA largedisk_2 7200-8899

Oracle 11g Release2 commands

From 11g Release 2, the ASMCMD utility can also do:
 startup and shutdown of ASM instances
 managing diskgroups (create, alter, drop, mount, offline, online)
 file access control (like OS ugo and rwx)
 user management
 template management
 volume management

ASMCMD Instance Management Commands

startup command
Starts up an Oracle ASM instance.
startup [--nomount] [--restrict] [--pfile pfile_name]

Flag Description
(default) Will mount diskgroups and enable Oracle ADVM volumes. This is the default setting.
--nomount Specifies no mount operation.
--restrict Specifies restricted mode.
--pfile Oracle ASM initialization parameter file.

The following is an example of the startup command that starts the Oracle ASM instance without mounting diskgroups and uses the asm_init.ora initialization parameter file:
ASMCMD> startup --nomount --pfile asm_init.ora

SQL equivalent for startup command is:
SQL> STARTUP ...

shutdown command
Shuts down an Oracle ASM instance.
shutdown [--abort|--immediate]

Flag Description
(default) Normal shutdown.
--immediate Shut down immediately.
--abort Shut down, aborting all existing operations.

Oracle strongly recommends that you shut down all database instances that use the Oracle ASM instance, and dismount all file systems mounted on Oracle ASM Dynamic Volume Manager (Oracle ADVM) volumes, before attempting to shut down the Oracle ASM instance with the abort (--abort) option.

The first example performs a shutdown of the Oracle ASM instance with normal action:
ASMCMD [+] > shutdown

The second example performs a shutdown with immediate action:
ASMCMD [+] > shutdown --immediate

The third example performs a shutdown that aborts all existing operations:
ASMCMD [+] > shutdown --abort

SQL equivalent for shutdown command is:
SQL> SHUTDOWN ...
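The startup and shutdown options combine naturally around a maintenance window. A hypothetical sketch:

ASMCMD [+] > shutdown --immediate
ASMCMD> startup --restrict        (diskgroups mounted, but closed to database clients)
(perform maintenance)
ASMCMD [+] > shutdown
ASMCMD> startup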

dsset command
Sets the discovery diskstring value that is used by the Oracle ASM instance and its clients. The specified diskstring must be valid for existing mounted diskgroups. The updated value takes effect immediately and is guaranteed to be propagated to all the nodes that are part of the cluster.
dsset [--normal] [--parameter] [--profile [--force]] diskstring

Flag Description
--normal Sets the discovery string in the Grid Plug and Play (GPnP) profile and in the Oracle ASM instance. The update occurs after the Oracle ASM instance has successfully validated that the specified discovery string has discovered all the necessary diskgroups and voting files. This is the default setting.
--profile [--force] Specifies the discovery diskstring that is pushed to the GPnP profile without any validation by the Oracle ASM instance, ensuring that the instance can discover all the required diskgroups. If --force is specified, the specified diskstring is pushed to the local GPnP profile without any synchronization with other nodes in the cluster. This command option updates only the local profile file, should only be used for recovery, and fails if the Oracle Clusterware stack is running.
--parameter Specifies that the diskstring is updated in memory after validating that the discovery diskstring discovers all the current mounted diskgroups and voting files. The diskstring is not persistently recorded in either the SPFILE or the GPnP profile.
diskstring Specifies the value for the discovery diskstring.

The following example uses dsset to set the current value of the discovery diskstring in the GPnP profile:
ASMCMD [+] > dsset /devices/disk*

dsget command
Retrieves the discovery diskstring value that is used by the Oracle ASM instance and its clients. It returns one row each for the profile and parameter setting.
dsget [[--normal] [--profile [--force]] [--parameter]]

Flag Description
--normal Retrieves the discovery string from the Grid Plug and Play (GPnP) profile and the one that is set in the Oracle ASM instance. This is the default setting.
--profile [--force] Retrieves the discovery string from the GPnP profile. If --force is specified, retrieves the discovery string from the local GPnP profile.
--parameter Retrieves the ASM_DISKSTRING parameter setting of the Oracle ASM instance.

The following example uses dsget to retrieve the current discovery diskstring value from the GPnP profile and the ASM_DISKSTRING parameter:
ASMCMD [+] > dsget
profile: /devices/disk*
parameter: /devices/disk*

lspwusr command
Lists the current users from the local Oracle ASM password file.
lspwusr [-H]
-H Suppresses column headers from the output.

ASMCMD [+] > lspwusr
Username  sysdba  sysoper  sysasm
SYS       TRUE    TRUE     TRUE
ASMSNMP   TRUE    FALSE    FALSE

ASMCMD [+] > lspwusr -H
SYS    TRUE  TRUE  TRUE
Satya  TRUE  TRUE  FALSE

orapwusr command
Adds, drops, or modifies an Oracle ASM password file user. orapwusr attempts to update passwords on all nodes in a cluster. The command requires the SYSASM privilege to run. A user logged in as SYSDBA cannot change its password using this command.
orapwusr {{ {--add | --modify [--password]} [--privilege {sysasm|sysdba|sysoper}] } | --delete} user

Flag Description
--add Adds a user to the password file. Also prompts for a password.
--modify Changes a user in the password file.
--password Prompts for and then changes the password of a user.
--privilege Sets the role for the user. The options are sysasm, sysdba, and sysoper.
--delete Drops a user from the password file.
user Name of the user to add, drop, or modify.

This example adds Satya to the Oracle ASM password file with the role of the user set to SYSASM:
ASMCMD [+] > orapwusr --add --privilege sysasm Satya
ASMCMD [+] > lspwusr
Username  sysdba  sysoper  sysasm
SYS       TRUE    TRUE     TRUE
Satya     TRUE    TRUE     TRUE

spset command
Sets the location of the Oracle ASM SPFILE in the Grid Plug and Play (GPnP) profile.
spset location

The following is an example of the spset command that sets the location of the Oracle ASM SPFILE in the dg_data diskgroup:
ASMCMD> spset +DG_DATA/asm/asmparameterfile/asmspfile.ora

spbackup command
Backs up an Oracle ASM SPFILE. The backup file that is created is not a special file type and is not identified as an SPFILE; it cannot be copied with spcopy. To copy this backup file, use the ASMCMD cp command. spbackup does not affect the GPnP profile.
spbackup source destination

The first example backs up the Oracle ASM SPFILE from one operating system location to another:
ASMCMD> spbackup /u01/oracle/dbs/spfile+ASM.ora /u01/oracle/dbs/bakspfileASM.ora

The second example backs up the SPFILE from an operating system location to a diskgroup:
ASMCMD> spbackup /u01/oracle/dbs/spfile+ASM.ora +DG_DATA/bakspfileASM.ora

spget command
Retrieves the location of the Oracle ASM SPFILE from the Grid Plug and Play (GPnP) profile.
spget
The location retrieved by spget is the location in the GPnP profile, but not always the location of the SPFILE currently used. For example, the location could have been recently updated by spset, or by spcopy with the -u option, on an Oracle ASM instance that has not been restarted. After the next restart of Oracle ASM, this location points to the ASM SPFILE currently being used.

The following is an example of the spget command that retrieves and displays the location of the SPFILE from the GPnP profile:
ASMCMD [+] > spget
+DATA/asm/asmparameterfile/registry.253.691575633

spcopy command
Copies an Oracle ASM SPFILE from source to destination.
spcopy [-u] source destination
-u updates the Grid Plug and Play (GPnP) profile. We can also use spset to update the GPnP profile.

Note the following about the use of spcopy:
 spcopy can copy an Oracle ASM SPFILE from a diskgroup to a different diskgroup or to an operating system file.
 spcopy can copy an Oracle ASM SPFILE from an operating system file to a diskgroup or to an operating system file.
 spcopy can copy an Oracle ASM SPFILE when the SPFILE is being used by an open Oracle ASM instance.
 To use spcopy to copy an Oracle ASM SPFILE into a diskgroup, the diskgroup attribute COMPATIBLE.ASM must be set to 11.2 or greater.

After copying the SPFILE, you must restart the instance with the SPFILE in the new location to use that SPFILE. When the Oracle ASM instance is running with the SPFILE in the new location, you can remove the source SPFILE.

The first example copies the Oracle ASM SPFILE from one operating system location to another:
ASMCMD> spcopy /u01/oracle/dbs/spfile+ASM.ora /u01/oracle/dbs/testspfileASM.ora

The second example copies the SPFILE from an operating system location to the data diskgroup and updates the GPnP profile:
ASMCMD> spcopy -u /u01/oracle/dbs/spfile+ASM.ora +DATA/testspfileASM.ora

spmove command
Moves an Oracle ASM SPFILE from source to destination and automatically updates the GPnP profile.
spmove source destination

Note the following about the use of spmove:
 spmove can move an Oracle ASM SPFILE when the open instance is using a PFILE or a different SPFILE.
 spmove cannot move an Oracle ASM SPFILE when the SPFILE is being used by an open Oracle ASM instance.
 To use spmove to move an Oracle ASM SPFILE into a diskgroup, the diskgroup attribute COMPATIBLE.ASM must be set to 11.2 or greater.

After moving the SPFILE, you must restart the instance with the SPFILE in the new location to use that SPFILE.

The first example moves the Oracle ASM SPFILE from one operating system location to another:
ASMCMD> spmove /u01/oracle/dbs/spfile+ASM.ora /u01/oracle/dbs/testspfileASM.ora

The second example moves the SPFILE from an operating system location to the data diskgroup:
ASMCMD> spmove /u01/oracle/dbs/spfile+ASM.ora +DATA/testspfileASM.ora
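A common sequence is to copy the SPFILE into a diskgroup with -u and then confirm what the GPnP profile now points to. A sketch with hypothetical names:

ASMCMD> spcopy -u /u01/oracle/dbs/spfile+ASM.ora +DG_DATA/spfileASM.ora
ASMCMD [+] > spget
+DG_DATA/spfileASM.ora
(restart the Oracle ASM instance to actually use the SPFILE in its new location)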

lsop command
Lists the current operations on diskgroups or the Oracle ASM instance, from V$ASM_OPERATION.
lsop

ASMCMD [+] > lsop
Group_Name  Dsk_Num  State  Power
DG_DATA     REBAL    WAIT   2

ASMCMD [+] > lsop
Group_Name  Dsk_Num  State  Power
DG_FRA      REBAL    REAP   3

SQL equivalent for lsop command is:
SQL> SELECT * FROM V$ASM_OPERATION.

ASMCMD File Access Control Commands

mkusr command
Adds a valid operating system user to a diskgroup. Only users authenticated as SYSASM can run this command.
mkusr diskgroup user

The following example adds the asmdba2 user to the dg_fra diskgroup:
ASMCMD [+] > mkusr dg_fra asmdba2

SQL equivalent for mkusr command is:
SQL> ALTER DISKGROUP disk_group ADD USER 'user_name'.

lsusr command
Lists Oracle ASM users in a diskgroup.
lsusr [-Ha] [-G diskgroup] [pattern]

Flag Description
-H Suppresses column headings.
-a Lists all users and the diskgroups to which the users belong.
-G Limits the results to the specified diskgroup name.
pattern Displays the users that match the pattern expression.

The first example lists users in the asm_dg_data diskgroup and also shows the OS user ID assigned to each user:
ASMCMD [+] > lsusr -G asm_dg_data
User_Num  OS_ID  OS_Name
3         1001   oradba
1         1021   asmdba1
2         1022   asmdba2

ASMCMD [+] > lsusr -G asm_dg_fra asm*
User_Num  OS_ID  OS_Name
1         1021   asmdba1
2         1022   asmdba2

SQL equivalent for lsusr command is:
SQL> SELECT * FROM V$ASM_USER.

passwd command
Changes the password of a user. The user is first prompted for the current password, then the new password. An error is raised if the user does not exist in the Oracle ASM password file.
passwd user

ASMCMD [+] > passwd asmdba2
Enter old password (optional):
Enter new password: ******

rmusr command
Deletes an OS user from a diskgroup. Only a user authenticated as SYSASM can run this command.
rmusr [-r] diskgroup user
-r removes all files in the diskgroup that the user owns, at the same time that the user is removed.

The following is an example that removes the asmdba2 user from the dg_data2 diskgroup:
ASMCMD [+] > rmusr dg_data2 asmdba2

SQL equivalent for rmusr command is:
SQL> ALTER DISKGROUP disk_group DROP USER 'user_name'.

mkgrp command
Creates a new Oracle ASM user group. We can optionally specify a list of users to be included as members of the new user group. A user group name can have a maximum of 30 characters.
mkgrp diskgroup usergroup [user] [user ...]

This example creates the asm_data user group in the dg_data diskgroup and adds the asmdba1 and asmdba2 users to the user group:
ASMCMD [+] > mkgrp dg_data asm_data asmdba1 asmdba2
ASMCMD [+] > mkgrp dg_fra asm_fra

SQL equivalents for mkgrp command are:
SQL> ALTER DISKGROUP disk_group ADD USERGROUP 'usergroup_name'.
SQL> ALTER DISKGROUP disk_group ADD USERGROUP 'usergroup_name' WITH MEMBER 'user_names'.

lsgrp command
Lists all Oracle ASM user groups.
lsgrp [-Ha] [-G diskgroup] [pattern]

Flag Description
-H Suppresses column headings.
-a Lists all columns.
-G Limits the results to the specified diskgroup name.
pattern Displays the user groups that match the pattern expression.

The following example displays a subset of information about the user groups whose name matches the asm% pattern:
ASMCMD [+] > lsgrp asm%
DG_Name  Grp_Name  Owner
DG_FRA   asm_fra   grid
DG_DATA  asm_data  grid

The second example displays all information about the user groups in the DG_DATA diskgroup:
ASMCMD [+] > lsgrp -a -G DG_DATA
DG_Name  Grp_Name  Owner  Members
DG_DATA  asm_data  grid   asmdba1 asmdba2

SQL equivalent for lsgrp command is:
SQL> SELECT * FROM V$ASM_USERGROUP.

rmgrp command
Removes a user group from a diskgroup. The command must be run by the owner of the group and also requires the SYSASM privilege to run. Removing a group might leave some files without a valid group; to ensure that those files have a valid group, explicitly update those files to a valid group.
rmgrp diskgroup usergroup

The following is an example of the rmgrp command that removes the asm_data user group from the dg_data diskgroup:
ASMCMD [+] > rmgrp dg_data asm_data

SQL equivalent for rmgrp command is:
SQL> ALTER DISKGROUP disk_group DROP USERGROUP 'usergroup_name'.

grpmod command
Adds or removes OS users to and from an existing Oracle ASM user group. Only the owner of the user group can use this command, and it requires the SYSASM privilege to run. The OS users are typically owners of a database instance home. This command accepts an OS user name, or multiple user names separated by spaces.
grpmod {--add | --delete} diskgroup usergroup user [user ...]

Flag Description
--add Specifies to add users to the user group.
--delete Specifies to delete users from the user group.
usergroup Name of the user group.
user Name of the user to add to or remove from the user group.

The first example adds the asmdba1 and asmdba2 users to the asm_fra user group of the dg_fra diskgroup:
ASMCMD [+] > grpmod --add dg_fra asm_fra asmdba1 asmdba2

The second example removes the asmdba2 user from the asm_data user group of the dg_data diskgroup:
ASMCMD [+] > grpmod --delete dg_data asm_data asmdba2

SQL equivalents for grpmod command are:
SQL> ALTER DISKGROUP disk_group MODIFY USERGROUP 'usergroup_name' ADD MEMBER 'user_name'.
SQL> ALTER DISKGROUP disk_group MODIFY USERGROUP 'usergroup_name' DROP MEMBER 'user_name'.
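Taken together, these commands can stand up Oracle ASM File Access Control for a new database home. An end-to-end sketch, with hypothetical diskgroup, user, and group names:

ASMCMD [+] > mkusr dg_data oracle1
ASMCMD [+] > mkgrp dg_data asm_users oracle1
ASMCMD [+] > lsusr -G dg_data
ASMCMD [+] > lsgrp -a -G dg_data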

groups command
Lists all the user groups to which the specified user belongs.
groups diskgroup user

ASMCMD [+] > groups dg9 asmdba1
asm_data

SQL equivalent for groups command is:
SQL> SELECT * FROM V$ASM_USERGROUP_MEMBER.

chmod command
Changes permissions of a file or list of files. This command accepts a file name or multiple file names separated by spaces. The specified files must be closed.
chmod mode file [file ...]
mode can be one of the following forms:
{ugo|ug|uo|go|u|g|o|a} {+|-} {r|w|rw}
a specifies permissions for all users, u specifies permissions for the owner/user of the file, g specifies the group permissions, and o specifies permissions for other users.
{0|4|6} {0|4|6} {0|4|6}
The first digit specifies owner permissions, the second digit specifies group permissions, and the third digit specifies other permissions.

Flag Description
6 Read-write permissions
4 Read-only permissions
0 No permissions
u Owner permissions, used with r or w
g Group permissions, used with r or w
o Other user permissions, used with r or w
a All user permissions, used with r or w
+ Adds a permission, used with r or w
- Removes a permission, used with r or w
r Read permission
w Write permission
file Name of a file

We can only set file permissions to read-write, read-only, and no permissions. We cannot set file permissions to write-only.

ASMCMD [+fra/orcl/archivelog/flashback] > chmod ug+rw log_7.264.684968167 log_8.265.684972027
ASMCMD [+fra/orcl/archivelog/flashback] > chmod 640 log_7.264.684968167 log_8.265.684972027
ASMCMD> chmod ug+rw +data/hrms/Controlfile/Current.260.687650269

To view the permissions on a file, use the ASMCMD ls command with the --permission option:

ASMCMD [+] > ls --permission +fra/orcl/archivelog/flashback
User  Group    Permission  Name
grid  asm_fra  rw-r-----   log_7.264.684968167
grid  asm_fra  rw-r-----   log_8.265.684972027

SQL equivalent for chmod command is:
SQL> ALTER DISKGROUP disk_group SET PERMISSION OWNER=read write, GROUP=read only, OTHER=none FOR FILE '...'.

chown command
Changes the owner of a file or list of files. Only the Oracle ASM administrator can use this command. This command accepts a file name or multiple file names separated by spaces. The specified files must be closed.
chown user[:usergroup] file [file ...]
user typically refers to the user that owns the database instance home. Oracle ASM File Access Control uses the OS name to identify a database.

ASMCMD [+fra/orcl/archivelog/flashback] > chown asmdba1 log_7.264.684968167 log_8.265.684972027
ASMCMD [+fra/orcl/archivelog/flashback] > chown asmdba1:asm_fra log_9.175.654892547
ASMCMD> chown oracle1:asm_users +data/hrms/Controlfile/Current.260.687650269

SQL equivalent for chown command is:
SQL> ALTER DISKGROUP disk_group SET OWNERSHIP OWNER='user_name', GROUP='usergroup_name' FOR FILE '...'.

chgrp command
Changes the Oracle ASM user group of a file or list of files. Only the file owner or the Oracle ASM administrator can use this command. If the user is the file owner, then he must also be either the owner or a member of the group for this command to succeed. This command accepts a file name or multiple file names separated by spaces.
chgrp usergroup file [file ...]

ASMCMD [+] > chgrp asm_data +data/orcl/controlfile/Current.261.684924747
ASMCMD [+fra/orcl/archivelog/flashback] > chgrp asm_fra log_7.264.684968167 log_8.265.684972027

ASMCMD Diskgroup Management Commands

mkdg command
Creates a diskgroup based on an XML configuration file, which specifies the name of the diskgroup, redundancy, attributes, and paths of the disks that form the diskgroup. mkdg searches for the XML file in the directory where ASMCMD was started, unless a path is specified.
mkdg {config_file.xml | 'contents_of_xml_file'}

Flag Description
config_file Name of the XML file that contains the configuration for the new diskgroup.
contents_of_xml_file The XML script enclosed in single quotations.

It is possible to set some diskgroup attribute values during diskgroup creation. Some attributes, such as AU_SIZE and SECTOR_SIZE, can be set only during diskgroup creation. The default diskgroup compatibility settings are 10.1 for Oracle ASM compatibility, 10.1 for database compatibility, and no value for Oracle ADVM compatibility.

Redundancy is an optional parameter; the default is normal redundancy. For some types of redundancy, disks are required to be gathered into failure groups. In the case that failure groups are not specified for a diskgroup, each disk in the diskgroup belongs to its own failure group.

Tags for the mkdg XML configuration file:
<dg name="diskgroup name" redundancy="normal|high|external">   diskgroup
  <fg name="failure group name">                               failure group
    <dsk name="disk name" path="disk path" size="size of the disk to add"/>
  </fg>
  <a name="attribute name" value="attribute value"/>           attribute
</dg>

The following is an example of an XML configuration file for mkdg. The configuration file creates a diskgroup named dg_data with normal redundancy. Two failure groups, fg1 and fg2, are created, each with two disks identified by associated disk strings. The diskgroup compatibility attributes are all set to 11.2.

<dg name="dg_data" redundancy="normal">
  <fg name="fg1">
    <dsk string="/dev/disk1"/>
    <dsk string="/dev/disk2"/>
  </fg>
  <fg name="fg2">
    <dsk string="/dev/disk3"/>
    <dsk string="/dev/disk4"/>
  </fg>
  <a name="compatible.asm" value="11.2"/>
  <a name="compatible.rdbms" value="11.2"/>
  <a name="compatible.advm" value="11.2"/>
</dg>

The first example executes mkdg with an XML configuration file in the directory where ASMCMD was started:
ASMCMD [+] > mkdg data_config.xml

The second example executes mkdg using information on the command line:
ASMCMD [+] > mkdg '<dg name="data"><dsk path="/dev/disk*"/></dg>'

SQL equivalent for mkdg command is:
SQL> CREATE DISKGROUP diskgroup_name ...
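After creating a diskgroup this way, a quick confirmation that it is mounted with the expected redundancy and attributes costs one command (the diskgroup name is hypothetical):

$ asmcmd lsdg dg_data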

chdg command
Changes a diskgroup (adds disks, drops disks, or rebalances) based on an XML configuration file. chdg searches for the XML file in the directory where ASMCMD was started, unless a path is specified.
chdg {config_file.xml | 'contents_of_xml_file'}

Flag Description
config_file Name of the XML file that contains the changes for the diskgroup.
contents_of_xml_file The XML script enclosed in single quotations.

When adding disks to a diskgroup, the diskstring must be specified in a format similar to the ASM_DISKSTRING initialization parameter. The failure groups are optional parameters; the default causes every disk to belong to its own failure group. Dropping disks from a diskgroup can be performed through this operation: an individual disk can be referenced by its Oracle ASM disk name, and a set of disks that belong to a failure group can be specified by the failure group name. We can also resize a disk inside a diskgroup with chdg; the resize operation fails if there is not enough space for storing data after the resize. The rebalance power level can be set from 0 to the maximum of 11, the same values as the ASM_POWER_LIMIT initialization parameter.

Tags for the chdg XML configuration template:
<chdg name="diskgroup to change" power="power to perform rebalance">
  <add>items to add are placed here</add>
  <drop>items to drop are placed here</drop>
  <fg name="failure group name">failure group</fg>
  <dsk name="disk name" path="disk path" size="size of the disk to add"/>
</chdg>

The following is an example of an XML configuration file for chdg. This XML file alters the diskgroup named data: the failure group fg1 is dropped and the disk data_0001 is also dropped, the /dev/disk8 disk is added to failure group fg2, and the rebalance power level is set to 4.

<chdg name="data" power="4">
  <drop>
    <fg name="fg1"></fg>
    <dsk name="data_0001"/>
  </drop>
  <add>
    <fg name="fg2">
      <dsk string="/dev/disk8"/>
    </fg>
  </add>
</chdg>

The following are examples of the chdg command with the configuration file or configuration information on the command line:
ASMCMD [+] > chdg data_config.xml
ASMCMD [+] > chdg '<chdg name="data" power="3"><drop><fg name="fg1"></fg><dsk name="data_0001"/></drop><add><fg name="fg2"><dsk string="/dev/disk5"/></fg></add></chdg>'

SQL equivalent for chdg command is:
SQL> ALTER DISKGROUP diskgroup_name ...

dropdg command
Drops an existing diskgroup. The diskgroup cannot be mounted on multiple nodes.
dropdg [-r] [-f] diskgroup

Flag Description
-f Force the operation. Only applicable if the diskgroup cannot be mounted.
-r Recursive. Include contents, including any data in the diskgroup.
diskgroup Name of diskgroup to drop.

The first example forces the drop of the diskgroup dg_data, including any data in the diskgroup:
ASMCMD [+] > dropdg -r -f dg_data

The second example drops the diskgroup dg_fra, including any data in the diskgroup:
ASMCMD [+] > dropdg -r dg_fra

SQL equivalent for dropdg command is:
SQL> DROP DISKGROUP diskgroup_name ...

chkdg command
Checks or repairs the metadata of a diskgroup. chkdg checks the metadata of a diskgroup for errors and optionally repairs the errors.
chkdg [--repair] diskgroup

The following is an example of the chkdg command used to check and repair the dg_data diskgroup:
ASMCMD [+] > chkdg --repair dg_data

SQL equivalent for chkdg command is:
SQL> ALTER DISKGROUP diskgroup_name CHECK ...

mount command
Will mount the specified diskgroups. This operation mounts one or more diskgroups; a diskgroup can be mounted with or without the force or restricted options. The mounted disk groups are listed in the output of the V$ASM_DISKGROUP view.
mount [--restrict] {[-a] | [-f] diskgroup[,diskgroup ...]}

Flag Description
--restrict Mounts in restricted mode.
-a Mounts all diskgroups.
-f Forces the mount operation.
diskgroup Name of the diskgroup.

The following are examples of the mount command showing the use of the force, restrict, and all options:
ASMCMD [+] > mount -f data
ASMCMD [+] > mount --restrict data
ASMCMD [+] > mount -a

SQL equivalent for mount command is:
SQL> ALTER DISKGROUP diskgroup_name MOUNT.

umount command
Will dismount the specified diskgroup.
umount {-a | [-f] diskgroup}

Flag Description
-a Dismounts all mounted diskgroups.
-f Forces the dismount operation.
diskgroup Name of the diskgroup.

The first example dismounts all diskgroups mounted on the Oracle ASM instance:
ASMCMD [+] > umount -a

The second example forces the dismount of the data diskgroup:
ASMCMD [+] > umount -f data

SQL equivalent for umount command is:
SQL> ALTER DISKGROUP diskgroup_name DISMOUNT.

offline command
Offlines disks or failure groups that belong to a diskgroup. When a failure group is specified, this implies that all the disks that belong to it should be offlined.
offline -G diskgroup {-F failgroup|-D disk} [-t {minutes|hours}]

Flag Description
-G Diskgroup name.
-F Failure group name.
-D Specifies a single disk name.
-t Specifies the time before the specified disk is dropped, as nm or nh, where m specifies minutes and h specifies hours. The default unit is hours.

The first example offlines the failgroup1 failure group of the dg_data diskgroup:
ASMCMD [+] > offline -G dg_data -F failgroup1

The second example offlines the data_0001 disk of the dg_data diskgroup with a time of 1.5 hours before the disk is dropped:
ASMCMD [+] > offline -G dg_data -D data_0001 -t 1.5h

SQL equivalent for offline command is:
SQL> ALTER DISKGROUP diskgroup_name OFFLINE ...

online command
Onlines all disks, a single disk, or a failure group that belongs to a diskgroup. When a failure group is specified, this implies that all the disks that belong to it should be onlined.
online {[-a] -G diskgroup | -G diskgroup {-F failgroup|-D disk}} [-w]

Flag Description
-a Online all offline disks in the diskgroup.
-G Diskgroup name.
-F Failure group name.
-D Disk name.
-w Wait option. Causes ASMCMD to wait for the diskgroup to be rebalanced before returning control to the user. The default is not waiting.

The first example onlines all disks in the failgroup1 failure group of the dg_data diskgroup with the wait option enabled:
ASMCMD [+] > online -G dg_data -F failgroup1 -w

The second example onlines the data_0001 disk in the dg_data diskgroup:
ASMCMD [+] > online -G dg_data -D data_0001

SQL equivalent for online command is:
SQL> ALTER DISKGROUP diskgroup_name ONLINE ...
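offline and online are commonly paired around disk maintenance. A hypothetical sketch: take the disk offline with a generous repair timer, service it, then bring it back and wait for the rebalance to finish:

ASMCMD [+] > offline -G dg_data -D data_0002 -t 4h
(replace or service the disk)
ASMCMD [+] > online -G dg_data -D data_0002 -w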

rebal command
Rebalances a diskgroup. The power level can be set from 0 to 11; a value of 0 disables rebalancing. If the rebalance power is not specified, the value defaults to the setting of the ASM_POWER_LIMIT initialization parameter.
rebal [--power power_value] [-w] diskgroup

Flag Description
--power Power setting (0 to 11).
-w Wait option. Causes ASMCMD to wait for the diskgroup to be rebalanced before returning control to the user. The default is not waiting.
diskgroup Diskgroup name.

The following example rebalances the dg_fra diskgroup with the power level set to 6:
ASMCMD [+] > rebal --power 6 dg_fra

We can determine if a rebalance operation is occurring with the ASMCMD lsop command:
ASMCMD [+] > lsop
Group_Name  Dsk_Num  State  Power
FRA         REBAL    RUN    6

SQL equivalent for rebal command is:
SQL> ALTER DISKGROUP diskgroup_name REBALANCE POWER n.

iostat command
Will display I/O statistics of disks in mounted ASM diskgroups, by using V$ASM_DISK_IOSTAT.
iostat [-etH] [--io] [--region] [-G diskgroup] [interval]

Flag Description
-e Displays error statistics (Read_Err, Write_Err).
-t Displays time statistics (Read_Time, Write_Time).
-H Suppresses column headings.
--io Displays information in number of I/Os, instead of bytes.
--region Displays information for cold and hot disk regions (Cold_Reads, Cold_Writes, Hot_Reads, Hot_Writes).
-G Displays statistics for the diskgroup name.
interval Refreshes the statistics display based on the interval value (seconds). Use Ctrl-C to stop the interval display.

To see the complete set of statistics for a diskgroup, use the V$ASM_DISK_IOSTAT view.

Attribute Name Description
Group_Name Name of the diskgroup.
Dsk_Name Name of the disk.
Reads Number of bytes read from the disk.
Writes Number of bytes written to the disk.
Cold_Reads Number of bytes read from the cold disk region.
Cold_Writes Number of bytes written to the cold disk region.
Hot_Reads Number of bytes read from the hot disk region.
Hot_Writes Number of bytes written to the hot disk region.
For each of these, if the --io option is entered, then the value is displayed as a number of I/Os instead of bytes.

iostat command
Displays I/O statistics of disks in mounted ASM diskgroups. To see the complete set of statistics for a diskgroup, use the V$ASM_DISK_IOSTAT view.

iostat [-etH] [--io] [--region] [-G diskgroup] [interval]

Flag Description
-e Displays error statistics (Read_Err, Write_Err).
-t Displays time statistics (Read_Time, Write_Time).
-H Suppresses column headings.
--io Displays information in number of I/Os, instead of bytes.
--region Displays information for cold and hot disk regions (Cold_Reads, Cold_Writes, Hot_Reads, Hot_Writes).
-G Displays statistics for the diskgroup name.
interval Refreshes the statistics display based on the interval value (seconds). Use Ctrl-C to stop the interval display.

Attribute Name Description
Group_Name Name of the diskgroup.
Dsk_Name Name of the disk.
Reads Number of bytes read from the disk. If the --io option is entered, then the value is displayed as number of I/Os.
Writes Number of bytes written to the disk. If the --io option is entered, then the value is displayed as number of I/Os.
Cold_Reads Number of bytes read from the cold disk region. If the --io option is entered, then the value is displayed as number of I/Os.
Cold_Writes Number of bytes written to the cold disk region. If the --io option is entered, then the value is displayed as number of I/Os.
Hot_Reads Number of bytes read from the hot disk region. If the --io option is entered, then the value is displayed as number of I/Os.
Hot_Writes Number of bytes written to the hot disk region. If the --io option is entered, then the value is displayed as number of I/Os.
Read_Err Number of failed I/O read requests for the disk.
Write_Err Number of failed I/O write requests for the disk.
Read_Time I/O time (in hundredths of a second) for read requests for the disk if the TIMED_STATISTICS initialization parameter is set to TRUE (0 if set to FALSE).
Write_Time I/O time (in hundredths of a second) for write requests for the disk if the TIMED_STATISTICS initialization parameter is set to TRUE (0 if set to FALSE).

If a refresh interval is not specified, the number displayed represents the total number of bytes or I/Os. If a refresh interval is specified, then the value displayed (bytes or I/Os) is the difference between the previous and current values, not the total value.

The first example displays disk I/O statistics for the DG_DATA diskgroup in total number of bytes.

ASMCMD> iostat -G DG_DATA
Group_Name  Disk_Name  Reads  Writes
DG_DATA     DATA_0010  4860   18398
DG_DATA     DATA_0011  58486  29183

The second example displays disk I/O statistics for the data diskgroup in total number of I/O operations.

ASMCMD [+] > iostat --io -G data
Group_Name  Dsk_Name   Reads  Writes
DATA        DATA_0000  2801   34918
DATA        DATA_0001  58301  35700
DATA        DATA_0002  3320   36345

The third example displays disk I/O statistics along with time statistics.

ASMCMD> iostat -t
Group_Name  Disk_Name  Reads  Writes  Read_Time   Write_Time
FRA         DATA_0099  54601  38411   441.234546  672.694266

SQL equivalent for iostat command is:
SQL> SELECT * FROM V$ASM_DISK_IOSTAT;

setattr command
setattr command will change an attribute of a diskgroup.

setattr -G disk_group attribute_name attribute_value

ASMCMD> setattr -G DG_ASM_DATA compatible.asm 11.2.0.0.0
ASMCMD> setattr -G DG_ASM_DATA compatible.rdbms 11.2.0.0.0
ASMCMD> setattr -G DG_ASM_DATA au_size 2M

SQL equivalent for setattr command is:
SQL> ALTER DISKGROUP disk_group SET ATTRIBUTE attribute_name=attribute_value;

lsattr command
Lists the attributes of a diskgroup, from V$ASM_ATTRIBUTE.

lsattr [-G diskgroup] [-Hlm] [pattern]

Flag Description
-G Diskgroup name.
-H Suppresses column headings.
-l Display names with values.
-m Displays additional information, such as the RO and Sys columns.
pattern Display the attributes that contain pattern expression.

The RO (read-only) column identifies those attributes that can only be set when a diskgroup is created. The Sys column identifies those attributes that are system-created.

ASMCMD> lsattr -l -G DG_ASM_FRA
Name                     Value
access_control.enabled   FALSE
access_control.umask     066
au_size                  1048576
cell.smart_scan_capable  FALSE
compatible.asm           11.2.0.0.0
compatible.rdbms         10.1.0.0.0
disk_repair_time         3.6h
sector_size              512

ASMCMD> setattr -G DG_ASM_FRA compatible.rdbms 11.2.0.0.0

ASMCMD> lsattr -l -G DG_ASM_FRA
Name                     Value
access_control.enabled   FALSE
access_control.umask     066
au_size                  1048576
cell.smart_scan_capable  FALSE
compatible.asm           11.2.0.0.0
compatible.rdbms         11.2.0.0.0
disk_repair_time         3.6h
sector_size              512

ASMCMD [+] > lsattr -l -G fra %compat*
Name              Value
compatible.asm    11.2.0.0.0
compatible.rdbms  11.2.0.0.0

SQL equivalent for lsattr command is:
SQL> SELECT * FROM V$ASM_ATTRIBUTE;

lstmpl Lists all templates or lstmpl [-Hl] [-G diskgroup] [pattern] Flag Description -H Suppresses column headings. mirror.The first example lists the open devices associated with the data diskgroup and the LGWR process. the templates for a specified command diskgroup. -G Specifies diskgroup name. --striping Striping specification. {coarse|fine}] {hot|cold}] a [--striping [--primary --redundancy Redundancy specification. either coarse or fine. --secondary Intelligent Data Placement specification for secondary extents. -l Displays all details. The new template has the redundancy set to mirror and the striping set to coarse. ASMCMD [+] > lsod --process LGWR Instance Process OSPID 1 oracle@dadvmn0652 (LGWR) 26593 1 oracle@dadvmn0652 (LGWR) 26593 1 oracle@dadvmn0652 (LGWR) 26593 ASMCMD Template mktmpl Adds a template mktmpl -G diskgroup [--redundancy {high|mirror|unprotected}] [--secondary {hot|cold}] template Flag Description -G Name of the diskgroup. match the diska diska Path /devices/diska1 /devices/diska2 /devices/diska3 Management to Commands command diskgroup. either hot or cold region. either hot or cold region. template Name of the template to create. --primary Intelligent Data Placement specification for primary extents. ASMCMD [+] > lsod -G data --process LGWR Instance Process OSPID Path 1 oracle@dadvmn0652 (LGWR) 26593 /devices/diska1 1 oracle@dadvmn0652 (LGWR) 26593 /devices/diska2 1 oracle@dadvmn0652 (LGWR) 26593 /devices/diska3 1 oracle@dadvmn0652 (LGWR) 26593 /devices/diskb1 1 oracle@dadvmn0652 (LGWR) 26593 /devices/diskb2 1 oracle@dadvmn0652 (LGWR) 26593 /devices/diskb3 1 oracle@dadvmn0652 (LGWR) 26593 /devices/diskd1 The second example lists the open devices associated with the LGWR process for disks that pattern. . either high... or unprotected. ASMCMD [+] > mktmpl -G dg_data --redundancy mirror --striping coarse temp_mc SQL equivalent for mktmpl command is: SQL> ALTER DISKGROUP disk_group ADD TEMPLATE template_name . The following example adds temp_mc template to the dg_data diskgroup..

mktmpl command
Adds a template to a diskgroup.

mktmpl -G diskgroup [--redundancy {high|mirror|unprotected}] [--striping {coarse|fine}] [--primary {hot|cold}] [--secondary {hot|cold}] template

Flag Description
-G Name of the diskgroup.
--redundancy Redundancy specification, either high, mirror, or unprotected.
--striping Striping specification, either coarse or fine.
--primary Intelligent Data Placement specification for primary extents, either hot or cold region.
--secondary Intelligent Data Placement specification for secondary extents, either hot or cold region.
template Name of the template to create.

The following example adds the temp_mc template to the dg_data diskgroup. The new template has the redundancy set to mirror and the striping set to coarse.

ASMCMD [+] > mktmpl -G dg_data --redundancy mirror --striping coarse temp_mc

SQL equivalent for mktmpl command is:
SQL> ALTER DISKGROUP disk_group ADD TEMPLATE template_name ...;

chtmpl command
Changes the attributes of a template.

chtmpl -G diskgroup {[--redundancy {high|mirror|unprotected}] [--striping {coarse|fine}] [--primary {hot|cold}] [--secondary {hot|cold}]} template

Flag Description
-G Name of the diskgroup.
--redundancy Redundancy specification, either high, mirror, or unprotected.
--striping Striping specification, either coarse or fine.
--primary Intelligent Data Placement specification for primary extents, either hot or cold region.
--secondary Intelligent Data Placement specification for secondary extents, either hot or cold region.
template Name of the template to change.

At least one of these options is required: --striping, --redundancy, --primary, or --secondary.

The following example updates the temp_hf template of the dg_fra diskgroup. The redundancy attribute is set to high and the striping attribute is set to fine.

ASMCMD [+] > chtmpl -G dg_fra --redundancy high --striping fine temp_hf

SQL equivalent for chtmpl command is:
SQL> ALTER DISKGROUP disk_group ALTER TEMPLATE template_name ...;
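A template takes effect when a file is created in the diskgroup. This hedged sketch (the tablespace and file names are illustrative) uses the +diskgroup(template) alias form to apply the MYTEMPLATE template shown earlier:

SQL> CREATE TABLESPACE app_data
     DATAFILE '+DG_DATA(MYTEMPLATE)/app_data_01.dbf' SIZE 100M;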

rmtmpl command
Removes a template from a diskgroup.

rmtmpl -G diskgroup template

The following example removes the temp_uf template from the dg_data diskgroup.

ASMCMD [+] > rmtmpl -G dg_data temp_uf

SQL equivalent for rmtmpl command is:
SQL> ALTER DISKGROUP disk_group DROP TEMPLATE template_name;

ASMCMD Volume Management Commands

volcreate command
Creates an Oracle ADVM volume in the specified diskgroup.

volcreate -G diskgroup -s size [--column number] [--width stripe_width] [--redundancy {high|mirror|unprotected}] [--primary {hot|cold}] [--secondary {hot|cold}] volume

Flag Description
-G Name of the diskgroup containing the volume.
-s size Size of the volume to be created in units of K, M, G, T, P, or E. For example: 20G. The unit designation must be appended to the number specified; no space is allowed.
--column Number of columns in a stripe set. Values range from 1 to 8. The default value is 4.
--width Stripe width of a volume. The value can range from 4 KB to 1 MB, at power-of-two intervals, with a default of 128 KB.
--redundancy Redundancy of the Oracle ADVM volume, which can be specified for normal redundancy diskgroups. The range of values is as follows: unprotected for non-mirrored redundancy, mirror for double-mirrored redundancy, or high for triple-mirrored redundancy. If redundancy is not specified, the setting defaults to the redundancy level of the diskgroup.
--primary Intelligent Data Placement specification for primary extents, either hot or cold region.
--secondary Intelligent Data Placement specification for secondary extents, either hot or cold region.
volume Name of the volume to be created. Can be a maximum of 11 alphanumeric characters; dashes are not allowed. The first character must be alphabetic.

When creating an Oracle ADVM volume, a volume device name is created with a unique Oracle ADVM persistent diskgroup number that is concatenated to the end of the volume name. The unique number can be one to three digits. On Linux, the volume device name is in the format volume_name-nnn, such as volume1-123. On Windows, the volume device name is in the format asm-volume_name-nnn, such as asm-volume1-123. A successful volume creation automatically enables the volume device. The volume device file functions as any other disk or logical volume to mount file systems or for applications to use directly.

The following example creates volume1 in the dg_data diskgroup with the size set to 10 gigabytes.

ASMCMD [+] > volcreate -G dg_data -s 10G --width 64K --column 8 volume1

You can determine the volume device name with the volinfo command.

ASMCMD [+] > volinfo -G dg_data volume1
Diskgroup Name: DG_DATA
Volume Name: VOLUME1
Volume Device: /dev/asm/volume1-123
State: ENABLED
Size (MB): 10240
Resize Unit (MB): 512
Redundancy: MIRROR
Stripe Columns: 8
Stripe Width (K): 64
Usage:
Mountpath:

ASMCMD [+] > volcreate -G dg_fra -s 100M vol2

SQL equivalent for volcreate command is:
SQL> ALTER DISKGROUP disk_group ADD VOLUME volume_name SIZE n;

volinfo command
Displays information about Oracle ADVM volumes. The mount path field displays the last mount path for the volume.

volinfo {-a | -G diskgroup -a | -G diskgroup volume}
volinfo [--show_diskgroup|--show_volume] volumedevice

Flag Description
-a When used without a diskgroup name, specifies all volumes within all diskgroups. When used with a diskgroup name (-G diskgroup -a), specifies all volumes within that diskgroup.
-G Name of the diskgroup containing the volume.
volume Name of the volume.
--show_diskgroup Returns only the diskgroup name. A volume device name is required.
--show_volume Returns only the volume name. A volume device name is required.
volumedevice Name of the volume device.

The first example displays information about the volume1 volume in the dg_data diskgroup and was produced in a Linux environment.

ASMCMD [+] > volinfo -G dg_data volume1
Diskgroup Name: DG_DATA
Volume Name: VOLUME1
Volume Device: /dev/asm/volume1-123
State: ENABLED
Size (MB): 10240
Resize Unit (MB): 512
Redundancy: MIRROR
Stripe Columns: 8
Stripe Width (K): 64
Usage: ACFS
Mountpath: /u01/app/acfsmounts/acfs1

The second example displays information about the asm-volume1 volume in the dg_data diskgroup and was produced in a Windows environment.

ASMCMD [+] > volinfo -G dg_data -a
Diskgroup Name: DG_DATA
Volume Name: VOLUME1
Volume Device: \\.\asm-volume1-311
State: ENABLED
Size (MB): 1024
Resize Unit (MB): 256
Redundancy: MIRROR
Stripe Columns: 4
Stripe Width (K): 128
Usage: ACFS
Mountpath: C:\oracle\acfsmounts\acfs1

SQL equivalent for volinfo command is:
SQL> SELECT * FROM V$ASM_VOLUME;
SQL> SELECT * FROM V$ASM_VOLUME_STAT;

voldelete command
Deletes an Oracle ADVM volume.

voldelete -G diskgroup volume

Flag Description
-G Name of the diskgroup containing the volume.
volume Name of the volume to be operated on. Can be a maximum of 30 alphanumeric characters. The first character must be alphabetic.

To successfully execute this command, the local Oracle ASM instance must be running and the diskgroup required by this command must be mounted in the Oracle ASM instance. Before deleting a volume, you must ensure that there are no active file systems associated with the volume. You can delete a volume without first disabling the volume.

The following example deletes volume1 from the dg_data diskgroup.

ASMCMD [+] > voldelete -G dg_data volume1

SQL equivalent for voldelete command is:
SQL> ALTER DISKGROUP disk_group DROP VOLUME volume_name;

voldisable command
Disables Oracle ADVM volumes in mounted diskgroups and removes the volume device on the local node. You can disable volumes before shutting down an Oracle ASM instance or dismounting a diskgroup to verify that the operations can be accomplished normally, without including a force option due to open volume files. Disabling a volume also prevents any subsequent opens on the volume or device file because it no longer exists.

Before disabling a volume, you must ensure that there are no active file systems associated with the volume. You must first dismount the Oracle ACFS file system before disabling the volume.

voldisable {-a | -G diskgroup -a | -G diskgroup volume}

Flag Description
-a When used without a diskgroup name, specifies all volumes within all diskgroups. When used with a diskgroup name (-G diskgroup -a), specifies all volumes within that diskgroup.
-G Name of the diskgroup containing the volume.
volume Name of the volume to be operated on.

The following example disables volume1 in the dg_data diskgroup.

ASMCMD [+] > voldisable -G dg_data volume1

SQL equivalent for voldisable command is:
SQL> ALTER DISKGROUP disk_group DISABLE VOLUME volume_name;

volenable command
Enables Oracle ADVM volumes in mounted diskgroups. A volume is enabled when it is created.

volenable {-a | -G diskgroup -a | -G diskgroup volume}

Flag Description
-a When used without a diskgroup name, specifies all volumes within all diskgroups. When used with a diskgroup name (-G diskgroup -a), specifies all volumes within that diskgroup.
-G Name of the diskgroup containing the volume.
volume Name of the volume to be operated on.

The following example enables volume1 in the dg_data diskgroup.

ASMCMD [+] > volenable -G dg_data volume1

SQL equivalent for volenable command is:
SQL> ALTER DISKGROUP disk_group ENABLE VOLUME volume_name;

volresize command
Resizes an Oracle ADVM volume.

volresize -G diskgroup -s size [-f] volume

Flag Description
-G Name of the diskgroup containing the volume.
-s New size of the volume in units of K, M, G, or T.
-f Force the shrinking of a volume that is not an Oracle ACFS volume, to suppress the warning message.
volume Name of the volume to be operated on.

If the volume is mounted on a non-Oracle ACFS file system, then dismount the file system first before resizing. If the new size is smaller than the current size, you are warned of possible data corruption. Unless the -f (force) option is specified, you are prompted whether to continue with the operation.

If there is an Oracle ACFS file system on the volume, then you cannot resize the volume with the volresize command. You must use the acfsutil size command, which also resizes the volume and file system.

The following is an example of the volresize command that resizes volume1 in the dg_data diskgroup to 20 gigabytes.

ASMCMD [+] > volresize -G dg_data -s 20G volume1

SQL equivalent for volresize command is:
SQL> ALTER DISKGROUP disk_group RESIZE VOLUME volume_name SIZE n;
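For the ACFS case mentioned above, the resize goes through acfsutil instead; a hedged sketch (the mount point is illustrative):

$ /sbin/acfsutil size 25G /u01/app/acfsmounts/acfs1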

volset command
Sets attributes of an Oracle ADVM volume in mounted diskgroups.

volset -G diskgroup [--usagestring string] [--mountpath mount_path] [--primary {hot|cold}] [--secondary {hot|cold}] volume

Flag Description
-G Name of the diskgroup containing the volume.
--usagestring Optional usage string to tag a volume, which can be up to 30 characters. This string is set to ACFS when the volume is attached to an Oracle ACFS file system and should not be changed.
--mountpath Optional string to tag a volume with its mount path string, which can be up to 1024 characters. This string is set when the file system is mounted and should not be changed.
--primary Intelligent Data Placement specification for primary extents, either hot or cold region.
--secondary Intelligent Data Placement specification for secondary extents, either hot or cold region.
volume Name of the volume to be operated on.

When running the mkfs command to create a file system, the usage field is set to ACFS and the mountpath field is reset to an empty string if it has been set. The usage field should remain at ACFS. When running the mount command to mount a file system, the mountpath field is set to the mount path value to identify the mount point for the file system. After the value is set by the mount command, the mountpath field should not be updated.

The following is an example of a volset command that sets the usage string for a volume that is not associated with a file system.

ASMCMD [+] > volset -G dg_arch --usagestring 'no file system created' volume1
ASMCMD [+] > volset -G dg_data --usagestring 'acfs' volume1

SQL equivalent for volset command is:
SQL> ALTER DISKGROUP disk_group MODIFY VOLUME volume_name USAGE 'usage_string';

volstat command
Reports I/O statistics for Oracle ADVM volumes.

volstat [-G diskgroup] [volume]

The following apply when using the volstat command:
 If the diskgroup name is specified and the volume name is omitted, all volumes are displayed for the named diskgroup.
 If the diskgroup is not specified and the volume name is specified, all mounted diskgroups are searched for the specified volume name.
 If both the diskgroup name and the volume name are omitted, all volumes on all diskgroups are displayed.

The following is an example of the volstat command that displays information about volumes in the dg_data diskgroup.

ASMCMD [+] > volstat -G dg_data
DISKGROUP NUMBER / NAME: 1 / DG_DATA
---------------------------------------
VOLUME_NAME
READS   BYTES_READ     READ_TIME   READ_ERRS
WRITES  BYTES_WRITTEN  WRITE_TIME  WRITE_ERRS
-------------------------------------------------------------
VOLUME1
10085   2290573312     22923       0
1382    5309440        1482        0

SQL equivalent for volstat command is:
SQL> SELECT * FROM V$ASM_VOLUME_STAT;

ASMCMD File Management Commands

lsof command
Lists the open files of the local clients.

lsof [-H] {-G diskgroup|--dbname db|-C instance}

Flag Description
-H Suppresses column headings.
-G List files only from this specified disk group.
--dbname List files only from this specified database.
-C List files only from this specified instance.

ASMCMD [+] > lsof -G dg_data
DB_Name  Instance_Name  Path
orcl     orcl           +dg_data/orcl/controlfile/current.260.691577263
orcl     orcl           +dg_data/orcl/datafile/example.265.691577295
orcl     orcl           +dg_data/orcl/datafile/sysaux.257.691577149
orcl     orcl           +dg_data/orcl/datafile/system.256.691577149
orcl     orcl           +dg_data/orcl/datafile/undotbs1.258.691577151
orcl     orcl           +dg_data/orcl/datafile/users.259.691577151
orcl     orcl           +dg_data/orcl/onlinelog/group_1.261.691577267
orcl     orcl           +dg_data/orcl/onlinelog/group_2.262.691577271
orcl     orcl           +dg_data/orcl/onlinelog/group_3.263.691577275
orcl     orcl           +dg_data/orcl/tempfile/temp.264.691577287

ASMCMD [+] > lsof -C +ASM
DB_Name  Instance_Name  Path
asmvol   +ASM           +data/VOLUME1.271.679226013
asmvol   +ASM           +data/VOLUME2.272.679227351

Other file management commands are cd, cp, du, find, ls, mkalias, pwd, rm and rmalias.

asmcmd command line history
The asmcmd utility does not provide command history with the up-arrow key. With the rlwrap utility installed, that can be done by adding the following entry to the oracle user's profile file:

alias asmcmd='rlwrap asmcmd'

Auditing in Oracle

The auditing mechanism for Oracle is extremely flexible. Oracle stores information that is relevant to auditing in its data dictionary.

Every time a user attempts anything in the database where audit is enabled, the Oracle kernel checks to see if an audit record should be created or updated (in the case of a session record) and generates the record in a table owned by the SYS user called AUD$. This table is, by default, located in the SYSTEM tablespace. This in itself can cause problems with potential denial of service attacks: if the SYSTEM tablespace fills up, the database will hang.

init parameters
Until Oracle 10g, auditing is disabled by default, but can be enabled by setting the AUDIT_TRAIL static parameter in the init.ora file. From Oracle 11g, auditing is enabled for some system level privileges.

SQL> show parameter audit

NAME                   TYPE        VALUE
---------------------- ----------- -------------
audit_file_dest        string      ?/rdbms/audit
audit_sys_operations   boolean     FALSE
audit_syslog_level     string      NONE
audit_trail            string      DB
transaction_auditing   boolean     TRUE

AUDIT_TRAIL can have the following values:

AUDIT_TRAIL={NONE or FALSE| OS| DB or TRUE| DB_EXTENDED| XML |XML_EXTENDED}

The following list provides a description of each value:
 NONE or FALSE -> Auditing is disabled. Default until Oracle 10g.
 DB or TRUE -> Auditing is enabled, with all audit records stored in the database audit trail (AUD$). Default from Oracle 11g.
 DB_EXTENDED -> Same as DB, but the SQL_BIND and SQL_TEXT columns are also populated.
 XML -> Auditing is enabled, with all audit records stored as XML format OS files.
 XML_EXTENDED -> Same as XML, but the SQL_BIND and SQL_TEXT columns are also populated.
 OS -> Auditing is enabled, with all audit records directed to the operating system's file specified by AUDIT_FILE_DEST.

Note: In Oracle 10g Release 1, DB_EXTENDED was used in place of "DB,EXTENDED". The XML options were brought in Oracle 10g Release 2.

The AUDIT_FILE_DEST parameter specifies the OS directory used for the audit trail when the OS, XML and XML_EXTENDED options are used. It is also the location for all mandatory auditing specified by the AUDIT_SYS_OPERATIONS parameter.
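As a hedged sketch of turning on extended database auditing (AUDIT_TRAIL is a static parameter, so a restart is required):

SQL> ALTER SYSTEM SET audit_trail=DB,EXTENDED SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP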

The AUDIT_SYS_OPERATIONS static parameter enables or disables the auditing of operations issued by users connecting with SYSDBA or SYSOPER privileges, including the SYS user. All audit records are written to the OS audit trail.

Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS (no need to run this, if you ran catalog.sql at the time of database creation).

Start Auditing
Syntax of audit command:

audit {statement_option|privilege_option} [by user] [by {session|access}] [whenever {successful|not successful}]

Only the statement_option or privilege_option part is mandatory. The other clauses are optional and enabling them allows audit to be more specific.

There are three levels that can be audited:

Statement level
Auditing will be done at statement level. Statements that can be audited are found in STMT_AUDIT_OPTION_MAP.
SQL> audit table by scott;
Audit records can be found in DBA_STMT_AUDIT_OPTS.
SQL> select * from DBA_STMT_AUDIT_OPTS;

Privilege level
Auditing will be done at privilege level. All system privileges that are found in SYSTEM_PRIVILEGE_MAP can be audited. Specify ALL PRIVILEGES to audit all system privileges.
SQL> audit create tablespace, alter tablespace by all;
Audit records can be found in DBA_PRIV_AUDIT_OPTS.
SQL> select * from DBA_PRIV_AUDIT_OPTS;

Object level
Auditing will be done at object level. These objects can be audited: tables, views, sequences, packages, stored procedures and stored functions.
SQL> audit insert, update, delete on scott.emp by hr;
Audit records can be found in DBA_OBJ_AUDIT_OPTS.
SQL> select * from DBA_OBJ_AUDIT_OPTS;

Audit options

BY SESSION
Specify BY SESSION if you want Oracle to write a single record for all SQL statements of the same type issued and operations of the same type executed on the same schema objects in the same session.

Oracle database can write to an operating system audit file but cannot read it to detect whether an entry has already been written for a particular operation. Therefore, if you are using an operating system file for the audit trail (that is, the AUDIT_TRAIL initialization parameter is set to OS), then the database may write multiple records to the audit trail file even if you specify BY SESSION.

For statement options and system privileges that audit SQL statements other than DDL, you can specify either BY SESSION or BY ACCESS. BY SESSION is the default.

BY ACCESS
Specify BY ACCESS if you want Oracle database to write one record for each audited statement and operation. If you specify statement options or system privileges that audit data definition language (DDL) statements, then the database automatically audits by access regardless of whether you specify the BY SESSION clause or BY ACCESS clause.

WHENEVER [NOT] SUCCESSFUL
Specify WHENEVER SUCCESSFUL to audit only SQL statements and operations that succeed. Specify WHENEVER NOT SUCCESSFUL to audit only SQL statements and operations that fail or result in errors. If you omit this clause, then Oracle Database performs the audit regardless of success or failure.

Examples

Auditing for every SQL statement related to roles (create, alter, drop or set a role):
SQL> AUDIT ROLE;

Auditing for every statement that reads files from the ext_dir database directory:
SQL> AUDIT READ ON DIRECTORY ext_dir;

Auditing for every statement that performs any operation on the hr.emp_seq sequence:
SQL> AUDIT ALL ON hr.emp_seq;

SQL> audit alter sequence by tester by access;
SQL> audit insert, update, delete on hr.emp by hr by session whenever not successful;
SQL> audit update on health by access;
SQL> audit alter materialized view by session;
SQL> audit materialized view by pingme by access whenever successful;
SQL> audit create, alter, drop on currency by xe by session;

View Audit Trail
The audit trail is stored in the base table SYS.AUD$. Its contents can be viewed in the following views:
· DBA_AUDIT_TRAIL
· DBA_OBJ_AUDIT_OPTS
· DBA_PRIV_AUDIT_OPTS
· DBA_STMT_AUDIT_OPTS
· DBA_AUDIT_EXISTS
· DBA_AUDIT_OBJECT
· DBA_AUDIT_SESSION
· DBA_AUDIT_STATEMENT
· AUDIT_ACTIONS
· DBA_AUDIT_POLICIES
· DBA_AUDIT_POLICY_COLUMNS
· DBA_COMMON_AUDIT_TRAIL
· DBA_FGA_AUDIT_TRAIL (FGA_LOG$)
· DBA_REPAUDIT_ATTRIBUTE
· DBA_REPAUDIT_COLUMN
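To inspect what is being captured, here is a minimal query against the standard DBA_AUDIT_TRAIL view (the one-day filter is just illustrative):

SQL> SELECT os_username, username, terminal, action_name, obj_name, returncode
     FROM dba_audit_trail
     WHERE timestamp > SYSDATE - 1
     ORDER BY timestamp;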

The audit trail contains lots of data, but the following are most likely to be of interest:

Username - Oracle Username.
Terminal - Machine that the user performed the action from.
Timestamp - When the action occurred.
Object Owner - The owner of the object that was interacted with.
Object Name - Name of the object that was interacted with.
Action Name - The action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE).

Fine Grained Auditing (FGA), introduced in Oracle9i, allowed recording of row-level changes along with SCN numbers to reconstruct the old data, but it worked for select statements only, not for DML such as update, insert and delete. From Oracle 10g, FGA supports DML statements in addition to selects.

Several fields have been added to both the standard and fine-grained audit trails:
 EXTENDED_TIMESTAMP - A more precise value than the existing TIMESTAMP column.
 PROXY_SESSIONID - Proxy session serial number when an enterprise user is logging in via the proxy method.
 GLOBAL_UID - Global Universal Identifier for an enterprise user.
 INSTANCE_NUMBER - The INSTANCE_NUMBER value from the actioning instance.
 OS_PROCESS - Operating system process id for the oracle process.
 TRANSACTIONID - Transaction identifier for the audited transaction. This column can be used to join to the XID column on the FLASHBACK_TRANSACTION_QUERY view.
 SCN - System change number of the query. This column can be used in flashback queries.
 SQL_BIND - The values of any bind variables if any.
 SQL_TEXT - The SQL statement that initiated the audit action.

The SQL_BIND and SQL_TEXT columns are only populated when the AUDIT_TRAIL=DB_EXTENDED or AUDIT_TRAIL=XML_EXTENDED initialization parameter is set.

Maintenance
The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size. Only users who have been granted specific access to SYS.AUD$ can access the table to select, alter or delete from it. This is usually just the user SYS or any user who has been given permissions. There are two specific roles that allow access to SYS.AUD$ for select and delete, these are DELETE_CATALOG_ROLE and SELECT_CATALOG_ROLE. These roles should not be granted to general users.

Auditing modifications of the data in the audit trail itself can be achieved as follows:
SQL> AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

From Oracle 11g R2, we can change the audit tables' (SYS.AUD$ and SYS.FGA_LOG$) tablespace, and we can periodically delete the audit trail records using the DBMS_AUDIT_MGMT package.
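A hedged sketch of that DBMS_AUDIT_MGMT usage (the AUDIT_TBS tablespace name is illustrative, and INIT_CLEANUP is a one-time setup step):

BEGIN
  -- move the standard audit trail out of the SYSTEM tablespace
  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    audit_trail_location_value => 'AUDIT_TBS');
  -- one-time initialization of the cleanup infrastructure
  DBMS_AUDIT_MGMT.INIT_CLEANUP(
    audit_trail_type         => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    default_cleanup_interval => 24);
  -- purge the audit trail records
  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    use_last_arch_timestamp => FALSE);
END;
/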

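Relating back to the Fine Grained Auditing notes above, this is a minimal, hypothetical DBMS_FGA policy sketch (the schema, table, policy name and condition are illustrative):

BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'HR',
    object_name     => 'EMPLOYEES',
    policy_name     => 'SAL_AUDIT',
    audit_condition => 'SALARY > 10000',
    audit_column    => 'SALARY',
    statement_types => 'SELECT,UPDATE');  -- DML supported from Oracle 10g
END;
/
-- captured rows show up in DBA_FGA_AUDIT_TRAIL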
To delete all audit records from the audit trail:
SQL> DELETE FROM sys.aud$;

Disabling Auditing
The NOAUDIT statement turns off the various audit options of Oracle. Use it to reset statement, privilege and object audit options. A NOAUDIT statement that sets statement and privilege audit options can include the BY USER option to specify a list of users to limit the scope of the statement and privilege audit options.

SQL> NOAUDIT session;
SQL> NOAUDIT session BY scott;
SQL> NOAUDIT DELETE ON emp;
SQL> NOAUDIT SELECT TABLE, INSERT TABLE, DELETE TABLE, EXECUTE PROCEDURE;
SQL> NOAUDIT ALL;
SQL> NOAUDIT ALL PRIVILEGES;
SQL> NOAUDIT ALL ON DEFAULT;

Background Processes in Oracle

To maximize performance and accommodate many users, a multiprocess Oracle database system uses background processes. Background processes are the processes running behind the scenes and are meant to perform certain maintenance activities or to deal with abnormal conditions arising in the instance. They consolidate functions that would otherwise be handled by multiple database programs running for each user process. Background processes asynchronously perform I/O and monitor other Oracle database processes to provide increased parallelism for better performance and reliability.

Each background process is meant for a specific purpose and its role is well defined. Background processes are started automatically when the instance is started. Not all background processes are mandatory for an instance: some are mandatory and some are optional. Mandatory background processes are DBWn, LGWR, CKPT, SMON, PMON and RECO. All other processes are optional and will be invoked if the particular feature is activated. Oracle background processes are visible as separate operating system processes in Unix/Linux. In Windows, these run as separate threads within the same service. Any issues related to background processes should be monitored and analyzed from the trace files generated and the alert log.

To find out background processes from the database:
SQL> select SID, PROGRAM from v$session where TYPE='BACKGROUND';

To find out background processes from the OS:
$ ps -ef|grep ora_|grep SID

Mandatory Background Processes in Oracle
If any one of these 6 mandatory background processes is killed/not running, the instance will be aborted.

1) Database Writer (maximum 20) DBW0-DBW9, DBWa-DBWj
Database Writer (or Dirty Buffer Writer) process does multi-block writing to disk asynchronously; scattered or randomly. DBWn will be invoked in the following scenarios:

 When the dirty blocks in SGA reach a threshold value, oracle calls DBWn.
 When the database is shutting down with some dirty blocks in the SGA, oracle calls DBWn.
 DBWn has a time out value (3 seconds by default) and it wakes up whether there are any dirty blocks or not.
 When a checkpoint is issued, oracle calls DBWn.
 When a server process cannot find a clean reusable buffer after scanning a threshold number of buffers, oracle calls DBWn.
 When a huge table wants to enter into the SGA and oracle cannot find enough free space, it decides to flush out LRU blocks, which may happen to be dirty blocks. Before flushing out the dirty blocks, oracle calls DBWn.
 Whenever a log switch is occurring, as the redolog file is moving from CURRENT to ACTIVE stage, oracle calls DBWn and synchronizes all the dirty blocks in the database buffer cache to the respective datafiles.
 When a table is DROPped or TRUNCATEd.
 When a tablespace is going to OFFLINE/READ ONLY/BEGIN BACKUP.
 When an Oracle RAC ping request is made.

One DBWn process is adequate for most systems. Multiple database writers can be configured by the initialization parameter DB_WRITER_PROCESSES, depending on the number of CPUs allocated to the instance. Having more than one DBWn only makes sense if each DBWn has been allocated its own list of blocks to write to disk. This is done through the initialization parameter DB_BLOCK_LRU_LATCHES. If this parameter is not set correctly, multiple DB writers can end up contending for the same block list. The possible multiple DBWR processes in RAC must be coordinated through the locking and global cache processes to ensure efficient processing is accomplished.
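A quick, hedged way to see the writer configuration and which DBWn processes are actually running (standard parameter and view):

SQL> show parameter db_writer_processes
SQL> SELECT name, description FROM v$bgprocess
     WHERE paddr <> '00' AND name LIKE 'DBW%';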

2) Log Writer (maximum 1) LGWR
LGWR writes redo data from the redolog buffers to the (online) redolog files, sequentially. A redolog file contains changes to any datafile; the content of the redolog file is file id, block id and new content. LGWR will be invoked more often than DBWn, as log files are really small when compared to datafiles (KB vs GB). For every small update we don't want to open huge gigabytes of datafiles; instead we write to the log file. Changes to actual data blocks are deferred until a convenient time (the Fast-Commit mechanism).

When the LGWR is writing to a particular redolog file, that file is said to be in CURRENT status. If the file is filled up completely, then a log switch takes place and the LGWR starts writing to the second file (this is the reason every database requires a minimum of 2 redolog groups). The file which is filled up now goes from CURRENT to ACTIVE. A newly created redolog file will be in the UNUSED state. A redolog file has three stages, CURRENT, ACTIVE and INACTIVE, and this is a cyclic process. Log writer will write synchronously to the redolog groups in a circular fashion.

When a transaction is committed, a System Change Number (SCN) is generated and tagged to it. Log writer puts a commit record in the redolog buffer and writes it to disk immediately, along with the transaction's redo entries. The transaction is not guaranteed, although we said commit, unless we receive the acknowledgement. LGWR can also use group commits (multiple committed transactions' redo entries taken together) to write to the redologs when a database is undergoing heavy write operations.

LGWR will be invoked in the following scenarios:
 LGWR is invoked whenever 1/3rd of the redo buffer is filled up.
 Whenever the log writer times out (3 sec).
 Whenever 1MB of redolog buffer is filled (this means that there is no sense in making the redolog buffer more than 3MB).
 When a transaction is completed (either committed or rollbacked), oracle calls the LGWR and synchronizes the log buffers to the redolog files, and then only passes the acknowledgement back to the user.
 When DBWn signals the writing of redo records to disk. All redo records associated with changes in the block buffers must be written to disk first (the write-ahead protocol). While writing dirty buffers, if the DBWn process finds that some redo information has not been written, it signals the LGWR to write the information and waits until the control is returned.
 Whenever a checkpoint event occurs.
 When shutting down the database.

Sometimes, when additional redolog buffer space is required, the LGWR will even write uncommitted redolog entries to release the held buffers. If any damage is identified with a redolog file, the log writer will log an error in the LGWR trace file and the alert log. In RAC, each RAC instance has its own LGWR process that maintains that instance's thread of redo logs.
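The CURRENT/ACTIVE/INACTIVE cycle described above can be observed directly from the standard V$LOG view:

SQL> SELECT group#, sequence#, members, status FROM v$log;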

3) Checkpoint (maximum 1) CKPT
Checkpoint is a background process which triggers the checkpoint event, to synchronize all database files with the checkpoint information. It ensures data consistency and faster database recovery in case of a crash. When a checkpoint occurs, it will invoke DBWn and update the SCN block of all the datafiles and the control file with the current SCN. This SCN is called the checkpoint SCN.

A checkpoint event can occur in the following conditions:
 Whenever the database buffer cache is filled up.
 Whenever it times out (3 seconds until 9i, 1 second from 10g).
 A log switch occurred. This is done by LGWR.
 Whenever a manual log switch is done.
  SQL> ALTER SYSTEM SWITCH LOGFILE;
 Manual checkpoint.
  SQL> ALTER SYSTEM CHECKPOINT;
 Graceful shutdown of the database.
 Whenever the BEGIN BACKUP command is issued.
 When the time specified by the initialization parameter LOG_CHECKPOINT_TIMEOUT (in seconds) exists between the incremental checkpoint and the tail of the log.
 When the number of OS blocks specified by the initialization parameter LOG_CHECKPOINT_INTERVAL exists between the incremental checkpoint and the tail of the log.
 When the number of buffers specified by the initialization parameter FAST_START_IO_TARGET required to perform roll-forward is reached.
 Oracle 9i onwards, when the time specified by the initialization parameter FAST_START_MTTR_TARGET (in seconds) is reached; it specifies the time required for a crash recovery. The parameter FAST_START_MTTR_TARGET replaces LOG_CHECKPOINT_INTERVAL and FAST_START_IO_TARGET, but these parameters can still be used.

4) System Monitor (maximum 1) SMON
If the database crashed (power failure), then the next time we restart the database, SMON observes that last time the database was not shut down gracefully. Hence it requires some recovery, which is known as INSTANCE CRASH RECOVERY. When performing crash recovery, before the database is completely open, if SMON finds any transaction committed but not found in the datafiles, it will now be applied from the redolog files to the datafiles. If SMON observes some uncommitted transaction which has already updated the table in the datafile, it is going to be treated as an in-doubt transaction and will be rolled back with the help of the before image available in the rollback segments.

 SMON also cleans up temporary segments that are no longer in use.
 It also coalesces contiguous free extents in dictionary managed tablespaces that have PCTINCREASE set to a non-zero value.
 In a RAC environment, the SMON process of one instance can perform instance recovery for other instances that have failed.
 SMON wakes up about every 5 minutes to perform housekeeping activities.
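Tying back to the checkpoint tuning parameters above, a hedged sketch of setting a recovery-time target and checking the estimate (standard parameter and view):

SQL> ALTER SYSTEM SET FAST_START_MTTR_TARGET=300;  -- target 5 minutes
SQL> SELECT target_mttr, estimated_mttr FROM v$instance_recovery;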

5) Process Monitor (maximum 1) PMON
PMON is responsible for performing recovery if a user process fails. It will roll back uncommitted transactions. If a client has an open transaction which is no longer active (the client session is closed), then PMON comes into the picture and that transaction becomes an in-doubt transaction, which will be rolled back. If the old session locked any resources, those will be unlocked by PMON. PMON is responsible for cleaning up the database buffer cache and freeing resources that were allocated to a process.

PMON also registers information about the instance and dispatcher processes with the Oracle (network) listener. In RAC, PMON's role as service registration agent is particularly important. PMON also checks the dispatcher and server processes and restarts them if they have failed. PMON wakes up every 3 seconds to perform housekeeping activities.

6) Recoverer (maximum 1) RECO [Mandatory from Oracle 10g]
This process is intended for recovery in distributed databases. The distributed transaction recovery process finds pending distributed transactions and resolves them. Pending distributed transactions are two-phase commit transactions involving multiple databases. All in-doubt transactions are recovered by this process in the distributed database setup.

The database where the transaction started is normally the coordinator. It will send a request to the other databases involved in the two-phase commit asking if they are ready to commit. If a negative request is received from one of the other sites, the entire transaction will be rolled back. Otherwise, the distributed transaction will be committed on all sites. However, in certain circumstances, there is a chance that an error (network related or otherwise) causes the two-phase commit transaction to be left in a pending state (i.e. not committed or rolled back). It's the role of the RECO process to liaise with the coordinator to resolve the pending two-phase commit transaction: RECO will connect to the remote database to resolve pending transactions, and will either commit or roll back the transaction.
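The in-doubt transactions that RECO works on can be listed with the standard DBA_2PC_PENDING view; a minimal sketch:

SQL> SELECT local_tran_id, state, mixed, commit# FROM dba_2pc_pending;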

Optional Background Processes in Oracle

Archiver (maximum 10) ARC0-ARC9
The ARCn process is responsible for writing the online redolog files to the mentioned archive log destination after a log switch has occurred. ARCn is present only if the database is running in archivelog mode and automatic archiving is enabled. Unless ARCn completes the copying of a redolog file, it is not released to the log writer for overwriting. The log writer process is responsible for starting multiple ARCn processes when the workload increases.

The number of archiver processes that can be invoked initially is specified by the initialization parameter LOG_ARCHIVE_MAX_PROCESSES (by default 2, max 10). The actual number of archiver processes in use may vary based on the workload.

Archive log files are used for media recovery (in case of a hard disk failure) and for maintaining an Oracle standby database via log shipping. ARCH processes, running on the primary database, select archived redo logs and send them to the standby database, and archive the standby redo logs applied by the managed recovery process (MRP). In RAC, the various ARCH processes can be utilized to ensure that copies of the archived redo logs for each instance are available to the other instances in the RAC setup should they be needed for recovery.

Coordinated Job Queue Processes (maximum 1000) CJQ0/Jnnn
Job queue processes carry out batch processing, scheduled by the Oracle job queue. All scheduled jobs are executed by these processes. These jobs could be PL/SQL statements or procedures on an Oracle instance. These processes are also useful in refreshing materialized views.

CJQ0 is Oracle's dynamic job queue coordinator. The job queue controller process wakes up periodically and checks the job log. It periodically selects jobs (from JOB$) that need to be run. If a job is due, the coordinator process dynamically spawns job queue slave processes (J000-J999) to handle the jobs.

The initialization parameter JOB_QUEUE_PROCESSES specifies the maximum job processes that can be run concurrently. Prior to 11gR2 the default value is 0, and from 11gR2 the default value is 1000. From Oracle 11g Release 2, DBMS_JOB and DBMS_SCHEDULER work without setting JOB_QUEUE_PROCESSES.
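A quick sketch of adjusting the job queue limit and confirming it (standard parameter syntax):

SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=10;
SQL> show parameter job_queue_processes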

LISTENER
The LISTENER process listens for connection requests on a specified port and passes these requests to either a dispatcher process if MTS is configured, or to a dedicated process if MTS is not used. The LISTENER process is responsible for load balancing and failover in case a RAC instance fails or is overloaded.

Dedicated Server
Dedicated server processes are used when MTS is not used. Each user process gets a dedicated connection to the database. These user processes also handle disk reads from database datafiles into the database block buffers.

CALLOUT Listener
Used by internal processes to make calls to externally stored procedures.

Lock processes (maximum 10) LCK0-LCK9
The instance locks that are used to share resources between instances are held by the lock processes.

Lock Monitor (maximum 1) LMON
Lock monitor manages global locks and resources. It handles the redistribution of instance locks whenever instances are started or shut down. Lock monitor also recovers instance lock information prior to the instance recovery process. Lock monitor co-ordinates with the Process Monitor (PMON) to recover dead processes that hold instance locks.

Lock Manager Daemon (maximum 10) LMDn
LMDn processes manage instance locks that are used to share resources between instances. LMDn processes also handle deadlock detection and remote lock requests.

Global Cache Service (LMS)
In an Oracle Real Application Clusters environment, this process manages resources and provides inter-instance resource control.

Block Server Process (maximum 10) BSP0-BSP9
Block server processes have to do with providing a consistent read image of a buffer that is requested by a process of another instance.

Queue Monitor (maximum 10) QMN0-QMN9
This is the advanced queuing time manager process. QMNn monitors the message queues. QMN is used to manage Oracle Streams Advanced Queuing.

Event Monitor (maximum 1) EMN0/EMON
This process is also related to advanced queuing, and is meant for allowing a publish/subscribe style of messaging between applications.

Dispatcher (maximum 1000) Dnnn
Intended for multi threaded server (MTS) setups. Dispatcher processes listen to and receive requests from connected sessions and place them in the request queue for further processing. Dispatcher processes also pick up outgoing responses from the result queue and transmit them back to the clients. Dnnn are mediators between the client processes and the shared server processes. The maximum number of dispatcher processes can be specified using the initialization parameter MAX_DISPATCHERS.

Shared Server Processes (maximum 1000) Snnn
Intended for multi threaded server (MTS) setups. These processes pick up requests from the call request queue, process them and then return the results to a result queue. These user processes also handle disk reads from database datafiles into the database block buffers. The number of shared server processes to be created at instance startup can be specified using the initialization parameter SHARED_SERVERS. The maximum number of shared server processes can be specified by MAX_SHARED_SERVERS.

Parallel Execution/Query Slaves (maximum 1000) Pnnn
These processes are used for parallel processing. They can be used for parallel execution of SQL statements or recovery. The maximum number of parallel processes that can be invoked is specified by the initialization parameter PARALLEL_MAX_SERVERS.

Trace Writer (maximum 1) TRWR
Trace writer writes trace files from an Oracle internal tracing facility.

Input/Output Slaves (maximum 1000) Innn
These processes are used to simulate asynchronous I/O on platforms that do not support it. The initialization parameter DBWR_IO_SLAVES is set for this purpose.

Wakeup Monitor Process (maximum 1) WMON
This process was available in older versions of Oracle to alarm other processes that are suspended while waiting for an event to occur. This process is obsolete and has been removed.

Remote File Server process RFS
The remote file server process, in a Data Guard environment, runs on the standby database and receives archived redo logs from the primary database. The RFS process communicates with the logical standby process (LSP) to coordinate and record which files arrived.

Logical Standby Process LSP
The logical standby process is the coordinator process for a set of processes that concurrently read, prepare, build, analyze, and apply completed SQL transactions from the archived redo logs. The LSP also maintains metadata in the database.

Managed Recovery Process MRP
In a Data Guard environment, this managed recovery process will apply archived redo logs to the standby database.

LGWR Network Server process LNS
In Data Guard, the LNS process performs actual network I/O and waits for each network I/O to complete. Each LNS has a user configurable buffer that is used to accept outbound redo data from the LGWR process. The NET_TIMEOUT attribute is used only when the LGWR process transmits redo data using a LGWR Network Server (LNS) process.

Dataguard Monitor (maximum 1) DMON
The Dataguard broker process. DMON is started when Dataguard is started.

New Background Processes in Oracle 10g

Memory Manager (maximum 1) MMAN
MMAN dynamically adjusts the sizes of the SGA components like buffer cache, large pool, shared pool and java pool, and serves as the SGA memory broker. It is a new process added to Oracle 10g as part of automatic shared memory management.

Memory Monitor (maximum 1) MMON
MMON monitors the SGA and performs various manageability related background tasks. MMON, the Oracle 10g background process, is used to collect statistics for the Automatic Workload Repository (AWR).

Memory Monitor Light (maximum 1) MMNL
New background process in Oracle 10g. This process performs frequent and lightweight manageability-related tasks, such as session history capture and metrics computation.

Fetch Archive Log (FAL) Server
Services requests for archived redo logs from FAL clients running on multiple standby databases. Multiple FAL servers can be run on a primary database, one for each FAL request.

Fetch Archive Log (FAL) Client
Pulls archived redo log files from the primary site. Initiates transfer of archived redo logs when it detects a gap sequence.

Recovery Writer (maximum 1) RVWR
This is responsible for writing flashback logs (to the FRA).

Change Tracking Writer (maximum 1) CTWR
CTWR (Change Tracking Writer) is the background process responsible for tracking changed blocks. CTWR is useful in RMAN for optimized incremental backups using block change tracking (faster incremental backups), which uses a file (named the block change tracking file).
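A hedged sketch of turning on the block change tracking that CTWR maintains (the file path is illustrative):

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
     USING FILE '/u01/app/oracle/bct/change_trk.f';
SQL> SELECT status, filename FROM v$block_change_tracking;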

File Monitor (FMON)
The database communicates with the mapping libraries provided by storage vendors through an external non-Oracle Database process that is spawned by a background process called FMON. FMON is responsible for managing the mapping information. When you specify the FILE_MAPPING initialization parameter for mapping datafiles to physical devices on a storage subsystem, the FMON process is spawned.

Re-Balance RBAL
RBAL is the ASM related process that performs rebalancing of disk resources controlled by ASM.

Actual Rebalance ARBx
ARBx processes carry out the actual rebalance work; they are configured by ASM_POWER_LIMIT.

ASMB
This ASMB process is used to provide information to and from the cluster synchronization services used by ASM to manage the disk resources. It's also used to update statistics and provide a heart beat mechanism.

New Background Processes in Oracle 11g
 ACMS - Atomic Controlfile to Memory Server
 DBRM - Database Resource Manager
 DIA0 - Diagnosibility process 0
 DIAG - Diagnosibility process
 FBDA - Flashback Data Archiver
 GTX0 - Global Transaction Process 0
 KATE - Konductor (Conductor) of ASM Temporary Errands
 MARK - Mark Allocation unit for Resync Koordinator (coordinator)
 SMCO - Space Manager
 VKTM - Virtual Keeper of TiMe process
 W000 - Space Management Worker Processes
 ABP - Autotask Background Process

Autotask Background Process (ABP)
ABP translates tasks into jobs for execution by the scheduler. It determines the list of jobs that must be created for each maintenance window. It is spawned by the MMON background process at the start of the maintenance window, and stores task execution history in the SYSAUX tablespace.

Dynamic Intimate Shared Memory (DISM)
By default, Oracle uses intimate shared memory (ISM) instead of standard System V shared memory on the Solaris Operating system. When a shared memory segment is made into an ISM segment, it is mapped using large pages and the memory for the segment is locked (i.e., it cannot be paged out). This greatly reduces the overhead due to process context switches, which improves Oracle's performance linearity under load.

Data Dictionary views Vs V$ views in Oracle

Here are the differences between data dictionary views and V$ views.

Data Dictionary views                             V$ views
Data will not be lost even after                  Data will be lost if instance is
instance is shutdowned                            shutdowned (some are)
Will be accessible only if instance               Will be accessible even if instance is in
is OPENED                                         mount or nomount stage (STARTED)
Data dictionary view names are plural             V$ view names are singular

Datapump utility in Oracle

Oracle Datapump was brought in Oracle Database 10g, as a replacement for the original Export and Import utilities. Oracle Datapump provides high speed, parallel, bulk data and metadata movement of Oracle database contents. It's a server-side replacement for the original Export and Import utilities.

A new public interface package, DBMS_DATAPUMP, provides a server-side infrastructure for fast data and metadata movement. From Oracle Database 10g, new Datapump Export (expdp) and Import (impdp) clients that use this interface have been provided. Oracle recommends that customers use these new Datapump Export and Import clients rather than the Original Export and Original Import clients, since the new utilities have vastly improved performance and greatly enhanced functionality.

Datapump is available on the Standard Edition, Enterprise Edition and Personal Edition. However, the parallel capability is only available on Oracle 10g and Oracle 11g Enterprise Editions. Datapump will make use of STREAMS_POOL_SIZE.

Datapump is included on all the platforms supported by Oracle 10g, Oracle 11g, etc. The Datapump system requirements are the same as the standard Oracle Database 10g requirements. Datapump doesn't need a lot of additional system or database resources, but the time to extract and treat the information will be dependent on the CPU and memory available on each machine. If system resource consumption becomes an issue while a Datapump job is executing, the job can be dynamically throttled to reduce the number of execution threads.

Using the Direct Path method of unloading, a single stream of data unload is about 2 times faster than normal Export, because the Direct Path API has been modified to be even more efficient. Depending on the level of parallelism, the level of improvement can be much more. A single stream of data load is 15-45 times faster than normal Import. The reason it is so much faster is that Conventional Import uses only conventional mode inserts, whereas Datapump Import uses the Direct Path method of loading. As with Export, the job can be parallelized for even more improvement.

Original Export and Import Limitations
 Design not scalable to large databases
 Slow unloading and loading of data, no parallelism
 Difficult to monitor job progress
 Limited flexibility in object selection
 No callable API
 Difficult to maintain
 Non-restartable
 Client-side, single threaded execution
 Limited tuning mechanisms
 Limited object filtering

Datapump Features
1. Writes either
 Direct Path unloads
 External tables (when the table is part of a cluster, has a LOB, etc.)
2. Command line interface
3. Writing to external tables
4. DBMS_DATAPUMP - Datapump API
5. DBMS_METADATA - Metadata API
6. Checkpoint / Job Restart
 Job progress recorded in the Master Table. All stopped Datapump jobs can be restarted without loss of data, as long as the master table and dump file set remain undisturbed while the job is stopped. It doesn't matter if the job was stopped voluntarily by a user or if the stoppage was involuntary due to a crash, power outage, etc.
 May be stopped and restarted later
 Abnormally terminated job is also restartable
 Current objects can be skipped on restart if problematic
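A hedged sketch of that stop/restart cycle (the job name is illustrative; actual job names can be found in DBA_DATAPUMP_JOBS):

$ expdp hr/hr ATTACH=SYS_EXPORT_SCHEMA_01
Export> STOP_JOB=IMMEDIATE

-- later, reattach and resume
$ expdp hr/hr ATTACH=SYS_EXPORT_SCHEMA_01
Export> START_JOB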

7. Dumpfile Set Management
 Directory based: e.g. DMPDIR:export01.dmp, where DMPDIR is an external directory
 Can specify maximum size of each dumpfile
 Can dynamically add dumpfiles to jobs: if a job ran out of space, can use the ADD_FILE command and specify a FILESIZE value
 Wildcard file specs supported. Wildcard file support makes it easy to spread the I/O load over multiple spindles, e.g.: DUMPFILE=dmp1dir:full1%u.dmp, dmp2dir:full2%u.dmp
 Dumpfile set coherency automatically maintained
 Dump file compression of metadata: metadata is compressed before being written to the dumpfile set (COMPRESSION=METADATA_ONLY). In Oracle Database 11g, this compression capability has been extended so that we can now compress table data on export. Datapump compression is an inline operation, so the reduced dumpfile size means a significant savings in disk space. It is automatically uncompressed during Import: Datapump compression is fully inline on the import side as well, so there is no need to uncompress a dumpfile before importing it.
 Datapump supplies encryption options for more flexible and robust security.

8. Better Job Monitoring and Control
 Can detach from and attach to running jobs from any location. Multiple clients can attach to a job to see what is going on. Clients may also detach from an executing job without affecting it.
 Initial job space estimate and overall percent done. At Export time, the approximate size of the job is estimated before it gets underway. The default method for determining this is to estimate the size of a partition by counting the number of blocks currently allocated to it. If tables have been analyzed, statistics can also be used, which should provide a more accurate estimate. The user gets an estimate of how much dump file space will be consumed for the operation. Once the Export begins, the user can get a status of the job by seeing the percentage of how far along he is in the job. He can then extrapolate the time required to get the job completed.
 Job state and description
 Per-worker status showing current object and percent done
 Enterprise Manager interface available. The jobs can be monitored from any location.

9. Interactive Mode for expdp and impdp clients
 PARALLEL: add or remove workers
 ADD_FILE: add dump files and wildcard specs
 STATUS: get detailed per-worker status, can change reporting interval
 STOP_JOB {=IMMEDIATE}: stop the job, leaving it restartable; immediate doesn't wait for workers to finish current work items
 START_JOB: restart a previously stopped job
 KILL_JOB: stop the job and delete all its resources, leaving it unrestartable; the master table and dump files are deleted
 CONTINUE: leave interactive mode, continue logging
 EXIT: exit client, leave job running

All modes of operation are supported: full, schema, table, tablespace, and transportable tablespace.
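Pulling several of the features above together, a hedged expdp sketch (the directory objects, schema and file names are illustrative):

$ expdp system/password SCHEMAS=hr DIRECTORY=DMPDIR \
    DUMPFILE=dmp1dir:hr1%u.dmp,dmp2dir:hr2%u.dmp FILESIZE=2G \
    PARALLEL=4 COMPRESSION=METADATA_ONLY LOGFILE=hr_exp.log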

10. Network Mode
Datapump Export and Import both support a network mode in which the job's source is a remote Oracle instance.
Network Export
– Unload a remote database to a local dump file set
– Allows export of read-only databases for archiving
Because Datapump maintains a Master Control Table and must perform database writes, Datapump can't directly Export a read-only database. Network mode allows the user to export read-only databases: the Datapump Export job runs locally on a read/write instance and extracts the data and metadata from the remote read-only instance.
Network Import
– Overlap execution of extract and load
– No intermediate dump files
This is an overlap of unloading the data, using Export, and loading the data, using Import, so those processes don't have to be serialized. We don't have to worry about allocating file space because there are no intermediate dump files.
Both network export and import use database links to communicate with the remote source; a database link is used for the network. I/O servers do not operate remotely, so first level parallelism is supported for both network export and import, but second level, intra-partition parallelism is not supported in network operations.

11. Fine-Grained Object Selection
 All object types are supported. With the new EXCLUDE and INCLUDE parameters, a Datapump job can include or exclude any type of object and any subset of objects within a type.
 Exclude parameter: specified object types are excluded from the operation
 Include parameter: only the specified object types are included
 Both take an optional name filter for even finer granularity, e.g. INCLUDE PACKAGE: "LIKE 'PAYROLL%'" or EXCLUDE TABLE: "in ('FOO','BAR',...)"
e.g.:
EXCLUDE=function
EXCLUDE=procedure
EXCLUDE=package:"like 'PAYROLL%' "
would exclude all functions, procedures, and packages with names starting with 'PAYROLL' from the job. Using INCLUDE instead of EXCLUDE above would include the functions, procedures, and packages with names starting with 'PAYROLL'.

12. DDL Transformations
Easy with XML: because object metadata is stored as XML in the dump file set, it is easy to apply transformations when DDL is being formed (via XSL-T) during import.
 REMAP_SCHEMA -> provides the old 'FROMUSER / TOUSER' capability to change object ownership.
 REMAP_TABLESPACE -> allows objects to be moved from one tablespace to another. This changes the tablespace definition as well.
 REMAP_DATAFILE -> useful when moving databases across platforms that have different file system semantics.
 Segment and storage attributes can be suppressed -> the TRANSFORM parameter can also be used so that storage clauses are not generated in the DDL. This is useful if the storage characteristics of the target instance are very different from those of the source.
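As a sketch combining the network mode, fine-grained selection and remapping described above (the link name srcdb_link, the connect string and the schema names are assumptions for illustration):

SQL> create database link srcdb_link connect to system identified by manager using 'SRCDB';

$ impdp system/manager NETWORK_LINK=srcdb_link SCHEMAS=hr REMAP_SCHEMA=hr:hrtest EXCLUDE=INDEX LOGFILE=netimp.log
==> importing the HR schema straight from the remote instance over the database link, with no intermediate dump files.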

Datapump Benefits (advantages over normal export & import)
o Restartable
o Improved control
o Files will be created on the server, not on the client side
o Parallel execution
o Automated performance tuning
o Simplified monitoring
o Improved object filtering
o Dump will be compressed
o Data can be encrypted (in Oracle 11g or later)
o Remap of data during export or import (in 11g or later)
o We can export one or more partitions of a table without having to move the entire table (in 11g or later)
o XML schemas and XMLType columns are supported for both export and import (in 11g or later)
o Using the Direct Path method of unloading or loading data, a single stream of Datapump export (unload) is approximately 2 times faster than original Export, because the Direct Path API has been modified to be even more efficient. Depending on the level of parallelism, the level of improvement can be much more.
o Original Import uses only conventional mode inserts, so a single stream of Datapump Import is 10-45 times faster than normal Import. As with Export, the job's single stream can be changed to parallel streams for even more improvement.
o Since the jobs are completed much more quickly than before, production systems have less downtime.
o With Datapump, it is much easier for the DBA to manage and monitor jobs: the DBA can monitor a job from multiple locations and know how far along it is, what objects are being worked on, and how much there is left to go. The DBA can also affect the job's operation, i.e. adjust its resource consumption, abort it, and stop it for later restart.
o We can dynamically throttle the number of threads of execution throughout the lifetime of the job. There is an interactive command mode where we can adjust the level of parallelism. For example, during a long-running job, we can start up a job during the day with PARALLEL=2, and then increase it at night to a higher level.
o Datapump is publicly available as a PL/SQL package (DBMS_DATAPUMP), so customers can write their own data movement utilities if so desired. The metadata capabilities of Datapump are also available as a separate PL/SQL package, DBMS_METADATA.
o All the Oracle database data types are supported via Datapump's two data movement mechanisms, Direct Path and External Tables.
o With Datapump, there is much more flexibility in selecting objects for unload and load operations. We can now unload any subset of database objects (such as functions, packages, and procedures) and reload them on the target platform.
o Datapump requires no special tuning; it runs optimally "out of the box". Original Export and (especially) Import require careful tuning to achieve optimum results. There are no Datapump performance tuning parameters other than the ability to dynamically adjust the degree of parallelism.
o While importing, if the destination schema does not exist, Datapump will create the user and import the objects.

o Almost all database object types can be excluded or included in an operation using the new Exclude and Include parameters.
o With Datapump, it is now possible to change the definition of some objects as they are created at import time. For example, we can remap the source datafile name to the target datafile name in all DDL statements where the source datafile is referenced. This is really useful if we are moving across platforms with different file system syntax.
o Datapump handles all the necessary compatibility issues between hardware platforms and operating systems.
o Datapump supports the Flashback infrastructure, so we can perform an export and get a dumpfile set that is consistent with a specified point in time or SCN.
o We can use the "ESTIMATE ONLY" command to see how much disk space is required for the job's dump file set before we start the operation.
o Jobs can be monitored from any location. Clients may also detach from an executing job without affecting it.
o Every Datapump job creates a Master Table in which the entire record of the job is maintained. The Master Table is the directory to the job, so if a job is stopped for any reason, it can be restarted at a later point in time, without losing any data. Whenever Datapump export or import is running, Oracle will create a table with the JOB_NAME, which will be deleted once the job is done. From this table, Oracle will find out how much of the job has been completed and from where to continue, etc.
o Oracle Datapump supports Oracle Apps 11i.
o We can either use the command line interface or the Oracle Enterprise Manager web-based GUI interface.

Datapump Vs SQL*Loader
We can use SQL*Loader to load data from external files into tables of an Oracle database. Many customers use SQL*Loader on a daily basis to load files (e.g. financial feeds) into their databases. Datapump Export and Import may be used less frequently, but for very important tasks, such as migrating between platforms, moving data between development, test, and production databases, logical database backup, and for application deployment throughout a corporation.

Datapump Vs Transportable Tablespaces
We can use Transportable Tablespaces when we want to move an entire tablespace of data from one Oracle database to another. Transportable Tablespaces allows Oracle data files to be unplugged from a database, moved or copied to another location, and then plugged into another database. Moving data using Transportable Tablespaces can be much faster than performing either an export or import of the same data, because transporting a tablespace only requires the copying of datafiles and integrating the tablespace dictionary information. Even when transporting a tablespace, Datapump Export and Import are still used to handle the extraction and recreation of the metadata for that tablespace.

Datapump Disadvantages
• Can't use UNIX pipes
• Can't run as SYS (/ as sysdba)

Related Views
DBA_DATAPUMP_JOBS
USER_DATAPUMP_JOBS
DBA_DATAPUMP_SESSIONS
DBA_DIRECTORIES
DATABASE_EXPORT_OBJECTS
SCHEMA_EXPORT_OBJECTS
TABLE_EXPORT_OBJECTS
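Because every job is recorded in its master table and exposed through the views above, a running job can be checked from any session; a minimal monitoring sketch (the view and column names are standard, the LIKE pattern is illustrative):

SQL> select job_name, operation, job_mode, state, attached_sessions from dba_datapump_jobs;
SQL> select opname, sofar, totalwork, round(sofar/totalwork*100, 2) pct_done
     from v$session_longops
     where opname like 'SYS_EXPORT%' and totalwork > 0 and sofar <> totalwork;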

Datapump Export & Import utilities in Oracle
(Continuation of Datapump)

With Datapump, we can do all exp/imp activities, except incremental backups.

expdp utility
The Data Pump export utility provides a mechanism for transferring data objects between Oracle databases.

Format: expdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
USERID must be the first parameter on the command line. This user must have read & write permissions on the DIRECTORY, e.g. create directory extdir as '/path/';

$ expdp help=y

Keyword Description (Default)
ATTACH Attach to an existing job, e.g. ATTACH [=job name].
COMPRESSION Reduce the size of a dumpfile. Valid keyword values are: ALL, DATA_ONLY, (METADATA_ONLY) and NONE.
CONTENT Specifies data to unload. Valid keyword values are: (ALL), DATA_ONLY and METADATA_ONLY.
DATA_OPTIONS Data layer flags. Valid value is: XML_CLOBS - write XML datatype in CLOB format.
DIRECTORY Directory object to be used for dumpfiles and logfiles. (DATA_PUMP_DIR)
DUMPFILE List of destination dump files (EXPDAT.DMP), e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ENCRYPTION Encrypt part or all of a dump file. Valid keyword values are: ALL, DATA_ONLY, ENCRYPTED_COLUMNS_ONLY, METADATA_ONLY and NONE.
ENCRYPTION_ALGORITHM Specify how encryption should be done. Valid keyword values are: (AES128), AES192 and AES256.
ENCRYPTION_MODE Method of generating encryption key. Valid keyword values are: DUAL, PASSWORD and (TRANSPARENT).
ENCRYPTION_PASSWORD Password key for creating encrypted data within a dump file.
ESTIMATE Calculate job estimates. Valid keyword values are: (BLOCKS) and STATISTICS.

ESTIMATE_ONLY Calculate job estimates without performing the export.
EXCLUDE Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FILESIZE Specify the size of each dumpfile in units of bytes.
FLASHBACK_SCN SCN used to reset session snapshot.
FLASHBACK_TIME Time used to find the closest corresponding SCN value.
FULL Export entire database (N). To use this option the user must have the EXP_FULL_DATABASE role.
HELP Display help messages (N).
INCLUDE Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME Name of export job (default name will be SYS_EXPORT_XXXX_01, where XXXX can be FULL or SCHEMA or TABLE).
LOGFILE Specify log file name (EXPORT.LOG).
NETWORK_LINK Name of remote database link to the source system.
NOLOGFILE Do not write logfile (N).
PARALLEL Change the number of active workers for current job.
PARFILE Specify parameter file name.
QUERY Predicate clause used to export a subset of a table, e.g. QUERY=emp:"WHERE dept_id > 10".
REMAP_DATA Specify a data conversion function, e.g. REMAP_DATA=EMP.EMPNO:SCOTT.EMPNO.
REUSE_DUMPFILES Overwrite destination dump file if it exists (N).
SAMPLE Percentage of data to be exported.
SCHEMAS List of schemas to export (login schema).
SOURCE_EDITION Edition to be used for extracting metadata (from Oracle 11g release2).

STATUS Frequency (secs) job status is to be monitored where the default (0) will show new status when available.
TABLES Identifies a list of tables to export, e.g. TABLES=HR.EMP,SH.SALES:SALES_1995.
TABLESPACES Identifies a list of tablespaces to export.
TRANSPORT_FULL_CHECK Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be unloaded.
TRANSPORTABLE Specify whether transportable method can be used. Valid keyword values are: ALWAYS and (NEVER).
VERSION Version of objects to export. Valid keywords are: (COMPATIBLE), LATEST, or any valid database version.

Note: abbreviations are allowed.

Datapump Export interactive mode
While exporting is going on, press Control-C to go to interactive mode; it will stop the displaying of the messages on the screen, but not the export process itself.

Export> [[here you can use the below interactive commands]]

Command Description
ADD_FILE Add dumpfile to dumpfile set.
CONTINUE_CLIENT Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT Quit client session and leave job running.
FILESIZE Default filesize (bytes) for subsequent ADD_FILE commands.
HELP Summarize interactive commands.
KILL_JOB Detach and delete job.
PARALLEL Change the number of active workers for current job. PARALLEL=number of workers.
REUSE_DUMPFILES Overwrite destination dump file if it exists (N).
START_JOB Start/resume current job. Valid value is: SKIP_CURRENT.
STATUS Frequency (secs) job status is to be monitored where the default (0) will show new status when available. STATUS[=interval]

STOP_JOB Orderly shutdown of job execution and exits the client. STOP_JOB=IMMEDIATE performs an immediate shutdown of the Datapump job.

Note: values within parenthesis are the default values. The options in sky blue color are the enhancements in Oracle 11g Release1. The options in blue color are the enhancements in Oracle 11g Release2.

Whenever Datapump export or import is running, Oracle will create a table with the JOB_NAME, which will be deleted once the job is done. From this table, Oracle will find out how much of the job has been completed and from where to continue, etc.

Datapump Export Examples

SQL> CREATE DIRECTORY dp_dir AS '/u02/dpdata';
SQL> GRANT READ, WRITE ON DIRECTORY dp_dir TO user_name;
==> creating an external directory and granting privileges.

$ expdp DUMPFILE=master.dmp LOGFILE=master.log SCHEMAS=satya
(or)
$ expdp system/manager SCHEMAS=hr DIRECTORY=data_pump_dir LOGFILE=example1.log FILESIZE=300000 DUMPFILE=example1.dmp JOB_NAME=example1
==> exporting all the objects of a schema.

$ expdp ATTACH=EXAMPLE1
==> continuing or attaching job to background process.

$ expdp DUMPFILE=search.dmp LOGFILE=search.log SCHEMAS=search,own,tester
==> exporting all the objects of multiple schemas.

$ expdp DUMPFILE=liv_full.dmp LOGFILE=liv_full.log FULL=y PARALLEL=4
==> exporting whole database, with the help of 4 processes.

$ expdp anand/coffee TABLES=kick DIRECTORY=ext_dir DUMPFILE=expkick_%U.dmp PARALLEL=4 JOB_NAME=kick_export
==> exporting all the rows in a table.

$ expdp SCHEMAS=u1,u6 COMPRESSION=metadata_only
==> exporting two schemas and compressing the metadata.

$ expdp SCHEMAS=cpp,java COMPRESSION=all
==> exporting two schemas and compressing the data (valid in 11g or later).

$ expdp DUMPFILE=t5.dmp LOGFILE=t5.log SCHEMAS=ym ESTIMATE_ONLY=Y
(or)
$ expdp LOGFILE=t5.log SCHEMAS=manage ESTIMATE_ONLY=Y
==> estimating export time and size.

$ expdp DUMPFILE=extdir:avail.dmp LOGFILE=extdir:avail.log
==> exporting without specifying the DIRECTORY option, by specifying the external directory name within the file names.

$ expdp username/password FULL=y DUMPFILE=dba.dmp DIRECTORY=dpump_dir INCLUDE=GRANT INCLUDE=INDEX CONTENT=ALL
==> exporting an entire database to a dump file with all GRANTS, INDEXES and data.

$ expdp DUMPFILE=dba.dmp DIRECTORY=dpump_dir INCLUDE=PROCEDURE
==> exporting all the procedures.

$ expdp DUMPFILE=dba.dmp DIRECTORY=dpump_dir INCLUDE=PROCEDURE:\"=\'PROC1\'\",FUNCTION:\"=\'FUNC1\'\"
==> exporting procedure PROC1 and function FUNC1.

$ expdp username/password DUMPFILE=dba.dmp DIRECTORY=dpump_dir INCLUDE=TABLE:"LIKE 'TAB%'"
(or)
$ expdp username/password DUMPFILE=dba.dmp DIRECTORY=dpump_dir EXCLUDE=TABLE:"NOT LIKE 'TAB%'"
==> exporting only those tables whose name starts with TAB.

$ expdp TABLES=holder,activity REMAP_DATA=holder.cardno:hidedata.newcc REMAP_DATA=activity.cardno:hidedata.newcc DIRECTORY=dpump_dir DUMPFILE=hremp2.dmp
==> exporting and remapping of data.

$ expdp TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp
==> exporting data with version; Datapump Import can always read dump file sets created by older versions of Data Pump Export.

Exporting using Datapump API (DBMS_DATAPUMP package)

declare
  handle number;
begin
  handle := dbms_datapump.open('EXPORT', 'SCHEMA');
  dbms_datapump.add_file(handle, 'scott.dmp', 'EXTDIR');
  dbms_datapump.metadata_filter(handle, 'SCHEMA_EXPR', '=''SCOTT''');
  dbms_datapump.set_parallel(handle, 4);
  dbms_datapump.start_job(handle);
  dbms_datapump.detach(handle);
exception
  when others then
    dbms_output.put_line(substr(sqlerrm, 1, 254));
end;
/
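The same API can also run synchronously; a minimal sketch, assuming the EXTDIR directory object exists, that exports a single table and blocks until the job completes (the file and table names are illustrative):

declare
  handle number;
  job_state varchar2(30);
begin
  handle := dbms_datapump.open('EXPORT', 'TABLE');
  dbms_datapump.add_file(handle, 'emp_tab.dmp', 'EXTDIR');
  dbms_datapump.metadata_filter(handle, 'NAME_EXPR', 'IN (''EMP'')');
  dbms_datapump.start_job(handle);
  dbms_datapump.wait_for_job(handle, job_state);   -- blocks until the job finishes
  dbms_output.put_line('Job finished with state: ' || job_state);
end;
/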

impdp utility
The Data Pump Import utility provides a mechanism for transferring data objects between Oracle databases.

Format: impdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
USERID must be the first parameter on the command line. This user must have read & write permissions on the DIRECTORY.

$ impdp help=y

Keyword Description (Default)
ATTACH Attach to an existing job, e.g. ATTACH [=job name].
CONTENT Specifies data to load. Valid keywords are: (ALL), DATA_ONLY and METADATA_ONLY.
DATA_OPTIONS Data layer flags. Valid value is: SKIP_CONSTRAINT_ERRORS - constraint errors are not fatal.
DIRECTORY Directory object to be used for dump, log and sql files. (DATA_PUMP_DIR)
DUMPFILE List of dumpfiles to import from (EXPDAT.DMP), e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ENCRYPTION_PASSWORD Password key for accessing encrypted data within a dump file. Not valid for network import jobs.
ESTIMATE Calculate job estimates. Valid keywords are: (BLOCKS) and STATISTICS.
EXCLUDE Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FLASHBACK_SCN SCN used to reset session snapshot.
FLASHBACK_TIME Time used to find the closest corresponding SCN value.
FULL Import everything from source (Y). To use this option (full import of the database) the user must have the IMP_FULL_DATABASE role.
HELP Display help messages (N).
INCLUDE Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME Name of import job (default name will be SYS_IMPORT_XXXX_01, where XXXX can be FULL or SCHEMA or TABLE).
LOGFILE Log file name (IMPORT.LOG).
NETWORK_LINK Name of remote database link to the source system.
NOLOGFILE Do not write logfile.

PARALLEL Change the number of active workers for current job.
PARFILE Specify parameter file name.
PARTITION_OPTIONS Specify how partitions should be transformed. Valid keywords are: DEPARTITION, MERGE and (NONE).
QUERY Predicate clause used to import a subset of a table, e.g. QUERY=emp:"WHERE dept_id > 10".
REMAP_DATA Specify a data conversion function, e.g. REMAP_DATA=EMP.EMPNO:SCOTT.EMPNO.
REMAP_DATAFILE Redefine datafile references in all DDL statements.
REMAP_SCHEMA Objects from one schema are loaded into another schema.
REMAP_TABLE Table names are remapped to another table.
REMAP_TABLESPACE Tablespace objects are remapped to another tablespace.
REUSE_DATAFILES Tablespace will be initialized if it already exists (N).
SCHEMAS List of schemas to import.
SKIP_UNUSABLE_INDEXES Skip indexes that were set to the Index Unusable state.
SOURCE_EDITION Edition to be used for extracting metadata (from Oracle 11g release2).
SQLFILE Write all the SQL DDL to a specified file.
STATUS Frequency (secs) job status is to be monitored where the default (0) will show new status when available.
STREAMS_CONFIGURATION Enable the loading of streams metadata.
TABLE_EXISTS_ACTION Action to take if imported object already exists. Valid keywords: (SKIP), APPEND, REPLACE and TRUNCATE.
TABLES Identifies a list of tables to import, e.g. TABLES=HR.EMP,SH.SALES:SALES_1995.
TABLESPACES Identifies a list of tablespaces to import.
TARGET_EDITION Edition to be used for loading metadata (from Oracle 11g release2).

TRANSFORM Metadata transform to apply to applicable objects. Valid keywords: SEGMENT_ATTRIBUTES, STORAGE, OID and PCTSPACE.
TRANSPORT_DATAFILES List of datafiles to be imported by transportable mode.
TRANSPORT_FULL_CHECK Verify storage segments of all tables (N). Only valid in NETWORK_LINK mode import operations.
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be loaded. Only valid in NETWORK_LINK mode import operations.
TRANSPORTABLE Options for choosing transportable data movement. Valid keywords: ALWAYS and (NEVER).
VERSION Version of objects to export. Valid keywords are: (COMPATIBLE), LATEST, or any valid database version. Only valid for NETWORK_LINK and SQLFILE.

Note: abbreviations are allowed.

Datapump Import interactive mode
While importing is going on, press Control-C to go to interactive mode.

Import> [[here you can use the below interactive commands]]

Command Description (Default)
CONTINUE_CLIENT Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT Quit client session and leave job running.
HELP Summarize interactive commands.
KILL_JOB Detach and delete job.
PARALLEL Change the number of active workers for current job. PARALLEL=number of workers.
START_JOB Start/resume current job. START_JOB=SKIP_CURRENT will start the job after skipping any action which was in progress when the job was stopped.
STATUS Frequency (secs) job status is to be monitored where the default (0) will show new status when available. STATUS[=interval]

STOP_JOB Orderly shutdown of job execution and exits the client. STOP_JOB=IMMEDIATE performs an immediate shutdown of the Datapump job.

Note: values within parenthesis are the default values. The options in sky blue color are the enhancements in Oracle 11g Release1. The options in blue color are the enhancements in Oracle 11g Release2.

The order of importing objects is:
Tablespaces
Users
Roles
Database links
Sequences
Directories
Synonyms
Types
Tables/Partitions
Views
Comments
Packages/Procedures/Functions
Materialized views

Datapump Import Examples

$ impdp DUMPFILE=aslv_full.dmp LOGFILE=aslv_full.log PARALLEL=4
==> importing all the exported data, with the help of 4 processes.

$ impdp system/manager DUMPFILE=testdb_emp.dmp LOGFILE=testdb_emp_imp.log TABLES=tester.employee
==> importing all the records of a table (employee table records in tester schema).

$ impdp DUMPFILE=visi.dmp LOGFILE=ref1imp.log TABLES=(brand, mba)

==> importing all the records of a couple of tables.

$ impdp username/password DIRECTORY=dpump_dir DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim
==> importing tables from scott's account to jim's account.

$ impdp DIRECTORY=dpump_dir DUMPFILE=emps.dmp REMAP_DATA=emp.empno:fixusers.newempid REMAP_DATA=card.empno:fixusers.newempid TABLE_EXISTS_ACTION=append
==> importing and remapping of data.

$ impdp DUMPFILE=prod.dmp LOGFILE=prod.log REMAP_TABLESPACE=FRI:WED TABLE_EXISTS_ACTION=REPLACE PARALLEL=4
==> importing data and replacing already existing tables.

$ impdp system DUMPFILE=example2.dmp REMAP_TABLESPACE=system:example2 LOGFILE=example2imp.log JOB_NAME=example2
==> importing data of one tablespace into another tablespace.

$ impdp DIRECTORY=dpump_dir FULL=Y DUMPFILE=db_full.dmp REMAP_DATAFILE="'C:\DB1\HR\PAYROLL\tbs6.dbf':'/db1/hr/payroll/tbs6.dbf'"
==> importing data by remapping one datafile to another.

$ impdp username/password DIRECTORY=dpump DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX
==> will create a sqlfile with DDL that could be executed in another database/schema to create the tables and indexes.

$ impdp user1/user1 DUMPFILE=btw:avail.dmp INCLUDE=PROCEDURE
==> importing only procedures from the dump file.

Importing using Datapump API (DBMS_DATAPUMP package)

declare

  handle number;
begin
  handle := dbms_datapump.open('IMPORT', 'SCHEMA');
  dbms_datapump.add_file(handle, 'scott.dmp', 'EXTDIR');
  dbms_datapump.set_parameter(handle, 'TABLE_EXISTS_ACTION', 'REPLACE');
  dbms_datapump.set_parallel(handle, 4);
  dbms_datapump.start_job(handle);
  dbms_datapump.detach(handle);
exception
  when others then
    dbms_output.put_line(substr(sqlerrm, 1, 254));
end;
/

Datapump Import can only read Oracle Database 10g (and later) Datapump Export dump files. Original Export is desupported from 10g release 2, and Oracle recommends that customers convert to use Oracle Datapump. Original Import will be maintained and shipped forever, so that Oracle Version 5.0 through Oracle9i dump files will be able to be loaded into Oracle 10g and later.

Here is a general guideline for using the PARALLEL parameter:
- A PARALLEL greater than one is only available in Enterprise Edition.
- For Data Pump Export, the PARALLEL parameter value should be less than or equal to the number of dump files.
- For Data Pump Import, the PARALLEL parameter value should not be much larger than the number of files in the dump file set.
- Set the degree of parallelism to two times the number of CPUs, then tune from there.
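Putting the guideline into practice, a hypothetical export that pairs PARALLEL=4 with the %U wildcard so each worker can write its own dump file (the names are illustrative):

$ expdp system/manager FULL=y DIRECTORY=dpump_dir DUMPFILE=full%U.dmp PARALLEL=4 LOGFILE=full.log
==> the %U substitution generates full01.dmp, full02.dmp, and so on, so the number of dump files keeps up with the four workers.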

Export & Import utilities in Oracle

Export and Import are the Oracle utilities that allow us to make exports & imports of the data objects, and transfer the data across databases that reside on different hardware platforms on different Oracle versions. Export (exp) and import (imp) utilities are used to perform logical database backup and recovery. When exporting, database objects are dumped to a binary file which can then be imported into another Oracle database.

catexp.sql (in $ORACLE_HOME/rdbms/admin) will create the EXP_FULL_DATABASE & IMP_FULL_DATABASE roles (no need to run this if you ran catalog.sql at the time of database creation). Before using these commands, you should set the ORACLE_HOME, ORACLE_SID and PATH environment variables.

exp utility
Objects owned by SYS cannot be exported. If you want to export objects of another schema, that user must have the EXP_FULL_DATABASE role. To do a full database export, you need the EXP_FULL_DATABASE role.

Format: exp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
USERID must be the first parameter on the command line.

$ exp help=y

Keyword Description (Default)
USERID username/password
BUFFER size of data buffer, OS dependent
FILE output files (EXPDAT.DMP)
COMPRESS import into one extent (Y)
GRANTS export grants (Y)
INDEXES export indexes (Y)
DIRECT direct path (N)
LOG log file of screen output
ROWS export data rows (Y)
FULL export entire file (N)
OWNER list of owner usernames
TABLES list of table names
RECORDLENGTH length of IO record
INCTYPE incremental export type; valid values are COMPLETE, INCREMENTAL, CUMULATIVE
RECORD track incremental export (Y)
TRIGGERS export triggers (Y)
STATISTICS analyze objects (ESTIMATE)

PARFILE parameter filename
CONSISTENT cross-table consistency (N). Implements SET TRANSACTION READ ONLY
CONSTRAINTS export constraints (Y)
OBJECT_CONSISTENT transaction set to read only during object export (N)
FEEDBACK display progress (a dot) for every N rows (0)
FILESIZE maximum size of each dump file
FLASHBACK_SCN SCN used to set session snapshot back to
FLASHBACK_TIME time used to get the SCN closest to the specified time
QUERY select clause used to export a subset of a table
RESUMABLE suspend when a space related error is encountered (N)
RESUMABLE_NAME text string used to identify resumable statement
RESUMABLE_TIMEOUT wait time for RESUMABLE
TTS_FULL_CHECK perform full or partial dependency check for TTS
VOLSIZE number of bytes to write to each tape volume (not available from Oracle 11g Release2)
TABLESPACES list of tablespaces to export
TRANSPORT_TABLESPACE export transportable tablespace metadata (N)
TEMPLATE template name which invokes iAS mode export

Note: values within parenthesis are the default values.

Examples:

$ exp system/manager file=emp.dmp log=emp_exp.log full=y
==> exporting full database.

$ exp system/manager file=owner.dmp log=owner.log owner=owner direct=y STATISTICS=none
==> exporting all the objects of a schema.

$ exp file=schemas.dmp log=schemas.log owner=master,owner,user direct=y STATISTICS=none
==> exporting all the objects of multiple schemas.

$ exp file=testdb_emp.dmp log=testdb_emp.log tables=scott.emp direct=y STATISTICS=none
==> exporting all the rows in a table (emp table records in scott schema).

$ exp file=itinitem.dmp log=itinitem.log tables=tom.ITIN,tom.ITEM query=\"where CODE in \(\'OT65FR7H\',\'ATQ56F7H\'\)\" statistics=none
==> exporting the records of some tables which satisfy a particular criteria.

$ exp file=scott.dmp log=scott.log inctype=complete
==> exporting full database (after some incremental/cumulative backups).

$ exp file=scott.dmp log=scott.log inctype=cumulative
==> exporting cumulatively (taking backup from last complete or cumulative backup).

$ exp file=scott.dmp log=scott.log inctype=incremental
==> exporting incrementally (taking backup from last complete or cumulative or incremental backup).

$ exp FILE=file1.dmp,file2.dmp,file3.dmp FILESIZE=10M LOG=multiple.log
==> exporting to multiple files.

$ exp transport_tablespace=y tablespaces=THU statistics=none file=THU.dmp log=thu_exp.log
==> exporting at tablespace level.

imp utility
imp provides backward compatibility, i.e. it allows you to import the objects that you have exported in lower Oracle versions also. imp doesn't recreate already existing objects; it either aborts the import process (default) or ignores the errors (if you specify IGNORE=Y).

Format: imp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
USERID must be the first parameter on the command line. To do the full database import, that user must have the IMP_FULL_DATABASE role.

$ imp help=y

Keyword Description (Default)
USERID username/password
BUFFER size of data buffer, OS dependent
FILE input files (EXPDAT.DMP)
FROMUSER list of owner usernames
FULL import entire file (N)

TOUSER list of usernames
SHOW just list file contents (N); will be used to check the validity of the dump file
TABLES list of table names
IGNORE ignore create errors (N)
RECORDLENGTH length of IO record
GRANTS import grants (Y)
INCTYPE incremental import type; valid keywords are SYSTEM (for definitions) and RESTORE (for data)
INDEXES import indexes (Y)
COMMIT commit array insert (N)
ROWS import data rows (Y)
PARFILE parameter filename
LOG log file of screen output
CONSTRAINTS import constraints (Y)
DESTROY overwrite tablespace datafile (N)
INDEXFILE will write DDLs of the objects in the dumpfile into the specified file
SKIP_UNUSABLE_INDEXES skip maintenance of unusable indexes (N)
FEEDBACK display progress every x rows (0)
TOID_NOVALIDATE skip validation of specified type ids
FILESIZE maximum size of each dump file
STATISTICS import precomputed statistics (ALWAYS)
RESUMABLE suspend when a space related error is encountered (N)
RESUMABLE_NAME text string used to identify resumable statement
RESUMABLE_TIMEOUT wait time for RESUMABLE
COMPILE compile procedures, packages, and functions (Y)

STREAMS_CONFIGURATION import streams general metadata (Y)
STREAMS_INSTANTIATION import streams instantiation metadata (N)
VOLSIZE number of bytes in file on each volume of a file on tape (not available from Oracle 11g Release2)
DATA_ONLY import only data (N) (from Oracle 11g Release2)

The following keywords only apply to transportable tablespaces:
TRANSPORT_TABLESPACE import transportable tablespace metadata (N)
TABLESPACES tablespaces to be transported into database
DATAFILES datafiles to be transported into database
TTS_OWNERS users that own data in the transportable tablespace set

Note: values within parenthesis are the default values.

Examples:

$ imp system/manager file=emp.dmp log=emp_imp.log full=y
==> importing all the exported data.

$ imp system/manager file=testdb_emp.dmp log=testdb_emp_imp.log tables=tester.employee
==> importing all the records of a table (employee table records in tester schema).

$ imp system/manager file=intenary.dmp log=intenary.log IGNORE=Y GRANTS=N INDEXES=N COMMIT=Y TABLES=(brand, book)
==> importing all the records of a couple of tables.

$ imp file=spider.dmp log=spider.log FROMUSER=tom TOUSER=jerry ignore=y
==> importing data of one schema into another schema.

$ imp file=scott.dmp log=scott.log indexfile=scott_schema.sql
==> will write DDLs of the objects in the exported dumpfile (scott schema) into the specified file. This command won't import the objects.

$ imp FILE=two.dmp LOG=two.log
==> importing all the exported data.

$ imp file=transporter3.dmp log=transporter3.log show=y
==> checks the validity of the dumpfile.

$ imp file=transporter3.dmp log=transporter3.log inctype=system
==> importing definitions from backup.

$ imp file=transporter3.dmp log=transporter3.log inctype=restore
==> importing data from backup.

$ imp "/as sysdba" file=TUE.dmp TTS_OWNERS=OWNER tablespaces=TUE transport_tablespace=y datafiles=TUE.dbf
==> importing transportable tablespace metadata.

How to improve Export & Import

exp:
1. Set the BUFFER parameter to a high value. Default is 256KB.
2. Stop unnecessary applications to free the resources.
3. If you are running multiple sessions, make sure they write to different disks.
4. Do not export to NFS (Network File Share). Exporting to disk is faster.
5. Set the RECORDLENGTH parameter to a high value.
6. Use DIRECT=yes (direct mode export).
7. Stop redolog archiving, if possible.
8. Use STATISTICS=NONE.

imp:
1. Place the file to be imported on a separate disk from datafiles.
2. Increase the DB_CACHE_SIZE.
3. Set LOG_BUFFER to a big size.
4. Stop redolog archiving, if possible.
5. Use COMMIT=n.
6. Set the parameter COMMIT_WRITE=NOWAIT (in 10g) or COMMIT_WAIT=NOWAIT (in 11g) during import.
7. Disable the INSERT triggers, as they fire during import.
8. Set the BUFFER parameter to a high value. Default is 256KB.
9. It's advisable to drop indexes before importing to speed up the import process, or set INDEXES=N and build indexes later on after the import. Indexes can easily be recreated after the data was successfully imported.

Related Views
DBA_EXP_VERSION
DBA_EXP_FILES
DBA_EXP_OBJECTS

Flash/Fast Recovery Area

Flash/Fast Recovery Area (FRA) in Oracle

The flash recovery area is the most powerful tool available from Oracle 10g that plays a vital role in performing database backup & recovery operations. Flash Recovery Area can be defined as a single, centralized, unified storage area that keeps all the database backup & recovery related files and performs those activities in Oracle databases. From Oracle 11g Release2, the flash recovery area is called the fast recovery area.

1. Automated Disk-Based Backup and Recovery: the flash recovery area is managed via Oracle Managed Files (OMF), and it can utilize disk resources managed by Automatic Storage Management (ASM). The flash recovery area can be configured for use by multiple database instances.
2. Unified Backup Files Storage: once backup components have been successfully created, all backup components can be stored in one consolidated spot.
3. Automatic Deletion of Backup Components: RMAN (Recovery Manager) can be configured to automatically clean up files that are no longer needed (thus reducing the risk of insufficient disk space for backups).

4. Disk Cache for Tape Copies: if your disaster recovery (DR) plan involves backing up to alternate media, the flash recovery area can act as a disk cache area for those backup components that are eventually copied to tape.

Following are the various entities that can be considered as FRA:
File System:
1. A single directory
2. An entire file system
Raw Devices:
1. Automatic Storage Management (ASM)

FRA Components
The flash/fast recovery area can contain the following:
 Control files: During database creation, a copy of the control file is created in the flash recovery area. You can designate the FRA as the location for one of the control files and redo log members to limit the exposure in case of disk failure.
 Online redologs: Online redologs can be kept in FRA.
 Archived log files: During the configuration of the FRA, the LOG_ARCHIVE_DEST_10 parameter in the init.ora file is automatically set to the flash recovery area location. Archived log files are created by the ARCn processes in the flash recovery area location and the location defined by LOG_ARCHIVE_DEST_n.
 Flashback logs: Flashback logs are kept in the flash recovery area when flashback database is enabled. These are used during flashback operations to quickly restore a database to a prior desired state.
 Control file and SPFILE backups: The flash recovery area also keeps the control file and SPFILE backups; the control file autobackup is automatically generated by Recovery Manager (RMAN) only if RMAN has been configured for control file autobackup.
 RMAN backup sets: The default destination of backup sets and image copies generated by RMAN is the flash recovery area.
 Datafile copies: The flash recovery area also keeps the datafile copies.

Notes:
 The FRA is shared among databases in order to optimize the usage of disk space for database recovery operations.
 Before any backup and recovery activity can take place, the Flash Recovery Area must be set up.
 In case of a media failure or a logical error, the flash recovery area is referred to retrieve all the files needed to recover a database.
 The FRA is also used to store and manage flashback logs.

The flash recovery area is a specific area of disk storage that is set aside exclusively for retention

of backup components such as datafile image copies, archived redo logs, and control file autobackups. The size of the flash recovery area should be large enough to hold a copy of all data files, all incremental backups, online redo logs, archived redo logs not yet backed up on tape, control files, and control file autobackup copies. Oracle recommends that this be a separate location from the datafiles, control files, and redo logs. If the database is using the Automatic Storage Management (ASM) feature, then the shared disk area that ASM manages can be targeted for the Flash Recovery Area.

Configuring FRA
Following are the three initialization parameters that should be defined in order to set up the flash recovery area:
o DB_RECOVERY_FILE_DEST_SIZE
o DB_RECOVERY_FILE_DEST
o DB_FLASHBACK_RETENTION_TARGET

DB_RECOVERY_FILE_DEST_SIZE specifies the total size of all files that can be stored in the Flash Recovery Area.
SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 10G SCOPE = BOTH;

DB_RECOVERY_FILE_DEST specifies the physical location where all the flash recovery files are to be stored.
SQL> ALTER SYSTEM SET db_recovery_file_dest = '/OFR1' SCOPE = BOTH;
SQL> ALTER SYSTEM SET db_recovery_file_dest = '+dgroup1' SCOPE = BOTH;

SQL> ALTER SYSTEM SET db_flashback_retention_target = 1440 SCOPE = BOTH;

Notes:
 DB_RECOVERY_FILE_DEST_SIZE is defined before DB_RECOVERY_FILE_DEST in order to define the size of the flash recovery area. The DB_RECOVERY_FILE_DEST_SIZE parameter cannot be cleared prior to the DB_RECOVERY_FILE_DEST parameter.
 The DB_RECOVERY_FILE_DEST_SIZE and DB_RECOVERY_FILE_DEST parameters are defined to make the flash recovery area usable without shutting down and restarting the database instance, i.e. these two parameters are dynamic.
 If the value specified in the DB_RECOVERY_FILE_DEST parameter is cleared, then as a result the flash recovery area is disabled.
 The flash recovery area can be created and maintained using Oracle Enterprise Manager Database Control.

Enabling Flashback
SQL> alter database flashback on;
The database must be in archive log mode to enable flashback.
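To verify the configuration, the FRA location, quota and current usage can be queried (a sketch using the standard view):

SQL> select name, space_limit/1024/1024 limit_mb, space_used/1024/1024 used_mb, number_of_files
     from v$recovery_file_dest;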

Configuring Online Redolog Creation in Flash Recovery Area
To store online redologs in FRA, you have to set DB_CREATE_ONLINE_LOG_DEST_1 (an OMF init parameter) to the FRA location and create the online log groups/members. The initialization parameters that determine where online redolog files are created are DB_CREATE_ONLINE_LOG_DEST_n, DB_RECOVERY_FILE_DEST and DB_CREATE_FILE_DEST.

Configuring Control File Creation in Flash Recovery Area
To store the control file in FRA, you have to set the CONTROL_FILES parameter to the FRA location. The initialization parameters CONTROL_FILES, DB_CREATE_ONLINE_LOG_DEST_n, DB_RECOVERY_FILE_DEST and DB_CREATE_FILE_DEST all interact to determine the location where the database control files are created.

Configuring Archived Redolog Creation in Flash Recovery Area
If archive log mode is enabled and LOG_ARCHIVE_DEST & DB_RECOVERY_FILE_DEST are not set, then the archive logs will be generated in the $ORACLE_HOME/dbs directory. If LOG_ARCHIVE_DEST is set & DB_RECOVERY_FILE_DEST is not set, then the archive logs will be generated at the LOG_ARCHIVE_DEST path. If you enable FRA (DB_RECOVERY_FILE_DEST is set), then the archive log files will be generated in FRA, and it will ignore LOG_ARCHIVE_DEST and LOG_ARCHIVE_FORMAT, i.e. FRA will follow its own naming convention: the generated filenames for the archived redologs in the flash recovery area are Oracle Managed Filenames and are not determined by LOG_ARCHIVE_FORMAT.

It is recommended to use the flash recovery area as an archived log location because the archived logs are automatically managed by the database. Whatever archiving scheme you choose, it is always advisable to create multiple copies of archived logs.

You can always define a different location for archived redo logs. If you use a different location, then you can't just erase the values of the parameters LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST in order to specify the location of the FRA. To place your log files somewhere other than the FRA, you should use a different parameter to specify the archived redo log locations: use LOG_ARCHIVE_DEST_1 instead of LOG_ARCHIVE_DEST (you can use LOG_ARCHIVE_DEST_1 to specify the same location for the archived redologs).

Suppose log_archive_dest was set to '+arc_disk3'. Query the parameter to verify its current value, then move the setting:
SQL> show parameter log_archive_dest
SQL> alter system set log_archive_dest='' scope=both;
SQL> alter system set log_archive_dest_1='location=+arc_disk3' scope=both;
SQL> show parameter log_archive_dest_1
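Conversely, to point one of the archive destinations explicitly back at the FRA, the special USE_DB_RECOVERY_FILE_DEST keyword can be used (a sketch):

SQL> alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST' scope=both;
SQL> archive log list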

Managing Flash/Fast Recovery Area

As the DB_RECOVERY_FILE_DEST_SIZE parameter specifies the space for the flash recovery area, in a situation when the space does not prove enough for all flash recovery files, Oracle itself keeps track of those files that are not required on the disk. These unnecessary files are then deleted to resolve the space issue in the flash recovery area. Whenever a file is deleted from the flash recovery area, a message is written in the alert log.

There are various other circumstances in which messages are written in the alert log:
1. When none of the files can be deleted.
2. When the used space in the FRA is at 85 percent (a warning).
3. When the used space in the FRA is at 97 percent (a critical warning).

The warning messages issued can be viewed in the DBA_OUTSTANDING_ALERTS data dictionary view and are also available in the OEM Database Control main window. If the retention policy is sound, the steps taken to recover from these alerts are:
 More disk space should be added.
 Backup some of the flash recovery files to another destination such as another disk or tape drive.
 Adjust the retention policy to keep fewer copies of data files.
 Reduce the number of days in the recovery window.

RMAN files creation in the Flash Recovery Area
This section describes RMAN commands or implicit actions (such as control file autobackup) that can create files in the flash recovery area, and how to control whether a specific command creates files there or in some other destination. The assumption in all cases is that a flash recovery area has already been configured for your database. The commands are:

· BACKUP: Do not specify a FORMAT option to the BACKUP command, and do not configure a FORMAT option for disk backups. In this case, RMAN creates backup pieces and image copies in the flash recovery area, with names in Oracle Managed Files name format.

· CONTROLFILE AUTOBACKUP: RMAN can create control file autobackups in the flash recovery area. Use the RMAN command CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK CLEAR to clear any configured format option for the control file autobackup location on disk. Control file autobackups will be placed in the flash recovery area when no other destination is configured.

· RESTORE ARCHIVELOG: Explicitly or implicitly (as in the case of BLOCKRECOVER), these commands restore archived redo logs from backup for use during media recovery, as required by the command. If you do not specify SET ARCHIVELOG DESTINATION to override this behavior, then restored archived redo log files will be stored in the flash recovery area. To direct the restored archived redo logs to the flash recovery area, set one of the LOG_ARCHIVE_DEST_n parameters to 'LOCATION=USE_DB_RECOVERY_FILE_DEST', and make sure you are not using SET ARCHIVELOG DESTINATION to direct restored archived logs to some other destination.

· RECOVER DATABASE or TABLESPACE, BLOCKRECOVER, and FLASHBACK DATABASE: These commands restore archived redo logs from backup for use during media recovery. RMAN restores any redo log files needed during these operations to the flash recovery area, and deletes them once they are applied during media recovery.
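To see how full the recovery area is, per file type, and to review the alerts mentioned above, queries like these can be used (a sketch using the standard views):

SQL> select file_type, percent_space_used, percent_space_reclaimable, number_of_files
     from v$flash_recovery_area_usage;
SQL> select reason, suggested_action from dba_outstanding_alerts;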

Changing the Flash Recovery Area to a new location
If you need to move the flash recovery area of your database to a new location, you can follow this procedure:
1. Change the DB_RECOVERY_FILE_DEST initialization parameter:
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='+disk1' SCOPE=BOTH SID='*';
2. After you change this parameter, all new flash recovery area files will be created in the new location.
3. The permanent files (control files and online redo log files), flashback logs and transient files can be left in the old flash recovery area location. The database will delete the transient files from the old flash recovery area location as they become eligible for deletion.

Resolving full Flash Recovery Area
You have a number of choices on how to resolve a full flash/fast recovery area when there are no files eligible for deletion:

 Make more disk space available, and increase DB_RECOVERY_FILE_DEST_SIZE to reflect the new space.
 Delete unnecessary files from the flash recovery area using the RMAN DELETE command. (Note that if you use host operating system commands to delete files, then the database will not be aware of the resulting free space. You can run the RMAN CROSSCHECK command to have RMAN re-check the contents of the flash recovery area and identify expired files, and then use the DELETE EXPIRED command to remove missing files from the RMAN repository.) You may also need to consider changing your backup retention policy and, if using Data Guard, consider changing your archivelog deletion policy. You can use RMAN to remove old archivelogs and backups:
$ rman target=/
RMAN> delete noprompt archivelog all;
RMAN> delete noprompt backup of database;
RMAN> delete noprompt copy of database;
 Move backups from the flash recovery area to a tertiary device such as tape. One convenient way to back up all of your flash recovery area files to tape at once is the BACKUP RECOVERY AREA command. After you transfer backups from the flash recovery area to tape, you can resolve the full recovery area condition by deleting files from the flash recovery area, using forms of the RMAN DELETE command.

Note:
 Flashback logs cannot be backed up outside the flash recovery area; in a BACKUP RECOVERY AREA operation the flashback logs are not backed up to tape.
 Flashback logs are deleted automatically to satisfy the need for space for other files in the flash recovery area. However, a guaranteed restore point can force the retention of flashback logs required to perform Flashback Database to the restore point SCN.
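A sketch of the crosscheck-and-delete sequence mentioned above, run from the RMAN prompt:

$ rman target=/
RMAN> crosscheck backup;
RMAN> crosscheck archivelog all;
RMAN> delete expired backup;
RMAN> delete expired archivelog all;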


In Oracle Database 11g, a new feature was introduced, i.e. Flashback Data Archive: flashback will make use of flashback logs, explicitly created for that table, in FRA, and will not use undo. Flashback data archives can be defined on any table/tablespace. Flashback data archives are written by a dedicated background process called FBDA, so there is less impact on performance. They can be purged at regular intervals automatically.
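A minimal sketch of defining and using a flashback data archive (the archive name fla1, the tablespace fba_ts and the table are illustrative assumptions, not from the original):

SQL> CREATE FLASHBACK ARCHIVE fla1 TABLESPACE fba_ts QUOTA 1G RETENTION 1 YEAR;
SQL> ALTER TABLE scott.emp FLASHBACK ARCHIVE fla1;
SQL> SELECT * FROM scott.emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' DAY);
SQL> ALTER TABLE scott.emp NO FLASHBACK ARCHIVE;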

Related views
V$RECOVERY_FILE_DEST
V$FLASH_RECOVERY_AREA_USAGE
DBA_OUTSTANDING_ALERTS
V$FLASHBACK_DATABASE_LOGFILE

Flashback
Flashback Technology in Oracle
Oracle has a number of products and features that provide high availability in cases of unplanned or
planned downtime. These include Fast-Start Fault Recovery, Real Application Clusters
(RAC), Recovery Manager (RMAN), backup and recovery solutions, Oracle Flashback, partitioning,
Oracle Data Guard, LogMiner, multiplexed redolog files and online reorganization.
To correct problems caused by logical data corruptions or user errors, we can use Oracle Flashback.

Flashback is possible only in Locally Managed Tablespaces (LMTS).

Following are the Flashback options we have in Oracle database:

Flashback Query (from Oracle 9i)

Flashback Table (from Oracle 10g)

Flashback Drop (from Oracle 10g)

Flashback Version Query (from Oracle 10g)

Flashback Transaction Query (from Oracle 10g)

Flashback Database (from Oracle 10g)

Flashback Data Archive (from Oracle 11g)

Flashback Transaction (from Oracle 11g)

Flashback Query is useful to view the data at a point-in-time in the past. This can be used (only) to view
and reconstruct lost data that was deleted or changed by accident.
Flashback Table is useful to recover a table to a point-in-time in the past without restoring a backup.
Flashback Table is a push button solution to restore the contents of a table to a given point-in-time. An

application on top of Flashback Query can achieve this, but with less efficiency.
Flashback Drop provides a way to restore accidentally dropped tables. This will be done with the help
of Recyclebin feature.
Flashback Version Query uses undo data stored in the database to view the changes to one or more
rows along with all the metadata of the changes.
Flashback Transaction Query is useful to examine changes to the database at the transaction level. As a
result, we can diagnose problems, perform analysis and audit transactions.
Flashback Database is useful to bring the database to a prior point in time by undoing all the changes that
have taken place since that time. This operation is fast, because we do not need to restore the
backups. This in turn results in much less downtime following data corruption or human error.
Flashback Database applies to the entire database. It requires configuration and resources, but it
provides a fast alternative to performing incomplete database recovery.
Flashback Data Archive - from Oracle 11g, flashback will make use of flashback logs, explicitly
created for that table, in FRA (Flash/Fast Recovery Area), will not use undo. Flashback data archives
can be defined on any table/tablespace. Flashback data archives are written by a
dedicated background process called FBDA so there is less impact on performance. Can be purged at
regular intervals automatically.

Rules in order to Flashback
1. DBA must have enabled the undo tablespace, with an appropriate retention period.
2. User must have realized the mistake before the retention period elapsed.
3. User must not have exited from the session.
4. Even the front end must be enabled with the flashback option (package).
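Rule 1 depends on the undo configuration; a minimal sketch of checking and raising the retention (the 3600-second value is only illustrative):

SQL> show parameter undo_retention
SQL> ALTER SYSTEM SET undo_retention = 3600 SCOPE = BOTH;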

When to Use Oracle Flashback
(Flashback Database Vs Flashback Table)
Flashback Table uses information in the undo tablespace to restore the table. This provides significant
benefits over media recovery in terms of ease of use, availability, and faster restoration.

Flashback Database and Flashback Table differ in granularity, performance, and restrictions. For a
primary database, consider using Flashback Database rather than Flashback Table in the following
situations:

A user error affected the whole database.


A user error affected a table or a small set of tables, but the impact of reverting this set of
tables is not clear because of the logical relationships between tables.

A user error affected a table or a small set of tables, but using Flashback Table would fail
because of its DDL restrictions.

Flashback Database works through all DDL operations, whereas Flashback Table does not.
Flashback Database moves the entire database back in time, constraints are not an issue, whereas
they are with Flashback Table. Flashback Table cannot be used on a standby database.

Flashback Query
Flashback Query in Oracle
Flashback Query was introduced in Oracle 9i
Oracle Flashback Query allows us to view and repair historical data. We can perform queries on the
database as of a certain time or specified system change number (SCN).
Flashback Query uses Oracle's multiversion read-consistency capabilities to restore data by
applying undo as needed. Oracle Database 10g automatically tunes a parameter called the undo
retention period. The undo retention period indicates the amount of time that must pass before old
undo information, i.e. undo information for committed transactions, can be overwritten. The database
collects usage statistics and tunes the undo retention period based on these statistics and on undo
tablespace size.
Using Flashback Query, we can query the database as it existed this morning, yesterday, or last week
(if undo_retention parameter is set appropriately). The speed of this operation depends only on the
amount of data being queried and the number of changes to the data that need to be backed out.
Note: If Oracle’s default locking is overridden at any level, the database administrator or application
developer should ensure that the overriding locking procedures operate correctly. The locking
procedures must satisfy the following criteria: data integrity is guaranteed, data concurrency is
acceptable, and deadlocks are not possible or are appropriately handled.
You set the date and time you want to view. Then, any SQL query you run operates on data as it
existed at that time. If you are an authorized user, then you can correct errors and back out the
restored data without needing the intervention of an administrator.

SQL> select * from dept as of timestamp sysdate-3/1440;
With the AS OF sql clause, we can choose different snapshots for each table in the query. Associating
a snapshot with a table is known as table decoration. If you do not decorate a table with a snapshot,
then a default snapshot is used for it. All tables without a specified snapshot get the same default
snapshot e.g. suppose you want to write a query to find all the new customer accounts created in the
past hour. You could do set operations on two instances of the same table decorated with different AS
OF clauses.

DML and DDL operations can use table decoration to choose snapshots within sub queries.
Operations such as CREATE TABLE AS SELECT and INSERT TABLE AS SELECT can be used with
table decoration in the sub queries to repair tables from which rows have been mistakenly deleted.
Table decoration can be any arbitrary expression: a bind variable, a constant, a string, date
operations, and so on. You can open a cursor and dynamically bind a snapshot value (a timestamp or
an SCN) to decorate a table with.
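For instance, a session can capture the current SCN through DBMS_FLASHBACK and later bind it into an AS OF clause (a sketch; the table name is illustrative):

SQL> variable scn number
SQL> exec :scn := dbms_flashback.get_system_change_number;
SQL> select * from emp as of scn :scn;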

SQL> create table emp_old as select * from emp as of timestamp sysdate-1;

Flashback Query Benefits
■ Application Transparency
Packaged applications, like report generation tools that only do queries, can run in Flashback Query
mode by using logon triggers. Applications can run transparently without requiring changes to code.
All the constraints that the application needs to be satisfied are guaranteed to hold good, because
there is a consistent version of the database as of the Flashback Query time.
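Such session-wide Flashback Query mode can be toggled with the DBMS_FLASHBACK package; a minimal sketch (the one-hour offset is illustrative):

SQL> exec dbms_flashback.enable_at_time(systimestamp - interval '1' hour);
SQL> select count(*) from emp;   -- sees data as of one hour ago
SQL> exec dbms_flashback.disable;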
■ Application Performance
If an application requires recovery actions, it can do so by saving SCNs and flashing back to those
SCNs. This is a lot easier and faster than saving data sets and restoring them later, which would be
required if the application were to do explicit versioning. Using Flashback Query, there are no costs
for logging that would be incurred by explicit versioning.
■ Online Operation
Flashback Query is an online operation. Concurrent DMLs and queries from other sessions are
allowed while an object is queried inside Flashback Query. The speed of these operations is
unaffected. Moreover, different sessions can flash back to different Flashback times or SCNs on the
same object concurrently. The speed of the Flashback Query itself depends on the amount of undo
that needs to be applied, which is proportional to how far back in time the query goes.
■ Easy Manageability
There is no additional management on the part of the user, except setting the appropriate retention
interval, having the right privileges, and so on. No additional logging has to be turned on, because
past versions are constructed automatically, as needed.
Notes:

Flashback Query does not undo anything. It is only a query mechanism. We can take the
output from a Flashback Query and perform an undo in many circumstances.

Flashback Query does not tell us what changed, LogMiner does that.


Flashback Query can undo changes and can be very efficient if we know the rows that need
to be moved back in time. We can use it to move a full table back in time, but this is very expensive if
the table is large since it involves a full table copy.

Flashback Query does not work through DDL operations that modify columns, or drop or
truncate tables.

LogMiner is very good for getting change history, but it gives changes in terms of deltas
(insert, update, delete), not in terms of the before and after image of a row. These can be difficult to
deal with in some applications.

When to Use Flashback Query

■ Self-Service Repair
Perhaps you accidentally deleted some important rows from a table and wanted to recover the deleted rows. To do the repair, you can move backward in time, see the missing rows, and re-insert the deleted rows into the current table.

■ E-mail or Voice Mail Applications
You might have deleted mail in the past. Using Flashback Query, you can restore the deleted mail by moving back in time and re-inserting the deleted message into the current message box.

■ Account Balances
You can view prior account balances as of a certain day in the month.

■ Packaged Applications
Packaged applications (like report generation tools) can make use of Flashback Query without any changes to application logic. Any constraints that the application expects are guaranteed to be satisfied, because users see a consistent version of the database as of the given time or SCN.

In DSS environments, Flashback Query could be used for extraction of data as of a consistent point in time from OLTP systems. In addition, Flashback Query could be used after examination of audit information to see the before image of the data.
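A minimal sketch of such a self-service repair, assuming the rows were deleted from emp within the last 15 minutes and the undo retention still covers that window:

SQL> insert into emp
       select * from emp as of timestamp (systimestamp - interval '15' minute)
       minus
       select * from emp;
SQL> commit;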