Oracle 10g / 11g Database Administration

An Oracle database is a collection of data treated as a unit. The purpose of a database is to store and retrieve related information. A database server is the key to solving the problems of information management. In general, a server reliably manages a large amount of data in a multiuser environment so that many users can concurrently access the same data, all while delivering high performance. A database server also prevents unauthorized access and provides efficient solutions for failure recovery.

Oracle Database is the first database designed for enterprise grid computing, the most flexible and cost-effective way to manage information and applications. Enterprise grid computing creates large pools of industry-standard, modular storage and servers. With this architecture, each new system can be rapidly provisioned from the pool of components. There is no need to provision for peak workloads, because capacity can be easily added or reallocated from the resource pools as needed.

The database has logical structures and physical structures. Because the physical and logical structures are separate, the physical storage of data can be managed without affecting access to the logical storage structures.

Overview of Oracle Grid Architecture
The Oracle grid architecture pools large numbers of servers, storage, and networks into a flexible, on-demand computing resource for enterprise computing needs. The grid computing infrastructure continually analyzes demand for resources and adjusts supply accordingly. For example, you could run different applications on a grid of several linked database servers. When reports are due at the end of the month, the database administrator could automatically provision more servers to that application to handle the increased demand. Grid computing uses sophisticated workload management that makes it possible for applications to share resources across many servers. Data processing capacity can be added or removed on demand, and resources within a location can be dynamically provisioned. Web services can quickly integrate applications to create new business processes.

Difference between a cluster and a grid
Clustering is one technology used to create a grid infrastructure. Simple clusters have static resources for specific applications by specific owners. Grids, which can consist of multiple clusters, are dynamic resource pools shareable among many different applications and users. A grid does not assume that all servers in the grid are running the same set of applications: applications can be scheduled and migrated across servers in the grid. Grids share resources from and among independent system owners.

At the highest level, the idea of grid computing is computing as a utility. In other words, you should not care where your data resides, or what computer processes your request. You should be able to request information or computation and have it delivered, as much as you want, and whenever you want. This is analogous to the way electric utilities work: you don't know where the generator is, or how the electric grid is wired; you just ask for electricity, and you get it. The goal is to make computing a utility, a commodity, and ubiquitous. Hence the name, The Grid.

This view of utility computing is, of course, a "client side" view. From the "server side", or behind the scenes, the grid is about resource allocation, information sharing, and high availability. Resource allocation ensures that all those that need or request resources are getting what they need, and that resources are not standing idle while requests go unserviced. Information sharing makes sure that the information users and applications need is available where and when it is needed. High availability features guarantee all the data and computation is always there, just like a utility company always provides electric power.

Responsibilities of Database Administrators
Each database requires at least one database administrator (DBA). An Oracle Database system can be large and can have many users. Therefore, database administration is sometimes not a one-person job, but a job for a group of DBAs who share responsibility. A database administrator's responsibilities can include the following tasks:
• Installing and upgrading the Oracle Database server and application tools
• Allocating system storage and planning future storage requirements for the database system
• Creating primary database storage structures (tablespaces) after application developers have designed an application
• Creating primary objects (tables, views, indexes) once application developers have designed an application
• Modifying the database structure, as necessary, from information given by application developers
• Enrolling users and maintaining system security
• Ensuring compliance with Oracle license agreements
• Controlling and monitoring user access to the database
• Monitoring and optimizing the performance of the database
• Planning for backup and recovery of database information
• Maintaining archived data on tape
• Backing up and restoring the database
• Contacting Oracle for technical support

Creating the Database
This section presents the steps involved when you create a database manually. These steps should be followed in the order presented.

Before you create the database, make sure you have done the planning about the size of the database and the number of tablespaces and redo log files you want in the database. Regarding the size of the database, you first have to find out how many tables are going to be created and how much space they will be occupying for the next 1 or 2 years. The best thing is to start with some specific size and later on adjust the size depending upon the requirement.

Plan the layout of the underlying operating system files your database will comprise. Proper distribution of files can improve database performance dramatically by distributing the I/O during file access. You can distribute I/O in several ways when you install Oracle software and create your database. For example, you can place redo log files on separate disks or use striping. You can situate datafiles to reduce contention. And you can control data density (number of rows to a data block).

Select the standard database block size. This is specified at database creation by the DB_BLOCK_SIZE initialization parameter and cannot be changed after the database is created. For databases, a block size of 4K or 8K is widely used.
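As a quick aside, for an existing database you can confirm the standard block size at any time from SQL*Plus (shown here only as a check; the value is fixed at creation):

SQL> show parameter db_block_size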

Before you start creating the database it is best to write down the specification and then proceed. The examples shown in these steps create an example database, myicadb. Let us create the database myicadb with the following specification.

Database name and System Identifier

SID     = myicadb
DB_NAME = myicadb

TABLESPACES (we will have six tablespaces in this database, with 1 datafile in each tablespace)

Tablespace Name   Datafile                                 Size
system            /u01/oracle/oradata/myica/sys.dbf        250M
users             /u01/oracle/oradata/myica/usr.dbf        100M
undotbs           /u01/oracle/oradata/myica/undo.dbf       100M
temp              /u01/oracle/oradata/myica/temp.dbf       100M
index_data        /u01/oracle/oradata/myica/indx.dbf       100M
sysaux            /u01/oracle/oradata/myica/sysaux.dbf     100M

LOGFILES (we will have 2 log groups in the database)

Logfile Group     Member                                   Size
Group 1           /u01/oracle/oradata/myica/log1.ora       50M
Group 2           /u01/oracle/oradata/myica/log2.ora       50M

CONTROL FILE
/u01/oracle/oradata/myica/control.ora

PARAMETER FILE
/u01/oracle/dbs/initmyicadb.ora

(Remember the parameter file name should be of the format init<sid>.ora, and it should be in the ORACLE_HOME/dbs directory in Unix o/s and the ORACLE_HOME/database directory in Windows o/s.)

Now let us start creating the database.

Step 1: Login to the oracle account and make directories for your database.

$mkdir /u01/oracle/oradata/myica
$mkdir /u01/oracle/oradata/myica/bdump
$mkdir /u01/oracle/oradata/myica/udump
$mkdir /u01/oracle/oradata/myica/cdump

Step 2: Create the parameter file by copying the default template (init.ora) and set the required parameters.

$cd /u01/oracle/dbs
$cp init.ora initmyicadb.ora

Now open the parameter file and set the following parameters:

$vi initmyicadb.ora

DB_NAME=myicadb
DB_BLOCK_SIZE=8192
CONTROL_FILES=/u01/oracle/oradata/myica/control.ora
BACKGROUND_DUMP_DEST=/u01/oracle/oradata/myica/bdump
USER_DUMP_DEST=/u01/oracle/oradata/myica/udump
CORE_DUMP_DEST=/u01/oracle/oradata/myica/cdump
UNDO_TABLESPACE=undotbs
UNDO_MANAGEMENT=AUTO

After entering the above parameters save the file by pressing Esc :wq

Step 3: Now set the ORACLE_SID environment variable and start the instance.

$export ORACLE_SID=myicadb
$sqlplus
Enter User: / as sysdba
SQL>startup nomount

Step 4: Give the create database command. Here I am not specifying optional settings such as language, characterset etc.; for these settings Oracle will use the default values. I am giving the barest command to create the database to keep it simple. The command to create the database is:

SQL>create database myicadb
      datafile '/u01/oracle/oradata/myica/sys.dbf' size 250M
      sysaux datafile '/u01/oracle/oradata/myica/sysaux.dbf' size 100M
      undo tablespace undotbs datafile '/u01/oracle/oradata/myica/undo.dbf' size 100M
      default temporary tablespace temp tempfile '/u01/oracle/oradata/myica/temp.dbf' size 100M
      logfile group 1 '/u01/oracle/oradata/myica/log1.ora' size 50M,
              group 2 '/u01/oracle/oradata/myica/log2.ora' size 50M;

After the command finishes you will get the following message:

Database created.

If you are getting any errors then see the accompanying messages. If no accompanying messages are shown then you have to see the alert_myicadb.log file located in the BACKGROUND_DUMP_DEST directory, which will show the exact reason why the command has failed. After you have rectified the error, please delete all created files in the /u01/oracle/oradata/myica directory and give the above command again.

Step 5: After the above command finishes, the database will get mounted and opened. Now create the additional tablespaces.

To create the USERS tablespace:

SQL>create tablespace users datafile '/u01/oracle/oradata/myica/usr.dbf' size 100M;

To create the INDEX_DATA tablespace:

SQL>create tablespace index_data datafile '/u01/oracle/oradata/myica/indx.dbf' size 100M;

Step 6: To populate the database with data dictionaries and to install procedural options, execute the following scripts. First execute the CATALOG.SQL script to install the data dictionaries:

SQL>@/u01/oracle/rdbms/admin/catalog.sql

The above script will take several minutes. After the above script is finished, run the CATPROC.SQL script to install the procedural option:

SQL>@/u01/oracle/rdbms/admin/catproc.sql

This script will also take several minutes to complete.

Step 7: Now change the passwords for the SYS and SYSTEM accounts, since the default passwords change_on_install and manager are known by everybody.

SQL>alter user sys identified by myica;
SQL>alter user system identified by myica;

Step 8: Create additional user accounts. You can create as many user accounts as you like. Let us create the popular account SCOTT.

SQL>create user scott identified by tiger default tablespace users quota 10M on users;
SQL>grant connect to scott;

Step 9: Add this database SID in the listener.ora file and restart the listener process.

$cd /u01/oracle/network/admin
$vi listener.ora

(This file will already contain sample entries. Copy and paste one sample entry and edit the SID setting.)

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 200.200.100.1)(PORT = 1521))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/oracle)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (SID_NAME = ORCL)
      (ORACLE_HOME = /u01/oracle)
    )
    #Add these lines
    (SID_DESC =
      (SID_NAME = myicadb)
      (ORACLE_HOME = /u01/oracle)
    )
  )

Save the file by pressing Esc :wq. Now restart the listener process:

$lsnrctl stop
$lsnrctl start

Step 10: It is recommended to take a full database backup after you have just created the database. How to take a backup is dealt with in the Backup and Recovery section.

Congratulations, you have just created an Oracle database.
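Before handing the database over, a quick sanity check, using the myicadb database built above, is to confirm that the database is open and the tablespaces exist:

SQL> select name, open_mode from v$database;
SQL> select tablespace_name, status from dba_tablespaces;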

Managing Tablespaces and Datafiles

Using multiple tablespaces provides several advantages:

• Separate user data from data dictionary data to reduce contention among dictionary objects and schema objects for the same datafiles.
• Separate data of one application from the data of another to prevent multiple applications from being affected if a tablespace must be taken offline.
• Store the datafiles of different tablespaces on different disk drives to reduce I/O contention.
• Take individual tablespaces offline while others remain online, providing better overall availability.

Creating New Tablespaces

You can create Locally Managed or Dictionary Managed tablespaces. In prior versions of Oracle only dictionary managed tablespaces were available, but from Oracle ver. 8i you can also create locally managed tablespaces. Locally managed tablespaces track all extent information in the tablespace itself by using bitmaps, resulting in the following benefits:

• Concurrency and speed of space operations is improved, because space allocations and deallocations modify locally managed resources (bitmaps stored in header files) rather than requiring centrally managed resources such as enqueues.
• Performance is improved, because recursive operations that are sometimes required during dictionary-managed space allocation are eliminated.

To create a locally managed tablespace give the following command:

SQL> CREATE TABLESPACE ica_lmts DATAFILE '/u02/oracle/ica/ica01.dbf' SIZE 50M EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

AUTOALLOCATE causes the tablespace to be system managed with a minimum extent size of 64K. The alternative to AUTOALLOCATE is UNIFORM, which specifies that the tablespace is managed with extents of uniform size. You can specify that size in the SIZE clause of UNIFORM. If you omit SIZE, then the default size is 1M. The following example creates a locally managed tablespace with a uniform extent size of 256K:

SQL> CREATE TABLESPACE ica_lmt DATAFILE '/u02/oracle/ica/ica01.dbf' SIZE 50M EXTENT MANAGEMENT LOCAL UNIFORM SIZE 256K;

To create a Dictionary Managed tablespace:

SQL> CREATE TABLESPACE ica_dmt DATAFILE '/u02/oracle/ica/ica01.dbf' SIZE 50M EXTENT MANAGEMENT DICTIONARY;

Bigfile Tablespaces (Introduced in Oracle Ver. 10g)

A bigfile tablespace is a tablespace with a single, but very large (up to 4G blocks) datafile. Traditional smallfile tablespaces, in contrast, can contain multiple datafiles, but the files cannot be as large. Bigfile tablespaces can reduce the number of datafiles needed for a database. To create a bigfile tablespace give the following command:

SQL> CREATE BIGFILE TABLESPACE ica_bigtbs DATAFILE '/u02/oracle/ica/bigtbs01.dbf' SIZE 50G;
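Because a bigfile tablespace has only one datafile, Oracle 10g also lets you resize it at the tablespace level rather than the datafile level. A minimal sketch, reusing the ica_bigtbs tablespace from the example above (the sizes are illustrative):

SQL> ALTER TABLESPACE ica_bigtbs RESIZE 60G;
SQL> ALTER TABLESPACE ica_bigtbs AUTOEXTEND ON NEXT 5G;

These two ALTER TABLESPACE clauses are valid only for bigfile tablespaces; smallfile tablespaces must still be resized datafile by datafile.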

To Extend the Size of a Tablespace

Option 1
You can extend the size of a tablespace by increasing the size of an existing datafile, by typing the following command:

SQL> alter database datafile '/u01/oracle/data/icatbs01.dbf' resize 100M;

This will increase the size from 50M to 100M.

Option 2
You can also extend the size of a tablespace by adding a new datafile to the tablespace. This is useful if the existing datafile has reached the o/s file size limit or the drive where the file exists does not have free space. To add a new datafile to an existing tablespace give the following command:

SQL> alter tablespace ica add datafile '/u02/oracle/ica/icatbs02.dbf' size 50M;

Option 3
You can also use the auto extend feature of a datafile. In this, Oracle will automatically increase the size of the datafile whenever space is required. You can specify by how much the file should increase and the maximum size to which it should extend.

To make an existing datafile auto extendable give the following command:

SQL> alter database datafile '/u01/oracle/ica/icatbs01.dbf' autoextend ON next 5M maxsize 500M;

You can also make a datafile auto extendable while creating a new tablespace itself, by giving the following command:

SQL> create tablespace ica datafile '/u01/oracle/ica/icatbs01.dbf' size 50M autoextend ON next 5M maxsize 500M;

To Decrease the Size of a Tablespace

You can decrease the size of a tablespace by decreasing the datafile associated with it. You can decrease a datafile only up to the size of the empty space in it. To decrease the size of a datafile give the following command:

SQL> alter database datafile '/u01/oracle/ica/icatbs01.dbf' resize 30M;

Coalescing Tablespaces

A free extent in a dictionary-managed tablespace is made up of a collection of contiguous free blocks. When allocating new extents to a tablespace segment, the database uses the free extent closest in size to the required extent. In some cases, when segments are dropped, their extents are deallocated and marked as free, but adjacent free extents are not immediately

recombined into larger free extents. The result is fragmentation that makes allocation of larger extents more difficult. You should often use the ALTER TABLESPACE ... COALESCE statement to manually coalesce any adjacent free extents. To coalesce a tablespace give the following command:

SQL> alter tablespace ica coalesce;
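If you want to check how fragmented the free space is before and after coalescing, the data dictionary has a view for it. A small sketch (view and column names as documented for 10g; verify on your release):

SQL> select tablespace_name, percent_extents_coalesced from dba_free_space_coalesced;

A tablespace whose PERCENT_EXTENTS_COALESCED is 100 has no adjacent free extents left to merge.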

Taking Tablespaces Offline or Online

You can take an online tablespace offline so that it is temporarily unavailable for general use; the rest of the database remains open and available for users to access data. Conversely, you can bring an offline tablespace online to make the schema objects within the tablespace available to database users. To alter the availability of a tablespace, use the ALTER TABLESPACE statement. The database must be open, and you must have the ALTER TABLESPACE or MANAGE TABLESPACE system privilege.

To take a tablespace offline give the following command:

SQL> alter tablespace ica offline;

To again bring it back online give the following command:

SQL> alter tablespace ica online;

To take an individual datafile offline type the following command:

SQL> alter database datafile '/u01/oracle/ica/ica_tbs01.dbf' offline;

Again, to bring it back online give the following command:

SQL> alter database datafile '/u01/oracle/ica/ica_tbs01.dbf' online;

Note: You can't take individual datafiles offline if the database is running in NOARCHIVELOG mode. If the datafile has become corrupt or missing when the database is running in NOARCHIVELOG mode, then you can only drop it, by giving the following command:

SQL> alter database datafile '/u01/oracle/ica/ica_tbs01.dbf' offline for drop;

Making a Tablespace Read Only

Making a tablespace read-only prevents updates on all tables in the tablespace, regardless of a user's update privilege level; it prevents write operations on the datafiles in the tablespace. The primary purpose of read-only tablespaces is to eliminate the need to perform backup and recovery of large, static portions of a database. Read-only tablespaces also provide a way of protecting historical data so that users cannot modify it.

To make a tablespace read only:

SQL> alter tablespace ica read only;

Again, to make it read write:

SQL> alter tablespace ica read write;

Renaming Tablespaces

Using the RENAME TO clause of ALTER TABLESPACE, you can rename a permanent or temporary tablespace. For example, the following statement renames the users tablespace:

SQL> ALTER TABLESPACE users RENAME TO usersts;

The following affect the operation of this statement:

• The COMPATIBLE parameter must be set to 10.0 or higher.
• If the tablespace being renamed is the SYSTEM tablespace or the SYSAUX tablespace, then it will not be renamed and an error is raised.
• If any datafile in the tablespace is offline, or if the tablespace is offline, then the tablespace is not renamed and an error is raised.

Dropping Tablespaces

You can drop a tablespace and its contents (the segments contained in the tablespace) from the database if the tablespace and its contents are no longer required. You must have the DROP TABLESPACE system privilege to drop a tablespace.

Caution: Once a tablespace has been dropped, the data in the tablespace is not recoverable. Therefore, make sure that all data contained in a tablespace to be dropped will not be required in the future. Also, immediately before and after dropping a tablespace from a database, back up the database completely.

To drop a tablespace give the following command:

SQL> drop tablespace ica;

This will drop the tablespace only if it is empty. If it is not empty and you want to drop it anyhow, then add the following keyword:

SQL> drop tablespace ica including contents;

This will drop the tablespace even if it is not empty, but the datafiles will not be deleted; you have to use operating system commands to delete the files. But if you also include the datafiles keyword, then the associated datafiles are deleted from the disk as well:

SQL> drop tablespace ica including contents and datafiles;

Temporary Tablespace

Temporary tablespace is used for sorting large tables. Every database should have one temporary tablespace. To create a temporary tablespace give the following command:

SQL> create temporary tablespace temp tempfile '/u01/oracle/data/ica_temp.dbf' size 100M extent management local uniform size 5M;

The extent management clause is optional for temporary tablespaces because all temporary tablespaces are created with locally managed extents of a uniform size. The AUTOALLOCATE clause is not allowed for temporary tablespaces.

Increasing or Decreasing the size of a Temporary Tablespace

You can use the resize clause to increase or decrease the size of a temporary tablespace. The following statement resizes a temporary file:

SQL> ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' RESIZE 18M;

The following statement drops a temporary file and deletes the operating system file:

SQL> ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' DROP INCLUDING DATAFILES;

Tablespace Groups

A tablespace group enables a user to consume temporary space from multiple tablespaces. Using a tablespace group, rather than a single temporary tablespace, can alleviate problems caused where one tablespace is inadequate to hold the results of a sort, particularly on a table that has many partitions. A tablespace group also enables parallel execution servers in a single parallel operation to use multiple temporary tablespaces.

A tablespace group has the following characteristics:

• It contains at least one tablespace. There is no explicit limit on the maximum number of tablespaces that are contained in a group.
• It shares the namespace of tablespaces, so its name cannot be the same as any tablespace.
• You can specify a tablespace group name wherever a tablespace name would appear when you assign a default temporary tablespace for the database or a temporary tablespace for a user.

You do not explicitly create a tablespace group. Rather, it is created implicitly when you assign the first temporary tablespace to the group. The group is deleted when the last temporary tablespace it contains is removed from it.
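As the third characteristic above mentions, a group name can stand in for a tablespace name when assigning a user's temporary tablespace. A minimal sketch, assuming a group named group1 already has at least one member and reusing the scott account created earlier:

SQL> ALTER USER scott TEMPORARY TABLESPACE group1;

From then on scott's sorts can spill across every tablespace in group1 instead of being confined to a single temporary tablespace.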

The view DBA_TABLESPACE_GROUPS lists tablespace groups and their member tablespaces.

Creating a Temporary Tablespace Group

You create a tablespace group implicitly when you include the TABLESPACE GROUP clause in the CREATE TEMPORARY TABLESPACE or ALTER TABLESPACE statement and the specified tablespace group does not currently exist. For example, if neither group1 nor group2 exists, then the following statements create those groups, each of which has only the specified tablespace as a member:

SQL> CREATE TEMPORARY TABLESPACE ica_temp2 TEMPFILE '/u02/oracle/ica/ica_temp.dbf' SIZE 50M TABLESPACE GROUP group1;

SQL> ALTER TABLESPACE ica_temp2 TABLESPACE GROUP group2;

Assigning a Tablespace Group as the Default Temporary Tablespace

Use the ALTER DATABASE ... DEFAULT TEMPORARY TABLESPACE statement to assign a tablespace group as the default temporary tablespace for the database. For example:

SQL> ALTER DATABASE sample DEFAULT TEMPORARY TABLESPACE group2;

Diagnosing and Repairing Locally Managed Tablespace Problems

To diagnose and repair corruptions in locally managed tablespaces, Oracle has supplied a package called DBMS_SPACE_ADMIN. This package has many procedures, described below:

SEGMENT_VERIFY
  Verifies the consistency of the extent map of the segment.

SEGMENT_CORRUPT
  Marks the segment corrupt or valid so that appropriate error recovery can be done. Cannot be used for a locally managed SYSTEM tablespace.

SEGMENT_DROP_CORRUPT
  Drops a segment currently marked corrupt (without reclaiming space). Cannot be used for a locally managed SYSTEM tablespace.

SEGMENT_DUMP
  Dumps the segment header and extent map of a given segment.

TABLESPACE_VERIFY
  Verifies that the bitmaps and extent maps for the segments in the tablespace are in sync.

TABLESPACE_REBUILD_BITMAPS
  Rebuilds the appropriate bitmap. Cannot be used for a locally managed SYSTEM tablespace.

TABLESPACE_FIX_BITMAPS
  Marks the appropriate data block address range (extent) as free or used in the bitmap. Cannot be used for a locally managed SYSTEM tablespace.

TABLESPACE_REBUILD_QUOTAS
  Rebuilds quotas for a given tablespace.

TABLESPACE_MIGRATE_FROM_LOCAL
  Migrates a locally managed tablespace to a dictionary-managed tablespace. Cannot be used to migrate a locally managed SYSTEM tablespace to a dictionary-managed SYSTEM tablespace.

TABLESPACE_MIGRATE_TO_LOCAL
  Migrates a tablespace from dictionary-managed format to locally managed format.

TABLESPACE_RELOCATE_BITMAPS
  Relocates the bitmaps to the destination specified. Cannot be used for a locally managed SYSTEM tablespace.

TABLESPACE_FIX_SEGMENT_STATES
  Fixes the state of the segments in a tablespace in which migration was aborted.

Be careful using the above procedures: if not used properly you will corrupt your database. Contact Oracle Support before using these procedures. Following are some of the scenarios where you can use them.

Scenario 1: Fixing Bitmap When Allocated Blocks are Marked Free (No Overlap)

The TABLESPACE_VERIFY procedure discovers that a segment has allocated blocks that are marked free in the bitmap, but no overlap between segments is reported. In this scenario, perform the following tasks:

1. Call the SEGMENT_DUMP procedure to dump the ranges that the administrator allocated to the segment.
2. For each range, call the TABLESPACE_FIX_BITMAPS procedure with the TABLESPACE_EXTENT_MAKE_USED option to mark the space as used.
3. Call TABLESPACE_REBUILD_QUOTAS to fix up quotas.
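As a hedged sketch, here are the two single-argument calls that bracket Scenario 1, the detection step and the final quota rebuild, run from SQL*Plus. The tablespace name ICA is illustrative, and the bitmap-fixing call in between also takes datafile and block-range arguments, so check the exact parameter lists in the PL/SQL Packages and Types Reference for your release before running anything:

SQL> EXEC DBMS_SPACE_ADMIN.TABLESPACE_VERIFY('ICA');
SQL> EXEC DBMS_SPACE_ADMIN.TABLESPACE_REBUILD_QUOTAS('ICA');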

Scenario 2: Dropping a Corrupted Segment

You cannot drop a segment because the bitmap has segment blocks marked "free". The system has automatically marked the segment corrupted. In this scenario, perform the following tasks:

1. Call the SEGMENT_VERIFY procedure with the SEGMENT_VERIFY_EXTENTS_GLOBAL option. If no overlaps are reported, then proceed with steps 2 through 5.
2. Call the SEGMENT_DUMP procedure to dump the DBA ranges allocated to the segment.
3. For each range, call TABLESPACE_FIX_BITMAPS with the TABLESPACE_EXTENT_MAKE_FREE option to mark the space as free.
4. Call SEGMENT_DROP_CORRUPT to drop the SEG$ entry.
5. Call TABLESPACE_REBUILD_QUOTAS to fix up quotas.

Scenario 3: Fixing Bitmap Where Overlap is Reported

The TABLESPACE_VERIFY procedure reports some overlapping. Some of the real data must be sacrificed based on previous internal errors. After choosing the object to be sacrificed, in this case say table t1, perform the following tasks:

1. Make a list of all objects that t1 overlaps.
2. Drop table t1. If necessary, follow up by calling the SEGMENT_DROP_CORRUPT procedure.
3. Call the SEGMENT_VERIFY procedure on all objects that t1 overlapped. If necessary, call the TABLESPACE_FIX_BITMAPS procedure to mark appropriate bitmap blocks as used.
4. Rerun the TABLESPACE_VERIFY procedure to verify that the problem is resolved.

Scenario 4: Correcting Media Corruption of Bitmap Blocks

A set of bitmap blocks has media corruption. In this scenario, perform the following tasks:

1. Call the TABLESPACE_REBUILD_BITMAPS procedure, either on all bitmap blocks, or on a single block if only one is corrupt.
2. Call the TABLESPACE_REBUILD_QUOTAS procedure to rebuild quotas.
3. Call the TABLESPACE_VERIFY procedure to verify that the bitmaps are consistent.

Scenario 5: Migrating from a Dictionary-Managed to a Locally Managed Tablespace

To migrate a dictionary-managed tablespace to a locally managed tablespace, you use the TABLESPACE_MIGRATE_TO_LOCAL procedure. For example, if you want to migrate the dictionary-managed tablespace ICA2 to locally managed, give the following command:

SQL> EXEC DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('ica2');

Transporting Tablespaces

You can use the transportable tablespaces feature to move a subset of an Oracle Database and "plug" it in to another Oracle Database, essentially moving tablespaces between the databases. The tablespaces being transported can be either dictionary managed or locally managed. Starting with Oracle9i, the transported tablespaces are not required to be of the same block size as the target database standard block size.

Moving data using transportable tablespaces is much faster than performing either an export/import or unload/load of the same data. This is because the datafiles containing all of the actual data are simply copied to the destination location, and you use an import utility to transfer only the metadata of the tablespace objects to the new database.

Starting with Oracle Database 10g, you can transport tablespaces across platforms. This functionality can be used to allow a database to be migrated from one platform to another. However, not all platforms are supported. To see which platforms are supported give the following query:

SQL> COLUMN PLATFORM_NAME FORMAT A30
SQL> SELECT * FROM V$TRANSPORTABLE_PLATFORM;

PLATFORM_ID PLATFORM_NAME                  ENDIAN_FORMAT
----------- ------------------------------ --------------
          1 Solaris[tm] OE (32-bit)        Big
          2 Solaris[tm] OE (64-bit)        Big
          7 Microsoft Windows NT           Little
         10 Linux IA (32-bit)              Little
          6 AIX-Based Systems (64-bit)     Big
          3 HP-UX (64-bit)                 Big
          5 HP Tru64 UNIX                  Little
          4 HP-UX IA (64-bit)              Big
         11 Linux IA (64-bit)              Little
         15 HP Open VMS                    Little

10 rows selected.

If the source platform and the target platform are of different endianness, then an additional step must be done on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform.

Important: Before a tablespace can be transported to a different platform, the datafile header must identify the platform to which it belongs. In an Oracle Database with compatibility set to 10.0.0 or higher, you can accomplish this by making the datafile read/write at least once:

SQL> alter tablespace ica read only;
SQL> alter tablespace ica read write;

Procedure for transporting tablespaces

To move or copy a set of tablespaces, perform the following steps.

1. For cross-platform transport, check the endian format of both platforms by querying the V$TRANSPORTABLE_PLATFORM view. If you are transporting the tablespace set to a platform different from the source platform, then determine if the source and target platforms are supported and their endianness. If both platforms have the same endianness, no conversion is necessary; otherwise you must do a conversion of the tablespace set either at the source or target database.

2. Pick a self-contained set of tablespaces.

3. Generate a transportable tablespace set. A transportable tablespace set consists of datafiles for the set of tablespaces being transported and an export file containing structural information for the set of tablespaces. If you are transporting the tablespace set to a platform with different endianness from the source platform, you must convert the tablespace set to the endianness of the target platform. You can perform a source-side conversion at this step in the procedure, or you can perform a target-side conversion as part of step 4.

4. Transport the tablespace set. Copy the datafiles and the export file to the target database. You can do this using any facility for copying flat files (for example, an operating system copy utility, ftp, the DBMS_FILE_COPY package, or publishing on CDs). If you have transported the tablespace set to a platform with different endianness from the source platform, and you have not performed a source-side conversion to the endianness of the target platform, you should perform a target-side conversion now.

5. Plug in the tablespace. Invoke the Import utility to plug the set of tablespaces into the target database.

Transporting Tablespace Example

These steps are illustrated more fully in the example that follows, where it is assumed the following datafiles and tablespaces exist:

Tablespace     Datafile
ica_sales_1    /u01/oracle/oradata/ica_salesdb/ica_sales_101.dbf
ica_sales_2    /u01/oracle/oradata/ica_salesdb/ica_sales_201.dbf

Step 1: Determine if Platforms are Supported and Endianness

This step is only necessary if you are transporting the tablespace set to a platform different from the source platform. If ica_sales_1 and ica_sales_2 were being transported to a different platform, you can execute the following query on both platforms to determine if the platforms are supported and their endian formats:

SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
  FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
 WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;

The following is the query result from the source platform:

PLATFORM_NAME             ENDIAN_FORMAT
------------------------- --------------
Solaris[tm] OE (32-bit)   Big

The following is the result from the target platform:

PLATFORM_NAME             ENDIAN_FORMAT
------------------------- --------------
Microsoft Windows NT      Little

You can see that the endian formats are different and thus a conversion is necessary for transporting the tablespace set.

Step 2: Pick a Self-Contained Set of Tablespaces

There may be logical or physical dependencies between objects in the transportable set and those outside of the set. You can only transport a set of tablespaces that is self-contained. That is, it should not have tables with foreign keys referring to primary keys of tables which are in other tablespaces, and it should not have tables with some partitions in other tablespaces. To find out whether the tablespace set is self-contained, do the following:

EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('ica_sales_1,ica_sales_2', TRUE);

After executing the above, give the following query to see whether any violations are there:

SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;

VIOLATIONS
---------------------------------------------------------------------------
Constraint DEPT_FK between table SAMI.EMP in tablespace ICA_SALES_1 and
table SAMI.DEPT in tablespace OTHER
Partitioned table SAMI.SALES is partially contained in the transportable set

These violations must be resolved before ica_sales_1 and ica_sales_2 are transportable.

Step 3: Generate a Transportable Tablespace Set

After ensuring you have a self-contained set of tablespaces that you want to transport, generate a transportable tablespace set by performing the following actions.

Make all tablespaces in the set you are copying read-only:

SQL> ALTER TABLESPACE ica_sales_1 READ ONLY;

Tablespace altered.

SQL> ALTER TABLESPACE ica_sales_2 READ ONLY;

Tablespace altered.

Invoke the Export utility on the host system and specify which tablespaces are in the transportable set:

SQL> HOST

$ exp system/password FILE=/u01/oracle/expdat.dmp TRANSPORT_TABLESPACES=ica_sales_1,ica_sales_2

If ica_sales_1 and ica_sales_2 are being transported to a different platform, the endianness of the platforms is different, and you want to convert before transporting the tablespace set, then convert the datafiles composing the ica_sales_1 and ica_sales_2 tablespaces. You have to use the RMAN utility to convert datafiles:

$ RMAN TARGET /

Recovery Manager: Release 10.1.0.0.0
Copyright (c) 1995, 2003, Oracle Corporation.  All rights reserved.
connected to target database: ica_salesdb (DBID=3295731590)

Convert the datafiles into a temporary location on the source platform. In this example, assume that the temporary location, directory /temp, has already been created. The converted datafiles are assigned names by the system:

RMAN> CONVERT TABLESPACE ica_sales_1,ica_sales_2
2> TO PLATFORM 'Microsoft Windows NT' FORMAT '/temp/%U';

Starting backup at 08-APR-07
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=11 devtype=DISK

channel ORA_DISK_1: starting datafile conversion
input datafile fno=00005 name=/u01/oracle/oradata/ica_salesdb/ica_sales_101.dbf
converted datafile=/temp/data_D-10_I-3295731590_TS-ADMIN_TBS_FNO-5_05ek24v5
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00004 name=/u01/oracle/oradata/ica_salesdb/ica_sales_201.dbf
converted datafile=/temp/data_D-10_I-3295731590_TS-EXAMPLE_FNO-4_06ek24vl
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:45
Finished backup at 08-APR-07

Step 4: Transport the Tablespace Set

Transport both the datafiles and the export file of the tablespaces to a place accessible to the target database. You can use any facility for copying flat files (for example, an operating system copy utility, ftp, the DBMS_FILE_TRANSFER package, or publishing on CDs).

Step 5: Plug In the Tablespace Set

Plug in the tablespaces and integrate the structural information using the Import utility, imp:

IMP system/password FILE=expdat.dmp
    DATAFILES='/ica_salesdb/ica_sales_101.dbf','/ica_salesdb/ica_sales_201.dbf'
    REMAP_SCHEMA=(smith:sami) REMAP_SCHEMA=(williams:john)

The REMAP_SCHEMA parameter changes the ownership of database objects. If you do not specify REMAP_SCHEMA, all database objects (such as tables and indexes) are created in the same user schema as in the source database, and those users must already exist in the target database. If they do not exist, then the import utility returns an error. In this example, objects in the tablespace set owned

by smith in the source database will be owned by sami in the target database after the tablespace set is plugged in. Similarly, objects owned by williams in the source database will be owned by john in the target database. In this case, the target database is not required to have users smith and williams, but must have users sami and john.

Check the import logs to ensure that no error has occurred. After this statement executes successfully, all tablespaces in the set being copied remain in read-only mode. Now put the tablespaces into read/write mode as follows:

SQL> ALTER TABLESPACE ica_sales_1 READ WRITE;
SQL> ALTER TABLESPACE ica_sales_2 READ WRITE;

Viewing Information about Tablespaces and Datafiles

Oracle has provided many data dictionary views for information about tablespaces and datafiles. Some of them are:

To view information about tablespaces in a database give the following queries:

SQL> select * from dba_tablespaces;
SQL> select * from v$tablespace;

To view information about datafiles:

SQL> select * from dba_data_files;
SQL> select * from v$datafile;

To view information about tempfiles:

SQL> select * from dba_temp_files;
SQL> select * from v$tempfile;

To view information about free space in datafiles:

SQL> select * from dba_free_space;

To view information about free space in tempfiles:

SQL> select * from V$TEMP_SPACE_HEADER;
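In practice you rarely want every column of these views; a common sketch for a quick per-tablespace free-space summary (standard DBA_FREE_SPACE columns; the MB arithmetic is only for readability):

SQL> select tablespace_name, round(sum(bytes)/1024/1024) free_mb
  2  from dba_free_space
  3  group by tablespace_name;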

Relocating or Renaming Datafiles

You can rename datafiles to either change their names or relocate them.

Renaming or Relocating Datafiles belonging to a Single Tablespace

To rename or relocate datafiles belonging to a single tablespace, do the following:

1. Take the tablespace offline.
2. Rename or relocate the datafiles using operating system commands.
3. Give the ALTER TABLESPACE with RENAME DATAFILE option to change the filenames within the database.
4. Bring the tablespace online.

For example, suppose you have a tablespace users with the following datafiles:

/u01/oracle/ica/usr01.dbf
/u01/oracle/ica/usr02.dbf

Now you want to relocate '/u01/oracle/ica/usr01.dbf' to '/u02/oracle/ica/usr01.dbf' and want to rename '/u01/oracle/ica/usr02.dbf' to '/u01/oracle/ica/users02.dbf'. Then follow the given steps:

1. Bring the tablespace offline:

SQL> alter tablespace users offline;

2. Copy the file to the new location using o/s command:

$cp /u01/oracle/ica/usr01.dbf /u02/oracle/ica/usr01.dbf

2.dbf and /u02/oracle/rbdb1/ user3. This method is the only choice if you want to rename or relocate datafiles of several tablespaces in one operation. the following statement renames the datafiles/u02/oracle/rbdb1/sort01.dbf’ to ‘/u01/oracle/ica/use rs02. respectively: .dbf.dbf’ to ‘/u02/oracle/ica/usr01. Use ALTER control file.Rename the file ‘/u01/oracle/ica/usr02. follow these steps.dbf to /u02/oracle/rbdb1/temp01. Procedure for Renaming and Relocating Datafiles in Multiple Tablespaces You can rename and relocate datafiles in one or more tablespaces using the ALTER DATABASE RENAME FILE statement. using the operating system. You must have the ALTER DATABASE system privilege To rename datafiles in multiple tablespaces. 3.dbf’ using o/s command.. Copy the datafiles to be renamed to their new locations and new names.dbf /u01/oracle/ica/users02.dbf’. 1. $mv /u01/oracle/ica/usr02. Now bring the tablespace Online SQL> alter tablespace users online. Ensure that the database is mounted but closed.dbf 3.dbf’. dbf’. ‘/u01/oracle/ica/usr02. 4. Now start SQLPLUS and type the following command to rename and relocate these files SQL> alter tablespace users rename file ‘/u01/oracle/ica/usr01.’/u01/oracle/ica/users02. DATABASE to rename the file pointers in the database For example.dbfand /u02/oracle/rb db1/users03.

ALTER DATABASE RENAME FILE '/u02/oracle/rbdb1/sort01.dbf', '/u02/oracle/rbdb1/user3.dbf' TO '/u02/oracle/rbdb1/temp01.dbf', '/u02/oracle/rbdb1/users03.dbf';

Always provide complete filenames (including their paths) to properly identify the old and new datafiles. In particular, specify the old datafile names exactly as they appear in the DBA_DATA_FILES view.

4. Start the database.

5. Back up the database. After making any structural changes to a database, always perform an immediate and complete backup.

Managing REDO LOGFILES

Every Oracle database must have at least 2 redo logfile groups. Oracle writes all statements, except SELECT statements, to the logfiles. This is done because Oracle performs deferred batch writes, i.e. it does not write changes to disk per statement; instead it performs writes in batches. So if a user updates a row, Oracle will change the row in db_buffer_cache, record the statement in the logfile, and give the message to the user that the row is updated. Actually the row is not yet written back to the datafile, but still it gives the message to the user that the row is updated. After 3 seconds the row is actually written to the datafile. This is known as deferred batch writes.

Since Oracle defers writing to the datafile, there is a chance of power failure or system crash before the row is written to the disk. That's why Oracle writes the statement in the redo logfile, so that in case of power failure or system

crash Oracle can re-execute the statements next time when you open the database.

Adding a New Redo Logfile Group

To add a new redo logfile group to the database give the following command:

SQL> alter database add logfile group 3 '/u01/oracle/ica/log3.ora' size 10M;

Note: You can add groups to a database up to the MAXLOGFILES setting you have specified at the time of creating the database. If you want to change the MAXLOGFILES setting you have to create a new controlfile.

Adding Members to an existing group

To add a new member to an existing group give the following command:

SQL> alter database add logfile member '/u01/oracle/ica/log11.ora' to group 1;

Note: You can add members to a group up to the MAXLOGMEMBERS setting you have specified at the time of creating the database. If you want to change the MAXLOGMEMBERS setting you have to create a new controlfile.

Important: It is strongly recommended that you multiplex logfiles, i.e. have at least two log members in each group, with one member on one disk and another member on a second disk.

Dropping Members from a group

You can drop a member from a log group only if the group has more than one member and if it is not the current group. If you want to drop members from the current group, force a log switch or wait so that a log switch occurs and another group becomes current. To force a log switch give the following command:

SQL> alter system switch logfile;

The following command can be used to drop a logfile member:

SQL> alter database drop logfile member '/u01/oracle/ica/log11.ora';

Note: When you drop logfiles the files are not deleted from the disk. You have to use O/S commands to delete the files from disk.

Dropping Logfile Group

Similarly, you can also drop a logfile group, but only if the database has more than two groups and if it is not the current group:

SQL> alter database drop logfile group 3;

Note: When you drop logfiles the files are not deleted from the disk. You have to use O/S commands to delete the files from disk.
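Before dropping a group it is worth confirming that it is neither current nor unarchived; a quick sketch using the standard V$LOG columns:

SQL> select group#, status, archived from v$log;

Only groups showing INACTIVE (and, in ARCHIVELOG mode, ARCHIVED = YES) are safe candidates for dropping.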

Resizing Logfiles

You cannot resize logfiles. If you want to resize a logfile, create a new logfile group with the new size and subsequently drop the old logfile group.

Renaming or Relocating Logfiles

To rename or relocate logfiles perform the following steps. For example, suppose you want to move a logfile from '/u01/oracle/ica/log1.ora' to '/u02/oracle/ica/log1.ora'. Then do the following steps:

1. Shutdown the database:

SQL> shutdown immediate;

2. Move the logfile from the old location to the new location using operating system command:

$mv /u01/oracle/ica/log1.ora /u02/oracle/ica/log1.ora

3. Start and mount the database:

SQL> startup mount

4. Now give the following command to change the location in the controlfile:

SQL> alter database rename file '/u01/oracle/ica/log1.ora' to '/u02/oracle/ica/log1.ora';

5. Open the database:

SQL> alter database open;

Clearing REDO LOGFILES

A redo log file might become corrupted while the database is open, and ultimately stop database activity because archiving cannot continue. In this situation the ALTER DATABASE CLEAR LOGFILE statement can be used to reinitialize the file without shutting down the database. The following statement clears the log files in redo log group number 3:

SQL> ALTER DATABASE CLEAR LOGFILE GROUP 3;

This statement overcomes two situations where dropping redo logs is not possible:

• If there are only two log groups
• The corrupt redo log file belongs to the current group

If the corrupt redo log file has not been archived, use the UNARCHIVED keyword in the statement:

SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;

This statement clears the corrupted redo logs and avoids archiving them. The cleared redo logs are available for use even though they were not archived.

If you clear a log file that is needed for recovery of a backup, then you can no longer recover from that backup. The database writes a message in the alert log describing the backups from which you cannot recover.

Viewing Information About Logfiles

To see how many logfile groups there are and their status, type the following query:

SQL> SELECT * FROM V$LOG;

GROUP# THREAD#   SEQ    BYTES MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM
------ ------- ----- -------- ------- --- -------- ------------- ---------
     1       1 20605  1048576       1 YES ACTIVE        61515628 21-JUN-07
     2       1 20606  1048576       1 NO  CURRENT       41517595 21-JUN-07
     3       1 20603  1048576       1 YES INACTIVE      31511666 21-JUN-07
     4       1 20604  1048576       1 YES INACTIVE      21513647 21-JUN-07

To see how many members there are and where they are located, give the following query:

SQL> SELECT * FROM V$LOGFILE;

GROUP# STATUS  MEMBER
------ ------- ----------------------------------
     1         /U01/ORACLE/ICA/LOG1.ORA
     2         /U01/ORACLE/ICA/LOG2.ORA

Managing Control Files

Every Oracle Database has a control file, which is a small binary file that records the physical structure of the database. The control file includes:

• The database name
• Names and locations of associated datafiles and redo log files
• The timestamp of the database creation
• The current log sequence number
• Checkpoint information
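To see where the control files of the current database are located, you can query V$CONTROLFILE (or use SHOW PARAMETER control_files), for example:

SQL> select name from v$controlfile;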

It is strongly recommended that you multiplex control files, i.e. have at least two control files, one on one hard disk and another located on another disk. In this way if a control file becomes corrupt on one disk, the other copy will be available and you don't have to do recovery of the control file. You can multiplex the control file at the time of creating a database and later on also. If you have not multiplexed the control file at the time of creating a database, you can do it now by following the given procedure.

Multiplexing Control File

Steps:

1. Shutdown the database:

SQL> SHUTDOWN IMMEDIATE;

2. Copy the control file from the old location to the new location using operating system command:

$cp /u01/oracle/ica/control.ora /u02/oracle/ica/control.ora

3. Now open the parameter file and specify the new location like this:

. Creating A New Control File Follow the given steps to create a new controlfile Steps 1.CONTROL_FILES=/u01/oracle/ica/control. Changing the Name of a Database If you ever want to change the name of database or want to change the setting of MAXDATAFILES. MAXLOGMEMBERS then you have to create a new control file. After giving this statement oracle will write the CREATE CONTROLFILE statement in a trace file. MAXLOGFILES./u02/o racle/ica/control.TRC and it is created in USER_DUMP_DEST directory.ora 4. The trace file will be randomly named something like ORA23212. if one control file is lost you can copy it from another location.ora. Start the Database Now Oracle will start updating both the control files and. First generate the create controlfile statement SQL>alter database backup controlfile to trace.ora Change it to CONTROL_FILES=/u01/oracle/ica/control.

2. Go to the USER_DUMP_DEST directory and open the latest trace file in a text editor. This file will contain the CREATE CONTROLFILE statement. It will have two sets of statements, one with RESETLOGS and another without RESETLOGS. Since we are changing the name of the database, we have to use the RESETLOGS option of the CREATE CONTROLFILE statement. Now copy and paste the statement into a file; let it be c.sql.

3. Now open the c.sql file in a text editor and set the database name from ica to prod, as shown in the example below:

CREATE CONTROLFILE
   SET DATABASE prod
   LOGFILE GROUP 1 ('/u01/oracle/ica/redo01_01.log',
                    '/u01/oracle/ica/redo01_02.log'),
           GROUP 2 ('/u01/oracle/ica/redo02_01.log',
                    '/u01/oracle/ica/redo02_02.log'),
           GROUP 3 ('/u01/oracle/ica/redo03_01.log',
                    '/u01/oracle/ica/redo03_02.log')
   RESETLOGS
   DATAFILE '/u01/oracle/ica/system01.dbf' SIZE 3M,
            '/u01/oracle/ica/rbs01.dbs' SIZE 5M,
            '/u01/oracle/ica/users01.dbs' SIZE 5M,
            '/u01/oracle/ica/temp01.dbs' SIZE 5M
   MAXLOGFILES 50
   MAXLOGMEMBERS 3
   MAXLOGHISTORY 400
   MAXDATAFILES 200
   MAXINSTANCES 6
   ARCHIVELOG;

4. Start and do not mount the database:

SQL> STARTUP NOMOUNT;

5. Now execute the c.sql script:

SQL> @/u01/oracle/c.sql

6. Now open the database with RESETLOGS:

SQL> ALTER DATABASE OPEN RESETLOGS;

Cloning an Oracle Database

You have a production database running on one server. The company management wants to develop some new modules and they have hired some programmers to do that. Now these programmers require access to the production database and they want to make changes to it. You as a DBA can't give direct access to the production database, so you want to create a copy of this database on another server and give the developers access to it.

Let us see an example of cloning a database. We have a database running on the production server with the following files:

PARAMETER FILE located in /u01/oracle/ica/initica.ora

CONTROL FILES=/u01/oracle/ica/control.ora
BACKGROUND_DUMP_DEST=/u01/oracle/ica/bdump
USER_DUMP_DEST=/u01/oracle/ica/udump
CORE_DUMP_DEST=/u01/oracle/ica/cdump
LOG_ARCHIVE_DEST_1="location=/u01/oracle/ica/arc1"

DATAFILES =
/u01/oracle/ica/sys.dbf
/u01/oracle/ica/usr.dbf
/u01/oracle/ica/rbs.dbf
/u01/oracle/ica/tmp.dbf
/u01/oracle/ica/sysaux.dbf

LOGFILE =
/u01/oracle/ica/log1.ora
/u01/oracle/ica/log2.ora

Now you want to copy this database to SERVER 2, and in SERVER 2 you don't have a /u01 filesystem; in SERVER 2 you have a /d01 filesystem. To clone this database on SERVER 2 do the following.

Steps:

1. In SERVER 2 install the same version of o/s and the same version of Oracle as in SERVER 1.

2. In SERVER 1 generate the CREATE CONTROLFILE statement by typing the following command:

SQL> alter database backup controlfile to trace;

Now go to the USER_DUMP_DEST directory and open the latest trace file. This file will contain steps as well as the CREATE CONTROLFILE statement. Copy the CREATE CONTROLFILE statement and paste it in a file; let the filename be cr.sql. The CREATE CONTROLFILE statement will look like this:

CREATE CONTROLFILE REUSE DATABASE ica
   LOGFILE GROUP 1 ('/u01/oracle/ica/log1.ora'),
           GROUP 2 ('/u01/oracle/ica/log2.ora')
   DATAFILE '/u01/oracle/ica/sys.dbf' SIZE 300M,
            '/u01/oracle/ica/usr.dbf' SIZE 50M,
            '/u01/oracle/ica/rbs.dbf' SIZE 50M,
            '/u01/oracle/ica/tmp.dbf' SIZE 50M,
            '/u01/oracle/ica/sysaux.dbf' SIZE 100M
   MAXLOGFILES 50
   MAXLOGMEMBERS 3
   MAXLOGHISTORY 400
   MAXDATAFILES 200
   MAXINSTANCES 6
   ARCHIVELOG;

3. In SERVER 2 create the following directories:

$cd /d01/oracle
$mkdir ica
$mkdir arc1
$cd ica
$mkdir bdump udump cdump

4. Shutdown the database on SERVER 1 and transfer all datafiles, logfiles and the control file to SERVER 2 into the /d01/oracle/ica directory. Copy the parameter file to SERVER 2 into the /d01/oracle/dbs directory, copy all archive log files to SERVER 2 into the /d01/oracle/ica/arc1 directory, and copy the cr.sql script file to the /d01/oracle/ica directory.

5. Open the parameter file on SERVER 2 and change the following parameters:

CONTROL_FILES=/d01/oracle/ica/control.ora
BACKGROUND_DUMP_DEST=/d01/oracle/ica/bdump
USER_DUMP_DEST=/d01/oracle/ica/udump
CORE_DUMP_DEST=/d01/oracle/ica/cdump
LOG_ARCHIVE_DEST_1="location=/d01/oracle/ica/arc1"

6. Now open the cr.sql file in a text editor and change the locations like this:

CREATE CONTROLFILE REUSE DATABASE ica
   LOGFILE GROUP 1 ('/d01/oracle/ica/log1.ora'),
           GROUP 2 ('/d01/oracle/ica/log2.ora')
   DATAFILE '/d01/oracle/ica/sys.dbf' SIZE 300M,
            '/d01/oracle/ica/usr.dbf' SIZE 50M,
            '/d01/oracle/ica/rbs.dbf' SIZE 50M,
            '/d01/oracle/ica/tmp.dbf' SIZE 50M,
            '/d01/oracle/ica/sysaux.dbf' SIZE 100M
   MAXLOGFILES 50
   MAXLOGMEMBERS 3
   MAXLOGHISTORY 400
   MAXDATAFILES 200
   MAXINSTANCES 6
   ARCHIVELOG;

7. In SERVER 2 export the ORACLE_SID environment variable and start the instance:

$export ORACLE_SID=ica
$sqlplus
Enter User: / as sysdba
SQL> startup nomount

8. Run the cr.sql script to create the controlfile:

SQL> @/d01/oracle/ica/cr.sql

9. Open the database:

SQL> alter database open;

Managing the UNDO TABLESPACE

Every Oracle Database must have a method of maintaining information that is used to roll back, or undo, changes to the database. Such information consists of records of the actions of transactions, primarily before they are committed. These records are collectively referred to as undo. Undo records are used to:

• Roll back transactions when a ROLLBACK statement is issued
• Recover the database
• Provide read consistency
• Analyze data as of an earlier point in time by using Flashback Query
• Recover from logical corruptions using Flashback features

Earlier releases of Oracle Database used rollback segments to store undo. Oracle9i introduced automatic undo management, which simplifies undo space management by eliminating the complexities associated with rollback segment management. Oracle strongly recommends that you use undo tablespaces to manage undo rather than rollback segments.

Switching to Automatic Management of Undo Space

To go for automatic management of undo space, follow these steps.

1. If you have not created an undo tablespace at the time of creating the database, then create an undo tablespace by typing the following command:

SQL> create undo tablespace myundo datafile '/u01/oracle/ica/undo_tbs.dbf' size 500M autoextend on next 5M;

When the system is first running in the production environment, you may be unsure of the space requirements of the undo tablespace. In this case, you can enable automatic extension for datafiles of the undo tablespace so that they automatically increase in size when more space is needed.

2. Shutdown the database and set the following parameters in the parameter file:

UNDO_MANAGEMENT=AUTO
UNDO_TABLESPACE=myundo

3. Start the database. Now Oracle Database will use Automatic Undo Space Management.

Calculating the Space Requirements For Undo Retention

You can calculate space requirements manually using the following formula:

UndoSpace = UR * UPS + overhead

where:

• UndoSpace is the number of undo blocks
• UR is UNDO_RETENTION in seconds. This value should take into consideration long-running queries and any flashback requirements.
• UPS is undo blocks generated each second
• overhead is the small overhead for metadata (transaction tables, bitmaps, and so forth)
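The UPS term can be estimated from live statistics rather than guessed. A sketch using the standard V$UNDOSTAT columns (BEGIN_TIME and END_TIME are DATEs, so their difference is in days, hence the 86400 factor):

SQL> SELECT MAX(undoblks/((end_time - begin_time)*86400)) "UPS (undo blocks/sec)"
  2  FROM v$undostat;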

As an example, if UNDO_RETENTION is set to 3 hours, the transaction rate (UPS) is 100 undo blocks for each second, and the block size is 8K, the required undo space is computed as follows:

(3 * 3600 * 100 * 8K) = 8.24GB

To get the values for UPS and overhead, query the V$UNDOSTAT view by giving the following statement:

SQL> Select * from V$UNDOSTAT;

Altering the UNDO Tablespace

If the undo tablespace is full, you can resize existing datafiles or add new datafiles to it. The following example extends an existing datafile:

SQL> alter database datafile '/u01/oracle/ica/undo_tbs.dbf' resize 700M;

The following example adds a new datafile to the undo tablespace:

SQL> ALTER TABLESPACE myundo ADD DATAFILE '/u01/oracle/ica/undo02.dbf' SIZE 200M AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED;

Dropping an Undo Tablespace

Use the DROP TABLESPACE statement to drop an undo tablespace. The following example drops the undo tablespace myundo:

SQL> DROP TABLESPACE myundo;

An undo tablespace can only be dropped if it is not currently used by any instance. If the undo tablespace contains any outstanding transactions (for example, a transaction died but has not yet been recovered), the DROP TABLESPACE statement fails.

Switching Undo Tablespaces

You can switch from using one undo tablespace to another. Because the UNDO_TABLESPACE initialization parameter is a dynamic parameter, the ALTER SYSTEM SET statement can be used to assign a new undo tablespace. The following statement switches to a new undo tablespace:

ALTER SYSTEM SET UNDO_TABLESPACE = myundo2;

Assuming myundo is the current undo tablespace, after this command successfully executes the instance uses myundo2 in place of myundo as its undo tablespace.

Viewing Information about the Undo Tablespace

To view statistics for tuning the undo tablespace, query the following dictionary view:

SQL>select * from v$undostat;

To see how many active transactions there are, and to see undo segment information, give the following command:

SQL>select * from v$transaction;

To see the sizes of extents in the undo tablespace, give the following query:

SQL>select * from DBA_UNDO_EXTENTS;

SQL Loader

The SQL Loader utility is used to load data from other data sources into Oracle. For example, if you have a table in FOXPRO, ACCESS, SYBASE or any other third party database, you can use SQL Loader to load the data into Oracle tables. SQL Loader will only read data from flat files, so if you want to load data from FoxPro or any other database, you first have to convert that data into a delimited format flat file or a fixed length format flat file, and then use SQL Loader to load the data into Oracle.

Following is the procedure to load data from a third party database into Oracle using SQL Loader:

1. Convert the data into a flat file using a third party database command.
2. Create the table structure in the Oracle database using appropriate datatypes.
3. Write a control file describing how to interpret the flat file and the options to load the data.
4. Execute the SQL Loader utility specifying the control file in the command line argument.

To understand it better, let us see the following case study.

CASE STUDY (Loading Data from MS-ACCESS to Oracle)

Suppose you have a table in MS-ACCESS by name EMP, running under Windows O/S, with the following structure:

EMPNO   INTEGER
NAME    TEXT(50)
SAL     CURRENCY
JDATE   DATE

This table contains some 10,000 rows. Now you want to load the data from this table into an Oracle table. The Oracle database is running in LINUX O/S.

Solution Steps

1. Start MS-Access and convert the table into a comma delimited flat file (popularly known as a csv file) by clicking on the File/Save As menu. Let the delimited file name be emp.csv.

2. Now transfer this file to the Linux server using the FTP command:

a. Go to the Command Prompt in Windows.
b. At the command prompt type FTP followed by the IP address of the server running Oracle.

FTP will then prompt you for the username and password to connect to the Linux server. Supply a valid username and password of an Oracle user in Linux. For example:

C:\>ftp 200.200.100.111
Name: oracle
Password:oracle
FTP>

c. Now give the PUT command to transfer the file from the current Windows machine to the Linux machine:

FTP>put
Local file:C:\>emp.csv
remote-file:/u01/oracle/emp.csv
File transferred in 0.29 Seconds
FTP>

d. Now, after the file is transferred, quit the FTP utility by typing the bye command:

FTP>bye
Good-Bye

3. Now come to the Linux machine and create a table in Oracle with the same structure as in MS-ACCESS by taking appropriate datatypes. For example, create a table like this:

$sqlplus scott/tiger

SQL>CREATE TABLE emp (empno number(5),
    name  varchar2(50),
    sal   number(10,2),
    jdate date);

4. After creating the table, you have to write a control file describing the actions which SQL Loader should do. You can use any text editor to write the control file. Now let us write a control file for our case study:

$vi emp.ctl

1  LOAD DATA
2  INFILE '/u01/oracle/emp.csv'
3  BADFILE '/u01/oracle/emp.bad'
4  DISCARDFILE '/u01/oracle/emp.dsc'
5  INSERT INTO TABLE emp
6  FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
7  (empno,name,sal,jdate date 'mm/dd/yyyy')

Notes:
(Do not write the line numbers; they are meant for explanation purposes only.)

1. The LOAD DATA statement is required at the beginning of the control file.

2. The INFILE option specifies where the input file is located.

3. Specifying BADFILE is optional. If you specify it, then bad records found during loading will be stored in this file.

4. Specifying DISCARDFILE is optional. If you specify it, then records which do not meet a WHEN condition will be written to this file.

5. You can use any of the following loading options:

   1. INSERT: Loads rows only if the target table is empty.
   2. APPEND: Loads rows whether the target table is empty or not.
   3. REPLACE: First deletes all the rows in the existing table and then loads rows.
   4. TRUNCATE: First truncates the table and then loads rows.

6. This line indicates how the fields are separated in the input file. Since in our case the fields are separated by ",", we have specified "," as the terminating character for fields. You can replace this with any character which is used to terminate fields; some of the popularly used terminating characters are semicolon ";", colon ":", pipe "|" etc. TRAILING NULLCOLS means that if the last column is null it is treated as a null value; otherwise, SQL Loader will treat the record as bad if the last column is null.

7. In this line you specify the columns of the target table. Note how the format for date columns is specified.

5. After you have written the control file, save it and then call the SQL Loader utility by typing the following command:

$sqlldr userid=scott/tiger control=emp.ctl log=emp.log
After you have executed the above command, SQL Loader will show you output describing how many rows it has loaded. The LOG option of sqlldr specifies where the log file of this SQL Loader session should be created. The log file contains all the actions which SQL Loader has performed, i.e. how many rows were loaded, how many were rejected, how much time was taken to load the rows, and so on. You should view this file for any errors encountered while running SQL Loader.
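A few more sqlldr command line options are handy in practice. The line below is only a sketch; SKIP, ERRORS and ROWS are standard SQL Loader options (SKIP ignores a given number of leading lines, for example a header row in the csv; ERRORS sets how many bad records are tolerated before the load aborts; ROWS sets the number of rows per commit):

$sqlldr userid=scott/tiger control=emp.ctl log=emp.log skip=1 errors=50 rows=1000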

CASE STUDY (Loading Data from Fixed Length file into Oracle)

Suppose we have a fixed length format file containing employees data, as shown below, and we want to load this data into an Oracle table.
7782 CLARK      MANAGER   7839 2572.50           10
7839 KING       PRESIDENT      5500.00           10
7934 MILLER     CLERK     7782 920.00            10
7566 JONES      MANAGER   7839 3123.75           20
7499 ALLEN      SALESMAN  7698 1600.00  300.00   30
7654 MARTIN     SALESMAN  7698 1312.50  1400.00  30
7658 CHAN       ANALYST   7566 3450.00           20
7654 MARTIN     SALESMAN  7698 1312.50  1400.00  30

SOLUTION: Steps:

1. First open the file in a text editor and count the length of the fields. For example, in our fixed length file the employee number is from the 1st position to the 4th position, the employee name is from the 6th position to the 15th position, and the job name is from the 17th position to the 25th position. The other columns are located similarly.

2. Create a table in Oracle, by any name, whose columns match those specified in the fixed length file. In our case, give the following command to create the table.

SQL> CREATE TABLE emp (
       empno   NUMBER(5),
       name    VARCHAR2(20),
       job     VARCHAR2(10),
       mgr     NUMBER(5),
       sal     NUMBER(10,2),
       comm    NUMBER(10,2),
       deptno  NUMBER(3)
     );

3. After creating the table, now write a control file by using any text editor:

$vi empfix.ctl

1) LOAD DATA
2) INFILE '/u01/oracle/fix.dat'
3) INTO TABLE emp
4) (empno   POSITION(01:04) INTEGER EXTERNAL,
    name    POSITION(06:15) CHAR,
    job     POSITION(17:25) CHAR,
    mgr     POSITION(27:30) INTEGER EXTERNAL,
    sal     POSITION(32:39) DECIMAL EXTERNAL,
    comm    POSITION(41:48) DECIMAL EXTERNAL,
5)  deptno  POSITION(50:51) INTEGER EXTERNAL)

Notes: (Do not write the line numbers; they are meant for explanation purposes only.)

1. The LOAD DATA statement is required at the beginning of the control file.
2. The name of the file containing data follows the INFILE parameter.
3. The INTO TABLE statement is required to identify the table to be loaded into.
4. Lines 4 and 5 identify a column name and the location of the data in the datafile to be loaded into that column. empno, name, job and so on are names of columns in table emp. The datatypes (INTEGER EXTERNAL, CHAR, DECIMAL EXTERNAL) identify the datatype of the data fields in the file, not of the corresponding columns in the emp table.
5. Note that the set of column specifications is enclosed in parentheses.

4. After saving the control file, now start the SQL Loader utility by typing the following command:

$sqlldr userid=scott/tiger control=empfix.ctl log=empfix.log direct=y

After you have executed the above command, SQL Loader will show you output describing how many rows it has loaded.

Loading Data into Multiple Tables using a WHEN condition

You can simultaneously load data into multiple tables in the same session. You can also use a WHEN condition to load only those rows which meet a particular condition (only equal to "=" and not equal to "<>" conditions are allowed).

For example, suppose we have a fixed length file as shown below:

7782 CLARK      MANAGER   7839 2572.50           10
7839 KING       PRESIDENT      5500.00           10
7934 MILLER     CLERK     7782 920.00            10
7566 JONES      MANAGER   7839 3123.75           20
7499 ALLEN      SALESMAN  7698 1600.00  300.00   30
7654 MARTIN     SALESMAN  7698 1312.50  1400.00  30
7658 CHAN       ANALYST   7566 3450.00           20
7654 MARTIN     SALESMAN  7698 1312.50  1400.00  30

Now we want to load all the employees whose deptno is 10 into the emp1 table, and those employees whose deptno is not equal to 10 into the emp2 table. To do this, first create the tables emp1 and emp2 by taking appropriate columns and datatypes. Then write a control file as shown below:

$vi emp_multi.ctl

Load Data
infile '/u01/oracle/empfix.dat'
append
into table scott.emp1
WHEN (deptno='10 ')
(empno   POSITION(01:04) INTEGER EXTERNAL,
 name    POSITION(06:15) CHAR,
 job     POSITION(17:25) CHAR,
 mgr     POSITION(27:30) INTEGER EXTERNAL,
 sal     POSITION(32:39) DECIMAL EXTERNAL,
 comm    POSITION(41:48) DECIMAL EXTERNAL,
 deptno  POSITION(50:51) INTEGER EXTERNAL)
INTO TABLE scott.emp2
WHEN (deptno<>'10 ')
(empno   POSITION(01:04) INTEGER EXTERNAL,
 name    POSITION(06:15) CHAR,
 job     POSITION(17:25) CHAR,
 mgr     POSITION(27:30) INTEGER EXTERNAL,
 sal     POSITION(32:39) DECIMAL EXTERNAL,
 comm    POSITION(41:48) DECIMAL EXTERNAL,
 deptno  POSITION(50:51) INTEGER EXTERNAL)

After saving the file emp_multi.ctl, run sqlldr:

$sqlldr userid=scott/tiger control=emp_multi.ctl
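After the load, you can quickly verify that the WHEN conditions routed the rows as intended. This check is only a sketch, assuming emp1 and emp2 were created as described above:

$sqlplus scott/tiger
SQL> select count(*) from emp1;
SQL> select count(*) from emp2;

Together the two counts should equal the number of rows in empfix.dat that matched either condition.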

Conventional Path Load and Direct Path Load

SQL Loader can load data into the Oracle database using the Conventional Path method or the Direct Path method. You specify the method by using the DIRECT command line option. If you give DIRECT=TRUE then SQL Loader will use Direct Path loading; otherwise, if you omit this option or specify DIRECT=false, then SQL Loader will use the Conventional Path loading method.

Conventional Path

Conventional path load (the default) uses the SQL INSERT statement and a bind array buffer to load data into database tables. When SQL*Loader performs a conventional path load, it competes equally with all other processes for buffer resources. This can slow the load significantly. Extra overhead is added as SQL statements are generated, passed to Oracle, and executed. The Oracle database looks for partially filled blocks and attempts to fill them on each insert. Although appropriate during normal use, this can slow bulk loads dramatically.
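When you must stay on the conventional path (for example, because of the direct path restrictions listed below), the bind array is the main tuning knob. The line below is only a sketch; BINDSIZE and ROWS are standard sqlldr options that control the size of the bind array:

$sqlldr userid=scott/tiger control=emp.ctl log=emp.log bindsize=1048576 rows=5000

A larger bind array means fewer SQL INSERT executions per load, at the cost of more memory on the client.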

Direct Path

In Direct Path loading, Oracle will not use the SQL INSERT statement for loading rows. Instead it directly writes the rows into fresh blocks beyond the High Water Mark in the datafiles, i.e. it does not scan for free blocks before the high water mark. Direct Path load is very fast because:

• Partial blocks are not used, so no reads are needed to find them, and fewer writes are performed.
• SQL*Loader need not execute any SQL INSERT statements; therefore, the processing load on the Oracle database is reduced.
• A direct path load calls on Oracle to lock tables and indexes at the start of the load and releases them when the load is finished. A conventional path load calls Oracle once for each array of rows to process a SQL INSERT statement.
• A direct path load uses multiblock asynchronous I/O for writes to the database files. During a direct path load, processes perform their own write I/O instead of using Oracle's buffer cache. This minimizes contention with other Oracle users.

Restrictions on Using Direct Path Loads

The following conditions must be satisfied for you to use the direct path load method:

• Tables are not clustered.
• Tables to be loaded do not have any active transactions pending.
• You are not loading a parent table together with a child table.
• You are not loading BFILE columns.

Export and Import

These tools are used to transfer data from one Oracle database to another Oracle database. You use the Export tool to export data from the source database, and the Import tool to load data into the target database. When you export tables from the source database, the export tool extracts the tables and puts them into the dump file. This dump file is transferred to the target database. At the target database the Import tool will copy the data from the dump file to the target database.

The export dump file contains objects in the following order:

1. Type definitions
2. Table definitions
3. Table data
4. Table indexes
5. Integrity constraints, views, procedures, and triggers
6. Bitmap, function-based, and domain indexes

When you import the tables, the import tool will perform the actions in the following order: new tables are created, data is imported and indexes are built, integrity constraints are enabled on the new tables, any bitmap, function-based, and/or domain indexes are built, and triggers are imported. This sequence prevents data from being rejected due to the order in which tables are imported. This sequence also prevents redundant triggers from firing twice on the same data.

From Ver. 10g Oracle recommends using the Data Pump Export and Import tools, which are enhanced versions of the original Export and Import tools.

Invoking Export and Import

You can run the Export and Import tools in two modes:

Command Line Mode
Interactive Mode

When you just type exp or imp at the o/s prompt, it will run in interactive mode, i.e. these tools will prompt you for all the necessary input. If you supply command line arguments when calling exp or imp, then it will run in command line mode.

Command Line Parameters of the Export tool

You can control how Export runs by entering the EXP command followed by various arguments. To specify parameters, you use keywords:

Format:  EXP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: EXP SCOTT/TIGER GRANTS=Y TABLES=(EMP,DEPT,MGR)
         or TABLES=(T1:P1,T1:P2), if T1 is a partitioned table

Keyword      Description (Default)
--------------------------------------------------------------
USERID       username/password
BUFFER       size of data buffer
FILE         output files (EXPDAT.DMP)
COMPRESS     import into one extent (Y)
GRANTS       export grants (Y)
INDEXES      export indexes (Y)
DIRECT       direct path (N)
LOG          log file of screen output
ROWS         export data rows (Y)
CONSISTENT   cross-table consistency (N)
FULL         export entire file (N)

OWNER              list of owner usernames
TABLES             list of table names
RECORDLENGTH       length of IO record
INCTYPE            incremental export type
RECORD             track incr. export (Y)
TRIGGERS           export triggers (Y)
STATISTICS         analyze objects (ESTIMATE)
PARFILE            parameter filename
CONSTRAINTS        export constraints (Y)
OBJECT_CONSISTENT  transaction set to read only during object export (N)
FEEDBACK           display progress every x rows (0)
FILESIZE           maximum size of each dump file
FLASHBACK_SCN      SCN used to set session snapshot back to
FLASHBACK_TIME     time used to get the SCN closest to the specified time
QUERY              select clause used to export a subset of a table
RESUMABLE          suspend when a space related error is encountered (N)
RESUMABLE_NAME     text string used to identify resumable statement
RESUMABLE_TIMEOUT  wait time for RESUMABLE
TTS_FULL_CHECK     perform full or partial dependency check for TTS

TABLESPACES           list of tablespaces to export
TRANSPORT_TABLESPACE  export transportable tablespace metadata (N)
TEMPLATE              template name which invokes iAS mode export

The Export and Import tools support four modes of operation:

FULL       : Exports all the objects in all schemas
OWNER      : Exports objects only belonging to the given OWNER
TABLES     : Exports individual tables
TABLESPACE : Exports all objects located in a given TABLESPACE

Example of Exporting a Full Database

The following example shows how to export the full database:

$exp USERID=scott/tiger FULL=y FILE=myfull.dmp

In the above command, the USERID option specifies the user account to connect to the database. Note: to perform a full export the user should have the DBA or EXP_FULL_DATABASE privilege. The FULL option specifies that you want to export the full database. The FILE option specifies the name of the dump file.

Example of Exporting Schemas

To export objects stored in particular schemas, you can run the export utility with the following arguments:

$exp USERID=scott/tiger OWNER=(SCOTT,ALI) FILE=exp_own.dmp

The above command will export all the objects stored in SCOTT's and ALI's schemas.

Exporting Individual Tables

To export individual tables, give the following command:

$exp USERID=scott/tiger TABLES=(scott.emp,scott.sales) FILE=exp_tab.dmp

This will export scott's emp and sales tables.

Exporting a Consistent Image of the tables

If you include the CONSISTENT=Y option in the export command arguments, then the export utility will export a consistent image of the table, i.e. the changes which are done to the table during the export operation will not be exported.

Using the Import Utility

Objects exported by the export utility can only be imported by the Import utility. The Import utility can run in interactive mode or command line mode. You can control how Import runs by entering the IMP command followed by various arguments. To specify parameters, you use keywords:

Format:  IMP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: IMP SCOTT/TIGER IGNORE=Y TABLES=(EMP,DEPT) FULL=N
         or TABLES=(T1:P1,T1:P2), if T1 is a partitioned table

USERID must be the first parameter on the command line.

You can let Import prompt you for parameters by entering the IMP command followed by your username/password:

Example: IMP SCOTT/TIGER

Keyword                Description (Default)
--------------------------------------------------------------
USERID                 username/password
BUFFER                 size of data buffer
FILE                   input files (EXPDAT.DMP)
SHOW                   just list file contents (N)
IGNORE                 ignore create errors (N)
GRANTS                 import grants (Y)
INDEXES                import indexes (Y)
ROWS                   import data rows (Y)
LOG                    log file of screen output
FULL                   import entire file (N)
FROMUSER               list of owner usernames
TOUSER                 list of usernames
TABLES                 list of table names
RECORDLENGTH           length of IO record
INCTYPE                incremental import type
COMMIT                 commit array insert (N)
PARFILE                parameter filename
CONSTRAINTS            import constraints (Y)
DESTROY                overwrite tablespace data file (N)
INDEXFILE              write table/index info to specified file
SKIP_UNUSABLE_INDEXES  skip maintenance of unusable indexes (N)
FEEDBACK               display progress every x rows (0)
TOID_NOVALIDATE        skip validation of specified type ids
FILESIZE               maximum size of each dump file
STATISTICS             import precomputed statistics (always)
RESUMABLE              suspend when a space related error is encountered (N)
RESUMABLE_NAME         text string used to identify resumable statement
RESUMABLE_TIMEOUT      wait time for RESUMABLE
COMPILE                compile procedures, packages, and functions (Y)
STREAMS_CONFIGURATION  import streams general metadata (Y)
STREAMS_INSTANTIATION  import streams instantiation metadata (N)

Example: Importing Individual Tables

To import individual tables from a full database export dump file, give the following command:

$imp scott/tiger FILE=myfullexp.dmp FROMUSER=scott TABLES=(emp,dept)

This command will import only the emp and dept tables into the SCOTT user, and you will get output similar to that shown below.
Export file created by EXPORT:V10.00.00 via conventional path
import done in WE8DEC character set and AL16UTF16 NCHAR character set
. importing SCOTT's objects into SCOTT
. . importing table                  "DEPT"          4 rows imported
. . importing table                  "EMP"          14 rows imported

Import terminated successfully without warnings.

Example: Importing Tables of One User account into another User account

For example, suppose Ali has exported tables into a dump file mytables.dmp, and now Scott wants to import these tables. To achieve this, Scott will give the following import command:

$imp scott/tiger FILE=mytables.dmp FROMUSER=ali TOUSER=scott

The import utility will give a warning that the tables in the dump file were exported by user Ali and not by you, and will then proceed.

Example: Importing Tables Using Pattern Matching

Suppose you want to import all tables from a dump file whose names match a particular pattern. To do so, use the "%" wild character in the TABLES option. For example, the following command will import all tables whose names start with the letter "a" and those tables whose names contain the letter "d":

$imp scott/tiger FILE=myfullexp.dmp FROMUSER=scott TABLES=(a%,%d%)
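When the argument list gets long, or contains characters that the shell may mangle (such as the % wildcards above), it is often easier to place the parameters in a parameter file. The following is only a sketch using the standard PARFILE option; the file name imp_emp.par is just an example:

$vi imp_emp.par
userid=scott/tiger
file=myfullexp.dmp
fromuser=scott
tables=(a%,%d%)

$imp parfile=imp_emp.par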

Migrating a Database across platforms.

The Export and Import utilities are the only method that Oracle supports for moving an existing Oracle database from one hardware platform to another. This includes moving between UNIX and NT systems and also moving between two NT systems running on different platforms. The following steps present a general overview of how to move a database between platforms.
1. As a DBA user, issue the following SQL query to get the exact name of all tablespaces. You will need this information later in the process.

SQL> SELECT tablespace_name FROM dba_tablespaces;
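At the same time it can help to capture the datafile layout of the source database, so that you can size the new tablespaces sensibly. This extra query is only a suggestion and is not part of the original procedure; DBA_DATA_FILES is the standard dictionary view:

SQL> SELECT tablespace_name, file_name, bytes FROM dba_data_files ORDER BY tablespace_name;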
2. As a DBA user, perform a full export from the source database, for example:

> exp system/manager FULL=y FILE=myfullexp.dmp
3. Move the dump file to the target database server. If you use FTP, be sure to copy

it in binary format (by entering binary at the FTP prompt) to avoid file corruption.

4. Create a database on the target server.

5. Before importing the dump file, you must first create your tablespaces, using the information obtained in Step 1. Otherwise, the import will create the corresponding datafiles in the same file structure as at the source database, which may not be compatible with the file structure on the target system.

6. As a DBA user, perform a full import with the IGNORE parameter enabled:

> imp system/manager FULL=y IGNORE=y FILE=myfullexp.dmp
Using IGNORE=y instructs Oracle to ignore any creation errors during the import and permit the import to complete.

7. Perform a full backup of your new database.

DATA PUMP Utility

Starting with Oracle 10g, Oracle has introduced an enhanced version of the EXPORT and IMPORT utilities known as DATA PUMP. Data Pump is similar to the EXPORT and IMPORT utilities but has many advantages. Some of the advantages are:

Most Data Pump export and import operations occur on the Oracle database server, i.e. all the dump files are created on the server even if you run the Data Pump utility from a client machine. This results in increased performance because data is not transferred through the network.

You can stop and restart export and import jobs. This is particularly useful if you have started an export or import job and, after some time, you need to do other urgent work.

The ability to detach from and reattach to long-running jobs without affecting the job itself. This allows DBAs and other operations personnel to monitor jobs from multiple locations.

The ability to estimate how much space an export job would consume, without actually performing the export.

Support for an interactive-command mode that allows monitoring of, and interaction with, ongoing jobs.

Using Data Pump Export Utility

To use Data Pump, the DBA has to create a directory on the server machine and create a directory object in the database mapping to the directory created in the file system.

The following example creates a directory in the filesystem, creates a directory object in the database, and grants privileges on the directory object to the SCOTT user.

$mkdir my_dump_dir
$sqlplus
Enter User:/ as sysdba
SQL>create directory data_pump_dir as '/u01/oracle/my_dump_dir';

Now grant access on this directory object to the SCOTT user:

SQL> grant read,write on directory data_pump_dir to scott;

Example of Exporting a Full Database

To export the full database, give the following command:

$expdp scott/tiger FULL=y DIRECTORY=data_pump_dir DUMPFILE=full.dmp LOGFILE=myfullexp.log JOB_NAME=myfullJob

The above command will export the full database and will create the dump file full.dmp in the directory /u01/oracle/my_dump_dir on the server.

In some cases, where the database is in terabytes, the above command will not be feasible, since the dump file size will be larger than the operating system limit and hence the export will fail. In this situation you can create multiple dump files by typing the following command:

$expdp scott/tiger FULL=y DIRECTORY=data_pump_dir DUMPFILE=full%U.dmp FILESIZE=5G LOGFILE=myfullexp.log JOB_NAME=myfullJob

This will create multiple dump files named full01.dmp, full02.dmp, full03.dmp and so on. The FILESIZE parameter specifies how large each dump file should be.
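You can confirm that the directory object was created and points at the intended path by querying the data dictionary. This check is only a sketch; DBA_DIRECTORIES is the standard view:

SQL> SELECT directory_name, directory_path FROM dba_directories;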

Example of Exporting a Schema

To export all the objects of SCOTT's schema, you can run the following export Data Pump command:

$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT

You can omit SCHEMAS, since the default mode of Data Pump export is SCHEMAS only.

If you want to export objects of multiple schemas, you can specify the following command:

$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT,HR,ALI

Exporting Individual Tables using Data Pump Export

You can use the Data Pump Export utility to export individual tables. The following example shows the syntax to export tables:

$expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tables.dmp TABLES=employees,jobs,departments

Exporting Tables located in a Tablespace

If you want to export tables located in a particular tablespace, you can type the following command:

$expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tbs.dmp TABLESPACES=tbs_4,tbs_5,tbs_6

The above will export all the objects located in tbs_4, tbs_5 and tbs_6.

Excluding and Including Objects during Export

You can exclude objects while performing an export by using the EXCLUDE option of the Data Pump utility. For example, if you are exporting a schema and don't want to export tables whose names start with "A", then you can type the following command:

$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT EXCLUDE=TABLE:"like 'A%'"

Then all tables in Scott's schema whose names start with "A" will not be exported. Similarly you can also exclude INDEXES, CONSTRAINTS, GRANTS, USER, SCHEMA.

Similarly, you can use the INCLUDE option to export only certain objects, like this:

$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT INCLUDE=TABLE:"like 'A%'"

This is the opposite of the EXCLUDE option, i.e. it will export only those tables of Scott's schema whose names start with "A".

Using Query to Filter Rows during Export

You can use the QUERY option to export only the required rows. For example, the following will export only those rows of the employees table whose salary is above 10000 and whose dept_id is greater than 10:

expdp hr/hr QUERY=emp:'"WHERE dept_id > 10 AND sal > 10000"' NOLOGFILE=y DIRECTORY=dpump_dir1 DUMPFILE=exp1.dmp
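The quoting needed by the QUERY option is shell-dependent and easy to get wrong, so in practice the filter is often placed in a parameter file instead. The following is only a sketch; PARFILE is a standard Data Pump option and exp_query.par is just an example name:

$vi exp_query.par
directory=dpump_dir1
dumpfile=exp1.dmp
nologfile=y
query=emp:"WHERE dept_id > 10 AND sal > 10000"

$expdp hr/hr parfile=exp_query.par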

Suspending and Resuming Export Jobs (Attaching and Re-Attaching to Jobs)

You can suspend running export jobs and later resume these jobs, or kill these jobs, using Data Pump Export. You can start a job on one client machine and then, if because of some work you cannot continue, you can suspend it. Afterwards, when your work is finished, you can continue the job from the same client, or you can restart the job from another client machine.

For example, suppose a DBA starts a full database export by typing the following command at one client machine CLNT1:

$expdp scott/tiger@mydb FULL=y DIRECTORY=data_pump_dir DUMPFILE=full.dmp LOGFILE=myfullexp.log JOB_NAME=myfullJob

After some time, the DBA wants to stop this job temporarily. He presses CTRL+C to enter into interactive mode. He will then get the Export> prompt, where he can type interactive commands. Now he wants to stop this export job, so he types the following command:

Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([y]/n): y

The job is placed in a stopped state and exits the client.

After finishing his other work, the DBA wants to resume the export job, but the client machine from which he actually started the job is locked, because the user has locked his/her cabin. So the DBA goes to another client machine and reattaches to the job by typing the following command:

$expdp hr/hr@mydb ATTACH=myfulljob

After the job status is displayed, he can issue the CONTINUE_CLIENT command to resume logging mode and restart the myfulljob job:

Export> CONTINUE_CLIENT

A message is displayed that the job has been reopened, and processing status is output to the client.

Note: After reattaching to the job, a DBA can also kill the job by typing KILL_JOB if he doesn't want to continue with the export job.

Data Pump Import Utility

Objects exported by the Data Pump Export utility can be imported into a database using the Data Pump Import utility. The following describes how to use the Data Pump Import utility to import objects.

Importing a Full Dump File

If you want to import all the objects in a dump file, then you can type the following command:

$impdp hr/hr DUMPFILE=dpump_dir1:expfull.dmp FULL=y LOGFILE=dpump_dir2:full_imp.log

This example imports everything from the expfull.dmp dump file. In this example, a DIRECTORY parameter is not provided. Therefore, a directory object must be provided on both the DUMPFILE parameter and the LOGFILE parameter.

Importing Objects of One Schema to another Schema

The following example loads all tables belonging to the hr schema to the scott schema:

$impdp SYSTEM/password DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp REMAP_SCHEMA=hr:scott

If the SCOTT account exists in the database, then the hr objects will be loaded into the scott schema. If the scott account does not exist, then the Import utility will create the SCOTT account with an unusable password, because the dump file was exported by the user SYSTEM and imported by the user SYSTEM, who has DBA privileges.

Loading Objects of one Tablespace to another Tablespace

You can use the REMAP_TABLESPACE option to import objects of one tablespace into another tablespace, by giving the following command:

$impdp SYSTEM/password DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp REMAP_TABLESPACE=users:sales

The above example loads tables stored in the users tablespace into the sales tablespace.

Generating a SQL File containing DDL commands using Data Pump Import

You can generate a SQL file which contains all the DDL commands which Import would have executed if you had actually run the Import utility. The following is an example of using the SQLFILE parameter:

$impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql

A SQL file named expfull.sql is written to dpump_dir2.

Importing objects of only a Particular Schema

If you have the IMP_FULL_DATABASE role, you can use the SCHEMAS parameter to perform a schema-mode import by specifying a single schema other than your own, or a list of schemas to import. First, the schemas themselves are created (if they do not already exist), including system and role grants, password history, and so on. Then all objects contained

within the schemas are imported. Nonprivileged users can specify only their own schemas. In that case, no information about the schema definition is imported, only the objects contained within it.

The following is an example of using the SCHEMAS parameter:

$impdp hr/hr SCHEMAS=hr,oe DIRECTORY=dpump_dir1 LOGFILE=schemas.log DUMPFILE=expdat.dmp

The hr and oe schemas are imported from the expdat.dmp file. The log file, schemas.log, is written to dpump_dir1. You can create the expdat.dmp file used in this example by running the example provided for the Export SCHEMAS parameter.

Importing Only Particular Tables

The following example shows a simple use of the TABLES parameter to import only the employees and jobs tables from the expfull.dmp file:

$impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp TABLES=employees,jobs

This will import only the employees and jobs tables from the dump file. You can create the expfull.dmp dump file used in this example by running the example provided for the Full Database Export in the previous topic.

Running the Import Utility in Interactive Mode

Similar to the Data Pump Export utility, Data Pump Import jobs can also be suspended, resumed or killed, and you can attach to an already existing import job from any client machine.

For example, suppose a DBA starts an import by typing the following command at one client machine CLNT1:

$impdp scott/tiger@mydb FULL=y DIRECTORY=data_pump_dir DUMPFILE=full.dmp LOGFILE=myfullexp.log JOB_NAME=myfullJob

After some time, the DBA wants to stop this job temporarily. He presses CTRL+C to enter into interactive mode. He will then get the Import> prompt, where he can type interactive commands. Now he wants to stop this import job, so he types the following command:

Import> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([y]/n): y

The job is placed in a stopped state and exits the client.

After finishing his other work, the DBA wants to resume the import job, but the client machine from which he actually started the job is locked, because the user has locked his/her cabin. So the DBA goes to another client machine and reattaches to the job by typing the following command:

$impdp hr/hr@mydb ATTACH=myfulljob

After the job status is displayed, he can issue the CONTINUE_CLIENT command to resume logging mode and restart the myfulljob job:

Import> CONTINUE_CLIENT

A message is displayed that the job has been reopened, and processing status is output to the client.

Note: After reattaching to the job, a DBA can also kill the job by typing KILL_JOB if he doesn't want to continue with the import job.

Flash Back Features

From Oracle Ver. 9i Oracle has introduced the Flashback Query feature. It is useful to recover from accidental statement failures. For example, suppose a user accidentally deletes rows from a table and commits them; then, using flashback query, he can get back the rows.

The Flashback feature depends upon how much undo retention time you have specified. If you have set the UNDO_RETENTION parameter to 2 hours, then Oracle will not overwrite the data in the undo tablespace even after it is committed, until 2 hours have passed. Users can recover from their mistakes made during the last 2 hours only.

For example, suppose John gives a delete statement at 10 AM and commits it. After 1 hour he realizes that the delete statement was performed by mistake. Now he can give a flashback AS OF query to get back the deleted rows like this:

Flashback Query

SQL>select * from emp as of timestamp sysdate-1/24;

Or

SQL> SELECT * FROM emp AS OF TIMESTAMP TO_TIMESTAMP('2007-06-07 10:00:00', 'YYYY-MM-DD HH:MI:SS');

To insert the accidentally deleted rows back into the table, he can type:

SQL> insert into emp (select * from emp as of timestamp sysdate-1/24);

Using Flashback Version Query

You use a Flashback Version Query to retrieve the different versions of specific rows that existed during a given time interval. A new row version is created whenever a COMMIT statement is executed. The Flashback Version Query returns a table with a row for each version of the row that existed at any time during the time interval you specify.

Each row in the table includes pseudocolumns of metadata about the row version. The pseudocolumns available are:

VERSIONS_XID        : Identifier of the transaction that created the row version
VERSIONS_OPERATION  : Operation performed. I for Insert, U for Update, D for Delete
VERSIONS_STARTSCN   : Starting System Change Number when the row version was created
VERSIONS_STARTTIME  : Starting System Change Time when the row version was created
VERSIONS_ENDSCN     : SCN when the row version expired
VERSIONS_ENDTIME    : Timestamp when the row version expired

To understand this, let us see the following example. Before starting this example, let us collect the timestamp:

SQL> select to_char(SYSTIMESTAMP,'YYYY-MM-DD HH:MI:SS') from dual;

TO_CHAR(SYSTIMESTAMP,'YYYY
--------------------------
2007-06-19 20:30:43

Suppose a user creates an emp table, inserts a row into it and commits it. At this time the emp table has one version of one row.

SQL> Create table emp (empno number(5), name varchar2(20), sal number(10,2));

SQL> insert into emp values (101,'Sami',5000);

SQL> commit;

Now a user sitting at another machine erroneously changes the salary from 5000 to 2000 using an Update statement:

SQL> update emp set sal=sal-3000 where empno=101;

SQL> commit;

Subsequently, a new transaction updates the name of the employee from Sami to Smith:

SQL> update emp set name='Smith' where empno=101;

SQL> commit;

At this point, the DBA detects the application error and needs to diagnose the problem. The DBA issues the following query to retrieve versions of the rows in the emp table that correspond to empno 101. The query uses Flashback Version Query pseudocolumns:

SQL> Connect / as sysdba
SQL> column versions_starttime format a16
SQL> column versions_endtime format a16
SQL> set linesize 120

SQL> select versions_xid, versions_starttime, versions_endtime, versions_operation, empno, name, sal
     from emp versions between timestamp
          to_timestamp('2007-06-19 20:30:00','yyyy-mm-dd hh:mi:ss')
      and to_timestamp('2007-06-19 21:00:00','yyyy-mm-dd hh:mi:ss');

VERSION_XID  STARTSCN  ENDSCN  V  EMPNO  NAME    SAL
-----------  --------  ------  -  -----  ------  ----
0200100020D  11323             U  101    SMITH   2000
02001003C02  11345             U  101    SAMI    2000

0002302C03A  12320             I  101    SAMI    5000

The output should be read from bottom to top. From the output we can see that an insert took place, then the erroneous update took place, and then another update took place to change the name.

The DBA identifies the transaction 02001003C02 as erroneous and issues the following query to get the SQL command to undo the change:

SQL> select operation, logon_user, undo_sql from flashback_transaction_query where xid=HEXTORAW('02001003C02');

OPERATION  LOGON_USER  UNDO_SQL
---------  ----------  ------------------------------------------------
U          SCOTT       update emp set sal=5000 where ROWID = 'AAAKD2AABAAAJ29AAA'

Now the DBA can execute the command to undo the changes made by the user:

SQL> update emp set sal=5000 where ROWID ='AAAKD2AABAAAJ29AAA';

1 row updated

Using Flashback Table to return a Table to a Past State

Oracle Flashback Table provides the DBA the ability to recover a table, or a set of tables, to a specified point in time in the past very quickly, easily, and without taking any part of the database offline. In many cases, Flashback Table eliminates the need to perform more complicated point-in-time recovery operations.

Flashback Table uses information in the undo tablespace to restore the table. The UNDO_RETENTION parameter is significant in flashing back tables to a past state: you can only flash back tables up to the retention time you specified.

Row movement must be enabled on the table for which you are issuing the FLASHBACK TABLE statement. You can enable row movement with the following SQL statement:

ALTER TABLE table ENABLE ROW MOVEMENT;

The following example performs a FLASHBACK TABLE operation on the table emp:

FLASHBACK TABLE emp TO TIMESTAMP TO_TIMESTAMP('2007-06-19 09:30:00','YYYY-MM-DD HH:MI:SS') ENABLE TRIGGERS;

The emp table is restored to its state as of the time specified by the timestamp. You have to give the ENABLE TRIGGERS option, otherwise by default all database triggers on the table will be disabled.

Example:

At 17:00 an HR administrator discovers that an employee "JOHN" is missing from the EMPLOYEE table. This employee was present at 14:00, the last time she ran a report. Someone accidentally deleted the record for "JOHN" between 14:00 and the present time. She uses Flashback Table to return the table to its state at 14:00, as shown in this example:

FLASHBACK TABLE EMPLOYEES TO TIMESTAMP TO_TIMESTAMP('2007-06-21 14:00:00','YYYY-MM-DD HH24:MI:SS') ENABLE TRIGGERS;
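If you are not sure whether row movement is already enabled on a table, you can check before issuing the FLASHBACK TABLE statement. This query is only a sketch, using the standard ROW_MOVEMENT column of USER_TABLES:

SQL> SELECT table_name, row_movement FROM user_tables WHERE table_name = 'EMP';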

Recovering Dropped Tables (Undo Drop Table)

In Oracle Ver. 10g Oracle introduced the concept of the Recycle Bin, i.e. for whatever tables you drop, the database does not immediately remove the space used by the table. Instead, the table is renamed and placed in the Recycle Bin. This feature is not dependent on the UNDO TABLESPACE, so the UNDO_RETENTION parameter has no impact on it.

For example, suppose a user accidentally drops the emp table:

SQL>drop table emp;

Table Dropped

To the user it now appears that the table is dropped, but it is actually renamed and placed in the Recycle Bin. To recover this dropped table, a user can type the command:

SQL> Flashback table emp to before drop;

The FLASHBACK TABLE...BEFORE DROP command will restore the table. You can also restore the dropped table giving it a different name, like this:

SQL> Flashback table emp to before drop rename to emp2;

Purging Objects from the Recycle Bin

If you want to recover the space used by a dropped table, give the following command:

SQL> purge table emp;

If you want to purge objects of the logon user, give the following command:

SQL> purge recyclebin;
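Before purging, you can see what is sitting in your recycle bin, including the system-generated BIN$... names. This query is only a sketch, using the standard USER_RECYCLEBIN view:

SQL> SELECT object_name, original_name, type, droptime FROM user_recyclebin;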

If you want to recover space for dropped objects of a particular tablespace, give the command:

SQL> purge tablespace hr;

You can also purge only objects from a tablespace belonging to a specific user, using the following form of the command:

SQL>PURGE TABLESPACE hr USER scott;

If you have the SYSDBA privilege, then you can purge all objects from the recycle bin, regardless of which user owns the objects, using this command:

SQL>PURGE DBA_RECYCLEBIN;

To view the contents of the Recycle Bin, give the following command:

SQL> show recyclebin;

Permanently Dropping Tables

If you want to permanently drop tables without putting them into the Recycle Bin, drop the tables with the purge command, like this:

SQL> drop table emp purge;

This will drop the table permanently and it cannot be restored.

Flashback Drop of Multiple Objects With the Same Original Name

You can create, and then drop, several objects with the same original name, and they will all be stored in the recycle bin. For example, consider these SQL statements:

CREATE TABLE EMP ( ...columns );   # EMP version 1
DROP TABLE EMP;
CREATE TABLE EMP ( ...columns );   # EMP version 2
DROP TABLE EMP;

CREATE TABLE EMP ( ...columns );   # EMP version 3
DROP TABLE EMP;

In such a case, each table EMP is assigned a unique name in the recycle bin when it is dropped. You can use a FLASHBACK TABLE... TO BEFORE DROP statement with the original name of the table, as shown in this example:

FLASHBACK TABLE EMP TO BEFORE DROP;

The most recently dropped table with that original name is retrieved from the recycle bin, with its original name. You can also retrieve it and assign it a new name using a RENAME TO clause. The following example shows the retrieval from the recycle bin of all three dropped EMP tables from the previous example, with each assigned a new name:

FLASHBACK TABLE EMP TO BEFORE DROP RENAME TO EMP_VER_3;
FLASHBACK TABLE EMP TO BEFORE DROP RENAME TO EMP_VER_2;
FLASHBACK TABLE EMP TO BEFORE DROP RENAME TO EMP_VER_1;

Important Points:

1. There is no fixed amount of space allocated to the recycle bin, and no guarantee as to how long dropped objects remain in the recycle bin. Depending upon system activity, a dropped object may remain in the recycle bin for seconds, or for months.

2. There is no guarantee that objects will remain in the Recycle Bin. Oracle might empty the recycle bin whenever space pressure occurs, i.e. whenever the tablespace becomes full and a transaction requires new extents; in that case Oracle will delete objects from the recycle bin.

3. A table and all of its dependent objects (indexes, LOB segments, nested tables, triggers, constraints and so on) go into the recycle bin together when you drop the table. Likewise, when you perform Flashback Drop, the objects are generally all retrieved together.

Flashback Database: Alternative to Point-In-Time Recovery

Oracle Flashback Database lets you quickly recover the entire database from logical data corruptions or user errors. To enable Flashback Database, you set up a flash recovery area and set a flashback retention target, to specify how far back into the past you want to be able to restore your database with Flashback Database. Once you set these parameters, the database copies images of each altered block in every datafile into flashback logs stored in the flash recovery area, at regular intervals. These flashback logs are used to flash the database back to a point in time.

Enabling Flashback Database

Step 1. Shutdown the database if it is already running and set the following parameters:

DB_RECOVERY_FILE_DEST=/d01/ica/flasharea
DB_RECOVERY_FILE_DEST_SIZE=10G
DB_FLASHBACK_RETENTION_TARGET=4320

(Note: DB_FLASHBACK_RETENTION_TARGET is specified in minutes; here we have specified 3 days, i.e. 3x24x60=4320.)

Step 2. Start the instance and mount the database:

SQL>startup mount;

Step 3. Now enable the flashback database by giving the following command:

SQL>alter database flashback on;

From now on, Oracle writes flashback logs to the recovery area.
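You can verify that the feature is actually enabled before relying on it. This check is only a sketch; FLASHBACK_ON is a standard column of V$DATABASE:

SQL> SELECT flashback_on FROM v$database;

It should return YES.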

To find out how large the flash recovery area should be, run the following query after you have enabled the Flashback Database feature and allowed the database to generate some flashback logs:

SQL> SELECT ESTIMATED_FLASHBACK_SIZE FROM V$FLASHBACK_DATABASE_LOG;

This will show what size the recovery area should be set to.

How far back you can flash back the database

To determine the earliest SCN and the earliest time you can flash your database back to, give the following query:

SELECT OLDEST_FLASHBACK_SCN, OLDEST_FLASHBACK_TIME FROM V$FLASHBACK_DATABASE_LOG;

Example: Flashing Back the Database to a point in time

Suppose a user erroneously drops a schema at 10:00AM and you as a DBA come to know of this at 5PM. Since you have configured the flashback area and set the flashback retention time to 3 days, you can flash back the database to 9:59AM by following the given procedure:

1. Start RMAN:

$rman target /

2. Run the FLASHBACK DATABASE command to return the database to 9:59AM, by typing the following command:

RMAN> FLASHBACK DATABASE TO TIME timestamp('2007-06-21 09:59:00');

Or you can also type this command:

RMAN> FLASHBACK DATABASE TO TIME (SYSDATE-8/24);

3. When the Flashback Database operation completes, you can evaluate the results by opening the database read-only and running some queries to check whether your Flashback Database has returned the database to the desired state:

RMAN> SQL 'ALTER DATABASE OPEN READ ONLY';

At this time, you have several options.

Option 1: If you are content with your result, you can open the database by performing ALTER DATABASE OPEN RESETLOGS:

SQL>ALTER DATABASE OPEN RESETLOGS;

Option 2: If you discover that you have chosen the wrong target time for your Flashback Database operation, you can use RECOVER DATABASE UNTIL to bring the database forward, or perform FLASHBACK DATABASE again with an SCN further in the past. You can completely undo the effects of your flashback operation by performing complete recovery of the database:

RMAN> RECOVER DATABASE;

Option 3: If you only want to retrieve some lost data from the past time, you can open the database read-only, perform a logical export of the data using an Oracle export utility, then run RECOVER DATABASE to return the database to the present time and reimport the data using the Oracle import utility.

Since in our example only a schema was dropped and the rest of the database is good, the third option is relevant for us.

4. Now come out of RMAN and run the EXPORT utility to export the whole schema:

$exp userid=system/manager file=scott.dmp owner=SCOTT

5. Now start RMAN and recover the database to the present time:

$rman target /
RMAN> RECOVER DATABASE;

6. After the database is recovered, shutdown and restart the database in normal mode, and import the schema by running the IMPORT utility:

$imp userid=system/manager file=scott.dmp

Log Miner

Using the Log Miner utility, you can query the contents of online redo log files and archived log files. Because LogMiner provides a well-defined, easy-to-use, and comprehensive relational interface to redo log files, it can be used as a powerful data audit tool, as well as a tool for sophisticated data analysis.

LogMiner Configuration

There are three basic objects in a LogMiner configuration that you should be familiar with: the source database, the LogMiner dictionary, and the redo log files containing the data of interest.

• The source database is the database that produces all the redo log files that you want LogMiner to analyze.

• The redo log files contain the changes made to the database or database dictionary.

• The LogMiner dictionary allows LogMiner to provide table and column names, instead of internal object IDs, when it presents the redo log data that you request. LogMiner uses the dictionary to translate internal object identifiers and datatypes to object names and external data formats. Without a dictionary, LogMiner returns internal object IDs and presents data as binary data.

For example, consider the following SQL statement:

INSERT INTO HR.JOBS(JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY) VALUES('IT_WT','Technical Writer', 4000, 11000);

Without the dictionary, LogMiner will display:

insert into "UNKNOWN"."OBJ# 45522"("COL 1","COL 2","COL 3","COL 4") values (HEXTORAW('45465f4748'),HEXTORAW('546563686e6963616c20577269746572'),HEXTORAW('c229'),HEXTORAW('c3020b'));
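Note that, per the Oracle documentation, LogMiner reconstructs row-level redo and undo most reliably when at least minimal supplemental logging is enabled before the redo of interest is generated. As a sketch:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL> SELECT supplemental_log_data_min FROM v$database;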

LogMiner Dictionary Options

LogMiner requires a dictionary to translate object IDs into object names when it returns redo data to you. LogMiner gives you three options for supplying the dictionary:

• Using the Online Catalog
Oracle recommends that you use this option when you will have access to the source database from which the redo log files were created, and when no changes to the column definitions in the tables of interest are anticipated. This is the most efficient and easy-to-use option.

• Extracting a LogMiner Dictionary to the Redo Log Files
Oracle recommends that you use this option when you do not expect to have access to the source database from which the redo log files were created, or if you anticipate that changes will be made to the column definitions in the tables of interest.

• Extracting the LogMiner Dictionary to a Flat File
This option is maintained for backward compatibility with previous releases. It does not guarantee transactional consistency. Oracle recommends that you use either the online catalog or extract the dictionary from the redo log files instead.

Using the Online Catalog

To direct LogMiner to use the dictionary currently in use for the database, specify the online catalog as your dictionary source when you start LogMiner, as follows:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

Extracting a LogMiner Dictionary to the Redo Log Files

To extract a LogMiner dictionary to the redo log files, the database must be open and in ARCHIVELOG mode and archiving must be enabled. While the dictionary is being extracted to the redo log stream, no DDL statements can be executed.

Therefore, the dictionary extracted to the redo log files is guaranteed to be consistent (whereas the dictionary extracted to a flat file is not).

To extract dictionary information to the redo log files, use the DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_REDO_LOGS option. Do not specify a filename or location.

SQL> EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

Extracting the LogMiner Dictionary to a Flat File

When the LogMiner dictionary is in a flat file, fewer system resources are used than when it is contained in the redo log files. Oracle recommends that you regularly back up the dictionary extract to ensure correct analysis of older redo log files.

1. Set the initialization parameter UTL_FILE_DIR in the initialization parameter file. For example, to set UTL_FILE_DIR to use /oracle/database as the directory where the dictionary file is placed, enter the following in the initialization parameter file:

UTL_FILE_DIR = /oracle/database

2. Start the database:

SQL> startup

3. Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD. Specify a filename for the dictionary and a directory path name for the file. This procedure creates the dictionary file. For example, enter the following to create the file dictionary.ora in /oracle/database:

SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora','/oracle/database/', DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);

Redo Log File Options

To mine data in the redo log files, LogMiner needs information about which redo log files to mine. You can direct LogMiner to automatically and dynamically create a list of redo log files to analyze, or you can explicitly specify a list of redo log files for LogMiner to analyze, as follows:

Automatically
If LogMiner is being used on the source database, then you can direct LogMiner to find and create a list of redo log files for analysis automatically. Use the CONTINUOUS_MINE option when you start LogMiner.

Manually
Use the DBMS_LOGMNR.ADD_LOGFILE procedure to manually create a list of redo log files before you start LogMiner. After the first redo log file has been added to the list, each subsequently added redo log file must be from the same database and associated with the same database RESETLOGS SCN. When using this method, LogMiner need not be connected to the source database.

Example: Finding All Modifications in the Current Redo Log File

The easiest way to examine the modification history of a database is to mine at the source database and use the online catalog to translate the redo log files. This example shows how to do the simplest analysis using LogMiner.

Step 1 Specify the list of redo log files to be analyzed.

Specify the redo log files which you want to analyze:

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/usr/oracle/ica/log1.ora', OPTIONS => DBMS_LOGMNR.NEW);

"HIRE_DATE"."FIRST_NAME". 'OE').DICT_FROM_ONLINE_CATALOG). SQL> EXECUTE DBMS_LOGMNR.05'. and TO_DATE('10-jan-2003 13:34:43'. "JOB_ID".' || XIDSQN) AS XID. Step 3 Query the V$LOGMNR_CONTENTS view.'Mohammed'. (XIDUSN || '."SALARY".SQL> EXECUTE DBMS_LOGMNR. and two were not). SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE username IN ('HR'. Note that there are four transactions (two of them were committed within the redo log file being analyzed. OPTIONS => DBMS_LOGMNR.SQL_REDO.START_LOGMNR( OPTIONS => DBMS_LOGMNR. "PHONE_NUMBER"."EMPLOYEES" '306' 'Mohammed' 'Sami' '1234567890' "DEPARTMENT_ID") values TO_DATE('10-JAN-2003 ('306'.ADD_LOGFILE( LOGFILENAME => '/u01/oracle/ica/log2. '. The output shows the DML statements in the order in which they were executed."EMAIL".11. 'dd-monand "JOB_ID" = 'HR_REP' "SALARY" = '120000' and "COMMISSION_PCT" = "DEPARTMENT_ID" = '10' HR 1.'120000'.ADDFILE). and . '."MANAGER_ID". yyyy hh24:mi:ss') 'MDSAMI'. Start LogMiner and specify the dictionary to use.'Sami'. Step 2 Start LogMiner. 'dd-mon-yyyy hh24:mi:ss'). insert into "HR".' || XIDSLT || '."EMPLOYEES"( "EMPLOYEE_ID". USR ---HR XID --------1.1476 SQL_REDO SQL_UNDO ---------------------------------------------------set transaction read write. SQL> SELECT username AS USR.ora'. delete from where "EMPLOYEE_ID" = and "FIRST_NAME" = and "LAST_NAME" = and "EMAIL" = 'MDSAMI' and "PHONE_NUMBER" = and "HIRE_DATE" = 13:34:43'. '1234567890'.1476 "HR".05' and 'HR_REP'.11. "LAST_NAME". "COMMISSION_PCT". thus transactions interleave among themselves.

USR: HR   XID: 1.11.1484
SQL_REDO: set transaction read write;

USR: HR   XID: 1.11.1484
SQL_REDO: insert into "HR"."EMPLOYEES"("EMPLOYEE_ID","FIRST_NAME","LAST_NAME","EMAIL","PHONE_NUMBER","HIRE_DATE","JOB_ID","SALARY","COMMISSION_PCT","MANAGER_ID","DEPARTMENT_ID") values ('307','John','Silver','JSILVER','5551112222',TO_DATE('10-jan-2003 13:41:03','dd-mon-yyyy hh24:mi:ss'),'SH_CLERK','110000','.05','105','50');
SQL_UNDO: delete from "HR"."EMPLOYEES" where "EMPLOYEE_ID" = '307' and "FIRST_NAME" = 'John' and "LAST_NAME" = 'Silver' and "EMAIL" = 'JSILVER' and "PHONE_NUMBER" = '5551112222' and "HIRE_DATE" = TO_DATE('10-jan-2003 13:41:03','dd-mon-yyyy hh24:mi:ss') and "JOB_ID" = 'SH_CLERK' and "SALARY" = '110000' and "COMMISSION_PCT" = '.05' and "MANAGER_ID" = '105' and "DEPARTMENT_ID" = '50' and ROWID = 'AAAHSkAABAAAY6rAAP';

USR: OE   XID: 1.1.1484
SQL_REDO: set transaction read write;

USR: OE   XID: 1.1.1484
SQL_REDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') where "PRODUCT_ID" = '1799' and "WARRANTY_PERIOD" = TO_YMINTERVAL('+01-00') and ROWID = 'AAAHTKAABAAAY9mAAB';
SQL_UNDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" = TO_YMINTERVAL('+01-00') where "PRODUCT_ID" = '1799' and "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and ROWID = 'AAAHTKAABAAAY9mAAB';

USR: OE   XID: 1.1.1484
SQL_REDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') where "PRODUCT_ID" = '1801' and "WARRANTY_PERIOD" = TO_YMINTERVAL('+01-00') and ROWID = 'AAAHTKAABAAAY9mAAC';
SQL_UNDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" = TO_YMINTERVAL('+01-00') where "PRODUCT_ID" = '1801' and "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and ROWID = 'AAAHTKAABAAAY9mAAC';

USR: HR   XID: 1.11.1476
SQL_REDO: commit;

USR: OE   XID: 1.1.1484
SQL_REDO: commit;

and "SALARY"= '12000' 'AC_MGR'."FIRST_NAME"."EMAIL".8080 '. 'dd-mon-yyyy hh24:mi:ss') TO_DATE('07-jun-1994 10:05:01'."EMPLOYEES"( where "EMPLOYEE_ID" = '205' and "EMPLOYEE_ID".8. Example of Mining Without Specifying the List of Redo Log Files Explicitly The previous example explicitly specified the redo log file or files to be mined.'Shelley'. and "JOB_ID" = 'AC_MGR' 'dd-monyyyy hh24:mi:ss').1484 update "OE".END_LOGMNR(). set "WARRANTY_PERIOD" TO_YMINTERVAL('+20"PRODUCT_ID" = '2350' "WARRANTY_PERIOD" = TO_YMINTERVAL('+20ROWID Step 4 End the LogMiner session."PRODUCT_INFORMATION" update "OE". if you are mining in the same database that generated the redo log files. "PHONE_NUMBER" = ' 515.NULL. HR 1."SALARY". '07-jun-1994 10:05:01'.'12000'.15.1476 commit. and "COMMISSION_PCT" IS NULL and "MANAGER_ID" = '101' and "DEPARTMENT_ID" = '110' and ROWID = 'AAAHSkAABAAAY6rAAM'.'101'. "EMAIL" = 'SHIGGINS' and "COMMISSION_PCT".1484 set transaction read write."MANAGER_ID". "FIRST_NAME" = 'Shelley' and "LAST_NAME".11."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" = = TO_YMINTERVAL('+12-06') where 00') where "PRODUCT_ID" = '2350' and and "WARRANTY_PERIOD" = TO_YMINTERVAL('+20-00') and 00') and ROWID = 'AAAHTKAABAAAY9tAAD'. "JOB_ID"."EMPLOYEES" insert into "HR".8.HR 1. OE 1. and 'SHIGGINS'.'110').' 515.'Higgins'.123.123.8080 ' "DEPARTMENT_ID") values and "HIRE_DATE" = TO_DATE( ('205'. ='AAAHTKAABAAAY9tAAD'. OE 1. SQL> EXECUTE DBMS_LOGMNR. However."PHONE_NUMBER".1481 delete from "HR". "LAST_NAME" = 'Higgins' and "HIRE_DATE". then you can mine the appropriate list of redo log files .

Example of Mining Without Specifying the List of Redo Log Files Explicitly

The previous example explicitly specified the redo log file or files to be mined. However, if you are mining in the same database that generated the redo log files, then you can mine the appropriate list of redo log files by just specifying the time (or SCN) range of interest. To mine a set of redo log files without explicitly specifying them, use the DBMS_LOGMNR.CONTINUOUS_MINE option to the DBMS_LOGMNR.START_LOGMNR procedure, and specify either a time range or an SCN range of interest.

Example: Mining Redo Log Files in a Given Time Range

This example assumes that you want to use the data dictionary extracted to the redo log files.

Step 1 Determine the timestamp of the redo log file that contains the start of the data dictionary.

SQL> SELECT NAME, FIRST_TIME FROM V$ARCHIVED_LOG
      WHERE SEQUENCE# = (SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG
                          WHERE DICTIONARY_BEGIN = 'YES');

NAME                                           FIRST_TIME
--------------------------------------------   --------------------
/usr/oracle/data/db1arch_1_207_482701534.dbf   10-jan-2003 12:01:34

Step 2 Display all the redo log files that have been generated so far.

This step is not required, but is included to demonstrate that the CONTINUOUS_MINE option works as expected, as will be shown in Step 4.

SQL> SELECT FILENAME name FROM V$LOGMNR_LOGS
      WHERE LOW_TIME > '10-jan-2003 12:01:34';

NAME
------------------------------------------------------
/usr/oracle/data/db1arch_1_207_482701534.dbf
/usr/oracle/data/db1arch_1_208_482701534.dbf
/usr/oracle/data/db1arch_1_209_482701534.dbf
/usr/oracle/data/db1arch_1_210_482701534.dbf
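Note that these queries compare date columns against string literals, which works only if the literals match the session's NLS_DATE_FORMAT. If your session uses a different format, you can set it first (a session-level sketch, not part of the original example):

SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'dd-mon-yyyy hh24:mi:ss';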

Step 3 Start LogMiner.

Start LogMiner by specifying the dictionary to use and the COMMITTED_DATA_ONLY, PRINT_PRETTY_SQL, and CONTINUOUS_MINE options.

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
       STARTTIME => '10-jan-2003 12:01:34', -
       ENDTIME => SYSDATE, -
       OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + -
                  DBMS_LOGMNR.COMMITTED_DATA_ONLY + -
                  DBMS_LOGMNR.PRINT_PRETTY_SQL + -
                  DBMS_LOGMNR.CONTINUOUS_MINE);

Step 4 Query the V$LOGMNR_LOGS view.

This step shows that the DBMS_LOGMNR.START_LOGMNR procedure with the CONTINUOUS_MINE option includes all of the redo log files that have been generated so far, as expected. (Compare the output in this step to the output in Step 2.)

SQL> SELECT FILENAME name FROM V$LOGMNR_LOGS;

NAME
------------------------------------------------------
/usr/oracle/data/db1arch_1_207_482701534.dbf
/usr/oracle/data/db1arch_1_208_482701534.dbf
/usr/oracle/data/db1arch_1_209_482701534.dbf
/usr/oracle/data/db1arch_1_210_482701534.dbf
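START_LOGMNR (Step 3) also accepts an SCN range in place of a time range. A minimal sketch of that variant follows; the SCN values here are invented purely for illustration and would in practice come from your own V$ARCHIVED_LOG or V$DATABASE data:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
       STARTSCN => 621047, -
       ENDSCN => 625057, -
       OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + -
                  DBMS_LOGMNR.COMMITTED_DATA_ONLY + -
                  DBMS_LOGMNR.CONTINUOUS_MINE);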

Step 5 Query the V$LOGMNR_CONTENTS view.

To reduce the number of rows returned by the query, exclude all DML statements done in the SYS or SYSTEM schema. (This query specifies a timestamp to exclude transactions that were involved in the dictionary extraction.) Note that all reconstructed SQL statements returned by the query are correctly translated.

SQL> SELECT USERNAME AS usr,
            (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,
            SQL_REDO
       FROM V$LOGMNR_CONTENTS
      WHERE SEG_OWNER IS NULL OR SEG_OWNER NOT IN ('SYS', 'SYSTEM')
        AND TIMESTAMP > '10-jan-2003 15:59:53';

USR    XID        SQL_REDO
-----  ---------  ------------------------------------------------------------
SYS    1.2.1594   set transaction read write;

SYS    1.2.1594   create table oe.product_tracking (product_id number not null,
                  modified_time date,
                  old_list_price number(8,2),
                  old_warranty_period interval year(2) to month);

SYS    1.2.1594   commit;

SYS    1.18.1602  set transaction read write;

SYS    1.18.1602  create or replace trigger oe.product_tracking_trigger
                  before update on oe.product_information
                  for each row
                  when (new.list_price <> old.list_price or
                        new.warranty_period <> old.warranty_period)
                  declare
                  begin
                    insert into oe.product_tracking values
                    (:old.product_id, sysdate,
                     :old.list_price, :old.warranty_period);
                  end;

SYS    1.18.1602  commit;

OE     1.9.1598   set transaction read write;

OE     1.9.1598   update "OE"."PRODUCT_INFORMATION"
                    set
                      "WARRANTY_PERIOD" = TO_YMINTERVAL('+08-00'),
                      "LIST_PRICE" = 100
                    where
                      "PRODUCT_ID" = 1729 and
                      "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and
                      "LIST_PRICE" = 80 and
                      ROWID = 'AAAHTKAABAAAY9yAAA';

OE     1.9.1598   insert into "OE"."PRODUCT_TRACKING"
                    values
                      "PRODUCT_ID" = 1729,
                      "MODIFIED_TIME" = TO_DATE('13-jan-2003 16:07:03',
                      'dd-mon-yyyy hh24:mi:ss'),
                      "OLD_LIST_PRICE" = 80,
                      "OLD_WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00');

OE     1.9.1598   update "OE"."PRODUCT_INFORMATION"
                    set
                      "WARRANTY_PERIOD" = TO_YMINTERVAL('+08-00'),
                      "LIST_PRICE" = 92
                    where
                      "PRODUCT_ID" = 2340 and
                      "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and
                      "LIST_PRICE" = 72 and
                      ROWID = 'AAAHTKAABAAAY9zAAA';

OE     1.9.1598   insert into "OE"."PRODUCT_TRACKING"
                    values
                      "PRODUCT_ID" = 2340,
                      "MODIFIED_TIME" = TO_DATE('13-jan-2003 16:07:07',
                      'dd-mon-yyyy hh24:mi:ss'),
                      "OLD_LIST_PRICE" = 72,
                      "OLD_WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00');

OE     1.9.1598   commit;

Step 6 End the LogMiner session.

SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

BACKUP AND RECOVERY

Opening or Bringing the database in Archivelog mode.

To open the database in Archive log mode, follow these steps:

STEP 1: Shutdown the database if it is running.

STEP 2: Take a full offline backup.

STEP 3: Set the following parameters in the parameter file.

LOG_ARCHIVE_FORMAT=ica%s.%r.%t.arc
LOG_ARCHIVE_DEST_1="location=/u02/ica/arc1"

If you want, you can specify a second destination also:

LOG_ARCHIVE_DEST_2="location=/u02/ica/arc2"

STEP 4: Start and mount the database.

SQL> STARTUP MOUNT

STEP 5: Give the following command.

SQL> ALTER DATABASE ARCHIVELOG;

STEP 6: Then type the following to confirm.

SQL> ARCHIVE LOG LIST

STEP 7: Now open the database.

SQL> ALTER DATABASE OPEN;

STEP 8: It is recommended that you take a full backup after you have brought the database into archive log mode.
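In addition to ARCHIVE LOG LIST, the current log mode can be confirmed from the standard V$DATABASE view; this quick check is not part of the original step list, and after the steps above it is expected to report ARCHIVELOG:

SQL> SELECT LOG_MODE FROM V$DATABASE;

LOG_MODE
------------
ARCHIVELOG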

To again bring the database back into NOARCHIVELOG mode, follow these steps:

STEP 1: Shutdown the database if it is running.

STEP 2: Comment out the following parameters in the parameter file by putting "#".

# LOG_ARCHIVE_DEST_1="location=/u02/ica/arc1"
# LOG_ARCHIVE_DEST_2="location=/u02/ica/arc2"
# LOG_ARCHIVE_FORMAT=ica%s.%r.%t.arc

STEP 3: Startup and mount the database.

SQL> STARTUP MOUNT

STEP 4: Give the following command.

SQL> ALTER DATABASE NOARCHIVELOG;

STEP 5: Shutdown the database and take a full offline backup.

TAKING OFFLINE BACKUPS. (UNIX)

Shutdown the database if it is running. Start SQL*Plus and connect as SYSDBA.

$ sqlplus
SQL> connect / as sysdba
SQL> shutdown immediate
SQL> exit

After shutting down the database, copy all the datafiles, logfiles, controlfiles, parameter file and password file to your backup destination.

TIP: To identify the datafiles and logfiles, query the data dictionary views V$DATAFILE and V$LOGFILE before shutting down.

Let us suppose all the files are in the "/u01/ica" directory. Then the following commands copy all the files to the backup destination /u02/backup.
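As a concrete form of the TIP above, one query that lists the datafiles, logfiles, and controlfiles in a single pass is sketched below; it uses only standard dynamic views and should be run before the shutdown:

SQL> SELECT NAME FROM V$DATAFILE
     UNION ALL
     SELECT MEMBER FROM V$LOGFILE
     UNION ALL
     SELECT NAME FROM V$CONTROLFILE;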

$ cd /u01/ica
$ cp * /u02/backup/

Be sure to remember the destination of each file; this will be useful when restoring from this backup. You can create a text file and put the destinations of each file in it for future use.

TAKING ONLINE (HOT) BACKUPS. (UNIX)

To take online backups the database should be running in Archivelog mode. To check whether the database is running in Archivelog mode or Noarchivelog mode, start SQL*Plus, connect as SYSDBA, and give the command "archive log list"; this will show you the status of archiving.

$ sqlplus
Enter User: / as sysdba
SQL> ARCHIVE LOG LIST

If the database is running in archive log mode, then you can take online backups. Let us suppose we want to take an online backup of the "USERS" tablespace. You can query the V$DATAFILE view to find out the names of the datafiles associated with this tablespace. Let's suppose the file is "/u01/ica/usr1.dbf".

Give the following series of commands to take an online backup of the USERS tablespace:

$ sqlplus
Enter User: / as sysdba
SQL> alter tablespace users begin backup;
SQL> host cp /u01/ica/usr1.dbf /u02/backup
SQL> alter tablespace users end backup;
SQL> exit
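V$DATAFILE by itself does not show tablespace names, so one hedged way to map the USERS tablespace to its files is to join it with V$TABLESPACE (both are standard views; a query on DBA_DATA_FILES gives the same answer):

SQL> SELECT d.NAME
       FROM V$DATAFILE d, V$TABLESPACE t
      WHERE d.TS# = t.TS#
        AND t.NAME = 'USERS';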

RECOVERING THE DATABASE IF IT IS RUNNING IN NOARCHIVELOG MODE.

Option 1: When you don't have a backup.

If you have lost one datafile, and if you don't have any backup, and if the datafile does not contain important objects, then you can drop the damaged datafile and open the database. You will lose all information contained in the damaged datafile. The following are the steps to drop a damaged datafile and open the database. (UNIX)

STEP 1: First take a full backup of the database, for safety.

STEP 2: Start SQL*Plus and give the following commands.

$ sqlplus
Enter User: / as sysdba
SQL> STARTUP MOUNT
SQL> ALTER DATABASE DATAFILE '/u01/ica/usr1.dbf' OFFLINE DROP;
SQL> ALTER DATABASE OPEN;

Option 2: When you have the backup.

If the database is running in Noarchivelog mode and if you have a full backup, then there are two options for you: either you can drop the damaged datafile, if it does not contain important information which you can afford to lose, or you can restore from the full backup. You will lose all the changes made to the database since the last full backup.

To drop the damaged datafile, follow the steps shown previously. To restore from the full database backup, do the following.

STEP 1: Take a full backup of the current database.

STEP 2: Restore from the full database backup, that is, copy all the files from the backup to their original locations. (UNIX)

Suppose the backup is in the "/u02/backup" directory. Then the following copies all the files from the backup directory to their original destination:

$ cp /u02/backup/* /u01/ica

Now you can open the database.
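To identify which datafiles are damaged or in need of recovery in the first place, one hedged check is the standard V$RECOVER_FILE view, queried while the database is mounted:

SQL> SELECT FILE#, ERROR FROM V$RECOVER_FILE;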

RECOVERING FROM LOSS OF CONTROL FILE.

If you have lost the control file and it is mirrored, then simply copy the control file from a mirrored location to the damaged location and open the database.

If you have lost all the mirrored control files and all the datafiles and logfiles are intact, then you can recreate the control file. If you have already taken a backup of the control file creation statement by giving the command "ALTER DATABASE BACKUP CONTROLFILE TO TRACE;", and if you have not added any tablespace since then, just create the control file by executing that statement. But if you have added any new tablespace after generating the create controlfile statement, then you have to alter the script and include the filename and size of the file in the script file.

Suppose your script file containing the control file creation statement is "CR.SQL". Then just do the following.

STEP 1: Start SQL*Plus.

STEP 2: Connect as SYSDBA.

STEP 3: Start and do not mount the database, like this:

SQL> STARTUP NOMOUNT

STEP 4: Run the "CR.SQL" script file.

STEP 5: Mount and open the database.

SQL> alter database mount;
SQL> alter database open;

Also remember to copy the control files to all the mirrored locations.
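The trace script itself has to be produced ahead of time with the BACKUP CONTROLFILE TO TRACE command quoted above; the trace file is written to the directory named by the USER_DUMP_DEST parameter, from where it can be trimmed down and saved as CR.SQL (a hedged pointer, since the exact trace file name varies by platform and version):

SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
SQL> SHOW PARAMETER user_dump_dest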

If you do not have a backup of the control file creation statement, then you have to manually give the CREATE CONTROLFILE statement. You have to write the file names and sizes of all the datafiles; you will lose any datafiles which you do not include. Refer to the "Managing Control File" topic for the CREATE CONTROLFILE statement.

Recovering the Database when it is running in ARCHIVELOG Mode.

Recovering from the Loss of a Damaged Datafile.

If you have lost one datafile, then follow the steps shown below.

STEP 1: Shutdown the database if it is running.

STEP 2: Restore the datafile from the most recent backup.

STEP 3: Then start SQL*Plus, connect as SYSDBA, and start and mount the database.

$ sqlplus
Enter User: / as sysdba
SQL> startup mount
SQL> set autorecovery on
SQL> alter database recover;

If all archive log files are available, then recovery should go on smoothly. After you get the "Media recovery complete" message, go on to the next step.

STEP 4: Now open the database.

SQL> alter database open;
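When only one datafile is damaged and the rest of the database is usable, a hedged alternative is to recover just that file while the rest of the database stays open; the path below is the one used in the earlier examples:

SQL> ALTER DATABASE DATAFILE '/u01/ica/usr1.dbf' OFFLINE;
SQL> RECOVER DATAFILE '/u01/ica/usr1.dbf';
SQL> ALTER DATABASE DATAFILE '/u01/ica/usr1.dbf' ONLINE;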

Recovering from the Loss of Archived Files.

If you have lost the archived files, then immediately shutdown the database and take a full offline backup.

Time Based Recovery (INCOMPLETE RECOVERY).

Suppose a user has dropped a crucial table accidentally and you have to recover the dropped table. You took a full backup of the database on Monday 13-Aug-2007; the table was created on Tuesday 14-Aug-2007 and thousands of rows were inserted into it. Some user accidentally dropped the table on Thursday 16-Aug-2007 and nobody noticed this until Saturday.

Now to recover the table, follow these steps.

STEP 1: Shutdown the database and take a full offline backup.

STEP 2: Restore all the datafiles, logfiles and control file from the full offline backup which was taken on Monday.

STEP 3: Start SQL*Plus and start and mount the database.

STEP 4: Then give the following command to recover the database until the specified time.

SQL> recover database until time '2007:08:16:13:55:00' using backup controlfile;

STEP 5: Open the database and reset the logs, because you have performed an incomplete recovery, like this:

SQL> alter database open resetlogs;

STEP 6: Export the table to a dump file using the Export Utility.

STEP 7: Restore from the full database backup which you have taken on Saturday (in STEP 1).

STEP 8: Open the database and import the table.

Note: In Oracle 10g you can easily recover dropped tables by using the Flashback feature. For further information, please refer to the Flashback Features topic in this book.
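A hedged sketch of STEP 6 and STEP 8 using the classic exp/imp utilities; the schema and table names here are invented for illustration:

$ exp userid=scott/tiger tables=emp file=emp.dmp
$ imp userid=scott/tiger tables=emp file=emp.dmp ignore=y

With the 10g recycle bin enabled, the dropped table could instead be restored directly, as the note above suggests:

SQL> FLASHBACK TABLE emp TO BEFORE DROP;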
