Introduction:
An Oracle database is a collection of data treated as a unit. The purpose of a database is to store and
retrieve related information. A database server is the key to solving the problems of information
management. In general, a server reliably manages a large amount of data in a multiuser environment so
that many users can concurrently access the same data. All this is accomplished while delivering high
performance. A database server also prevents unauthorized access and provides efficient solutions for
failure recovery.
Oracle Database is the first database designed for enterprise grid computing, the most flexible and cost
effective way to manage information and applications. Enterprise grid computing creates large pools of
industry-standard, modular storage and servers. With this architecture, each new system can be rapidly
provisioned from the pool of components. There is no need to provision for peak workloads, because capacity can be
easily added or reallocated from the resource pools as needed.
The database has logical structures and physical structures. Because the physical and logical
structures are separate, the physical storage of data can be managed without affecting the access to
logical storage structures.
For example, you could run different applications on a grid of several linked database servers. When
reports are due at the end of the month, the database administrator could automatically provision more
servers to that application to handle the increased demand.
Grid computing uses sophisticated workload management that makes it possible for applications to share
resources across many servers. Data processing capacity can be added or removed on demand, and
resources within a location can be dynamically provisioned. Web services can quickly integrate
applications to create new business processes.
At the highest level, the idea of grid computing is computing as a utility. In other words, you should not
care where your data resides, or what computer processes your request. You should be able to request
information or computation and have it delivered - as much as you want, and whenever you want. This
is analogous to the way electric utilities work, in that you don't know where the generator is, or how the
electric grid is wired, you just ask for electricity, and you get it. The goal is to make computing a utility, a
commodity, and ubiquitous. Hence the name, The Grid. This view of utility computing is, of course, a
"client side" view.
From the "server side", or behind the scenes, the grid is about resource allocation, information sharing,
and high availability. Resource allocation ensures that all those that need or request resources are getting
what they need, that resources are not standing idle while requests are going unserviced. Information
sharing makes sure that the information users and applications need is available where and when it is
needed. High availability features guarantee all the data and computation is always there, just like a utility
company always provides electric power.
Following are the typical tasks of a database administrator:
Installing and upgrading the Oracle Database server and application tools
Allocating system storage and planning future storage requirements for the database system
Creating primary database storage structures (tablespaces) after application developers have
designed an application
Creating primary objects (tables, views, indexes) once application developers have designed
an application
Modifying the database structure, as necessary, from information given by application
developers
Enrolling users and maintaining system security
Ensuring compliance with Oracle license agreements
Controlling and monitoring user access to the database
Monitoring and optimizing the performance of the database
Planning for backup and recovery of database information
Maintaining archived data on tape
Backing up and restoring the database
Contacting Oracle for technical support
Using multiple tablespaces allows you to:
Separate user data from data dictionary data to reduce contention among dictionary objects
and schema objects for the same datafiles.
Separate data of one application from the data of another to prevent multiple applications from
being affected if a tablespace must be taken offline.
Store the datafiles of different tablespaces on different disk drives to reduce I/O
contention.
Take individual tablespaces offline while others remain online, providing better overall
availability.
Locally managed tablespaces track all extent information in the tablespace itself by using bitmaps,
resulting in the following benefits:
Concurrency and speed of space operations is improved, because space allocations and
deallocations modify locally managed resources (bitmaps stored in header files) rather than
requiring centrally managed resources such as enqueues
Performance is improved, because recursive operations that are sometimes required during
dictionary-managed space allocation are eliminated
AUTOALLOCATE causes the tablespace to be system managed with a minimum extent size of 64K.
The alternative to AUTOALLOCATE is UNIFORM, which specifies that the tablespace is managed with
extents of uniform size. You can specify that size in the SIZE clause of UNIFORM. If you omit SIZE, then
the default size is 1M. The following example creates a locally managed tablespace with a uniform extent
size of 256K.
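A sketch of such a statement (the tablespace name, datafile path and size are assumptions):
SQL> CREATE TABLESPACE ica_lmts DATAFILE '/u01/oracle/ica/ica_lmts01.dbf' SIZE 100M
     EXTENT MANAGEMENT LOCAL UNIFORM SIZE 256K;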
You can extend the size of a tablespace by increasing the size of an existing datafile by typing the
following command
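A sketch of such a command (the datafile path and new size are assumptions):
SQL> ALTER DATABASE DATAFILE '/u01/oracle/ica/usr01.dbf' RESIZE 200M;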
Option 2
You can also extend the size of a tablespace by adding a new datafile to it. This is useful if the
existing datafile has reached the operating system file size limit or the drive where the file resides does not
have free space. To add a new datafile to an existing tablespace, give the following command.
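A sketch (the tablespace name, datafile path and size are assumptions):
SQL> ALTER TABLESPACE users ADD DATAFILE '/u02/oracle/ica/usr02.dbf' SIZE 100M;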
Option 3
You can also use auto extend feature of datafile. In this, Oracle will automatically increase the size of a
datafile whenever space is required. You can specify by how much size the file should increase and
Maximum size to which it should extend.
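A sketch (the datafile path and sizes are assumptions):
SQL> ALTER DATABASE DATAFILE '/u01/oracle/ica/usr01.dbf' AUTOEXTEND ON NEXT 10M MAXSIZE 500M;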
You can also make a datafile auto extendable while creating a new tablespace itself by giving the
following command.
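A sketch (the tablespace name, datafile path and sizes are assumptions):
SQL> CREATE TABLESPACE users DATAFILE '/u01/oracle/ica/usr01.dbf' SIZE 100M
     AUTOEXTEND ON NEXT 10M MAXSIZE 500M;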
Coalescing Tablespaces
A free extent in a dictionary-managed tablespace is made up of a collection of contiguous free blocks.
When allocating new extents to a tablespace segment, the database uses the free extent closest in size
to the required extent. In some cases, when segments are dropped, their extents are deallocated and
marked as free, but adjacent free extents are not immediately recombined into larger free extents. The
result is fragmentation that makes allocation of larger extents more difficult.
You can use the ALTER TABLESPACE ... COALESCE statement to manually coalesce any
adjacent free extents. To coalesce a tablespace, give the following command.
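For example (the tablespace name ica is an assumption):
SQL> ALTER TABLESPACE ica COALESCE;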
To alter the availability of a tablespace, use the ALTER TABLESPACE statement. You must have the ALTER
TABLESPACE or MANAGE TABLESPACE system privilege.
To take a tablespace offline, give the following command
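For example (the tablespace name ica is an assumption):
SQL> ALTER TABLESPACE ica OFFLINE;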
Note: You can't take individual datafiles offline if the database is running in NOARCHIVELOG mode. If
a datafile has become corrupt or is missing while the database is running in NOARCHIVELOG mode, then
you can only drop it by giving the following command
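A sketch (the datafile path is an assumption):
SQL> ALTER DATABASE DATAFILE '/u01/oracle/ica/usr01.dbf' OFFLINE FOR DROP;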
Renaming Tablespaces
Using the RENAME TO clause of the ALTER TABLESPACE, you can rename a permanent or temporary
tablespace. For example, the following statement renames the users tablespace:
ALTER TABLESPACE users RENAME TO usersts;
Dropping Tablespaces
You can drop a tablespace and its contents (the segments contained in the tablespace) from the
database if the tablespace and its contents are no longer required. You must have the DROP
TABLESPACE system privilege to drop a tablespace.
Caution: Once a tablespace has been dropped, the data in the tablespace is not recoverable. Therefore,
make sure that all data contained in a tablespace to be dropped will not be required in the future. Also,
immediately before and after dropping a tablespace from a database, back up the database completely
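For example, to drop a hypothetical tablespace named ica:
SQL> DROP TABLESPACE ica;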
This will drop the tablespace only if it is empty. If it is not empty and you want to drop it anyway, then
add the following keyword
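That is, continuing the example above:
SQL> DROP TABLESPACE ica INCLUDING CONTENTS;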
This will drop the tablespace even if it is not empty. But the datafiles will not be deleted; you have to use
an operating system command to delete the files.
But if you include the DATAFILES keyword, then the associated datafiles will also be deleted from the disk.
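For example:
SQL> DROP TABLESPACE ica INCLUDING CONTENTS AND DATAFILES;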
Temporary Tablespace
A temporary tablespace is used for sort operations, such as sorting large tables. Every database should have one
temporary tablespace. To create a temporary tablespace, give the following command.
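A sketch (the tempfile path and size are assumptions):
SQL> CREATE TEMPORARY TABLESPACE temp TEMPFILE '/u01/oracle/ica/temp01.dbf' SIZE 100M;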
The following statement drops a temporary file and deletes the operating system file:
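For example (the tempfile path is an assumption):
SQL> ALTER DATABASE TEMPFILE '/u01/oracle/ica/temp01.dbf' DROP INCLUDING DATAFILES;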
Tablespace Groups
A tablespace group enables a user to consume temporary space from multiple tablespaces. A
tablespace group has the following characteristics:
It contains at least one tablespace. There is no explicit limit on the maximum number of
tablespaces that are contained in a group.
It shares the namespace of tablespaces, so its name cannot be the same as any tablespace.
You can specify a tablespace group name wherever a tablespace name would appear when you
assign a default temporary tablespace for the database or a temporary tablespace for a user.
You do not explicitly create a tablespace group. Rather, it is created implicitly when you assign the first
temporary tablespace to the group. The group is deleted when the last temporary tablespace it contains
is removed from it.
Using a tablespace group, rather than a single temporary tablespace, can alleviate problems caused
where one tablespace is inadequate to hold the results of a sort, particularly on a table that has many
partitions. A tablespace group enables parallel execution servers in a single parallel operation to use
multiple temporary tablespaces.
The view DBA_TABLESPACE_GROUPS lists tablespace groups and their member tablespaces.
For example, if neither group1 nor group2 exists, then the following statements create those groups,
each of which has only the specified tablespace as a member:
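A sketch (the tablespace names, path and size are assumptions):
SQL> CREATE TEMPORARY TABLESPACE temp2 TEMPFILE '/u01/oracle/ica/temp201.dbf' SIZE 50M
     TABLESPACE GROUP group1;
SQL> ALTER TABLESPACE temp TABLESPACE GROUP group2;
A group can then be used wherever a temporary tablespace name is allowed, for example:
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE group2;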
Procedure Description
TABLESPACE_VERIFY        Verifies that the bitmaps and extent maps for the segments in the tablespace are in sync.
TABLESPACE_FIX_BITMAPS   Marks the appropriate data block address range (extent) as free or used in the bitmap. Cannot be used for a locally managed SYSTEM tablespace.
Be careful when using the above procedures; if used improperly, you can corrupt your database. Contact
Oracle Support before using these procedures.
Following are some of the Scenarios where you can use the above procedures
1. Call the SEGMENT_DUMP procedure to dump the ranges that the administrator allocated to the
segment.
2. For each range, call the TABLESPACE_FIX_BITMAPS procedure with the
TABLESPACE_EXTENT_MAKE_USED option to mark the space as used.
3. Call TABLESPACE_REBUILD_QUOTAS to fix up quotas.
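A sketch of these calls (the tablespace name, file number and block numbers below are placeholders only, not values to run as-is):
SQL> EXECUTE DBMS_SPACE_ADMIN.SEGMENT_DUMP('USERS', 4, 33);
SQL> EXECUTE DBMS_SPACE_ADMIN.TABLESPACE_FIX_BITMAPS('USERS', 4, 33, 83, DBMS_SPACE_ADMIN.TABLESPACE_EXTENT_MAKE_USED);
SQL> EXECUTE DBMS_SPACE_ADMIN.TABLESPACE_REBUILD_QUOTAS('USERS');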
After choosing the object to be sacrificed, in this case say, table t1, perform the following tasks:
For example, if you want to migrate the dictionary-managed tablespace ICA2 to locally managed, give
the following command.
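For example:
SQL> EXECUTE DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('ICA2');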
Transporting Tablespaces
You can use the transportable tablespaces feature to move a subset of an Oracle Database and "plug" it
in to another Oracle Database, essentially moving tablespaces between the databases. The tablespaces
being transported can be either dictionary managed or locally managed. Starting with Oracle9i, the
transported tablespaces are not required to be of the same block size as the target database standard
block size.
Moving data using transportable tablespaces is much faster than performing either an export/import or
unload/load of the same data. This is because the datafiles containing all of the actual data are simply
copied to the destination location, and you use an import utility to transfer only the metadata of the
tablespace objects to the new database.
Starting with Oracle Database 10g, you can transport tablespaces across platforms. This functionality
can be used to allow a database to be migrated from one platform to another. However, not all
platforms are supported. To see which platforms are supported, give the following query.
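For example:
SQL> SELECT * FROM V$TRANSPORTABLE_PLATFORM;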
10 rows selected.
If the source platform and the target platform are of different endianness, then an additional step must
be done on either the source or target platform to convert the tablespace being transported to the
target format. If they are of the same endianness, then no conversion is necessary and tablespaces can
be transported as if they were on the same platform.
Important: Before a tablespace can be transported to a different platform, the datafile header must
identify the platform to which it belongs. In an Oracle Database with compatibility set to 10.0.0 or
higher, you can accomplish this by making the datafile read/write at least once.
Then,
1. For cross-platform transport, check the endian format of both platforms by querying the
V$TRANSPORTABLE_PLATFORM view.
If you are transporting the tablespace set to a platform different from the source platform, then
determine if the source and target platforms are supported and their endianness. If both
platforms have the same endianness, no conversion is necessary. Otherwise you must do a
conversion of the tablespace set either at the source or target database.
Ignore this step if you are transporting your tablespace set to the same platform.
A transportable tablespace set consists of datafiles for the set of tablespaces being transported
and an export file containing structural information for the set of tablespaces.
If you are transporting the tablespace set to a platform with different endianness from the
source platform, you must convert the tablespace set to the endianness of the target platform.
You can perform a source-side conversion at this step in the procedure, or you can perform a
target-side conversion as part of step 4.
Copy the datafiles and the export file to the target database. You can do this using any facility
for copying flat files (for example, an operating system copy utility, ftp, the DBMS_FILE_TRANSFER
package, or publishing on CDs).
If you have transported the tablespace set to a platform with different endianness from the
source platform, and you have not performed a source-side conversion to the endianness of the
target platform, you should perform a target-side conversion now.
Invoke the Import utility to plug the set of tablespaces into the target database.
Tablespace Datafile:
ica_sales_1 /u01/oracle/oradata/ica_salesdb/ica_sales_101.dbf
ica_sales_2 /u01/oracle/oradata/ica_salesdb/ica_sales_201.dbf
This step is only necessary if you are transporting the tablespace set to a platform different from
the source platform. If ica_sales_1 and ica_sales_2 were being transported to a different
platform, you can execute the following query on both platforms to determine if the platforms
are supported and their endian formats:
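A representative query, joining V$TRANSPORTABLE_PLATFORM with V$DATABASE:
SQL> SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
     FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
     WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;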
PLATFORM_NAME ENDIAN_FORMAT
------------------------- --------------
Solaris[tm] OE (32-bit) Big
PLATFORM_NAME ENDIAN_FORMAT
------------------------- --------------
Microsoft Windows NT Little
You can see that the endian formats are different and thus a conversion is necessary for
transporting the tablespace set.
There may be logical or physical dependencies between objects in the transportable set and those
outside of the set. You can only transport a set of tablespaces that is self-contained. That is, it
should not have tables with foreign keys referring to primary keys of tables in other
tablespaces, and it should not have tables with some partitions in other tablespaces. To find out
whether the tablespace set is self-contained, do the following
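For example, for the two tablespaces in this case study:
SQL> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('ica_sales_1,ica_sales_2', TRUE);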
After executing the above give the following query to see whether any violations are there.
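For example:
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;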
VIOLATIONS
---------------------------------------------------------------------------
These violations must be resolved before ica_sales_1 and ica_sales_2 are transportable
After ensuring you have a self-contained set of tablespaces that you want to transport, generate a
transportable tablespace set by performing the following actions:
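For example, make the tablespaces read only before exporting their metadata:
SQL> ALTER TABLESPACE ica_sales_1 READ ONLY;
SQL> ALTER TABLESPACE ica_sales_2 READ ONLY;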
Tablespace altered.
Tablespace altered.
Invoke the Export utility on the host system and specify which tablespaces are in the transportable set.
SQL> HOST
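A sketch of the export command (the dump file name is an assumption; connect as SYSDBA when prompted):
$exp TRANSPORT_TABLESPACE=y TABLESPACES=(ica_sales_1,ica_sales_2) FILE=expdat.dmp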
If ica_sales_1 and ica_sales_2 are being transported to a different platform, and the
endianness of the platforms is different, and if you want to convert before transporting the
tablespace set, then convert the datafiles composing the ica_sales_1 and ica_sales_2
tablespaces. You have to use RMAN utility to convert datafiles
$ RMAN TARGET /
Convert the datafiles into a temporary location on the source platform. In this example, assume that the
temporary location, directory /temp, has already been created. The converted datafiles are assigned
names by the system.
RMAN> CONVERT TABLESPACE ica_sales_1,ica_sales_2
TO PLATFORM 'Microsoft Windows NT' FORMAT '/temp/%U';
Transport both the datafiles and the export file of the tablespaces to a place accessible to the
target database. You can use any facility for copying flat files (for example, an operating system
copy utility, ftp, the DBMS_FILE_TRANSFER package, or publishing on CDs).
Plug in the tablespaces and integrate the structural information using the Data Pump Import utility (impdp):
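A sketch of the command (the directory object, dump file name and datafile paths are assumptions; the REMAP_SCHEMA clauses on the next line are from the original text and complete the command):
$impdp system/password DIRECTORY=dpump_dir DUMPFILE=expdat.dmp
TRANSPORT_DATAFILES=/u01/oracle/oradata/ica_salesdb/ica_sales_101.dbf,/u01/oracle/oradata/ica_salesdb/ica_sales_201.dbf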
REMAP_SCHEMA=(smith:sami) REMAP_SCHEMA=(williams:john)
The REMAP_SCHEMA parameter changes the ownership of database objects. If you do not specify
REMAP_SCHEMA, all database objects (such as tables and indexes) are created in the same user schema
as in the source database, and those users must already exist in the target database. If they do not exist,
then the import utility returns an error. In this example, objects in the tablespace set owned by smith
in the source database will be owned by sami in the target database after the tablespace set is plugged
in. Similarly, objects owned by williams in the source database will be owned by john in the target
database. In this case, the target database is not required to have users smith and williams, but must
have users sami and john.
After this statement executes successfully, all tablespaces in the set being copied remain in read-
only mode. Check the import logs to ensure that no error has occurred.
Relocating or Renaming Datafiles
You can rename datafiles to either change their names or relocate them.
3. Give the ALTER TABLESPACE with RENAME DATAFILE option to change the filenames within the
Database.
For Example suppose you have a tablespace users with the following datafiles
'/u01/oracle/ica/usr01.dbf'
'/u01/oracle/ica/usr02.dbf'
3. Now start SQLPLUS and type the following command to rename and relocate these files
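A sketch (assuming the tablespace has been taken offline and the files copied to a hypothetical new location under /u02):
SQL> ALTER TABLESPACE users RENAME DATAFILE
     '/u01/oracle/ica/usr01.dbf', '/u01/oracle/ica/usr02.dbf'
     TO '/u02/oracle/ica/usr01.dbf', '/u02/oracle/ica/usr02.dbf';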
2. Copy the datafiles to be renamed to their new locations and new names, using the
operating system.
3. Use ALTER DATABASE to rename the file pointers in the database control file.
ALTER DATABASE
RENAME FILE '/u02/oracle/rbdb1/sort01.dbf',
'/u02/oracle/rbdb1/user3.dbf'
TO '/u02/oracle/rbdb1/temp01.dbf',
'/u02/oracle/rbdb1/users03.dbf';
Always provide complete filenames (including their paths) to properly identify the old
and new datafiles. In particular, specify the old datafile names exactly as they appear in
the DBA_DATA_FILES view.
4. Back up the database. After making any structural changes to a database, always perform
an immediate and complete backup.
Every Oracle database must have at least 2 redo logfile groups. Oracle writes all statements,
except SELECT statements, to the logfiles. This is done because Oracle performs deferred batch
writes, i.e. it does not write changes to disk per statement; instead it performs writes in batches. So in
this case if a user updates a row, Oracle will change the row in the db_buffer_cache, record the
statement in the logfile and give the message to the user that the row is updated. Actually the row is
not yet written back to the datafile, but still it gives the message to the user that the row is updated.
After 3 seconds the row is actually written to the datafile. This is known as deferred batch writes.
Since Oracle defers writing to the datafile there is a chance of power failure or system crash before
the row is written to the disk. That's why Oracle writes the statement in the redo logfile so that in
case of power failure or system crash Oracle can re-execute the statements the next time you
open the database.
Note: You can add groups to a database up to the MAXLOGFILES setting you have specified at
the time of creating the database. If you want to change the MAXLOGFILES setting you have to
create a new controlfile.
Note: You can add members to a group up to the MAXLOGMEMBERS setting you have
specified at the time of creating the database. If you want to change the MAXLOGMEMBERS
setting you have to create a new controlfile.
Important: It is strongly recommended that you multiplex logfiles, i.e. have at least two log
members per group, one member on one disk and another on a second disk, in a database.
Note: When you drop logfiles the files are not deleted from the disk. You have to use O/S
command to delete the files from disk.
Resizing Logfiles
You cannot resize logfiles. If you want to resize a logfile create a new logfile group with the new
size and subsequently drop the old logfile group.
Steps
1. Shutdown the database
SQL>shutdown immediate;
2. Move the logfile from Old location to new location using operating system command
$mv /u01/oracle/ica/log1.ora /u02/oracle/ica/log1.ora
3. Start and mount the database
SQL>startup mount
4. Now give the following command to change the location in controlfile
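A representative command, matching the move done in step 2 (afterwards you would open the database):
SQL>alter database rename file '/u01/oracle/ica/log1.ora' to '/u02/oracle/ica/log1.ora';
SQL>alter database open;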
The following statement clears the log files in redo log group number 3:
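For example:
SQL> ALTER DATABASE CLEAR LOGFILE GROUP 3;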
This statement overcomes two situations where dropping redo logs is not possible:
If the corrupt redo log file has not been archived, use the UNARCHIVED keyword in the statement.
This statement clears the corrupted redo logs and avoids archiving them. The cleared redo logs
are available for use even though they were not archived.
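For example, to clear an unarchived log group (group number 3 assumed, as above):
SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;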
If you clear a log file that is needed for recovery of a backup, then you can no longer recover
from that backup. The database writes a message in the alert log describing the backups from
which you cannot recover
To See how many members are there and where they are located give the following query
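For example:
SQL> SELECT GROUP#, MEMBER FROM V$LOGFILE;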
Every Oracle Database has a control file, which is a small binary file that records the physical
structure of the database. The control file includes:
Names and locations of associated datafiles and redo log files
It is strongly recommended that you multiplex control files, i.e. have at least two control files,
one on one hard disk and another located on a different disk, in a database. In this way if the control
file on one disk becomes corrupt, the other copy will be available and you don't have to do
recovery of the control file.
You can multiplex control file at the time of creating a database and later on also. If you have
not multiplexed control file at the time of creating a database you can do it now by following
given procedure.
SQL>SHUTDOWN IMMEDIATE;
2. Copy the control file from old location to new location using operating system command. For
example.
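For example (the new location matches the path used in the parameter file below):
$cp /u01/oracle/ica/control.ora /u02/oracle/ica/control.ora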
3. Now open the parameter file and specify the new location like this
CONTROL_FILES=/u01/oracle/ica/control.ora
Change it to
CONTROL_FILES=/u01/oracle/ica/control.ora,/u02/oracle/ica/control
.ora
Now Oracle will start updating both the control files and, if one control file is lost you
can copy it from another location.
Steps
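1. Give the following statement (the standard way of dumping the control file creation script to a trace file):
SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;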
After giving this statement oracle will write the CREATE CONTROLFILE statement in a
trace file. The trace file will be randomly named something like ORA23212.TRC and it
is created in USER_DUMP_DEST directory.
2. Go to the USER_DUMP_DEST directory and open the latest trace file in text editor. This file will
contain the CREATE CONTROLFILE statement. It will have two sets of statement one with
RESETLOGS and another without RESETLOGS. Since we are changing the name of the
Database we have to use RESETLOGS option of CREATE CONTROLFILE statement. Now copy
and paste the statement in a file. Let it be c.sql
3. Now open the c.sql file in a text editor and change the database name from ica to prod, as shown in the
example below
CREATE CONTROLFILE
SET DATABASE prod
LOGFILE GROUP 1 ('/u01/oracle/ica/redo01_01.log',
'/u01/oracle/ica/redo01_02.log'),
GROUP 2 ('/u01/oracle/ica/redo02_01.log',
'/u01/oracle/ica/redo02_02.log'),
GROUP 3 ('/u01/oracle/ica/redo03_01.log',
'/u01/oracle/ica/redo03_02.log')
RESETLOGS
DATAFILE '/u01/oracle/ica/system01.dbf' SIZE 3M,
'/u01/oracle/ica/rbs01.dbs' SIZE 5M,
'/u01/oracle/ica/users01.dbs' SIZE 5M,
'/u01/oracle/ica/temp01.dbs' SIZE 5M
MAXLOGFILES 50
MAXLOGMEMBERS 3
MAXLOGHISTORY 400
MAXDATAFILES 200
MAXINSTANCES 6
ARCHIVELOG;
SQL> @/u01/oracle/c.sql
We have a database running the production server with the following files
CONTROL_FILES=/u01/oracle/ica/control.ora
BACKGROUND_DUMP_DEST=/u01/oracle/ica/bdump
USER_DUMP_DEST=/u01/oracle/ica/udump
CORE_DUMP_DEST=/u01/oracle/ica/cdump
LOG_ARCHIVE_DEST_1="location=/u01/oracle/ica/arc1"
DATAFILES =
/u01/oracle/ica/sys.dbf
/u01/oracle/ica/usr.dbf
/u01/oracle/ica/rbs.dbf
/u01/oracle/ica/tmp.dbf
/u01/oracle/ica/sysaux.dbf
LOGFILE=
/u01/oracle/ica/log1.ora
/u01/oracle/ica/log2.ora
Now you want to copy this database to SERVER 2 and in SERVER 2 you don’t have /u01
filesystem. In SERVER 2 you have /d01 filesystem.
Steps :-
1. In SERVER 2 install the same version of o/s and same version Oracle as in SERVER 1.
2. In SERVER 1 generate CREATE CONTROLFILE statement by typing the following command
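SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;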
Now, go to the USER_DUMP_DEST directory and open the latest trace file. This file will contain
steps and as well as CREATE CONTROLFILE statement. Copy the CREATE CONTROLFILE
statement and paste in a file. Let the filename be cr.sql
CREATE CONTROLFILE
SET DATABASE prod
LOGFILE GROUP 1 ('/u01/oracle/ica/log1.ora'),
GROUP 2 ('/u01/oracle/ica/log2.ora')
DATAFILE '/u01/oracle/ica/sys.dbf' SIZE 300M,
'/u01/oracle/ica/rbs.dbf' SIZE 50M,
'/u01/oracle/ica/usr.dbf' SIZE 50M,
'/u01/oracle/ica/tmp.dbf' SIZE 50M,
'/u01/oracle/ica/sysaux.dbf' SIZE 100M
MAXLOGFILES 50
MAXLOGMEMBERS 3
MAXLOGHISTORY 400
MAXDATAFILES 200
MAXINSTANCES 6
ARCHIVELOG;
$mkdir ica
$mkdir arc1
$cd ica
Shutdown the database on SERVER 1 and transfer all datafiles, logfiles and control file to
SERVER 2 in /d01/oracle/ica directory.
Copy parameter file to SERVER 2 in /d01/oracle/dbs directory and copy all archive log files
to SERVER 2 in /d01/oracle/ica/arc1 directory. Copy the cr.sql script file
to /d01/oracle/ica directory.
4. Open the parameter file SERVER 2 and change the following parameters
CONTROL_FILES=/d01/oracle/ica/control.ora
BACKGROUND_DUMP_DEST=/d01/oracle/ica/bdump
USER_DUMP_DEST=/d01/oracle/ica/udump
CORE_DUMP_DEST=/d01/oracle/ica/cdump
LOG_ARCHIVE_DEST_1="location=/d01/oracle/ica/arc1"
5. Now, open the cr.sql file in text editor and change the locations like this
CREATE CONTROLFILE
SET DATABASE prod
LOGFILE GROUP 1 ('/d01/oracle/ica/log1.ora'),
GROUP 2 ('/d01/oracle/ica/log2.ora')
DATAFILE '/d01/oracle/ica/sys.dbf' SIZE 300M,
'/d01/oracle/ica/rbs.dbf' SIZE 50M,
'/d01/oracle/ica/usr.dbf' SIZE 50M,
'/d01/oracle/ica/tmp.dbf' SIZE 50M,
'/d01/oracle/ica/sysaux.dbf' SIZE 100M
MAXLOGFILES 50
MAXLOGMEMBERS 3
MAXLOGHISTORY 400
MAXDATAFILES 200
MAXINSTANCES 6
ARCHIVELOG;
$export ORACLE_SID=ica
$sqlplus
SQL>@/d01/oracle/ica/cr.sql
Every Oracle Database must have a method of maintaining information that is used to roll back,
or undo, changes to the database. Such information consists of records of the actions of
transactions, primarily before they are committed. These records are collectively referred to as
undo.
Earlier releases of Oracle Database used rollback segments to store undo. Oracle9i introduced
automatic undo management, which simplifies undo space management by eliminating the
complexities associated with rollback segment management. Oracle strongly recommends that
you use undo tablespace to manage undo rather than rollback segments.
Steps:-
1. If you have not created an undo tablespace at the time of creating a database then, create
an undo tablespace by typing the following command
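A sketch (the datafile path and size are assumptions; the tablespace name myundo matches the parameter setting below):
SQL> CREATE UNDO TABLESPACE myundo DATAFILE '/u01/oracle/ica/undo01.dbf' SIZE 500M;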
2. Shutdown the Database and set the following parameters in parameter file.
UNDO_MANAGEMENT=AUTO
UNDO_TABLESPACE=myundo
Now Oracle Database will use Automatic Undo Space Management.
You can calculate space requirements manually using the following formula:
UndoSpace = UR * UPS + overhead
where:
UndoSpace is the number of undo blocks required
UR is the UNDO_RETENTION setting in seconds
UPS is the number of undo data blocks generated per second
overhead is the small overhead for metadata (transaction tables, bitmaps, and so forth)
As an example, if UNDO_RETENTION is set to 3 hours, and the transaction rate (UPS) is 100 undo
blocks for each second, with a 8K block size, the required undo space is computed as follows:
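Working this out (a sketch; 8K = 8192 bytes, ignoring the small overhead term):
UndoSpace = (3 * 3600) * 100 * 8192 bytes = 8,847,360,000 bytes, i.e. roughly 8.24 GB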
To get the value for UPS, query the V$UNDOSTAT view by giving the following
statement
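A sketch of such a query (UPS as undo blocks generated per second over the sampled period):
SQL> SELECT SUM(undoblks) / SUM((end_time - begin_time) * 86400) AS ups FROM v$undostat;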
If the Undo tablespace is full, you can resize existing datafiles or add new datafiles to it
Use the DROP TABLESPACE statement to drop an undo tablespace. The following example drops
the undo tablespace undotbs_01:
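For example:
SQL> DROP TABLESPACE undotbs_01;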
An undo tablespace can only be dropped if it is not currently used by any instance. If the undo
tablespace contains any outstanding transactions (for example, a transaction died but has not yet
been recovered), the DROP TABLESPACE statement fails.
You can switch from using one undo tablespace to another. Because the UNDO_TABLESPACE
initialization parameter is a dynamic parameter, the ALTER SYSTEM SET statement can be used to
assign a new undo tablespace.
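For example, to switch to a new undo tablespace named myundo2:
SQL> ALTER SYSTEM SET UNDO_TABLESPACE = myundo2;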
Assuming myundo is the current undo tablespace, after this command successfully executes, the
instance uses myundo2 in place of myundo as its undo tablespace.
To view statistics for tuning undo tablespace query the following dictionary
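For example:
SQL> SELECT * FROM V$UNDOSTAT;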
To see how many active Transactions are there and to see undo segment information give the
following command
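A sketch of such a query, joining the session and transaction views:
SQL> SELECT s.sid, s.serial#, s.username, t.used_ublk, t.used_urec
     FROM v$session s, v$transaction t
     WHERE s.taddr = t.addr;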
6. SQL Loader
The SQL LOADER utility is used to load data from other data sources into Oracle. For example, if
you have a table in FOXPRO, ACCESS or SYBASE or any other third party database, you can
use SQL Loader to load the data into Oracle tables. SQL Loader will only read the data from
flat files. So if you want to load the data from Foxpro or any other database, you have to first
convert that data into a delimited format flat file or a fixed length format flat file, and then use
SQL Loader to load the data into Oracle.
Following is the procedure to load the data from a third party database into Oracle using SQL
Loader.
1. Convert the Data into Flat file using third party database command.
2. Create the Table Structure in Oracle Database using appropriate datatypes
3. Write a Control File, describing how to interpret the flat file and options to load the data.
4. Execute SQL Loader utility specifying the control file in the command line argument
Suppose you have a table in MS-ACCESS by name EMP, running under Windows O/S, with the
following structure
EMPNO INTEGER
NAME TEXT(50)
SAL CURRENCY
JDATE DATE
This table contains some 10,000 rows. Now you want to load the data from this table into an
Oracle Table. Oracle Database is running in LINUX O/S.
Solution
Steps
Start MS-Access and convert the table into a comma-delimited flat file (popularly known as CSV), by
clicking on the File/Save As menu. Let the delimited file name be emp.csv
1. Now transfer this file to Linux Server using FTP command
a. Go to Command Prompt in windows
b. At the command prompt type FTP followed by IP address of the server running
Oracle.
FTP will then prompt you for username and password to connect to the Linux
Server. Supply a valid username and password of Oracle User in Linux
For example:-
C:\>ftp 200.200.100.111
Name: oracle
Password:oracle
FTP>
c. Now give PUT command to transfer file from current Windows machine to Linux
machine.
FTP>put
Local file:C:\>emp.csv
remote-file:/u01/oracle/emp.csv
File transferred in 0.29 Seconds
FTP>
d. Now after the file is transferred quit the FTP utility by typing bye command.
FTP>bye
Good-Bye
2. Now go to the Linux machine and create a table in Oracle with the same structure as in
MS-ACCESS, using appropriate datatypes. For example, create a table like this
$sqlplus scott/tiger
SQL>CREATE TABLE emp (empno number(5),
name varchar2(50),
sal number(10,2),
jdate date);
3. After creating the table, you have to write a control file describing the actions which SQL
Loader should do. You can use any text editor to write the control file. Now let us write
a controlfile for our case study
$vi emp.ctl
1 LOAD DATA
2 INFILE '/u01/oracle/emp.csv'
3 BADFILE '/u01/oracle/emp.bad'
4 DISCARDFILE '/u01/oracle/emp.dsc'
5 INSERT INTO TABLE emp
6 FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
7 (empno,name,sal,jdate date 'mm/dd/yyyy')
Notes:
(Do not write the line numbers, they are meant for explanation purpose)
1. The LOAD DATA statement is required at the beginning of the control file.
2. The INFILE option specifies where the input file is located
3. Specifying BADFILE is optional. If you specify, then bad records found during loading will be
stored in this file.
4. Specifying DISCARDFILE is optional. If you specify, then records which do not meet a WHEN
condition will be written to this file.
5. This line specifies the loading method. The options are:
1. INSERT: Loads rows only if the target table is empty.
2. APPEND: Appends the new rows to the existing rows of the table.
3. REPLACE: First deletes all the rows in the existing table and then loads rows.
4. TRUNCATE: First truncates the table and then loads rows.
6. This line indicates how the fields are separated in the input file. Since in our case the fields are separated
by "," we have specified "," as the terminating character for fields. You can replace this with any character
that is used to terminate fields. Some of the popularly used terminating characters are semicolon
";", colon ":", pipe "|" etc. TRAILING NULLCOLS means that if the trailing columns of a record are
missing, they are treated as null values; otherwise SQL*Loader would treat the record as bad.
7. In this line specify the columns of the target table. Note how the format for date columns is specified.
4. After you have written the control file, save it and then call the SQL Loader utility by typing
the following command
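For example (the control and log file names are assumptions based on this case study):
$sqlldr userid=scott/tiger control=emp.ctl log=emp.log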
After you have executed the above command SQL Loader will show you the output
describing how many rows it has loaded.
The LOG option of sqlldr specifies where the log file of this sql loader session should
be created. The log file contains all actions which SQL loader has performed i.e. how
many rows were loaded, how many were rejected and how much time is taken to load the
rows and etc. You have to view this file for any errors encountered while running SQL
Loader.
CASE STUDY (Loading Data from Fixed Length file into Oracle)
Suppose we have a fixed length format file containing employees data, as shown below, and
want to load this data into an Oracle table.
SOLUTION:
Steps :-
1. First Open the file in a text editor and count the length of fields, for example in our
fixed length file, employee number is from 1st position to 4th position, employee name
is from 6th position to 15th position, Job name is from 17th position to 25th position.
Similarly other columns are also located.
2. Create a table in Oracle, with any name, but its columns should match those specified in the fixed
length file. In our case give the following command to create the table.
SQL> CREATE TABLE emp (empno NUMBER(5),
name VARCHAR2(20),
job VARCHAR2(10),
mgr NUMBER(5),
sal NUMBER(10,2),
comm NUMBER(10,2),
deptno NUMBER(3) );
3. After creating the table, now write a control file by using any text editor
$vi empfix.ctl
1) LOAD DATA
2) INFILE '/u01/oracle/fix.dat'
3) INTO TABLE emp
4) (empno POSITION(01:04) INTEGER EXTERNAL,
name POSITION(06:15) CHAR,
job POSITION(17:25) CHAR,
mgr POSITION(27:30) INTEGER EXTERNAL,
sal POSITION(32:39) DECIMAL EXTERNAL,
comm POSITION(41:48) DECIMAL EXTERNAL,
5) deptno POSITION(50:51) INTEGER EXTERNAL)
Notes:
(Do not write the line numbers, they are meant for explanation purpose)
1. The LOAD DATA statement is required at the beginning of the control file.
2. The name of the file containing data follows the INFILE parameter.
3. The INTO TABLE statement is required to identify the table to be loaded into.
4. Lines 4 and 5 identify a column name and the location of the data in the datafile to be loaded into that
column. empno, name, job, and so on are names of columns in table emp. The datatypes (INTEGER
EXTERNAL, CHAR, DECIMAL EXTERNAL) identify the datatype of data fields in the file, not of
corresponding columns in the emp table.
5. Note that the set of column specifications is enclosed in parentheses.
4. After saving the control file now start SQL Loader utility by typing the following command.
$sqlldr userid=scott/tiger control=empfix.ctl log=empfix.log
direct=y
After you have executed the above command SQL Loader will show you the output
describing how many rows it has loaded.
You can simultaneously load data into multiple tables in the same session. You can also use
WHEN condition to load only specified rows which meets a particular condition (only equal to
“=” and not equal to “<>” conditions are allowed).
Now we want to load all the employees whose deptno is 10 into emp1 table and those
employees whose deptno is not equal to 10 in emp2 table. To do this first create the tables
emp1 and emp2 by taking appropriate columns and datatypes. Then, write a control file as
shown below
$vi emp_multi.ctl
Load Data
infile '/u01/oracle/empfix.dat'
append into table scott.emp1
WHEN (deptno='10 ')
(empno POSITION(01:04) INTEGER EXTERNAL,
name POSITION(06:15) CHAR,
job POSITION(17:25) CHAR,
mgr POSITION(27:30) INTEGER EXTERNAL,
sal POSITION(32:39) DECIMAL EXTERNAL,
comm POSITION(41:48) DECIMAL EXTERNAL,
deptno POSITION(50:51) INTEGER EXTERNAL)
INTO TABLE scott.emp2
WHEN (deptno<>'10 ')
(empno POSITION(01:04) INTEGER EXTERNAL,
name POSITION(06:15) CHAR,
job POSITION(17:25) CHAR,
mgr POSITION(27:30) INTEGER EXTERNAL,
sal POSITION(32:39) DECIMAL EXTERNAL,
comm POSITION(41:48) DECIMAL EXTERNAL,
deptno POSITION(50:51) INTEGER EXTERNAL)
SQL Loader can load the data into the Oracle database using the Conventional Path method or the Direct
Path method. You can specify the method by using the DIRECT command line option. If you give
DIRECT=TRUE then SQL Loader will use Direct Path loading; otherwise, if you omit this option or
specify DIRECT=false, then SQL Loader will use the Conventional Path loading method.
Conventional Path
Conventional path load (the default) uses the SQL INSERT statement and a bind array buffer to
load data into database tables.
When SQL*Loader performs a conventional path load, it competes equally with all other
processes for buffer resources. This can slow the load significantly. Extra overhead is added as
SQL statements are generated, passed to Oracle, and executed.
The Oracle database looks for partially filled blocks and attempts to fill them on each insert.
Although appropriate during normal use, this can slow bulk loads dramatically.
Direct Path
In Direct Path Loading, Oracle will not use SQL INSERT statement for loading rows. Instead it
directly writes the rows, into fresh blocks beyond High Water Mark, in datafiles i.e. it does not
scan for free blocks before high water mark. Direct Path load is very fast because
Partial blocks are not used, so no reads are needed to find them, and fewer writes are
performed.
SQL*Loader need not execute any SQL INSERT statements; therefore, the processing
load on the Oracle database is reduced.
A direct path load calls on Oracle to lock tables and indexes at the start of the load and
releases them when the load is finished. A conventional path load calls Oracle once for
each array of rows to process a SQL INSERT statement.
A direct path load uses multiblock asynchronous I/O for writes to the database files.
During a direct path load, processes perform their own write I/O, instead of using
Oracle's buffer cache. This minimizes contention with other Oracle users.
The following conditions must be satisfied for you to use the direct path load method:
From version 10g onward, Oracle recommends using the Data Pump Export and Import tools, which are
enhanced versions of the original Export and Import tools.
1. Type definitions
2. Table definitions
3. Table data
4. Table indexes
5. Integrity constraints, views, procedures, and triggers
6. Bitmap, function-based, and domain indexes
When you import the tables the import tool will perform the actions in the following order, new
tables are created, data is imported and indexes are built, triggers are imported, integrity
constraints are enabled on the new tables, and any bitmap, function-based, and/or domain
indexes are built. This sequence prevents data from being rejected due to the order in which
tables are imported. This sequence also prevents redundant triggers from firing twice on the
same data
Interactive Mode
When you just type exp or imp at o/s prompt it will run in interactive mode i.e. these tools will
prompt you for all the necessary input. If you supply command line arguments when calling exp
or imp then it will run in command line mode
You can control how Export runs by entering the EXP command followed
by various arguments. To specify parameters, you use keywords:
--------------------------------------------------------------
USERID username/password
FLASHBACK_TIME time used to get the SCN closest to the specified time
To export objects stored in particular schemas you can run the export utility with the following
arguments
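A sketch of such a command (the dump file name is an assumption):
$exp system/manager FILE=exp_own.dmp OWNER=(SCOTT,ALI)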
The above command will export all the objects stored in SCOTT and ALI’s schema.
If you include CONSISTENT=Y option in export command argument then, Export utility will
export a consistent image of the table i.e. the changes which are done to the table during export
operation will not be exported.
Objects exported by export utility can only be imported by Import utility. Import utility can run
in Interactive mode or command line mode.
You can let Import prompt you for parameters by entering the IMP command followed by your
username/password:
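For example:
$imp scott/tiger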
Or, you can control how Import runs by entering the IMP command followed by various arguments,
for example TABLES=(T1:P1,T1:P2) if T1 is a partitioned table.
To import individual tables from a full database export dump file give the following command
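A sketch (the dump file and table names are assumptions):
$imp system/manager FILE=myfullexp.dmp FROMUSER=scott TABLES=(emp,dept)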
import done in WE8DEC character set and AL16UTF16 NCHAR character set
Example, Importing Tables of One User account into another User account
For example, suppose Ali has exported tables into a dump file mytables.dmp. Now Scott
wants to import these tables. To achieve this Scott will give the following import command
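A sketch (the dump file name is from the text above):
$imp scott/tiger FILE=mytables.dmp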
Then the import utility will give a warning that the tables in the dump file were exported by user Ali and
not you, and then proceed.
Suppose you want to import all tables from a dump file whose names match a particular pattern.
To do so, use the "%" wild card character in the TABLES option. For example, the following command will
import all tables whose names start with the alphabet "e" and those tables whose names contain the
alphabet "d"
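A sketch (the dump file name is an assumption):
$imp scott/tiger FILE=myfullexp.dmp TABLES=(e%, %d%)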
The Export and Import utilities are the only method that Oracle supports for moving an existing
Oracle database from one hardware platform to another. This includes moving between UNIX
and NT systems and also moving between two NT systems running on different platforms.
The following steps present a general overview of how to move a database between platforms.
1. As a DBA user, issue the following SQL query to get the exact name of all tablespaces.
You will need this information later in the process.
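For example:
SQL> SELECT tablespace_name FROM dba_tablespaces;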
2. As a DBA user, perform a full export of the source database.
3. Move the dump file to the target database server. If you use FTP, be sure to copy it in
binary format (by entering binary at the FTP prompt) to avoid file corruption.
4. Create a database on the target server.
5. Before importing the dump file, you must first create your tablespaces, using the
information obtained in Step 1. Otherwise, the import will create the corresponding
datafiles in the same file structure as at the source database, which may not be compatible
with the file structure on the target system.
6. As a DBA user, perform a full import with the IGNORE parameter enabled:
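A sketch (the dump file name is an assumption):
$imp system/manager FULL=y IGNORE=y FILE=expdat.dmp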
Using IGNORE=y instructs Oracle to ignore any creation errors during the import and permit the import
to complete.
Starting with Oracle 10g, Oracle has introduced an enhanced version of the EXPORT and IMPORT
utility known as DATA PUMP. Data Pump is similar to the EXPORT and IMPORT utility but it has
many advantages. Some of the advantages are:
Most Data Pump export and import operations occur on the Oracle database server. i.e.
all the dump files are created in the server even if you run the Data Pump utility from
client machine. This results in increased performance because data is not transferred
through network.
You can Stop and Re-Start export and import jobs. This is particularly useful if you have
started an export or import job and after some time you want to do some other urgent
work.
The ability to detach from and reattach to long-running jobs without affecting the job
itself. This allows DBAs and other operations personnel to monitor jobs from multiple
locations.
The ability to estimate how much space an export job would consume, without actually
performing the export
Support for an interactive-command mode that allows monitoring of and interaction with
ongoing jobs
To Use Data Pump, DBA has to create a directory in Server Machine and create a Directory
Object in the database mapping to the directory created in the file system.
The following example creates a directory in the filesystem and creates a directory object in the
database and grants privileges on the Directory Object to the SCOTT user.
$mkdir my_dump_dir
$sqlplus
Enter User:/ as sysdba
SQL>create directory data_pump_dir as '/u01/oracle/my_dump_dir';
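The grant and a full export command might then look like this (the dump and log file names match the output described below; scott is assumed to have the EXP_FULL_DATABASE role):
SQL>grant read, write on directory data_pump_dir to scott;
$expdp scott/tiger FULL=y DIRECTORY=data_pump_dir DUMPFILE=full.dmp LOGFILE=myfullexp.log JOB_NAME=myfullJob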
The above command will export the full database and it will create the dump file full.dmp in the
directory on the server /u01/oracle/my_dump_dir
In some cases where the database is in terabytes the above command will not be feasible, since the
dump file size will be larger than the operating system limit, and hence the export will fail. In this
situation you can create multiple dump files by typing the following command
$expdp scott/tiger FULL=y DIRECTORY=data_pump_dir DUMPFILE=full%U.dmp
FILESIZE=5G LOGFILE=myfullexp.log JOB_NAME=myfullJob
This will create multiple dump files named full01.dmp, full02.dmp, full03.dmp and so on. The
FILESIZE parameter specifies the maximum size of each dump file.
To export all the objects of SCOTT's schema you can run the following export data pump
command.
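A sketch (the dump file name is an assumption):
$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT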
You can omit SCHEMAS since the default mode of Data Pump export is SCHEMAS only.
If you want to export objects of multiple schemas you can specify the following command
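A sketch (the schema list and dump file name are assumptions):
$expdp system/manager DIRECTORY=data_pump_dir DUMPFILE=schemas.dmp SCHEMAS=SCOTT,HR,ALI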
You can use Data Pump Export utility to export individual tables. The following example shows
the syntax to export tables
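A sketch of the command (the directory and dump file names are assumptions; the TABLES clause on the next line completes it):
$expdp hr/hr DIRECTORY=data_pump_dir DUMPFILE=tables.dmp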
TABLES=employees,jobs,departments
If you want to export tables located in a particular tablespace you can type the following
command
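A sketch (the dump file name is an assumption; the tablespace names match the text below):
$expdp system/manager DIRECTORY=data_pump_dir DUMPFILE=tbs.dmp TABLESPACES=tbs_4,tbs_5,tbs_6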
The above will export all the objects located in tbs_4,tbs_5,tbs_6
You can exclude objects while performing a export by using EXCLUDE option of Data Pump
utility. For example you are exporting a schema and don’t want to export tables whose name
starts with “A” then you can type the following command
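A sketch (the dump file name is an assumption; depending on your shell, the quotes may need escaping or a parameter file):
$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT EXCLUDE=TABLE:\"LIKE 'A%'\"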
Then all tables in Scott’s Schema whose name starts with “A “ will not be exported.
Similarly you can also use the INCLUDE option to export only certain objects, like this
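A sketch (same assumptions as the previous example):
$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT INCLUDE=TABLE:\"LIKE 'A%'\"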
This is opposite of EXCLUDE option i.e. it will export only those tables of Scott’s schema
whose name starts with “A”
Similarly you can also exclude INDEXES, CONSTRAINTS, GRANTS, USER, SCHEMA
You can use the QUERY option to export only the required rows. For example, the following will
export only those rows of the employees table whose salary is above 10000 and whose dept id is 10.
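A sketch (the column names department_id and salary are assumptions, and shell quoting may require escaping or a parameter file):
$expdp hr/hr DIRECTORY=data_pump_dir DUMPFILE=emp_query.dmp TABLES=employees QUERY=employees:\"WHERE department_id=10 AND salary>10000\"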
You can suspend running export jobs and later on resume these jobs or kill these jobs using Data
Pump Export. You can start a job in one client machine and then, if because of some work, you
can suspend it. Afterwards when your work has been finished you can continue the job from the
same client, where you stopped the job, or you can restart the job from another client machine.
For example, suppose a DBA starts a full database export by typing the following command at
one client machine CLNT1
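A sketch (the file names are assumptions; the job name matches the one used later when reattaching):
$expdp scott/tiger FULL=y DIRECTORY=data_pump_dir DUMPFILE=full.dmp LOGFILE=myfullexp.log JOB_NAME=myfulljob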
After some time, the DBA wants to stop this job temporarily. Then he presses CTRL+C to enter
into interactive mode. Then he will get the Export> prompt where he can type interactive
commands
Now he wants to stop this export job so he will type the following command
Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([y]/n): y
After finishing his other work, the DBA wants to resume the export job and the client machine
from where he actually started the job is locked because, the user has locked his/her cabin. So
now the DBA will go to another client machine and he reattach to the job by typing the following
command
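A sketch of reattaching to the job:
$expdp scott/tiger ATTACH=myfulljob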
After the job status is displayed, he can issue the CONTINUE_CLIENT command to resume logging
mode and restart the myfulljob job.
Export> CONTINUE_CLIENT
A message is displayed that the job has been reopened, and processing status is output to the
client.
Note: After reattaching to the Job a DBA can also kill the job by typing KILL_JOB, if he doesn’t
want to continue with the export job.
Objects exported by Data Pump Export Utility can be imported into a database using Data Pump
Import Utility. The following describes how to use Data Pump Import utility to import objects
If you want to Import all the objects in a dump file then you can type the following command.
$impdp hr/hr DUMPFILE=dpump_dir1:expfull.dmp FULL=y
LOGFILE=dpump_dir2:full_imp.log
This example imports everything from the expfull.dmp dump file. In this example, a
DIRECTORY parameter is not provided. Therefore, a directory object must be provided on both
the DUMPFILE parameter and the LOGFILE parameter
The following example loads all tables belonging to hr schema to scott schema
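A sketch of the command (the directory object and dump file names are assumptions; the REMAP_SCHEMA clause on the next line completes it):
$impdp system/manager DIRECTORY=data_pump_dir DUMPFILE=hr_schema.dmp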
REMAP_SCHEMA=hr:scott
If SCOTT account exist in the database then hr objects will be loaded into scott schema. If scott
account does not exist, then Import Utility will create the SCOTT account with an unusable
password because, the dump file was exported by the user SYSTEM and imported by the user
SYSTEM who has DBA privileges.
You can use remap_tablespace option to import objects of one tablespace to another tablespace
by giving the command
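A sketch of the command (the directory object and dump file names are assumptions; the REMAP_TABLESPACE clause on the next line completes it):
$impdp system/manager DIRECTORY=data_pump_dir DUMPFILE=expfull.dmp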
REMAP_TABLESPACE=users:sales
The above example loads tables, stored in users tablespace, in the sales
tablespace.
Generating SQL File containing DDL commands using Data Pump Import
You can generate SQL file which contains all the DDL commands which Import would have
executed if you actually run Import utility
The following is an example of using the SQLFILE parameter.
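A sketch of the command (the directory object and dump file names are assumptions; the SQLFILE clause on the next line completes it):
$impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp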
SQLFILE=dpump_dir2:expfull.sql
If you have the IMP_FULL_DATABASE role, you can use this parameter to perform a schema-
mode import by specifying a single schema other than your own or a list of schemas to import.
First, the schemas themselves are created (if they do not already exist), including system and role
grants, password history, and so on. Then all objects contained within the schemas are imported.
Nonprivileged users can specify only their own schemas. In that case, no information about the
schema definition is imported, only the objects contained within it.
Example
The following is an example of using the SCHEMAS parameter. You can create the expdat.dmp
file used in this example by running the example provided for the Export SCHEMAS parameter.
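A sketch of the command (the directory object and log file names are assumptions; the DUMPFILE clause on the next line completes it):
$impdp hr/hr SCHEMAS=hr,oe DIRECTORY=dpump_dir1 LOGFILE=schemas.log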
DUMPFILE=expdat.dmp
The hr and oe schemas are imported from the expdat.dmp file. The log file, schemas.log, is
written to dpump_dir1
The following example shows a simple use of the TABLES parameter to import only the
employees and jobs tables from the expfull.dmp file. You can create the
expfull.dmp dump file used in this example by running the example provided for the Full
Database Export in Previous Topic.
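A sketch of such a command (the directory object name is an assumption):
$impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp TABLES=employees,jobs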
This will import only employees and jobs tables from the DUMPFILE.
Running Import Utility in Interactive Mode
Similar to the DATA PUMP EXPORT utility the Data Pump Import Jobs can also be suspended,
resumed or killed. And, you can attach to an already existing import job from any client
machine.
For example, suppose a DBA starts an import by typing the following command at one client
machine CLNT1
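A sketch (the file names are assumptions; the job name matches the one used later when reattaching):
$impdp scott/tiger FULL=y DIRECTORY=data_pump_dir DUMPFILE=full.dmp LOGFILE=myfullimp.log JOB_NAME=myfulljob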
After some time, the DBA wants to stop this job temporarily. Then he presses CTRL+C to enter
into interactive mode. Then he will get the Import> prompt where he can type interactive
commands
Now he wants to stop this import job so he will type the following command
Import> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([y]/n): y
After finishing his other work, the DBA wants to resume the import job and the client machine
from where he actually started the job is locked because, the user has locked his/her cabin. So
now the DBA will go to another client machine and he reattach to the job by typing the following
command
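A sketch of reattaching to the job:
$impdp scott/tiger ATTACH=myfulljob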
After the job status is displayed, he can issue the CONTINUE_CLIENT command to resume
logging mode and restart the myfulljob job.
Import> CONTINUE_CLIENT
A message is displayed that the job has been reopened, and processing status is output to the
client.
Note: After reattaching to the Job a DBA can also kill the job by typing KILL_JOB, if he
doesn’t want to continue with the import job.
10. Flash Back Features
From Oracle version 9i, Oracle has introduced the Flashback Query feature. It is useful for recovering from
accidental statement failures. For example, suppose a user accidentally deletes rows from a table
and commits the change; then, using a flashback query, he can get back the rows.
The flashback feature depends upon how much undo retention time you have specified. If you
have set the UNDO_RETENTION parameter to 2 hours then Oracle will not overwrite the data
in the undo tablespace, even after it is committed, until 2 hours have passed. Users can recover from
their mistakes made within the last 2 hours only.
For example, suppose John gives a delete statement at 10 AM and commits it. After 1 hour he
realizes that the delete statement was performed by mistake. Now he can give a flashback AS OF
query to get back the deleted rows, like this.
Flashback Query
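A sketch of such a query (the table name emp and the one-hour interval are assumptions):
SQL> SELECT * FROM emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);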
Or
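For example, with an explicit timestamp (the value shown is hypothetical):
SQL> SELECT * FROM emp AS OF TIMESTAMP TO_TIMESTAMP('2007-06-19 10:00:00','YYYY-MM-DD HH24:MI:SS');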
To insert the accidentally deleted rows again into the table, he can type
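A sketch (the MINUS clause avoids re-inserting rows that still exist in the table):
SQL> INSERT INTO emp
     SELECT * FROM emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR)
     MINUS
     SELECT * FROM emp;
SQL> COMMIT;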
You use a Flashback Version Query to retrieve the different versions of specific rows that
existed during a given time interval. A new row version is created whenever a COMMIT
statement is executed.
The Flashback Version Query returns a table with a row for each version of the row that existed
at any time during the time interval you specify. Each row in the table includes pseudocolumns
of metadata about the row version. The pseudocolumns available are VERSIONS_STARTSCN,
VERSIONS_STARTTIME, VERSIONS_ENDSCN, VERSIONS_ENDTIME, VERSIONS_XID and VERSIONS_OPERATION.
TO_CHAR(SYSTIMESTAMP,'YYYY-MM-DD HH24:MI:SS')
----------------------------------------------
2007-06-19 20:30:43
Suppose a user creates an emp table, inserts a row into it and commits the row.
Now a user sitting at another machine erroneously changes the Salary from 5000 to 2000 using
Update statement
Subsequently, a new transaction updates the name of the employee from Sami to Smith.
At this point, the DBA detects the application error and needs to diagnose the problem. The DBA
issues the following query to retrieve versions of the rows in the emp table that correspond to
empno 101. The query uses Flashback Version Query pseudocolumns
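A sketch of such a query (the column names match the emp table used earlier; explicit timestamps could be used instead of MINVALUE and MAXVALUE):
SQL> SELECT versions_xid, versions_starttime, versions_endtime, versions_operation, empno, name, sal
     FROM emp VERSIONS BETWEEN TIMESTAMP MINVALUE AND MAXVALUE
     WHERE empno = 101;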
The output should be read from bottom to top. From the output we can see that an insert has
taken place, then the erroneous update, and then another update that changed the name.
The DBA identifies the transaction 02001003C02 as erroneous and issues the following query to
get the SQL command to undo the change
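A sketch (the transaction id shown is the placeholder from the text; in practice substitute the exact VERSIONS_XID value returned by the version query):
SQL> SELECT undo_sql FROM flashback_transaction_query
     WHERE xid = HEXTORAW('02001003C02');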
Now DBA can execute the command to undo the changes made by the user
1 row updated
Oracle Flashback Table provides the DBA the ability to recover a table or set of tables to a
specified point in time in the past very quickly, easily, and without taking any part of the
database offline. In many cases, Flashback Table eliminates the need to perform more
complicated point-in-time recovery operations.
Flashback Table uses information in the undo tablespace to restore the table. Therefore,
UNDO_RETENTION parameter is significant in Flashing Back Tables to a past state. You can
only flash back tables up to the retention time you specified.
Row movement must be enabled on the table for which you are issuing the FLASHBACK
TABLE statement. You can enable row movement with the following SQL statement:
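For example, for the emp table used below:
SQL> ALTER TABLE emp ENABLE ROW MOVEMENT;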
The following example performs a FLASHBACK TABLE operation on the table emp
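A sketch (the timestamp value is hypothetical):
SQL> FLASHBACK TABLE emp TO TIMESTAMP TO_TIMESTAMP('2007-06-19 09:30:00','YYYY-MM-DD HH24:MI:SS');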
The emp table is restored to its state when the database was at the time specified by the
timestamp.
Example:-
At 17:00 an HR administrator discovers that an employee "JOHN" is missing from the
EMPLOYEE table. This employee was present at 14:00, the last time she ran a report. Someone
accidentally deleted the record for "JOHN" between 14:00 and the present time. She uses
Flashback Table to return the table to its state at 14:00, as shown in this example:
FLASHBACK TABLE EMPLOYEE TO TIMESTAMP
   TO_TIMESTAMP('2007-06-21 14:00:00', 'YYYY-MM-DD HH24:MI:SS')
   ENABLE TRIGGERS;
You have to give the ENABLE TRIGGERS option; otherwise, by default, all database triggers on the
table will be disabled.
Suppose a user now drops the emp table:
SQL> DROP TABLE emp;
Table dropped.
To the user it appears that the table is dropped, but it is actually only renamed and placed in the
Recycle Bin. To recover this dropped table the user can type the command
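A likely form of the command, assuming the dropped table is emp:

SQL> FLASHBACK TABLE emp TO BEFORE DROP;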
You can also restore the dropped table by giving it a different name like this
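For example (the new name emp2 is illustrative):

SQL> FLASHBACK TABLE emp TO BEFORE DROP RENAME TO emp2;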
If you want to recover the space used by a dropped table give the following command
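Presumably:

SQL> PURGE TABLE emp;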
If you want to purge objects of logon user give the following command
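Presumably:

SQL> PURGE RECYCLEBIN;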
If you want to recover space for dropped object of a particular tablespace give the command
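For example, for the USERS tablespace:

SQL> PURGE TABLESPACE users;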
You can also purge only objects from a tablespace belonging to a specific user, using the
following form of the command:
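For example, for objects owned by SCOTT in the USERS tablespace:

SQL> PURGE TABLESPACE users USER scott;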
If you have the SYSDBA privilege, then you can purge all objects from the recycle bin,
regardless of which user owns the objects, using this command:
SQL>PURGE DBA_RECYCLEBIN;
You can create, and then drop, several objects with the same original name, and they will all be
stored in the recycle bin. For example, consider these SQL statements:
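For example (a minimal sketch; the column definitions are illustrative):

SQL> CREATE TABLE emp (empno NUMBER, ename VARCHAR2(20));
SQL> DROP TABLE emp;
SQL> CREATE TABLE emp (empno NUMBER, ename VARCHAR2(20));
SQL> DROP TABLE emp;
SQL> CREATE TABLE emp (empno NUMBER, ename VARCHAR2(20));
SQL> DROP TABLE emp;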
In such a case, each table EMP is assigned a unique name in the recycle bin when it is dropped.
You can use a FLASHBACK TABLE... TO BEFORE DROP statement with the original name of the
table, as shown in this example:
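Presumably:

SQL> FLASHBACK TABLE emp TO BEFORE DROP;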
The most recently dropped table with that original name is retrieved from the recycle bin, with
its original name. You can retrieve it and assign it a new name using a RENAME TO clause. The
following example shows the retrieval from the recycle bin of all three dropped EMP tables from
the previous example, with each assigned a new name:
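A sketch (the new names are illustrative); each statement retrieves the most recently dropped remaining
version of EMP, so the tables come back in reverse order of their drops:

SQL> FLASHBACK TABLE emp TO BEFORE DROP RENAME TO emp_version_3;
SQL> FLASHBACK TABLE emp TO BEFORE DROP RENAME TO emp_version_2;
SQL> FLASHBACK TABLE emp TO BEFORE DROP RENAME TO emp_version_1;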
3. There is no fixed amount of space allocated to the recycle bin, and no guarantee as to
how long dropped objects remain in the recycle bin. Depending upon system activity, a
dropped object may remain in the recycle bin for seconds, or for months.
Oracle Flashback Database lets you quickly recover the entire database from logical data
corruptions or user errors.
To enable Flashback Database, you set up a flash recovery area, and set a flashback retention
target, to specify how far back into the past you want to be able to restore your database with
Flashback Database.
Once you set these parameters, the database, at regular intervals, copies images of each altered
block in every datafile into flashback logs stored in the flash recovery area. These flashback
logs are used to flash the database back to a point in time.
Step 1. Shutdown the database if it is already running and set the following parameters
DB_RECOVERY_FILE_DEST=/d01/ica/flasharea
DB_RECOVERY_FILE_DEST_SIZE=10G
DB_FLASHBACK_RETENTION_TARGET=4320
(Note: DB_FLASHBACK_RETENTION_TARGET is specified in minutes; here we have specified 3 days, i.e.
3 x 24 x 60 = 4320.)
Step 2. Start the instance and mount the database
SQL>startup mount;
Step 3. Now enable the flashback database by giving the following command
SQL>alter database flashback on;
After you have enabled the Flashback Database feature and allowed the database to generate
some flashback logs, run the following query:
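The query is not reproduced here; the ESTIMATED_FLASHBACK_SIZE column of V$FLASHBACK_DATABASE_LOG
holds the estimate:

SQL> SELECT ESTIMATED_FLASHBACK_SIZE FROM V$FLASHBACK_DATABASE_LOG;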
This will show how large the recovery area should be.
To determine the earliest SCN and earliest Time you can Flashback your database, give the
following query:
SQL> SELECT OLDEST_FLASHBACK_SCN, OLDEST_FLASHBACK_TIME
     FROM V$FLASHBACK_DATABASE_LOG;
Suppose a user erroneously drops a schema at 10:00 AM and you, as the DBA, come to know of this at
5 PM. Since you have configured the flash recovery area and set the flashback retention target
to 3 days, you can flash the database back to 9:59 AM by following the given procedure.
1. Start RMAN
$rman target /
2. Run the FLASHBACK DATABASE command to return the database to 9:59AM by
typing the following command
RMAN> FLASHBACK DATABASE TO TIME timestamp('2007-06-21 09:59:00');
or, you can also type this command.
RMAN> FLASHBACK DATABASE TO TIME (SYSDATE-8/24);
3. When the Flashback Database operation completes, you can evaluate the results by
opening the database read-only and running some queries to check whether Flashback
Database has returned the database to the desired state.
Option 1:-
If you are content with the result, you can open the database by performing ALTER
DATABASE OPEN RESETLOGS.
Option 2:-
If you discover that you have chosen the wrong target time for your Flashback
Database operation, you can use RECOVER DATABASE UNTIL to bring the database
forward, or perform FLASHBACK DATABASE again with an SCN further in the past.
You can completely undo the effects of your flashback operation by performing
complete recovery of the database:
RMAN> RECOVER DATABASE;
Option 3:-
If you only want to retrieve some lost data from that past time, you can open the
database read-only, perform a logical export of the data using the Oracle export
utility, then run RECOVER DATABASE to return the database to the present time and
re-import the data using the Oracle import utility.
4. Since in our example only a schema was dropped and the rest of the database is fine, the third
option is the relevant one for us.
Now, come out of RMAN and run EXPORT utility to export the whole schema
$exp userid=system/manager file=scott.dmp owner=SCOTT
5. Now Start RMAN and recover database to the present time
$rman target /
RMAN> RECOVER DATABASE;
6. After the database is recovered, shut down and restart the database in normal mode, and
import the schema by running the IMPORT utility
$imp userid=system/manager file=scott.dmp
Using the LogMiner utility, you can query the contents of online redo log files and archived log
files. Because LogMiner provides a well-defined, easy-to-use, and comprehensive relational
interface to redo log files, it can be used as a powerful data audit tool, as well as a tool for
sophisticated data analysis.
LogMiner Configuration
There are three basic objects in a LogMiner configuration that you should be familiar with: the
source database, the LogMiner dictionary, and the redo log files containing the data of interest:
The source database is the database that produces all the redo log files that you want
LogMiner to analyze.
The LogMiner dictionary allows LogMiner to provide table and column names, instead
of internal object IDs, when it presents the redo log data that you request.
LogMiner uses the dictionary to translate internal object identifiers and datatypes to object
names and external data formats. Without a dictionary, LogMiner returns internal object IDs and
presents data as binary data.
For example, instead of readable column values, the values clause of a reconstructed INSERT
contains only raw data, such as:
values
(HEXTORAW('45465f4748'),HEXTORAW('546563686e6963616c20577269746572'),
HEXTORAW('c229'),HEXTORAW('c3020b'));
The redo log files contain the changes made to the database or database dictionary.
LogMiner requires a dictionary to translate object IDs into object names when it returns redo
data to you. LogMiner gives you three options for supplying the dictionary:
Using the online catalog
Oracle recommends that you use this option when you will have access to the source database
from which the redo log files were created and when no changes to the column definitions in the
tables of interest are anticipated. This is the most efficient and easy-to-use option.
Extracting a LogMiner dictionary to the redo log files
Oracle recommends that you use this option when you do not expect to have access to the source
database from which the redo log files were created, or if you anticipate that changes will be
made to the column definitions in the tables of interest.
Extracting the LogMiner dictionary to a flat file
This option is maintained for backward compatibility with previous releases. This option does
not guarantee transactional consistency. Oracle recommends that you use either the online
catalog or extract the dictionary from redo log files instead.
To direct LogMiner to use the dictionary currently in use for the database, specify the online
catalog as your dictionary source when you start LogMiner, as follows:
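For example:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);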
To extract a LogMiner dictionary to the redo log files, the database must be open and in
ARCHIVELOG mode and archiving must be enabled. While the dictionary is being extracted to
the redo log stream, no DDL statements can be executed. Therefore, the dictionary extracted to
the redo log files is guaranteed to be consistent (whereas the dictionary extracted to a flat file is
not).
To extract dictionary information to the redo log files, use the DBMS_LOGMNR_D.BUILD
procedure with the STORE_IN_REDO_LOGS option. Do not specify a filename or location.
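For example:

SQL> EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);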
When the LogMiner dictionary is in a flat file, fewer system resources are used than when it is
contained in the redo log files. Oracle recommends that you regularly back up the dictionary
extract to ensure correct analysis of older redo log files.
1. Set the initialization parameter, UTL_FILE_DIR, in the initialization parameter file. For
example, to set UTL_FILE_DIR to use /oracle/database as the directory where the
dictionary file is placed, enter the following in the initialization parameter file:
UTL_FILE_DIR = /oracle/database
2. Restart the database so that the new value of UTL_FILE_DIR takes effect:
SQL> startup
3. Execute the DBMS_LOGMNR_D.BUILD procedure, specifying a filename for the dictionary and a
directory path name for the file. For example (the filename dictionary.ora is illustrative):
SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', -
        '/oracle/database/', -
        DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
To mine data in the redo log files, LogMiner needs information about which redo log files to
mine.
You can direct LogMiner to automatically and dynamically create a list of redo log files to
analyze, or you can explicitly specify a list of redo log files for LogMiner to analyze, as follows:
Automatically
If LogMiner is being used on the source database, then you can direct LogMiner to find and
create a list of redo log files for analysis automatically. Use the CONTINUOUS_MINE option
when you start LogMiner.
Manually
Use the DBMS_LOGMNR.ADD_LOGFILE procedure to manually create a list of redo log files
before you start LogMiner. After the first redo log file has been added to the list, each
subsequently added redo log file must be from the same database and associated with the same
database RESETLOGS SCN. When using this method, LogMiner need not be connected to the
source database.
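A sketch of building the list manually; the file names are those used later in this example, the first
call uses the NEW option to start a list, and subsequent calls use ADDFILE:

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
       LOGFILENAME => '/usr/oracle/data/db1arch_1_207_482701534.dbf', -
       OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
       LOGFILENAME => '/usr/oracle/data/db1arch_1_208_482701534.dbf', -
       OPTIONS => DBMS_LOGMNR.ADDFILE);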
The easiest way to examine the modification history of a database is to mine at the source
database and use the online catalog to translate the redo log files. This example shows how to do
the simplest analysis using LogMiner.
SQL> SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS
XID,SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE username IN ('HR', 'OE');
Example of Mining Without Specifying the List of Redo Log Files Explicitly
The previous example explicitly specified the redo log file or files to be mined. However, if you
are mining in the same database that generated the redo log files, then you can mine the
appropriate list of redo log files by just specifying the time (or SCN) range of interest. To mine a
set of redo log files without explicitly specifying them, use the
DBMS_LOGMNR.CONTINUOUS_MINE option to the DBMS_LOGMNR.START_LOGMNR
procedure, and specify either a time range or an SCN range of interest.
This example assumes that you want to use the data dictionary extracted to the redo log files.
Step 1 Determine the timestamp of the redo log file that contains the start of the data
dictionary.
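The query itself is not reproduced in this text; a query along these lines (assuming the dictionary was
extracted into the redo log stream) returns the file name and time:

SQL> SELECT NAME, FIRST_TIME
     FROM V$ARCHIVED_LOG
     WHERE SEQUENCE# = (SELECT MAX(SEQUENCE#)
                        FROM V$ARCHIVED_LOG
                        WHERE DICTIONARY_BEGIN = 'YES');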
NAME FIRST_TIME
-------------------------------------------- --------------------
Step 2 Display all the redo log files that have been generated so far.
This step is not required, but is included to demonstrate that the CONTINUOUS_MINE option
works as expected, as will be shown in Step 4.
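An illustrative way to list the generated files (not necessarily the exact query used in the original
example):

SQL> SELECT NAME FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;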
NAME
----------------------------------------------
/usr/oracle/data/db1arch_1_207_482701534.dbf
/usr/oracle/data/db1arch_1_208_482701534.dbf
/usr/oracle/data/db1arch_1_209_482701534.dbf
/usr/oracle/data/db1arch_1_210_482701534.dbf
Step 3 Start LogMiner with the CONTINUOUS_MINE option so that it finds the required redo log files
automatically; the start time would be the dictionary timestamp found in Step 1 (shown here as a
placeholder):
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
       STARTTIME => '&dictionary_time', -
       ENDTIME => SYSDATE, -
       OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + -
                  DBMS_LOGMNR.COMMITTED_DATA_ONLY + -
                  DBMS_LOGMNR.PRINT_PRETTY_SQL + -
                  DBMS_LOGMNR.CONTINUOUS_MINE);
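Step 4 Query the V$LOGMNR_LOGS view to verify that LogMiner has automatically added the required
redo log files to its list. An illustrative query (the FILENAME column holds the file name):

SQL> SELECT FILENAME AS name FROM V$LOGMNR_LOGS;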
NAME
------------------------------------------------------
/usr/oracle/data/db1arch_1_207_482701534.dbf
/usr/oracle/data/db1arch_1_208_482701534.dbf
/usr/oracle/data/db1arch_1_209_482701534.dbf
/usr/oracle/data/db1arch_1_210_482701534.dbf
Step 5 Query the V$LOGMNR_CONTENTS view. To reduce the number of rows returned, exclude all DML
statements done in the SYS or SYSTEM schema. (The query also specifies a timestamp to exclude
transactions that were involved in the dictionary extraction.)
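The query itself is not shown in this text; an illustrative form (the timestamp is a placeholder for the
dictionary-extraction time noted earlier):

SQL> SELECT USERNAME AS usr, SQL_REDO
     FROM V$LOGMNR_CONTENTS
     WHERE (SEG_OWNER IS NULL OR SEG_OWNER NOT IN ('SYS', 'SYSTEM'))
       AND TIMESTAMP > '&dictionary_extraction_time';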
Note that all reconstructed SQL statements returned by the query are correctly translated.
The SQL_REDO column of the output shows each reconstructed statement in pretty-printed form:
PL/SQL blocks (declare ... begin ... end;), INSERT ... VALUES statements, and UPDATE statements
such as one that sets "LIST_PRICE" = 92.
LOG_ARCHIVE_FORMAT=ica%s.%t.%r.arc
LOG_ARCHIVE_DEST_1="location=/u02/ica/arc1"
LOG_ARCHIVE_DEST_2="location=/u02/ica/arc2"
Step 7: It is recommended that you take a full backup after you have brought the database into archive log mode.
To bring the database back into NOARCHIVELOG mode, follow these steps:
STEP 2: Comment out the following parameters in the parameter file by putting "#" in front of them.
# LOG_ARCHIVE_DEST_1="location=/u02/ica/arc1"
# LOG_ARCHIVE_DEST_2="location=/u02/ica/arc2"
# LOG_ARCHIVE_FORMAT=ica%s.%t.%r.arc
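The remaining commands would presumably be along these lines (assuming the database has already been
shut down):

SQL> startup mount;
SQL> alter database noarchivelog;
SQL> alter database open;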
Shut down the database if it is running: start SQL*Plus, connect as SYSDBA, and shut it down.
$sqlplus
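The commands are not shown in this text; presumably something like:

SQL> connect / as sysdba
SQL> shutdown immediate;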
SQL> Exit
After shutting down the database, copy all the datafiles, logfiles, controlfiles, parameter file and password
file to your backup destination.
TIP:
To identify the datafiles and logfiles, query the dynamic performance views V$DATAFILE and V$LOGFILE before
shutting down.
Let's suppose all the files are in the "/u01/ica" directory. Then the following commands copy all the files to
the backup destination /u02/backup.
$cd /u01/ica
$cp * /u02/backup/
Be sure to remember the destination of each file; this will be useful when restoring from this backup. You
can create a text file and record the destination of each file for future use. Now you can open the database.
To take online backups the database should be running in archivelog mode. To check whether the
database is running in archivelog mode or noarchivelog mode, start SQL*Plus and connect as
SYSDBA.
After connecting, give the command "archive log list"; this will show you the status of archiving.
$sqlplus
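For example (a sketch, assuming OS authentication):

SQL> connect / as sysdba
SQL> archive log list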
If the database is running in archive log mode then you can take online backups.
Let us suppose we want to take an online backup of the "USERS" tablespace. You can query the V$DATAFILE
view to find the names of the datafiles associated with this tablespace. Let's suppose the file is
"/u01/ica/usr1.dbf".
Give the following series of commands to take an online backup of the USERS tablespace.
$sqlplus
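The commands are not reproduced in this text; a sketch, assuming OS authentication and the file and
backup locations used above:

SQL> connect / as sysdba
SQL> alter tablespace users begin backup;
SQL> host cp /u01/ica/usr1.dbf /u02/backup/
SQL> alter tablespace users end backup;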
SQL> exit;
If you have lost one datafile, you don't have any backup, and the datafile does not contain
important objects, then you can drop the damaged datafile and open the database. You will lose all
information contained in the damaged datafile.
The following are the steps to drop a damaged datafile and open the database.
(UNIX)
$sqlplus
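The statements themselves are not reproduced here; a sketch, assuming the damaged file is /u01/ica/usr1.dbf:

SQL> connect / as sysdba
SQL> startup mount;
SQL> alter database datafile '/u01/ica/usr1.dbf' offline drop;
SQL> alter database open;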
If the database is running in noarchivelog mode and you have a full backup, then there are two options
for you:
i. Either drop the damaged datafile, if it does not contain important information that you cannot
afford to lose.
ii. Or restore from the full backup. You will lose all the changes made to the database since the last full
backup.
STEP 2: Restore from the full database backup, i.e. copy all the files from the backup to their original locations.
(UNIX)
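For example, if the backup was taken to /u02/backup as described above:

$cp /u02/backup/* /u01/ica/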
This will copy all the files from the backup directory to their original destinations. Also remember to copy the
control files to all the mirrored locations.
RECOVERING FROM LOSS OF A CONTROL FILE
If you have lost a control file and it is mirrored, then simply copy the control file from a mirrored location
to the damaged location and open the database.
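For example (the file names are illustrative), assuming control01.ctl was damaged and control02.ctl is an
intact mirrored copy:

$cp /u01/ica/control02.ctl /u01/ica/control01.ctl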
If you have lost all the mirrored control files but all the datafiles and logfiles are intact, then you can
re-create the control file.
If you have already taken a backup of the control file creation statement by giving the command "ALTER
DATABASE BACKUP CONTROLFILE TO TRACE;", and you have not added any tablespace since then, just
re-create the control file by executing that statement.
But if you have added any new tablespace after generating the create controlfile statement, then you have to
edit the script and include the filename and size of the new file in the script file.
Suppose your script file containing the control file creation statement is "CR.SQL". You can run it with the
instance started in NOMOUNT state, as sketched below.
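A sketch of running the script, assuming all datafiles and online logs are intact and the script uses the
NORESETLOGS option:

$sqlplus
SQL> connect / as sysdba
SQL> startup nomount;
SQL> @CR.SQL
SQL> alter database open;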
If you do not have a backup of the control file creation statement, then you have to manually give the
CREATE CONTROLFILE statement. You have to write the file names and sizes of all the datafiles; you
will lose any datafiles which you do not include.
Refer to "Managing Control File" topic for the CREATE CONTROL FILE statement.
If you have lost one datafile, then follow the steps shown below.
STEP 1. Shutdown the Database if it is running.
$sqlplus
SQL>Startup mount;
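The restore and recovery commands are not reproduced here; a sketch, assuming the lost file is
/u01/ica/usr1.dbf and the offline backup is in /u02/backup:

SQL> host cp /u02/backup/usr1.dbf /u01/ica/usr1.dbf
SQL> recover datafile '/u01/ica/usr1.dbf';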
If all archived log files are available, then recovery should go on smoothly. After you get the "Media
recovery complete" message, open the database with ALTER DATABASE OPEN.
If you have lost the archived log files, then immediately shut down the database and take a full offline
backup.
Suppose a user has dropped a crucial table accidentally and you have to recover the dropped table.
You took a full backup of the database on Monday 13-Aug-2007; the table was created on
Tuesday 14-Aug-2007 and thousands of rows were inserted into it. Some user accidentally dropped the table
on Thursday 16-Aug-2007 and nobody noticed this until Saturday.
STEP 2. Restore all the datafiles, logfiles and control file from the full offline backup which was taken on
Monday.
STEP 4. Then give the following command to recover the database until the specified time.
SQL> recover database until time '2007-08-16:13:55:00'
STEP 5. Open the database and reset the logs, because you have performed an incomplete recovery,
like this
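The command, presumably, is:

SQL> alter database open resetlogs;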
STEP 6. After the database is open, export the table to a dump file using the Export utility.
STEP 7. Restore from the full database backup which you have taken on Saturday.
Note: In Oracle 10g you can easily recover dropped tables by using the Flashback feature. For further
information please refer to the Flashback Features topic in this book.