Installing and upgrading the Oracle Database server and application tools
Allocating system storage and planning future storage requirements for the database system
Creating primary database storage structures (tablespaces) after application developers have
designed an application
Creating primary objects (tables, views, indexes) once application developers have designed an
application
Modifying the database structure, as necessary, from information given by application developers
SID     = myicadb
DB_NAME = myicadb

TABLESPACES (we will have 6 tablespaces in this database, with 1 datafile in each tablespace):

Tablespace Name   Datafile                                   Size
system            /u01/oracle/oradata/myica/sys.dbf          250M
users             /u01/oracle/oradata/myica/usr.dbf          100M
undotbs           /u01/oracle/oradata/myica/undo.dbf         100M
temp              /u01/oracle/oradata/myica/temp.dbf         100M
index_data        /u01/oracle/oradata/myica/indx.dbf         100M
sysaux            /u01/oracle/oradata/myica/sysaux.dbf       100M
LOGFILES (we will have 2 log groups in the database):

Logfile Group   Member                                   Size
Group 1         /u01/oracle/oradata/myica/log1.ora       50M
Group 2         /u01/oracle/oradata/myica/log2.ora       50M

CONTROL FILE   = /u01/oracle/oradata/myica/control.ora
PARAMETER FILE = /u01/oracle/dbs/initmyicadb.ora
(remember, the parameter file name should be of the format init<sid>.ora and it should be
in the ORACLE_HOME/dbs directory on Unix o/s and the ORACLE_HOME/database directory on Windows o/s)
Now let us start creating the database.
Step 1: Login to oracle account and make directories for your database.
$mkdir /u01/oracle/oradata/myica
$mkdir /u01/oracle/oradata/myica/bdump
$mkdir /u01/oracle/oradata/myica/udump
$mkdir /u01/oracle/oradata/myica/cdump
Step 2: Create the parameter file by copying the default template (init.ora) and set the required
parameters
$cd /u01/oracle/dbs
$cp init.ora initmyicadb.ora
Now open the parameter file and set the following parameters
$vi initmyicadb.ora
DB_NAME=myicadb
DB_BLOCK_SIZE=8192
CONTROL_FILES=/u01/oracle/oradata/myica/control.ora
BACKGROUND_DUMP_DEST=/u01/oracle/oradata/myica/bdump
USER_DUMP_DEST=/u01/oracle/oradata/myica/udump
CORE_DUMP_DEST=/u01/oracle/oradata/myica/cdump
UNDO_TABLESPACE=undotbs
UNDO_MANAGEMENT=AUTO
After entering the above parameters save the file by pressing Esc :wq
Step 3: Now set ORACLE_SID environment variable and start the instance.
$export ORACLE_SID=myicadb
$sqlplus
Enter User: / as sysdba
SQL>startup nomount
Step 4: Give the create database command
Here I am not specifying optional settings such as language, character set etc. For these
settings Oracle will use the default values. I am giving the barest command to create the
database, to keep it simple.
The command to create the database is:
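The CREATE DATABASE statement is not reproduced in this copy; a minimal sketch that matches the layout planned above (datafile names and sizes as listed earlier, other clauses left to their defaults) would look like this:
SQL>CREATE DATABASE myicadb
    DATAFILE '/u01/oracle/oradata/myica/sys.dbf' SIZE 250M
    SYSAUX DATAFILE '/u01/oracle/oradata/myica/sysaux.dbf' SIZE 100M
    UNDO TABLESPACE undotbs
       DATAFILE '/u01/oracle/oradata/myica/undo.dbf' SIZE 100M
    DEFAULT TEMPORARY TABLESPACE temp
       TEMPFILE '/u01/oracle/oradata/myica/temp.dbf' SIZE 100M
    LOGFILE GROUP 1 ('/u01/oracle/oradata/myica/log1.ora') SIZE 50M,
            GROUP 2 ('/u01/oracle/oradata/myica/log2.ora') SIZE 50M;
The users and index_data tablespaces from the plan above would then be created separately with CREATE TABLESPACE statements.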
The above script will take several minutes. After the above script is finished, run
the CATPROC.SQL script to install the procedural option.
SQL>@/u01/oracle/rdbms/admin/catproc.sql
This script will also take several minutes to complete.
Step 7: Now change the passwords for SYS and SYSTEM account, since the default passwords
change_on_install and manager are known by everybody.
SQL>alter user sys identified by myica;
SQL>alter user system identified by myica;
Step 8: Create additional user accounts. You can create as many user accounts as you like. Let us create
the popular account SCOTT.
SQL>create user scott default tablespace users
identified by tiger quota 10M on users;
SQL>grant connect to scott;
Step 9: Add this database SID in listener.ora file and restart the listener process.
$cd /u01/oracle/network/admin
$vi listener.ora
(This file will already contain sample entries. Copy and paste one sample entry and edit
the SID setting)
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS =(PROTOCOL = TCP)(HOST=200.200.100.1)(PORT = 1521))
)
)
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME =/u01/oracle)
(PROGRAM = extproc)
)
(SID_DESC =
(SID_NAME=ORCL)
(ORACLE_HOME=/u01/oracle)
)
#Add these lines
(SID_DESC =
(SID_NAME=myicadb)
(ORACLE_HOME=/u01/oracle)
)
)
Save the file by pressing Esc :wq
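After saving listener.ora, reload or restart the listener so that it picks up the new SID entry, for example using the lsnrctl utility:
$lsnrctl stop
$lsnrctl start
(or simply: $lsnrctl reload)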
Separate user data from data dictionary data to reduce contention among dictionary
objects and schema objects for the same datafiles.
Separate data of one application from the data of another to prevent multiple applications
from being affected if a tablespace must be taken offline.
Store the datafiles of different tablespaces on different disk drives to reduce I/O
contention.
Take individual tablespaces offline while others remain online, providing better overall
availability.
Concurrency and speed of space operations are improved, because space allocations and
deallocations modify locally managed resources (bitmaps stored in the datafile headers) rather
than requiring centrally managed resources such as enqueues
A bigfile tablespace is a tablespace with a single, but very large (up to 4G blocks)
datafile. Traditional smallfile tablespaces, in contrast, can contain multiple datafiles,
but the files cannot be as large. Bigfile tablespaces can reduce the number of datafiles
needed for a database.
To create a bigfile tablespace give the following command
SQL> CREATE BIGFILE TABLESPACE ica_bigtbs
DATAFILE '/u02/oracle/ica/bigtbs01.dbf' SIZE 50G;
Option 1
You can extend the size of a tablespace by increasing the size of an existing datafile
by typing the following command
SQL> alter database datafile
'/u01/oracle/data/icatbs01.dbf' resize 100M;
This will increase the size from 50M to 100M
Option 2
You can also extend the size of a tablespace by adding a new datafile to it.
This is useful if the existing datafile has reached the o/s file size limit or the drive
where the file resides does not have free space. To add a new datafile to an existing
tablespace give the following command.
SQL> alter tablespace ica
add datafile '/u02/oracle/ica/icatbs02.dbf' size 50M;
Option 3
You can also use the auto extend feature of a datafile. In this case, Oracle will automatically
increase the size of a datafile whenever space is required. You can specify by how
much the file should grow and the maximum size to which it should extend.
To make an existing datafile auto extendable give the following command
SQL> alter database datafile
'/u01/oracle/ica/icatbs01.dbf' autoextend on next
5M maxsize 500M;
You can also make a datafile auto extendable while creating a new tablespace itself by
giving the following command.
SQL> create tablespace ica datafile
'/u01/oracle/ica/icatbs01.dbf' size 50M autoextend
on next 5M maxsize 500M;
You can decrease the size of a tablespace by decreasing the datafiles associated with it.
You can decrease a datafile only up to the size of the empty space in it. To decrease the size of a
datafile give the following command
SQL> alter database datafile
'/u01/oracle/ica/icatbs01.dbf'
resize 30M;
Taking Tablespaces Offline or Online
You can take an online tablespace offline so that it is temporarily unavailable for
general use. The rest of the database remains open and available for users to access
data. Conversely, you can bring an offline tablespace online to make the schema
objects within the tablespace available to database users. The database must be open
to alter the availability of a tablespace.
To alter the availability of a tablespace, use the ALTER TABLESPACE statement. You
must have the ALTER TABLESPACE or MANAGE TABLESPACE system privilege.
To Take a Tablespace Offline give the following command
SQL>alter tablespace ica offline;
To again bring it back online give the following command.
SQL>alter tablespace ica online;
To take an individual datafile offline, type the following command
SQL>alter database datafile
'/u01/oracle/ica/ica_tbs01.dbf' offline;
Again, to bring it back online give the following command
SQL> alter database datafile
'/u01/oracle/ica/ica_tbs01.dbf' online;
Note: You cannot take individual datafiles offline if the database is running in
NOARCHIVELOG mode. If the datafile has become corrupt or gone missing while the
database is running in NOARCHIVELOG mode, then you can only drop it by giving
the following command
SQL>alter database datafile
'/u01/oracle/ica/ica_tbs01.dbf'
offline for drop;
Making a Tablespace Read only.
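A tablespace is made read only, and made read/write again, with ALTER TABLESPACE; a short illustration using the ica tablespace from the other examples:
SQL>alter tablespace ica read only;
SQL>alter tablespace ica read write;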
Dropping Tablespaces
You can drop a tablespace and its contents (the segments contained in the
tablespace) from the database if the tablespace and its contents are no longer
required. You must have the DROP TABLESPACE system privilege to drop a
tablespace.
Caution: Once a tablespace has been dropped, the data in the tablespace is not
recoverable. Therefore, make sure that all data contained in a tablespace to be
dropped will not be required in the future. Also, immediately before and after
dropping a tablespace from a database, back up the database completely
To drop a tablespace give the following command.
SQL> drop tablespace ica;
This will drop the tablespace only if it is empty. If it is not empty and you want to
drop it anyway, then add the following keywords
SQL>drop tablespace ica including contents;
This will drop the tablespace even if it is not empty. But the datafiles will not be
deleted; you have to use operating system commands to delete the files.
But if you also include the AND DATAFILES keywords, then the associated datafiles will be
deleted from the disk as well, as shown below.
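For example:
SQL>drop tablespace ica including contents and datafiles;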
You can use the resize clause to increase or decrease the size of a
temporary tablespace. The following statement resizes a temporary
file:
SQL>ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' RESIZE 18M;
Tablespace Groups
You can specify a tablespace group name wherever a tablespace name would
appear when you assign a default temporary tablespace for the database or a
temporary tablespace for a user.
The DBMS_SPACE_ADMIN package provides the following procedures:

Procedure                          Description
SEGMENT_VERIFY
SEGMENT_CORRUPT
SEGMENT_DROP_CORRUPT
SEGMENT_DUMP
TABLESPACE_VERIFY
TABLESPACE_REBUILD_BITMAPS
TABLESPACE_FIX_BITMAPS
TABLESPACE_REBUILD_QUOTAS
TABLESPACE_MIGRATE_FROM_LOCAL      Used to migrate a locally managed SYSTEM tablespace to a
                                   dictionary-managed SYSTEM tablespace.
TABLESPACE_MIGRATE_TO_LOCAL
TABLESPACE_RELOCATE_BITMAPS
TABLESPACE_FIX_SEGMENT_STATES
Be careful when using the above procedures; if they are not used properly you will
corrupt your database. Contact Oracle Support before using these
procedures.
Following are some of the Scenarios where you can use the above
procedures
Scenario 1: Fixing Bitmap When Allocated Blocks are Marked Free (No
Overlap)
You cannot drop a segment because the bitmap has segment blocks
marked "free". The system has automatically marked the segment
corrupted.
In this scenario, perform the following tasks:
1. Call the SEGMENT_VERIFY procedure with the
SEGMENT_VERIFY_EXTENTS_GLOBAL option. If no overlaps are reported, then
proceed with steps 2 through 5.
2. Call the SEGMENT_DUMP procedure to dump the DBA ranges allocated to the
segment.
3. For each range, call TABLESPACE_FIX_BITMAPS with the
TABLESPACE_EXTENT_MAKE_FREE option to mark the space as free.
4. Call SEGMENT_DROP_CORRUPT to drop the SEG$ entry.
5. Call TABLESPACE_REBUILD_QUOTAS to fix up quotas.
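A heavily hedged sketch of this call sequence follows. The tablespace name, relative file number and block numbers are purely illustrative (in practice they come from DBA_SEGMENTS and from the SEGMENT_DUMP output), and the exact parameter lists shown are assumptions that must be verified against the DBMS_SPACE_ADMIN documentation before use:
SQL> -- 1. verify the segment (file 4, block 33 are assumed values for the segment header)
SQL> EXEC DBMS_SPACE_ADMIN.SEGMENT_VERIFY('ICA', 4, 33, DBMS_SPACE_ADMIN.SEGMENT_VERIFY_EXTENTS_GLOBAL);
SQL> -- 2. dump the DBA ranges allocated to the segment
SQL> EXEC DBMS_SPACE_ADMIN.SEGMENT_DUMP('ICA', 4, 33);
SQL> -- 3. for each reported range, mark the space as free (block range 34-83 assumed)
SQL> EXEC DBMS_SPACE_ADMIN.TABLESPACE_FIX_BITMAPS('ICA', 4, 34, 83, 'TABLESPACE_EXTENT_MAKE_FREE');
SQL> -- 4. drop the SEG$ entry of the corrupted segment
SQL> EXEC DBMS_SPACE_ADMIN.SEGMENT_DROP_CORRUPT('ICA', 4, 33);
SQL> -- 5. fix up the quotas
SQL> EXEC DBMS_SPACE_ADMIN.TABLESPACE_REBUILD_QUOTAS('ICA');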
Transporting Tablespaces
You can query the V$TRANSPORTABLE_PLATFORM view to see which platforms are supported
and to determine each platform's endian format (byte ordering):

PLATFORM_ID ENDIAN_FORMAT  PLATFORM_NAME
----------- -------------- ------------------------------
          1 Big            Solaris[tm] OE (32-bit)
          2 Big            Solaris[tm] OE (64-bit)
          7 Little         Microsoft Windows NT
         10 Little         Linux IA (32-bit)
          6 Big            AIX-Based Systems (64-bit)
          3 Big            HP-UX (64-bit)
          5 Little         HP Tru64 UNIX
          4 Big            HP-UX IA (64-bit)
         11 Little         Linux IA (64-bit)
         15 Little         HP Open VMS

10 rows selected.
If the source platform and the target platform are of different endianness,
then an additional step must be done on either the source or target platform
to convert the tablespace being transported to the target format. If they are
of the same endianness, then no conversion is necessary and tablespaces
can be transported as if they were on the same platform.
Copy the datafiles and the export file to the target database. You can
do this using any facility for copying flat files (for example, an
operating system copy utility, ftp, the DBMS_FILE_TRANSFER package, or
publishing on CDs).
If you have transported the tablespace set to a platform with different
endianness from the source platform, and you have not performed a
source-side conversion to the endianness of the target platform, you
should perform a target-side conversion now.
5. Plug in the tablespace.
Invoke the Import utility to plug the set of tablespaces into the target
database.
Tablespace     Datafile
ica_sales_1    /u01/oracle/oradata/ica_salesdb/ica_sales_101.dbf
ica_sales_2    /u01/oracle/oradata/ica_salesdb/ica_sales_201.dbf
This step is only necessary if you are transporting the tablespace set to a platform
different from the source platform. If ica_sales_1 and ica_sales_2 were being
transported to a different platform, you can execute the following query on both
platforms to determine if the platforms are supported and their endian formats:
SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
You can see that the endian formats are different and thus a conversion is necessary
for transporting the tablespace set.
Step 2: Pick a Self-Contained Set of Tablespaces
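The check referred to below as "the above" is presumably a call to the DBMS_TTS.TRANSPORT_SET_CHECK procedure against the tablespaces being transported; a sketch (TRUE requests that referential integrity constraints also be checked) is:
SQL> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('ica_sales_1,ica_sales_2', TRUE);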
After executing the above, give the following query to see whether there are any
violations.
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;
VIOLATIONS
---------------------------------------------------------------------------
Constraint DEPT_FK between table SAMI.EMP in tablespace ICA_SALES_1 and table
SAMI.DEPT in tablespace OTHER
Partitioned table SAMI.SALES is partially contained in the transportable set
If ica_sales_1 and ica_sales_2 are being transported to a different platform, and the
endianness of the platforms is different, and if you want to convert before transporting
the tablespace set, then convert the datafiles composing
the ica_sales_1 and ica_sales_2 tablespaces. You have to use RMAN utility to
convert datafiles
$ RMAN TARGET /
Recovery Manager: Release 10.1.0.0.0
Copyright (c) 1995, 2003, Oracle Corporation.
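The conversion itself is done with the RMAN CONVERT command; a sketch (the target platform name is taken from the platform list shown earlier, and the output FORMAT directory is an illustrative assumption) is:
RMAN> CONVERT TABLESPACE ica_sales_1, ica_sales_2
      TO PLATFORM 'Microsoft Windows NT'
      FORMAT '/tmp/transport_to_win/%U';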
Transport both the datafiles and the export file of the tablespaces to a place accessible
to the target database. You can use any facility for copying flat files (for example, an
operating system copy utility, ftp, the DBMS_FILE_TRANSFER package, or publishing on
CDs).
After this statement executes successfully, all tablespaces in the set being copied
remain in read-only mode. Check the import logs to ensure that no error has occurred.
Now, put the tablespaces into read/write mode as follows:
ALTER TABLESPACE ica_sales_1 READ WRITE;
ALTER TABLESPACE ica_sales_2 READ WRITE;
Oracle provides many data dictionary views to see information about tablespaces
and datafiles. Some of them are:
To view information about Tablespaces in a database give the following query
SQL>select * from dba_tablespaces
SQL>select * from v$tablespace;
To view information about Datafiles
SQL>select * from dba_data_files;
SQL>select * from v$datafile;
To view information about Tempfiles
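The corresponding views for tempfiles are DBA_TEMP_FILES and V$TEMPFILE:
SQL>select * from dba_temp_files;
SQL>select * from v$tempfile;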
To rename or relocate the datafiles of a tablespace, follow these steps:
1. Take the tablespace offline.
2. Rename or relocate the datafiles using operating system commands.
3. Give the ALTER TABLESPACE with RENAME DATAFILE option to change the
filenames within the database.
4. Bring the tablespace online.
For example, suppose you have a tablespace users with the following datafiles
/u01/oracle/ica/usr01.dbf
/u01/oracle/ica/usr02.dbf
Now you want to relocate /u01/oracle/ica/usr01.dbf
to /u02/oracle/ica/usr01.dbf and want to
rename /u01/oracle/ica/usr02.dbf to /u01/oracle/ica/users02.dbf. Then
follow the given steps
1. Take the tablespace offline.
2. Relocate the file /u01/oracle/ica/usr01.dbf to /u02/oracle/ica/usr01.dbf and rename
the file /u01/oracle/ica/usr02.dbf to /u01/oracle/ica/users02.dbf using o/s commands.
$mv /u01/oracle/ica/usr01.dbf /u02/oracle/ica/usr01.dbf
$mv /u01/oracle/ica/usr02.dbf /u01/oracle/ica/users02.dbf
3. Now start SQLPLUS and type the following command to rename and relocate
these files
SQL> alter tablespace users rename file
'/u01/oracle/ica/usr01.dbf', '/u01/oracle/ica/usr02.dbf'
to
'/u02/oracle/ica/usr01.dbf', '/u01/oracle/ica/users02.dbf';
4. Bring the tablespace online.
Adding a New Redo Logfile Group to the Database
To add a new Redo Logfile group to the database give the following
command
SQL>alter database add logfile group 3
'/u01/oracle/ica/log3.ora' size 10M;
Note: You can add groups to a database up to the MAXLOGFILES
setting you have specified at the time of creating the database. If
you want to change the MAXLOGFILES setting you have to create a new
controlfile.
You can drop a member from a log group only if the group has
more than one member and it is not the current group. If you
want to drop members from the current group, force a log switch or
wait until a log switch occurs and another group becomes current.
To force a log switch give the following command
SQL>alter system switch logfile;
The following command can be used to drop a logfile member
SQL>alter database drop logfile member
'/u01/oracle/ica/log11.ora';
Note: When you drop logfiles the files are not deleted from the disk.
You have to use O/S command to delete the files from disk.
Dropping Logfile Group
Similarly, you can also drop a logfile group, but only if the database has
more than two groups and it is not the current group.
SQL>alter database drop logfile group 3;
Note: When you drop logfiles the files are not deleted from the disk.
You have to use O/S command to delete the files from disk.
Resizing Logfiles
2. Move the logfile from Old location to new location using operating system command
$mv /u01/oracle/ica/log1.ora /u02/oracle/ica/log1.ora
This statement overcomes two situations where dropping redo logs is not
possible:
If the corrupt redo log file has not been archived, use the UNARCHIVED keyword
in the statement.
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;
This statement clears the corrupted redo logs and avoids archiving them.
The cleared redo logs are available for use even though they were not
archived.
If you clear a log file that is needed for recovery of a backup, then you can
no longer recover from that backup. The database writes a message in the
alert log describing the backups from which you cannot recover
To see how many logfile groups there are and their status, query the V$LOG view:

MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM
------- --- -------- ------------- ---------
      1 YES ACTIVE        61515628 21-JUN-07
      1 NO  CURRENT       41517595 21-JUN-07
      1 YES INACTIVE      31511666 21-JUN-07
      1 YES INACTIVE      21513647 21-JUN-07
To See how many members are there and where they are located
give the following query
SQL>SELECT * FROM V$LOGFILE;
GROUP# STATUS  MEMBER
------ ------- ----------------------------------
     1         /U01/ORACLE/ICA/LOG1.ORA
     2         /U01/ORACLE/ICA/LOG2.ORA
Steps:
1. Shutdown the Database.
SQL>SHUTDOWN IMMEDIATE;
2. Copy the control file from old location to new location using operating system
command. For example.
$cp /u01/oracle/ica/control.ora /u02/oracle/ica/control.ora
3. Now open the parameter file and specify the new location like this
CONTROL_FILES=/u01/oracle/ica/control.ora
Change it to
CONTROL_FILES=/u01/oracle/ica/control.ora,/u02/oracle/ica
/control.ora
3. Now open the c.sql file in a text editor and set the database name from ica to prod as
shown in the example below
CREATE CONTROLFILE
SET DATABASE prod
LOGFILE GROUP 1 ('/u01/oracle/ica/redo01_01.log',
'/u01/oracle/ica/redo01_02.log'),
GROUP 2 ('/u01/oracle/ica/redo02_01.log',
'/u01/oracle/ica/redo02_02.log'),
GROUP 3 ('/u01/oracle/ica/redo03_01.log',
'/u01/oracle/ica/redo03_02.log')
RESETLOGS
DATAFILE '/u01/oracle/ica/system01.dbf' SIZE 3M,
'/u01/oracle/ica/rbs01.dbs' SIZE 5M,
'/u01/oracle/ica/users01.dbs' SIZE 5M,
'/u01/oracle/ica/temp01.dbs' SIZE 5M
MAXLOGFILES 50
MAXLOGMEMBERS 3
MAXLOGHISTORY 400
MAXDATAFILES 200
MAXINSTANCES 6
ARCHIVELOG;
DATAFILES =
/u01/oracle/ica/sys.dbf
/u01/oracle/ica/usr.dbf
/u01/oracle/ica/rbs.dbf
/u01/oracle/ica/tmp.dbf
/u01/oracle/ica/sysaux.dbf
LOGFILE=
/u01/oracle/ica/log1.ora
/u01/oracle/ica/log2.ora
Now you want to copy this database to SERVER 2, and on SERVER 2 you don't
have a /u01 filesystem. On SERVER 2 you have a /d01 filesystem.
To clone this database on SERVER 2 do the following.
Steps:
1. On SERVER 2 install the same version of o/s and the same version of Oracle as on SERVER 1.
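(The trace file referred to in the next step is produced on SERVER 1 by backing up the control file to trace; a sketch of the command, run while connected to the source database, is:)
SQL> alter database backup controlfile to trace;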
Now, go to the USER_DUMP_DEST directory and open the latest trace file. This file
will contain the steps as well as the CREATE CONTROLFILE statement. Copy
the CREATE CONTROLFILE statement and paste it into a file. Let the filename
be cr.sql
4. Open the parameter file on SERVER 2 and change the following parameters
CONTROL_FILES=/d01/oracle/ica/control.ora
BACKGROUND_DUMP_DEST=/d01/oracle/ica/bdump
USER_DUMP_DEST=/d01/oracle/ica/udump
CORE_DUMP_DEST=/d01/oracle/ica/cdump
LOG_ARCHIVE_DEST_1='LOCATION=/d01/oracle/ica/arc1'
5. Now, open the cr.sql file in a text editor and change the locations like this
CREATE CONTROLFILE
SET DATABASE prod
LOGFILE GROUP 1 ('/d01/oracle/ica/log1.ora'),
        GROUP 2 ('/d01/oracle/ica/log2.ora')
DATAFILE '/d01/oracle/ica/sys.dbf' SIZE 300M,
 '/d01/oracle/ica/rbs.dbf' SIZE 50M,
 '/d01/oracle/ica/usr.dbf' SIZE 50M,
 '/d01/oracle/ica/tmp.dbf' SIZE 50M,
 '/d01/oracle/ica/sysaux.dbf' SIZE 100M
MAXLOGFILES 50
MAXLOGMEMBERS 3
MAXLOGHISTORY 400
MAXDATAFILES 200
MAXINSTANCES 6
ARCHIVELOG;
When the system is first running in the production environment, you may be
unsure of the space requirements of the undo tablespace. In this case, you can
enable automatic extension for datafiles of the undo tablespace so that they
automatically increase in size when more space is needed
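Before the steps below, an undo tablespace (myundo, as used in the parameters that follow) has to exist; a minimal sketch of creating one with an auto-extending datafile (the file name and sizes are illustrative) is:
SQL> CREATE UNDO TABLESPACE myundo
     DATAFILE '/u01/oracle/ica/myundo01.dbf' SIZE 500M
     AUTOEXTEND ON NEXT 100M MAXSIZE 2G;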
2. Shutdown the Database and set the following parameters in parameter file.
UNDO_MANAGEMENT=AUTO
UNDO_TABLESPACE=myundo
3. Start the Database.
The undo space needed can be estimated as:
UndoSpace = UR * UPS + overhead
where:
UR is UNDO_RETENTION in seconds. This value should take into consideration long-running queries and any flashback requirements.
UPS is the number of undo data blocks generated per second.
overhead is the small overhead for metadata (transaction tables, bitmaps, and so forth).
The result is in database blocks; multiply by DB_BLOCK_SIZE to get the size in bytes.
As an example, if UNDO_RETENTION is set to 3 hours and the transaction rate (UPS) is 100 undo
blocks per second, with an 8K block size, the required undo space is computed as follows:
(3 * 3600 * 100 * 8K) = 8.24GB
To get the values for UPS and overhead, query the V$UNDOSTAT view by giving the following
statement
SQL> select * from V$UNDOSTAT;
If the undo tablespace is full, you can resize existing datafiles or add new datafiles to
it.
The following example extends an existing datafile
SQL> alter database datafile '/u01/oracle/ica/undo_tbs.dbf' resize
700M;
The following example adds a new datafile to undo tablespace
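A sketch of such a command (the file name and size are illustrative):
SQL> alter tablespace myundo
     add datafile '/u01/oracle/ica/undo_tbs02.dbf' size 200M;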
An undo tablespace can only be dropped if it is not currently used by any instance. If the undo
tablespace contains any outstanding transactions (for example, a transaction died but has not yet
been recovered), the DROP TABLESPACE statement fails.
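Switching to a new undo tablespace is done with ALTER SYSTEM; for example (myundo2 being the new undo tablespace):
SQL> alter system set undo_tablespace = myundo2;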
Assuming myundo is the current undo tablespace, after this command successfully executes, the
instance uses myundo2 in place of myundo as its undo tablespace.
SQL Loader
SQL LOADER utility is used to load data from other data sources into Oracle. For
example, if you have a table in FOXPRO, ACCESS or SYBASE or any other third
party database, you can use SQL Loader to load the data into Oracle tables. SQL
Loader will only read the data from flat files. So if you want to load the data
from Foxpro or any other database, you have to first convert that data into a delimited
format flat file or fixed length format flat file, and then use SQL Loader to load the
data into Oracle.
Following is the procedure to load the data from a third party database into Oracle using
SQL Loader.
1. Convert the Data into Flat file using third party database command.
2. Create the Table Structure in Oracle Database using appropriate datatypes
3. Write a Control File, describing how to interpret the flat file and options to load
the data.
4. Execute SQL Loader utility specifying the control file in the command line
argument
To understand it better let us see the following case study.
Suppose you have a table in MS-ACCESS by name EMP, running under Windows
O/S, with the following structure
EMPNO   INTEGER
NAME    TEXT(50)
SAL     CURRENCY
JDATE   DATE
This table contains some 10,000 rows. Now you want to load the data from this table
into an Oracle Table. Oracle Database is running in LINUX O/S.
Solution
Steps
Start MS-Access and convert the table into a comma delimited flat file (popularly known
as a CSV file), by clicking on the File/Save As menu. Let the delimited file name
be emp.csv
1. Now transfer this file to Linux Server using FTP command
a. Go to Command Prompt in windows
b. At the command prompt type FTP followed by IP address of the server
running Oracle.
FTP will then prompt you for username and password to connect to the
Linux Server. Supply a valid username and password of Oracle User in
Linux
For example:C:\>ftp 200.200.100.111
Name: oracle
Password:oracle
FTP>
c. Now give PUT command to transfer file from current Windows machine
to Linux machine.
FTP>put
Local file:C:\>emp.csv
remote-file:/u01/oracle/emp.csv
File transferred in 0.29 Seconds
FTP>
d. Now after the file is transferred quit the FTP utility by
typing bye command.
FTP>bye
Good-Bye
2. Now go to the Linux machine and create a table in Oracle with the same
structure as in MS-ACCESS by taking appropriate datatypes. For
example, create a table like this
$sqlplus scott/tiger
SQL>CREATE TABLE emp (empno number(5),
      name  varchar2(50),
      sal   number(10,2),
      jdate date);
3. After creating the table, you have to write a control file describing the actions
which SQL Loader should do. You can use any text editor to write the
control file. Now let us write a controlfile for our case study
$vi emp.ctl
1) LOAD DATA
2) INFILE '/u01/oracle/emp.csv'
3) BADFILE '/u01/oracle/emp.bad'
4) DISCARDFILE '/u01/oracle/emp.dsc'
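The remaining lines of this control file (referred to as lines 5, 6 and 7 in the notes below) are not reproduced above; a sketch of what they would typically look like for this case, assuming comma-separated fields and a dd-mm-yyyy date format (both assumptions), is:
5) INSERT INTO TABLE emp
6) FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
7) (empno, name, sal, jdate DATE 'dd-mm-yyyy')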
Notes:
(Do not write the line numbers, they are meant for explanation purposes only)
1. The LOAD DATA statement is required at the beginning of the control file.
2. The INFILE option specifies the name of the file containing the data to be loaded.
3. Specifying BADFILE is optional. If you specify it, then bad records found during loading will be
stored in this file.
4. Specifying DISCARDFILE is optional. If you specify it, then records which do not meet
a WHEN condition will be written to this file.
5. This line names the table into which the rows are to be loaded and the load method. Of the
available load methods, REPLACE first deletes all the rows in the existing table and then loads the rows.
6. This line indicates how the fields are separated in the input file. Since in our case the fields are separated
by "," we have specified "," as the terminating character for fields. You can replace this with any character
which is used to terminate fields; some of the popularly used terminating characters are
semicolon ";", colon ":", pipe "|" etc. TRAILING NULLCOLS means if the last
column is null then treat it as a null value; otherwise, SQL LOADER will treat the record as bad if
the last column is null.
7. In this line specify the columns of the target table. Note how you specify the format for date columns.
4. After you have written the control file, save it and then call the SQL Loader utility.
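For example (the log file name is an illustrative choice):
$sqlldr userid=scott/tiger control=emp.ctl log=emp.log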
After you have executed the above command, SQL Loader will show you output
describing how many rows it has loaded.
The LOG option of sqlldr specifies where the log file of this SQL Loader session
should be created. The log file contains all the actions which SQL Loader has performed,
i.e. how many rows were loaded, how many were rejected, how much time was taken to
load the rows, and so on. You have to view this file for any errors encountered while
running SQL Loader.
CASE STUDY (Loading Data from Fixed Length file into Oracle)
Suppose we have a fixed length format file containing employee data, as shown
below, and we want to load this data into an Oracle table.
7782 CLARK      MANAGER   7839  2572.50          10
7839 KING       PRESIDENT       5500.00          10
7934 MILLER     CLERK     7782   920.00          10
7566 JONES      MANAGER   7839  3123.75          20
7499 ALLEN      SALESMAN  7698  1600.00   300.00 30
7654 MARTIN     SALESMAN  7698  1312.50  1400.00 30
7658 CHAN       ANALYST   7566  3450.00          20
7654 MARTIN     SALESMAN  7698  1312.50  1400.00 30
SOLUTION:
Steps :-
1. First open the file in a text editor and count the length of the fields. For
example, in our fixed length file, employee number is from 1st position to
4th position, employee name is from 6th position to 15th position, job name is
from 17th position to 25th position. Similarly other columns are also located.
2. Create a table in Oracle, by any name, but its columns should match those specified
in the fixed length file. In our case give the following command to create the
table.
SQL> CREATE TABLE emp (
       empno   NUMBER(5),
       name    VARCHAR2(20),
       job     VARCHAR2(10),
       mgr     NUMBER(5),
       sal     NUMBER(10,2),
       comm    NUMBER(10,2),
       deptno  NUMBER(3) );
3. After creating the table, now write a control file by using any text editor
$vi empfix.ctl
1) LOAD DATA
2) INFILE '/u01/oracle/fix.dat'
3) INTO TABLE emp
4) (empno   POSITION(01:04) INTEGER EXTERNAL,
    name    POSITION(06:15) CHAR,
    job     POSITION(17:25) CHAR,
    mgr     POSITION(27:30) INTEGER EXTERNAL,
    sal     POSITION(32:39) DECIMAL EXTERNAL,
    comm    POSITION(41:48) DECIMAL EXTERNAL,
5)  deptno  POSITION(50:51) INTEGER EXTERNAL)
Notes:
(Do not write the line numbers, they are meant for explanation purposes only)
1. The LOAD DATA statement is required at the beginning of the control file.
2. The name of the file containing data follows the INFILE parameter.
3. The INTO TABLE statement is required to identify the table to be loaded into.
4. Lines 4 and 5 identify a column name and the location of the data in the datafile to be loaded into that
column. empno, name, job, and so on are names of columns in table emp. The datatypes (INTEGER
EXTERNAL, CHAR, DECIMAL EXTERNAL) identify the datatype of the data fields in the file, not of the
corresponding columns in the emp table.
4. After saving the control file, now start the SQL Loader utility by typing the following
command.
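For example (the log file name is an illustrative choice):
$sqlldr userid=scott/tiger control=empfix.ctl log=empfix.log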
You can simultaneously load data into multiple tables in the same session. You can
also use a WHEN condition to load only those rows which meet a particular
condition (only equal to "=" and not equal to "<>" conditions are allowed).
For example, suppose we have a fixed length file as shown below
7782 CLARK      MANAGER   7839  2572.50          10
7839 KING       PRESIDENT       5500.00          10
7934 MILLER     CLERK     7782   920.00          10
7566 JONES      MANAGER   7839  3123.75          20
7499 ALLEN      SALESMAN  7698  1600.00   300.00 30
7654 MARTIN     SALESMAN  7698  1312.50  1400.00 30
7658 CHAN       ANALYST   7566  3450.00          20
7654 MARTIN     SALESMAN  7698  1312.50  1400.00 30
Now we want to load all the employees whose deptno is 10 into the emp1 table and
those employees whose deptno is not equal to 10 into the emp2 table. To do this, first
create the tables emp1 and emp2 by taking appropriate columns and datatypes. Then
write a control file as shown below
$vi emp_multi.ctl
Load Data
infile '/u01/oracle/empfix.dat'
append into table scott.emp1
WHEN (deptno=10)
(empno   POSITION(01:04) INTEGER EXTERNAL,
 name    POSITION(06:15) CHAR,
 job     POSITION(17:25) CHAR,
 mgr     POSITION(27:30) INTEGER EXTERNAL,
 sal     POSITION(32:39) DECIMAL EXTERNAL,
 comm    POSITION(41:48) DECIMAL EXTERNAL,
 deptno  POSITION(50:51) INTEGER EXTERNAL)
into table scott.emp2
WHEN (deptno<>10)
(empno   POSITION(01:04) INTEGER EXTERNAL,
 name    POSITION(06:15) CHAR,
 job     POSITION(17:25) CHAR,
 mgr     POSITION(27:30) INTEGER EXTERNAL,
 sal     POSITION(32:39) DECIMAL EXTERNAL,
 comm    POSITION(41:48) DECIMAL EXTERNAL,
 deptno  POSITION(50:51) INTEGER EXTERNAL)
SQL Loader can load the data into the Oracle database using the Conventional Path method
or the Direct Path method. You can specify the method by using the DIRECT command line
option. If you give DIRECT=TRUE then SQL Loader will use Direct Path loading;
otherwise, if you omit this option or specify DIRECT=FALSE, then SQL Loader will use
the Conventional Path loading method.
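For example (reusing the control file from the earlier case study):
$sqlldr userid=scott/tiger control=emp.ctl direct=true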
Conventional Path
Conventional path load (the default) uses the SQL INSERT statement and a bind
array buffer to load data into database tables.
When SQL*Loader performs a conventional path load, it competes equally with all
other processes for buffer resources. This can slow the load significantly. Extra
overhead is added as SQL statements are generated, passed to Oracle, and executed.
The Oracle database looks for partially filled blocks and attempts to fill them on each
insert. Although appropriate during normal use, this can slow bulk loads dramatically.
Direct Path
In Direct Path Loading, Oracle will not use SQL INSERT statement for loading rows.
Instead it directly writes the rows, into fresh blocks beyond High Water Mark, in
datafiles i.e. it does not scan for free blocks before high water mark. Direct Path load
is very fast because
Partial blocks are not used, so no reads are needed to find them, and fewer writes are performed.
SQL*Loader need not execute any SQL INSERT statements; therefore, the processing load on
the Oracle database is reduced.
A direct path load calls on Oracle to lock tables and indexes at the start of the load and releases
them when the load is finished. A conventional path load calls Oracle once for each array of rows
to process a SQL INSERT statement.
A direct path load uses multiblock asynchronous I/O for writes to the database files.
During a direct path load, processes perform their own write I/O, instead of using Oracle's buffer
cache. This minimizes contention with other Oracle users.
Type definitions
Table definitions
Table data
Table indexes
Integrity constraints, views, procedures, and triggers
When you import the tables, the Import tool will perform the actions in the following
order: new tables are created, data is imported and indexes are built, triggers are
imported, integrity constraints are enabled on the new tables, and any bitmap,
function-based, and/or domain indexes are built. This sequence prevents data from
being rejected due to the order in which tables are imported. This sequence also
prevents redundant triggers from firing twice on the same data.
Invoking Export and Import
Keyword                Description (Default)
--------------------   --------------------------------
USERID                 username/password
BUFFER                 size of data buffer
FILE
COMPRESS
GRANTS
INDEXES
DIRECT
LOG
ROWS
CONSISTENT             cross-table consistency(N)
FULL
OWNER
TABLES
RECORDLENGTH           length of IO record
INCTYPE
RECORD
TRIGGERS
STATISTICS
PARFILE                parameter filename
CONSTRAINTS
OBJECT_CONSISTENT
FEEDBACK
FILESIZE
FLASHBACK_SCN
FLASHBACK_TIME
QUERY
RESUMABLE
RESUMABLE_NAME
RESUMABLE_TIMEOUT
TTS_FULL_CHECK
TABLESPACES
TRANSPORT_TABLESPACE
TEMPLATE
Objects exported by export utility can only be imported by Import utility. Import
utility can run in Interactive mode or command line mode.
You can let Import prompt you for parameters by entering the IMP command
followed by your username/password:
Example: IMP SCOTT/TIGER
Or, you can control how Import runs by entering the IMP command followed
by various arguments. To specify parameters, you use keywords:
Format: IMP KEYWORD=value or
KEYWORD=(value1,value2,...,valueN)
Example: IMP SCOTT/TIGER IGNORE=Y TABLES=(EMP,DEPT)
FULL=N
or TABLES=(T1:P1,T1:P2), if T1 is
partitioned table
Keyword                Description (Default)
--------------------   --------------------------------
USERID                 username/password
BUFFER
FILE
SHOW
IGNORE
GRANTS
INDEXES
ROWS
LOG
FULL
FROMUSER
TOUSER                 list of usernames
TABLES
RECORDLENGTH           length of IO record
INCTYPE
COMMIT
PARFILE                parameter filename
CONSTRAINTS
DESTROY
INDEXFILE
SKIP_UNUSABLE_INDEXES
FEEDBACK
TOID_NOVALIDATE
FILESIZE
STATISTICS
RESUMABLE
RESUMABLE_NAME
RESUMABLE_TIMEOUT
COMPILE
STREAMS_CONFIGURATION
STREAMS_INSTANTIATION
"DEPT"
4 rows imported
. . importing table
"EMP"
14 rows imported
The Export and Import utilities are the only method that Oracle supports for moving
an existing Oracle database from one hardware platform to another. This includes
moving between UNIX and NT systems and also moving between two NT systems
running on different platforms.
The following steps present a general overview of how to move a database between
platforms.
1. As a DBA user, issue the following SQL query to get the exact name of all tablespaces. You will
need this information later in the process.
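For example:
SQL> SELECT tablespace_name FROM dba_tablespaces;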
3. Move the dump file to the target database server. If you use FTP, be sure to copy it in binary
format (by entering binary at the FTP prompt) to avoid file corruption.
4. Create a database on the target server.
5. Before importing the dump file, you must first create your tablespaces, using the information
obtained in Step 1. Otherwise, the import will create the corresponding datafiles in the same file
structure as at the source database, which may not be compatible with the file structure on the
target system.
6. As a DBA user, perform a full import with the IGNORE parameter enabled:
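A sketch of such a command (the dump file name is illustrative):
$imp system/manager FULL=y IGNORE=y FILE=expdat.dmp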
Starting with Oracle 10g, Oracle has introduced an enhanced version of EXPORT and
IMPORT utility known as DATA PUMP. Data Pump is similar to EXPORT and
IMPORT utility but it has many advantages. Some of the advantages are:
Most Data Pump export and import operations occur on the Oracle database
server, i.e. all the dump files are created on the server even if you run the Data
Pump utility from a client machine. This results in increased performance
because data is not transferred through the network.
You can Stop and Re-Start export and import jobs. This is particularly useful if
you have started an export or import job and after some time you want to do
some other urgent work.
The ability to detach from and reattach to long-running jobs without affecting
the job itself. This allows DBAs and other operations personnel to monitor jobs
from multiple locations.
The ability to estimate how much space an export job would consume, without
actually performing the export
To Use Data Pump, DBA has to create a directory in Server Machine and create a
Directory Object in the database mapping to the directory created in the file system.
The following example creates a directory in the filesystem and creates a directory
object in the database and grants privileges on the Directory Object to the SCOTT
user.
$mkdir my_dump_dir
$sqlplus
Enter User:/ as sysdba
SQL>create directory data_pump_dir as
'/u01/oracle/my_dump_dir';
Now grant access on this directory object to SCOTT user
SQL> grant read,write on
directory data_pump_dir to scott;
Example of Exporting a Full Database
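A full database export with Data Pump would be run along these lines (the dump file, log file and job names below match those referred to later in this section):
$expdp scott/tiger FULL=y DIRECTORY=data_pump_dir
DUMPFILE=full.dmp LOGFILE=myfullexp.log JOB_NAME=myfullJob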
In some cases, where the database is in terabytes, the above command will not be
feasible since the dump file size will be larger than the operating system limit, and
hence the export will fail. In this situation you can create multiple dump files by typing
the following command
the following command
$expdp scott/tiger FULL=y
DIRECTORY=data_pump_dir DUMPFILE=full%U.dmp
FILESIZE=5G LOGFILE=myfullexp.log
JOB_NAME=myfullJob
This will create multiple dump files named full01.dmp, full02.dmp, full03.dmp and so
on. The FILESIZE parameter specifies the maximum size of each dump file.
Example of Exporting a Schema
To export all the objects of SCOTT's schema you can run the following Data Pump
export command.
$expdp scott/tiger
DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp
SCHEMAS=SCOTT
You can omit SCHEMAS since the default mode of Data Pump export is SCHEMAS
only.
If you want to export objects of multiple schemas you can specify the following
command
$expdp scott/tiger
DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp
SCHEMAS=SCOTT,HR,ALI
Exporting Individual Tables using Data Pump Export
You can use Data Pump Export utility to export individual tables. The following
example shows the syntax to export tables
$expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tables.dmp
TABLES=employees,jobs,departments
If you want to export tables located in a particular tablespace you can type the
following command
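This is done with the TABLESPACES parameter; a sketch (the tablespace names are illustrative) is:
$expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tbs.dmp
TABLESPACES=tbs_4,tbs_5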
You can exclude objects while performing an export by using the EXCLUDE option of
the Data Pump utility. For example, if you are exporting a schema and don't want to export
tables whose names start with "A", then you can type the following command
$expdp scott/tiger
DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp
SCHEMAS=SCOTT EXCLUDE=TABLE:"LIKE 'A%'"
Then all tables in SCOTT's schema whose names start with "A" will not be exported.
Similarly, you can also use the INCLUDE option to export only certain objects, like this
$expdp scott/tiger
DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp
SCHEMAS=SCOTT INCLUDE=TABLE:"LIKE 'A%'"
This is the opposite of the EXCLUDE option, i.e. it will export only those tables of SCOTT's
schema whose names start with "A".
Similarly you can also exclude INDEXES, CONSTRAINTS, GRANTS,
USER, SCHEMA
Using Query to Filter Rows during Export
You can use the QUERY option to export only the required rows. For example, the following
will export only those rows of the employees table whose salary is above 10000 and
whose dept id is 10, as shown in the sketch below.
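A sketch of such a command (the dump file name and the salary/department_id column names are assumptions based on the standard HR sample schema):
$expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=emp_query.dmp
TABLES=employees
QUERY=employees:"WHERE salary > 10000 AND department_id = 10"
On the command line the quotes may need shell escaping; alternatively the QUERY parameter can be placed in a parameter file.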
Suspending and Resuming Export Jobs (Attaching and ReAttaching to the Jobs)
You can suspend running export jobs and later on resume these jobs or kill these jobs
using Data Pump Export. You can start a job in one client machine and then, if
because of some work, you can suspend it. Afterwards when your work has been
finished you can continue the job from the same client, where you stopped the job, or
you can restart the job from another client machine.
For example, suppose a DBA starts a full database export by typing the following
command at one client machine CLNT1:
$expdp scott/tiger@mydb FULL=y DIRECTORY=data_pump_dir
DUMPFILE=full.dmp LOGFILE=myfullexp.log
JOB_NAME=myfullJob
After some time, the DBA wants to stop this job temporarily. He presses
CTRL+C to enter interactive mode and will get the Export> prompt, where
he can type interactive commands.
Now he wants to stop this export job, so he will type the following command
Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([y]/n): y
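After finishing the other work, the DBA can reattach to the job from any client machine (in the same way as shown for Data Pump Import later in this document), for example:
$expdp scott/tiger@mydb ATTACH=myfulljob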
After the job status is displayed, he can issue the CONTINUE_CLIENT command to resume logging
mode and restart the myfulljob job.
Export> CONTINUE_CLIENT
A message is displayed that the job has been reopened, and processing status is output to the
client.
Note: After reattaching to the job, a DBA can also kill the job by typing KILL_JOB if he doesn't
want to continue with the export job.
If you want to Import all the objects in a dump file then you can type the following
command.
$impdp hr/hr DUMPFILE=dpump_dir1:expfull.dmp FULL=y
LOGFILE=dpump_dir2:full_imp.log
This example imports everything from the expfull.dmp dump file. In this example,
a DIRECTORY parameter is not provided. Therefore, a directory object must be
provided on both the DUMPFILE parameter and the LOGFILE parameter.
Importing Objects of One Schema to another Schema
The following example loads all tables belonging to hr schema to scott schema
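This is done with the REMAP_SCHEMA parameter; a sketch (the dump file name is illustrative) is:
$impdp SYSTEM/password DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp
REMAP_SCHEMA=hr:scott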
You can use remap_tablespace option to import objects of one tablespace to another
tablespace by giving the command
$impdp SYSTEM/password DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp
REMAP_TABLESPACE=users:sales
The above example loads tables, stored in users tablespace, in the sales
tablespace.
You can generate SQL file which contains all the DDL commands which Import
would have executed if you actually run Import utility
The following is an example of using the SQLFILE parameter.
$ impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp
SQLFILE=dpump_dir2:expfull.sql
If you have the IMP_FULL_DATABASE role, you can use this parameter to perform a
schema-mode import by specifying a single schema other than your own or a list of
schemas to import. First, the schemas themselves are created (if they do not already
exist), including system and role grants, password history, and so on. Then all objects
contained within the schemas are imported. Nonprivileged users can specify only their
own schemas. In that case, no information about the schema definition is imported, only
the objects contained within it.
Example
The following is an example of using the SCHEMAS parameter. You can create the
expdat.dmp file used in this example by running the example provided for the
Export SCHEMAS parameter.
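A sketch of the command (consistent with the description that follows) is:
$impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp
SCHEMAS=hr,oe LOGFILE=schemas.log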
The hr and oe schemas are imported from the expdat.dmp file. The log file, schemas.log, is
written to dpump_dir1
The following example shows a simple use of the TABLES parameter to import only
the employees and jobs tables from the expfull.dmp file. You can create
the expfull.dmp dump file used in this example by running the example provided
for the full database export in the previous topic.
$impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp TABLES=employees,jobs
This will import only employees and jobs tables from the DUMPFILE.
Running Import Utility in Interactive Mode
Similar to the DATA PUMP EXPORT utility the Data Pump Import Jobs can also be
suspended, resumed or killed. And, you can attach to an already existing import job
from any client machine.
For example, suppose a DBA starts an import by typing the following
command at one client machine CLNT1:
$impdp scott/tiger@mydb FULL=y DIRECTORY=data_pump_dir
DUMPFILE=full.dmp LOGFILE=myfullexp.log
JOB_NAME=myfullJob
After some time, the DBA wants to stop this job temporarily. Then he
presses CTRL+C to enter into interactive mode. Then he will get the Import> prompt
where he can type interactive commands
Now he wants to stop this export job so he will type the following command
Import> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([y]/n): y
After finishing his other work, the DBA wants to resume the export job and the client
machine from where he actually started the job is locked because, the user has locked
his/her cabin. So now the DBA will go to another client machine and he reattach to
the job by typing the following command
$impdp hr/hr@mydb ATTACH=myfulljob
After the job status is displayed, he can issue the CONTINUE_CLIENT command to
resume logging mode and restart the myfulljob job.
Import> CONTINUE_CLIENT
A message is displayed that the job has been reopened, and processing status is output
to the client.
Note: After reattaching to the job, a DBA can also kill the job by typing KILL_JOB if
he doesn't want to continue with the import job.
To insert the accidentally deleted rows back into the table, he can type
SQL> insert into emp (select * from emp as of timestamp
sysdate-1/24);
Using Flashback Version Query
You use a Flashback Version Query to retrieve the different versions of specific rows
that existed during a given time interval. A new row version is created whenever a
COMMIT statement is executed.
The Flashback Version Query returns a table with a row for each version of the row
that existed at any time during the time interval you specify. Each row in the table
includes pseudocolumns of metadata about the row version.
The pseudocolumns available are
VERSIONS_XID          Identifier of the transaction that created the row version
VERSIONS_OPERATION    Operation performed by the transaction (I for Insert, U for Update,
                      D for Delete)
VERSIONS_STARTSCN     SCN at which the row version was created
VERSIONS_STARTTIME    Timestamp at which the row version was created
VERSIONS_ENDSCN       SCN at which the row version expired
VERSIONS_ENDTIME      Timestamp at which the row version expired
Connect / as sysdba
column versions_starttime format a16
column versions_endtime format a16
set linesize 120;
SQL>
select versions_xid, versions_starttime, versions_endtime,
versions_operation, empno, name, sal from emp versions
between
timestamp to_timestamp('2007-06-19 20:30:00','yyyy-mm-dd hh24:mi:ss')
and to_timestamp('2007-06-19 21:00:00','yyyy-mm-dd hh24:mi:ss');
VERSIONS_XID  V STARTSCN ENDSCN EMPNO NAME     SAL
------------- - -------- ------ ----- -------- ----
0200100020D   U    11323          101 SMITH    2000
02001003C02   U    11345          101 SAMI     2000
0002302C03A   I    12320          101 SAMI     5000
The Output should be read from bottom to top, from the output we can see that an
Insert has taken place and then erroneous update has taken place and then again
update has taken place to change the name.
The DBA identifies the transaction 02001003C02 as erroneous and issues the
following query to get the SQL command to undo the change
SQL> select operation, logon_user, undo_sql
from flashback_transaction_query
where xid = HEXTORAW('02001003C02');
OPERATION  LOGON_USER UNDO_SQL
---------- ---------- -----------------------------------
U          SCOTT      update emp set sal=5000 where
                      ROWID = 'AAAKD2AABAAAJ29AAA'

Now the DBA can execute the command to undo the changes made by the user
SQL> update emp set sal=5000 where ROWID = 'AAAKD2AABAAAJ29AAA';
1 row updated.
Oracle Flashback Table provides the DBA the ability to recover a table or set of tables
to a specified point in time in the past very quickly, easily, and without taking any part
of the database offline. In many cases, Flashback Table eliminates the need to perform
more complicated point-in-time recovery operations.
Flashback Table uses information in the undo tablespace to restore the table.
Therefore, UNDO_RETENTION parameter is significant in Flashing Back Tables to a
past state. You can only flash back tables up to the retention time you specified.
Row movement must be enabled on the table for which you are issuing the
FLASHBACK TABLE statement. You can enable row movement with the following
SQL statement:
ALTER TABLE table ENABLE ROW MOVEMENT;
The following example performs a FLASHBACK TABLE operation on the table emp.
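A sketch of the statement (the timestamp value is illustrative):
SQL> FLASHBACK TABLE emp TO TIMESTAMP
     TO_TIMESTAMP('2007-06-21 11:00:00','YYYY-MM-DD HH24:MI:SS');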
The emp table is restored to its state when the database was at the time specified by the
timestamp.
Example:At 17:00 an HR administrator discovers that an employee "JOHN" is missing from the
EMPLOYEE table. This employee was present at 14:00, the last time she ran a report. Someone
accidentally deleted the record for "JOHN" between 14:00 and the present time. She uses
Flashback Table to return the table to its state at 14:00, as shown in this example:
FLASHBACK TABLE EMPLOYEES TO TIMESTAMP
TO_TIMESTAMP('2007-06-21 14:00:00','YYYY-MM-DD HH24:MI:SS')
ENABLE TRIGGERS;
You have to give the ENABLE TRIGGERS option; otherwise, by default, all database
triggers on the table will be disabled.
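The Flashback Drop feature discussed next assumes that a table has been dropped, for example:
SQL> drop table emp;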
Table dropped.
Now, to the user, it appears that the table is dropped, but it is actually renamed and placed in
the Recycle Bin. To recover this dropped table a user can type the command
SQL> Flashback table emp to before drop;
You can also restore the dropped table by giving it a different name like this
SQL> Flashback table emp to before drop rename to emp2;
Purging Objects from Recycle Bin
If you want to recover the space used by a dropped table give the following command
SQL> purge table emp;
If you want to purge objects of logon user give the following command
SQL> purge recycle bin;
If you want to recover space for dropped objects of a particular tablespace, give the
command
SQL> purge tablespace hr;
You can also purge only objects from a tablespace belonging to a specific user, using the
following form of the command:
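A sketch of that form (tablespace and user names as used elsewhere in this document):
SQL> purge tablespace hr user scott;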
If a table named EMP is dropped and re-created several times, each dropped EMP is assigned a unique name in the recycle bin when it is dropped.
You can use a FLASHBACK TABLE... TO BEFORE DROP statement with the original name of the
table, as shown in this example:
FLASHBACK TABLE EMP TO BEFORE DROP;
The most recently dropped table with that original name is retrieved from the recycle bin, with
its original name. You can retrieve it and assign it a new name using a RENAME TO clause. The
following example shows the retrieval from the recycle bin of all three dropped EMP tables from
the previous example, with each assigned a new name:
FLASHBACK TABLE EMP TO BEFORE DROP RENAME TO EMP_VER_3;
FLASHBACK TABLE EMP TO BEFORE DROP RENAME TO EMP_VER_2;
FLASHBACK TABLE EMP TO BEFORE DROP RENAME TO EMP_VER_1;
Important Points:
1. There is no guarantee that objects will remain in the Recycle Bin. Oracle might
empty the Recycle Bin whenever space pressure occurs, i.e. whenever a tablespace
becomes full and a transaction requires new extents; then Oracle will delete
objects from the Recycle Bin.
2. A table and all of its dependent objects (indexes, LOB segments, nested
tables, triggers, constraints and so on) go into the recycle bin together when
you drop the table. Likewise, when you perform Flashback Drop, the objects
are generally all retrieved together.
3. There is no fixed amount of space allocated to the recycle bin, and no guarantee
as to how long dropped objects remain in the recycle bin. Depending upon
system activity, a dropped object may remain in the recycle bin for seconds, or
for months.
Oracle Flashback Database, lets you quickly recover the entire database from logical
data corruptions or user errors.
To enable Flashback Database, you set up a flash recovery area, and set a flashback
retention target, to specify how far back into the past you want to be able to restore
your database with Flashback Database.
Once you set these parameters, then, from that time on, at regular intervals, the database
copies images of each altered block in every datafile into flashback logs stored in the
flash recovery area. These flashback logs are used to flash the database back to a point in
time.
Enabling Flash Back Database
Step 1. Shutdown the database if it is already running and set the following
parameters
DB_RECOVERY_FILE_DEST=/d01/ica/flasharea
DB_RECOVERY_FILE_DEST_SIZE=10G
DB_FLASHBACK_RETENTION_TARGET=4320
(Note: the db_flashback_retention_target is specified in minutes here we have specified 3 days i.e.
3x24x60=4320)
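The sizing query referred to next is presumably the ESTIMATED_FLASHBACK_SIZE column of V$FLASHBACK_DATABASE_LOG, for example:
SQL> SELECT ESTIMATED_FLASHBACK_SIZE FROM V$FLASHBACK_DATABASE_LOG;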
This will show how large the recovery area should be.
How far you can flashback database.
To determine the earliest SCN and earliest Time you can Flashback your
database, give the following query:
SELECT OLDEST_FLASHBACK_SCN, OLDEST_FLASHBACK_TIME
FROM V$FLASHBACK_DATABASE_LOG;
1. Start RMAN
$rman target /
2. Run the FLASHBACK DATABASE command, specifying the target time or SCN.
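A sketch of the command (the target time is illustrative; an SCN can be given instead):
RMAN> FLASHBACK DATABASE TO TIME
      "TO_DATE('2007-06-21 13:00:00','YYYY-MM-DD HH24:MI:SS')";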
3. When the Flashback Database operation completes, you can evaluate the results by
opening the database read-only and run some queries to check whether your Flashback
Database has returned the database to the desired state.
RMAN> SQL 'ALTER DATABASE OPEN READ ONLY';
Option 1:If you are content with your result you can open the database by
performing ALTER
DATABASE OPEN RESETLOGS
Option 2:If you discover that you have chosen the wrong target time for your
Flashback Database operation, you can use RECOVER DATABASE
UNTIL to bring the database forward, or perform FLASHBACK
DATABASE again with an SCN further in the past. You can completely undo
the effects of your flashback operation by performing complete recovery of
the database:
RMAN> RECOVER DATABASE;
Option 3:If you only want to retrieve some lost data from the past time, you can open
the database read-only, then perform a logical export of the data using an
Oracle export utility, then run RECOVER DATABASE to return the database
to the present time and re-import the data using the Oracle import utility
4. Since in our example only a schema is dropped and the rest of the database is
good, the third option is relevant for us.
Now, come out of RMAN and run the EXPORT utility to export the whole schema
$exp userid=system/manager file=scott.dmp owner=SCOTT
5. Run RECOVER DATABASE to return the database to the present time and open it.
6. Re-import the exported schema using the Oracle Import utility.
Log Miner
Using Log Miner utility, you can query the contents of online redo log files and
archived log files. Because LogMiner provides a well-defined, easy-to-use, and
comprehensive relational interface to redo log files, it can be used as a powerful data
audit tool, as well as a tool for sophisticated data analysis.
LogMiner Configuration
There are three basic objects in a LogMiner configuration that you should be familiar
with: the source database, the LogMiner dictionary, and the redo log files containing
the data of interest:
The source database is the database that produces all the redo log files that you want LogMiner
to analyze.
The LogMiner dictionary allows LogMiner to provide table and column names, instead of
internal object IDs, when it presents the redo log data that you request.
LogMiner uses the dictionary to translate internal object identifiers and datatypes to
object names and external data formats. Without a dictionary, LogMiner returns
internal object IDs and presents data as binary data.
For example, consider the following the SQL statement:
INSERT INTO HR.JOBS(JOB_ID, JOB_TITLE, MIN_SALARY,
MAX_SALARY) VALUES('IT_WT','Technical Writer', 4000,
11000);
The redo log files contain the changes made to the database or database dictionary.
LogMiner requires a dictionary to translate object IDs into object names when it
returns redo data to you. LogMiner gives you three options for supplying the
dictionary:
Using the online catalog: Oracle recommends that you use this option when you will have access to the source
database from which the redo log files were created and when no changes to the
column definitions in the tables of interest are anticipated. This is the most efficient
and easy-to-use option.
Extracting a LogMiner dictionary to the redo log files: Oracle recommends that you use this option when you do not expect to have access to
the source database from which the redo log files were created, or if you anticipate
that changes will be made to the column definitions in the tables of interest.
Extracting the LogMiner dictionary to a flat file: This option is maintained for backward compatibility with previous releases. This
option does not guarantee transactional consistency. Oracle recommends that you use
either the online catalog or extract the dictionary from redo log files instead.
Using the Online Catalog
To direct LogMiner to use the dictionary currently in use for the database, specify the
online catalog as your dictionary source when you start LogMiner, as follows:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
When the LogMiner dictionary is in a flat file, fewer system resources are used than
when it is contained in the redo log files. Oracle recommends that you regularly back
up the dictionary extract to ensure correct analysis of older redo log files.
1. Set the initialization parameter, UTL_FILE_DIR, in the initialization parameter file. For example,
to set UTL_FILE_DIR to use /oracle/database as the directory where the dictionary file is
placed, enter the following in the initialization parameter file:
UTL_FILE_DIR = /oracle/database
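After setting UTL_FILE_DIR and restarting the database, the dictionary file is created with the DBMS_LOGMNR_D.BUILD procedure; a sketch (the dictionary file name is illustrative) is:
SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', '/oracle/database', DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);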
To mine data in the redo log files, LogMiner needs information about which redo log
files to mine.
You can direct LogMiner to automatically and dynamically create a list of redo log
files to analyze, or you can explicitly specify a list of redo log files for LogMiner to
analyze, as follows:
Automatically
If LogMiner is being used on the source database, then you can direct LogMiner to
find and create a list of redo log files for analysis automatically. Use
the CONTINUOUS_MINE option when you start LogMiner.
Manually
Use the DBMS_LOGMNR.ADD_LOGFILE procedure to manually create a list of redo
log files before you start LogMiner. After the first redo log file has been added to the
list, each subsequently added redo log file must be from the same database and
associated with the same database RESETLOGS SCN. When using this method,
LogMiner need not be connected to the source database.
Example: Finding All Modifications in the Current Redo Log File
The easiest way to examine the modification history of a database is to mine at the
source database and use the online catalog to translate the redo log files. This example
shows how to do the simplest analysis using LogMiner.
Step 1 Specify the list of redo log files to be analyzed.
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/usr/oracle/ica/log1.ora',
OPTIONS => DBMS_LOGMNR.NEW);
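The original steps 2 and 3 of this example, starting LogMiner and querying its contents view, are not reproduced here. A minimal sketch of what they would look like when using the online catalog as the dictionary (the WHERE clause restricting output to the HR and OE users is an assumption based on the output that follows):

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SQL> SELECT USERNAME AS USR,
            (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,
            SQL_REDO, SQL_UNDO
     FROM   V$LOGMNR_CONTENTS
     WHERE  USERNAME IN ('HR', 'OE');

The query returns output similar to the following.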
USR   XID        SQL_REDO / SQL_UNDO
----  ---------  ------------------------------------------------------------

OE    1.1.1484   SQL_REDO:
                 update "OE"."PRODUCT_INFORMATION"
                   set "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00')
                   where "PRODUCT_ID" = '1799' and
                   "WARRANTY_PERIOD" = TO_YMINTERVAL('+01-00') and
                   ROWID = 'AAAHTKAABAAAY9mAAB';
                 SQL_UNDO:
                 update "OE"."PRODUCT_INFORMATION"
                   set "WARRANTY_PERIOD" = TO_YMINTERVAL('+01-00')
                   where "PRODUCT_ID" = '1799' and
                   "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and
                   ROWID = 'AAAHTKAABAAAY9mAAB';

OE    1.1.1484   SQL_REDO:
                 update "OE"."PRODUCT_INFORMATION"
                   set "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00')
                   where "PRODUCT_ID" = '1801' and
                   "WARRANTY_PERIOD" = TO_YMINTERVAL('+01-00') and
                   ROWID = 'AAAHTKAABAAAY9mAAC';
                 SQL_UNDO:
                 update "OE"."PRODUCT_INFORMATION"
                   set "WARRANTY_PERIOD" = TO_YMINTERVAL('+01-00')
                   where "PRODUCT_ID" = '1801' and
                   "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and
                   ROWID = 'AAAHTKAABAAAY9mAAC';

HR    1.11.1476  SQL_REDO:
                 insert into "HR"."EMPLOYEES" values
                   ('307', 'John', 'Silver', 'JSILVER', '5551112222',
                    TO_DATE('10-jan-2003 13:41:03', 'dd-mon-yyyy hh24:mi:ss'),
                    'SH_CLERK', '110000', '.05', '105', '50');

OE    1.1.1484   commit;

HR    1.15.1481  set transaction read write;

HR    1.15.1481  SQL_REDO:
                 delete from "HR"."EMPLOYEES"
                   where "EMPLOYEE_ID" = '205' and
                   "FIRST_NAME" = 'Shelley' and "LAST_NAME" = 'Higgins' and
                   "EMAIL" = 'SHIGGINS' and "PHONE_NUMBER" = '515.123.8080' and
                   "HIRE_DATE" = TO_DATE('07-jun-1994 10:05:01',
                                         'dd-mon-yyyy hh24:mi:ss') and
                   "JOB_ID" = 'AC_MGR' and "SALARY" = '12000' and
                   "COMMISSION_PCT" IS NULL and "MANAGER_ID" = '101' and
                   "DEPARTMENT_ID" = '110' and
                   ROWID = 'AAAHSkAABAAAY6rAAM';

OE    1.8.1484   SQL_REDO:
                 update "OE"."PRODUCT_INFORMATION"
                   set "WARRANTY_PERIOD" = TO_YMINTERVAL('+12-06')
                   where "PRODUCT_ID" = '2350' and
                   "WARRANTY_PERIOD" = TO_YMINTERVAL('+20-00') and
                   ROWID = 'AAAHTKAABAAAY9tAAD';
                 SQL_UNDO:
                 update "OE"."PRODUCT_INFORMATION"
                   set "WARRANTY_PERIOD" = TO_YMINTERVAL('+20-00')
                   where "PRODUCT_ID" = '2350' and
                   "WARRANTY_PERIOD" = TO_YMINTERVAL('+12-06') and
                   ROWID = 'AAAHTKAABAAAY9tAAD';

HR    1.11.1476  commit;
The previous example explicitly specified the redo log file or files to be mined.
However, if you are mining in the same database that generated the redo log files, then
you can mine the appropriate list of redo log files by just specifying the time (or SCN)
range of interest. To mine a set of redo log files without explicitly specifying them,
use the DBMS_LOGMNR.CONTINUOUS_MINE option to
the DBMS_LOGMNR.START_LOGMNR procedure, and specify either a time range or
an SCN range of interest.
Example : Mining Redo Log Files in a Given Time Range
This example assumes that you want to use the data dictionary extracted to the redo
log files.
Step 1 Determine the timestamp of the redo log file that contains the start of the
data dictionary.
SQL> SELECT NAME, FIRST_TIME FROM V$ARCHIVED_LOG
WHERE SEQUENCE# = (SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG
WHERE DICTIONARY_BEGIN = 'YES');
NAME                                           FIRST_TIME
---------------------------------------------  --------------------
/usr/oracle/data/db1arch_1_207_482701534.dbf   10-jan-2003 12:01:34
Step 2 Display all the redo log files that have been generated so far.
This step is not required, but is included to demonstrate that
the CONTINUOUS_MINE option works as expected, as will be shown in Step 4.
SQL> SELECT FILENAME name FROM V$LOGMNR_LOGS
WHERE LOW_TIME > '10-jan-2003 12:01:34';
NAME
---------------------------------------------
/usr/oracle/data/db1arch_1_207_482701534.dbf
/usr/oracle/data/db1arch_1_208_482701534.dbf
/usr/oracle/data/db1arch_1_209_482701534.dbf
/usr/oracle/data/db1arch_1_210_482701534.dbf
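The steps that originally followed here, starting LogMiner over the time range of interest with the dictionary taken from the redo log files, are not reproduced in the text. A minimal sketch of the start call, assuming CONTINUOUS_MINE and a start time matching the dictionary timestamp found in Step 1 (the ENDTIME of SYSDATE and the COMMITTED_DATA_ONLY and PRINT_PRETTY_SQL options are assumptions; the date string relies on NLS_DATE_FORMAT being set to 'dd-mon-yyyy hh24:mi:ss'):

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
        STARTTIME => '10-jan-2003 12:01:34', -
        ENDTIME   => SYSDATE, -
        OPTIONS   => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + -
                     DBMS_LOGMNR.COMMITTED_DATA_ONLY + -
                     DBMS_LOGMNR.PRINT_PRETTY_SQL + -
                     DBMS_LOGMNR.CONTINUOUS_MINE);

Re-querying V$LOGMNR_LOGS after the start call would then list the redo log files that LogMiner found and added automatically, as in the listing below.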
NAME
---------------------------------------------
/usr/oracle/data/db1arch_1_207_482701534.dbf
/usr/oracle/data/db1arch_1_208_482701534.dbf
/usr/oracle/data/db1arch_1_209_482701534.dbf
/usr/oracle/data/db1arch_1_210_482701534.dbf
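The query that produced the output below is likewise not reproduced in the original text. A plausible form of it, selecting only rows that carry redo SQL (the exact filter is an assumption):

SQL> SELECT USERNAME AS USR,
            (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,
            SQL_REDO
     FROM   V$LOGMNR_CONTENTS
     WHERE  SQL_REDO IS NOT NULL;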
USR   XID        SQL_REDO
----  ---------  ------------------------------------------------------------

SYS   1.2.1594   commit;

SYS   1.18.1602  commit;

OE    1.9.1598   update "OE"."PRODUCT_INFORMATION"
                   set
                     "WARRANTY_PERIOD" = TO_YMINTERVAL('+08-00'),
                     "LIST_PRICE" = 100
                   where
                     "PRODUCT_ID" = 1729 and
                     "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and
                     "LIST_PRICE" = 80 and
                     ROWID = 'AAAHTKAABAAAY9yAAA';

OE    1.9.1598   update "OE"."PRODUCT_INFORMATION"
                   set
                     "WARRANTY_PERIOD" = TO_YMINTERVAL('+08-00'),
                     "LIST_PRICE" = 92
                   where
                     "PRODUCT_ID" = 2340 and
                     "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and
                     "LIST_PRICE" = 72 and
                     ROWID = 'AAAHTKAABAAAY9zAAA';

OE    1.9.1598   commit;
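When the analysis is finished, the LogMiner session would normally be closed. A one-line sketch:

SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();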
Taking the Database Out of Archivelog Mode
STEP 2: Comment out the following parameters in the parameter file by putting a "#" in front of them.
# LOG_ARCHIVE_DEST_1=location=/u02/ica/arc1
# LOG_ARCHIVE_DEST_2=location=/u02/ica/arc2
# LOG_ARCHIVE_FORMAT=ica%s.%t.%r.arc
STEP 3: Startup and mount the database.
SQL> STARTUP MOUNT;
STEP 4: Give the following Commands
SQL> ALTER DATABASE NOARCHIVELOG;
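To confirm that the change took effect, the log mode can be checked while the database is still mounted; a quick check:

SQL> SELECT LOG_MODE FROM V$DATABASE;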
STEP 5: Shut down the database and take a full offline backup, for example by copying all the database files to /u02/backup.
$sqlplus
Enter User:/ as sysdba
SQL> STARTUP MOUNT
SQL> ALTER DATABASE DATAFILE '/u01/ica/usr1.dbf' OFFLINE DROP;
Then copy all the files from the backup directory back to their original destination (/u01/ica). Also remember to copy the control
files to all the mirrored locations.
But if you have added any new tablespace after generating the create controlfile statement, then you have to
edit the script and include the filename and size of the new datafile in it.
If your script file containing the control file creation statement is "CR.SQL",
then just do the following.
STEP 1: Start sqlplus.
STEP 2: connect / as sysdba
STEP 3: Start the instance without mounting the database.
SQL> STARTUP NOMOUNT
STEP 4: Run the "CR.SQL" script file.
SQL> @CR.SQL
STEP 5: Mount and open the database.
SQL> alter database mount;
SQL> alter database open;
If you do not have a backup of the control file creation statement, then you have to manually issue the
CREATE CONTROLFILE statement, listing the file names and sizes of all the datafiles. You
will lose any datafiles which you do not include.
Refer to the "Managing Control File" topic for the CREATE CONTROLFILE statement.
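For orientation only, a minimal sketch of the general form such a statement takes; the database name, log file names, datafile names, and sizes below are illustrative (loosely following the /u01/ica layout used in this chapter) and are not a template to run as-is:

SQL> CREATE CONTROLFILE REUSE DATABASE "ICA" NORESETLOGS
       MAXLOGFILES 16
       MAXDATAFILES 100
       LOGFILE
         GROUP 1 '/u01/ica/log1.ora' SIZE 50M,
         GROUP 2 '/u01/ica/log2.ora' SIZE 50M
       DATAFILE
         '/u01/ica/sys1.dbf',
         '/u01/ica/usr1.dbf',
         '/u01/ica/undo1.dbf';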
Note: In Oracle 10g you can easily recover dropped tables by using the Flashback feature. For further information
please refer to the Flashback Features topic in this book.