
Below are the differences between traditional exp/imp and Data Pump.

 Data Pump operates on a group of files called a dump file set, whereas normal export operates on a single file.
 Data Pump accesses files on the server (using Oracle directory objects). Traditional export can access files on both the client and the server (without using Oracle directories).
 exp/imp represents database metadata as DDL in the dump file, whereas Data Pump represents it in XML document format.
 Data Pump uses parallel execution rather than a single stream of execution, for improved performance; exp/imp is single-stream only.
 Data Pump does not support sequential media like tapes, but traditional export does.
 Data Pump will recreate the user, whereas the old imp utility required the DBA to create the user ID before importing.
 In Data Pump, we can stop and restart jobs (see the sketch below).
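For example, a running job can be stopped and later resumed from interactive command mode. This minimal sketch assumes a schema-level export running under SCOTT with the default job name SYS_EXPORT_SCHEMA_01:

$ expdp scott/tiger attach=SYS_EXPORT_SCHEMA_01

Export> stop_job=immediate

# Later, re-attach and resume the job.
$ expdp scott/tiger attach=SYS_EXPORT_SCHEMA_01

Export> start_job
Export> continue_client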

Why is expdp faster than exp (i.e. why is Data Pump faster than conventional export/import)?

 Data Pump is block mode, exp is byte mode.


 Data Pump will do parallel execution.
 Data Pump uses direct path API.
 Where is the job information stored in Data Pump (i.e. if you restart a job, how does it know where to resume)? Whenever a Data Pump export or import is running, Oracle creates a master table named after the JOB_NAME, and drops it once the job is done. From this table Oracle determines how much of the job has completed and where to continue from.
 Default export job name will be SYS_EXPORT_XXXX_01, where XXXX
can be FULL or SCHEMA or TABLE.
 Default import job name will be SYS_IMPORT_XXXX_01, where XXXX
can be FULL or SCHEMA or TABLE.
 Data Pump typically gives a 15-50% performance improvement over exp/imp.
 Export and import can be performed over the network using database links, even without generating a dump file, using the NETWORK_LINK parameter.
 The CONTENT parameter controls what is exported, with the options METADATA_ONLY, DATA_ONLY and ALL.
 Several parameter names changed in Data Pump, which often causes confusion with the equivalent exp/imp parameters (see the command sketch after the table).

SLNO  EXP/IMP Parameter        EXPDP/IMPDP Parameter
1     owner                    schemas
2     file                     dumpfile
3     log                      logfile/nologfile
4     IMP: fromuser, touser    IMPDP: remap_schema
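As a minimal sketch of the renamed parameters (assuming a SCOTT schema, a directory object called TEST_DIR like the one created later in this article, and an example dump file name scott.dmp), equivalent commands look like this:

# Traditional export/import (files written by the client).
exp system/password owner=SCOTT file=scott.dmp log=exp_scott.log
imp system/password file=scott.dmp log=imp_scott.log fromuser=SCOTT touser=SCOTT2

# Data Pump equivalents (files written via a server-side directory object).
expdp system/password schemas=SCOTT directory=TEST_DIR dumpfile=scott.dmp logfile=expdp_scott.log
impdp system/password remap_schema=SCOTT:SCOTT2 directory=TEST_DIR dumpfile=scott.dmp logfile=impdp_scott.log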

Additional features available with Data Pump:

1. Job estimation can be done in Data Pump.
2. The SAMPLE parameter can be used to export a percentage sample of the data.
3. Failed export/import jobs can be restarted.
4. The EXCLUDE/INCLUDE parameters allow fine-grained object selection.
5. Data remapping can be done using the REMAP_DATA parameter.
6. Export and import can be performed over the network using database links, even without generating a dump file, using the NETWORK_LINK parameter.
7. The CONTENT parameter controls what is exported, with the options METADATA_ONLY, DATA_ONLY and ALL.
8. You don't need to specify the BUFFER size in Data Pump.
9. Estimated job completion can be monitored from the v$session_longops view (see the query sketch after this list).
10. Dump files can be compressed with the COMPRESSION parameter. In conventional exp/imp you have to compress the dumps using OS utilities.
11. Data encryption can be done in Data Pump.
12. Data Pump has interactive options like ADD_FILE, START_JOB, KILL_JOB and STOP_JOB.
13. The REUSE_DUMPFILES parameter allows an existing dump file to be overwritten.
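As a quick monitoring sketch (the column lists are illustrative; dba_datapump_jobs and v$session_longops are standard views):

-- List the current Data Pump jobs.
SELECT owner_name, job_name, operation, job_mode, state
FROM   dba_datapump_jobs;

-- Estimated progress of long-running operations, including Data Pump jobs.
SELECT opname, sofar, totalwork, ROUND(sofar/totalwork*100, 2) AS pct_done
FROM   v$session_longops
WHERE  totalwork > 0
AND    sofar < totalwork;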
To export only object definitions (no data), use CONTENT=METADATA_ONLY.

How do you import into a different user/tablespace/datafile/table?

Use the remap parameters: REMAP_SCHEMA, REMAP_TABLESPACE, REMAP_DATAFILE, REMAP_TABLE and REMAP_DATA.

How can you check whether a dump file was taken by exp or expdp?

For a conventional export, the logfile ends with "Export terminated":

$ tail -1 exp_user1.log
Export terminated successfully without warnings.

For Data Pump, the logfile ends with "Job":

$ tail -1 expdp_user1.log
Job "SYS"."SYS_EXPORT_SCHEMA_01" successfully completed.

The simplest check is to just run imp against the file: if it throws an error, the dump was created with expdp, because a dump from expdp cannot be used with imp and vice versa.

Oracle Data Pump (expdp, impdp) in Oracle Database 10g, 11g, 12c, 18c, 19c, 21c
Oracle Data Pump is a newer, faster and more flexible alternative to the "exp" and
"imp" utilities used in previous Oracle versions. In addition to basic import and
export functionality, Data Pump provides a PL/SQL API and support for external
tables.

This article was originally written against Oracle 10g, but the information is still
relevant up to and including the latest versions of Oracle. New features are broken
out into separate articles, but the help section at the bottom is up to date with the
latest versions.

 Getting Started
 Table Exports/Imports
 Schema Exports/Imports
 Database Exports/Imports
 INCLUDE and EXCLUDE
 CONTENT and QUERY
 Network Exports/Imports (NETWORK_LINK)
 Flashback Exports
 Miscellaneous Information
 Data Pump API
 External Tables (Unloading/Loading Data Using External Tables)
 Secure External Password Store
 Roles
 Interactive Command Mode
 Help

o expdp
o impdp
 Time Zone File Version

Related articles.

 Data Pump (expdp, impdp) : All Articles


 Data Pump Quick Links : 10g, 11g, 12cR1, 12cR2, 18c, 19c, 21c, Transportable
Tablespaces
 SQL Developer 3.1 Data Pump Wizards (expdp, impdp)
 Oracle Cloud : Autonomous Data Warehouse (ADW) - Import Data from an
Object Store (impdp)

Getting Started
For the examples to work we must first unlock the SCOTT account and create a
directory object it can access. The directory object is only a pointer to a physical
directory, creating it does not actually create the physical directory on the file system
of the database server.

CONN / AS SYSDBA
ALTER USER scott IDENTIFIED BY tiger ACCOUNT UNLOCK;

CREATE OR REPLACE DIRECTORY test_dir AS '/u01/app/oracle/oradata/';


GRANT READ, WRITE ON DIRECTORY test_dir TO scott;

Existing directories can be queried using the ALL_DIRECTORIES view.
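For example, a quick check of the defined directory objects (using the standard columns of that view) might look like this:

COLUMN directory_path FORMAT A60
SELECT directory_name, directory_path FROM all_directories ORDER BY directory_name;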

Data Pump is a server-based technology, so it typically deals with directory objects pointing to physical directories on the database server. It does not write to the local file system on your client PC.

Table Exports/Imports
The TABLES parameter is used to specify the tables that are to be exported. The
following is an example of the table export and import syntax.

expdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log

impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log

For example output files see expdpEMP_DEPT.log and impdpEMP_DEPT.log.

The TABLE_EXISTS_ACTION=APPEND parameter allows data to be imported into existing tables.
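As a minimal sketch reusing the EMP_DEPT.dmp file from the example above, an import that appends rows to the existing tables could look like this:

impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log table_exists_action=APPEND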

Schema Exports/Imports
The OWNER parameter of exp has been replaced by the SCHEMAS parameter which is
used to specify the schemas to be exported. The following is an example of the
schema export and import syntax.

expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log

impdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=impdpSCOTT.log

For example output files see expdpSCOTT.log and impdpSCOTT.log.

Database Exports/Imports
The FULL parameter indicates that a complete database export is required. The
following is an example of the full database export and import syntax.

expdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=expdpDB10G.log

impdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=impdpDB10G.log

For an example output file see expdpDB10G.log.

The database user performing the export will need the DATAPUMP_EXP_FULL_DATABASE role, and the user performing the import will need the DATAPUMP_IMP_FULL_DATABASE role.
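As a sketch, these roles would be granted by a privileged user; the dp_admin account used here is purely hypothetical:

CONN / AS SYSDBA
GRANT DATAPUMP_EXP_FULL_DATABASE TO dp_admin;
GRANT DATAPUMP_IMP_FULL_DATABASE TO dp_admin;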

INCLUDE and EXCLUDE


The INCLUDE and EXCLUDE parameters can be used to limit the export/import to
specific objects. When the INCLUDE parameter is used, only those objects specified
by it will be included in the export/import. When the EXCLUDE parameter is used, all
objects except those specified by it will be included in the export/import. The two
parameters are mutually exclusive, so use the parameter that requires the least
entries to give you the result you require. The basic syntax for both parameters is the
same.

INCLUDE=object_type[:name_clause] [, ...]
EXCLUDE=object_type[:name_clause] [, ...]

The following code shows how they can be used as command line parameters.

expdp scott/tiger@db10g schemas=SCOTT include=TABLE:"IN ('EMP', 'DEPT')" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log

expdp scott/tiger@db10g schemas=SCOTT exclude=TABLE:"= 'BONUS'" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log

If the parameter is used from the command line, depending on your OS, the special
characters in the clause may need to be escaped, as follows. Because of this, it is
easier to use a parameter file.

include=TABLE:\"IN (\'EMP\', \'DEPT\')\"
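For example, a small parameter file (the name scott_inc.par is arbitrary) sidesteps the escaping problem. The file contains:

schemas=SCOTT
directory=TEST_DIR
dumpfile=SCOTT.dmp
logfile=expdpSCOTT.log
include=TABLE:"IN ('EMP', 'DEPT')"

It is then referenced using the PARFILE parameter.

expdp scott/tiger@db10g parfile=scott_inc.par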

A single import/export can include multiple references to the parameters, so to export tables, views and some packages we could use either of the following approaches.

INCLUDE=TABLE,VIEW,PACKAGE:"LIKE '%API'"

or

INCLUDE=TABLE
INCLUDE=VIEW
INCLUDE=PACKAGE:"LIKE '%API'"

Multiple objects can be targeted in one statement using the LIKE and IN operators.

EXCLUDE=SCHEMA:"LIKE 'SYS%'"

EXCLUDE=SCHEMA:"IN
('OUTLN','SYSTEM','SYSMAN','FLOWS_FILES','APEX_030200','APEX_PUBLIC_USER','ANONYM
OUS')"

The valid object type paths that can be included or excluded can be displayed using
the DATABASE_EXPORT_OBJECTS, SCHEMA_EXPORT_OBJECTS,
and TABLE_EXPORT_OBJECTS views.
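For example, the table-related paths available to a schema-level job can be listed with a query like the following (the LIKE filter is only an illustration):

SELECT object_path, comments
FROM   schema_export_objects
WHERE  object_path LIKE 'TABLE%'
ORDER BY object_path;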
CONTENT and QUERY
The CONTENT parameter allows you to alter the contents of the export. The following
command uses the METADATA_ONLY parameter value to export the contents of the
schema without the data.

expdp system/password@db10g schemas=SCOTT directory=TEST_DIR dumpfile=scott_meta.dmp logfile=expdp.log content=METADATA_ONLY

To capture the data without the metadata use the DATA_ONLY parameter value.

expdp system/password@db10g schemas=SCOTT directory=TEST_DIR dumpfile=scott_data.dmp logfile=expdp.log content=DATA_ONLY

The QUERY parameter allows you to alter the rows exported from one or more tables.
The following example does a full database export, but doesn't include the data for
the EMP and DEPT tables.

expdp system/password@db10g full=Y directory=TEST_DIR dumpfile=full.dmp logfile=expdp_full.log query='SCOTT.EMP:"WHERE deptno=0",SCOTT.DEPT:"WHERE deptno=0"'

The way you handle quotes on the command line will vary depending on what you
are trying to achieve. Here are some examples that work for single tables and
multiple tables directly from the command line.

# Single Table. Multiple quoting methods possible.
expdp scott/tiger@pdb1 schemas=scott directory=TEST_DIR dumpfile=scott1.dmp logfile=scott1.log query=SCOTT.EMP:'"WHERE deptno=10"'
expdp scott/tiger@pdb1 schemas=scott directory=TEST_DIR dumpfile=scott2.dmp logfile=scott2.log query=SCOTT.EMP:\"WHERE deptno=10\"
expdp scott/tiger@pdb1 schemas=scott directory=TEST_DIR dumpfile=scott3.dmp logfile=scott3.log query='SCOTT.EMP:"WHERE deptno=10"'

# Multiple WHERE clauses, one per table.
expdp scott/tiger@pdb1 schemas=scott directory=TEST_DIR dumpfile=scott4.dmp logfile=scott4.log query='SCOTT.EMP:"WHERE deptno=10",SCOTT.DEPT:"WHERE deptno=20"'

Network Exports/Imports (NETWORK_LINK)


The NETWORK_LINK parameter identifies a database link to be used as the source for
a network export/import. The following database link will be used to demonstrate its
use.

CONN / AS SYSDBA
GRANT CREATE DATABASE LINK TO test;

CONN test/test
CREATE DATABASE LINK remote_scott CONNECT TO scott IDENTIFIED BY tiger USING 'DEV';

In the case of exports, the NETWORK_LINK parameter identifies the database link pointing to the source server. The objects are exported from the source server in the normal manner, but written to a directory object on the local server, rather than one on the source server. Both the local and remote users require the EXP_FULL_DATABASE role granted to them.

expdp test/test@db10g tables=SCOTT.EMP network_link=REMOTE_SCOTT directory=TEST_DIR dumpfile=EMP.dmp logfile=expdpEMP.log

For imports, the NETWORK_LINK parameter also identifies the database link pointing to the source server. The difference here is the objects are imported directly from the source into the local server without being written to a dump file. Although there is no need for a DUMPFILE parameter, a directory object is still required for the logs associated with the operation. Both the local and remote users require the IMP_FULL_DATABASE role granted to them.

impdp test/test@db10g tables=SCOTT.EMP network_link=REMOTE_SCOTT directory=TEST_DIR logfile=impdpSCOTT.log remap_schema=SCOTT:TEST

Flashback Exports
The exp utility used the CONSISTENT=Y parameter to indicate the export should be
consistent to a point in time. By default the expdp utility exports are only consistent
on a per table basis. If you want all tables in the export to be consistent to the same
point in time, you need to use the FLASHBACK_SCN or FLASHBACK_TIME parameter.

The FLASHBACK_TIME parameter value is converted to the approximate SCN for the specified time.

expdp ..... flashback_time=systimestamp

# In parameter file.
flashback_time="to_timestamp('09-05-2011 09:00:00', 'DD-MM-YYYY HH24:MI:SS')"

# Escaped on command line.
expdp ..... flashback_time=\"to_timestamp\(\'09-05-2011 09:00:00\', \'DD-MM-YYYY HH24:MI:SS\'\)\"

Not surprisingly, you can make exports consistent to an earlier point in time by
specifying an earlier time or SCN, provided you have enough UNDO space to keep a
read consistent view of the data during the export operation.

If you prefer to use the SCN, you can retrieve the current SCN using one of the
following queries.
SELECT current_scn FROM v$database;
SELECT DBMS_FLASHBACK.get_system_change_number FROM dual;
SELECT TIMESTAMP_TO_SCN(SYSTIMESTAMP) FROM dual;

That SCN is then used with the FLASHBACK_SCN parameter.

expdp ..... flashback_scn=5474280

The following queries may prove useful for converting between timestamps and
SCNs.

SELECT TIMESTAMP_TO_SCN(SYSTIMESTAMP) FROM dual;
SELECT SCN_TO_TIMESTAMP(5474751) FROM dual;

In 11.2, the introduction of legacy mode means that you can use
the CONSISTENT=Y parameter with the expdp utility if you wish.

Miscellaneous Information
Unlike the original exp and imp utilities, all Data Pump ".dmp" and ".log" files are created on the Oracle server, not the client machine.

Data Pump performance can be improved by using the PARALLEL parameter. This should be used in conjunction with the "%U" wildcard in the DUMPFILE parameter to allow multiple dump files to be created or read. The same wildcard can be used during the import to allow you to reference multiple files.

expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR parallel=4 dumpfile=SCOTT_%U.dmp logfile=expdpSCOTT.log

impdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR parallel=4 dumpfile=SCOTT_%U.dmp logfile=impdpSCOTT.log

The DBA_DATAPUMP_JOBS view can be used to monitor the current jobs.

system@db10g> select * from dba_datapump_jobs;

OWNER_NAME                     JOB_NAME                       OPERATION
------------------------------ ------------------------------ ------------------------------
JOB_MODE                       STATE                              DEGREE ATTACHED_SESSIONS
------------------------------ ------------------------------ ---------- -----------------
SYSTEM                         SYS_EXPORT_FULL_01             EXPORT
FULL                           EXECUTING                               1                 1

Data Pump API

Along with the Data Pump utilities Oracle provides a PL/SQL API (DBMS_DATAPUMP). You can see more examples of its use here. The following is an example of how this API can be used to perform a schema export.

DECLARE
  l_dp_handle NUMBER;
BEGIN
  -- Open a schema export job.
  l_dp_handle := DBMS_DATAPUMP.open(
    operation   => 'EXPORT',
    job_mode    => 'SCHEMA',
    remote_link => NULL,
    job_name    => 'SCOTT_EXPORT',
    version     => 'LATEST');

  -- Specify the dump file name and directory object name.
  DBMS_DATAPUMP.add_file(
    handle    => l_dp_handle,
    filename  => 'SCOTT.dmp',
    directory => 'TEST_DIR');

  -- Specify the log file name and directory object name.
  DBMS_DATAPUMP.add_file(
    handle    => l_dp_handle,
    filename  => 'expdpSCOTT.log',
    directory => 'TEST_DIR',
    filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

  -- Specify the schema to be exported.
  DBMS_DATAPUMP.metadata_filter(
    handle => l_dp_handle,
    name   => 'SCHEMA_EXPR',
    value  => '= ''SCOTT''');

  DBMS_DATAPUMP.start_job(l_dp_handle);

  DBMS_DATAPUMP.detach(l_dp_handle);
END;
/

Once the job has started, the status can be checked using the following.

system@db10g> select * from dba_datapump_jobs;

The following is an example of how this API can be used to perform a schema import
with a schema remap operation.

DECLARE
  l_dp_handle NUMBER;
BEGIN
  -- Open a schema import job.
  l_dp_handle := DBMS_DATAPUMP.open(
    operation   => 'IMPORT',
    job_mode    => 'SCHEMA',
    remote_link => NULL,
    job_name    => 'SCOTT_IMPORT',
    version     => 'LATEST');

  -- Specify the dump file name and directory object name.
  DBMS_DATAPUMP.add_file(
    handle    => l_dp_handle,
    filename  => 'SCOTT.dmp',
    directory => 'TEST_DIR');

  -- Specify the log file name and directory object name.
  DBMS_DATAPUMP.add_file(
    handle    => l_dp_handle,
    filename  => 'impdpSCOTT.log',
    directory => 'TEST_DIR',
    filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

  -- Perform a REMAP_SCHEMA from SCOTT to SCOTT2.
  DBMS_DATAPUMP.metadata_remap(
    handle    => l_dp_handle,
    name      => 'REMAP_SCHEMA',
    old_value => 'SCOTT',
    value     => 'SCOTT2');

  DBMS_DATAPUMP.start_job(l_dp_handle);

  DBMS_DATAPUMP.detach(l_dp_handle);
END;
/

External Tables (Unloading/Loading Data Using External Tables)
Oracle has incorporated support for Data Pump technology into external tables. The ORACLE_DATAPUMP access driver can be used to unload data to Data Pump export files and subsequently reload it. The unload of data occurs when the external table is created using the "AS" clause.

CREATE TABLE emp_xt
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY test_dir
  LOCATION ('emp_xt.dmp')
)
AS SELECT * FROM emp;

The data can then be queried using the following.

SELECT * FROM emp_xt;


The syntax to create the external table pointing to an existing file is similar, but without the "AS" clause. In this case we will do it in the same schema, but this could be in a different schema in the same instance, or in an entirely different instance.

DROP TABLE emp_xt;

CREATE TABLE emp_xt (
  EMPNO    NUMBER(4),
  ENAME    VARCHAR2(10),
  JOB      VARCHAR2(9),
  MGR      NUMBER(4),
  HIREDATE DATE,
  SAL      NUMBER(7,2),
  COMM     NUMBER(7,2),
  DEPTNO   NUMBER(2))
ORGANIZATION EXTERNAL (
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY test_dir
  LOCATION ('emp_xt.dmp')
);

SELECT * FROM emp_xt;

Creating an external table using the ORACLE_DATAPUMP access driver is restricted to dump files created by the external table unload.

Secure External Password Store


You can also use the secure external password store to provide credentials for data
pump.

$ expdp /@db10g_test tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log
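A minimal setup sketch, assuming a client wallet located in /u01/app/oracle/wallet and a TNS alias of db10g_test (both the path and the alias are examples only), might look like this:

# Create the wallet and store the credentials for the TNS alias.
$ mkstore -wrl /u01/app/oracle/wallet -create
$ mkstore -wrl /u01/app/oracle/wallet -createCredential db10g_test scott tiger

# sqlnet.ora entries so clients pick up the wallet.
WALLET_LOCATION =
  (SOURCE = (METHOD = FILE)
    (METHOD_DATA = (DIRECTORY = /u01/app/oracle/wallet)))
SQLNET.WALLET_OVERRIDE = TRUE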

Roles
The database user performing the export and import operations will need the appropriate level of privilege to complete the actions. For example, if the user can't create a table in the schema, they will not be able to import a table into that schema.

Some operations, including those at database level, will need the DATAPUMP_EXP_FULL_DATABASE and/or DATAPUMP_IMP_FULL_DATABASE roles. These are very powerful, so don't grant them without careful consideration.

Interactive Command Mode


All data pump actions are performed by multiple jobs
(DBMS_SCHEDULER not DBMS_JOB jobs). These jobs are controlled by a master
control process which uses Advanced Queuing. At runtime an advanced queue table,
named after the job name, is created and used by the master control process. The
table is dropped on completion of the data pump job. The job and the advanced
queue can be named using the JOB_NAME parameter. Cancelling the client process
does not stop the associated data pump job. Issuing "CTRL+C" on the client during a
job stops the client output and puts you into interactive command mode. You can
read about this in more detail here. Typing "status" at this prompt allows you
to monitor the current job.

Export> status

Job: SYS_EXPORT_FULL_01
Operation: EXPORT
Mode: FULL
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: D:\TEMP\DB10G.DMP
bytes written: 4,096

Worker 1 Status:
State: EXECUTING
Object Schema: SYSMAN
Object Name: MGMT_CONTAINER_CRED_ARRAY
Object Type: DATABASE_EXPORT/SCHEMA/TYPE/TYPE_SPEC
Completed Objects: 261
Total Objects: 261
