
Original Export and Import Versus Data Pump Export and Import

If you have worked with databases prior to Oracle 10g, you are probably familiar with Oracle's original exp/imp utilities. Oracle 10g introduces a new feature called Data Pump Export and Import. Data Pump Export/Import differs from the original Export/Import in the following ways:
1) Expdp/Impdp are self-tuning utilities. Tuning parameters that were used in original Export and Import, such as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import.
2) Data Pump represents metadata in the dump file set as XML documents rather than as DDL commands.
3) Expdp/Impdp use parallel execution rather than a single stream of execution, for improved performance.
4) In Data Pump, running expdp FULL=y and then impdp SCHEMAS=prod gives the same result as running expdp SCHEMAS=prod and then impdp FULL=y; original Export/Import does not always exhibit this behavior (see the sketch after this list).
5) Expdp/Impdp access files on the server rather than on the client.
6) Expdp/Impdp operate on a group of files called a dump file set rather than on a single sequential dump file.
7) Sequential media, such as tapes and pipes, are not supported by Oracle Data Pump, whereas with original export/import we could compress the dump directly through a pipe.
8) The Data Pump method for moving data between different database versions is different from the method used by original Export/Import.
9) When you are importing data into an existing table using either APPEND or TRUNCATE, if any row violates an active constraint, the load is discontinued and no data is loaded. This differs from original Import, which logs any rows that are in violation and continues with the load.
10) Expdp/Impdp consume more undo tablespace than original Export and Import.
11) If a table has compression enabled, Data Pump Import attempts to compress the data being loaded, whereas the original Import utility loaded data in such a way that even if a table had compression enabled, the data was not compressed upon import.
12) Data Pump supports character set conversion for both direct path and external tables. Most of the restrictions that exist for character set conversions in the original Import utility do not apply to Data Pump. The one case in which character set conversions are not supported under Data Pump is when using transportable tablespaces.
13) There is no option to merge extents when you re-create tables. In original Import, this was provided by the COMPRESS parameter. Instead, extents are reallocated according to the storage parameters of the target table.
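The symmetry described in point 4 can be illustrated with a pair of commands like the following. This is a minimal sketch: the system/password credentials, the prod schema, the directory object dpump_dir1, and the file names are assumptions for illustration.

> expdp system/password FULL=y DIRECTORY=dpump_dir1 DUMPFILE=full.dmp LOGFILE=expfull.log
> impdp system/password SCHEMAS=prod DIRECTORY=dpump_dir1 DUMPFILE=full.dmp LOGFILE=impprod.log

Importing only the prod schema out of a full dump should yield the same prod objects as taking a schema-mode export of prod and then importing it with FULL=y.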

Datapump Export handy examples: Analytical backup of your data

Here I want to show some handy examples of how to run a Data Pump export. When we take a conventional backup of our database, either by using RMAN or by using hot or cold backup methods, the output (backup) file actually contains a block-by-block copy of the source files. For these files to be used to restore the data, the following two conditions must be met:
 The platform must not be changed.
 The restore process must be able to identify the Oracle blocks in the backup files.
Interestingly, a dump export can instead be used as an analytical (logical) backup: it can be imported irrespective of platform and Oracle database version. Let us look at some examples of how to perform an export from the database.

Performing a Table-Mode Export

Issue the following Data Pump export command to perform a table-mode export of the tables employees and jobs from the human resources (hr) schema:

> expdp hr/hr TABLES=employees,jobs DUMPFILE=dpump_dir1:table.dmp NOLOGFILE=y

Because user hr is exporting tables in his own schema, it is not necessary to specify the schema name for the tables. The NOLOGFILE=y parameter indicates that no Export log file will be generated for the operation.

Data-Only Unload of Selected Tables and Rows

The following is the contents of a parameter file (exp.par) that you could use to perform a data-only unload of all tables in the human resources (hr) schema except for the tables countries and regions. Only rows in the employees table with a department_id other than 50 are unloaded, and the rows are ordered by employee_id.

DIRECTORY=dpump_dir1
DUMPFILE=dataonly.dmp
CONTENT=DATA_ONLY
EXCLUDE=TABLE:"IN ('COUNTRIES', 'REGIONS')"
QUERY=employees:"WHERE department_id !=50 ORDER BY employee_id"

You can issue the following command to execute the exp.par parameter file:

> expdp hr/hr PARFILE=exp.par

A schema-mode export (the default mode) is performed, but the CONTENT parameter effectively limits the export to an unload of just the tables' data. The DBA previously created the directory object dpump_dir1, which points to the directory on the server where user hr is authorized to read and write export dump files. The dump file dataonly.dmp is created in dpump_dir1.

Estimating Disk Space Needed in a Table-Mode Export

The ESTIMATE_ONLY parameter estimates the space that would be consumed in a table-mode export, without actually performing the export operation. Issue the following command to use the BLOCKS method to estimate the number of bytes required to export the data in three tables located in the human resources (hr) schema: employees, departments, and locations.

> expdp hr/hr DIRECTORY=dpump_dir1 ESTIMATE_ONLY=y TABLES=employees,departments,locations LOGFILE=estimate.log

The estimate is printed in the log file and displayed on the client's standard output device. The estimate is for table row data only; it does not include metadata.
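All of the examples above assume that the directory object dpump_dir1 already exists and that user hr may read and write through it. As a minimal sketch of how a DBA could set this up (the file system path /u01/app/oracle/dpdump is an assumption for illustration):

SQL> CREATE DIRECTORY dpump_dir1 AS '/u01/app/oracle/dpdump';  -- server-side path, assumed
SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO hr;          -- lets hr use it for dump and log files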

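To load the data-only dump file produced by exp.par back into an existing hr schema, a corresponding import might look like the following sketch. TABLE_EXISTS_ACTION=APPEND is an assumption about the desired behavior here; recall from point 9 above that with APPEND or TRUNCATE a single row violating an active constraint discontinues the load of that table.

> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=dataonly.dmp CONTENT=DATA_ONLY TABLE_EXISTS_ACTION=APPEND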
Performing a Schema-Mode Export

> expdp hr/hr DUMPFILE=dpump_dir1:expschema.dmp LOGFILE=dpump_dir1:expschema.log

Because no export mode is specified, the default schema mode is used, so the entire hr schema is exported.

Performing a Parallel Full Database Export

> expdp hr/hr FULL=y DUMPFILE=dpump_dir1:full1%U.dmp, dpump_dir2:full2%U.dmp FILESIZE=2G PARALLEL=3 LOGFILE=dpump_dir1:expfull.log JOB_NAME=expfull

Because this is a full database export, all data and metadata in the database will be exported. Dump files full101.dmp, full201.dmp, full102.dmp, and so on will be created in a round-robin fashion in the directories pointed to by the dpump_dir1 and dpump_dir2 directory objects. For best performance, these should be on separate I/O channels. Each file will be up to 2 gigabytes in size, and initially up to three files will be created. More files will be created as necessary. The job and master table will have the name expfull. The log file will be written to expfull.log in the dpump_dir1 directory.

Using Interactive Mode to Stop and Reattach to a Job

While the export is running, press Ctrl+C. This starts the interactive-command interface of Data Pump Export. In the interactive interface, logging to the terminal stops and the Export prompt is displayed. At the Export prompt, issue the following command to stop the job:

Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([y]/n): y

The job is placed in a stopped state and exits the client. Enter the following command to reattach to the job you just stopped:

> expdp hr/hr ATTACH=EXPFULL

After the job status is displayed, you can issue the CONTINUE_CLIENT command to resume logging mode and restart the expfull job:

Export> CONTINUE_CLIENT
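On a target database where the same dump file set and both directory objects are available, the matching full import could be sketched as follows. The system/password credentials are an assumption for illustration; a full import needs a suitably privileged account rather than hr.

> impdp system/password FULL=y DUMPFILE=dpump_dir1:full1%U.dmp, dpump_dir2:full2%U.dmp PARALLEL=3 LOGFILE=dpump_dir1:impfull.log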