
Using Oracle Database 10g Data Pump Restart Capability

Oracle Data Pump is a new feature in Oracle Database 10g that enables very high-
speed movement of data and metadata between databases. This technology is the
basis for Oracle's new data movement utilities, Data Pump Export and Data Pump
Import.
One very prominent feature of Data Pump is the ability to restart jobs. The ability to
restart a Data Pump Export or Import job is extremely valuable to the DBA who
might be responsible for moving large amounts of data, especially for big jobs that
take a long time to complete. A Data Pump job can be restarted with no data loss or
corruption after an unexpected failure or after a STOP_JOB command is issued from
the Export or Import interactive mode.
A very common reason to restart a Data Pump job is that a failure, such as a power
failure, an internal error, or an accidental instance bounce, prevented the job from
succeeding. Failures can also stem from system resource issues, such as insufficient
dump file space (in the Data Pump Export case) or insufficient tablespace resources
(in the Data Pump Import case). Upon Data Pump job failure,
the DBA or user has the ability to intervene to correct a problem. A Data Pump
restart command (START_JOB) can then be issued to continue the job from the point
of failure.
This Technical Note describes Data Pump restart capability with two examples, using
Data Pump Export and Import command line utilities, respectively. In both examples,
it is necessary to define a directory object, DATA_PUMP_DIR, for the dump files.
Furthermore, the Data Pump user, which in our examples is SYSTEM, needs to hold
the exp_full_database and imp_full_database roles. Restart also works for
unprivileged users. (See Oracle Database Utilities 10g Release 1 (10.1) for additional
information about Data Pump and its use of directory objects.)
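
For reference, the directory object and grants can be created with SQL along these
lines. This is only a sketch: the file system path is taken from the dump file
locations shown later in this note, and granting the roles explicitly may be
redundant for SYSTEM, which typically already holds them.

SQL> create directory data_pump_dir as '/work1/private/oracle/rdbms/log';
SQL> grant read, write on directory data_pump_dir to system;
SQL> grant exp_full_database, imp_full_database to system;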

Example 1: Restart Data Pump Export


Our first example demonstrates how the restart capability can be used during a Data
Pump Export. We will perform a Data Pump Export of the HR schema, specifying the
maximum size of the dump file. Data Pump users typically specify the maximum
dump file size (filesize) as a mechanism to manage on-disk resources. In this
example, our job will fail because the specified dump file size is too small.
Step 1: Start Export
In this example, we'll use the "expdp" client interface. An optional job_name has
been specified on the command line, which may make it easier for you to find and
attach to the job by name at a later time.
Here is the export command:

> expdp system/manager schemas=hr directory=data_pump_dir \
    logfile=example1.log filesize=300000 dumpfile=example1.dmp \
    job_name=EXAMPLE1

The output will look something like this:

Export: Release 10.1.0.2.0 - Production on Tuesday, 06 July, 2004 6:37


.
.
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
.
. . exported "HR"."COUNTRIES" 6.078 KB 25 rows
. . exported "HR"."DEPARTMENTS" 6.632 KB 27 rows

ORA-39095: Dump file space has been exhausted: Unable to allocate 217088 bytes
Job "SYSTEM"."EXAMPLE1" stopped due to fatal error at 06:38
>

Step 2: Attach to the Job

Our Export job (EXAMPLE1) has encountered a fatal error and the client has returned
to the operating system prompt (>). We can examine the job state by invoking the
following query:

SQL> select job_name,state from dba_datapump_jobs;

JOB_NAME STATE
------------------------------ ------------------------------
EXAMPLE1 NOT RUNNING
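
To see whether any client sessions are still attached to the job, you can also query
the DBA_DATAPUMP_SESSIONS view (shown here for illustration):

SQL> select owner_name, job_name from dba_datapump_sessions;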

In this simple example, it's quite obvious what the problem is. The dump file we
specified is too small for the HR schema. We can determine the reason for the error
by looking at the client output that was displayed on our screen or the Data Pump
log file.
To fix this problem, we need to add a second dump file. Let's attach to our job using
the "EXAMPLE1" name. When we successfully attach to our job, the job status and
other interesting information about the job is displayed.

>expdp system/manager attach=EXAMPLE1


Export: Release 10.1.0.2.0 - Production on Tuesday, 06 July, 2004 6:38
.
.
Job: EXAMPLE1
Owner: SYSTEM
Operation: EXPORT
.
.
Total Objects: 7
Worker Parallelism: 1
Step 3: Add a Dump File
At this juncture, a dump file can be added by issuing the ADD_FILE directive at the
Export> prompt. The new dump file will automatically be created in the same
directory as our original dump file (DATA_PUMP_DIR).

Export>add_file=hr1.dmp
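
Note that ADD_FILE also accepts an optional directory object prefix if the new file
should be written somewhere other than the original directory. The directory object
and file name below are hypothetical:

Export> add_file=dpump_dir2:hr2.dmp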

We can then issue the STATUS command and see that the additional dump file is
now displayed.

Export>status
Job: EXAMPLE1
Operation: EXPORT
Mode: SCHEMA
State: IDLING
Bytes Processed: 55,944
Percent Done: 99
Current Parallelism: 1
Job Error Count: 0
Dump File: /work1/private/oracle/rdbms/log/example1.dmp
size: 303,104
bytes written: 163,840
Dump File: /work1/private/oracle/rdbms/log/hr1.dmp
bytes written: 4,096

Step 4: Restart/Continue the Job

Finally, we issue the CONTINUE_CLIENT command. The job EXAMPLE1 will now
resume.

Export>continue_client

Export> Job EXAMPLE1 has been reopened at Tuesday, 06 July, 2004 6:38
Restarting "SYSTEM"."EXAMPLE1": system/******** schemas=hr
directory=data_pump_dir logfile=example1.log filesize=300000
dumpfile=example1.dmp job_name=EXAMPLE1
Master table "SYSTEM"."EXAMPLE1" successfully loaded/unloaded
***************************************************************************
Dump file set for SYSTEM.EXAMPLE1 is:
/work1/private/oracle/rdbms/log/example1.dmp
/work1/private/oracle/rdbms/log/hr1.dmp
Job "SYSTEM"."EXAMPLE1" completed with 1 error(s) at 06:38

We could have alternatively used the START_JOB command. The CONTINUE_CLIENT
command changes the mode from interactive-command mode to logging mode and
then does a START_JOB.
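
To illustrate that alternative, we could instead have restarted the job from the
Export> prompt and then detached with EXIT_CLIENT, leaving the job to run in the
background:

Export> start_job
Export> exit_client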
Example 2: Restart Data Pump Import—Resumable Wait Timeout

In Example 2, we will demonstrate Data Pump restart capability by doing a remap
tablespace Import operation. Our Data Pump job will experience what is called a
resumable wait, caused by insufficient target tablespace resources. We show how
the DBA can intervene by adding an additional data file to the tablespace and
subsequently restarting the import job.

Step 1: Create a New Tablespace


Our dump file contains various schemas that we would like imported into a new
tablespace. We have our target database up and running. First, it is necessary for the
DBA to create the new tablespace for our import schemas. We'll need to bring up
SQL*Plus and issue the following command:

SQL> create tablespace example2
     datafile '/work1/private/rdbms/dbs/example2.f'
     size 1M extent management local;

Step 2: Start the Import

Now that our target tablespace has been created, we are ready to perform the Data
Pump Import job by using this command:

>impdp system/manager dumpfile=example2.dmp \
    remap_tablespace=system:example2 logfile=example2imp.log \
    job_name=example2

Import: Release 10.1.0.2.0 - Production on Tuesday, 06 July, 2004 6:54


.
.
.
Processing object type SCHEMA_EXPORT/TABLE/TABLE

ORA-39171: Job is experiencing a resumable wait.
ORA-01658: unable to create INITIAL extent for segment in tablespace EXAMPLE2

Step 3: Stop the Job—Add a Tablespace File

Our Import job has entered the resumable wait state and is hung. This job will stay
in a resumable wait until the job is stopped or until the resumable wait period
expires, which by default is two hours. At this juncture, the DBA can intervene by
adding an additional data file to the EXAMPLE2 tablespace. One very good reason to
stop the job is if the DBA has to perform maintenance on the disk subsystem in
conjunction with adding the data file. In the general case it may not be necessary to
stop the job.
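
While the job is in the resumable wait, the DBA can confirm the condition from
another session by querying the DBA_RESUMABLE view; a query along these lines
(shown for illustration) returns the suspended statement and the error text:

SQL> select name, status, error_msg from dba_resumable;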
In our example, we will stop the job with a Control-C prior to the resumable wait
expiration.
^C
Import>stop_job=immediate
Step 4: Add a File to the Tablespace
We can invoke SQL*Plus and add a file to the EXAMPLE2 tablespace.
SQL>alter tablespace example2 add datafile '/work1/private/rdbms/dbs/example2b.f'
size 1m autoextend on maxsize 50m;
Step 5: Attach to the Job
We are now ready to attach to our job and restart our import. Note that we attach to
the job by job_name; in this case EXAMPLE2.

>impdp system/manager attach=example2

Import: Release 10.1.0.2.0 - Production on Tuesday, 6 July, 2004 07:01

Copyright (c) 2003, Oracle. All rights reserved.


.
.
.
Job Error Count: 0
Dump File: /work1/private/oracle/rdbms/log/example2.dmp

Worker 1 Status:
State: UNDEFINED
Object Schema: HR
Object Name: COUNTRIES
Object Type: SCHEMA_EXPORT/TABLE/TABLE
Completed Objects: 15
Worker Parallelism: 1

Step 6: Restart the Job

Now we can start the job again. This time, we'll use START_JOB.

Import> start_job

Step 7: Check the Job Status

We can optionally check the status of the job.

Import> status

Job: EXAMPLE2
Operation: IMPORT
Mode: SCHEMA
State: EXECUTING
Bytes Processed: 2,791,768
Percent Done: 99
Current Parallelism: 1
Job Error Count: 0
Dump File: /work1/private/oracle/rdbms/log/example2.dmp

Worker 1 Status:
State: EXECUTING
Object Type: SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Worker Parallelism: 1

When the job completes, you will be able to check the example2imp.log file for job
status and other information.
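
To confirm from the database that the job has finished, you can query
DBA_DATAPUMP_JOBS again; once a Data Pump job completes and its master table is
dropped, the job no longer appears in the view (query shown for illustration):

SQL> select job_name, state from dba_datapump_jobs
     where job_name = 'EXAMPLE2';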
In Example 2, we demonstrated how to restart a Data Pump import job. It's
important to note that normally it would not be necessary to stop the job (in Step 3)
in order to add the second data file. We could have simply added the file to the
tablespace from another session. In other words, we could have skipped over Steps
3, 5, 6, and 7. The job would have automatically resumed in this case.
Summary
If you use Data Pump and experience a failure, you may be able to easily correct the
problem and then use the Data Pump restart capability without any loss of data, and
without having to completely redo the operation.
