
SQL Loader

This utility is used to load data from another database or data source into
Oracle. If you have a table in Access, Sybase, or a similar product, you can
use this utility to load its data into Oracle tables. SQL Loader reads data
from flat files only, so if we want to load data from Access or any other
database, we must first convert the data into a delimited-format or
fixed-length-format flat file.
The following is the procedure to load data from a third-party database into
Oracle using SQL Loader.
1. Convert the data into a flat file using a third-party database command.
2. Create the table structure in the Oracle database using appropriate
datatypes.
3. Write a control file describing how to interpret the flat file and the
options for loading the data.
4. Execute the SQL Loader utility, specifying the control file as a
command-line argument.
To understand it better, let us look at the following case study.
Suppose you have a table named EMP in MS Access, running under the
Windows O/S, with the following structure:
EMPNO INTEGER
NAME TEXT(50)
SAL CURRENCY
JDATE DATE
Assume this table contains some thousands of rows and that Oracle is
running under the Linux operating system.
Start MS Access and convert the table into a comma-delimited flat file
(popularly known as CSV) by clicking on the File/Save As menu. Let the
delimited file name be e.csv.
Now transfer this file to the Linux server using the FTP command:
Go to the command prompt in Windows.
At the command prompt, type FTP followed by the IP address of the server
running Oracle.
FTP will then prompt you for a username and password to connect to the
Linux server. Supply a valid username and password of an Oracle user on
the Linux machine.
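A sketch of such a session, with a placeholder IP address and account (substitute your own values):

```text
C:\> ftp 192.168.1.10
Name: madhu
Password: ******
ftp> ascii
ftp> put e.csv
ftp> bye
```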
For example, the file contents are as follows:
10,kiran,3000.40,04/20/2008
20,ramesh,4000.50,03/12/2008
30,david,3500.56,12/23/2007
Now go to the Linux machine and create a table in Oracle with the same
structure as in MS Access. Assume that we want to create the table in the
madhu account.
SQL> create table emp (empno number(5),
name varchar2(50),
sal number(10,2),
jdate date);

Table created.
After creating the table, we need to write a control file describing the
actions that SQL Loader should perform.
$ vi load.ctl
LOAD DATA
INFILE '/home/oracle/e.csv'
BADFILE '/home/oracle/e.bad'
DISCARDFILE '/home/oracle/e.dsc'
INSERT INTO TABLE emp
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(empno, name, sal, jdate date 'mm/dd/yyyy')
Now save and quit.
The LOAD DATA statement is required at the beginning of the control file.
The INFILE option specifies the location of the input file.
BADFILE is optional. If it is specified, bad records found during loading
will be stored in this file.
DISCARDFILE is optional. Records which do not meet a WHEN condition
will be written to this file.
And we can use any of the following loading options:
INSERT – Loads rows only if the target table is empty.
APPEND – Loads rows whether the target table is empty or not.
REPLACE – First deletes all the rows in the table, then loads the rows.
TRUNCATE – First truncates the table, then loads the rows.
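For instance, to add the rows from e.csv to an emp table that already holds data, the same control file can use APPEND in place of INSERT (a sketch based on the control file above):

```text
LOAD DATA
INFILE '/home/oracle/e.csv'
BADFILE '/home/oracle/e.bad'
DISCARDFILE '/home/oracle/e.dsc'
APPEND INTO TABLE emp
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(empno, name, sal, jdate date 'mm/dd/yyyy')
```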

Now call the SQL Loader utility as follows:


$ sqlldr userid=madhu/madhu control=load.ctl log=e.log
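After the run completes, it is worth confirming the load from SQL*Plus and checking e.log for any rejected rows. With the three-row sample file above, a clean load should show:

```sql
SQL> select count(*) from emp;

  COUNT(*)
----------
         3
```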

Transporting Tablespaces
In this approach, you take a set of self-contained, read-only tablespaces,
export only the metadata, copy the datafiles of those tablespaces at the OS
level to the target platform, and import the metadata into the data
dictionary, a process known as plugging in.
The tablespaces can be dictionary-managed or locally managed tablespaces.
Moving data using transportable tablespaces is much faster than performing
import or export operations, because the actual data in the datafiles is
copied to the destination at the file level and the import utility transfers
only the metadata of the tablespace objects to the new database.
Starting from Oracle 10g, we can transport tablespaces across platforms.
Using this facility we can migrate a database from one platform to another.
To see which platforms are supported:
SQL> select * from v$transportable_platform;
If the source and target platforms have different endianness, then the
tablespace being transported must be converted to the target format.
If they have the same endianness, no conversion is required.
Before a tablespace can be transported to a different platform, the datafile
header must identify the platform to which it belongs. In Oracle 10g we do
this by making the tablespace read-only at least once:
SQL> alter tablespace ts1 read only;
SQL> alter tablespace ts1 read write;
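When the endian formats do differ, the conversion is done with RMAN's CONVERT command before the datafiles are copied. A sketch run on the source side; the platform string and output path here are illustrative, so take the exact platform name from v$transportable_platform:

```sql
RMAN> convert tablespace test1
2> to platform 'Solaris[tm] OE (64-bit)'
3> format '/tmp/%U';
```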
Procedure for transporting tablespaces
● For cross-platform transport, check the endian format on both the source
and target machines. If both platforms have the same endian format, no
conversion is required.
● Select the set of tablespaces.
● Generate a transportable tablespace set.
A transportable tablespace set consists of the datafiles for the set of
tablespaces being transported and an export file containing structural
information for that set of tablespaces.
● Transport the tablespace set: copy the datafiles and the export file to
the target database.
● Plug in the tablespaces.
Example
Assume that the database name is sales and that we want to transport the
tablespace test1, whose datafile is /disk1/sales/tt1.dbf.
First we need to determine whether the platforms involved are supported.
Execute the following query on both the source and target machines:
SQL> column platform_name format A30;
SQL> select d.platform_name,endian_format from
2 v$transportable_platform tp, v$database d where
3 tp.platform_name=d.platform_name;

PLATFORM_NAME ENDIAN_FORMAT
------------------------------ --------------
Linux IA (32-bit) Little
If you can see that the endian formats are the same, no conversion is
required.
Next, check whether the tablespace being transported is self-contained.
That is, it should not have tables with foreign keys referring to primary
keys of tables in other tablespaces, and it should not have tables with some
partitions in other tablespaces. To find out whether the tablespace is
self-contained, do the following:
SQL> exec dbms_tts.transport_set_check('TEST1',TRUE);

PL/SQL procedure successfully completed.


Execute the following query to check whether there are any violations:
SQL> select * from transport_set_violations;

no rows selected
Generate a transportable tablespace set.
After ensuring you have a self-contained set of tablespaces that you want to
transport, generate a transportable tablespace set by performing the
following steps.
Make the tablespace read-only:
SQL> alter tablespace test1 read only;

Tablespace altered.
Execute the export utility, specifying the tablespaces in the transportable
tablespace set:
[oracle@linuxsrv1 ~]$ exp transport_tablespace=y tablespaces=TEST1
file=e1.dmp
Export: Release 10.2.0.1.0 - Production on Thu May 22 11:22:15 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Username: sys/oracle as sysdba


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
Note: table data (rows) will not be exported
About to export transportable tablespace metadata...
For tablespace TEST1 ...
. exporting cluster definitions
. exporting table definitions
. exporting referential integrity constraints
. exporting triggers
. end transportable tablespace metadata export
Export terminated successfully without warnings.

Transport both the export file and the datafiles to the target machine.
Plug in the tablespace set in the target database using the import utility:
[oracle@linuxsrv2 ~]$ imp tablespaces=TEST1 transport_tablespace=y
file=e1.dmp datafiles='/disk2/test/tt1.dbf'

Import: Release 10.2.0.1.0 - Production on Thu May 22 11:26:00 2008


Copyright (c) 1982, 2005, Oracle. All rights reserved.

Username: sys/oracle as sysdba


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.02.01 via conventional path


About to import transportable tablespace(s) metadata...
import done in US7ASCII character set and AL16UTF16 NCHAR character set
. importing SYS's objects into SYS
. importing SYS's objects into SYS
Import terminated successfully without warnings.
SQL> select tablespace_name,status from dba_tablespaces
2 where tablespace_name='TEST1';

TABLESPACE_NAME STATUS
------------------------------ ---------
TEST1 READ ONLY
Now make the tablespace read-write in the target database:
SQL> alter tablespace test1 read write;

Tablespace altered.
SQL> select tablespace_name,plugged_in from dba_tablespaces
2 where tablespace_name='TEST1';

TABLESPACE_NAME PLU
------------------------------ ---
TEST1 YES
Creating an external table

Since an external table's data resides in the operating system, its data file
needs to be in a place where Oracle can access it. So the first step is to
create a directory object and grant access to it.

employee_report.csv:
001,Hutt,Jabba,896743856,jabba@thecompany.com,18
002,Simpson,Homer,382947382,homer@thecompany.com,20
003,Kent,Clark,082736194,superman@thecompany.com,5
004,Kid,Billy,928743627,billythkid@thecompany.com,9
005,Stranger,Perfect,389209831,nobody@thecompany.com,23
006,Zoidberg,Dr,094510283,crustacean@thecompany.com,1
SQL> connect sys as sysdba
Enter password:
Connected.
SQL> create or replace directory xtern_data_dir
2 as '/oracle/feeds/xtern/mySID/data';

Directory created.

SQL> grant read,write on directory xtern_data_dir to bulk_load;

Grant succeeded.

The last step is to create the table. The CREATE TABLE statement for an
external table has two parts. The first part, like a normal CREATE TABLE,
has the table name and field specs. This is followed by a block of syntax
specific to external tables, which lets you tell Oracle how to interpret the
data in the external file.
SQL> connect bulk_load
Enter password:
Connected.
SQL> create table xtern_empl_rpt
2 ( empl_id varchar2(3),
3 last_name varchar2(50),
4 first_name varchar2(50),
5 ssn varchar2(9),
6 email_addr varchar2(100),
7 years_of_service number(2,0)
8 )
9 organization external
10 ( default directory xtern_data_dir
11 access parameters
12 ( records delimited by newline
13 fields terminated by ','
14 )
15 location ('employee_report.csv')
16 );

Table created.

With the create table statement, you've created table metadata in the data
dictionary and instructed Oracle how to direct the ORACLE_LOADER
access driver to parse the data in the datafile. Now, kick off the load by
accessing the table:
SQL> select * from xtern_empl_rpt ;

EMP LAST_NAME  FIRST_NAME SSN       EMAIL_ADDR                YEARS_OF_SERVICE
--- ---------- ---------- --------- ------------------------- ----------------
001 Hutt       Jabba      896743856 jabba@thecompany.com                    18
002 Simpson    Homer      382947382 homer@thecompany.com                    20
003 Kent       Clark      082736194 superman@thecompany.com                  5
004 Kid        Billy      928743627 billythkid@thecompany.com                9
005 Stranger   Perfect    389209831 nobody@thecompany.com                   23
006 Zoidberg   Dr         094510283 crustacean@thecompany.com                1

6 rows selected.
Oracle used the ORACLE_LOADER driver to process the file, and just as
with SQL*Loader, it created a log file that you can inspect to see what just
happened. The log file, along with the "bad" and "discard" files, will have
been written to the directory you specified as the "default directory" in your
CREATE TABLE statement, and the file names default to tablename_ospid.
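A handy consequence of this design is that the same external table can be repointed at a fresh data file without reloading anything. A sketch, where the new file name is hypothetical:

```sql
SQL> alter table xtern_empl_rpt
  2  location ('employee_report_new.csv');

Table altered.
```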
Loading the data into your tables
Where external tables really shine is in the ease with which you can load
their data into your tables. A particularly nice feature is that you can use
any valid function that the current Oracle user has rights on to transform the
raw data before loading it into your database tables. For example, suppose
you had a function, get_bday_from_ssn (ssn in varchar2), that looked up an
employee's birth date given their SSN. You can use that function to populate
a BIRTH_DATE column in your local database table in the same step as you
load the data into it.
SQL> create table empl_info as
2 (select empl_id, last_name, first_name, ssn, get_bday_from_ssn (ssn)
birth_dt
3* from xtern_empl_rpt)
SQL> /

Table created.

SQL> select * from empl_info ;

EMP LAST_NAME  FIRST_NAME SSN       BIRTH_DT
--- ---------- ---------- --------- ----------
001 Hutt       Jabba      896743856 03/11/1939
002 Simpson    Homer      382947382 11/01/1967
003 Kent       Clark      082736194 01/15/1925
004 Kid        Billy      928743627 07/20/1954
005 Stranger   Perfect    389209831 10/23/1980
006 Zoidberg   Dr         094510283 04/04/2989

6 rows selected.
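If the target table already exists (and is empty), an INSERT ... SELECT achieves the same result as the CREATE TABLE AS above, applying the same transformation during the load. A sketch assuming the empl_info structure and the get_bday_from_ssn function:

```sql
SQL> insert into empl_info
  2  select empl_id, last_name, first_name, ssn, get_bday_from_ssn(ssn)
  3  from xtern_empl_rpt;

6 rows created.
```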
