
Overview of the TOAD SQL*Loader Interface

Mark Lerch
October 25, 2000

Overview
Examples
Inserting from a single data file, with one error, into one table specifying table-level parameters
Loading into multiple tables by extracting multiple logical records
Conditional load into partitions
Loading from several data files, each with a different format
Column-level delimiters and conditions, command-line options and constraint exception management
Using the Scheduler
Execution Options
Frequently Asked Questions
Executable Location
Executable Names per Oracle Version
My Environment
Future Enhancements
In Summary

Overview
The TOAD SQL*Loader Interface is a utility that allows the user to graphically build a
control file for use with SQL*Loader. It can also run SQL*Loader with the control file,
in either the foreground or the background, or schedule the load as a Windows job for
later execution.

Most of this document consists of real-world examples of using the loader, since that
seems to be the best way to learn about it. In the first example I will explain step by step
what each tab in the window does, so it is worth reading through to get an understanding
of the GUI.

Examples
These example runs will start with the most common uses and increase in complexity to
demonstrate some of the more advanced features of SQL*Loader.
Inserting from a single data file, with one error, into one table specifying
table-level parameters
Let’s start by creating and populating a sample table. Here’s the script you can copy and
paste right into TOAD:

create table MarksFavoriteFoods (Name varchar2(30), Rank number);

insert into MarksFavoriteFoods values ('Tuna', 1);

insert into MarksFavoriteFoods values ('Salmon', 2);

insert into MarksFavoriteFoods values ('Broccoli', 3);

insert into MarksFavoriteFoods values ('Asparagus', 4);

insert into MarksFavoriteFoods values ('Bell peppers', 5);

insert into MarksFavoriteFoods values ('Chicken', 6);

insert into MarksFavoriteFoods values ('Yogurt', 7);

insert into MarksFavoriteFoods values ('Brown rice', 8);

insert into MarksFavoriteFoods values ('Carrots', 9);

insert into MarksFavoriteFoods values ('Lean ground beef', 10);

(I happen to be eating salmon, brown rice, and bell peppers while typing this.)

Start up the Schema Browser, find the table, select the Data tab on the right, right-click,
select Save As, choose “ASCII, Comma delimited”, select “Save To File” at the bottom,
and choose “C:\MarksDiet.Dat” or any other filename you wish. Select OK.

Now let’s empty our table with this line:


delete from MarksFavoriteFoods

And you can verify in Schema Browser that it is empty.

Here is what the first couple of lines look like from our data file:

Tuna,1
Salmon,2

This is our data, or input, file. What I’m going to do is open up Notepad and edit the first
line, replacing the comma with a tab, to intentionally create “bad” data. Here is what the
first line now looks like:

Tuna 1

Save the file.


Open up the SQL*Loader interface (DBA | SQL*Loader Interface). The first tab is
“Source Files”. Here is where we enter the list of the data files we want to load into one
or more tables. At least one input file is required. Let me briefly describe each:

Input file – This is the actual data file. It can be in three different formats: stream, fixed
and variable. Stream format is the default. With stream format, lines are read until an
end-of-record marker is found (end of line character, by default). With fixed record
format, each record must be a fixed number of bytes in length. With variable record
format, each record may be a different length, as specified by a special field – the first
field in each record. The user must specify the length of this field.

Examples:
Stream record format, end-of-line character – the default: Tuna,1
Stream record format, ‘|’ character specified: Tuna,1|
Fixed record format – all data records must be the same length
Variable record format, specifier field is 3 bytes long: 006Tuna,1

Bad file – This file will contain rejected records. By default, it gets named the same as
the input file, with a .BAD extension. In our example, this file should (if everything
works right!) contain our bad Tuna record because it doesn’t conform to the parameters
we will specify.

Discard file – The discard file contains records that were not inserted during the load
because they did not match any of the selection criteria. We will see in a later example
that we can actually tell SQL*Loader WHEN we want a record inserted – it must match
criteria we specify.

Select “Add” to add our data file. The following dialog appears:
(Notice that when the mouse passes over each field, “MicroHelp” is displayed in the
status bar).

Click on the ellipsis button next to Input filename and choose the data file:

The Bad file and Discard file are automatically entered with their default extensions.
Stream is chosen by default, and we’ll take that. We’ll also leave the “end of record
string” field empty, taking the end of line character as the default.

The “Discard” field indicates the maximum number of records to put into the discard file.
We’ll leave this empty also, indicating that we want all of them.
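
In the generated control file this limit becomes a DISCARDMAX (or, equivalently, DISCARDS)
clause following the discard file name. A minimal sketch, assuming we had entered 5:

DISCARDFILE 'C:\marksdiet.dsc'
DISCARDMAX 5

Leaving the field empty simply omits the clause, so no limit is applied.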

Click OK to close.

Note that at this point we could choose as many different input files as we wanted – as long as
they all had the same record layout. This is not to say they couldn’t each have a different
record format, as we will see in a later example.

Move to the Destination Tables tab. This is where we will choose the destination table
for the load. Select MarksFavoriteFoods from the list. Since our data is comma-
delimited, move to the All fields delimited by field and enter a comma. If our data fields
were surrounded by double quotes, as in:

“Tuna”,”1”

then we would enter a double quote in the All fields enclosed by field. If the trailing
field enclosure were different from the initial field enclosure character, we would enter it
into the second field. For example, if our data looked like:
“Tuna#,”1#

we would enter a double quote into the first “enclosed by” field, and # into the second
field.
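
For reference, these enclosure fields map onto the FIELDS clause of the control file. A
sketch for the double-quote-and-# case above:

FIELDS TERMINATED BY ',' ENCLOSED BY '"' AND '#'

With a single enclosure character, the AND part is simply omitted.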

As it is, we are not using field enclosure characters, so leave those fields blank.
Our screen now looks like:

Let’s move on to the Parameters tab.

For a control file name, I’m going to enter D:\confile.ctl. You can name it whatever you
want, though, including the extension. The SQL*Loader EXE field automatically fills
in with the full path to the SQL*Loader executable. The rest of the fields on this tab are
various options that we’re not going to worry about right now.

We can preview the control file any time we want by clicking on the Preview Control
File tab. Let’s do it now. Here is what mine looks like at this point:

LOAD DATA
INFILE 'C:\marksdiet.dat'
BADFILE 'C:\marksdiet.bad'
DISCARDFILE 'C:\marksdiet.dsc'

INTO TABLE "MARKSFAVORITEFOODS"


FIELDS TERMINATED BY ','

(NAME,
RANK)

This is what the actual contents of the control file will be. At any time you can click
Save to save the control file. The control file is also saved when you choose Execute
Now.

Let’s go for it! Click Execute Now. Here, hopefully, is what you will get for a result:

This is a very information-packed screen. The first tab is a “Messages” tab, and provides
the standard output from running the loader. If any errors occurred when running the
loader itself, they would be displayed here. The second tab contains the text of the log
file, which presents detailed information about what occurred.

The first thing we discover on the Messages tab is some information about SQL*Loader
itself – its version and the date and time it was executed.

The last line states that 10 rows were inserted into our table. You can verify this with
Schema Browser. Success!

The log file contains a lot of great data about what happened. We won’t go into all the
details here, but scroll down a bit in the window. You’ll see that “1 record was rejected”.
And moving out to Windows Explorer, we see that the file named “MarksDiet.bad” was
created in the same directory as our data file. Open it up. It contains one row:

Tuna 1

This row did not match the criteria we specified for the load, namely, that each record is
comma delimited.

Just for fun, let’s close our Status window and click “Execute Now” again. Our status
window will open, containing the line:

SQL*Loader-601: For INSERT option, table must be empty. Error on table
"MARKSFAVORITEFOODS"

What happened here? Well, by default, INSERTs are
performed. Since we didn’t change our load method, that is what it tried to do. But the
table already had data.

Move back to the Parameters tab and find the Load method field (there is another on the
Destination Tables tab, since this option can be specified at the table level, but we want to
set it for all tables, even though we only have one). Select “Append” from the drop-down
list. Click Execute Now again. We will discover by reading the Messages and Log file (or
simply by looking in Schema Browser) that we’ve successfully appended all 9 good records
(1 is still bad, remember) into our table.
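
The only change this makes to the control file is a load-method keyword in front of the
INTO TABLE clause. A sketch of the revised file:

LOAD DATA
INFILE 'C:\marksdiet.dat'
BADFILE 'C:\marksdiet.bad'
DISCARDFILE 'C:\marksdiet.dsc'
APPEND
INTO TABLE "MARKSFAVORITEFOODS"
FIELDS TERMINATED BY ','
(NAME,
RANK)

The other load methods in the drop-down (REPLACE, TRUNCATE) occupy the same position.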

Well, that was a great start. This is a good time for a break, because we’re about to get a
bit more complex!

Loading into multiple tables by extracting multiple logical records


For this example, make another table just like MarksFavoriteFoods (mine is named
TESTTHIS, as you will see in the control file below). Ensure both tables are empty. Edit
the data file to make it look like this:

I use a screen shot here because it’s important that the data be lined up exactly. And those
are spaces in there – not tabs!
This example is going to demonstrate how we can load data from one data file into
multiple tables by using logical records. What is different about this data is that each
physical line of the data file contains more than one logical record. There are two logical
records in each line.
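
Since the screen shot doesn’t reproduce here, the following is a hypothetical
reconstruction of the first data line, laid out against the column positions we are about to
enter (Name in columns 1–12, Rank in 13–14, the second Name in 18–33, the second Rank
in 34–36). The exact padding is a guess; what matters is that each field falls entirely
within its column range:

Tuna         1   Salmon            2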

Here’s how we do this. Select the data file as the source file (actually, if you haven’t
closed the window yet, it’s still there; I kept mine open). On the Destination Tables tab
we’re going to select our two tables.

In the Destination Tables tree view, open up the first table. Select the Name column. On
the right side, select the Column Parameters tab.

In the From/To column fields on the right, enter 1 and 12 respectively. This means we
want this table to read character positions 1 through 12 of each line to extract the Name
field. Click on the Rank column and enter 13 and 14 for the From/To. That is where the
Rank data for this table lives in our input file.

Open up the second table in the tree view and select its Name column. The From/To
values for this are 18 and 33. Finally, select the last Rank column and enter 34 and 36 for
the From/To.

Make sure you’ve entered a control file. Here’s what the control file looks like:

LOAD DATA
INFILE 'd:\marksdiet.dat'

INTO TABLE "MARKSFAVORITEFOODS"

FIELDS TERMINATED BY ','


(NAME POSITION(1:12),
RANK POSITION(13:14))

INTO TABLE "TESTTHIS"

(NAME POSITION(18:33),
RANK POSITION(34:36))

Click Execute and you will see that the foods ranked 1, 3, 5, 7 and 9 went into the first
table, while those ranked 2, 4, 6, 8 and 10 went into the second table.

Conditional load into partitions


This example will demonstrate loading into a partition with conditions.
NOTE: At this time, when you select a table, the subpartitions field does not get populated
with the available subpartitions (as the partitions field does with the table’s partitions);
you must enter the name directly.

Let’s drop and recreate our table with range partitions. Run the following:

drop table marksfavoritefoods;

CREATE TABLE MARKSFAVORITEFOODS (
NAME VARCHAR2 (30),
RANK NUMBER)
PARTITION BY RANGE (RANK)
(PARTITION FoodRank1 VALUES LESS THAN (5),
PARTITION FoodRank2 VALUES LESS THAN (MAXVALUE));

If we were to run our first example, foods with a ranking up through and including four
would go into the partition named FoodRank1, and all the rest would go into the partition
named FoodRank2. Try it if you like, and verify the contents through the following SQL:

select * from MarksFavoriteFoods partition (FoodRank1)

select * from MarksFavoriteFoods partition (FoodRank2)

For this example, however, we will attempt to load all our data into partition FoodRank1.
Let’s use our original, comma-delimited file from the beginning of example 1. Select it
and add it to the input file list on the first tab, if it isn’t already there. For our Destination
Table we’ll choose MarksFavoriteFoods again. This time, we will select the Partition
radio button on the Table Parameters tab. Click the drop-down list and you will see the
two partitions listed that we created. Choose FOODRANK1. Remember to enter a
comma in the Delimiter field below it. (By the way, if our data were tab delimited, we
would choose WHITESPACE in the drop-down.)

On the right side of the Table Parameters tab is a field called “Load rec when”. This
means “load the record into the table when the following condition(s) are present”. In
this field, enter the following:

RANK != “1”

This says that we only want records whose RANK field is not equal to the character “1”.
(All character data is interpreted automatically by Oracle, by the way. If we wanted to
enforce certain data types for special conditions we could do so under the Column
Parameters data type field).

On the Parameters tab choose a control file name to create. At this point, your control
file should look something like the following:

LOAD DATA
INFILE 'd:\marksdiet.dat'
INTO TABLE "MARKSFAVORITEFOODS"
PARTITION (FOODRANK1)
WHEN RANK != "1"

FIELDS TERMINATED BY ','

(NAME,
RANK)

Give it a whirl. If you were successful, the status window should open. Let’s go to the
Log File tab. Scroll down and you should come to these lines:

Record 1: Discarded - failed all WHEN clauses.


Record 5: Rejected - Error on table "MARKSFAVORITEFOODS", partition
FOODRANK1.
ORA-14401: inserted partition key is outside specified partition
[and so on for the rest of the records]

This says that the first record failed the WHEN clause. It certainly did – it had a rank of
1, and we didn’t want to load any records with that rank. The rest of the rejection lines
state that the inserted partition key is outside the partition bounds. This is because
records with a rank of 5 and above exceed the bounds we chose for FOODRANK1.
Look in Schema Browser and you should find the foods ranked 2 through 4 in the table.

Loading from several data files, each with a different format


This example will use three different data files and demonstrate the three supported
format types: stream, fixed and variable.

Split the data file MarksDiet.dat into three separate files. Use Notepad (important!) as an
editor. Create three files, MarksDiet1.dat, MarksDiet2.dat and MarksDiet3.dat. Edit the
first file. Make it look like this:

Tuna,1*Salmon,2*Broccoli,3*

Important! There are no extra spaces or new line characters at the end of that line. This
sample demonstrates using an asterisk as an end of record marker. Up until now, we
have been using the carriage return/new line character combo to designate physical
records.

Edit MarksDiet2.dat and make it look like this:

Asparagus, 4,Bell peppers,5,Chicken, 6,


Once again, no spaces or new line characters at the end of the line. This is going to be
our fixed record length file. Each record is fixed at precisely 15 characters.

The third file should be named MarksDiet3.dat and look like this:

0009Yogurt,7,0015Brown rice,8,
0010Carrots,9,0019Lean ground beef,10

(Note: on SQL*Loader versions prior to 8 (e.g., 7.3), a space is required after the
record-length field.)

This is our variable format file. At the beginning of each record is a field which
designates how long that record is. Notice Brown rice on the first line. You may count
13 characters (“Brown rice,8,”). But Notepad also adds two more characters – a carriage
return/line feed pair. We need to account for that, which is why the length field says
0015. (That’s why I had you use Notepad; some editors add only a single line-feed
character.) Once again, no extra spaces or carriage returns at the end of the second line.

This time when we add each file, we will specify “Stream” format for the first, and enter
an asterisk into the “end of record string” field. MarksDiet2.dat should be specified as
Fixed format, with a length of 15. And MarksDiet3.dat is variable format, and the length
indication field is 4 bytes long. After adding these, here is what your Source Files tab
should look like:
Select the same Destination table, enter the comma delimiter and the control file name,
and all the data will be loaded (did you remember to empty the table first?).
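
The three INFILE clauses in the generated control file should look something like this
(the paths, of course, will match wherever you saved the files):

INFILE 'C:\MarksDiet1.dat' "str '*'"
INFILE 'C:\MarksDiet2.dat' "fix 15"
INFILE 'C:\MarksDiet3.dat' "var 4"

The quoted string after each file name is the file-processing options string, which is
where the record format lives.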

If your table is still partitioned, as mine was, you can use these lines to see the data in
each one:

select * from MarksFavoriteFoods partition (FoodRank1)

select * from MarksFavoriteFoods partition (FoodRank2)

Column-level delimiters and conditions, command-line options and constraint exception management
This final example will demonstrate specifying input data delimiters at the column level,
capturing constraint errors and some of the command line options available.

For this example, we are going to create a foreign key to a table containing all of our food
ranks. Here’s the SQL and PL/SQL I’d like you to execute:

drop table marksfavoritefoods;

create table foodrank (Rank number primary key);

declare
  i integer;
begin
  i := 1;
  loop
    insert into foodrank values (i);
    i := i + 1;
    if i > 10 then
      exit;
    end if;
  end loop;
end;
/

create table MarksFavoriteFoods (Name varchar2(20), Rank Number);

create table loaderexceptions(row_id urowid,
  owner varchar2(30), table_name varchar2(30), constraint varchar2(30));

alter table MarksFavoriteFoods add constraint check_rank foreign key
  (Rank) references FoodRank(Rank)
  exceptions into loaderexceptions;

We’re also going to modify our input data file. I’ll provide it here, but be very careful
about copying and pasting into an editor. Make sure you don’t get an empty line at the
end.

"Grease^#1
"Tuna^#1
"Salmon^#02
"Broccoli^#3
"Asparagus^#4
"Bell peppers^#5
"Chicken^#6
"Yogurt^7
"Brown rice^#8
"Carrots^#9
Lean ground beef#10
"Egg whites^#11
"Congealed Fat^#99

Let’s look at this briefly. It is clear that our first field, Food Name, has a double quote as
its initial delimiter. Its closing delimiter is a caret. And its field terminator is a #
character. The Rank field is not delimited. Or is it? Copy and paste that data into an
editor and, again, make sure there are no hidden characters anywhere.

(Incidentally, how did Grease and Congealed Fat make it into the list? We’ll have to do
something about that…)

Save the data file and select it as the input file. Go to the Destination Tables tab and
select MarksFavoriteFoods, then go to the Table Parameters tab and enter or pick
“LOADEREXCEPTIONS” as the Exceptions table (as of this writing, there is a refresh
problem in the Table pick list, so it doesn’t appear there for me even though I’ve created
it; just enter the name manually). What this indicates is that we want any constraint
exceptions to go into LOADEREXCEPTIONS. The exceptions table must be in the
format shown above. The RowID of each violating row will go into this table.

Notice that when you entered a name, “Reenable Constraints” automatically became
checked. We’re asking that constraints be reenabled after the load is finished. When the
constraints are reenabled, the referential integrity checks will fire, which will cause some
of the data to fail and the offending rows to be marked in our exceptions table. Looking
back at our data, it’s pretty clear that “Congealed Fat”, with a food rank of 99, will
violate our referential integrity constraint. We only have ten ranks in our FOODRANK
table – 1 through 10 – so anything else will not be allowed.

Display the columns for MarksFavoriteFoods. Select the Name column and go over to
the Column Parameters tab. Enter # in the Field is terminated by field. The field is
enclosed by “ and ^, so enter those characters as well. Looking back at our data, we find
that not all the food name fields are delimited, so we will check the “optionally” check box.

Move across to the Null If field. Null If says “set this column to null upon this
condition”. Enter RANK=”3” in the Null If field. This will blank out the Food Name
column when Rank is 3. The food for that rank is Broccoli, so it will never appear, sadly.
Move to the Default If field. Enter NAME=”Bell peppers”. This is also going to set the
Food Name column to null whenever the Name is “Bell peppers”. (The distinction
between the two is subtle: Null If always sets the column to null, while Default If sets
numeric fields to zero and character fields to null. For a character column like ours the
two behave identically, which is why the documentation suggests that our example is
redundant. I’ll leave this to the reader to investigate further.)
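
All of these column settings end up in a single field definition in the control file. Here is
roughly what the NAME column’s clause should look like (compare it with the log file
excerpt further below):

NAME CHAR TERMINATED BY '#'
OPTIONALLY ENCLOSED BY '"' AND '^'
NULLIF RANK='3'
DEFAULTIF NAME='Bell peppers'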

Here is what my screen looks like at this time:

Let’s go on to the Parameters tab. Enter a control file name (I’ve been using
D:\confile.ctl, but you can name it anything). Go down to the command-line options
(these are options which can be specified on the executable command line). Enter a 1
into Skip. This says we want to skip 1 record. I told you I was going to get rid of that
Grease record! Enter 11 into the Load field. This says we want to load 11 records from
our data file. So the first line will be skipped and the next 11 loaded. The Congealed Fat
record will not get loaded. Even if it were, it has a Rank of 99, so it would fail the
constraint check.

Select the “Direct” checkbox, since we want to do a Direct Path Load (a very different
style of loading which does not perform standard SQL INSERTs but instead writes data
blocks directly; this is also what permits the constraint to be turned off during the load).

Finally, under Silent, check the “All” check box. This tells the loader to suppress all output
messages (the log file will still be created). (Incidentally, these are not mutually
exclusive – you can suppress Feedback and Errors but not Discards, etc.)
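
These checkboxes translate directly into command-line parameters. When we click
Execute Now, the loader should be invoked with something like the following (the user ID
and paths are from my setup; yours will differ):

sqlldr userid=MLERCH/MLERCH@ORA8I control=d:\confile.ctl log=d:\confile.log skip=1 load=11 direct=true silent=all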
I think we’re ready to give this a whirl. Click Execute Now.

Because we suppressed all messages, the Messages tab shows only these lines:

SQL*Loader: Release 8.1.6.0.0 - Production on Fri Oct 27 13:57:14 2000

(c) Copyright 1999 Oracle Corporation. All rights reserved.

The Log file tells us that 10 rows were loaded; 1 row was not, due to data errors. Which
was that? Open up MarksDiet.bad (or whatever you named the data file, plus the .bad
extension). You will find this line:

"Yogurt^7

What’s wrong with that line? Well, it has no field termination character (#). Notice that
the lean ground beef line:

Lean ground beef#10

made it in, even though it doesn’t have enclosure delimiters. That’s because we said they
were optional.

Now, open up Schema Browser and look at MarksFavoriteFoods. It looks like this:
We see that Broccoli and Bell peppers got blanked out, as we requested. Grease was
skipped and Congealed Fat was not loaded because it was beyond our Load limit.
Yogurt wasn’t loaded due to bad data. But Egg whites had a Rank of 11. Why didn’t
the constraint fail? And what’s up with the Rank of 0 for Salmon? It had a rank of 2!

Let’s open up our log file. (Whatever you named the control file but with a .LOG
extension, and in the same directory as the control file). This is what we find toward the
bottom:

Column Name                    Position   Len  Term Encl Datatype
------------------------------ ---------- ---- ---- ---- ---------------------
NAME                           FIRST      *    #    O(") CHARACTER
                                                    O(^)
    NULL if RANK = 0X33(character '3')
    DEFAULT if NAME = 0X42656c6c2070657070657273(character 'Bell peppers')
RANK                           NEXT       1              CHARACTER

“Len” means length. We see a length of * for Name, meaning read to the end-of-field
marker, which is # – the Term (terminator) character. But Rank has a length of 1. I guess
that’s why only 1 character was loaded. But why? Well, we never specified a field
terminator for Rank. We did for Name, but not Rank.

Let’s go back to the Destination Tables tab, select Rank and go to the Column Parameters
tab. In the Field is terminated by field, select WHITESPACE from the dropdown.

Now, open up a SQL edit window and remove the records from MarksFavoriteFoods by
entering:

delete from marksfavoritefoods

Run it once more. Notice in Schema Browser that all the numeric data makes it in
properly. Examining the log file, we see that our constraint was disabled, the records
loaded, and an attempt was made to reenable the constraint. But the particular constraint
we used – a foreign key constraint – could not be reenabled because there was an orphaned
record – the Egg whites row. Look in the LOADEREXCEPTIONS table and you will find
the RowID of the offending record.
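
To see the offending rows themselves rather than just their RowIDs, you can join the
exceptions table back to the loaded table. A quick sketch:

select * from MarksFavoriteFoods
where rowid in (select row_id from loaderexceptions);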

Using the Scheduler


Included in the SQL*Loader Interface is a scheduler which provides the ability to schedule
the load as a Windows task. Clicking the Schedule button opens the following window:
Select when you want SQL*Loader to run. I’ve selected 4:55 pm on the day I’m writing
this. Click “OK” and you will be informed that a job has been added.

Open up Windows Explorer. On the left side, after your hard drive and CD ROM letters,
you will see Control Panel, Printers and Scheduled Tasks (and maybe other things,
depending on your system). Click on Scheduled Tasks. On the right side you will see
the newly added job. Here is what mine looks like:

You can right-click, select properties and see just what is going to happen at that time by
looking in the “Run” field. Here is what mine contains:
D:\ORACLE\ORA81\BIN\SQLLDR.EXE userid=MLERCH/MLERCH@ORA8I
control=d:\confile.ctl log=d:\confile.log

Now that’s a sneak peek at exactly what the TOAD SQL*Loader Interface runs when you
click Execute.

Just for fun I had a load operation due to start in 1 minute. So I took a stretch and after a
minute a command prompt window opened, SQL*Loader launched and ran the control
file. Cool, huh?

Execution Options
The View | Options DBA tab has a new option. As previously mentioned, you can run
the loader in either the background or the foreground. Here is what the new option looks
like:

It’s important to note that running the loader in the foreground is usually the most useful
mode, as you can see error messages and results when it completes. This is the mode I
would recommend during testing (except when testing that the background mode actually
works!), as you can include the result messages in any problem reports.
So there you have it – maximum flexibility. You can run SQL*Loader as a foreground
process or a background task, or schedule it as a Windows job.

I hope this document helps you as much as writing it has helped me improve this tool.

Frequently Asked Questions


Questions about the GUI

“I cannot select anything on the Table Parameters tab of the Destination Tables tab.”
Each table can have its own set of parameters. Make sure you have a table selected in the
“Destination Tables” tree view.

I select a table that has subpartitions, but the subpartitions field is a simple entry field –
it doesn’t list them like the partitions field lists the partitions.
This will be developed at a future time. For now you must know the subpartition name
and enter it directly (see the sketch below).
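
If you need it in the meantime, the clause is the same as for partitions, just with the
SUBPARTITION keyword. A sketch, assuming a subpartition named FOODRANK1_SP1
(a hypothetical name):

INTO TABLE "MARKSFAVORITEFOODS"
SUBPARTITION (FOODRANK1_SP1)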

Questions after running “Execute Now”

I receive a “Missing DLL” error message
The most likely cause is an earlier client version of SQL*Loader trying to access a
later-version database. Upgrade SQL*Loader on the client.

I receive an “Entry point mismatch” message
There are mixed versions of SQL*Loader and its supporting DLLs on the client. To
resolve, try reinstalling the correct version of SQL*Loader. This should ensure matching
versions of the exe and its supporting DLLs.

An “SQL*Loader-282: Unable to locate character set handle for character set ID (0)”
error appears
I’m currently getting this when trying to run an 8.1.6 SQL*Loader against a 7.3.4
database. The error is related to mismatched NLS data. Still working on a resolution.

I press “Execute Now” and nothing seems to happen.
Make sure you are not running it in the background. See the View | Options DBA tab.
Choosing Foreground will cause it to run while you wait, then display a results window
afterwards.

Why can’t I see a status window after it finishes running in the background like I see
when it runs in the foreground?
TOAD launches a separate Windows shell program to run it in the background. There is
no way to know when it finishes. Even if there were – say, through starting it in a thread
(if that’s even possible, which is questionable) – there is no way to capture stderr to
display in the Messages tab. In the future I’ll investigate launching it within a new thread
so the user can at least be notified when it finishes. Then again, they’ll know when the
Command Prompt window closes, so never mind.

I receive the error “bad length for VAR record” when specifying an input file with
variable-length format. The data looks fine – what’s up?
Well, when that kept happening to me, it was because there was a return character at the
end of the file. It choked on the entire thing!

I receive an error when I have specified a terminating string for my Stream format
data file.
The Oracle documentation states that this feature is a new addition in version 8.1, so
earlier versions of SQL*Loader will reject it.

I’ve got more than one destination table. No data is getting into any of them!
Make sure

Miscellaneous Questions

Can’t control files themselves contain the data for the load?
Yes, they can. In that case, use the Interface to build the parameters, then merge the
generated control file with your data. This is currently the only support for this type of
load; generating it directly is outside the scope of this tool.
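
For reference, an embedded-data control file uses INFILE * together with a BEGINDATA
section. A minimal sketch based on our first example:

LOAD DATA
INFILE *
INTO TABLE "MARKSFAVORITEFOODS"
FIELDS TERMINATED BY ','
(NAME,
RANK)
BEGINDATA
Tuna,1
Salmon,2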

I’d like to run SQL*Loader myself, and provide it a control file I’ve built.
Sure. Open up a Command Prompt and enter:
sqlldr userid=MLERCH/MLERCH@ORA8I control=MyControlFile
replacing the obvious stuff.

Executable Location
If you accept the default paths, then the executable will be in “OraNT/Bin” for versions
prior to 8.1. For 8.1 and greater, mine is located in “Oracle/Ora81/Bin.” TOAD uses the
following algorithm to offer a default path to the executable: use DOA to find the path to
OCI.DLL, look in that directory for an executable matching “SQLLDR*.EXE”, and offer
the full path as the default.

The path to the SQL*Loader executable can be chosen as an option (View | Options DBA
tab). There was a bug in prior versions in that the full path was not being presented.
Since it was being stored as an option, you need to delete the old value in Options. This
will cause TOAD to perform a new search.

Executable Names Per Oracle Version

These are the names of the SQL*Loader executables:
8.1.6 – “sqlldr.exe”
8.0.5 – “sqlldr80.exe”
8.0.4 – “sqlldr80.exe”
7.3.4 – “sqlldr73.exe”

My Environment
I have a version 8.1.6 SQL*Loader. With it, I have successfully loaded tables in 3
different environments: 8.1.6, 8.0.5, and 8.0.4. I’m still trying to load into a 7.3.4
database with it; I’m running into what appears to be an NLS data mismatch error.

Future Enhancements
One significant enhancement I plan to add is the ability to save and restore all the
parameters and configurations as a “style”. This will be a significant feature because
most of the time the data files arrive in the same format (e.g., the DBA will receive data
files each week or month). The user will be able to simply select a pre-saved style, tweak
a table name or two, and have their new control file.

In Summary
SQL*Loader is a very, very big tool. There are enough options to make your head spin.
I’ve tried to present the majority of its features in the TOAD Interface, knowing that
trying to present all of its myriad options would be bewildering at best and, at worst, give
me a head of gray hair. It’s important to remember that the TOAD window is intended to
serve one primary purpose – to help users get started in building their control files. This
has been the primary request from users – a tool to help them get started. It’s my hope
that with the TOAD SQL*Loader Interface we’re off to a good start.
