
Why isn't Oracle using my index ?!

The question in the title of this piece is probably the single most frequently occurring
question that appears in the Metalink forums and Usenet newsgroups. This article
uses a test case that you can rebuild on your own systems to demonstrate the most
fundamental issues with how cost-based optimisation works. And at the end of the
article, you should be much better equipped to give an answer the next time you
hear that dreaded question.

Because of the wide variety of options that are available when installing Oracle, it isn't usually safe to predict exactly what will happen when someone runs a script that you have dictated to them. But I'm going to risk it, in the hope that your database is a fairly vanilla installation, with the default values for the most commonly tweaked parameters. The example has been built and tested on an 8.1.7 database with the db_block_size set to the commonly used value of 8K and the db_file_multiblock_read_count set to the equally commonly used value 8. The results may be a little different under Oracle 9.2.

Run the script from Figure 1, which creates a couple of tables, then indexes and analyses them.

create table t1 as
select
    trunc((rownum-1)/15) n1,
    trunc((rownum-1)/15) n2,
    rpad('x', 215) v1
from all_objects
where rownum <= 3000;

create table t2 as
select
    mod(rownum,200) n1,
    mod(rownum,200) n2,
    rpad('x',215) v1
from all_objects
where rownum <= 3000;

create index t1_i1 on t1(N1);
create index t2_i1 on t2(n1);

analyze table t1 compute statistics;
analyze table t2 compute statistics;

Figure 1 The test data sets.

Once you have got this data in place, you might want to convince yourself that the two sets of data are identical - in particular that the N1 columns in both data sets have values ranging from 0 to 199, with 15 occurrences of each value. You might try the following check:

select n1, count(*)
from t1
group by n1;

and the matching query against T2 to prove the point.

If you then execute the queries:

select * from t1 where n1 = 45;
select * from t2 where n1 = 45;

you will find that each query returns 15 rows. However if you

set autotrace traceonly explain

you will discover that the two queries have different execution paths. The query against table T1 uses the index, but the query against table T2 does a full tablescan.

So you have two sets of identical data, with dramatically different access paths for the same query.

What happened to the index ?

Note - if you've ever come across any of those 'magic number' guidelines regarding the use of indexes, e.g.: 'Oracle will use an index for less than 23%, 10%, 2% (pick a number at random) of the data' then you may at this stage begin to doubt their validity.
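As an aside, you can model the physical difference between the two data sets outside the database. This sketch is mine, not part of the original test: it assumes roughly 31 of these ~220-byte rows fit in each 8K block (3,000 rows in about 96 blocks), and counts how many blocks hold the rows where n1 = 45.

```python
# Back-of-envelope model (not from the article) of how the two scripts
# scatter the value 45. Assumption: ~31 rows per 8K block.
ROWS, ROWS_PER_BLOCK = 3000, 31

def blocks_holding_key(n1_of_rownum, key=45):
    # Set of block numbers containing at least one row with n1 = key.
    return {(rownum - 1) // ROWS_PER_BLOCK
            for rownum in range(1, ROWS + 1)
            if n1_of_rownum(rownum) == key}

t1_blocks = blocks_holding_key(lambda rn: (rn - 1) // 15)  # trunc((rownum-1)/15)
t2_blocks = blocks_holding_key(lambda rn: rn % 200)        # mod(rownum,200)

print(len(t1_blocks))  # 2 - a tight clump of adjacent blocks
print(len(t2_blocks))  # 15 - one block per matching row
```

With this packing, T1's fifteen rows for any one value land in a couple of adjacent blocks, while T2's fifteen rows land in fifteen different blocks - a contrast that turns out to matter a great deal.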
In this example Oracle has used a tablescan for 15 rows out of 3,000 i.e. for just one half of one percent of the data !

To investigate problems like this, there is one very simple ploy that I always try as the first step. Put in some hints to make Oracle do what I think it ought to be doing, and see if that gives me any clues.

In this case, a simple hint:

/*+ index(t2, t2_i1) */

is sufficient to switch Oracle from the full tablescan to the indexed access path. The three paths with costs (abbreviated to C=nnn) are shown in Figure 2:

select * from t1 where n1 = 45;

EXECUTION PLAN
--------------
INDEX(RANGE SCAN) OF T1_I1 (C=1)

select * from t2 where n1 = 45;

EXECUTION PLAN
--------------
TABLE ACCESS (FULL) OF T2 (C=15)

select /*+ index(t2 t2_i1) */ *
from t2
where n1 = 45;

EXECUTION PLAN
--------------
TABLE ACCESS BY INDEX ROWID OF T2 (C=16)

Figure 2 The different queries and their costs.

So why hasn't Oracle used the index by default for the T2 query ? Easy - as the execution plan shows, the cost of doing the tablescan is cheaper than the cost of using the index.

Why is the tablescan cheaper ?

This, of course, is simply begging the question. Why is the cost of the tablescan cheaper than the cost of using the index ? By looking into this question you uncover the key mechanisms (and critically erroneous assumptions) of the Cost Based Optimiser.

Let's start by examining the indexes by running the query:

select
    blevel,
    avg_data_blocks_per_key,
    avg_leaf_blocks_per_key,
    clustering_factor
from user_indexes;

The results are given in the table below:

                     T1     T2
Blevel                1      1
Data block / key      1     15
Leaf block / key      1      1
Clustering factor    96   3000

Note particularly the value for 'data blocks per key'. This is the number of different blocks in the table that Oracle thinks it will have to visit if you execute a query that contains an equality test on a complete key value for this index.

So where do the costs for our queries come from ? As far as Oracle is concerned, if we fire in the key value 45 we get the data from table T1 by hitting one index leaf block and one table block - two blocks, so a cost of 2. If we try the same with table T2, we have to hit one index leaf block and fifteen table blocks - a total of 16 blocks, so a cost of 16.

Clearly, according to this viewpoint, the index on table T1 is much more desirable than the index on table T2. This leaves two questions outstanding though.
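The block-counting arithmetic above can be written down in a couple of lines. This is my simplification of the optimiser's reasoning for this one equality case, not Oracle's full costing formula:

```python
# Simplified cost model for an equality lookup through an index: one unit
# of cost per block visited (index leaf blocks plus table blocks).
def index_access_cost(avg_leaf_blocks_per_key, avg_data_blocks_per_key):
    return avg_leaf_blocks_per_key + avg_data_blocks_per_key

t1_cost = index_access_cost(1, 1)   # T1: one leaf block, one table block
t2_cost = index_access_cost(1, 15)  # T2: one leaf block, fifteen table blocks

print(t1_cost, t2_cost)  # 2 16
```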
Where does the tablescan cost come from, and why are the figures for the avg_data_blocks_per_key so different between the two tables ?

The answer to the second question is simple. Look back at the definition of table T1 - it uses the trunc() function to generate the N1 values, dividing the "rownum - 1" by 15 and truncating:

trunc(675/15) = 45
trunc(676/15) = 45
…
trunc(689/15) = 45

All the rows with the value 45 do actually appear one after the other in a tight little clump (probably all fitting one data block) in the table.

Table T2 uses the mod() function to generate the N1 values, using the modulus 200 on the rownum:

mod(45,200) = 45
mod(245,200) = 45
…
mod(2845,200) = 45

The rows with the value 45 appear in every two hundredth position in the table (probably resulting in no more than one row in every relevant block).

By doing the analyze, Oracle was able to get a perfect description of the data scatter in our table. So the optimiser was able to work out exactly how many blocks Oracle would have to visit to answer our query - and, in simple cases, the number of block visits is the cost of the query.

But why the tablescan ?

So we see that an indexed access into T2 is more expensive than the same path into T1, but why has Oracle switched to the tablescan ?

This brings us to the two simple-minded, and rather inappropriate, assumptions that Oracle makes. The first is that every block acquisition equates to a physical disk read, and the second is that a multiblock read is just as quick as a single block read.

So what impact do these assumptions have on our experiment ?

If you query the user_tables view with the following SQL:

select blocks
from user_tables;

you will find that our two tables each cover 96 blocks.

At the start of the article, I pointed out that the test case was running a version 8 system with the value 8 for the db_file_multiblock_read_count.

Roughly speaking, Oracle has decided that it can read the entire 96 block table in 96/8 = 12 disk read requests. Since it takes 16 block (= disk read) requests to access the table by index, it is clearly quicker (from Oracle's sadly deluded perspective) to scan the table - after all 12 is less than 16.

Voila ! If the data you are targeting is suitably scattered across the table, you get tablescans even for a very small percentage of the data - a problem that can be exaggerated in the case of very big blocks and very small rows.

Correction

In fact you will have noticed that my calculated number of scan reads was 12, whilst the cost reported in the execution plan was 15. It is a slight simplification to say that the cost of a tablescan (or an index fast full scan for that matter) is

'number of blocks' / db_file_multiblock_read_count.
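To see the numbers line up, here is the arithmetic using the adjusted read value of 6.589 that Oracle substitutes for a setting of 8 (see the table of adjusted values). This is a sketch of the calculation with the article's figures, not the exact internal formula:

```python
import math

blocks = 96            # blocks in each test table (from user_tables)
mbrc = 8               # db_file_multiblock_read_count as set
adjusted_mbrc = 6.589  # the value Oracle actually uses for a setting of 8

naive_cost = math.ceil(blocks / mbrc)              # 12 - my rough estimate
adjusted_cost = math.ceil(blocks / adjusted_mbrc)  # 15 - the cost in the plan

print(naive_cost, adjusted_cost)  # 12 15
```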
Oracle uses an 'adjusted' multi-block read value for the calculation (although it then tries to use the actual requested size when the scan starts to run).

For reference, the following table compares a few of the actual and adjusted values:

Actual   Adjusted
4        4.175
8        6.589
16       10.398
32       16.409
64       25.895
128      40.865

As you can see, Oracle makes some attempt to protect you from the error of supplying an unfeasibly large value for this parameter.

There is a minor change in version 9, by the way, where the tablescan cost is further adjusted by adding one to the result of the division - which means tablescans in v9 are generally just a little more expensive than in v8, so indexes are just a little more likely to be used.

Adjustments:

We have seen that there are two assumptions built into the optimizer that are not very sensible.

• A single block read costs just as much as a multi-block read - (not really likely, particularly when running on file systems without direct I/O)
• A block access will be a physical disk read - (so what is the buffer cache for ?)

Since the early days of Oracle 8.1, there have been a couple of parameters that allow us to correct these assumptions in a reasonably truthful way. See Tim Gorman's article for a proper description of these parameters, but briefly:

Optimizer_index_cost_adj takes a value between 1 and 10000 with a default of 100. Effectively, this parameter describes how cheap a single block read is compared to a multiblock read. For example the value 30 (which is often a suitable first guess for an OLTP system) would tell Oracle that a single block read costs 30% of a multiblock read. Oracle would therefore incline towards using indexed access paths for low values of this parameter.

Optimizer_index_caching takes a value between 0 and 100 with a default of 0. This tells Oracle to assume that that percentage of index blocks will be found in the buffer cache. In this case, setting values close to 100 encourages the use of indexes over tablescans.

The really nice thing about both these parameters is that they can be set to 'truthful' values.

Set the optimizer_index_caching to something in the region of the 'buffer cache hit ratio'. (You have to make your own choice about whether this should be the figure derived from the default pool, keep pool or both).

The optimizer_index_cost_adj is a little more complicated. Check the typical wait times in v$system_event for the events 'db file scattered read' (multi block reads) and 'db file sequential read' (single block reads). Divide the latter by the former and multiply by one hundred.

Don't forget that the two parameters may need to be adjusted at different times of the day and week to reflect the end-user work-load. You can't just derive one pair of figures, and use them for ever.
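The wait-time calculation just described can be written down directly. The millisecond figures here are made-up illustrative numbers - in practice you would read the averages from v$system_event:

```python
# Estimate optimizer_index_cost_adj as described above: average single
# block read time divided by average multiblock read time, times 100.
def suggested_index_cost_adj(db_file_sequential_read_ms,
                             db_file_scattered_read_ms):
    return round(100 * db_file_sequential_read_ms / db_file_scattered_read_ms)

# e.g. 10ms single block reads against 30ms multiblock reads
print(suggested_index_cost_adj(10, 30))  # 33 - inclines Oracle towards indexes
```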
Happily, in Oracle 9, things have improved. You can now collect system statistics, which originally included just the four:

• Average single block read time
• Average multi block read time
• Average actual multiblock read size
• Notional usable CPU speed.

Suffice it to say that this feature is worth an article in its own right - but do note that the first three allow Oracle to discover the truth about the cost of multi block reads. And in fact, the CPU speed allows Oracle to work out the CPU cost of unsuitable access mechanisms like reading every single row in a block to find a specific data value, and behave accordingly.

When you migrate to version 9, one of the first things you should investigate is the correct use of system statistics. This one feature alone may reduce the amount of time you spend trying to 'tune' awkward SQL.

In passing, despite the wonderful effect of system statistics, both of the optimizer adjusting parameters still apply - although the exact formula for their use seems to have changed between version 8 and version 9.

Variations on a theme.

Of course, I have picked one very special case - equality on a single column non-unique index, where there are no nulls in the table - and treated it very simply. (I haven't even mentioned the relevance of the index blevel and clustering_factor yet). There are numerous different strategies that Oracle uses to work out more general cases.

Consider some of the cases I have conveniently overlooked:

• Multi-column indexes
• Part-used multi-column indexes
• Range scans
• Unique indexes
• Non-unique indexes representing unique constraints
• Index skip scans
• Index only queries
• Bitmap indexes
• Effects of nulls

The list goes on and on. There is no one simple formula that tells you how Oracle works out a cost - there is only a general guideline that gives you the flavour of the approach and a list of different formulae that apply in different cases.

However, the purpose of this article was to make you aware of the general approach and the two assumptions built into the optimiser's strategy. And I hope that this may be enough to take you a long way down the path of understanding the (apparently) strange things that the optimiser has been known to do.

Further Reading:

Tim Gorman: The search for Intelligent Life in the Cost Based Optimiser.
Wolfgang Breitling: Looking under the hood of the CBO.

Jonathan Lewis is a freelance consultant with more than 17 years' experience of Oracle. He specialises in physical database design and the strategic use of the Oracle database engine, is author of 'Practical Oracle 8i - Designing Efficient Databases' published by Addison-Wesley, and is one of the best-known speakers on the UK Oracle circuit. Further details of his published papers, presentations, seminars and tutorials can be found at his web site, which also hosts The Co-operative Oracle Users' FAQ for the Oracle-related Usenet newsgroups.