
Impact of Truncate or Drop Table

When Flashback Database is Enabled

Basic Explanation

1. When you drop or truncate a table, a file-level deallocation SCN records the
time of the last drop/truncate operation, and that SCN is compared to the
flashback retention target. Blocks are not written to flashback logs unless they
are reused before the retention target has passed.

2. If the retention target has passed and there has been no further activity on
the blocks affected by the drop or truncate, there is no overhead for subsequent
inserts into the table or for reuse of the blocks of a dropped table.

3. If, immediately after the truncate, we run a batch that inserts a large amount
of data into the table, every reused block must be written to the flashback logs
before being reformatted with the new data. That has an overhead that may
reach levels near 30%, depending on the volume of data that needs to be
logged and on the hardware performance.

4. Drop table has the same effect for every block that needs to be reused.
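The scenario in points 3 and 4 can be sketched as follows; the table names are hypothetical, and V$FLASHBACK_DATABASE_STAT is the standard view that breaks down flashback log generation per interval:

```sql
-- Assumes flashback database is already enabled and SALES_STG is a
-- hypothetical staging table; the truncate deallocates its blocks.
TRUNCATE TABLE sales_stg;

-- Inserting before DB_FLASHBACK_RETENTION_TARGET has passed forces
-- each reused block to be copied into the flashback logs first.
INSERT INTO sales_stg
SELECT * FROM sales_src;
COMMIT;

-- Flashback log generation can be checked afterwards:
-- FLASHBACK_DATA is the number of bytes written to flashback logs
-- during each interval.
SELECT begin_time, end_time, flashback_data, db_data
  FROM v$flashback_database_stat
 ORDER BY begin_time;
```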

Questions and Answers

1. Why do inserts after a truncate incur high overhead when flashback database
is enabled, and why is this solved after the time specified by
DB_FLASHBACK_RETENTION_TARGET has passed?

Blocks belonging to a truncated table will be logged into the flashback logs if
an insert operation is run before the DB_FLASHBACK_RETENTION_TARGET
is met. In this case there will be extra block reads, so the overhead will be
higher.

That overhead can be avoided if the insert is run only once the
DB_FLASHBACK_RETENTION_TARGET is met. For instance, if the
parameter is set to 2 hours and we have a batch that truncates and inserts
data into the same tables, we can set up the job to truncate, wait for 2 hours,
and then execute the insert.

The same is valid for 11gR1, where the single-instance flashback "block new"
optimization does not kick in until one inserts data into space that was
deallocated (including by truncate) at least the flashback retention time ago.
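A minimal sketch of that scheduling workaround, assuming a 2-hour retention target and hypothetical table names (DBMS_LOCK.SLEEP is the classic way to pause a PL/SQL session; recent releases also offer DBMS_SESSION.SLEEP):

```sql
BEGIN
  EXECUTE IMMEDIATE 'TRUNCATE TABLE sales_stg';   -- deallocate the blocks

  -- Wait until DB_FLASHBACK_RETENTION_TARGET (here 2 hours) has passed,
  -- so the reused blocks no longer need to be copied to the flashback logs.
  DBMS_LOCK.SLEEP(2 * 60 * 60);

  INSERT INTO sales_stg SELECT * FROM sales_src;  -- now a low-overhead insert
  COMMIT;
END;
/
```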

2. Can you explain why direct load and LOB inserts impact performance?

When flashback is enabled, Oracle needs to read each block, in case it is
required to restore a dropped or truncated table, so that it can be logged in
the flashback logs. In this case there are extra block reads, so the overhead
is higher.

Most database changes are done via SQL update statements. For an update,
Oracle needs to read the block before updating it, so it does not take much
extra effort to write the block image that is already in memory to the
flashback logs.

There are no extra block reads in that case when flashback database is
enabled. This is why flashback database typically incurs only about a 2%
overhead for OLTP workloads.

For direct load and LOB inserts, without flashback database enabled, Oracle
does not read the block before loading it; with flashback database enabled,
that extra read becomes necessary, which is where the overhead comes from.
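The contrast can be sketched as follows (table names are hypothetical):

```sql
-- A conventional insert reads each target block into the buffer cache
-- first, so the block image is already in memory for flashback logging.
INSERT INTO sales_hist SELECT * FROM sales_stg;
COMMIT;

-- A direct-path (APPEND) insert formats and writes blocks without
-- reading them first; with flashback database enabled, recently
-- deallocated blocks being reused must be read so they can be
-- flashback-logged, which is the extra cost described above.
INSERT /*+ APPEND */ INTO sales_hist SELECT * FROM sales_stg;
COMMIT;
```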

3. What is the difference between 10g and 11g that makes insert batches
work better after the DB_FLASHBACK_RETENTION_TARGET has been met?

For single instance, 11gR1 added the flashback "block new" optimization. If
one inserts data into space that was deallocated beyond the flashback
retention, the optimization will most likely kick in to reduce the overhead of
enabling flashback database. When the optimization kicks in, Oracle avoids
the extra reads described above.

4. A workaround for enabling fast-start failover on busy systems is to let the
standby format the flashback logs, then switch over to it and format the
flashback logs on the new standby. Does this also work on RAC?

Yes, this works. You can enable flashback on a standby first, so that the
standby will create and format all or most of the flashback logs needed.

If this is a RAC database, you need to invoke standby recovery on all
instances in order to create and format flashback logs on all instances.
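On a physical standby, the sequence above can be sketched with standard Data Guard commands; on a RAC standby the recovery step has to run so that every instance creates and formats its flashback logs:

```sql
-- On the standby: stop redo apply, enable flashback, restart apply.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE FLASHBACK ON;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

-- Once the flashback logs have been created and formatted on the
-- standby, switch over so the prepared database becomes the primary,
-- then repeat the procedure on the new standby.
```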

Alejandro Vargas | Principal Support Consultant

Oracle Advanced Customer Services