Professional Documents
Culture Documents
http://www.oracle.com/technetwork/issue-archive/2012/12-sep/o52plsql-1709862.html 1 / 23
PLSQL 101 2/19/2014
Let's look at a concrete example to explore context switches more thoroughly and identify the
reason that FORALL and BULK COLLECT can have such a dramatic impact on performance.
Suppose my manager asked me to write a procedure that accepts a department ID and a salary
percentage increase and gives everyone in that department a raise by the specified percentage.
Taking advantage of PL/SQL's elegant cursor FOR loop and the ability to call SQL statements
natively in PL/SQL, I come up with the code in Listing 1.
Code Listing 1: increase_salary procedure with FOR loop
PROCEDURE increase_salary (
department_id_in IN employees.department_id%TYPE,
increase_pct_in IN NUMBER)
IS
BEGIN
FOR employee_rec
IN (SELECT employee_id
FROM employees
WHERE department_id =
increase_salary.department_id_in)
LOOP
UPDATE employees emp
SET emp.salary = emp.salary +
emp.salary * increase_salary.increase_pct_in
WHERE emp.employee_id = employee_rec.employee_id;
END LOOP;
END increase_salary;
Suppose there are 100 employees in department 15. When I execute this block,
BEGIN
increase_salary (15, .10);
END;
the PL/SQL engine will switch over to the SQL engine 100 times, once for each row being
updated. Tom Kyte, of AskTom (asktom.oracle.com), refers to row-by-row switching like this as
slow-by-slow processing, and it is definitely something to be avoided.
I will show you how you can use PL/SQL's bulk processing features to escape from slow-by-slow
processing. First, however, you should always check to see if it is possible to avoid the
context switching between PL/SQL and SQL by doing as much of the work as possible within
SQL.
Take another look at the increase_salary procedure. The SELECT statement identifies all the
employees in a department. The UPDATE statement executes for each of those employees,
applying the same percentage increase to all. In such a simple scenario, a cursor FOR loop is
not needed at all. I can simplify this procedure to nothing more than the code in Listing 2.
Code Listing 2: Simplified increase_salary procedure without FOR loop
PROCEDURE increase_salary (
department_id_in IN employees.department_id%TYPE,
increase_pct_in IN NUMBER)
IS
BEGIN
UPDATE employees emp
SET emp.salary =
emp.salary
+ emp.salary * increase_salary.increase_pct_in
WHERE emp.department_id =
increase_salary.department_id_in;
END increase_salary;
Now there is just a single context switch to execute one UPDATE statement. All the work is done
in the SQL engine.
Of course, in most real-world scenarios, life (and code) is not so simple. We often need to
perform other steps prior to execution of our data manipulation language (DML) statements.
Suppose that, for example, in the case of the increase_salary procedure, I need to check
employees for eligibility for the increase in salary and if they are ineligible, send an e-mail
notification. My procedure might then look like the version in Listing 3.
Code Listing 3: increase_salary procedure with eligibility checking added
PROCEDURE increase_salary (
department_id_in IN employees.department_id%TYPE,
increase_pct_in IN NUMBER)
IS
l_eligible BOOLEAN;
BEGIN
FOR employee_rec
IN (SELECT employee_id
FROM employees
WHERE department_id =
increase_salary.department_id_in)
LOOP
check_eligibility (employee_rec.employee_id,
increase_pct_in,
l_eligible);
IF l_eligible
THEN
UPDATE employees emp
SET emp.salary =
emp.salary
+ emp.salary
* increase_salary.increase_pct_in
WHERE emp.employee_id = employee_rec.employee_id;
END IF;
END LOOP;
END increase_salary;
Lines Description
5-8 Declare a new nested table type and two collection variables based on this type.
One variable, l_employee_ids, will hold the IDs of all employees in the department.
The other, l_eligible_ids, will hold the IDs of all those employees who are eligible
for the salary increase.
12-15 Use BULK COLLECT to fetch all the IDs of employees in the specified department
into the l_employee_ids collection.
30-35 Use a FORALL statement to update all the rows identified by employee IDs in the
l_eligible_ids collection.
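Listing 4 is described line by line above but is not reproduced in this excerpt. Based on those notes, a sketch of the BULK COLLECT / FORALL version of increase_salary might look like this (a reconstruction, not the original listing):

```sql
PROCEDURE increase_salary (
   department_id_in   IN employees.department_id%TYPE,
   increase_pct_in    IN NUMBER)
IS
   -- Lines 5-8: a new nested table type and two collections based on it
   TYPE employee_ids_t IS TABLE OF employees.employee_id%TYPE;

   l_employee_ids   employee_ids_t;
   l_eligible_ids   employee_ids_t := employee_ids_t ();

   l_eligible       BOOLEAN;
BEGIN
   -- Lines 12-15: fetch all employee IDs in the department in a single pass
   SELECT employee_id
     BULK COLLECT INTO l_employee_ids
     FROM employees
    WHERE department_id = increase_salary.department_id_in;

   -- Copy only the eligible IDs into the second collection
   FOR indx IN 1 .. l_employee_ids.COUNT
   LOOP
      check_eligibility (l_employee_ids (indx),
                         increase_pct_in,
                         l_eligible);

      IF l_eligible
      THEN
         l_eligible_ids.EXTEND;
         l_eligible_ids (l_eligible_ids.COUNT) := l_employee_ids (indx);
      END IF;
   END LOOP;

   -- Lines 30-35: one context switch to update all eligible rows
   FORALL indx IN 1 .. l_eligible_ids.COUNT
      UPDATE employees emp
         SET emp.salary =
                  emp.salary
                + emp.salary * increase_salary.increase_pct_in
       WHERE emp.employee_id = l_eligible_ids (indx);
END increase_salary;
```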
If you are fetching lots of rows, the collection that is being filled could consume too much session
memory and raise an error. To help you avoid such errors, Oracle Database offers a LIMIT clause
for BULK COLLECT. Suppose that, for example, there could be tens of thousands of employees
in a single department and my session does not have enough memory available to store 20,000
employee IDs in a collection.
Instead I use the approach in Listing 6.
Code Listing 6: Fetching up to the number of rows specified
DECLARE
c_limit PLS_INTEGER := 100;
CURSOR employees_cur
IS
SELECT employee_id
FROM employees
WHERE department_id = department_id_in;
TYPE employee_ids_t IS TABLE OF
employees.employee_id%TYPE;
l_employee_ids employee_ids_t;
BEGIN
OPEN employees_cur;
LOOP
FETCH employees_cur
BULK COLLECT INTO l_employee_ids
LIMIT c_limit;

EXIT WHEN l_employee_ids.COUNT = 0;

/* process the batch of up to c_limit employee IDs here */
END LOOP;

CLOSE employees_cur;
END;
With this approach, I open the cursor that identifies all the rows I want to fetch. Then, inside a
loop, I use FETCH-BULK COLLECT-INTO to fetch up to the number of rows specified by the
c_limit constant (set to 100). Now, no matter how many rows I need to fetch, my session will
never consume more memory than that required for those 100 rows, yet I will still benefit from the
improvement in performance of bulk querying.
About FORALL
Whenever you execute a DML statement inside of a loop, you should convert that code to use
FORALL. The performance improvement will amaze you and please your users.
The FORALL statement is not a loop; it is a declarative statement to the PL/SQL engine:
Generate all the DML statements that would have been executed one row at a time, and send
them all across to the SQL engine with one context switch.
As you can see in Listing 4, lines 30 through 35, the header of the FORALL statement looks just
like a numeric FOR loop, yet there are no LOOP or END LOOP keywords.
Here are some things to know about FORALL:
Each FORALL statement may contain just a single DML statement. If your loop contains two
updates and a delete, then you will need to write three FORALL statements.
PL/SQL declares the FORALL iterator (indx on line 30 in Listing 4) as an integer, just as it
does with a FOR loop. You do not need to (and you should not) declare a variable with this
same name.
In at least one place in the DML statement, you need to reference a collection and use the
FORALL iterator as the index value in that collection (see line 35 in Listing 4).
When using the IN low_value . . . high_value syntax in the FORALL header, the collections
referenced inside the FORALL statement must be densely filled. That is, every index value
between the low_value and high_value must be defined.
If your collection is not densely filled, you should use the INDICES OF or VALUES OF syntax
in your FORALL header.
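To illustrate the one-DML-statement rule from the first point above: a loop containing two updates and a delete would be converted into three separate FORALL statements, along these lines (table and collection names are invented for the sketch):

```sql
-- Hypothetical example: three DML statements require three FORALLs
FORALL indx IN 1 .. l_ids.COUNT
   UPDATE orders
      SET status = 'SHIPPED'
    WHERE customer_id = l_ids (indx);

FORALL indx IN 1 .. l_ids.COUNT
   UPDATE customers
      SET last_shipment_date = SYSDATE
    WHERE customer_id = l_ids (indx);

FORALL indx IN 1 .. l_ids.COUNT
   DELETE FROM pending_shipments
    WHERE customer_id = l_ids (indx);
```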
FORALL and DML Errors
Suppose that I've written a program that is supposed to insert 10,000 rows into a table. After
inserting 9,000 of those rows, the 9,001st insert fails with a DUP_VAL_ON_INDEX error (a
unique index violation). The SQL engine passes that error back to the PL/SQL engine, and if the
FORALL statement is written like the one in Listing 4, PL/SQL will terminate the FORALL
statement. The remaining 999 rows will not be inserted.
If you want the PL/SQL engine to execute as many of the DML statements as possible, even if
errors are raised along the way, add the SAVE EXCEPTIONS clause to the FORALL header.
Then, if the SQL engine raises an error, the PL/SQL engine will save that information in a
pseudocollection named SQL%BULK_EXCEPTIONS, and continue executing statements. When
all statements have been attempted, PL/SQL then raises the ORA-24381 error.
You can (and should) trap that error in the exception section and then iterate through the
contents of SQL%BULK_EXCEPTIONS to find out which errors have occurred. You can then write
error information to a log table and/or attempt recovery of the DML statement.
Listing 7 contains an example of using SAVE EXCEPTIONS in a FORALL statement; in this case,
I simply display on the screen the index in the l_eligible_ids collection on which the error
occurred, and the error code that was raised by the SQL engine.
Code Listing 7: Using SAVE EXCEPTIONS with FORALL
BEGIN
FORALL indx IN 1 .. l_eligible_ids.COUNT SAVE EXCEPTIONS
UPDATE employees emp
SET emp.salary =
emp.salary + emp.salary * increase_pct_in
WHERE emp.employee_id = l_eligible_ids (indx);
EXCEPTION
WHEN OTHERS
THEN
IF SQLCODE = -24381
THEN
FOR indx IN 1 .. SQL%BULK_EXCEPTIONS.COUNT
LOOP
DBMS_OUTPUT.put_line (
SQL%BULK_EXCEPTIONS (indx).ERROR_INDEX
|| ': '
|| SQL%BULK_EXCEPTIONS (indx).ERROR_CODE);
END LOOP;
ELSE
RAISE;
END IF;
END increase_salary;
Suppose that I have written a function named betwnstr that returns the string between a start and
end point. Heres the header of the function:
FUNCTION betwnstr (
string_in IN VARCHAR2
, start_in IN INTEGER
, end_in IN INTEGER
)
RETURN VARCHAR2
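The query under discussion did not survive this capture; a representative reconstruction, consistent with the description that follows (20 of 100 employees in department 10, with betwnstr called in the SELECT list), would be:

```sql
SELECT betwnstr (last_name, 2, 6)
  FROM employees
 WHERE department_id = 10
```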
If the employees table has 100 rows and 20 of those have department_id set to 10, then there
will be 20 context switches from SQL to PL/SQL to run this function.
You should, consequently, pay close attention to all invocations of user-defined functions in SQL,
especially those that occur in the WHERE clause of the statement. Consider the following query:
SELECT employee_id
FROM employees
WHERE betwnstr (last_name, 2, 6) = 'MITHY'
In this query, the betwnstr function will be executed 100 times, and there will be 100 context
switches.
FORALL with Sparse Collections
If you try to use the IN low_value .. high_value syntax with FORALL and there is an undefined
index value within that range, Oracle Database will raise the ORA-22160: element at index [N]
does not exist error.
To avoid this error, you can use the INDICES OF or VALUES OF clauses. To see how these
clauses can be used, let's go back to the code in Listing 4. In this version of increase_salary,
I declare a second collection, l_eligible_ids, to hold the IDs of those employees who are
eligible for a raise.
Instead of doing that, I can simply remove all ineligible IDs from the l_employee_ids collection,
as follows:
FOR indx IN 1 .. l_employee_ids.COUNT
LOOP
check_eligibility (l_employee_ids (indx),
increase_pct_in,
l_eligible);
IF NOT l_eligible
THEN
l_employee_ids.delete (indx);
END IF;
END LOOP;
But now my l_employee_ids collection may have gaps in it: index values that are undefined
between 1 and the highest index value populated by the BULK COLLECT.
No worries. I will simply change my FORALL statement to the following:
FORALL indx IN INDICES OF l_employee_ids
UPDATE employees emp
SET emp.salary =
emp.salary
+ emp.salary *
increase_salary.increase_pct_in
WHERE emp.employee_id =
l_employee_ids (indx);
Now I am telling the PL/SQL engine to use only those index values that are defined in
l_employee_ids, rather than specifying a fixed range of values. Oracle Database will simply skip
any undefined index values, and the ORA-22160 error will not be raised.
This is the simplest application of INDICES OF. Check the documentation for more-complex
usages of INDICES OF, as well as when and how to use VALUES OF.
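As a taste of VALUES OF: instead of iterating over the defined index values of the bound collection, you supply a second collection whose element values name the indexes to process. A minimal sketch (sample IDs and the pointer collection are invented):

```sql
DECLARE
   TYPE ids_t IS TABLE OF employees.employee_id%TYPE;
   TYPE pointer_t IS TABLE OF PLS_INTEGER;

   l_employee_ids   ids_t := ids_t (138, 145, 152, 160);

   -- Process only elements 2 and 4 of l_employee_ids
   l_pointers       pointer_t := pointer_t (2, 4);
BEGIN
   FORALL indx IN VALUES OF l_pointers
      UPDATE employees emp
         SET emp.salary = emp.salary * 1.1
       WHERE emp.employee_id = l_employee_ids (indx);
END;
```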
Bulk Up Your Code!
Optimizing the performance of your code can be a difficult and time-consuming task. It can also
be a relatively easy and exhilarating experience, if your code has not yet been modified to take
advantage of BULK COLLECT and FORALL. In that case, you have some low-hanging fruit to
pick!
The quiz appears below and also at PL/SQL Challenge, a Website that offers online quizzes
on the PL/SQL language as well as SQL and Oracle Application Express.
Question
Which of these blocks will uppercase the last names of all employees in the table?
a.
DECLARE
TYPE ids_t IS TABLE OF plch_employees.employee_id%TYPE;
l_ids ids_t := ids_t (100, 200, 300);
BEGIN
FORALL indx IN 1 .. l_ids.COUNT
LOOP
UPDATE plch_employees
SET last_name = UPPER (last_name)
WHERE employee_id = l_ids (indx);
END LOOP;
END;
/
b.
DECLARE
TYPE ids_t IS TABLE OF plch_employees.employee_id%TYPE;
c.
BEGIN
UPDATE plch_employees
SET last_name = UPPER (last_name);
END;
/
d.
DECLARE
TYPE ids_t IS TABLE OF plch_employees.employee_id%TYPE;
l_ids ids_t := ids_t (100, 200, 300);
BEGIN
FORALL indx IN INDICES OF l_ids
UPDATE plch_employees
SET last_name = UPPER (last_name)
WHERE employee_id = l_ids (indx);
END;
/
8 Bulk Update Methods Compared | Oracle FAQ 2/19/2014
http://www.orafaq.com/node/2450 9 / 23
rec_cur c1%rowtype;
BEGIN
OPEN c1;
LOOP
FETCH c1 INTO rec_cur;
EXIT WHEN c1%notfound;
UPDATE test
SET fk = rec_cur.fk
, fill = rec_cur.fill
WHERE pk = rec_cur.pk;
END LOOP;
CLOSE c1;
END;
/
The biggest drawback to this method is readability. Since Oracle does not yet provide support for record collections in
FORALL, we need to use scalar collections, making for long declarations, INTO clauses, and SET clauses.
DECLARE
CURSOR rec_cur IS
SELECT *
FROM test4;
TYPE num_tab_t IS TABLE OF NUMBER(38);
TYPE vc2_tab_t IS TABLE OF VARCHAR2(4000);
pk_tab NUM_TAB_T;
fk_tab NUM_TAB_T;
fill_tab VC2_TAB_T;
BEGIN
OPEN rec_cur;
LOOP
FETCH rec_cur BULK COLLECT INTO pk_tab, fk_tab, fill_tab LIMIT 1000;
EXIT WHEN pk_tab.COUNT() = 0;
FORALL i IN 1 .. pk_tab.COUNT
UPDATE test
SET fk = fk_tab(i)
, fill = fill_tab(i)
WHERE pk = pk_tab(i);
END LOOP;
CLOSE rec_cur;
END;
/
Method 6: MERGE
The modern equivalent of the Updateable Join View. Gaining in popularity due to its combination of brevity and
performance, it is primarily used to INSERT and UPDATE in a single statement. We are using the update-only version
here. Note that I have included a FIRST_ROWS hint to force an indexed nested loops plan. This is to keep the playing field
level when comparing to the other methods, which also perform primary-key lookups on the target table. A hash
join may or may not be faster, but that is not the point: I could increase the size of the target TEST table to
500M rows, and the hash join would certainly be slower.
MERGE /*+ FIRST_ROWS*/ INTO test
USING test2 new ON (test.pk = new.pk)
WHEN MATCHED THEN UPDATE SET
fk = new.fk
, fill = new.fill
/
test_rec TEST%ROWTYPE;
TYPE num_tab_t IS TABLE OF NUMBER(38);
TYPE vc2_tab_t IS TABLE OF VARCHAR2(4000);
pk_tab NUM_TAB_T;
fk_tab NUM_TAB_T;
fill_tab VC2_TAB_T;
cnt INTEGER := 0;
BEGIN
LOOP
FETCH test_cur BULK COLLECT INTO pk_tab, fk_tab, fill_tab LIMIT 1000;
EXIT WHEN pk_tab.COUNT() = 0;
FORALL i IN 1 .. pk_tab.COUNT
UPDATE test
SET fk = fk_tab(i)
, fill = fill_tab(i)
WHERE pk = pk_tab(i);
cnt := cnt + pk_tab.COUNT;
END LOOP;
CLOSE test_cur;
COMMIT;
PIPE ROW(cnt);
RETURN;
END;
/
Note that it receives its data via a Ref Cursor parameter. This is a feature of Oracle's parallel-enabled functions; they will
apportion the rows of a single Ref Cursor amongst many parallel slaves, with each slave running over a different subset of
the input data set.
Here is the statement that calls the Parallel Enabled Table Function:
SELECT sum(column_value)
FROM TABLE(test_parallel_update(CURSOR(SELECT * FROM test7)));
Note that we are using a SELECT statement to call a function that performs an UPDATE. Yeah, I know, it's nasty. You need
to make the function an AUTONOMOUS TRANSACTION to stop it from throwing an error. But just bear with me, it is the
closest PL/SQL equivalent I can make to a third-party ETL Tool such as DataStage with native parallelism.
The timings below were taken over three separate runs:
Run 1: The buffer cache is flushed and about 1 hour of unrelated statistics gathering has been used to age out the
disk cache.
Run 2: The buffer cache is flushed and the disk cache has been aged out with about 5-10mins of indexed reads.
Timings indicate that the disk cache is still partially populated with blocks used by the query.
Run 3: The buffer cache is pre-salted with the table and blocks it will need. It should perform very little disk IO.
RUN 1 RUN 2 RUN 3
----------------------------------- ----- ----- -----
1. Explicit Cursor Loop 931.3 783.2 49.3
2. Implicit Cursor Loop 952.7 672.8 40.2
3. UPDATE with nested SET subquery 941.4 891.5 31.5
4. BULK COLLECT / FORALL UPDATE 935.2 826.0 27.9
5. Updateable Join View 933.2 741.0 28.8
6. MERGE 854.6 838.5 28.4
7. Parallel DML MERGE 55.7 46.1 47.7
8. Parallel PL/SQL 28.2 27.2 6.3
ROUND 2
Let's see how a Foreign Key constraint affects things. For this round, I have created a parent table and a Foreign Key on
the FK column.
For brevity, this time we'll just flush the buffer cache and run about 5 minutes worth of indexed reads to cycle the disk
cache.
RUN 1 RUN 2
----------------------------------- ----- -----
1. Explicit Cursor Loop 887.1 874.6
2. Implicit Cursor Loop 967.0 752.1
3. UPDATE with nested SET subquery 920.1 795.2
4. BULK COLLECT / FORALL UPDATE 840.9 759.2
5. Updateable Join View 727.5 851.8
6. MERGE 807.8 833.6
7. Parallel DML MERGE 26.8 29.2
8. Parallel PL/SQL 25.3 23.8
Summary of findings:
It looks as though there is a small premium associated with checking the foreign key, although it does not appear
to be significant. It's worth noting that the parent table in this case is very small and quickly cached. A very large
parent table would result in a considerably greater number of cache misses and resultant disk IO. Foreign keys are
often blamed for bad performance; whilst they can be limiting in some circumstances (e.g. direct path loads),
updates are not greatly affected when the parent tables are small.
I was expecting the Parallel DML MERGE to be slower. According to the Oracle Database Data Warehousing Guide
- 10g Release 2, INSERT and MERGE are "Not Parallelized" when issued against the child of a Foreign Key constraint,
whereas parallel UPDATE is "supported". As a test, I issued a similar MERGE statement and redundantly included
the WHEN NOT MATCHED THEN INSERT clause: it was not parallelized and ran slower. The lesson here: there may
be merit in applying an upsert (insert else update) as an update-only MERGE followed by an INSERT instead of using
a single MERGE.
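Using the article's test tables, the split-upsert idea might be sketched as follows (hints are shown for illustration only; benchmark before adopting):

```sql
-- Step 1: update-only MERGE, which CAN be parallelized against an FK child
MERGE /*+ parallel(test) */ INTO test
USING test2 new ON (test.pk = new.pk)
WHEN MATCHED THEN UPDATE SET
     fk   = new.fk
   , fill = new.fill;

-- Step 2: insert the rows that did not match
INSERT /*+ parallel(test) */ INTO test (pk, fk, fill)
SELECT new.pk, new.fk, new.fill
  FROM test2 new
 WHERE NOT EXISTS (SELECT NULL
                     FROM test
                    WHERE test.pk = new.pk);
```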
ROUND 3
The two things I hear most about Bitmap indexes is that:
They are inappropriate for tables that undergo concurrent updates, and
They are slow to update.
Surely no comparison of update methods could possibly be complete without a test of Bitmap index maintenance.
In this round, I have removed the Foreign Key used in Round 2, and included a Bitmap index on TEST.FK
RUN 1 RUN 2
----------------------------------- ----- -----
1. Explicit Cursor Loop 826.0 951.2
2. Implicit Cursor Loop 898.7 877.2
3. UPDATE with nested SET subquery 588.9 633.4
4. BULK COLLECT / FORALL UPDATE 898.0 926.7
5. Updateable Join View 547.8 687.1
6. MERGE 689.3 763.4
7. Parallel DML MERGE 30.2 28.4
8. Parallel PL/SQL ORA-00060: deadlock detected
Well, if further proof was needed that Bitmap indexes are inappropriate for tables that are maintained by multiple
concurrent sessions, surely this is it. The Deadlock error raised by Method 8 occurred because bitmap indexes are locked
at the block-level, not the row level. With hundreds of rows represented by each block in the index, the chances of two
sessions attempting to lock the same block are quite high. The very clear lesson here: don't update bitmap indexed tables
in parallel sessions; the only safe parallel method is PARALLEL DML.
The other interesting outcome is the differing impact of the bitmap index on set-based updates vs transactional updates
(SQL solutions vs PL/SQL solutions). PL/SQL solutions seem to incur a penalty when updating bitmap indexed tables. A
single bitmap index has added around 10% to the overall runtime of PL/SQL solutions, whereas the set-based (SQL-based)
solutions run faster than the B-Tree indexes case (above). Although not shown here, this effect is magnified with each
additional bitmap index. Given that most bitmap-indexed tables would have several such indexes (as bitmap indexes are
designed to be of most use in combination), this shows that PL/SQL is virtually non-viable as a means of updating a large
number of rows.
SUMMARY OF FINDINGS
Context Switches in cursor loops have greatest impact when data is well cached. For updates with buffer cache hit-
ratio >99%, convert to BULK COLLECT or MERGE.
Use MERGE with a Hash Join when updating a significant proportion of blocks (not rows!) in a segment.
Parallelize large updates for a massive performance improvement.
Tune the number of parallel query servers used by looking for latch contention and thread startup waits.
Don't rashly drop Foreign Keys without benchmarking; they may not be costing very much to maintain.
MERGE statements that UPDATE and INSERT cannot be parallelised when a Foreign Key is present. If you want to
keep the Foreign Key, you will need to use multiple concurrent sessions (insert/update variant of Method 8) to
achieve parallelism.
Don't use PL/SQL to maintain bitmap indexed tables; not even with BULK COLLECT / FORALL. Instead, INSERT
transactions into a Global Temporary Table and apply a MERGE.
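The last recommendation (stage transactions in a Global Temporary Table, then MERGE) could be sketched like this, reusing the shape of the article's TEST table (staging table name and column sizes are assumed):

```sql
-- One-time DDL: session-private staging table
CREATE GLOBAL TEMPORARY TABLE test_stage (
   pk     NUMBER,
   fk     NUMBER,
   fill   VARCHAR2(4000)
) ON COMMIT PRESERVE ROWS;

-- At run time: accumulate the transactional changes...
INSERT INTO test_stage (pk, fk, fill)
SELECT pk, fk, fill FROM test2;

-- ...then apply them to the bitmap-indexed table in one set-based pass
MERGE INTO test
USING test_stage new ON (test.pk = new.pk)
WHEN MATCHED THEN UPDATE SET
     fk   = new.fk
   , fill = new.fill;
```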
-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
-------------------------------------------------------------------------------
| 0 | MERGE STATEMENT | | 95331 | 7261K| 191K (1)|
| 1 | MERGE | TEST | | | |
| 2 | VIEW | | | | |
| 3 | NESTED LOOPS | | 95331 | 8937K| 191K (1)|
| 4 | TABLE ACCESS FULL | TEST2 | 95331 | 4468K| 170 (3)|
| 5 | TABLE ACCESS BY INDEX ROWID| TEST | 1 | 48 | 2 (0)|
| 6| INDEX UNIQUE SCAN | TEST_PK | 1 | | 1 (0)|
-------------------------------------------------------------------------------
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
---------------------------------------------------------------------------
| 0 | MERGE STATEMENT | | 95331 | 7261K| | 46318 (3)|
| 1 | MERGE | TEST | | | | |
| 2 | VIEW | | | | | |
| 3 | HASH JOIN | | 95331 | 8937K| 5592K| 46318 (3)|
| 4 | TABLE ACCESS FULL| TEST2 | 95331 | 4468K| | 170 (3)|
| 5 | TABLE ACCESS FULL| TEST | 10M| 458M| | 16949 (4)|
---------------------------------------------------------------------------
That's a pretty significant difference: the same method (MERGE) is 6-7 times faster when performed as a Hash Join.
Although the number of physical disk blocks and Current Mode Gets are about the same in each test, the Hash Join
method performs multi-block reads, resulting in fewer visits to the disk.
All 8 methods above were benchmarked on the assumption that the target table is arbitrarily large and the subset of
rows/blocks to be updated are relatively small. If the proportion of updated blocks increases, then the average cost of
finding those rows decreases; the exercise becomes one of tuning the data access rather than tuning the update.
We can see here that the Parallel Co-ordinator spent 23.61 seconds (of the 57.94 elapsed) simply starting up the parallel
threads, and 30.3 seconds waiting for them to do their stuff.
And here are the wait events for just ONE of the parallel threads from the same test case:
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
cursor: pin S wait on X 3 0.02 0.06
PX Deq: Execution Msg 16 1.96 10.94
PX Deq: Msg Fragment 2 0.00 0.00
latch: parallel query alloc buffer 7 5.89 7.52
db file sequential read 825 0.10 12.00
read by other session 17 0.06 0.18
log buffer space 1 0.03 0.03
PX Deq Credit: send blkd 1 0.02 0.02
PX Deq: Table Q Normal 28 0.19 0.35
latch: cache buffers chains 1 0.01 0.01
db file parallel read 1 0.11 0.11
From this, we can see that of the 30.3 seconds the Co-ordinator spent waiting for the parallel threads, this one spent
7.52 waiting for shared resources (latches) held by other parallel threads, and just 12 seconds reading blocks from disk.
For comparison, here is the trace of the Co-ordinator session of a Parallel PL/SQL run:
SELECT sum(column_value)
FROM TABLE(test_parallel_update(
CURSOR(SELECT * FROM TEST7)
))
The Parallel PL/SQL spent just 11.85 seconds starting parallel threads, compared to 23.61 seconds for PARALLEL DML. I
noticed from the trace that PARALLEL DML used 256 parallel threads, whereas the PL/SQL method used just 128. Looking
more closely at the trace files I suspect that the PARALLEL DML used 128 readers and 128 writers, although it is hard to be
sure. Whatever Oracle is doing here, it seems there is certainly a significant cost of opening parallel threads.
Also, looking at the wait events for the Parallel PL/SQL slave thread, we see no evidence of resource contention as we did
in the PARALLEL DML example.
In theory, we should be able to reduce the cost of thread startup and also reduce contention by reducing the number of
parallel threads. Knowing from above that the parallel methods were 10-20 times faster than the non-parallel methods, I
suspect that benefits of parallelism diminish after no more than 32 parallel threads. In support of that theory, here is a
trace of a PARALLEL DML test case with 32 parallel threads:
First the Parallel Co-ordinator:
MERGE /*+ first_rows parallel(test5 32) parallel(test 32) */ INTO test
USING test5 new ON (test.pk = new.pk)
WHEN MATCHED THEN UPDATE SET
fk = new.fk
, fill = new.fill
plsql - Pl/SQL Bulk Bind/ Faster Update Statements - Stack Overflow 2/19/2014
http://stackoverflow.com/questions/9615934/pl-sql-bulk-bind-faster-update-statements
fieldnames fieldname_aat;
fieldvalues fieldvalue_aat;
approved_components component_aat;

PROCEDURE partition_eligibility
IS
BEGIN
FOR indx IN sendSubject_in.FIRST .. sendSubject_in.LAST
LOOP
approved_components(indx) := sendSubject_in(indx);
fieldnames(indx) := fieldname_in(indx);
fieldvalues(indx) := fieldvalue_in(indx);
END LOOP;
END;
PROCEDURE update_components
IS
BEGIN
FORALL indx IN approved_components.FIRST .. approved_components.LAST
UPDATE Component
SET Fieldvalue = fieldvalues(indx)
WHERE Component_id = approved_components(indx)
AND Fieldname = fieldnames(indx);
END;
BEGIN
partition_eligibility;
update_components;
END BulkUpdate;
1 Answer
There is something else going on, I suspect your individual updates are each taking a lot of time, maybe because there are
triggers or inefficient indexes. (Note that if each statement is expensive individually, using bulk updates won't save you a lot
of time since the context switches are negligible compared to the actual work).
Here is my test setup:
CREATE TABLE Component (
Component_id NUMBER,
fieldname VARCHAR2(100),
Fieldvalue VARCHAR2(100),
CONSTRAINT component_pk PRIMARY KEY (component_id, fieldname)
);
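The answerer's population and timing script was cut off in this capture; a bulk-bind test in the same spirit (row count and sample data are invented for the sketch) might look like:

```sql
DECLARE
   TYPE id_aat IS TABLE OF Component.Component_id%TYPE INDEX BY PLS_INTEGER;
   TYPE name_aat IS TABLE OF Component.fieldname%TYPE INDEX BY PLS_INTEGER;
   TYPE value_aat IS TABLE OF Component.Fieldvalue%TYPE INDEX BY PLS_INTEGER;

   l_ids      id_aat;
   l_names    name_aat;
   l_values   value_aat;
BEGIN
   -- Build 100,000 bind rows in memory
   FOR i IN 1 .. 100000
   LOOP
      l_ids (i)    := i;
      l_names (i)  := 'FIELD' || MOD (i, 10);
      l_values (i) := 'VALUE' || i;
   END LOOP;

   -- A single context switch applies all 100,000 updates
   FORALL i IN 1 .. l_ids.COUNT
      UPDATE Component
         SET Fieldvalue = l_values (i)
       WHERE Component_id = l_ids (i)
         AND Fieldname = l_names (i);

   COMMIT;
END;
```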
100000 rows updated in about 1.5 seconds on an unremarkable test machine. Updating the same data set row by row
takes about 4 seconds.
Can you run a similar script with a newly created table?