
TechTip: Improve Performance When Writing to DB2 for i Tables, Part I

Written by Fernando Echeveste Friday, 04 September 2009 00:00 - Last Updated Monday, 31 August 2009 11:09

With a new OVRDBF feature, you can override the REUSEDLT(*YES) attribute of a physical file or table and effectively use the behavior of REUSEDLT(*NO).

If your application inserts a large volume of rows into DB2 for i tables, the following features found on IBM i can influence the performance of your application:

- Application-Level Blocked INSERT--Using a blocked INSERT statement, you can insert multiple rows into a table with a single INSERT statement.
- DB2-Level Row Blocking--To improve performance, the SQL runtime attempts to retrieve and insert rows from the database manager a block at a time whenever possible.
- Parallel Index Maintenance via DB2 Symmetric Multiprocessing (SMP)--With SMP enabled (see the sketch after this list), blocked INSERT or WRITE operations can benefit because the database engine maintains each index in parallel.
- Enable Concurrent Write (ECW, Also Known as "Holey Inserts")--This function overcomes the contention caused by database processing serialization when multiple concurrent jobs add rows to the same database table.
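For the SMP bullet above, parallel index maintenance only comes into play when the DB2 Symmetric Multiprocessing option is installed and a parallel degree is allowed for the job. The CL below is a minimal sketch of enabling that for a single batch job, not code from the article; the library and program names are hypothetical.

  PGM
    /* Let the optimizer choose a parallel degree for this job; requires */
    /* the DB2 Symmetric Multiprocessing (SMP) feature to be installed.  */
    CHGQRYA    DEGREE(*OPTIMIZE)

    /* Run the insert-heavy step; index maintenance can now be parallel. */
    CALL       PGM(MYLIB/LOADPGM)
  ENDPGM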

You can also multi-thread your application or batch workload so that it inserts into a table concurrently. Given all these features, do you know when your application or nightly processing environment is benefiting from one or more of them? The benefit depends on several factors: the database model, the computing and I/O subsystem configuration, and the characteristics of the application. In other words, is your application or batch process serial or sequential by nature, or is it capable of running in parallel? Was it designed to insert single or multiple rows in one operation? Beyond these factors, the benefit also depends on whether the physical file or DB2 table allows reuse of deleted rows for insert operations.

To Reuse or Not to Reuse Deleted Rows?

When the physical files or tables in your database do not reuse deleted rows for insert operations, new rows are inserted at the end of the table. In this case, your application could benefit from DB2-level row blocking. It could also benefit from application-level row blocking if the application is capable of inserting multiple rows in one insert operation. If the tables have SQL indexes or keyed logical files over them, having DB2 maintain those indexes in parallel could also pay off when deleted rows are not reused; this scenario requires DB2 SMP to be installed and enabled on the system. However, there are two drawbacks when the tables do not reuse deleted rows:

- If the application is multi-threaded to insert rows concurrently into the same table, DB2 for i serializes the actual insert and allows only one job at a time to write into the data space, because the Enable Concurrent Write feature requires the table to reuse deleted rows.
- If the application also deletes rows from the tables, eventually you will need to reorganize (reclaim) the deleted rows space by executing a Reorganize Physical File Member (RGZPFM) operation.

When the tables in your database do reuse deleted rows (this is the default for SQL-created tables), DB2 for i allows multiple jobs to perform insert requests concurrently. If the tables do not have deleted rows, the new rows are inserted at the end of the table. In this case, the application could benefit from DB2-level row blocking as well as Parallel Index Maintenance. However, there are major drawbacks when the table has deleted rows:

- All insert requests on the table will try to reuse deleted record space, and your application will not benefit from DB2-level row blocking.
- Parallel Index Maintenance is not used unless DB2-level row blocking is used.
- If blocked insert requests are used at the application level, they will be converted to single-row inserts at the DB2 engine level.
- Enable Concurrent Write loses some of its benefit because the concurrent inserts will be single-row inserts in order to reuse the deleted rows.

Since having tables with both insert and delete operations is fairly common, the question is: How can your application benefit from all the features available and get the best performance?
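Before choosing an approach, it helps to confirm how a given table is set today. The CL below is a minimal sketch, not part of the article; the library and file names are hypothetical. DSPFD reports the "Reuse deleted records" attribute, and CHGPF changes it permanently (in contrast to the temporary override described in the next section).

  /* Show the file attributes; look for "Reuse deleted records"          */
  DSPFD      FILE(MYLIB/ORDERS) TYPE(*ATR)

  /* Permanently change the attribute on the physical file               */
  CHGPF      FILE(MYLIB/ORDERS) REUSEDLT(*NO)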

New OVRDBF REUSEDLT(*NO) Option on DB2 for i

A new Override with Database File (OVRDBF) command option allows an application to temporarily override the REUSEDLT(*YES) attribute of a physical file or table and effectively use the behavior of REUSEDLT(*NO). For applications that require high-velocity inserts, this option overcomes the major drawbacks of reusing deleted rows: new rows are inserted at the end of the table rather than into deleted-row space within it. With the override in effect, your application can benefit from DB2-level row blocking. It can also benefit from Parallel Index Maintenance if the table has indexes or keyed logical files over it. More importantly, DB2 for i will allow multiple jobs to perform blocked insert requests concurrently. The scope of the override is determined by the activation group of the program that calls the command, by the current call level, or by the job in which the override occurs.
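Applied from CL, the override might look like the minimal sketch below. It assumes a release or PTF level on which OVRDBF supports the REUSEDLT parameter; the file, library, and program names are hypothetical, and OVRSCOPE(*JOB) is only one of the scopes mentioned above.

  PGM
    /* Temporarily behave as REUSEDLT(*NO) for this job only               */
    OVRDBF     FILE(ORDERS) REUSEDLT(*NO) OVRSCOPE(*JOB)

    /* Insert-heavy step: new rows go at the end of the table, so blocked  */
    /* inserts and parallel index maintenance can be used concurrently     */
    CALL       PGM(MYLIB/NIGHTLYBAT)

    /* Remove the override; the table's own REUSEDLT(*YES) applies again   */
    DLTOVR     FILE(ORDERS) LVL(*JOB)
  ENDPGM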


When the OVRDBF REUSEDLT(*NO) Option May Be Helpful

A real situation in which OVRDBF REUSEDLT(*NO) can be a good fit is the following: Let's say you have a batch process that runs every night in your shop. After performance analysis and tuning, the process usually completes in three or four hours, and the tables in the database experience a high degree of concurrent insert (or write) activity. The window to run this process is open until the morning, so you are comfortable with how the process is operating. However, on the first Wednesday of every month, the batch process handles twice the workload of the other days, and you are concerned about whether the window will be long enough on that particular night. For this "heavy Wednesday," you could run the batch process with the OVRDBF REUSEDLT(*NO) option to get the best performance. The unused deleted rows will remain in the table, but because the REUSEDLT parameter on the table is still *YES, DB2 for i will eventually reuse the deleted rows when the batch process runs on the other days of the month without the override. Keep in mind that if you decide to run with this option all the time, you will eventually have to run a Reorganize Physical File Member (RGZPFM) operation to reclaim the deleted rows space.
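If you do run with the override all the time, reclaiming the deleted-row space could look like the minimal sketch below; the library and file names are hypothetical, and the reorganize should be scheduled for a window when the table can be taken out of normal use.

  /* Reclaim the space held by deleted rows                              */
  RGZPFM     FILE(MYLIB/ORDERS) MBR(*FIRST) KEYFILE(*NONE)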

Stay Tuned

In a future TechTip, I will explain the results of lab-testing this new feature as well as some of the performance considerations when inserting rows into a table. I will also give information about how to get this new feature so you can start benefiting from it in your shop.
