Oracle Business Intelligence Applications Version 7.9.6.x Performance Recommendations

An Oracle Technical Note, 7th Edition
April 2011

Copyright © 2011, Oracle. All rights reserved.

Introduction ................................................................ 4
Hardware recommendations for implementing Oracle BI Applications ............ 4
    Storage Considerations for Oracle Business Analytics Warehouse .......... 5
        Introduction ........................................................ 5
        Shared Storage Impact Benchmarks .................................... 5
        Conclusion .......................................................... 7
    Source Tier ............................................................. 7
    Oracle BI Enterprise Edition (OBIEE) / ETL Tier ......................... 7
        Review of OBIEE/ETL Tier components ................................. 7
        Deployment considerations for the ETL components .................... 7
    Target Tier ............................................................. 7
        Oracle RDBMS ........................................................ 7
Oracle Business Analytics Warehouse configuration ........................... 8
    Database configuration parameters ....................................... 8
    ETL impact on amount of generated REDO Logs ............................. 8
    Oracle RDBMS System Statistics .......................................... 9
    Parallel Query configuration ............................................ 9
    Oracle Business Analytics Warehouse Tablespaces ........................ 10
Oracle BI Applications Best Practices for Oracle Exadata ................... 10
    Handling BI Applications Indexes in Exadata Warehouse Environment ...... 10
    Gather Table Statistics for BI Applications Tables ..................... 11
    Oracle Business Analytics Warehouse Storage Settings in Exadata ........ 11
    Parallel Query Use in BI Applications on Exadata ....................... 11
    Compression Implementation Oracle Business Analytics Warehouse in Exadata .. 12
    Exadata Smart Flash Cache .............................................. 12
    Database Parameter File for Analytics Warehouse on Exadata ............. 12
Informatica configuration for better performance ........................... 14
    Informatica PowerCenter 8.6 32-bit vs. 64-bit .......................... 14
    Informatica Session Logs ............................................... 14
    Informatica Lookups .................................................... 15
    Disabling Lookup Cache for very large Lookups .......................... 15
    Joining Staging Tables to Lookup Tables in Informatica Lookups ......... 16
    Informatica Custom Relational Connections for long running mappings .... 16
    Informatica Session Parameters ......................................... 17
        Commit Interval .................................................... 17
        DTM Buffer Size .................................................... 17
        Additional Concurrent Pipelines for Lookup Cache Creation .......... 18
        Default Buffer Block Size .......................................... 18
    Informatica Load: Bulk vs. Normal ...................................... 18
    Informatica Bulk Load: Table Fragmentation ............................. 18
    Use of NULL Ports in Informatica Mappings .............................. 19
    Informatica Parallel Sessions Load on ETL tier ......................... 19
    Informatica Load Balancing Implementation .............................. 20
Bitmap Indexes usage for better queries performance ........................ 20
    Introduction ........................................................... 20
    DAC properties for handling bitmap indexes during ETL .................. 20
    Bitmap Indexes handling strategies ..................................... 22
    Disabling Indexes with DISTINCT_KEYS = 0 or 1 .......................... 25
    Monitoring and Disabling Unused Indexes ................................ 26
    Handling Query Indexes during Initial ETL .............................. 28
Partitioning guidelines for Large Fact tables .............................. 29
    Introduction ........................................................... 29
    Convert to partitioned tables .......................................... 30
        Identify a partitioning key and decide on a partitioning interval .. 30
        Create a partitioned table in Data Warehouse ....................... 31
        Configure Informatica to support partitioned tables ................ 34
        Configure DAC to support partitioned tables ........................ 34
        Unit test the changes for converted partitioned tables in DAC ...... 41
    Interval Partitioning .................................................. 41
Informatica Workflows Session partitioning ................................. 42
    Workflow Session Partitioning for Parallel Writer Updates .............. 42
Table Compression implementation guidelines ................................ 44
Guidelines for Oracle optimizer hints usage in ETL mappings ................ 45
    Hash Joins versus Nested Loops in Oracle RDBMS ......................... 45
    Suggested hints for Oracle Business Intelligence Applications 7.9.6 .... 48
    Using Oracle Optimizer Dynamic Sampling for big staging tables ......... 52
Custom Indexes in Oracle EBS for incremental loads performance ............. 53
    Introduction ........................................................... 53
    Custom OBIEE indexes in EBS 11i and R12 systems ........................ 53
    Custom EBS indexes in EBS 11i source systems ........................... 55
    Oracle EBS tables with high transactional load ......................... 56
    Custom EBS indexes on CREATION_DATE in EBS 11i source systems .......... 57
Custom Aggregates for Better Query Performance ............................. 57
    Introduction ........................................................... 57
    Database Configuration Requirements for using MVs ...................... 57
    Custom Materialized View Guidelines .................................... 58
    Integrate MV Refresh in DAC Execution Plan ............................. 62
Wide tables with over 255 columns performance .............................. 64
    Introduction ........................................................... 64
    Wide tables structure optimization ..................................... 64
Oracle BI Applications High Availability ................................... 65
    Introduction ........................................................... 65
    High Availability with Oracle Data Guard and Physical Standby Database . 65
Oracle BI Applications ETL Performance Benchmarks .......................... 67
    Oracle BI Applications 7.9.6.1, Siebel CRM 8.0 Adapter ................. 67
    Oracle BI Applications 7.9.6.1, Oracle EBS R12 Projects Adapter ........ 68
    Oracle BI Applications 7.9.6.1, Oracle EBS 11i10 Enterprise Sales Adapter . 68
    Oracle BI Applications 7.9.6.1, Oracle EBS 11i10 Supply Chain Adapter .. 69
Conclusion ................................................................. 70


INTRODUCTION

Oracle Business Intelligence (BI) Applications Version 7.9.6 delivers a number of adapters to various business applications on the Oracle database. The 7.9.6.1 version is certified with other major data warehousing platforms. This article discusses performance topics for Oracle BI Applications 7.9.6 and higher using the Informatica PowerCenter 8.6 ETL platform. It covers advanced performance tuning techniques in Informatica and Oracle RDBMS.

Each Oracle BI Applications implementation requires very careful planning to ensure the best performance both during ETL and web queries or dashboard execution. All recommendations must be carefully verified in a test environment before being applied to a production instance. Customers are encouraged to engage Oracle Expert Services to review their configurations prior to implementing the recommendations in their BI Applications environments.

Note: The document is intended for experienced Oracle BI Administrators, DBAs and Applications implementers.

HARDWARE RECOMMENDATIONS FOR IMPLEMENTING ORACLE BI APPLICATIONS

Depending on source data volumes, Oracle BI Applications Version 7.9.6 implementations can be categorized as small, medium and large. The table below summarizes hardware recommendations for Oracle BI Applications tiers by the volume ranges.

Source Data Volume      SMALL: Up to 200Gb    MEDIUM: 200Gb to 1Tb    LARGE: 1Tb and higher

Target Tier
  # CPU cores           8                     16                      32*
  Physical RAM          16Gb                  32Gb                    64Gb*
  Storage Space         Up to 400Gb           400Gb - 2Tb             2Tb and higher
  Storage System:
    SMALL:  Local (PATA, SATA, iSCSI)
    MEDIUM: Local (PATA, SATA, iSCSI) with a hardware RAID controller with multiple I/O channels
    LARGE:  High performance SCSI or network attached storage; two or more I/O controllers in preferred RAID configuration recommended

Oracle BI Enterprise Edition / ETL Tier
  # CPU cores           4 - 8                 8 - 16                  16**
  Physical RAM          8Gb                   8 - 16Gb                16Gb**
  Storage Space         100Gb local           200Gb local             400Gb local

* Consider implementing Oracle RAC with multiple nodes to accommodate large numbers of concurrent users accessing web reports and dashboards.
** Consider installing two or more servers on the ETL tier and implementing Informatica Load Balancing across all ETL tier servers.

Important: It is recommended to set up all Oracle BI Applications tiers in the same local area network. Installation of any of these three tiers over a Wide Area Network (WAN) may cause timeouts during ETL mappings execution on the ETL tier.

Storage Considerations for Oracle Business Analytics Warehouse

Introduction

Oracle BI Applications ETL execution plans are optimized to maximize hardware utilization on ETL and target tiers and reduce ETL runtime. Usually a well-optimized infrastructure consumes higher CPU and memory on an ETL tier and causes rather heavy storage I/O load on a target tier during an ETL execution. The storage could easily become a major bottleneck as the result of such actions as:
• Setting excessive parallel query processes (refer to the 'Parallel Query Configuration' section for more details)
• Running multiple I/O intensive applications, such as databases, on a shared storage
• Choosing sub-optimal storage for running BI Applications tiers

Oracle positions its Exadata solution as fast and efficient hardware for addressing I/O bottlenecks in large volume environments. The internal benchmarks for running Oracle BI Applications on Exadata will be published soon.

Shared Storage Impact Benchmarks

Sharing storage among heavy I/O processes could easily degrade ETL performance and result in extended ETL runtime. The following benchmarks helped to measure the impact from sharing the same NetApp filer storage between two target databases, concurrently loading data in two parallel ETL executions.

Configuration description:
• Linux servers #1 and #2 have the following configurations:
  o 2 quad-core 1.8 GHz Intel Xeon CPUs
  o 32 GB RAM
• Shared NetApp filer volumes, volume1 and volume2, are mounted as EXT3 file systems:
  o Server #1 uses volume1
  o Server #2 uses volume2

Execution test description:
• Set record block size for I/O operations to 32k, the recommended db block size in a target database.
• Execute a parallel load using eight child processes to imitate average workload during an ETL run.
• Run the following test scenarios:
  o Test #1: execute the parallel load above on NFS volume1 using Linux server #1; keep Linux server #2 idle.
  o Test #2: execute the parallel load above on both NFS volumes, volume1 and volume2, using Linux servers #1 and #2.

The following benchmarks describe performance measurements in KB/sec:
• Initial Write: write a new file.
• Rewrite: re-write in an existing file.
• Read: read an existing file.
• Re-Read: re-read an existing file.
• Reverse Read: read a file backwards.
• Record Rewrite: write and re-write the same record in a file.
• Stride Read: read a file with a strided access behavior, for example: read at offset zero for a length of 4 Kbytes, seek 200 Kbytes, read for a length of 4 Kbytes, seek 200 Kbytes and so on.
• Random Read: read a file with accesses made to random locations in the file.
• Random Write: write a file with accesses made to random locations in the file.
• Mixed Workload: read and write a file with accesses made to random locations in the file.
• Pwrite: buffered write operation.
• Pread: buffered read operation.

The test summary:

Test Type          Test #1                Test #2
Initial write      46087.45 KB/sec        30039.63 KB/sec
Rewrite            68053.90 KB/sec        30106.70 KB/sec
Read               1765427.49 KB/sec      1724525.46 KB/sec
Re-read            1795288.17 KB/sec      1754192.78 KB/sec
Reverse Read       1783300.53 KB/sec      1755344.06 KB/sec
Stride read        3223637.25 KB/sec      3134220.30 KB/sec
Random read        2578445.05 KB/sec      2456869.34 KB/sec
Mixed workload     2837808.21 KB/sec      2704878.83 KB/sec
Random write       45778.10 KB/sec        23794.92 KB/sec
Pwrite             70104.19 KB/sec        25367.82 KB/sec
Pread              3038416.60 KB/sec      2078320.27 KB/sec
Total Time         110 min                216 min

Initial Write, Rewrite, Random Write and Pwrite (buffered write operation) were impacted the most, while Reverse Read, Stride Read, Random Read, Mixed Workload and Pread (buffered read operation) were impacted the least by the concurrent load. Read operations do not require specific RAID sync-up operations, therefore read requests are less dependent on the number of concurrent threads.

Conclusion

Make sure you carefully plan for storage deployment, configuration and usage in your Oracle BI Applications environment. Avoid sharing the same RAID controller(s) across multiple databases. Set up periodic monitoring of your I/O system during both ETL and end user queries load for any potential bottlenecks.

Source Tier

Oracle BI Applications data loads may cause additional overhead of up to fifteen percent of CPU and memory on a source tier, especially during full ETL loads. There might be a bigger impact on the I/O subsystem. Using several I/O controllers or a hardware RAID controller with multiple I/O channels on the source side would help to minimize the impact on Business Applications during ETL runs and speed up data extraction into a target data warehouse.

Oracle BI Enterprise Edition (OBIEE) / ETL Tier

Review of OBIEE/ETL Tier components

The Oracle BIEE/ETL Tier is composed of the following parts:
• Oracle Business Intelligence Server 10.1.3.4
• Informatica PowerCenter 8.6 Client
• Informatica PowerCenter 8.6 Server
• Data Warehouse Administration Console (DAC) client 10.1.3.4.1
• Data Warehouse Administration Console server 10.1.3.4.1
• Informatica BI Applications Repository (usually stored in a target database)
• DAC BI Applications Repository (usually stored in a target database)

Deployment considerations for the ETL components
• The Informatica server and DAC server should be installed on a dedicated machine for best performance. The dedicated machine can be a Unix / Linux server or a Windows server, running the Informatica and DAC servers.
• The Informatica server and DAC server cannot be installed separately on different servers.
• The Informatica server and DAC server host machine should be physically located near the source data machine to improve network performance.
• The Informatica client and DAC client can be located on an ETL Administration client machine.
• Informatica and DAC repositories can be deployed as separate schemas in the same database as Oracle Business Analytics Warehouse, if the target database platform is Oracle, IBM DB2 or Microsoft SQL Server.

Target Tier

Oracle RDBMS

Oracle recommends deploying Oracle Business Analytics Warehouse on Oracle RDBMS 64-bit, running under a 64-bit Operating System (OS). If a 64-bit OS is not available, then consider implementing Very Large Memory (VLM) on Unix / Linux and Address Windowing Extensions (AWE) on Windows 32-bit platforms. VLM / AWE implementations would increase database address space to allow for more database buffers or a larger indirect data buffer window. Refer to Oracle Metalink for VLM / AWE implementation for your platform.
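Since the 64-bit recommendation above is easy to overlook on an existing installation, the database word size can be confirmed directly from SQL*Plus; a quick sketch (the exact banner wording varies by platform and release):

```sql
-- On a 64-bit installation the BANNER line typically includes "64bit".
SELECT banner FROM v$version;
```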

Note: You cannot use the sga_target or db_cache_size parameters if you enable VLM / AWE by setting 'use_indirect_data_buffers = true'. You would have to manually resize all SGA memory components and use db_block_buffers instead of db_cache_size to specify your data cache.

ORACLE BUSINESS ANALYTICS WAREHOUSE CONFIGURATION

Database configuration parameters

Oracle Business Intelligence Applications version 7.9.6 is certified with Oracle RDBMS 10g and 11g. Since Oracle BI Applications extensively use bitmap indexes, partitioned tables, and other database features in both ETL and front-end queries logic, it is important that Oracle BI Applications customers install the latest database releases for their Data Warehouse tiers:
• Oracle 10g customers should use Oracle 10.2.0.4 or higher.
• Oracle 11g customers should use Oracle 11.1.0.7 or higher.

Important: Oracle 10.2.0.1 customers must upgrade their Oracle Business Analytics Warehouses to the latest Patchset.

Oracle BI Applications include template init.ora files with recommended and required parameters, located in the <ORACLEBI_HOME>\dwrep\Documentation\ directory:
• init10gR2.ora – init.ora template for Oracle RDBMS 10g
• init11g.ora – init.ora template for Oracle RDBMS 11g
• init11gR2.ora – init.ora template for Oracle RDBMS 11gR2

Review an appropriate init.ora template file and follow its guidelines to configure target database parameters specific to your data warehouse tier hardware.

Note: The init.ora template for Exadata / 11gR2 is provided in the Exadata section of this document.

ETL impact on amount of generated REDO Logs

Initial ETL may cause higher than usual generation of REDO logs when loading large data volumes in a data warehouse database. If your target database is configured to run in ARCHIVELOG mode, you can consider two options:
1. Switch the database to NOARCHIVELOG mode, execute the Initial ETL, take a cold backup and switch the database back to ARCHIVELOG mode.
2. Allocate up to 10-15% of additional space to accommodate for archived REDO logs during the Initial ETL.

Below is a calculation of the generated REDO amount in an internal initial ETL run:

redo log file sequence:
  start : 641  (11 Jan 21:10)
  end   : 1624 (12 Jan 10:03)
total # of redo logs : 983
log file size        : 52428800
redo generated: 983 * 52428800 = 51537510400 (48 GB)

Data loaded in warehouse:

SQL> select sum(bytes)/1024/1024/1024 Gb from dba_segments
     where owner='DWH' and segment_type='TABLE';

        Gb
----------
    280.49
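The same kind of REDO estimate can be taken from the database itself rather than from log sequence numbers; a sketch against the standard dynamic performance views (it assumes all online redo logs share one size, as in the example above):

```sql
-- Approximate daily REDO volume: log switches per day multiplied by
-- the redo log file size taken from v$log.
SELECT TRUNC(first_time) AS day,
       COUNT(*)          AS log_switches,
       ROUND(COUNT(*) * (SELECT MAX(bytes) FROM v$log)
             / 1024 / 1024 / 1024, 1) AS redo_gb
  FROM v$log_history
 GROUP BY TRUNC(first_time)
 ORDER BY day;
```

Running this after the initial ETL helps verify that the extra 10-15% archive log space allowance is sufficient for your environment.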

Oracle RDBMS System Statistics

Oracle has introduced workload statistics in Oracle 9i to gather important information about the system, such as single and multiple block read time, CPU speed, and various system throughputs. The Optimizer takes system statistics into account when it computes the cost of query execution plans. Oracle BI Applications customers are required to gather workload statistics on both source and target Oracle databases prior to running the initial ETL. Failure to gather workload statistics may result in sub-optimal execution plans for queries and ultimately impact BI Applications performance.

Oracle recommends two options to gather system statistics:
• Run the dbms_stats.gather_system_stats('start') procedure at the beginning of the workload window, and then run the dbms_stats.gather_system_stats('stop') procedure at the end of the workload window.
• Run dbms_stats.gather_system_stats('interval', interval=>N), where N is the number of minutes after which statistics gathering will be stopped automatically.

Usually half an hour is sufficient to generate the valid statistic values.

Important: Execute dbms_stats.gather_system_stats when the database is not idle. Oracle computes the desired system statistics when the database is under significant workload.

Parallel Query configuration

The Data Warehouse Administration Console (DAC) leverages the Oracle Parallel Query option for computing statistics and building indexes on target tables. By default, DAC creates indexes with the 'PARALLEL' clause and computes statistics with a pre-calculated degree of parallelism.

Important: Parallel execution is non-scalable. It could easily lead to increased resource contention, excessive temporary space consumption, creating I/O bottlenecks, and increasing response time when the resources are shared by many concurrent transactions. Since DAC creates indexes and computes statistics on target tables in parallel, both on a single table and across multiple tables, parallel execution may cause performance problems if the values of parallel_max_servers and parallel_threads_per_cpu are too high. Reduce the parallel_threads_per_cpu and parallel_max_servers values if the system is overloaded. Refer to the init.ora template files located in <ORACLEBI_HOME>\dwrep\Documentation for details on setting the following parameters:
• parallel_max_servers
• parallel_min_servers
• parallel_threads_per_cpu

The system load from parallel operations can be observed by executing the following query:

SQL> select name, value from v$sysstat where name like 'Parallel%';
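The interval option above can be exercised in a single PL/SQL call; a sketch using the 30-minute window the document suggests:

```sql
-- Start workload system statistics gathering; collection stops
-- automatically after 30 minutes.
BEGIN
  DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'INTERVAL', interval => 30);
END;
/

-- Review the collected workload values (sreadtim, mreadtim, cpuspeed, etc.).
SELECT pname, pval1 FROM sys.aux_stats$ WHERE sname = 'SYSSTATS_MAIN';
```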

Oracle Business Analytics Warehouse Tablespaces

By default, DAC deploys all data warehouse entities into two tablespaces: all tables into a DATA tablespace, and all indexes into an INDEX tablespace. Depending on your hardware configuration on the target tier, you can improve its performance by rearranging your data warehouse tablespaces. Oracle BI Applications use a large number of indexes, mostly bitmaps, and during incremental loads DAC by default drops and rebuilds indexes, so you should separate all indexes in a dedicated tablespace and, if you have multiple RAID / IO Controllers, move the INDEX tablespace to a separate controller. You may also consider isolating staging tables (_FS) and target fact tables (_F) on different controllers. Such configuration would help to speed up Target Load (SIL) mappings for fact tables by balancing I/O load on multiple RAID controllers.

The following table summarizes space allocation estimates in a data warehouse by its data volume range:

                        SMALL:           MEDIUM:            LARGE:
Target Data Volume      Up to 400Gb      400Gb to 2Tb       2Tb and higher
Temporary Tablespace    40 - 60Gb        60 - 150Gb         150 - 250Gb
DATA Tablespace         350Gb            350Gb - 1.8Tb      > 1.8Tb
INDEX Tablespace        50Gb             50 - 200Gb         > 200Gb

Note that the INDEX tablespace may increase if you enable more query indexes in your data warehouse.

Important!!! Make sure you use Locally Managed tablespaces with the AUTOALLOCATE clause. DO NOT use UNIFORM extents size, as it may cause excessive space consumption and result in slower queries performance. Use the standard (primary) block size for your warehouse tablespaces. DO NOT build your warehouse on non-standard block tablespaces.

ORACLE BI APPLICATIONS BEST PRACTICES FOR ORACLE EXADATA

Handling BI Applications Indexes in Exadata Warehouse Environment

Oracle Business Analytic Applications Suite uses two types of indexes:
• ETL indexes, for optimizing ETL performance and ensuring data integrity
• Query indexes, mostly bitmaps, for end user star queries

Exadata Storage Indexes functionality cannot be considered an unconditional replacement for BI Apps indexes. The best practices for handling BI Applications indexes in an Exadata Warehouse:
• Turn on index usage monitoring to identify any unused indexes and drop / disable them in your environment. Refer to the corresponding section in the document for more details.
• Do not drop any ETL indexes, as you may not only impact your ETL performance but also compromise data integrity in your warehouse.
• Drop selected query indexes and disable them in DAC, to use Exadata Storage Indexes / Full Table Scans, only after running comprehensive benchmarks and ensuring no impact on any other queries performance. You can employ storage indexes only in those cases when BI Applications query indexes deliver inferior performance, and you have run comprehensive tests to ensure no regressions for all other queries without the query indexes.
• Consider building custom aggregates to pre-aggregate more data and simplify queries performance.
• Consider pinning the critical target tables, such as dimensions, hierarchies, etc., in Smart Flash Cache.

Gather Table Statistics for BI Applications Tables

Out of the box, Data Warehouse Admin Console (DAC) uses the 'FOR INDEXED COLUMNS' syntax for computing BI Applications table statistics. It does not cover statistics for non-indexed columns participating in end user query joins. If you choose to drop some indexes in an Exadata environment, then there would be more critical columns with NULL statistics, and the Optimizer may choose a sub-optimal execution plan, resulting in slower performance. You should consider switching to the 'FOR ALL COLUMNS SIZE AUTO' syntax in the DBMS_STATS.GATHER_TABLE_STATS call in DAC:
1. Navigate to your <DAC_HOME>/CustomSQLs and open the customsql.xml file for editing.
2. Replace 'FOR INDEXED COLUMNS' with 'FOR ALL COLUMNS SIZE AUTO' in the DBMS_STATS.GATHER_TABLE_STATS call in the <SqlQuery name = "ORACLE_ANALYZE_TABLE" STORED_PROCEDURE = "TRUE"> section.
3. Save the changes.

Next time you run an ETL, DAC will compute the statistics for BI Applications tables for all columns.

Oracle Business Analytics Warehouse Storage Settings in Exadata
• The recommended database block size (db_block_size parameter) is 8K. You may consider using 16K block size as well, primarily for a better compression rate, as Oracle applies compression at the block level. It is NOT recommended to use non-standard block size tablespaces for deploying a production warehouse. Use your primary database block size, 8K (or 16K), for your warehouse tablespaces.
• Make sure you use locally managed tablespaces with the AUTOALLOCATE option. DO NOT use UNIFORM extent size for your warehouse tablespaces.
• Use 8Mb large extent size for partitioned fact tables and non-partitioned large segments. Setting cell_partition_large_extents = TRUE will ensure all partitioned tables get created with an INITIAL extent size of 8Mb. You will have to manually specify INITIAL and NEXT extent sizes of 8Mb for non-partitioned segments. Refer to the init.ora section below.
• Set deferred_segment_creation = TRUE to defer a segment creation until the first record is inserted.

Parallel Query Use in BI Applications on Exadata

All BI Applications tables are created without any degree of parallelism in the BI Applications schema. Since DAC manages parallel jobs, such as Informatica mappings or indexes creation, during an ETL, the use of Parallel Query in ETL mappings could generate more I/O overhead and cause performance regressions for ETL jobs. Exadata hardware provides much better scalability for I/O resources, so you can consider turning on Parallel Query for slow queries by setting the PARALLEL attribute for large tables participating in the queries. For example:

SQL> ALTER TABLE W_GL_BALANCE_F PARALLEL;
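The index usage monitoring recommended earlier in this section relies on standard Oracle commands; a sketch (the index name below is only an example):

```sql
-- Enable usage tracking for a query index before a representative
-- reporting period.
ALTER INDEX W_GL_BALANCE_F_F1 MONITORING USAGE;

-- After the period, USED = 'NO' marks a candidate for disabling in DAC.
SELECT index_name, monitoring, used, start_monitoring
  FROM v$object_usage
 WHERE index_name = 'W_GL_BALANCE_F_F1';

-- Stop tracking once the sample window is complete.
ALTER INDEX W_GL_BALANCE_F_F1 NOMONITORING USAGE;
```

Note that v$object_usage only reports indexes owned by the connected schema, so run the check as the warehouse owner.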

Exadata Smart Flash Cache
The use of Smart Flash Cache in Oracle Business Analytics Warehouse can significantly improve end user query performance. You can consider pinning the most frequently used critical tables which impact your query performance, such as dimensions and hierarchies. To manually pin a table in Exadata Smart Flash Cache, use the following syntax:
SQL> ALTER TABLE W_PARTY_D STORAGE (CELL_FLASH_CACHE KEEP);
The Exadata Storage Server will cache data for the W_PARTY_D table more aggressively and will try to keep the data from this table longer than cached data from other tables.
Important!!! Use manual Flash Cache pinning only for the most common critical tables. Make sure you benchmark the query performance prior to implementing the changes in your Production environment.

Compression Implementation in Oracle Business Analytics Warehouse on Exadata
Table compression can significantly reduce a segment size and improve query performance in an Exadata environment. However, depending on the nature of the DML operations in ETL mappings, it may result in slower mapping performance and larger consumed space. The following guidelines will help to ensure a successful compression implementation in your Exadata environment:
•	Consider implementing compression after running an Initial ETL. The initial ETL plan contains several mappings with heavy updates, which could impact your ETL performance.
•	Implement large fact table partitioning and compress inactive historic partitions only. Make sure that the active ones remain uncompressed.
•	Choose either Basic or Advanced compression types for your compression candidates.
•	Review periodically the allocated space for a compressed segment, and check such stats as num_rows, blocks and avg_row_len in the user_tables view. For example, the following compressed segment needs to be re-compressed, as it consumes too many blocks:
	Num_rows: 541823382   Avg_row_len: 181   Blocks: 13837818   Compression: ENABLED
The simple calculation (num_rows * avg_row_len / 8K block size) plus ~25% block overhead gives ~15M blocks for an uncompressed segment. This segment should be re-compressed to reduce its footprint and improve its queries' performance.
Refer to the Table Compression Implementation Guidelines section in this document for additional information on compression for BI Applications Warehouse.
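The recompression check above is simple arithmetic; the sketch below (helper names are ours, the ~25% block overhead factor is the one quoted in the text) applies it to the sample user_tables figures:

```python
def estimated_uncompressed_blocks(num_rows, avg_row_len, block_size=8192, overhead=0.25):
    """Estimate blocks for an uncompressed segment: raw row data plus ~25% block overhead."""
    return int(num_rows * avg_row_len / block_size * (1 + overhead))

def needs_recompression(num_rows, avg_row_len, blocks, block_size=8192, threshold=0.5):
    """Flag a compressed segment whose actual block count is close to the uncompressed
    estimate, i.e. compression is no longer effective. The 0.5 cutoff is an
    illustrative choice, not a figure from this note."""
    return blocks > threshold * estimated_uncompressed_blocks(num_rows, avg_row_len, block_size)

# Figures from the example above (num_rows, avg_row_len, blocks from user_tables):
print(estimated_uncompressed_blocks(541823382, 181))   # ~15M blocks, matching the text
print(needs_recompression(541823382, 181, 13837818))   # True: 13.8M used vs ~15M estimated
```

A segment flagged this way is a candidate for an ALTER TABLE ... MOVE COMPRESS style rebuild, after benchmarking.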
Database Parameter File for Analytics Warehouse on Exadata
Use the template file below for your init.ora parameter file for Business Analytics Warehouse on Oracle Exadata.

###########################################################################
#
# Oracle BI Applications - init.ora template
# This file contains a listing of init.ora parameters for 11.2 / Exadata
#
###########################################################################
db_name                        = <database name>
control_files                  = /<dbf file loc>/ctrl01.dbf, /<dbf file loc>/ctrl02.dbf
db_block_size                  = 8192        # or 16384 (for better compression)
db_block_checking              = FALSE
db_block_checksum              = TYPICAL
cell_partition_large_extents   = TRUE
deferred_segment_creation      = TRUE
user_dump_dest                 = /<DUMP_HOME>/admin/<dbname>/udump
background_dump_dest           = /<DUMP_HOME>/admin/<dbname>/bdump
core_dump_dest                 = /<DUMP_HOME>/admin/<dbname>/cdump
max_dump_file_size             = 20480
processes                      = 1000
sessions                       = 2000
db_files                       = 1024
session_max_open_files         = 100
dml_locks                      = 1000
cursor_sharing                 = EXACT
cursor_space_for_time          = FALSE
session_cached_cursors         = 500
open_cursors                   = 1000
db_writer_processes            = 2
aq_tm_processes                = 1
job_queue_processes            = 2
timed_statistics               = true
statistics_level               = typical
sga_max_size                   = 45G
sga_target                     = 40G
shared_pool_size               = 2G
shared_pool_reserved_size      = 100M
workarea_size_policy           = AUTO
pre_page_sga                   = FALSE
pga_aggregate_target           = 16G
log_checkpoint_timeout         = 3600
log_checkpoints_to_alert       = TRUE
log_buffer                     = 10485760
undo_management                = AUTO
undo_tablespace                = UNDOTS1
undo_retention                 = 90000
parallel_adaptive_multi_user   = FALSE
parallel_max_servers           = 128
parallel_min_servers           = 32

# ------------------ MANDATORY OPTIMIZER PARAMETERS ------------------
star_transformation_enabled    = TRUE
query_rewrite_enabled          = TRUE

query_rewrite_integrity        = TRUSTED
_b_tree_bitmap_plans           = FALSE
_optimizer_autostats_job       = FALSE

INFORMATICA CONFIGURATION FOR BETTER PERFORMANCE

Informatica PowerCenter 8.6: 32-bit vs. 64-bit
A 32-bit OS can address only 2^32 bytes, or four gigabytes, of memory, and allows a maximum of two gigabytes for any single application. Oracle BI Applications ETL mappings use complex Informatica transformations, such as lookups, which are cached in memory. Additionally, BI Applications ETL execution plans employ parallel mappings execution, and their performance is heavily impacted by data from incremental extracts and high watermark warehousing volumes. So a 32-bit ETL tier can quickly exhaust the available memory and end up with very expensive I/O paging and swapping operations, causing a rather dramatic regression in ETL performance.

In contrast, Informatica 64-bit takes advantage of more physical RAM for performing complex transformations in memory, eliminating costly disk I/O operations. Informatica PowerCenter 8.6 provides true 64-bit performance and the ability to scale, because no intermediate staging or hashing files on disk are required for processing. The internal BI Applications ETL benchmarks for Informatica 8.6 32-bit vs. 64-bit showed at least two times better throughput for the 64-bit configuration. Oracle Business Intelligence Applications customers are strongly encouraged to use the Informatica 8.6 64-bit version for Medium and Large environments.

Informatica Session Logs
Oracle BI Applications 7.9.6 uses Informatica PowerCenter 8.6, which has improved log reports, including the detailed percentage run time, idle time, etc. Each session log provides detailed information about transformations as well as a summary of the mapping execution. Below is an example of the execution summary from an Informatica session log:

***** RUN INFO FOR TGT LOAD ORDER GROUP [1], CONCURRENT SET [1] *****
Thread [READER_1_1_1] created for [the read stage] of partition point [Sq_W_CUSTOMER_LOC_USE_DS] has completed.
   Total Run Time = [561.843748] secs
   Total Idle Time = [322.453112] secs
   Busy Percentage = [42.464472]
Thread [TRANSF_1_1_1] created for [the transformation stage] of partition point [Sq_W_CUSTOMER_LOC_USE_DS] has completed.
   Total Run Time = [559.812502] secs
   Total Idle Time = [348.109055] secs
   Busy Percentage = [37.755389]
   Thread work time breakdown:
      Fil_W_CUSTOMER_LOC_USE_D: 2.105263 percent
      Exp_Get_Integration_Id: 2.105263 percent
      mplt_Get_Etl_Proc_Wid.LKP_ETL_PROC_WID: 20.000000 percent
      mplt_Get_Etl_Proc_Wid.EXP_Constant_for_Lookup: 1.052632 percent
      mplt_Get_Etl_Proc_Wid.Exp_Decide_Etl_Proc_Wid: 3.157895 percent
      mplt_SIL_CustomerLocationUseDimension.Exp_Scd2_Dates: 44.210526 percent
      mplt_SIL_CustomerLocationUseDimension.Exp_W_CUSTOMER_LOC_USE_D_Transform: 3.684211 percent
      Exp_W_CUSTOMER_LOC_USE_D_Update_Flg: 10.526316 percent
      Lkp_W_CUSTOMER_LOC_USE_D: 13.684211 percent
Thread [WRITER_1_*_1] created for [the write stage] of partition point [W_CUSTOMER_LOC_USE_D] has completed.
   Total Run Time = [559.171875] secs
   Total Idle Time = [0.000000] secs
   Busy Percentage = [100.000000]

Busy Percentage for a single thread cannot be considered an absolute measure of performance for a whole mapping; all thread statistics must be reviewed together. Informatica computes it for a single thread in a mapping as follows:

Busy Percentage = (Total Run Time – Total Idle Time) / Total Run Time

•	If the report log shows a high Busy Percentage (> 70-80%) for the READER thread, then you may need to review the mapping's Reader Source Qualifier query for any performance bottlenecks.
•	If the report shows a high Busy Percentage (> 60-70%) for the TRANSF thread, then you need to review the detailed transformations execution summary, including each lookup's percentage runtime, and identify the most expensive transformation. In the example above, the transformation "mplt_SIL_CustomerLocationUseDimension.Exp_Scd2_Dates" consumes 44.2% of all TRANSF runtime, so it may be considered a candidate for investigation.
•	If the report shows a high Busy Percentage for the WRITER thread, it may not necessarily be a performance bottleneck. The log above shows that most probably the mapping is well balanced between the Reader and Transformation threads, and it keeps the Writer busy with inserts. However, for a mapping which loads very large data in bulk mode, you may want to turn off Bulk Mode. Refer to the section "Informatica Load: Bulk vs. Normal" for more details.

Informatica Lookups
Too many Informatica Lookups in an Informatica mapping may cause a significant performance slowdown. If the Lookup data is small, it can be stored in memory and the transformation processes the rows very fast. But if the Lookup data is very large (typically over 20M rows), the lookup cache may not fit into the allocated memory; additionally, Informatica takes more time to build such large lookups. As a result, such lookup transformations adversely affect the overall mapping performance. Review the guidelines below for handling Informatica Lookups in Oracle Business Intelligence Applications mappings:
•	Inspect Informatica session logs for the number of lookups. If a Reader Source Qualifier query is not a bottleneck in a slow mapping, and the mapping is overloaded with lookups, review the detailed transformations execution summary to identify the most expensive lookups.
•	Check the "Lookup table row count" and "Lookup cache row count" numbers for each Lookup Transformation. If the Lookup table row count is too high, consider reducing the lookup row count by adding more constraining predicates to the lookup query WHERE clause, so Informatica will cache a smaller subset in its Lookup Cache.
•	If functional logic permits, consider pushing lookups with row counts less than two million into the Reader SQL as OUTER JOINs. Such an update would slow down the Reader SQL execution, but it might improve the overall mapping's performance.
•	If you identify a very large lookup with a row count of more than 15-20 million, and constraining the large lookup is not possible, then consider disabling the lookup cache.
•	Important: Some lookups could be reusable within a mapping or across multiple mappings, so they cannot be constrained or pushed down into Reader queries. Consult Oracle Development prior to re-writing Oracle Business Intelligence Applications mappings.
•	Make sure you test the changes to avoid functional regressions before implementing optimizations in your production environment.

Disabling Lookup Cache for very large Lookups
Informatica uses the Lookup cache to store lookup data on the ETL tier in flat files (dat and idx). The Integration Service builds the cache in memory when it processes the first row of data in a cached Lookup Transformation. Depending on the processed data volumes, a very large lookup whose cache cannot fit into the allocated memory has to be paged in and out many times during a single session, causing significant performance overhead on the ETL tier. To disable the cache for such a lookup, connect to Informatica Workflow Manager, open the session properties, and find the desired transformation in the Transformations folder on the Mapping tab. Then uncheck the Lookup Cache Enabled property and save the session.
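The per-thread figures above reduce to simple arithmetic. A small sketch combining the Busy Percentage formula with the reader/transformation thresholds discussed above (the cutoffs follow the rough 70-80% / 60-70% ranges in the text; the advice strings are our illustration):

```python
def busy_percentage(total_run_time, total_idle_time):
    """Busy Percentage = (Total Run Time - Total Idle Time) / Total Run Time, in percent."""
    return (total_run_time - total_idle_time) / total_run_time * 100.0

def triage(reader_busy, transf_busy, writer_busy):
    """Suggest where to look first, per the thread guidelines above."""
    hints = []
    if reader_busy >= 70:
        hints.append("READER: review the Reader Source Qualifier query")
    if transf_busy >= 60:
        hints.append("TRANSF: review the transformation breakdown and lookup runtimes")
    if writer_busy >= 95 and not hints:
        hints.append("WRITER: not necessarily a bottleneck; for large bulk loads consider Normal mode")
    return hints or ["mapping looks balanced between threads"]

# Values in the spirit of the sample log: READER ~42%, TRANSF ~38%, WRITER 100%
reader = busy_percentage(561.84, 322.45)
transf = busy_percentage(559.81, 348.11)
writer = busy_percentage(559.17, 0.0)
print(triage(reader, transf, writer))
```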

Disabling the lookup cache for heavy lookups will help to avoid excessive paging on the ETL tier. When the lookup cache is disabled, the Integration Service issues a select statement against the lookup source database to retrieve lookup values for each row coming from the Reader thread; it does not store any data in flat files on the ETL tier. Remember that Informatica would fire the lookup query for every record from its Reader thread. Disabling the lookup cache may work faster for very large lookups under the following conditions:
•	The lookup query must use an index access path; otherwise data retrieval would be very expensive on the lookup source database tier. Consider creating an index on all columns which are used in the lookup query. The Oracle Optimizer would then choose INDEX FAST FULL SCAN to retrieve the lookup values from index blocks rather than scanning the whole table. Check the explain plan for the lookup query to ensure the index access path.
•	The issued lookup query uses bind variables, so it is parsed only once in the lookup source database.
•	Make sure you test the modified mapping with the selected disabled lookups in a test environment and benchmark its performance prior to implementing the change in the production system.

Joining Staging Tables to Lookup Tables in Informatica Lookups
If you identify bottlenecks with lookups having very large rowcounts, you can consider constraining them by updating the Lookup queries and joining to a staging table used in the mapping. As a result, Informatica will execute the lookup query and cache many fewer rows, and speed up the row processing on its Transformation thread. For example, the original query for Lkp_W_PARTY_D_With_Geo_Wid:

SELECT DISTINCT
    W_PARTY_D.ROW_WID as ROW_WID,
    W_PARTY_D.GEO_WID as GEO_WID,
    W_PARTY_D.INTEGRATION_ID as INTEGRATION_ID,
    W_PARTY_D.DATASOURCE_NUM_ID as DATASOURCE_NUM_ID,
    W_PARTY_D.EFFECTIVE_FROM_DT as EFFECTIVE_FROM_DT,
    W_PARTY_D.EFFECTIVE_TO_DT as EFFECTIVE_TO_DT
FROM W_PARTY_D

can be modified to:

SELECT DISTINCT
    W_PARTY_D.ROW_WID as ROW_WID,
    W_PARTY_D.GEO_WID as GEO_WID,
    W_PARTY_D.INTEGRATION_ID as INTEGRATION_ID,
    W_PARTY_D.DATASOURCE_NUM_ID as DATASOURCE_NUM_ID,
    W_PARTY_D.EFFECTIVE_FROM_DT as EFFECTIVE_FROM_DT,
    W_PARTY_D.EFFECTIVE_TO_DT as EFFECTIVE_TO_DT
FROM W_PARTY_D, W_RESPONSE_FS
WHERE W_PARTY_D.INTEGRATION_ID = W_RESPONSE_FS.PARTY_ID
  AND W_PARTY_D.DATASOURCE_NUM_ID = W_RESPONSE_FS.DATASOURCE_NUM_ID

Such a change ensured the lookup row count dropped from > 22M to 180K and helped to improve the mapping performance.

Informatica Custom Relational Connections for long running mappings
If you plan to summarize very large volumes of data (usually over 100 million records), you could speed up the large data ETL mappings by turning off automated PGA structures allocation and setting the SORT and HASH areas manually for the selected sessions. This approach can be applied selectively to both initial and incremental mappings, after thorough benchmarks.

To speed up such ETL mappings execution, set sort_area_size and hash_area_size to higher values. If you have limited system memory, you can increase only the sort_area_size, as sorting operations for aggregate mappings are more memory intensive; hash joins involving bigger tables can still perform better with a smaller hash_area_size.

Follow the steps below to create a new Relational Connection with custom session parameters in Informatica:
1. Open Informatica Workflow Manager and navigate to Connections -> Relational -> New
2. Define a new Target connection 'DataWarehouse_Manual_PGA'
3. Use the same values as in the 'DataWarehouse' connection
4. Click on 'Connection Environment SQL' and insert the following commands:
   alter session set workarea_size_policy = manual;
   alter session set sort_area_size = 1000000000;
   alter session set hash_area_size = 2000000000;
5. Save the changes.
Repeat the same steps to define another custom Relational connection to your Oracle Source database.

Each mapping that is a candidate to use the custom Relational connections should meet the requirements below:
•	The mapping doesn't use heavy transformations on the ETL tier
•	The Reader query joins very large tables
•	Its Reader query execution plan uses HASH JOINs

Connect to Informatica Workflow Manager and complete the following steps for each identified mapping:
1. Open the session in Task Developer
2. Click on the 'Mapping' tab
3. Select 'Connections' in the left pane
4. Select the defined Custom value for the Source or Target connection
5. Save the changes.

Informatica Session Parameters
There are three major properties, defined in Informatica Workflow Manager for each session, which impact Informatica mappings performance.

Commit Interval
The target-based commit interval determines the commit points at which the Integration Service commits data writes in the target database. The larger the commit interval, the better the overall mapping's performance. However, too large a commit interval may cause database logs to fill and result in session failure. The recommended range for commit intervals is from 10,000 up to 200,000. Oracle BI Applications Informatica mappings have the default setting of 10,000.

DTM Buffer Size
The DTM Buffer Size specifies the amount of memory the Integration Service uses for DTM buffer memory. Informatica uses DTM buffer memory to create the internal data structures and buffer blocks used to bring data into and out of the Integration Service.

Additional Concurrent Pipelines for Lookup Cache Creation
The Additional Concurrent Pipelines for Lookup Cache Creation parameter defines the concurrency for lookup cache creation. Oracle BI Applications Informatica mappings have the default setting 0. You can reduce lookup cache build time by enabling parallel lookup cache creation, setting the value to larger than one. However, since Oracle BI Applications execution plans already take advantage of parallel workflow execution, enabling concurrent lookup cache creation may result in additional overhead on a target database and longer execution time. You can consider turning on lookup cache creation concurrency when you have one or two long running mappings which are overloaded with lookups. Important: Make sure you carefully analyze long running mapping bottlenecks before turning on lookup cache build concurrency in your production environment, as it may cause performance regressions for your sessions.

Default Buffer Block Size
The buffer block size specifies the amount of buffer memory used to move a block of data from the source to the target. Oracle BI Applications Informatica mappings have the default setting 128,000. Avoid using the 'Auto' value for Default Buffer Block Size. The internal tests showed better performance for both Initial and Incremental ETL with Default Buffer Block Size set to 512,000 (512K). You can run the following SQL to update the Buffer Block Size to 512K for all mappings in your Informatica repository:
SQL> update opb_cfg_attr set attr_value='512000' where attr_value='128000' and attr_id = 5;
SQL> commit;
Important: Make sure you test the changes in your development repository and benchmark ETL performance before making changes to your production environment.

Informatica Load: Bulk vs. Normal
The Informatica writer thread may become a bottleneck in some mappings that use bulk mode to load very large volumes (>200M rows) into a data warehouse. The analysis of a trace file from a Writer database session shows that Informatica uses direct path inserts to load data in Bulk mode. Even though a direct path insert bypasses the database block cache, the database session performs two direct path writes to insert each new portion of data, and every time Oracle scans for 12 contiguous blocks in the target table to perform a new write transaction. As the table grows larger, it takes longer and longer to scan the segment for chunks of 12 contiguous blocks, so the Informatica Writer thread may slow down the mapping's overall performance. To determine whether your mapping, which loads very large data in bulk mode, slows down because of the writer thread, open its Informatica session log and compute the time to write the same set of blocks (usually 10,000) at the beginning and at the end of the log. If you observe a significant increase in the writer execution time at the end of the log, then you should consider either increasing the commit size for the mapping or changing the session load mode from Bulk to Normal in Informatica Workflow Manager.

Informatica Bulk Load: Table Fragmentation
Informatica Bulk Load for very large volumes may not only slow down the mapping performance but also cause significant table fragmentation. The internal tests showed that the commit size for Normal load did not affect the number of allocated extents for one million rows loaded into the W_RESPONSE_F fact, used in the internal benchmarks.
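The writer check described in the Bulk vs. Normal discussion above can be sketched as follows (a hypothetical helper; in practice you would first parse the block-write timestamps out of the session log, and the 2x cutoff is our illustrative choice):

```python
def writer_slowdown(first_window_secs, last_window_secs, factor=2.0):
    """Compare how long the writer took for the same number of blocks
    (e.g. 10,000) at the beginning vs. the end of the session log.
    Returns True when the end-of-log window is 'factor' times slower."""
    return last_window_secs > factor * first_window_secs

# 10,000 blocks took 14s early in the log but 95s near the end: consider a
# larger commit size, or switch the session from Bulk to Normal load.
print(writer_slowdown(14.0, 95.0))   # True
```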

However, for the Bulk Load, the number of extents increased rather significantly as the commit size went down. The commit size also affected the mapping performance for both Normal and Bulk load, and the drop in throughput was more significant for the latter scenario. The table below shows the number of extents (ext) and throughput in rows per second (rps) for each tested scenario:

Informatica Load type      Normal mode           Bulk mode
1M commit                  80 ext / 34K rps      80 ext / 55.5K rps
100K commit                80 ext / 33K rps      190 ext / 55.5K rps
10K commit                 80 ext / 30K rps      200 ext / 37K rps
1K commit                  80 ext / 27K rps      960 ext / 8K rps
10 rows commit             80 ext / 14K rps      > 5K ext (out of space) / 600 rps

Important!!! To ensure bulk load performance and avoid or minimize target table fragmentation, make sure you set a larger commit size in Informatica mappings.

Use of NULL Ports in Informatica Mappings
The use of connected or disconnected ports with hard-coded NULL values in Informatica mappings can be yet another reason for slower ETL mapping performance. The internal tests demonstrated that Informatica treats NULL and non-NULL values equally and allocates critical resources for processing NULL ports. It also includes NULL values in the INSERT statements executed by the WRITER thread on the data warehouse tier. The session CPU time grows nearly proportionally to the number of connected ports, and so does the row width. As soon as a certain threshold of ports is reached, the internal Informatica session processing for wide mappings becomes even more complex, and its execution runtime slows down dramatically. The internal study showed that such mapping performance can drop two times or even more, depending on the number of NULL ports, and the performance gap becomes larger when more ports are used in a mapping. While processing large data volumes, the overall ETL execution would then take a much longer time to complete. To ensure effective performance of Informatica mappings:
•	Avoid using NULL ports in Informatica transformations.
•	Review slow mappings for NULL ports or any other potentially redundant ports, which could be eliminated.
•	Try to keep the total number of ports no greater than 50 per mapping.

Informatica Parallel Sessions Load on ETL tier
Informatica mappings with complex transformations and heavy lookups typically consume larger amounts of memory during ETL execution. While processing large data volumes and executing in parallel, such mappings may easily overload the ETL server and cause very heavy memory swapping and paging. To avoid such potential bottlenecks:
•	Consider implementing the Informatica 64-bit version on your ETL tier.
•	Ensure you have enough physical memory on your ETL tier server. Refer to the Hardware Recommendations section for more details.
•	Set a smaller number of connections to the Informatica Integration Service in DAC. Navigate to DAC's Setup screen -> Informatica Servers tab -> Maximum Sessions in the lower pane for both Informatica and Repository connections. The recommended range is from 5 to 10 sessions. Keep in mind that too many Informatica sessions running in parallel may overload either the source or the target database.
•	Benchmark your ETL performance in your test environment prior to implementing the change in the production system.

Informatica Load Balancing Implementation
To improve the performance on the ETL tier, consider implementing Informatica Load Balancing to balance the Informatica load across multiple ETL tiers and speed up mapping execution. You can register one or more Informatica servers and the Informatica Repository Server in DAC and specify the number of workflows that can be executed in parallel. The DAC server automatically load balances across the servers; it does not run more sessions than the value specified for each of them.

To implement Informatica Load Balancing in DAC, perform the following steps:
1. Configure the database connection information in Informatica Workflow Manager. Refer to the section Process of Configuring the Informatica Repository in Workflow Manager in the publication Oracle Business Intelligence Applications Installation Guide for Informatica PowerCenter Users.
2. Register the additional Informatica Server(s) in DAC. Refer to the section Registering Informatica Servers in the DAC Client in the publication Oracle Business Intelligence Applications Installation Guide for Informatica PowerCenter Users.

Important: Deploying multiple Informatica domains and repository services on different server nodes would cause additional maintenance overhead: any repository updates or configuration changes, performed on one node, must be replicated across all the participating nodes in the multiple domains configuration. To minimize the overhead from Informatica repositories maintenance, consider the load balancing implementation below:
•	Configure a single Informatica domain and deploy a single PowerCenter Repository service in it.
•	Create Informatica services on each Informatica node and subscribe them to the single domain.

BITMAP INDEXES USAGE FOR BETTER QUERIES PERFORMANCE

Introduction
Oracle Business Intelligence Applications Version 7.9.0 introduced the use of the Bitmap Index feature of the Oracle RDBMS. Bitmap indexes provide significant performance improvements on data warehouse star queries, and the internal benchmarks showed performance gains when B-Tree indexes on the foreign keys and attributes were replaced with bitmap indexes. Although bitmap indexes improve star query response times, their use may cause ETL performance degradations in both Oracle 10g and 11g. In comparison with B-Tree indexes, the quality of existing bitmap indexes may degrade as more updates, deletes, and inserts are performed with the indexes in place, making such indexes less effective unless they are rebuilt. Conversely, dropping all bitmap indexes on a large table prior to an ETL run and then recreating them after the ETL completion may be quite expensive and time consuming. This is especially the case when there are a large number of such indexes, or when there is little change expected in the number of records updated or inserted into a table during each ETL run. This section reviews the index processing behavior of the DAC and provides recommendations for bitmap index handling during ETL runs.

DAC properties for handling bitmap indexes during ETL
DAC handles the same indexes differently for initial and incremental ETL runs. Prior to an initial load in a data warehouse, there are no indexes created on the tables except for the unique B-Tree indexes that preserve data integrity. During the initial ETL run, DAC will create ETL indexes on a loaded table, which will be required for faster execution of the subsequent mappings. For an incremental ETL run, DAC's index handling will vary based on the combination of several DAC properties and individual index usage settings.

The following table summarizes the list of parameters used to handle indexes during ETL runs:

Parameter: Drop/Create Indices
Type: Execution Plan     Values: Y | N     Default: Y
Effect: DAC will drop all indexes on a target table, truncated before a load, and then recreate them after loading the table.
- Initial ETL: Y – all indexes, irrespective of any other settings, will be dropped and created; N – no indexes will be dropped during an initial ETL.
- Incremental ETL: Y – indexes with Always Drop & Create or Always Drop & Create Bitmap set to Y will be dropped during an incremental ETL; N – no indexes will be dropped during an incremental ETL.
Important: When set to N, this parameter overrides all other index level properties, and no indexes will be dropped. DB2/390 customers may want to set it to N. The recommended default value for other platforms is Y, unless you are executing a micro ETL, in which case it would be too expensive to drop and create all indexes, so the value should be changed to N. Setting it to N is used mostly in small Execution plans.

Parameter: Always Drop & Create
Type: Index     Values: Y | N     Default: N/A
Effect: An index specific property, applicable to all indexes. Y – the index will be dropped prior to an ETL run and re-created after loading the table; N – the index will not be dropped in an incremental ETL run only. The index property Always Drop & Create does not override the Drop/Create Indices execution plan property if the latter is set to N. If an index is inactivated in DAC, the index will not be dropped and recreated during subsequent ETL runs.

Parameter: Always Drop & Create Bitmap
Type: Index     Values: Y | N     Default: N/A
Effect: Same as Always Drop & Create, but applicable to bitmap indexes only. The property applies to the Oracle data warehouse platform only. Y – a bitmap index will be dropped prior to an ETL run; N – a bitmap index will not be dropped in an incremental ETL run only. The index property Always Drop & Create Bitmap does not override the Drop/Create Indices execution plan property if the latter is set to N.

Parameter: Index Usage
Type: Index     Values: ETL | QUERY     Default: N/A
Effect: ETL – the index is required to improve the performance of subsequent ETL mappings; DAC drops ETL indexes on a table if it truncates the table before the load, and re-creates them after loading the table, since the indexes will be used to speed up subsequent mappings. QUERY – the index is required to improve web queries performance.
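As we read the property table, the settings combine as follows for the drop decision. A sketch (the function and argument names are ours, not DAC's, and this models only the drop/recreate logic described above):

```python
def index_dropped(etl_type, drop_create_indices, always_drop_create=False,
                  is_bitmap=False, always_drop_create_bitmap=False, inactive=False):
    """Return True if DAC would drop (and later recreate) an index, per the table above.
    Plan-level Drop/Create Indices = 'N' overrides the index-level properties;
    inactive indexes are never dropped or recreated."""
    if inactive or drop_create_indices != 'Y':
        return False
    if etl_type == 'initial':
        return True   # Y drops and creates all indexes irrespective of other settings
    # incremental ETL: only indexes flagged Always Drop & Create (Bitmap) are dropped
    if is_bitmap:
        return always_drop_create_bitmap or always_drop_create
    return always_drop_create

print(index_dropped('initial', 'Y'))                                            # True
print(index_dropped('incremental', 'Y', is_bitmap=True,
                    always_drop_create_bitmap=True))                            # True
print(index_dropped('incremental', 'N', always_drop_create=True))               # False
```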

Parameter: Verify And Create Non-Existing Indices
Type: System     Values: True | False     Default: False
Effect: True – the DAC server will verify that all indexes defined in the DAC repository are created in the target database. False – DAC will not run any reconciliation checks between its repository and the target database. This parameter is useful when the current execution plan has Drop/Create Indices set to True, and new indexes have been created in the DAC repository since the last ETL run.

Parameter: Num Parallel Indexes per Table
Type: Physical Data Source     Values: Number     Default: 1
Effect: This parameter, available in DAC 10.1.3.4.1, specifies the maximum number of indexes that the DAC server will create in parallel for a single table.

Bitmap Indexes handling strategies
Review the following recommendations for effective bitmap index management in your environment.
1. Disable redundant bitmap indexes in DAC. Pre-packaged Oracle BI Applications releases include bitmap indexes, created and maintained as part of ETL runs, even though some of the indexed columns might not be used in filtering conditions in the Oracle BI Server repository. Reducing the number of redundant bitmap indexes is an essential step for improving initial and incremental loads, especially for dimension and lookup tables.
- To identify the list of the exposed columns, included in filtering conditions in the RPD repository, connect to the BI Server Administration Tool and generate the list of dependencies for each column using the Query Repository and Related To features.
- To identify all enabled BITMAP indexes on a table in the DAC metadata repository: log in to your repository through the DAC user interface, click on the Design button under the top menu, select your custom container in the pull down menu, and select the Indices tab in the right pane. Click the Query sub-tab, enter the Table name, check the 'Is Bitmap' box in the query row, and click Go.
- To disable the identified redundant indexes in DAC and drop them in the Data Warehouse: check the Inactive checkbox against the indexes which should be permanently dropped in the target schema, rebuild the DAC execution plan, then connect to your target database schema and drop the disabled indexes.
2. Decide whether to drop or keep bitmap indexes during incremental loads. Analyze the total time spent building indexes and computing statistics during an incremental run. You can connect to your DAC repository and execute the following queries:

SQL> alter session set nls_date_format='DD-MON-YYYY:HH24:MI:SS';

-- Identify your ETL Run and put its ROW_WID into the subsequent queries:
select ROW_WID,
       NAME ETL_RUN,
       EXTRACT(DAY    FROM (END_TS - START_TS) DAY TO SECOND) || ' days ' ||
       EXTRACT(HOUR   FROM (END_TS - START_TS) DAY TO SECOND) || ' hrs '  ||
       EXTRACT(MINUTE FROM (END_TS - START_TS) DAY TO SECOND) || ' min '  ||
       EXTRACT(SECOND FROM (END_TS - START_TS) DAY TO SECOND) || ' sec' PLAN_RUN_TIME
from W_ETL_DEFN_RUN
order by START_TS DESC;

-- Identify your custom Execution Plan Name:
SELECT DISTINCT app.row_wid
FROM w_etl_defn_run run, w_etl_app app, w_etl_defn_prm prm
WHERE prm.etl_defn_wid = run.etl_defn_wid
AND prm.app_wid = app.row_wid
AND run.row_wid = '<Unique ETL ID from the first query>';

-- Indexes build time:
SELECT ref_idx.idx_name, ref_idx.tbl_name table_name, stp.step_name,
       sdtl.start_ts start_time, sdtl.end_ts end_time,
       EXTRACT(DAY    FROM (sdtl.end_ts - sdtl.start_ts) DAY TO SECOND) || ' days ' ||
       EXTRACT(HOUR   FROM (sdtl.end_ts - sdtl.start_ts) DAY TO SECOND) || ' hrs '  ||
       EXTRACT(MINUTE FROM (sdtl.end_ts - sdtl.start_ts) DAY TO SECOND) || ' min '  ||
       EXTRACT(SECOND FROM (sdtl.end_ts - sdtl.start_ts) DAY TO SECOND) || ' sec' idx_bld_time
FROM w_etl_defn_run def,
     w_etl_run_step stp,
     w_etl_run_sdtl sdtl,
     (SELECT ind.row_wid, ind.name idx_name, tbl.name tbl_name
      FROM w_etl_index ind, w_etl_obj_ref ind_ref, w_etl_obj_ref tbl_ref, w_etl_table tbl
      WHERE ind_ref.obj_type = 'W_ETL_INDEX'
      AND ind_ref.soft_del_flg = 'N'
      AND ind_ref.app_wid = '<Your custom Execution Plan Name from the second query>'
      AND ind_ref.obj_wid = ind.row_wid
      AND tbl_ref.obj_type = 'W_ETL_TABLE'
      AND tbl_ref.soft_del_flg = 'N'
      AND tbl_ref.app_wid = '<Your custom Execution Plan Name from the second query>'
      AND tbl_ref.obj_wid = tbl.row_wid
      AND ind.table_wid = tbl.row_wid
      AND ind.inactive_flg = 'N') ref_idx
WHERE def.row_wid = '<Unique ETL ID from the first query>'
AND def.row_wid = stp.run_wid
AND sdtl.run_step_wid = stp.row_wid
AND sdtl.type_cd = 'Create Index'
AND sdtl.index_wid = ref_idx.row_wid
AND ref_idx.tbl_name = 'W_OPTY_D'
ORDER BY sdtl.start_ts DESC;

-- Table Stats computing time:
SELECT tbl.name table_name, sdtl.start_ts, sdtl.end_ts,
       EXTRACT(DAY    FROM (sdtl.end_ts - sdtl.start_ts) DAY TO SECOND) || ' days ' ||
       EXTRACT(HOUR   FROM (sdtl.end_ts - sdtl.start_ts) DAY TO SECOND) || ' hrs '  ||
       EXTRACT(MINUTE FROM (sdtl.end_ts - sdtl.start_ts) DAY TO SECOND) || ' min '  ||
       EXTRACT(SECOND FROM (sdtl.end_ts - sdtl.start_ts) DAY TO SECOND) || ' sec' tbl_stats_time
FROM w_etl_defn_run def, w_etl_run_step stp, w_etl_run_sdtl sdtl, w_etl_table tbl
WHERE def.row_wid = '<Unique ETL ID from the first query>'
AND def.row_wid = stp.run_wid
AND sdtl.run_step_wid = stp.row_wid
AND sdtl.type_cd = 'Analyze Table'
AND sdtl.table_wid = tbl.row_wid
ORDER BY sdtl.start_ts DESC;

-- Informatica jobs for the selected ETL run:
SELECT sdtl.name session_name, sdtl.start_ts, sdtl.end_ts,
       sdtl.sucess_rows, sdtl.failed_rows, sdtl.read_thruput, sdtl.write_thruput,
       EXTRACT(DAY    FROM (sdtl.end_ts - sdtl.start_ts) DAY TO SECOND) || ' days ' ||
       EXTRACT(HOUR   FROM (sdtl.end_ts - sdtl.start_ts) DAY TO SECOND) || ' hrs '  ||
       EXTRACT(MINUTE FROM (sdtl.end_ts - sdtl.start_ts) DAY TO SECOND) || ' min '  ||
       EXTRACT(SECOND FROM (sdtl.end_ts - sdtl.start_ts) DAY TO SECOND) || ' sec' infa_run_time
FROM w_etl_defn_run def, w_etl_run_step stp, w_etl_run_sdtl sdtl
WHERE def.row_wid = '<Unique ETL ID from the first query>'
AND def.row_wid = stp.run_wid
AND sdtl.run_step_wid = stp.row_wid
AND sdtl.type_cd = 'Informatica'
ORDER BY sdtl.start_ts DESC;

If the report shows significant amounts of time to rebuild indexes and compute statistics, and the cumulative incremental load time does not fit into your load window, you can consider two options:

Option 1: Range partition large fact tables if they show up in the report. Refer to the partitioning sections for more details.

Option 2: If the incremental volumes are low, leave bitmap indexes on the reported tables for the next incremental run and then compare the load times. This option may be used for large dimension tables, which cannot be partitioned effectively by range. Option 2 is not recommended for fact tables (%_F).

Important: Bitmap indexes present on target tables during inserts, updates or deletes could significantly increase the SQL DML execution time. The same SQL would complete much faster if the indexes were dropped prior to the query execution. On the other hand, it would take more time to rebuild the dropped bitmap indexes and compute required statistics. You should measure the cumulative time to run a specific task plus the time to rebuild indexes and compute required database statistics before deciding whether to drop or keep bitmap indexes in place during incremental loads.

3. Configure DAC not to drop selected bitmap indexes during incremental loads. If your benchmarks show that it is less time consuming to leave bitmap indexes in place on large dimension tables during incremental loads, and the incremental volumes are relatively small, then you can consider keeping the selected indexes in place during incremental loads. Since the DAC system property Drop and Create Bitmap Indexes Always overrides the index property Always Drop & Create, the system property defines how DAC will handle all bitmap indexes for all containers in the data warehouse schema. To work around this limitation:

• Log in to your repository through the DAC user interface, click on the Design button under the top menu, and select the Indices tab in the right pane.
• Click on the Query sub-tab and get the list of all indexes defined on the target table.
• Check both the check boxes Always Drop & Create and Inactive against the indexes which should not be dropped during incremental runs.

Since the Inactive property is used both for true inactive indexes and "hidden from incremental load" indexes, the property Always Drop & Create could be used for convenience to distinguish between the two different categories.

Important: You must uncheck the Inactive checkbox for these indexes before the next initial load, otherwise DAC will not create them after the initial load completion.

4. Additional considerations for handling bitmap indexes during incremental loads:
- All bitmap indexes should be dropped for transaction fact tables that will have a large volume of data updates and inserts, such as over 0.5 - 1 percent of total records, during an incremental run.
- For large tables, with over 20 million records, with a small number of bitmap indexes, consider dropping and recreating the bitmap indexes, since the time to rebuild would be short.
- For large tables with few data updates, the indexes can be enabled during incremental runs without significant performance degradation.
- If you choose to keep some bitmap indexes in place during incremental runs, consider creating the indexes with the storage parameter PCTFREE set to at least 50 or higher. Oracle RDBMS packs bitmap indexes in a data block much more tightly compared to B*Tree indexes. When an update, insert, or delete occurs on table columns with enabled indexes, the bitmap indexes quality will degrade; the higher value of PCTFREE will mitigate the impact to some degree.

Disabling Indexes with DISTINCT_KEYS = 0 or 1

Oracle BI Applications delivers a number of indexes to optimize both ETL and end user queries performance. Depending on end user data and its distribution, there may be some indexes on columns with just one distinct value. Such indexes will not be used in any queries, so they can be safely dropped in your Data Warehouse schema and disabled in the DAC repository. The following script helps to identify all such indexes. You have to either connect as a DBA user or implement additional grants, since the script requires access to two database schemas:

ACCEPT DAC_OWNER PROMPT 'Enter DAC Repository schema name: '
ACCEPT DWH_OWNER PROMPT 'Enter Data Warehouse schema name: '
SELECT row_wid FROM "&&DAC_OWNER".w_etl_app;
ACCEPT APP_ID PROMPT 'Enter your DAC container from the list above: '

-- Disable the identified indexes in the DAC repository:
UPDATE "&&DAC_OWNER".w_etl_index
SET inactive_flg = 'Y'
WHERE row_wid IN (
  SELECT ind_ref.obj_wid
  FROM "&&DAC_OWNER".w_etl_obj_ref ind_ref,
       "&&DAC_OWNER".w_etl_obj_ref tbl_ref,
       "&&DAC_OWNER".w_etl_index ind,
       "&&DAC_OWNER".w_etl_table tbl,
       all_indexes all_ind
  WHERE ind_ref.obj_type = 'W_ETL_INDEX'
  AND ind_ref.soft_del_flg = 'N'
  AND ind_ref.app_wid = '&&APP_ID'
  AND ind_ref.obj_wid = ind.row_wid
  AND tbl_ref.obj_type = 'W_ETL_TABLE'
  AND tbl_ref.soft_del_flg = 'N'
  AND tbl_ref.app_wid = '&&APP_ID'
  AND tbl_ref.obj_wid = tbl.row_wid
  AND ind.table_wid = tbl.row_wid
  AND ind.type_cd = 'Query'
  AND ind.inactive_flg = 'N'
  AND all_ind.index_name = ind.name
  AND all_ind.table_name = tbl.name
  AND all_ind.uniqueness = 'NONUNIQUE'
  AND all_ind.distinct_keys <= 1
  AND all_ind.num_rows >= 1
  AND all_ind.owner = '&&DWH_OWNER');
COMMIT;
EXIT;

-- Generate the DROP statements for the identified indexes:
spool drop_dist_indexes.sql
SELECT 'DROP INDEX ' || owner || '.' || index_name || ';'
FROM all_indexes
WHERE distinct_keys <= 1
AND num_rows >= 1
AND uniqueness = 'NONUNIQUE'
AND owner = '&&DWH_OWNER';
spool off

-- Execute the spooled SQL file to drop the identified indexes:
@drop_dist_indexes.sql

Monitoring and Disabling Unused Indexes

In addition to indexes with DISTINCT_KEYS <= 1, there could be more redundant query indexes in your data warehouse, not used by any end user queries. These indexes could impact incremental ETL runtime through slower Informatica mappings performance. Such indexes can be identified by monitoring index usage in your warehouse over an extended period of time (usually 3-4 months). To implement index usage monitoring:

1. Create a table in your data warehouse schema to load data from the v$object_usage view:

CREATE TABLE myobj_usage AS SELECT * FROM v$object_usage;

2. Create the following scripts on the DAC tier in the directory <dac_home>/bifoundation/dac/scripts:

pre_sql.sql:
INSERT INTO myobj_usage SELECT * FROM v$object_usage;
COMMIT;

pre_etl.bat:
<ORACLE_HOME>/bin/sqlplus <dwh_user>/<dwh_pwd>@<dwh_db> @<dac_home>/bifoundation/dac/scripts/pre_sql.sql

3. Set the "Script before every ETL" System parameter in DAC to pre_etl.bat.

4. Create a backup copy of <dac_home>/bifoundation/dac/CustomSQLs/CustomSQL.xml.

5. Open CustomSQL.xml and replace the <SqlQuery name = "ORACLE_CREATE_INDEX">, <SqlQuery name = "ETL_ORACLE_CREATE_INDEX"> and <SqlQuery name = "QUERY_ORACLE_CREATE_INDEX"> sections with:

<SqlQuery name = "ORACLE_CREATE_INDEX">
BEGIN
execute immediate 'CREATE %1 INDEX

%2 ON %3 ( %4 ) NOLOGGING';
execute immediate 'ALTER INDEX %2 MONITORING USAGE';
END;
</SqlQuery>

<SqlQuery name = "ETL_ORACLE_CREATE_INDEX">
BEGIN
execute immediate 'CREATE %1 INDEX %2 ON %3 ( %4 ) NOLOGGING';
execute immediate 'ALTER INDEX %2 MONITORING USAGE';
END;
</SqlQuery>

<SqlQuery name = "QUERY_ORACLE_CREATE_INDEX">
BEGIN
execute immediate 'CREATE %1 INDEX %2 ON %3 ( %4 ) NOLOGGING PARALLEL';
execute immediate 'ALTER INDEX %2 MONITORING USAGE';
END;
</SqlQuery>

6. If you implement index monitoring for the first time after completing ETLs, execute the following PL/SQL block to enable monitoring for all indexes:

DECLARE
  CURSOR c1 IS
    SELECT index_name FROM user_indexes
    WHERE index_name NOT IN
      (SELECT index_name FROM v$object_usage WHERE monitoring = 'YES');
BEGIN
  FOR rec IN c1 LOOP
    EXECUTE IMMEDIATE 'alter index '||rec.index_name||' monitoring usage';
  END LOOP;
END;
/

To query the unused indexes in your data warehouse, execute the following SQL:

SELECT DISTINCT index_name FROM myobj_usage WHERE used = 'NO';

Important!!! There are two known cases when the optimizer uses indexes but DOES NOT mark them as used with Index Usage Monitoring turned on:
• DML operations against a Parent table (such as DELETE or UPDATE), associated with a Child table via the child table Foreign Key (FK) and the FK Normal Index on the Child table, do use the Child table FK index, but Oracle does not report it as used in v$object_usage. Note that BITMAP indexes are correctly flagged as used in the same scenario and reported in v$object_usage.

• The optimizer may use extended statistics for computing correct table selectivity, using composite indexes, and yet not report them in v$object_usage. This case may not be a critical one for the BI Analytics warehouse, since it does not use composite BITMAP indexes.

Important!!! Make sure you monitor the index usage for an extended period of at least 1-2 months before deciding which additional indexes could be disabled in DAC and dropped in your target schema. Make sure you carefully review the reported 'unused' indexes prior to dropping them in the database and disabling them in the DAC repository.

After identifying redundant indexes, disabling them in DAC and dropping them in the database, follow the steps below to turn off index monitoring:

1. Reset the "Script before every ETL" System parameter in DAC.
2. Restore <dac_home>/bifoundation/dac/CustomSQLs/CustomSQL.xml from its backup copy.
3. Execute the following PL/SQL block to disable index monitoring:

DECLARE
  CURSOR c1 IS
    SELECT index_name FROM user_indexes
    WHERE index_name IN
      (SELECT index_name FROM v$object_usage WHERE monitoring = 'YES');
BEGIN
  FOR rec IN c1 LOOP
    EXECUTE IMMEDIATE 'alter index '||rec.index_name||' nomonitoring usage';
  END LOOP;
END;
/

Handling Query Indexes during Initial ETL

Oracle BI Applications delivers a number of query indexes, which are not used during ETL but are required for better OBIEE queries performance. Most query indexes are created as BITMAP indexes in the Oracle database, while composite NORMAL indexes are used on surrogate keys (unique indexes) and critical columns. Creation of such a large number of query indexes can extend both initial and incremental ETL windows. This article discusses several options to reduce index maintenance, such as disabling unused query indexes, or partitioning large fact tables and maintaining local query indexes on the latest range partitions.

You can consider disabling ALL query indexes and reducing your ETL runtime in the following scenarios:
1. Disable query indexes -> run an initial ETL -> enable query indexes -> run an incremental ETL -> run OBIEE reports
2. Disable query indexes -> run an incremental ETL -> enable query indexes -> run another incremental ETL -> run OBIEE reports

To summarize, you can disable query indexes only for the following pattern: 1st ETL -> 2nd ETL -> OBIEE. You cannot use this option for the 1st ETL -> OBIEE -> 2nd ETL sequence.

Important: If you plan to implement partitioning for your warehouse tables and you want to take advantage of the conversion scripts in the next section, then you need to have query indexes created on the target tables prior to implementing partitioning.

• Identify and preserve all activated query indexes PRIOR to executing the first ETL run:

CREATE TABLE psr_initial_query_idx AS
SELECT ind_ref.obj_wid, ind.name idx_name, tbl.name tbl_name
FROM w_etl_index ind, w_etl_obj_ref ind_ref, w_etl_obj_ref tbl_ref,

     w_etl_table tbl, w_etl_app app
WHERE ind_ref.obj_type = 'W_ETL_INDEX'
AND ind_ref.soft_del_flg = 'N'
AND ind_ref.app_wid = app.row_wid
AND ind_ref.obj_wid = ind.row_wid
AND tbl_ref.obj_type = 'W_ETL_TABLE'
AND tbl_ref.soft_del_flg = 'N'
AND tbl_ref.app_wid = app.row_wid
AND tbl_ref.obj_wid = tbl.row_wid
AND ind.table_wid = tbl.row_wid
AND ind.type_cd = 'Query'
AND ind.isunique = 'N'
AND ind.inactive_flg = 'N'
AND (ind.DRP_CRT_ALWAYS_FLG = 'Y' OR ind.DRP_CRT_BITMAP_FLG = 'Y')
AND app.row_wid = :APP_ID;

where APP_ID can be identified from:

SELECT row_wid FROM w_etl_app;

• Disable the identified query indexes PRIOR to starting the first ETL run:

SQL> UPDATE w_etl_index SET inactive_flg = 'Y'
     WHERE row_wid IN (SELECT obj_wid FROM psr_initial_query_idx);
SQL> commit;

• Execute your first ETL run.
• Enable all preserved indexes PRIOR to starting the second ETL run:

SQL> UPDATE w_etl_index SET inactive_flg = 'N'
     WHERE row_wid IN (SELECT obj_wid FROM psr_initial_query_idx);
SQL> commit;

• Execute your second ETL run. DAC will recreate all disabled query indexes.

PARTITIONING GUIDELINES FOR LARGE FACT TABLES

Introduction

Taking advantage of range and composite range-range partitioning for fact tables will not only reduce index and statistics maintenance time during ETL, but also improve web queries performance, since the optimizer would build more efficient execution plans using partition elimination logic. Online reports and dashboards should also render results faster. Large fact tables, with more than 20 million rows, are good candidates for partitioning. Since the majority of inserts and updates impact the last partition(s), you would need to disable only local indexes on a few impacted partitions, and then rebuild the disabled indexes after the load and compute statistics on the updated partitions only.

To implement the support for partitioned tables in Oracle Business Analytics Data Warehouse, you need to update DAC metadata and manually convert the candidates into partitioned tables in the target database. You can either identify and partition target fact tables before the initial run, or convert the populated tables into partitioned objects after the full load. Follow the steps below to implement fact table partitioning in your data warehouse schema and DAC repository. Please note that there are some steps which apply to composite range-range partitioning only.

To build an optimal partitioned table with reasonable data distribution,

decide on the appropriate partitioning key and partitioning range for your future partitioned table.

Identify a partitioning key and decide on a partitioning interval

Choosing the correct partitioning key is the most important factor for effective partitioning, since it defines how many partitions will be involved in web queries or ETL updates. Review the following guidelines for selecting a column for a partitioning key:
• Identify eligible columns of type DATE for implementing range partitioning.
• Connect to the Oracle BI Server repository and check the usage or dependencies on each column in the logical and presentation layers.
• Analyze the summarized data distribution in the target table by each potential partitioning key candidate and the data volumes per time range.
• Basing on the compiled data, decide on the appropriate partitioning key and partitioning range.
• The recommended partitioning range for most implementations is a month, though you can consider a quarter or a year for your partitioning ranges.

The proposed partitioning guidelines assume that the majority of incremental ETL volume data (~90%) is new records, which end up in the one or two latest partitions. Depending on the chosen range granularity, you may consider rebuilding local indexes for the most impacted latest partitions:
- Monthly range: you are advised to maintain the two latest partitions, i.e. define index and table actions for the PREVIOUS and CURRENT partitions.
- Quarterly range: you may consider maintaining just one, CURRENT, partition.
- Yearly range: you are recommended to maintain only one, CURRENT, partition.

The following table summarizes the recommended partitioning keys for some large Oracle BI Applications Fact tables:

Area          Table Name                  Partitioning Key
Financials    W_AP_XACT_F                 POSTED_ON_DT_WID
Financials    W_AR_XACT_F                 POSTED_ON_DT_WID
Financials    W_GL_REVN_F                 POSTED_ON_DT_WID
Financials    W_GL_COGS_F                 POSTED_ON_DT_WID
Financials    W_TAX_XACT_F                POSTED_ON_DT_WID
Financials    W_GL_OTHER_F                ACCT_PERIOD_END_DT_WID
Sales         W_SALES_ORDER_LINE_F        ORDERED_ON_DT_WID
Sales         W_SALES_PICK_LINE_F         ACT_PICKED_ON_DT_WID
Sales         W_SALES_INVOICE_LINE_F      INVOICED_ON_DT_WID
Sales         W_SALES_SCHEDULE_LINE_F     ORDERED_ON_DT_WID
Procurement   W_PURCH_SCHEDULE_LINE_F     ORDERED_ON_DT_WID
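The size screening and data-distribution analysis described above can be sketched in SQL. The following is an illustrative sketch and not part of the original guide: W_AP_XACT_F and its POSTED_ON_DT_WID key are taken from the recommendation table in this chapter, and the row-count screen assumes optimizer statistics are reasonably current.

```sql
-- Sketch 1: shortlist fact tables (W_..._F naming) whose statistics report
-- more than 20 million rows -- the size threshold suggested in this chapter.
SELECT table_name, num_rows
FROM   user_tables
WHERE  table_name LIKE 'W\_%\_F' ESCAPE '\'
AND    num_rows > 20000000
ORDER  BY num_rows DESC;

-- Sketch 2: data volumes per year for one candidate partitioning key.
-- POSTED_ON_DT_WID stores dates as numbers in YYYYMMDD format,
-- so integer division by 10000 yields the year.
SELECT TRUNC(POSTED_ON_DT_WID / 10000) AS year_key,
       COUNT(*)                        AS row_cnt
FROM   W_AP_XACT_F
GROUP  BY TRUNC(POSTED_ON_DT_WID / 10000)
ORDER  BY year_key;
```

A strongly skewed result from the second query (for example, one year holding most of the rows) suggests choosing a finer range, such as a month or a quarter.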

You can consider two options to convert a table into a partitioned one: (a) create table as select. 1.range-partitioning W_PROJ_EXP_LINE_F . partition PART_2008 values less than (2009). partition PART_MAX values less than (maxvalue) ) tablespace BIAPPS_DATA nologging parallel enable row movement as select * from W_WRKFC_EVT_MONTH_F_ORIG. The second option would be preferred in high availability data warehouses when you have to carry out partitioning with end users accessing the data. Create a partitioned table in Data Warehouse You can pre-create a partitioned table prior to the initial load. 2. The example below uses the following tables for converting into partitioned objects: • • W_WRKFC_EVT_MONTH_F . or (b) create table exchange partition syntax and then split partitions. you DO NOT need to re-run the initial load.Siebel Sales W_REVN_F CLOSE_DT_WID Consider implementing composite range-to-range partitioning for Financials and Projects large fact tables using the following partitioning and sub-partitioning keys: Area Table Name Partitioning Key Sub-partitioning Key Financials W_GL_LINKAGE_INFORMATION_G DISTRIBUTION_SOURCE POSTED_ON_DT_WID (*) Financials W_GL_BALANCE_F Projects Projects Projects W_PROJ_EXP_LINE_F W_PROJ_COST_LINE_F W_PROJ_REVENUE_LINE_F CHANGED_ON_DT CHANGED_ON_DT CHANGED_ON_DT CHANGED_ON_DT BALANCE_DT_WID EXPENDITURE_DT_WID PROJ_ACCOUNTING_DT_WID GL_ACCOUNTING_DT_WID (*) Implementing sub-partitioning for W_GL_LINKAGE_INFORMATION_G is recommended only if end users compress inactive sub-partitions with historic data to reclaim space. partition PART_2007 values less than (2008). is simpler and faster. create table as select. If you have already completed the initial load into a regular table and then decided to partition it. using range partitioning by year: SQL> create table W_WRKFC_EVT_MONTH_F partition by range (EVENT_YEAR)( partition PART_MIN values less than (2006). 
Rename the original table SQL> rename W_WRKFC_EVT_MONTH_F to W_WRKFC_EVT_MONTH_F_ORIG. partition PART_2006 values less than (2007). 31 . The internal tests showed that the first option. partition PART_2010 values less than (2011).composite range-range partitioning. Create the partitioned table. partition PART_2009 values less than (2010). or load data into the regular table and then create its partitioned copy and migrate the summarized data. There are no queries which would benefit from partitioning on POSTED_ON_DT_WID column.
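After the create-table-as-select completes, it is worth sanity-checking that the rows landed in the expected partitions before dropping the _ORIG copy. A minimal sketch (not part of the original guide; PART_2008 is one of the yearly partitions created above, and counting with the PARTITION clause needs no statistics):

```sql
-- Sketch: compare total row counts between the original and partitioned copies,
-- then spot-check a single partition with the PARTITION clause.
SELECT COUNT(*) FROM W_WRKFC_EVT_MONTH_F_ORIG;
SELECT COUNT(*) FROM W_WRKFC_EVT_MONTH_F;
SELECT COUNT(*) FROM W_WRKFC_EVT_MONTH_F PARTITION (PART_2008);
```

The first two counts must match exactly; the per-partition counts should reflect the yearly distribution you profiled when choosing the partitioning key.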

The EVENT_YEAR column in the example above uses number(4) precision, so the table partition values are defined using format YYYY. If you choose a WID column for a partitioning key, then you have to define your partition ranges using format YYYYMMDD.

If you implement composite range-range partitioning, use the following sample syntax. The composite range-range example uses Quarter for the partitioning ranges and Year for the sub-partitioning ranges; the EXPENDITURE_DT_WID column has number(8) precision, so the sub-partition values are defined using format YYYYMMDD.

SQL> create table W_PROJ_EXP_LINE_F
partition by range (CHANGED_ON_DT)
subpartition by range (EXPENDITURE_DT_WID) (
partition PART_MIN values less than (TO_DATE('01-JAN-2008','DD-MON-YYYY')) (
  subpartition PART_MIN_MIN  values less than (19980000),
  subpartition PART_MIN_1998 values less than (19990000),
  subpartition PART_MIN_1999 values less than (20010000),
  subpartition PART_MIN_2001 values less than (20020000),
  subpartition PART_MIN_2002 values less than (20030000),
  subpartition PART_MIN_2003 values less than (20040000),
  subpartition PART_MIN_2004 values less than (20050000),
  subpartition PART_MIN_2005 values less than (20060000),
  subpartition PART_MIN_2006 values less than (20070000),
  subpartition PART_MIN_2007 values less than (20080000),
  subpartition PART_MIN_2008 values less than (20090000),
  subpartition PART_MIN_2009 values less than (20100000),
  subpartition PART_MIN_MAX  values less than (maxvalue)
),
partition PART_200801 values less than (TO_DATE('01-APR-2008','DD-MON-YYYY')) (
  subpartition PART_200801_MIN  values less than (19980000),
  subpartition PART_200801_1998 values less than (19990000),
  subpartition PART_200801_1999 values less than (20010000),
  subpartition PART_200801_2001 values less than (20020000),
  subpartition PART_200801_2002 values less than (20030000),
  subpartition PART_200801_2003 values less than (20040000),
  subpartition PART_200801_2004 values less than (20050000),
  subpartition PART_200801_2005 values less than (20060000),
  subpartition PART_200801_2006 values less than (20070000),
  subpartition PART_200801_2007 values less than (20080000),
  subpartition PART_200801_2008 values less than (20090000),
  subpartition PART_200801_2009 values less than (20100000),
  subpartition PART_200801_MAX  values less than (MAXVALUE)
),
...
partition PART_MAX values less than (maxvalue) (
  subpartition PART_MAX_MIN  values less than (19980000),
  subpartition PART_MAX_1998 values less than (19990000),
  subpartition PART_MAX_1999 values less than (20010000),
  subpartition PART_MAX_2001 values less than (20020000),
  subpartition PART_MAX_2002 values less than (20030000),
  subpartition PART_MAX_2003 values less than (20040000),
  subpartition PART_MAX_2004 values less than (20050000),
  subpartition PART_MAX_2005 values less than (20060000),
  subpartition PART_MAX_2006 values less than (20070000),
  subpartition PART_MAX_2007 values less than (20080000),
  subpartition PART_MAX_2008 values less than (20090000),
  subpartition PART_MAX_2009 values less than (20100000),
  subpartition PART_MAX_MAX  values less than (maxvalue)
))
nologging parallel enable row movement
as (select * from W_PROJ_EXP_LINE_F_ORIG);

Important: You must use the exact format YYYY, YYYYQQ or YYYYMMDD for partitioning by Year, Quarter or Month correspondingly. Make sure you check the partitioning column data type prior to partitioning a table.

3. Drop / Rename indexes on the renamed table

To drop indexes on the renamed table:

SQL> spool drop_ind.sql
SQL> SELECT 'DROP INDEX ' || INDEX_NAME || ';'
     FROM USER_INDEXES
     WHERE TABLE_NAME = 'W_WRKFC_EVT_MONTH_F_ORIG';
SQL> spool off
SQL> @drop_ind.sql

If you want to keep the indexes on the original renamed table until successful completion of the partitioning conversion, then use the following commands instead:

SQL> spool rename_ind.sql
SQL> SELECT 'ALTER INDEX ' || INDEX_NAME || ' rename to ' || INDEX_NAME || '_ORIG;'
     FROM USER_INDEXES
     WHERE TABLE_NAME = 'W_WRKFC_EVT_MONTH_F_ORIG';
SQL> spool off
SQL> @rename_ind.sql

4. Create Global and Local indexes

Execute the following queries as the DAC Repository owner:

SQL> spool indexes.sql
SQL> SELECT 'CREATE '
       || DECODE(ISUNIQUE, 'Y', 'UNIQUE ')
       || DECODE(ISBITMAP, 'Y', 'BITMAP ')
       || 'INDEX ' || I.NAME || CHR(10)
       || ' ON ' || T.NAME || ' ('
       || MAX(DECODE(POSTN, 1, C.NAME || ' ASC'))
       || MAX(DECODE(POSTN, 2, ', ' || C.NAME || ' ASC'))
       || MAX(DECODE(POSTN, 3, ', ' || C.NAME || ' ASC'))
       || MAX(DECODE(POSTN, 4, ', ' || C.NAME || ' ASC'))
       || MAX(DECODE(POSTN, 5, ', ' || C.NAME || ' ASC'))
       || MAX(DECODE(POSTN, 6, ', ' || C.NAME || ' ASC'))
       || MAX(DECODE(POSTN, 7, ', ' || C.NAME || ' ASC'))
       || ') tablespace USERS_IDX ' || CHR(10)
       || DECODE(ISUNIQUE, 'Y', 'GLOBAL', 'LOCAL')
       || ' NOLOGGING;'
     FROM W_ETL_TABLE T, W_ETL_INDEX I, W_ETL_INDEX_COL C
     WHERE T.ROW_WID = I.TABLE_WID
     AND T.NAME = 'W_WRKFC_EVT_MONTH_F'
     AND I.ROW_WID = C.INDEX_WID
     AND I.INACTIVE_FLG = 'N'
     GROUP BY T.NAME, I.NAME, ISUNIQUE, ISBITMAP;
SQL> spool off

The script creates indexes with a maximum of seven column positions. If you have indexes with more than seven column positions, then extend the "MAX(DECODE(POSTN, ...))" expressions accordingly.

Run the spooled file indexes.sql in the warehouse schema:

SQL> @indexes.sql
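Before resuming loads, it can be useful to confirm that nothing was left in an UNUSABLE state after the conversion and index rebuild. The following check is an illustrative sketch, not part of the original guide:

```sql
-- Sketch: report unusable indexes, index partitions and index subpartitions
-- in the current warehouse schema after the partitioning conversion.
SELECT index_name, NULL AS part_name, status
FROM   user_indexes
WHERE  status = 'UNUSABLE'
UNION ALL
SELECT index_name, partition_name, status
FROM   user_ind_partitions
WHERE  status = 'UNUSABLE'
UNION ALL
SELECT index_name, subpartition_name, status
FROM   user_ind_subpartitions
WHERE  status = 'UNUSABLE';
```

An empty result means all indexes are valid; any rows returned identify the exact index (sub)partitions that still need an ALTER INDEX ... REBUILD.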

Compute Statistics on Partitioned Table

SQL> BEGIN
       dbms_stats.gather_table_stats(
         NULL,
         tabname          => 'W_WRKFC_EVT_MONTH_F',
         cascade          => true,
         estimate_percent => dbms_stats.auto_sample_size,
         method_opt       => 'FOR ALL INDEXED COLUMNS SIZE AUTO');
     END;
     /

Configure Informatica to support partitioned tables

• Enable Row Movement.
• Set skip_unusable_indexes = TRUE in the DataWarehouse Relational Connection in Informatica Workflow Manager. Open Workflow Manager -> Connections -> Relational -> edit DataWarehouse -> update Connection Environment SQL:

ALTER SESSION SET SKIP_UNUSABLE_INDEXES=TRUE;

Configure DAC to support partitioned tables

Create new source system parameters

Important: The example below shows how to set up rebuilding indexes and maintaining statistics for the last two, PREVIOUS and CURRENT, partitions for range partitioning by year. You should consider implementing PREVIOUS and CURRENT partitions only for monthly or more granular ranges. If you choose a quarterly or yearly range, then you can maintain the CURRENT partition only; maintaining a PREVIOUS partition for partitioning by a quarter or a year may introduce unnecessary overhead and extend your incremental ETL execution time.

Define the following source system parameters:
• Select the Design Menu.
• Click on the Source System Parameters tab in the right pane.
• Click the New Button and define two new parameters with the following attributes:

Name: $$CURRENT_YEAR_WID
Data Type: SQL
Value (click on the checkbox icon to define the following parameters):
Logical Data Source: DBConnection_OLAP
SQL: SELECT TO_CHAR(ROW_WID) FROM W_YEAR_D WHERE W_CURRENT_CAL_YEAR_CODE = 'Current'

Name: $$PREVIOUS_YEAR_WID
Data Type: SQL
Value (click on the checkbox icon to define the following parameters):
Logical Data Source: DBConnection_OLAP
SQL: SELECT TO_CHAR(ROW_WID) FROM W_YEAR_D WHERE W_CURRENT_CAL_YEAR_CODE = 'Previous'

Important: Make sure you select the correct Logical Data Source, DBConnection_OLAP, which points to your target data warehouse, when you define these new system parameters.

If you choose monthly partitions, then use the following names and values:

Name: $$PREVIOUS_MONTH_WID
Value: SELECT TO_CHAR(ROW_WID) FROM W_MONTH_D WHERE W_CURRENT_CAL_MONTH_CODE = 'Previous'
Name: $$CURRENT_MONTH_WID
Value: SELECT TO_CHAR(ROW_WID) FROM W_MONTH_D WHERE W_CURRENT_CAL_MONTH_CODE = 'Current'

If you choose Quarterly partitions, then use the following names / values:

Name: $$PREVIOUS_QTR_WID
Value: SELECT TO_CHAR(ROW_WID) FROM W_QTR_D WHERE W_CURRENT_CAL_QTR_CODE = 'Previous'
Name: $$CURRENT_QTR_WID
Value: SELECT TO_CHAR(ROW_WID) FROM W_QTR_D WHERE W_CURRENT_CAL_QTR_CODE = 'Current'

Note: If you need to maintain more than two partitions during the incremental ETLs, then you can create more variables and repeat the steps for them below. For example:

Name: $$THIRD_MONTH_WID
Value: SELECT TO_CHAR(ROW_WID-1) FROM W_MONTH_D WHERE W_CURRENT_CAL_MONTH_CODE = 'Previous'
Name: $$THIRD_MONTH_WID
Value: SELECT TO_CHAR(ROW_WID-2) FROM W_MONTH_D WHERE W_CURRENT_CAL_MONTH_CODE = 'Previous'

Update Index Action Framework

Create the following Index Actions in the DAC Action Framework:

1. Year Partitioning: Disable Local Index Parameter
• Navigate to Tools -> Seed Data -> Actions -> Index Actions -> New
• Enter Name: Year Partitioning: Disable Local Index
• Click on the 'Check' Icon in the Value field
• Click on the Add button in the new open window
• Define the 'PREVIOUS_YEAR_WID Local Index' SQL:
Name: Disable PREVIOUS_YEAR_WID Local Indexes
Type: SQL
Database Connection: target
Valid Database Platform: ORACLE
• Enter the following command in the lower right Text Area:
alter index getIndexName() modify partition PART_@DAC_$$PREVIOUS_YEAR_WID unusable

Important!!! Do not use a semicolon (;) at the end of SQLs in the Text Area.

• Click the 'Add' button to define the second SQL command.
• Define the 'CURRENT_YEAR_WID Local Index' SQL:
Name: Disable CURRENT_YEAR_WID Local Index
Type: SQL

Note: If you Quarterly or Monthly partition range. Year Partitioning: Enable Local Sub-Partitioned Index Parameter (for composite partitioning only) • • • • • Click ‘New’ in Index Actions window to create a new parameter Enter Name: Year Partitioning: Enable Local Index Click on ‘Check’ Icon in Value field Click on Add button in the new open window Define the following value: 36 . 2. 3. Month. Note: If you use Quarterly or Monthly partition range. then you need to define separate actions for each range. Important!!! If you implement partitioning by Year. Year Partitioning: Enable Local Index Parameter • • • • • Click ‘New’ in Index Actions window to create a new parameter Enter Name: Year Partitioning: Enable Local Index Click on ‘Check’ Icon in Value field Click on Add button in the new open window Define the following two values: Name Enable PREVIOUS_YEAR_WID Local Index Type: SQL Database Connection: target Valid Database Platform: ORACLE Enter the following command in the lower right Text Area: alter index getIndexName() rebuild partition PART_@DAC_$$PREVIOUS_YEAR_WID nologging Name Enable CURRENT_YEAR_WID Local Index Type: SQL Database Connection: target Valid Database Platform: ORACLE Enter the following command in the lower right Text Area: alter index getIndexName() rebuild partition PART_@DAC_$$CURRENT_YEAR_WID nologging • Save the changes. then use PREVIOUS_MONTH_WID / CURRENT_MONTH_WID or PREVIOUS_QTR_WID / CURRENT_QTR_WID in Action names and SQLs.Database Connection: target Valid Database Platform: ORACLE • Enter the following command in the lower right Text Area: alter index getIndexName() modify partition PART_@DAC_$$CURRENT_YEAR_WID unusable • Save the changes. then use PREVIOUS_MONTH_WID / CURRENT_MONTH_WID or PREVIOUS_QTR_WID / CURRENT_QTR_WID in Action names and SQLs. Quarter.

Name: Enable Local Sub-partitioned Index
Type: Stored Procedure
Database Connection: target
Valid Database Platform: ORACLE
Enter the following command in the lower right Text Area:
DECLARE
  CURSOR C1 IS
  SELECT DISTINCT SUBPARTITION_NAME
  FROM USER_IND_SUBPARTITIONS
  WHERE INDEX_NAME = 'getIndexName()' AND STATUS = 'UNUSABLE';
BEGIN
  FOR REC IN C1 LOOP
    EXECUTE IMMEDIATE 'alter index getIndexName() rebuild subpartition '||REC.SUBPARTITION_NAME;
  END LOOP;
END
• Save the changes.

4. Year Partitioning: Create Local Bitmap Index Parameter
• Click ‘New’ in Index Actions window to create a new parameter
• Enter Name: Year Partitioning: Create Local Bitmap Index
• Click on ‘Check’ Icon in Value field
• Click on Add button in the new open window
• Define the following value:
Name: Create Local Bitmap Indexes
Type: SQL
Database Connection: target
Valid Database Platform: ORACLE
Enter the following command in the lower right Text Area:
Create bitmap index getIndexName() on getTableName()(getUniqueColumns()) tablespace getTableSpace() local parallel nologging
• Save the changes.

5. Year Partitioning: Create Local B-Tree Index Parameter
• Click ‘New’ in Index Actions window to create a new parameter
• Enter Name: Year Partitioning: Create Local B-Tree Index
• Click on ‘Check’ Icon in Value field
• Click on Add button in the new open window
• Define the following value:
Name: Create Local B-Tree Index
Type: SQL
Database Connection: target
Valid Database Platform: ORACLE
Enter the following command in the lower right Text Area:
Create index getIndexName() on getTableName()(getUniqueColumns()) tablespace getTableSpace() local parallel nologging
• Save the changes.

6. Year Partitioning: Create Global Unique Index Parameter
• Click ‘New’ in Index Actions window to create a new parameter
• Enter Name: Year Partitioning: Create Global Unique Index
• Click on ‘Check’ Icon in Value field
• Click on Add button in the new open window
• Define the following value:
Name: Create Global Unique Index
Type: SQL
Database Connection: target
Valid Database Platform: ORACLE
Enter the following command in the lower right Text Area:
Create unique index getIndexName() on getTableName()(getUniqueColumns()) tablespace getTableSpace() global parallel nologging
• Save the changes.

Update Table Action Framework

Create the following Table Actions in DAC Action Framework:

1. Year Partitioning: Gather Partition Stats Parameter
• Navigate to Tools -> Seed Data -> Actions -> Table Actions -> New
• Enter Name: Year Partitioning: Gather Partition Stats
• Click on ‘Check’ Icon in Value field
• Click on Add button in the new open window
• Define the following value:
Name: Gather Partition Stats
Type: Stored Procedure
Database Connection: target
Valid Database Platform: ORACLE
Enter the following command in the lower right Text Area:
DECLARE
  CURSOR C1 IS
  SELECT DISTINCT UTP.PARTITION_NAME
  FROM USER_IND_PARTITIONS UIP, USER_PART_INDEXES UPI, USER_TAB_PARTITIONS UTP
  WHERE UIP.INDEX_NAME = UPI.INDEX_NAME
  AND UPI.TABLE_NAME = UTP.TABLE_NAME
  AND UIP.PARTITION_POSITION = UTP.PARTITION_POSITION
  AND UIP.STATUS = 'USABLE'
  AND UTP.TABLE_NAME = 'getTableName()'
  AND UTP.PARTITION_NAME IN ('PART_@DAC_$$CURRENT_YEAR_WID','PART_@DAC_$$PREVIOUS_YEAR_WID');
BEGIN
  FOR REC IN C1 LOOP
    DBMS_STATS.GATHER_TABLE_STATS(
      NULL,
      TABNAME => 'getTableName()',
      CASCADE => FALSE,
      PARTNAME => REC.PARTITION_NAME,
      ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE,
      GRANULARITY => 'PARTITION',
      METHOD_OPT => 'FOR ALL INDEXED COLUMNS SIZE AUTO',
      DEGREE => DBMS_STATS.DEFAULT_DEGREE);
  END LOOP;
END
• Save the changes.

Note: If you use a Quarterly or Monthly partition range, then use PREVIOUS_MONTH_WID / CURRENT_MONTH_WID or PREVIOUS_QTR_WID / CURRENT_QTR_WID in Action names and SQLs.

2. Quarter Composite Partitioning: Gather Partition Stats Parameter (for composite partitioning only)
• Navigate to Tools -> Seed Data -> Actions -> Table Actions -> New
• Enter Name: Quarter Composite Partitioning: Gather Partition Stats
• Click on ‘Check’ Icon in Value field
• Click on Add button in the new open window
• Define the following value:
Name: Gather Partition Stats
Type: Stored Procedure
Database Connection: target
Valid Database Platform: ORACLE
Enter the following command in the lower right Text Area:
DECLARE
  CURSOR C1 IS
  SELECT DISTINCT UTP.PARTITION_NAME
  FROM USER_IND_PARTITIONS UIP, USER_PART_INDEXES UPI, USER_TAB_PARTITIONS UTP
  WHERE UIP.INDEX_NAME = UPI.INDEX_NAME
  AND UPI.TABLE_NAME = UTP.TABLE_NAME
  AND UIP.PARTITION_POSITION = UTP.PARTITION_POSITION
  AND UIP.STATUS = 'USABLE'
  AND UTP.TABLE_NAME = 'getTableName()'
  AND UTP.PARTITION_NAME IN ('PART_@DAC_$$CURRENT_QTR_WID','PART_@DAC_$$PREVIOUS_QTR_WID');
BEGIN
  FOR REC IN C1 LOOP
    DBMS_STATS.GATHER_TABLE_STATS(
      NULL,
      TABNAME => 'getTableName()',
      CASCADE => FALSE,
      PARTNAME => REC.PARTITION_NAME,
      ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE,
      GRANULARITY => 'PARTITION',
      METHOD_OPT => 'FOR ALL INDEXED COLUMNS SIZE AUTO',
      DEGREE => DBMS_STATS.DEFAULT_DEGREE);
  END LOOP;
END
• Save the changes.
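With the cursor and the getTableName() placeholder resolved, the stored procedure issues one DBMS_STATS call per usable active partition. A single resolved call would look like the following sketch; the table and partition names are illustrative:

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    NULL,
    TABNAME          => 'W_WRKFC_EVT_MONTH_F',            -- resolved from getTableName()
    CASCADE          => FALSE,                            -- index stats handled separately
    PARTNAME         => 'PART_2011',                      -- resolved from the cursor loop
    ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE,
    GRANULARITY      => 'PARTITION',                      -- gather only partition-level stats
    METHOD_OPT       => 'FOR ALL INDEXED COLUMNS SIZE AUTO',
    DEGREE           => DBMS_STATS.DEFAULT_DEGREE);
END;
/
```

Limiting GRANULARITY to 'PARTITION' is what makes this cheap enough to run in every incremental ETL: only the two active partitions are re-analyzed, not the whole fact table.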

Attach Index Actions to the desired indexes
• Retrieve all local indexes on partitioned tables. Navigate to Design -> Indices -> Query -> Table Name 'W_WRKFC_EVT_MONTH_F', check ‘Is Bitmap’ checkbox -> Go.
• Right click your mouse on the generated list (upper right pane) and select ‘Add Actions’
• Select ‘Drop Index’ from Action Type field
• Select ‘Incremental’ from Load Type field
• Click on Checkbox icon in Action field
• Select ‘Year Partitioning: Disable Local Index’ Action Name
• Click OK in Choose Action window
• Click OK in Add Actions window.
• Right click your mouse on the generated list (upper right pane) and select ‘Add Actions’ one more time
• Select ‘Create Index’ from Action Type field
• Select ‘Incremental’ from Load Type field
• Click on Checkbox icon in Action field
• Select ‘Year Partitioning: Enable Local Index’ Action Name
• Click OK in Choose Action window
• Click OK in Add Actions window.
• Repeat the same steps above to attach ‘Year Partitioning: Create Local Bitmap Index’, ‘Year Partitioning: Create Local B-Tree Index’ and ‘Year Partitioning: Create Global Unique Index’ to the appropriate indexes. Important: Make sure you choose ‘Initial’ from the Load Type field when attaching these three Index Action Tasks, since they are used in an initial ETL run.

Important!!! DO NOT change ‘Drop / Create Always’ or ‘Drop / Create Always Bitmap’ properties for the modified indexes. Un-checking these properties would signal DAC to skip any actions defined in Index Action Framework. Even though you select the Drop / Create Index Action Type, DAC will override these actions with the steps defined in Index Action Framework: every time DAC encounters a ‘Drop Index’ step for an updated index, it will make the index unusable for the last two partitions, and for a ‘Create Index’ step it will rebuild the index for the last two partitions.

Important!!! Make sure you exclude the selected global index from the index query result set retrieved by your query. The global index must NOT have any assigned index action tasks.

The steps above apply to all indexes retrieved by your query. If you want to attach the defined Index Actions to an individual index, then select the desired index in the right upper pane, click on the ‘Actions’ sub-tab in the lower pane, then click the ‘New’ button in the lower pane and fill in the appropriate values in the new line.

Attach Table Actions to the converted partitioned tables
• Retrieve the partitioned tables. Navigate to Design -> Tables -> Query -> Name 'W_WRKFC_EVT_MONTH_F' -> Go.
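You can verify the effect of the Disable / Enable actions directly in the target schema by checking the status of each local index partition on the converted table (the table name is from the example above):

```sql
-- Local index partitions and their status for the partitioned fact table.
-- UNUSABLE partitions are the ones DAC disabled before the incremental load.
SELECT ip.index_name, ip.partition_name, ip.status
FROM   user_ind_partitions ip
JOIN   user_part_indexes   pi ON pi.index_name = ip.index_name
WHERE  pi.table_name = 'W_WRKFC_EVT_MONTH_F'
ORDER  BY ip.index_name, ip.partition_position;
```

After a successful incremental ETL, all partitions should report USABLE again; any that remain UNUSABLE indicate that the matching ‘Enable Local Index’ action did not run for them.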

• Right click your mouse on the generated list (upper right pane) and select ‘Add Actions’
• Select ‘Analyze Table’ from Action Type field
• Select ‘Incremental’ from Load Type field
• Click on Checkbox icon in Action field
• Select ‘Year Partitioning: Gather Partition Stats’ Action Name
• Click OK in Choose Action window
• Click OK in Add Actions window.

Important!!! Make sure you use the ‘Quarter Composite Partitioning: Gather Partition Stats’ parameter for composite range-range tables.

Whenever DAC encounters an ‘Analyze Table’ step for an updated table, it will override the default action with the set of steps from Table Action Framework.

The steps above apply to all tables retrieved by your query. If you want to attach the defined Table Action to an individual table, then select the desired table in the right upper pane, click on the ‘Actions’ sub-tab in the lower pane, then click the ‘New’ button in the lower pane and fill in the appropriate values in the new line.

Unit test the changes for converted partitioned tables in DAC

You can generate the list of actions for a single task, which populates a partitioned table, to validate the correct sequence of steps without executing them. Follow the steps below to unit test the sequence of steps for a partitioned table:
• Select ‘Execute’ button from your top sub-menu
• Select your execution plan in the upper right pane
• Click ‘Ordered tasks’ sub-tab in the lower right pane
• Retrieve the task which populates your partitioned table
• Click ‘Unit test’ button in the lower right pane menu, then click ‘Yes’ to proceed with unit testing.
• Validate the generated sequence of steps in the new output window. Important!!! DO NOT execute them in your data warehouse.
• Exit the unit testing window.

Interval Partitioning

Oracle 11g introduced a new partitioning type, Interval Partitioning. With Interval Partitioning there is no need to pre-create partitions for data in the future: Oracle automatically creates new partitions with a pre-defined range interval as soon as the first record exceeds the last partition range value. The majority of recommended partitioning keys in Oracle BI Applications use the DATE format YYYYMMDD, so you can specify INTERVAL 100 for such a range format. For example, the POSTED_ON_WID column is based on monthly range partitions with values less than 20041101, 20041201, 20050101, 20050201, etc. In this example there is a very large numeric gap between the ranges 20041201 and 20050101. Oracle will skip creating partitions for ranges with no data, so it will not create any partitions in-between. For example, the syntax for creating an interval partitioned table:

SQL> create table W_WRKFC_EVT_MONTH_F
  partition by range (EVENT_YEAR) interval(100)
  ( partition PART_MIN values less than (19900101))
  tablespace BIAPPS_DATA nologging parallel enable row movement
  as select * from W_WRKFC_EVT_MONTH_F_ORIG;

Important!!! Oracle creates a new interval partition and its partitioned local indexes as soon as the first record exceeds the last partition range value. So during an ETL, when Oracle creates a new interval partition, you may expect possibly slower mapping performance, as all local indexes on the new partition will be enabled during the run. The impact may not be significant, since the DML operations with local indexes in place will be done only for a single day of incremental data. DAC will kick in its routine to turn off local indexes on the newly created partition during the next incremental ETL.

Important!!! Make sure you remove the PART_ prefix from partition names in the DAC Action Framework scripts above. For example, use @DAC_$$PREVIOUS_MONTH_WID instead of PART_@DAC_$$PREVIOUS_MONTH_WID.

You also need to use the following SQLs to assign values to the DAC variables:

Name: $$PREVIOUS_MONTH_WID
Value: SELECT partition_name FROM user_tab_partitions WHERE table_name = 'W_WRKFC_EVT_MONTH_F' AND partition_position = (SELECT MAX(partition_position)-1 FROM user_tab_partitions WHERE table_name = 'W_WRKFC_EVT_MONTH_F')

Name: $$CURRENT_MONTH_WID
Value: SELECT partition_name FROM user_tab_partitions WHERE table_name = 'W_WRKFC_EVT_MONTH_F' AND partition_position = (SELECT MAX(partition_position) FROM user_tab_partitions WHERE table_name = 'W_WRKFC_EVT_MONTH_F')

Workflow Session Partitioning for Parallel Writer Updates

Row by row updates can significantly slow down mapping performance, making the Informatica Writer Thread the primary bottleneck during ETL. You can quickly find such cases by analyzing Thread Busy % and the volume of updates in Informatica session logs. For example:

LOAD SUMMARY
============
WRT_8036 Target: W_CAMP_HIST_F (Instance Name: [W_CAMP_HIST_F])
WRT_8041 Updated rows - Requested: 3753687 Applied: 3753687 Affected: 3753687 Rejected: 0
WRITER_1_*_1> WRT_8043 *****END LOAD SESSION*****
WRITER_1_*_1> WRT_8006 Writer run completed.

MANAGER> PETL_24031 ***** RUN INFO FOR TGT LOAD ORDER GROUP [1], CONCURRENT SET [1] *****
Thread [READER_1_1_1] created for [the read stage] of partition point [SQ_JOINER] has completed.
Total Run Time = [10753.562755] secs
Total Idle Time = [5467.169323] secs
Busy Percentage = [49.159460]
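The Busy Percentage figures in these logs are simply the share of run time a thread spent doing work rather than waiting. For the reader thread above, the relationship can be checked directly:

```sql
-- Busy % = (Total Run Time - Total Idle Time) / Total Run Time * 100
SELECT ROUND((10753.562755 - 5467.169323) / 10753.562755 * 100, 4) AS busy_pct
FROM   dual;
-- returns approximately 49.1595, matching Busy Percentage = [49.159460]
```

A writer thread whose Busy Percentage is far above the reader and transformation threads, combined with a large "Updated rows" count in the LOAD SUMMARY, is the signature of a mapping that is bottlenecked on row-by-row updates.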
INFORMATICA WORKFLOWS SESSION PARTITIONING This section covers techniques and recommendations for mapping partitioning to speed up workflows executions for large volume mappings or slow ETL jobs.SQL> create table W_WRKFC_EVT_MONTH_F partition by range (EVENT_YEAR) interval(100) ( partition PART_MIN values less than (19900101)) tablespace BIAPPS_DATA nologging parallel enable row movement as select * from W_WRKFC_EVT_MONTH_F_ORIG. You also need to use the following SQLs to assign to DAC variables: Name: $$PREVIOUS_MONTH_WID Value: SELECT partition_name FROM user_tab_partitions WHERE table_name = 'W_WRKFC_EVT_MONTH_F' AND partition_position = (SELECT MAX(partition_position)-1 FROM user_tab_partitions WHERE table_name = 'W_WRKFC_EVT_MONTH_F'). as all local indexes on the new partition will be enabled during the run.159460] 42 . For example: LOAD SUMMARY ============ WRT_8036 Target: W_CAMP_HIST_F (Instance Name: [W_CAMP_HIST_F]) WRT_8041 Updated rows .

Thread [TRANSF_1_1_1] created for [the transformation stage] of partition point [SQ_JOINER] has completed.
Total Run Time = [5758.082997] secs
Total Idle Time = [4606.931512] secs
Busy Percentage = [20.003050]
Thread work time breakdown: …
Thread [WRITER_1_*_1] created for [the write stage] of partition point [W_CAMP_HIST_F] has completed.
Total Run Time = [10696.883913] secs
Total Idle Time = [5244.599181] secs
Busy Percentage = [50.967105]

The Writer shows that all processed rows were updates, and Informatica reported a Writer thread Busy Percentage of 50%. Small volume updates can be sped up by ensuring indexes on the columns in the WHERE clause of the UPDATE statement. In our example the following UPDATE statement in the WRITER should have an index on the ROW_WID column:

WRITER_1_*_1> WRT_8124 Target Table W_CAMP_HIST_F :SQL UPDATE statement:
UPDATE W_CAMP_HIST_F SET PARTY_WID = ?, … X_LAST_UPD_WID = ? WHERE ROW_WID = ?

Otherwise every single row update would perform a Full Table Scan and result in a very low throughput of a few rows per second. Additional improvements for long running update mappings can be achieved by parallelizing concurrent updates against the same target table.

Requirements for Implementing Concurrent Updates
1. Make sure you create an index on the target table columns in the WHERE clause of your UPDATE statement, so that each UPDATE DML uses an Index Access path. Important!!! Make sure you have the required indexes in place for your Update transformations to use Index (if possible, Unique) Scans rather than expensive Full Table Scans for each update record.
2. Ensure there are no BITMAP indexes on the Target table during the concurrent UPDATE executions. Otherwise you may end up with deadlocks during your ETL.

Implement Staging Table HASH Partitioning

Oracle Table Partitioning provides an option to implement hash partitions. Every time you execute an incremental ETL, DAC will truncate the staging tables (_DS, _FS, etc.), and then Informatica SDE mappings populate them with incremental changes extracted from Source environments. Hash partitioning the identified staging table (W_CAMP_HIST_FS in our example) will ensure even data distribution across all partitions for each incremental ETL run:

SQL> RENAME w_camp_hist_fs TO w_camp_hist_fs_bak;
SQL> CREATE TABLE w_camp_hist_fs PARTITION BY HASH(integration_id) PARTITIONS 4 AS SELECT * FROM w_camp_hist_fs_bak;
SQL> SELECT partition_name FROM user_tab_partitions WHERE table_name='W_CAMP_HIST_FS';

PARTITION_NAME
------------------------------
SYS_P41
SYS_P42
SYS_P43
SYS_P44

Note: No changes need to be done to the table definition in DAC.

Important!!! If there are any indexes on the original staging table, make sure you create them on the hash partitioned table as well. You don’t need to create them as global or local; otherwise you would have to use Action Framework for them.

General recommendations for hash partitioning implementation:
1. When picking the partitioning key, consider using the unique keys, such as ROW_WID, INTEGRATION_ID, etc. If there are no unique keys, choose the column with the largest number of distinct values, which will ensure even data distribution across all table partitions.
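To confirm that the chosen key spreads rows evenly, you can count the rows in each hash partition after the staging table is loaded; roughly equal counts indicate a good partitioning key. The partition names below are the system-generated ones returned by the user_tab_partitions query above:

```sql
-- Row counts per hash partition; values should be roughly equal for a good key.
SELECT COUNT(*) FROM w_camp_hist_fs PARTITION (SYS_P41);
SELECT COUNT(*) FROM w_camp_hist_fs PARTITION (SYS_P42);
SELECT COUNT(*) FROM w_camp_hist_fs PARTITION (SYS_P43);
SELECT COUNT(*) FROM w_camp_hist_fs PARTITION (SYS_P44);
```

A badly skewed distribution (one partition holding most of the rows) means the parallel sessions described below would not finish at the same time, and a higher-cardinality key should be chosen instead.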

2. Create 4-6 hash partitions at most. Building more hash partitions and corresponding parallel sessions would not make the mapping run faster: the larger number of parallel sessions would increase the load on the Informatica tier when building CDC Lookups for each hash partition, as well as on the Target database tier performing more concurrent updates.

Create Parallel Sessions in Workflow Manager

Create the same number of sessions as the number of hash partitions, each session running against a dedicated partition, and configure the workflow to run the sessions in parallel:
1. Open the desired workflow in Informatica Workflow Manager.
2. Create four copies of the original Session in the opened Workflow.
3. Override each session’s SQL override, which joins a staging and a target table, hard-coding a unique partition name instead of the staging table name, i.e.
FROM W_CAMP_HIST_FS partition(SYS_P41) W_CAMP_HIST_FS
4. If there is a CDC Lookup, then make sure you update it to point to a dedicated partition of the staging table for each of the four sessions.
5. Configure your workflow to run the four new sessions in parallel and remove the original session.
6. Save the changes and test the updated mapping.

You may consider applying the same approach to such mappings as SIL_PositionDimensionHierarchy_AsIsUpdate, which reads and updates the W_POSITION_DH table populated by SIL_PositionDimensionHierarchy_AsIsUpdate_Full. In this case you apply hash partitioning to the W_POSITION_DH table using ROW_WID as its partitioning key.

TABLE COMPRESSION IMPLEMENTATION GUIDELINES

Oracle Database table compression can be applied effectively to optimize space consumption and reduce memory use in buffer cache in Oracle Business Analytics Data Warehouse. Compressed tables require significantly less disk storage and result in improved query performance due to reduced I/O and buffer cache requirements. It’s a valuable feature, especially in a warehouse environment, where data is loaded once and read many times by end user queries. Table compression requires careful analysis and planning to take advantage of efficient space consumption and faster end user query performance, while keeping incremental ETL within an acceptable execution time frame. Table compression should be implemented for target tables only after careful analysis of DML operation types, data volumes and ETL performance benchmarks.

Review the following recommendations and guidelines before table compression implementation:
1. The recommended Oracle Database version is 11.2.0.7 or higher.
2. You must apply the database patches 8834636 and 8930565. Check with Oracle Support for any additional database patches.
3. The majority of Initial Informatica mappings use Bulk Load, so their target tables can be compressed to deliver comparable or better ETL performance.
4. Incremental Informatica mappings always use Normal Load mode, so table compression may cause performance overhead, especially for very large incremental volumes. Make sure you carefully benchmark the mappings using compressed tables before implementing the compression in your production data warehouse.
5. Oracle Business Intelligence Applications delivers several Informatica mappings which perform mass updates during Initial ETL, using Normal Load type in Informatica. The target tables for such mappings should NOT be compressed. W_POSITION_DH, updated by SIL_PositionDimensionHierarchy_AsIsUpdate_Full, is an example of such a compression exception. If you couldn’t change their Load type to Bulk, then leave their corresponding target tables uncompressed.
6. Consider implementing table compression for Partitioned Fact tables at partition level:
a. Active partitions, loaded during incremental ETLs, should be uncompressed

b. Older, relatively static partitions can be good compression candidates
7. After compressing a table you need to rebuild all its indexes (ALTER INDEX ... REBUILD syntax).

GUIDELINES FOR ORACLE OPTIMIZER HINTS USAGE IN ETL MAPPINGS

Hash Joins versus Nested Loops in Oracle RDBMS

Though the Oracle optimizer chooses the most efficient plan with the least cost for a query, database hints can sometimes help to improve efficiency and increase overall ETL performance. If the tables used in a join have indexes defined on the joining columns in a WHERE clause, the optimizer might choose a Nested Loops join over a Hash join, accessing a table through an index defined on a column. Although this approach may start returning results sooner, the overall time to fetch all the records could be considerably longer. ETL is a batch process, measured by the overall time to load all the records, so you should avoid using nested loops by incorporating the hint USE_HASH for tables with volumes over ten million records. Specifying the hint USE_HASH changes the execution plan to use a full table scan (in some cases the optimizer might still use indexes, such as an index fast full scan) for each table listed in the hint. The initial records fetch may take more time as hash joins are built in memory, so it is important not to kill the query; the overall time for fetching all the records would be reduced quite dramatically. Important: Oracle might take up to 8-10 hours just to build hashes in memory for very large tables (over 100 million records).

The real life example below provides a comparison between NESTED LOOPS and HASH JOIN execution, reported in very large volume Oracle Business Analytics Data Warehouses. The numbers are applicable to the specific test case configuration and would vary depending on hardware specifications and database settings.

Table                         No of Rows
---------------------------   -----------
PAY_RUN_RESULT_VALUES         900 Million
PAY_RUN_RESULTS               14 Million
PAY_ASSIGNMENT_ACTIONS        50 Million
PAY_INPUT_VALUES_F            10000
PAY_ELEMENT_TYPES_F           10000
PAY_PAYROLL_ACTIONS           1445896
PAY_ELEMENT_CLASSIFICATIONS   1897
PER_TIME_PERIODS              52728

SELECT
  PAY_ASSIGNMENT_ACTIONS.ASSIGNMENT_ACTION_ID,
  PAY_ASSIGNMENT_ACTIONS.ASSIGNMENT_ID,
  PAY_RUN_RESULTS.RUN_RESULT_ID,
  PAY_RUN_RESULT_VALUES.INPUT_VALUE_ID,
  PAY_RUN_RESULT_VALUES.RESULT_VALUE,
  PAY_ELEMENT_TYPES_F.ELEMENT_TYPE_ID,
  PAY_ELEMENT_TYPES_F.INPUT_CURRENCY_CODE,
  PAY_ELEMENT_TYPES_F.OUTPUT_CURRENCY_CODE,
  PAY_PAYROLL_ACTIONS.PAY_ADVICE_DATE,
  PER_TIME_PERIODS.START_DATE,
  PER_TIME_PERIODS.END_DATE,
  PAY_PAYROLL_ACTIONS.CREATED_BY,
  PAY_PAYROLL_ACTIONS.CREATION_DATE,
  PAY_PAYROLL_ACTIONS.LAST_UPDATED_BY,
  PAY_PAYROLL_ACTIONS.LAST_UPDATE_DATE,
  PAY_RUN_RESULT_VALUES.LAST_UPDATE_DATE LAST_UPDATE_DATE1,
  PAY_RUN_RESULTS.LAST_UPDATE_DATE LAST_UPDATE_DATE2
FROM PAY_RUN_RESULT_VALUES, PAY_RUN_RESULTS, PAY_ASSIGNMENT_ACTIONS, PAY_INPUT_VALUES_F, PAY_ELEMENT_TYPES_F, PAY_PAYROLL_ACTIONS, PAY_ELEMENT_CLASSIFICATIONS, PER_TIME_PERIODS

WHERE (PAY_PAYROLL_ACTIONS.LAST_UPDATE_DATE >= TO_DATE('01/01/2007 00:00:00','MM/DD/YYYY HH24:MI:SS')
   OR PAY_INPUT_VALUES_F.LAST_UPDATE_DATE >= TO_DATE('01/01/2007 00:00:00','MM/DD/YYYY HH24:MI:SS')
   OR PAY_ELEMENT_TYPES_F.LAST_UPDATE_DATE >= TO_DATE('01/01/2007 00:00:00','MM/DD/YYYY HH24:MI:SS'))
AND PAY_RUN_RESULT_VALUES.RUN_RESULT_ID = PAY_RUN_RESULTS.RUN_RESULT_ID
AND PAY_RUN_RESULT_VALUES.INPUT_VALUE_ID = PAY_INPUT_VALUES_F.INPUT_VALUE_ID
AND PAY_RUN_RESULTS.ASSIGNMENT_ACTION_ID = PAY_ASSIGNMENT_ACTIONS.ASSIGNMENT_ACTION_ID
AND PAY_ASSIGNMENT_ACTIONS.PAYROLL_ACTION_ID = PAY_PAYROLL_ACTIONS.PAYROLL_ACTION_ID
AND PAY_RUN_RESULTS.ELEMENT_TYPE_ID = PAY_ELEMENT_TYPES_F.ELEMENT_TYPE_ID
AND PAY_ELEMENT_CLASSIFICATIONS.CLASSIFICATION_ID = PAY_ELEMENT_TYPES_F.CLASSIFICATION_ID
AND PER_TIME_PERIODS.TIME_PERIOD_ID = PAY_PAYROLL_ACTIONS.TIME_PERIOD_ID
AND PAY_PAYROLL_ACTIONS.ACTION_STATUS = 'C'
AND PAY_PAYROLL_ACTIONS.ACTION_POPULATION_STATUS = 'C'
AND PAY_ASSIGNMENT_ACTIONS.ACTION_STATUS = 'C'
AND PAY_INPUT_VALUES_F.UOM = 'M'
AND PAY_INPUT_VALUES_F.NAME = 'Pay Value'
AND PAY_PAYROLL_ACTIONS.EFFECTIVE_DATE BETWEEN PAY_ELEMENT_TYPES_F.EFFECTIVE_START_DATE AND PAY_ELEMENT_TYPES_F.EFFECTIVE_END_DATE
AND PAY_PAYROLL_ACTIONS.EFFECTIVE_DATE BETWEEN PAY_INPUT_VALUES_F.EFFECTIVE_START_DATE AND PAY_INPUT_VALUES_F.EFFECTIVE_END_DATE
AND CLASSIFICATION_NAME NOT LIKE '%Information%'
AND CLASSIFICATION_NAME NOT LIKE '%Employer%'
AND CLASSIFICATION_NAME NOT LIKE '%Balance%'
AND PAY_RUN_RESULTS.SOURCE_TYPE IN ('I','E')

The Explain Plan (hash value 1498624813) shows the optimizer accessing the large tables through index paths (INDEX RANGE and UNIQUE SCANs on PAY_ASSIGNMENT_ACTIONS_N50, PAY_RUN_RESULTS_N50, PAY_RUN_RESULT_VALUES_PK, PAY_PAYROLL_ACTIONS_N5, PER_TIME_PERIODS_PK and PAY_ELEMENT_CLASSIFICATION_PK) and joining the result sets with NESTED LOOPS, with a total estimated cost of 111K and an estimated time of 00:22:23.

The query took more than 48 hours to execute and produced 128 million records, even though the first record was fetched within 1.5 hours of the execution. The reported throughput achieved was 700 RPS.

Note: The optimizer chose to access the tables through index paths and then joined the result sets using Nested Loops.

After adding the hint USE_HASH(PAY_RUN_RESULT_VALUES PAY_RUN_RESULTS PAY_INPUT_VALUES_F PAY_ASSIGNMENT_ACTIONS PAY_ELEMENT_TYPES_F PAY_PAYROLL_ACTIONS PAY_ELEMENT_CLASSIFICATIONS PER_TIME_PERIODS) to the preceding query, the optimizer produced a new execution plan (hash value 3421230164): HASH JOINs over full table scans (TABLE ACCESS FULL) of all eight tables, with a total estimated cost of 932K and an estimated time of 03:06:29.

Even though the estimated cost went up, the query completed much faster. Below is the summary of the two executions:

Query                      CPU Cost  First records fetch    Reported Informatica throughput  Mapping execution time
-------------------------  --------  ---------------------  -------------------------------  ----------------------
No hints (nested loops)    111K      After 1 hour 30 min    700 rows / sec                   48 hours
USE_HASH hint (hash join)  932K      After 5 hours          3000 rows / sec                  10 hours

Suggested hints for Oracle Business Intelligence Applications 7.9.6

The following table summarizes the database hints which helped to improve Oracle Business Intelligence Applications 7.9.6 mappings performance in internal performance tests.

Common Dimensions

SIL_PartyDimension_Organization (Initial):
/*+ USE_HASH(PTY PER DS) NO_INDEX(PTY) */

SDE_PartyPersonDimension (Initial):
/*+ NO_INDEX(ORG) */

SIL_PartyDimension_Person (Initial):
$$HINT1: /*+ USE_HASH(PER PTY CNP SUP) */
$$HINT2: /*+ USE_HASH(PP) */

SDE_ORA_PartyOrganizationDimension (Initial):
$$HINT1: /*+ USE_HASH(HZ_ORGANIZATION_PROFILES HZ_PARTIES) */
$$HINT2: /*+ USE_HASH(DOM_ULT_DUNS, DOM_REL) */

SDE_ORA_PartyOrganizationDimension (Incr.):
/*+ USE_HASH(HZ_ORGANIZATION_PROFILES HZ_PARTIES) */

SDE_ORA_PartyPersonDimension_Customer_Full (Initial):
$$HINT1: /*+ USE_HASH(PTY PER DS) FULL(PTY) */

SDE_ORA_PartyPersonDimension_Customer (Incr.):
/*+ FULL(PER_ALL_PEOPLE_F) */

SDE_ORA_PartyContactStaging (Incr.):
$$HINT1: /*+ INDEX(PP HZ_PARTIES_U1) */
$$HINT2: /*+ USE_HL(OC TMP1) */

SDE_ORA_INVENTORYPRODUCTDIMENSION_FULL (Initial):
set DTM Buffer Size to 32000000
set Default Buffer Block Size to 128000

SIL_GLAccountDimension_SCDUpdate (Incr.):
/*+ INDEX(TARGET_TABLE W_GL_ACCOUNT_D_U1) INDEX(SCD_OUTER W_GL_ACCOUNT_D_U1) */

Siebel CRM

SIL_ResponseFact_Full (Initial):
/*+ NO_INDEX(w_regn_d) NO_INDEX(w_segment_d) NO_INDEX(offer) NO_INDEX(terr) NO_INDEX(W_WAVE_D) NO_INDEX(w_ld_wave_d) NO_INDEX(w_source_d) */

SIL_Agg_OverlappingCampaign_Accounts (Incr.):
$$HINT1: /*+ NO_INDEX(SRC) NO_INDEX(OSRC) */
$$HINT2: /*+ FULL(W_CAMP_HIST_F) */
$$HINT3: /*+ FULL(W_CAMP_HIST_F) */

SIL_Agg_OverlappingCampaign_Contacts (Incr.):
$$HINT1: /*+ NO_INDEX(SRC) NO_INDEX(OSRC) */
$$HINT2: /*+ FULL(W_CAMP_HIST_F) */
$$HINT3: /*+ FULL(W_CAMP_HIST_F) */

SIL_Agg_ResponseCampaignOffer and SIL_Agg_ResponseCampaign SIL_Agg_ProductLineRevn_CloseDate SIL_Agg_ProductLineRevn_OpenDate SIL_Agg_SalesPipelineRevn_CloseDate SIL_Agg_SalesPipelineRevn_OpenDate

Incr. Incr.

/*+ FULL(W_PARTY_PER_D) */

/*+ FULL(W_REVN_F) */

EBS Supply Chain 11.5.10

SDE_ORA_PurchaseReceiptFact (Initial / Incr.):
    /*+ FULL(RCV_TRANSACTIONS) */
SDE_ORA_StandardCostGeneral_Full (Initial / Incr.):
    /*+ USE_HASH(MTL_SYSTEM_ITEMS_B) */
SIL_ExpenseFact_FULL (Initial):
    /*+ USE_HASH(W_PROJECT_D) */
SIL_APInvoiceDistributionFact_Full (Initial):
    /*+ USE_HASH(W_AP_INV_DIST_FS PO_PLANT_LOC PO_RCPT_LOC OPERATING_UNIT_ORG PAYABLES_ORG PURCHASE_ORG W_LEDGER_D INV_TYPE DIST_TYPE SPEND_TYPE APPROVAL_STATUS PAYMENT_STATUS W_AP_TERMS_D W_PROJECT_D EXPENDITURE_ORG CREATED_BY CHANGED_BY W_XACT_SOURCE_D W_Financial_Resource_D W_GL_ACCOUNT_D W_PARTY_D W_SUPPLIER_ACCOUNT_D) */
SDE_ORA_BOMHeaderDimension_Full (Initial):
    /*+ FULL(M) */
SDE_ORA_PROJECT_HIERARCHYDIMENSION_STAGE1 (Initial / Incr.):
    /*+ USE_HASH(PPV1 PPV2 POR1 POR2 PPEVS1 PPE1 PPE2 PPA2) */
SDE_ORA_TASKS (Initial / Incr.):
    $$HINT1: /*+ USE_HASH(PA_TASKS PA_TASK_TYPES PA_PROJ_ELEMENT_VERSIONS PA_PROJ_ELEMENTS PA_PROJECT_STATUSES PA_PROJ_ELEM_VER_STRUCTURE PA_PROJECTS_ALL PA_PROJECT_TYPES_ALL PA_PROJ_ELEM_VER_SCHEDULE) */
    $$HINT2: /*+ USE_HASH(PA_PROJECTS_ALL PA_PROJECT_TYPES_ALL PA_TASKS) */
    $$HINT3: /*+ USE_HASH(PE PEV PPS) */
SIL_PRODUCTTRANSACTIONFACT (Initial):
    /*+ USE_HASH(SRC_PRO_D TO_PRO_D) */
SIL_PURCHASECOSTFACT (Initial):
    /*+ USE_HASH(W_PROJECT_D) */
SIL_APINVOICEDISTRIBUTIONFACT (Incr.):
    Apply hint to the Lkp_W_AP_INV_DIST_F query: /*+ INDEX(TARGET_TABLE) */

EBS Human Resources R12

SDE_ORA_PayrollFact_Full (Initial):
    $$HINT1: /*+ USE_HASH(PAY_RUN_RESULT_VALUES PAY_RUN_RESULTS PAY_INPUT_VALUES_F PAY_ASSIGNMENT_ACTIONS PAY_ELEMENT_TYPES_F PAY_PAYROLL_ACTIONS PAY_ELEMENT_CLASSIFICATIONS PER_TIME_PERIODS) */
    $$HINT2: /*+ ORDERED USE_HASH(PAY_RUN_RESULT_VALUES PAY_RUN_RESULTS PAY_INPUT_VALUES_F PAY_ASSIGNMENT_ACTIONS PAY_ELEMENT_TYPES_F PAY_PAYROLL_ACTIONS PAY_ELEMENT_CLASSIFICATIONS PER_TIME_PERIODS) */
    $$HINT3: /*+ FULL(PER_ALL_ASSIGNMENTS_F) FULL(PER_ALL_PEOPLE_F) */
SDE_ORA_PayrollFact_Agg_Items_Derive_Full (Initial):
    /*+ parallel(W_PAYROLL_FS,4) */
PLP_RECRUITMENTHIREAGGREGATE_LOAD (Initial):
    $$HINT1: /*+ USE_HASH(FACT MONTH PERF LOC SOURCE AGE EMP) */
PLP_WorkforceEventFact_Month (Incr.):
    /*+ FULL(suph) */

EBS Financials R12

SDE_ORA_APTransactionFact_LiabilityDistribution (Incr.):
    /*+ parallel(AP_INVOICE_DISTRIBUTIONS_ALL,4) use_hash(AP_INVOICES_ALL AP_INVOICE_DISTRIBUTIONS_ALL PO_HEADERS_ALL PO_DISTRIBUTIONS_ALL PO_LINES_ALL) */
SDE_ORA_Stage_GLJournals_Derive (Incr.):
    /*+ PARALLEL(W_ORA_GL_JOURNALS_F_TMP, 4) */
SDE_ORA_CustomerFinancialProfileDimension (Initial / Incr.):
    /*+ USE_HASH(HZ_PARTIES) */
SDE_ORA_ARTransactionFact_CreditMemoApplication (Incr.):
    /*+ USE_HASH(AR_PAYMENT_SCHEDULES_ALL RA_CUSTOMER_TRX_ALL RA_CUSTOMER_TRX_ALL1 AR_PAYMENT_SCHEDULES_ALL1 AR_DISTRIBUTIONS_ALL) */
PLP_APIncrActivityLoad (Incr.):
    /*+ index(W_AP_XACT_F, W_AP_XACT_F_M1) */
PLP_APXactsGroupAccount_A_Stage_Full (Initial):
    /*+ full(W_GL_ACCOUNT_D) full(W_STATUS_D) full(W_AP_XACT_F) full(W_XACT_TYPE_D) full(D1) full(D2) full(D3) */

EBS Projects R12

SDE_ORA_ProjectFundingHeader (Initial / Incr.):
    /*+ USE_HASH(PA_PROJECTS_ALL PA_TASKS PA_AGREEMENTS_ALL PA_SUMMARY_PROJECT_FUNDINGS) */
SDE_ORA_ProjectInvoiceLine_Fact (Initial):
    /*+ USE_HASH(pa_draft_invoice_items pa_tasks pa_draft_invoices_all pa_projects_all pa_agreements_all pa_lookups) */
SDE_ORA_ProjectCostLine_Fact (Initial):
    /*+ USE_HASH(pa_cost_distribution_lines_all pa_expenditure_items_all pa_expenditures_all pa_implementations_all pa_implementations_all_1 gl_sets_of_books pa_project_assignments pa_resource_list_members pa_lookups pa_projects_all pa_project_types_all pa_expenditure_types) */
SIL_ProjectFundingHeader_Fact (Incr.):
    /*+ INDEX(LOOKUP_TABLE W_PARTY_D_M3) */

EBS Order Management (Enterprise Sales) 11.5.10

SIL_SalesPickLinesFact_Full (Initial):
    /*+ FULL(A18) FULL(A19) FULL(A20) FULL(A21) FULL(A22) */
SIL_SalesOrderLinesFact_Full (Initial):
    /*+ FULL(A18) FULL(A19) FULL(A20) FULL(A21) FULL(A22) */
SIL_SalesInvoiceLinesFact_Full (Initial):
    /*+ FULL(A18) FULL(A19) FULL(A20) FULL(A21) FULL(A22) */
SIL_SalesScheduleLinesFact_Full (Initial):
    /*+ FULL(A18) FULL(A19) FULL(A20) FULL(A21) FULL(A22) */

EBS Service 11.5.10

SDE_ORA_EntitlementDimension (Initial):
    /*+ parallel(OKC_K_LINES_TL,4) parallel(OKC_K_LINES_B,4) */
SDE_ORA_AgreeDimension (Initial):
    /*+ NO_MERGE(fndv) */
SDE_ORA_AbsenceEvent (Initial):
    /*+ use_hash(per_absence_attendances per_all_assignments_f per_absence_attendance_types per_abs_attendance_reasons) */
SIL_ActivityFact_Full (Initial):
    /*+ use_hash(W_ACTIVITY_FS W_FS_ACT_CST_FS W_SOURCE_D W_ENTLMNT_D W_AGREE_D W_REGION_D W_SRVREQ_D W_ASSET_D) */

PSFT HCM 8.9, 9.x

SDE_PSFT_UserDimension_PersonalInformation (Initial):
    /*+ use_hash(login person address names perdata bus_email alt_email bus_phones cell_phones fax_phones pgr_phones) */
SDE_PSFT_SupplierAccountDimension (Initial):
    /*+ use_hash(v vaddr vcont vphn) */

You may consider using the recommended hints for the mappings above on other versions only after careful testing and benchmarking of ETL runtime and performance. These hints are not included in the packaged mappings. Each mapping may have $$HINT placeholders, defined in DAC. You can consider applying them to your environments after verifying mapping execution with the hints in your test environment. You can manually define $$HINT variables in your DAC and Informatica repositories by following the steps below:

- Connect to Informatica PowerCenter 8.6 Designer
- Check out the selected mapping and drag it to the Mapplet Designer palette
- Navigate to the Mapplets menu -> Parameters and Variables
- Click on the Add New Variable icon
- Fill in the following fields in a new line:
  o Name: $$HINT1, 2, etc.
  o Type: Parameter
  o Datatype: String
  o Precision: make sure you specify sufficient precision to cover your hint value
- Click OK
- Save the changes and check in the mapping into the Informatica repository

- Connect to Informatica PowerCenter 8.6 Workflow Manager
- Check out the corresponding session and drag it to the Task Developer palette
- Open the session
- Click on the Mapping tab and select SQ (SQL Qualifier) under the Sources folder in the left pane
- Click on the Select Query attribute value
- Insert the defined $$HINT variable
- Save the changes
- Connect to the DAC client
- Select the custom container
- Click on the Design button and select the Tasks menu in the right pane
- Retrieve the task which corresponds to the selected Informatica mapping
- Click on the Parameters menu in the lower pane
- Fill in the fields in a new line:
  o Name: use the exact variable name defined in Informatica above
  o Data Type: Text
  o Load Type: select the load type from the list of values
  o Value: enter the hint value here
- Save the changes
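Once the parameter is defined, the session's Select Query carries the placeholder at the point where the hint should land. The sketch below is illustrative only — the tables, columns and join are hypothetical and not taken from a packaged mapping:

```sql
-- DAC supplies the literal hint text for $$HINT1 at runtime, for example
-- /*+ USE_HASH(PP OC) */. If the parameter value is empty, the comment
-- collapses to /*+  */ and the statement runs unhinted.
SELECT /*+ $$HINT1 */
       pp.party_id,
       pp.party_name,
       oc.contact_point_id
FROM   hz_parties        pp,
       hz_contact_points oc
WHERE  oc.owner_table_id = pp.party_id
```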

Verify the changes by inspecting the session log for the selected mapping during the next ETL run.

Important: DAC 10.1.3.4.1 invokes the Informatica PowerCenter 8.6 command line API with the <-lpf> option. Some of the recommended hints can be very long and may not fit into a single line. As a result, Informatica may not pick up the valid parameter values. If your DAC and Informatica servers share the same machine, you can resolve the issue by implementing the following steps:
- Connect to the machine running the DAC and Informatica servers
- Open <DAC_HOME>\Conf\infa_command.xml
- Replace each occurrence of <-lpf> with <-paramfile> in the configuration file
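On a host with a Unix-style shell available, the substitution can be scripted. This is a sketch only: the DAC_HOME path below is a placeholder for your actual install location, and the demo file merely stands in for the real infa_command.xml so the snippet is self-contained.

```shell
# Placeholder install location -- point this at your real DAC home.
DAC_HOME=${DAC_HOME:-/tmp/dac_demo}
mkdir -p "$DAC_HOME/Conf"
# Demo stand-in so the snippet runs on its own; your file already exists.
[ -f "$DAC_HOME/Conf/infa_command.xml" ] ||
    printf '<parameter>-lpf</parameter>\n' > "$DAC_HOME/Conf/infa_command.xml"
# Keep a backup, then swap every -lpf occurrence for -paramfile.
cp "$DAC_HOME/Conf/infa_command.xml" "$DAC_HOME/Conf/infa_command.xml.bak"
sed -i 's/-lpf/-paramfile/g' "$DAC_HOME/Conf/infa_command.xml"
```

On Windows, where DAC usually runs, make the same replacement with a text editor instead.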

- Save the changes
- Restart the DAC server and test the solution

Using Oracle Optimizer Dynamic Sampling for big staging tables

A typical Source Dependent Extract (SDE) task contains the following steps:
- Truncate staging table
- Extract data from one or more OLTP tables into the staging table
- Analyze staging table

The last step computes statistics on the table to make sure that the Oracle Cost Based Optimizer (CBO) builds the best execution plan for the next task. However, during initial loads that process very large data volumes, the staging table may become so large (hundreds of millions of rows) that the Analyze Table job would take many hours to complete. Oracle RDBMS offers a faster, yet accurate enough alternative: use dynamic sampling instead of gathering table statistics.

The purpose of dynamic sampling is to improve server performance by determining more accurate estimates for predicate selectivity and statistics for tables and indexes. Oracle CBO determines whether a query would benefit from dynamic sampling at query compile time. The Oracle Optimizer issues a recursive SQL statement to scan a small random sample of the table's blocks and applies the relevant single table predicates to estimate selectivity for each predicate. In some cases sample cardinality can also be used to estimate table cardinality.

The internal tests, performed on large staging tables, show that the optimizer can produce efficient execution plans utilizing the dynamic sampling feature in much shorter time compared to gathering table statistics using conventional methods. Below are the details of one of the internal benchmark tests:

Hardware configuration: 8 CPU cores x 16Gb RAM x 2Tb NAS server with Linux 64bit OS
Target database: Oracle 10.2.0.3 64bit
Test configuration: the query involves a large staging table with over 100 million rows, joined with two smaller dimension tables

Test scenarios:
1. No statistics were collected on the staging table (dynamic sampling).
2. Statistics were computed on the staging table using the DBMS_STATS package.

Scenario                            Statistics / Sampling Execution Time   Query Execution Time
Dynamic sampling                    10.6 sec                               2 hours 27 min 45 sec
Statistics computing (DBMS_STATS)   53 min 26 sec                          2 hours 20 min 43 sec

The optimizer estimated the identical run time for both execution plans. The overall run time for the second case was approximately 45 minutes longer compared to the dynamic sampling scenario.

Enabling Dynamic Sampling at the system level may cause additional performance overhead, so it should be selectively applied only to the mappings which run queries against the large staging table, by inserting Dynamic Sampling hints into the appropriate mapping SQLs.

Note that the DAC version released with Oracle Business Intelligence Applications Version 7.9.6 does not disable computing statistics at a table level. To work around this limitation, you can abort the execution plan in DAC, mark the Analyze Table task for your staging table as Completed, and restart the Execution Plan.

Refer to the publication Oracle Database Performance Tuning Guide (10g Release 2) for more details.
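A statement-level dynamic sampling hint looks like the sketch below. The staging and dimension table names are hypothetical stand-ins, and sampling level 4 is just one reasonable choice (levels range from 0 to 10):

```sql
-- The alias in the hint must match the alias used in the FROM clause;
-- only the freshly loaded staging table is sampled.
SELECT /*+ DYNAMIC_SAMPLING(stg 4) */
       stg.integration_id,
       dim.row_wid
FROM   w_example_fs stg,      -- large staging table, no statistics
       w_example_d  dim       -- small dimension table
WHERE  stg.datasource_num_id = dim.datasource_num_id;
```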

CUSTOM INDEXES IN ORACLE EBS FOR INCREMENTAL LOADS PERFORMANCE

Introduction

Oracle EBS source database tables contain mandatory LAST_UPDATE_DATE columns, which are used by Oracle BI Applications for capturing incremental data changes. Some source tables used by Oracle BI Applications do not have an index on the LAST_UPDATE_DATE column, which hampers performance of incremental loads. There are three categories of such source EBS tables:
- Tables that do not have indexes on LAST_UPDATE_DATE in any EBS releases, with no performance implications reported for indexes on the LAST_UPDATE_DATE column. The creation of custom indexes on LAST_UPDATE_DATE columns for tables in this category has been reviewed and approved by Oracle's EBS Performance Group.
- Tables that have indexes on LAST_UPDATE_DATE columns, introduced in Oracle EBS Release 12.
- Tables that cannot have indexes on LAST_UPDATE_DATE because of serious performance degradations in the source EBS environments.

Custom OBIEE indexes in EBS 11i and R12 systems

The first category covers tables which do not have indexes on LAST_UPDATE_DATE in any EBS releases. All Oracle EBS 11i and R12 customers should create the custom indexes using the DDL script provided below. If your source system is one of the following:
- EBS R12
- EBS 11i release 11.5.10
- EBS 11i release 11.5.9 or lower, migrated to OATM*
then replace <IDX_TABLESPACE> with APPS_TS_TX_IDX prior to running the DDL. If your source system is EBS 11i release 11.5.9 or lower and it has not been migrated to OATM*, replace <IDX_TABLESPACE> with <PROD>X, where <PROD> is the owner of the table which will be indexed on the LAST_UPDATE_DATE column.

DDL script for custom index creation:

CREATE index AP.OBIEE_AP_AE_HEADERS_ALL ON AP.AP_AE_HEADERS_ALL(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE index AP.OBIEE_AP_EXP_REP_HEADERS_ALL ON AP.AP_EXPENSE_REPORT_HEADERS_ALL(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE index AP.OBIEE_AP_HOLDS_ALL ON AP.AP_HOLDS_ALL(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE index AP.OBIEE_AP_INVOICE_PAYMENTS_ALL ON AP.AP_INVOICE_PAYMENTS_ALL(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE index AP.OBIEE_AP_INVOICES_ALL ON AP.AP_INVOICES_ALL(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE index AP.OBIEE_AP_PAYMENT_SCHEDULES_ALL ON AP.AP_PAYMENT_SCHEDULES_ALL(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE index CST.OBIEE_CST_COST_TYPES ON CST.CST_COST_TYPES(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE index GL.OBIEE_GL_JE_HEADERS ON GL.GL_JE_HEADERS(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

CREATE index AR.OBIEE_AR_CASH_RECEIPTS_ALL ON AR.AR_CASH_RECEIPTS_ALL(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE index AR.OBIEE_HZ_CONTACT_POINTS ON AR.HZ_CONTACT_POINTS(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE index AR.OBIEE_HZ_CUST_ACCOUNT_ROLES ON AR.HZ_CUST_ACCOUNT_ROLES(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE index AR.OBIEE_HZ_CUST_ACCT_SITES_ALL ON AR.HZ_CUST_ACCT_SITES_ALL(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE index AR.OBIEE_HZ_CUST_SITE_USES_ALL ON AR.HZ_CUST_SITE_USES_ALL(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE index AR.OBIEE_HZ_LOCATIONS ON AR.HZ_LOCATIONS(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE index AR.OBIEE_HZ_ORGANIZATION_PROFILES ON AR.HZ_ORGANIZATION_PROFILES(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE index AR.OBIEE_HZ_PARTY_SITES ON AR.HZ_PARTY_SITES(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE index AR.OBIEE_HZ_PERSON_PROFILES ON AR.HZ_PERSON_PROFILES(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE index AR.OBIEE_HZ_RELATIONSHIPS ON AR.HZ_RELATIONSHIPS(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE index ONT.OBIEE_OE_ORDER_HEADERS_ALL ON ONT.OE_ORDER_HEADERS_ALL(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE index ONT.OBIEE_OE_ORDER_HOLDS_ALL ON ONT.OE_ORDER_HOLDS_ALL(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE index PER.OBIEE_PAY_ELEMENT_TYPES_F ON PER.PAY_ELEMENT_TYPES_F(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE index PER.OBIEE_PAY_INPUT_VALUES_F ON PER.PAY_INPUT_VALUES_F(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE index PO.OBIEE_RCV_SHIPMENT_HEADERS ON PO.RCV_SHIPMENT_HEADERS(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE index PO.OBIEE_RCV_SHIPMENT_LINES ON PO.RCV_SHIPMENT_LINES(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE index WSH.OBIEE_WSH_DELIVERY_DETAILS ON WSH.WSH_DELIVERY_DETAILS(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;
CREATE index WSH.OBIEE_WSH_NEW_DELIVERIES ON WSH.WSH_NEW_DELIVERIES(LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

There is one more custom index, recommended for Supply Chain Analytics, on the AP_NOTES.SOURCE_OBJECT_ID column:

CREATE index AP.OBIEE_AP_NOTES ON AP.AP_NOTES(SOURCE_OBJECT_ID) tablespace <IDX_TABLESPACE>;

Important: Since all indexes in this section have the prefix OBIEE_ and do not follow standard Oracle EBS index naming conventions, Autopatch might fail during future upgrades if Oracle EBS introduces indexes on LAST_UPDATE_DATE columns for these tables. In such cases conflicting OBIEE_ indexes should be dropped and Autopatch restarted.

Important: Make sure you use FND_STATS to compute statistics on the newly created indexes and update statistics on the newly indexed table columns in the EBS database.
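A sketch of the FND_STATS calls for one of the new indexes, run as the APPS user. The two procedures below are the standard EBS statistics APIs; all optional parameters are left at their defaults, so verify the exact signatures in your EBS release before use:

```sql
BEGIN
   -- Refresh table-level and column-level statistics on the indexed table.
   FND_STATS.GATHER_TABLE_STATS(ownname => 'AP', tabname => 'AP_INVOICES_ALL');
   -- Gather statistics for the new custom index itself.
   FND_STATS.GATHER_INDEX_STATS(ownname => 'AP', indname => 'OBIEE_AP_INVOICES_ALL');
END;
/
```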

Custom EBS indexes in EBS 11i source systems

The second category covers tables which have indexes on LAST_UPDATE_DATE, officially introduced in Oracle EBS Release 12. All Oracle EBS 11i and R12 customers should create the custom indexes using the DDL script provided below. If your source system is one of the following:
- EBS R12
- EBS 11i release 11.5.10
- EBS 11i release 11.5.9 or lower, migrated to OATM*
then replace <IDX_TABLESPACE> with APPS_TS_TX_IDX prior to running the DDL. If your source system is EBS 11i release 11.5.9 or lower and it has not been migrated to OATM*, replace <IDX_TABLESPACE> with <PROD>X, where <PROD> is the owner of the table which will be indexed on the LAST_UPDATE_DATE column.

DDL script for custom index creation:

CREATE index PO.PO_HEADERS_N9 ON PO.PO_HEADERS_ALL (LAST_UPDATE_DATE)
  STORAGE (INITIAL 4K NEXT 1M MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0)
  INITRANS 2 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;
CREATE index PO.PO_LINES_N10 ON PO.PO_LINES_ALL (LAST_UPDATE_DATE)
  STORAGE (INITIAL 4K NEXT 4K MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0)
  INITRANS 2 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;
CREATE index PO.PO_LINE_LOCATIONS_N11 ON PO.PO_LINE_LOCATIONS_ALL (LAST_UPDATE_DATE)
  STORAGE (INITIAL 4K NEXT 2M MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0)
  INITRANS 2 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;
CREATE index PO.PO_DISTRIBUTIONS_N13 ON PO.PO_DISTRIBUTIONS_ALL (LAST_UPDATE_DATE)
  STORAGE (INITIAL 4K NEXT 2M MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0)
  INITRANS 2 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;
CREATE index PO.PO_REQUISITION_HEADERS_N6 ON PO.PO_REQUISITION_HEADERS_ALL (LAST_UPDATE_DATE)
  STORAGE (INITIAL 4K NEXT 250K MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0)
  INITRANS 4 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;
CREATE index PO.PO_REQUISITION_LINES_N17 ON PO.PO_REQUISITION_LINES_ALL (LAST_UPDATE_DATE)
  STORAGE (INITIAL 4K NEXT 250K MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0)
  INITRANS 4 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;
CREATE index PO.PO_REQ_DISTRIBUTIONS_N6 ON PO.PO_REQ_DISTRIBUTIONS_ALL (LAST_UPDATE_DATE)
  STORAGE (INITIAL 4K NEXT 250K MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0)
  INITRANS 4 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;
CREATE index PO.RCV_TRANSACTIONS_N23 ON PO.RCV_TRANSACTIONS (LAST_UPDATE_DATE)
  STORAGE (INITIAL 4K NEXT 2M MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0)
  INITRANS 2 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;
CREATE index AR.RA_CUSTOMER_TRX_N14 ON AR.RA_CUSTOMER_TRX_ALL (LAST_UPDATE_DATE)
  STORAGE (INITIAL 4K NEXT 4M MINEXTENTS 1 MAXEXTENTS 50 PCTINCREASE 0)
  INITRANS 4 MAXTRANS 255 PCTFREE 10 tablespace <IDX_TABLESPACE>;

Make sure you don't change the index names, to avoid any future patch or upgrade failures on the source EBS side.

Important: Make sure you use FND_STATS to compute statistics on the newly created indexes and update statistics on the newly indexed table columns in the EBS database.

Since all custom indexes above follow standard Oracle EBS index naming conventions, any future upgrades would not be affected.

*) Oracle Applications Tablespace Model (OATM): Oracle EBS release 11.5.9 and lower uses two tablespaces for each Oracle Applications product, one for the tables and one for the indexes. The old tablespace model standard naming convention for tablespaces is a product's Oracle schema name with the suffix D for data tablespaces and X for index tablespaces. For example, the default tablespaces for Oracle Payables tables and indexes are APD and APX, respectively. Oracle EBS 11.5.10 and R12 use the new Oracle Applications Tablespace Model. OATM uses 12 locally managed tablespaces across all products; indexes on transaction tables are held in a separate tablespace, APPS_TS_TX_IDX, designated for transaction table indexes. Customers running pre-11.5.10 releases can migrate to OATM using the OATM Migration utility. Refer to Oracle Metalink Note 248857.1 for more details.

Oracle EBS tables with high transactional load

The following Oracle EBS tables are used for high volume transactional data processing, so the introduction of indexes on LAST_UPDATE_DATE may cause additional overhead for some OLTP operations. The majority of customers will not see any significant impact on OLTP applications performance. Oracle BI Applications customers may consider creating custom indexes on LAST_UPDATE_DATE for these tables only after benchmarking incremental ETL performance and analyzing the OLTP applications impact. To analyze the impact on the EBS source database, you can generate an Automatic Workload Repository (AWR) report during the execution of OLTP batch programs producing heavy inserts / updates into the tables below, and review the Segment Statistics section for resource contentions caused by custom LAST_UPDATE_DATE indexes. Refer to Oracle RDBMS documentation for more details on AWR usage.

Make sure you use the following pattern for creating custom indexes on the tables listed below:

CREATE index <Prod>.OBIEE_<Table_Name> ON <Prod>.<Table_Name> (LAST_UPDATE_DATE) tablespace <IDX_TABLESPACE>;

Prod   Table Name
AP     AP_EXPENSE_REPORT_LINES_ALL
AP     AP_INVOICE_DISTRIBUTIONS_ALL
AP     AP_AE_LINES_ALL
AP     AP_PAYMENT_HIST_DISTS
AR     AR_PAYMENT_SCHEDULES_ALL
AR     AR_RECEIVABLE_APPLICATIONS_ALL
AR     RA_CUST_TRX_LINE_GL_DIST_ALL
AR     RA_CUSTOMER_TRX_LINES_ALL
BOM    BOM_COMPONENTS_B

BOM    BOM_STRUCTURES_B
CST    CST_ITEM_COSTS
GL     GL_BALANCES
GL     GL_DAILY_RATES
GL     GL_JE_LINES
INV    MTL_MATERIAL_TRANSACTIONS
INV    MTL_SYSTEM_ITEMS_B
ONT    OE_ORDER_LINES_ALL
PER    PAY_PAYROLL_ACTIONS
PO     RCV_SHIPMENT_LINES
WSH    WSH_DELIVERY_ASSIGNMENTS
WSH    WSH_DELIVERY_DETAILS

Custom EBS indexes on CREATION_DATE in EBS 11i source systems

Oracle EBS source database tables contain another mandatory column, CREATION_DATE, which can be used by Oracle BI Applications for capturing initial data subsets. You may consider creating custom indexes on CREATION_DATE if your initial ETL extracts a subset of historic data. You can use the same guidelines for creating custom indexes on CREATION_DATE columns to improve initial ETL performance, after careful benchmarking of EBS source environment performance.

CUSTOM AGGREGATES FOR BETTER QUERY PERFORMANCE

Introduction

The Oracle BI Enterprise Edition (OBIEE) logical model for Oracle Business Intelligence Applications allows for building logical business queries, which may result in rather complex physical SQLs (sometimes multiple physical SQLs per logical query). Pre-aggregation, using Oracle Materialized Views (MV) to build complex views and precompute summaries, in conjunction with Query Rewrite, can significantly improve end user query performance. Query Rewrite is critical for BI Analytics Warehouse logical queries handled by OBIEE: the database optimizer transparently rewrites a physical SQL, generated by OBIEE, to use a custom MV. Since query rewrite is transparent, you do not need to expose the MV in the RPD physical or logical layers, or make any changes to your logical SQL. MVs can be added or dropped in the physical warehouse schema without invalidating the original logical maps in OBIEE.

Database Configuration Requirements for using MVs

1. Make sure you set the following parameters in your Target Warehouse init.ora:

query_rewrite_enabled = true
query_rewrite_integrity = trusted
star_transformation_enabled = true

2. Issue the following database grants to your warehouse schema:

GRANT query rewrite TO <dwh_user>;
GRANT create materialized view TO <dwh_user>;
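You can sanity-check the parameter settings from a SQL*Plus session in the warehouse schema before testing rewrite; v$parameter is the standard dynamic view for this:

```sql
-- All three parameters should come back with the values set above.
SELECT name, value
FROM   v$parameter
WHERE  name IN ('query_rewrite_enabled',
                'query_rewrite_integrity',
                'star_transformation_enabled');
```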

Custom Materialized View Guidelines

The following example provides step-by-step instructions on how to build an MV and ensure query rewrite.

1. Identify a slow physical SQL generated by OBIEE, and review the SQL logic:

SELECT SUM(CASE WHEN T263758.W_STATUS_CODE = 'APPROVED'
           THEN (T631953.LINE_AMT - T631953.CANCELLED_AMT) * T631953.GLOBAL1_EXCHANGE_RATE
           ELSE 0 END) AS c1,
       T31328.PER_NAME_YEAR AS c2,
       T31328.CAL_MONTH AS c3,
       SUBSTR(T31328.MONTH_NAME, 1, 3) AS c5,
       NVL(T257401.XV_LOB, 'Unknown') AS c6
FROM   W_INVENTORY_PRODUCT_D T257401 /* Dim_W_INVENTORY_PRODUCT_D */,
       W_DAY_D T31328 /* Dim_W_DAY_D_Common */,
       W_STATUS_D T263758 /* Dim_W_STATUS_D_Purchase_Order_Status */,
       W_STATUS_D T278452 /* Dim_W_STATUS_D_Purchase_Order_Cycle_Status */,
       W_XACT_TYPE_D T473562 /* Dim_W_XACT_TYPE_D_Purchase_Order_Shipment_Type */,
       W_XACT_TYPE_D T476739 /* Dim_W_XACT_TYPE_D_Purchase_Order_Consigned_Type */,
       W_PURCH_SCHEDULE_LINE_F T631953 /* Fact_W_PURCH_SCHEDULE_LINE_F_POApproval_Date */
WHERE  (T31328.ROW_WID = T631953.ORDERED_ON_DT_WID
   AND T257401.ROW_WID = T631953.INVENTORY_PROD_WID
   AND T263758.ROW_WID = T631953.APPROVAL_STATUS_WID
   AND T278452.ROW_WID = T631953.CYCLE_STATUS_WID
   AND T473562.ROW_WID = T631953.SHIPMENT_TYPE_WID
   AND T476739.ROW_WID = T631953.CONSIGNED_TYPE_WID
   AND T31328.PER_NAME_YEAR = '2010'
   AND T631953.DELETE_FLG = 'N'
   AND T278452.W_SUBSTATUS_CODE <> 'CANCELLED'
   AND T473562.W_XACT_TYPE_CODE <> 'PREPAYMENT'
   AND (T278452.ROW_WID IN (0) OR T278452.W_STATUS_CLASS IN ('PURCH_CYCLE'))
   AND T476739.W_XACT_TYPE_CODE <> 'CONSIGNED-CONSUMED')
GROUP BY T31328.PER_NAME_YEAR, T31328.CAL_MONTH, SUBSTR(T31328.MONTH_NAME, 1, 3),
       NVL(T257401.XV_LOB, 'Unknown')

Elapsed: 00:02:06.26

The query execution plan is below:

Plan hash value: 909913791

| Id | Operation                      | Name                    | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT               |                         |    51 |  8670 |       |  250K  (4) | 01:15:01 |
|  1 |  HASH GROUP BY                 |                         |    51 |  8670 |       |  250K  (4) | 01:15:01 |
|* 2 |   HASH JOIN                    |                         |  670K |  108M |       |  249K  (4) | 01:15:00 |
|  3 |    INDEX FULL SCAN             | W_STATUS_D_U2           |    68 |   816 |       |     1  (0) | 00:00:01 |
|* 4 |    HASH JOIN                   |                         |  670K |  100M |       |  249K  (4) | 01:14:59 |
|  5 |     VIEW                       | index$_join$_006        |   108 |  1080 |       |     3 (34) | 00:00:01 |
|* 6 |      HASH JOIN                 |                         |       |       |       |            |          |
|  7 |       BITMAP CONVERSION TO ROWIDS |                      |   108 |  1080 |       |     1  (0) | 00:00:01 |
|* 8 |        BITMAP INDEX FULL SCAN  | IDX_XACT_TYPE_D         |       |       |       |            |          |
|  9 |       INDEX FAST FULL SCAN     | W_XACT_TYPE_D_P1        |   108 |  1080 |       |     1  (0) | 00:00:01 |
|*10 |     HASH JOIN                  |                         |  673K |   95M |       |  249K  (4) | 01:14:59 |
| 11 |      VIEW                      | index$_join$_005        |   108 |  1080 |       |     3 (34) | 00:00:01 |
|*12 |       HASH JOIN                |                         |       |       |       |            |          |
| 13 |        BITMAP CONVERSION TO ROWIDS |                     |   108 |  1080 |       |     1  (0) | 00:00:01 |
|*14 |         BITMAP INDEX FULL SCAN | IDX_XACT_TYPE_D         |       |       |       |            |          |
| 15 |        INDEX FAST FULL SCAN    | W_XACT_TYPE_D_P1        |   108 |  1080 |       |     1  (0) | 00:00:01 |
|*16 |      HASH JOIN                 |                         |  676K |   89M |   65M |  249K  (4) | 01:14:59 |
|*17 |       HASH JOIN                |                         |  676K |   58M |       | 54434  (4) | 00:16:20 |
|*18 |        TABLE ACCESS FULL       | W_STATUS_D              |     5 |   115 |       |     2  (0) | 00:00:01 |
|*19 |        HASH JOIN               |                         | 1554K |   99M |       | 54417  (4) | 00:16:20 |
| 20 |         PART JOIN FILTER CREATE| :BF0000                 |   372 |  6696 |       |     8  (0) | 00:00:01 |
| 21 |          TABLE ACCESS BY INDEX ROWID | W_DAY_D           |   372 |  6696 |       |     8  (0) | 00:00:01 |
|*22 |           INDEX RANGE SCAN     | X_PER_NAME_YEAR         |   372 |       |       |     1  (0) | 00:00:01 |
| 23 |         PARTITION RANGE JOIN-FILTER |                    | 8811K |  411M |       | 54328  (4) | 00:16:18 |
|*24 |          TABLE ACCESS FULL     | W_PURCH_SCHEDULE_LINE_F | 8811K |  411M |       | 54328  (4) | 00:16:18 |
| 25 |       TABLE ACCESS FULL        | W_INVENTORY_PRODUCT_D   |   23M | 1064M |       |  148K  (5) | 00:44:28 |

Predicate Information (identified by operation id):

   2 - access("T263758"."ROW_WID"="T631953"."APPROVAL_STATUS_WID")
   4 - access("T476739"."ROW_WID"="T631953"."CONSIGNED_TYPE_WID")
   6 - access(ROWID=ROWID)
   8 - filter("T476739"."W_XACT_TYPE_CODE"<>'CONSIGNED-CONSUMED')
  10 - access("T473562"."ROW_WID"="T631953"."SHIPMENT_TYPE_WID")
  12 - access(ROWID=ROWID)
  14 - filter("T473562"."W_XACT_TYPE_CODE"<>'PREPAYMENT')
  16 - access("T257401"."ROW_WID"="T631953"."INVENTORY_PROD_WID")
  17 - access("T278452"."ROW_WID"="T631953"."CYCLE_STATUS_WID")
  18 - filter(("T278452"."W_STATUS_CLASS"='PURCH_CYCLE' OR "T278452"."ROW_WID"=0) AND
              "T278452"."W_SUBSTATUS_CODE"<>'CANCELLED')
  19 - access("T31328"."ROW_WID"="T631953"."ORDERED_ON_DT_WID")
  22 - access("T31328"."PER_NAME_YEAR"='2010')

This query can be rewritten to move the aggregation logic into a Materialized View.

2. Create a Materialized View.

Note: Consider using the same aliases to physical tables in your MV as in the original physical SQL.

CREATE MATERIALIZED VIEW CUST_W_PURCH_SCHED_LINE_F_MV1
BUILD IMMEDIATE
REFRESH COMPLETE
ENABLE QUERY REWRITE AS
SELECT t31328.per_name_year,
       t31328.cal_month,
       t31328.month_name,
       t631953.inventory_prod_wid,
       t631953.approval_status_wid,
       t631953.cycle_status_wid,
       t631953.shipment_type_wid,
       t631953.consigned_type_wid,
       t631953.delete_flg,
       SUM(t631953.line_amt) line_amt,
       SUM(t631953.cancelled_amt) cancelled_amt,
       SUM((t631953.line_amt - t631953.cancelled_amt) * t631953.global1_exchange_rate) amt,
       SUM(CASE
             WHEN t263758.w_status_code = 'APPROVED' THEN
              (t631953.line_amt - t631953.cancelled_amt) * t631953.global1_exchange_rate
             ELSE 0
           END) AS amt0
FROM   w_purch_schedule_line_f t631953,
       w_day_d t31328,
       w_status_d t263758
WHERE  t631953.ordered_on_dt_wid = t31328.row_wid
AND    t263758.row_wid = t631953.approval_status_wid
GROUP BY t31328.per_name_year,
       t31328.cal_month,
       t31328.month_name,
       t631953.inventory_prod_wid,
       t631953.approval_status_wid,
       t631953.cycle_status_wid,
       t631953.shipment_type_wid,
       t631953.consigned_type_wid,
       t631953.delete_flg
/

Elapsed: 00:01:17.08

The MV is populated as soon as you execute the CREATE MATERIALIZED VIEW DDL. Subsequent refreshes are handled via DBMS_MVIEW.REFRESH.

Note: Starting from Oracle 10g, query rewrite is also possible when your SELECT statements contain analytic functions, full outer joins, and set operations such as UNION, MINUS or INTERSECT.
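A complete refresh of the example MV can be requested through the standard DBMS_MVIEW package; 'C' selects the complete-refresh method, matching the REFRESH COMPLETE clause above:

```sql
BEGIN
   -- 'C' = complete refresh; 'F' requests a fast refresh where an MV log exists.
   DBMS_MVIEW.REFRESH(list => 'CUST_W_PURCH_SCHED_LINE_F_MV1', method => 'C');
END;
/
```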

3. Verify the use of the MV and query rewrite in the original physical SQL by re-running the query and checking its plan:

Plan hash value: 3197775814

| Id | Operation                       | Name                          | Rows  | Bytes | Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT                |                               |    51 |  7089 | 97151  (1) | 00:29:09 |
|  1 |  HASH GROUP BY                  |                               |    51 |  7089 | 97151  (1) | 00:29:09 |
|  2 |   NESTED LOOPS                  |                               |       |       |            |          |
|  3 |    NESTED LOOPS                 |                               | 48291 | 6555K | 97147  (1) | 00:29:09 |
|* 4 |     HASH JOIN                   |                               | 48291 | 4291K |   412  (6) | 00:00:08 |
|  5 |      VIEW                       | index$_join$_006              |   108 |  1080 |     3 (34) | 00:00:01 |
|* 6 |       HASH JOIN                 |                               |       |       |            |          |
|  7 |        BITMAP CONVERSION TO ROWIDS |                            |   108 |  1080 |     1  (0) | 00:00:01 |
|* 8 |         BITMAP INDEX FULL SCAN  | IDX_XACT_TYPE_D               |       |       |            |          |
|  9 |        INDEX FAST FULL SCAN     | W_XACT_TYPE_D_P1              |   108 |  1080 |     1  (0) | 00:00:01 |
|*10 |      HASH JOIN                  |                               | 48522 | 3838K |   408  (5) | 00:00:08 |
| 11 |       VIEW                      | index$_join$_005              |   108 |  1080 |     3 (34) | 00:00:01 |
|*12 |        HASH JOIN                |                               |       |       |            |          |
| 13 |         BITMAP CONVERSION TO ROWIDS |                           |   108 |  1080 |     1  (0) | 00:00:01 |
|*14 |          BITMAP INDEX FULL SCAN | IDX_XACT_TYPE_D               |       |       |            |          |
| 15 |         INDEX FAST FULL SCAN    | W_XACT_TYPE_D_P1              |   108 |  1080 |     1  (0) | 00:00:01 |
|*16 |       HASH JOIN                 |                               | 48755 | 3380K |   405  (5) | 00:00:08 |
|*17 |        TABLE ACCESS FULL        | W_STATUS_D                    |     5 |   115 |     2  (0) | 00:00:01 |
|*18 |        MAT_VIEW REWRITE ACCESS FULL | CUST_W_PURCH_SCHED_LINE_F_MV1 | 112K | 5250K | 401 (5) | 00:00:08 |
|*19 |     INDEX UNIQUE SCAN           | W_INV_PROD_D_P1               |     1 |       |     1  (0) | 00:00:01 |
| 20 |    TABLE ACCESS BY INDEX ROWID  | W_INVENTORY_PRODUCT_D         |     1 |    48 |     2  (0) | 00:00:01 |

Predicate Information (identified by operation id):

   4 - access("T476739"."ROW_WID"="CUST_W_PURCH_SCHED_LINE_F_MV1"."CONSIGNED_TYPE_WID")
   6 - access(ROWID=ROWID)
   8 - filter("T476739"."W_XACT_TYPE_CODE"<>'CONSIGNED-CONSUMED')
  10 - access("T473562"."ROW_WID"="CUST_W_PURCH_SCHED_LINE_F_MV1"."SHIPMENT_TYPE_WID")
  12 - access(ROWID=ROWID)
  14 - filter("T473562"."W_XACT_TYPE_CODE"<>'PREPAYMENT')
  16 - access("T278452"."ROW_WID"="CUST_W_PURCH_SCHED_LINE_F_MV1"."CYCLE_STATUS_WID")
  17 - filter(("T278452"."W_STATUS_CLASS"='PURCH_CYCLE' OR "T278452"."ROW_WID"=0) AND
              "T278452"."W_SUBSTATUS_CODE"<>'CANCELLED')
  18 - filter("CUST_W_PURCH_SCHED_LINE_F_MV1"."PER_NAME_YEAR"='2010' AND
              "CUST_W_PURCH_SCHED_LINE_F_MV1"."DELETE_FLG"='N')
  19 - access("T257401"."ROW_WID"="CUST_W_PURCH_SCHED_LINE_F_MV1"."INVENTORY_PROD_WID")

Line #18 confirms that the optimizer chose the newly created MV in the latest execution plan for the original SQL.

4. Compute statistics on each created MV:

BEGIN
   DBMS_STATS.GATHER_TABLE_STATS(USER, 'CUST_W_PURCH_SCHED_LINE_F_MV1', method_opt => 'FOR ALL COLUMNS');
END;
/

Important: Depending on the logic complexity and data volumes collected in an MV, you can consider adding indexes on MV columns to improve MV query performance as well.

5. Troubleshoot Query Rewrite

You can use the DBMS_MVIEW.EXPLAIN_REWRITE procedure to find out why your query failed to rewrite. Create the REWRITE_TABLE table by running the following SQL:

SQL> @<ORACLE_HOME>\rdbms\admin\utlxrw.sql

REWRITE_TABLE columns for your reference:

STATEMENT_ID     ID for the query
MV_OWNER         MV's schema

5. Troubleshoot Query Rewrite

You can use the DBMS_MVIEW.EXPLAIN_REWRITE procedure to find out why your query failed to rewrite or, if it does rewrite, which materialized view(s) will be used:

BEGIN
  DBMS_MVIEW.EXPLAIN_REWRITE(QUERY => 'Your query statement', MV => 'Your MV name', STATEMENT_ID => 'Your statement label');
END;
/

Create the REWRITE_TABLE table by running the following SQL:

SQL> @<ORACLE_HOME>\rdbms\admin\utlxrw.sql

REWRITE_TABLE table columns for your reference:

STATEMENT_ID     ID for the query
MV_OWNER         MV's schema
MV_NAME          Name of the MV
SEQUENCE         Seq # of error message
QUERY            User query
QUERY_BLOCK_NO   Block # of the current sub query
REWRITTEN_TXT    Rewritten query
MESSAGE          EXPLAIN_REWRITE error message
PASS             Query Rewrite pass #
MV_IN_MSG        MV in current message
MEASURE_IN_MSG   Measure in current message
JOIN_BACK_TBL    Join back table in current message
JOIN_BACK_COL    Join back column in current message
ORIGINAL_COST    Cost of original query
REWRITTEN_COST   Cost of rewritten query. It shows a zero if there was no rewrite of a query or if a different materialized view was used
FLAGS            Associated flags

You can use the following query to show the EXPLAIN_REWRITE log:

SELECT sequence, message, original_cost, rewritten_cost
FROM REWRITE_TABLE
WHERE mv_name = 'Your MV name'
AND statement_id = 'Your statement label';

In our example, to check whether the optimizer picks the CUST_W_PURCH_SCHED_LINE_F_MV1 materialized view, run:

SQL> DECLARE
  2    QUERY VARCHAR2(4000);
  3    MV_NAME VARCHAR2(30) := 'CUST_W_PURCH_SCHED_LINE_F_MV1';
  4    STATEMENT_ID VARCHAR2(30) := 'Test#1 '||User;
  5  BEGIN
  6    QUERY := 'SELECT SUM(CASE
  7        WHEN T263758.W_STATUS_CODE = ''APPROVED'' THEN
  8          (T631953.LINE_AMT - T631953.CANCELLED_AMT) *
  9          T631953.GLOBAL1_EXCHANGE_RATE
 10        ELSE
 11          0
 12        END) AS c1,
 13      T31328.PER_NAME_YEAR AS c2,
 14      T31328.CAL_MONTH AS c3,
 15      SUBSTR(T31328.MONTH_NAME, 1, 3) AS c5,
 16      NVL(T257401.XV_LOB, ''Unknown'') AS c6
 17      FROM W_INVENTORY_PRODUCT_D T257401,
 18        W_DAY_D T31328,
 19        W_STATUS_D T263758,
 20        W_STATUS_D T278452,
 21        W_XACT_TYPE_D T473562,
 22        W_XACT_TYPE_D T476739,
 23        W_PURCH_SCHEDULE_LINE_F T631953
 24      WHERE (T31328.ROW_WID = T631953.ORDERED_ON_DT_WID AND
 25        T257401.ROW_WID = T631953.INVENTORY_PROD_WID AND
 26        T263758.ROW_WID = T631953.APPROVAL_STATUS_WID AND
 27        T278452.ROW_WID = T631953.CYCLE_STATUS_WID AND
 28        T473562.ROW_WID = T631953.SHIPMENT_TYPE_WID AND
 29        T31328.PER_NAME_YEAR = ''2010'' AND
 30        T476739.ROW_WID = T631953.CONSIGNED_TYPE_WID AND
 31        T631953.DELETE_FLG = ''N'' AND
 32        T278452.W_SUBSTATUS_CODE <> ''CANCELLED'' AND
 33        T473562.W_XACT_TYPE_CODE <> ''PREPAYMENT'' AND
 34        (T278452.ROW_WID IN (0) OR
 35        T278452.W_STATUS_CLASS IN (''PURCH_CYCLE'')) AND
 36        T476739.W_XACT_TYPE_CODE <> ''CONSIGNED-CONSUMED'')
 37      GROUP BY T31328.PER_NAME_YEAR,
 38        T31328.CAL_MONTH,
 39        SUBSTR(T31328.MONTH_NAME, 1, 3),
 40        NVL(T257401.XV_LOB, ''Unknown'')';
 41
 42    DBMS_MVIEW.EXPLAIN_REWRITE(QUERY => QUERY, MV => MV_NAME, STATEMENT_ID => STATEMENT_ID);
 43  END;
 44  /

PL/SQL procedure successfully completed

SQL> SELECT sequence, message, original_cost, rewritten_cost
  2  FROM REWRITE_TABLE
  3  WHERE mv_name = 'CUST_W_PURCH_SCHED_LINE_F_MV1'
  4  AND statement_id = 'Test#1 ' || User
  5  /

SEQUENCE MESSAGE                                             ORIGINAL_COST REWRITTEN_COST
-------- --------------------------------------------------- ------------- --------------
       1 QSM-01151: query was rewritten                                 13              9
       2 QSM-01033: query rewritten with materialized view,             13              9
         CUST_W_PURCH_SCHED_LINE_F_MV1

The log tells that the query was successfully rewritten with the materialized view CUST_W_PURCH_SCHED_LINE_F_MV1.

Starting with Oracle 10g, you can use the hint /*+ REWRITE_OR_ERROR */, which stops the execution of a SQL statement if query rewrite cannot be done:

SQL> select /*+ REWRITE_OR_ERROR */ * from dual;
ORA-30393: a query block in the statement did not rewrite

The most common cause of an unsuccessful query rewrite is a mismatch between the columns and/or aggregate functions used in the query and those in the MV, as well as full outer joins and set operations such as UNION, MINUS or INTERSECT. There are other Query Rewrite restrictions, documented in the Oracle Database manuals.

Integrate MV Refresh in DAC Execution Plan

The best option to keep custom MVs up to date is to merge their refresh into your DAC ETL Execution Plan. Ensure proper dependencies in your execution plan when you add your custom MV refresh task.
The careful analysis of the execution sequence will help you to identify the best place in the execution tree to run your custom MV refresh calls in parallel with other tasks without extending the total plan runtime.ORDERED_ON_DT_WID AND 25 T257401.CYCLE_STATUS_WID AND 28 T473562.ROW_WID = T631953. CUST_W_PURCH_SCHED_LINE_F_MV1 13 9 The log tells that the query is successfully rewritten with Materialized View CUST_W_PURCH_SCHED_LINE_F_MV1.-------------QSM-01151: query was rewritten 13 9 QSM-01033: query rewritten with materialized view.PER_NAME_YEAR.SHIPMENT_TYPE_WID AND 29 T31328.

The following PL/SQL call ensures a COMPLETE refresh for the MV CUST_W_PURCH_SCHED_LINE_F_MV1:

BEGIN
  DBMS_MVIEW.REFRESH('CUST_W_PURCH_SCHED_LINE_F_MV1', 'C');
END;
/

The following sections describe step-by-step instructions for integrating MV refresh into a DAC Execution Plan.

Register Materialized Views
• Click the Design button -> Tables tab in the right pane
• Click New -> define your custom MV as a table in DAC
• Save the changes.

Create Materialized View Refresh Task Action
• Open the DAC Client and navigate to Tools -> Seed Data -> Actions -> Task Actions
• Click the 'New' button to create a new Task "Refresh Materialized View" and click 'Save' to save the record.
• Click the 'Check Box' icon in the Value field to open the Value screen
• Click the Add button and enter the following values in the right upper pane:
  o Name: Refresh MV
  o Type: SQL
  o Database Connection: Target
  o Table Type: All Target
  o Valid Database Platforms: Oracle
• Enter the following text in the 'SQL Statement' tab in the right lower pane:

BEGIN
  DBMS_MVIEW.REFRESH('getTableName()', 'C');
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'getTableOwner()',
    tabname          => 'getTableName()',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => TRUE,
    degree           => DBMS_STATS.DEFAULT_DEGREE);
END;

• Click OK to save the changes.

Important!!! Make sure you add the call to DBMS_STATS to compute statistics FOR ALL COLUMNS SIZE AUTO on each MV as part of the DAC Execution Plan customization. If you created any indexes on the MV, they will not be dropped / created during the MV refresh, so you need to use CASCADE => TRUE to update the index statistics as well. Note: If there are no indexes defined on an MV, then you don't need the DBMS_STATS call in the SQL Statement, as DAC will compute its statistics but use CASCADE => FALSE.

Define Related Tables
• In the Tables view, search for the Fact or Aggregate table that you used in your MV query definition (W_PURCH_SCHEDULE_LINE_F in our example).
• Click the Related Tables tab in the lower right pane and add your MV as a related table to the original Fact.

Rebuild Execution Plan
Reassemble your Subject Areas and rebuild your Execution Plan to pick up the new dependencies. Refer to the BI Apps Administration Guide, chapter "Customizing DAC Objects and Designing Subject Areas", for more details.

WIDE TABLES WITH OVER 255 COLUMNS PERFORMANCE

Introduction
Oracle Database supports relational tables with up to 1000 columns. The Oracle BI Applications physical data model contains several wide dimension tables, such as W_ORG_D, W_PERSON_D and W_SOURCE_D, which could grow to over 255 columns after end user customizations. Oracle splits the rows of tables exceeding the 255-column limit into 255-column row-pieces. Even if there is enough free space in a single block, Oracle will allocate another block for the next row-piece. As a result, Oracle has to generate recursive calls to dynamically allocate space for the chained rows at read/write time. This limitation can have a critical impact on Oracle BI Applications Dashboards performance.

The table below shows the comparison statistics for a sample W_ORG_D with 300 and 254 columns:

W_ORG_D with 300 columns (Time: 186 sec)

Statistics
----------------------------------------------------------
    657  recursive calls
      0  db block gets
 134975  consistent gets
 134867  physical reads
      0  redo size
    382  bytes sent via SQL*Net to client
    372  bytes received via SQL*Net from client
      2  SQL*Net roundtrips to/from client
      6  sorts (memory)
      0  sorts (disk)
      1  rows processed

W_ORG_D with 254 columns (Time: 54 sec)

Statistics
----------------------------------------------------------
      0  recursive calls
      0  db block gets
 134888  consistent gets
 134864  physical reads
      0  redo size
    382  bytes sent via SQL*Net to client
    372  bytes received via SQL*Net from client
      2  SQL*Net roundtrips to/from client
      0  sorts (memory)
      0  sorts (disk)
      1  rows processed

Depending on the query complexity, the number of physical reads can also be much higher for wide tables with more than 255 columns.

Wide tables structure optimization
Since the wide dimension tables were designed to consolidate attributes from multiple source databases, there are very few customer implementations that use all of the pre-defined attributes. Oracle does not allocate space to NULL columns at the end of a table. Since the unused columns store NULLs, consider rebuilding wide tables with over 255 columns and moving the columns containing NULLs to the end, so that Oracle does not create chained row-pieces.

Important: Optimized wide tables must be created from scratch, since the existing tables already contain chained rows: even though there is no difference in the logical wide table structure, an 'ALTER TABLE' command would not resolve the chaining problem. After rebuilding a wide table, make sure all ETL and Query indexes get created as well.
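Before rebuilding a wide table, it is worth verifying that it actually suffers from row chaining. A quick check is sketched below; note that CHAIN_CNT is populated by ANALYZE, not by DBMS_STATS, and the column names in the CREATE TABLE statement are placeholders, not actual W_ORG_D columns:

```sql
-- Populate CHAIN_CNT for the wide table (ANALYZE, not DBMS_STATS, fills it):
ANALYZE TABLE W_ORG_D COMPUTE STATISTICS;

SELECT table_name, num_rows, chain_cnt
FROM   user_tables
WHERE  table_name = 'W_ORG_D';

-- Rebuild from scratch with the mostly-NULL columns moved to the end.
-- ROW_WID / ORG_NAME / X_CUSTOM_ATTR1 are illustrative names only:
CREATE TABLE W_ORG_D_NEW AS
SELECT ROW_WID,
       ORG_NAME,
       -- ... frequently populated columns first ...
       X_CUSTOM_ATTR1   -- ... mostly-NULL custom columns last ...
FROM   W_ORG_D;
```

A non-zero CHAIN_CNT on a table with more than 255 columns is a strong hint that the rebuild described above will pay off.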

ORACLE BI APPLICATIONS HIGH AVAILABILITY

Introduction
Both initial and incremental data loads into the Oracle BI Applications Data Warehouse must be executed during scheduled maintenance or blackout windows for the following reasons:
• End user data could be inconsistent during ETL runs, causing invalid or incomplete results on dashboards
• ETL runs may result in significant hardware resource consumption, slowing down end user queries

The time to execute periodic incremental loads depends on a number of factors, such as the number of source databases, each source database's incremental volume, hardware specifications, environment configuration, etc. As a result, incremental loads may not always complete within a predefined blackout window and can cause extended downtime. Global businesses operating 24 hours around the clock cannot always afford a few hours of downtime. Such customers can consider implementing a high availability solution using Oracle Data Guard with a physical standby database.

High Availability with Oracle Data Guard and Physical Standby Database
An Oracle Data Guard configuration contains a primary database and supports up to nine standby databases. A standby database is a copy of a production database, created from its backup. There are two types of standby databases: physical and logical.

A physical standby database must be physically identical to its primary database on a block-for-block basis. Data Guard synchronizes a physical standby database with its primary one by applying the primary database redo logs. The standby database must be kept in recovery mode for Redo Apply, but it can be opened in read-only mode in-between redo synchronizations.

A logical standby database is created as a copy of a primary database, but it can later be altered to a different structure. Data Guard synchronizes a logical standby database by transforming the data from the primary database redo logs into SQL statements and executing them in the standby database. A logical standby database has to be open at all times to allow Data Guard to perform the SQL updates.

The advantage of a physical standby database is that Data Guard applies the changes very fast, using low-level mechanisms and bypassing SQL layers: Redo Apply for a physical standby synchronizes the standby database much faster than SQL Apply for a logical standby. Since OBIEE does not require write access to the BI Applications Data Warehouse, either for executing end user logical SQL queries or for developing additional content in the RPD or Web Catalog, Data Guard with the Physical Standby Database option provides both efficient, comprehensive disaster recovery and a reliable high availability solution to Oracle BI Applications customers.

Important: A primary database must run in ARCHIVELOG mode at all times.

The internal benchmarks on low-range, outdated hardware showed four times faster Redo Apply on a physical standby database compared to the ETL execution on the primary database:

Step Name                                 Row Count   Redo Size   ETL SQL time on Primary DB   Redo Apply time
SDE_ORA_SalesProductDimension_Full          2621803      621 Mb                     01:59:31          00:10:20
SDE_ORA_CustomerLocationDimension_Full      4221350      911 Mb                     04:11:07          00:16:35
SDE_ORA_SalesOrderLinesFact_Full           22611530    12791 Mb                     09:17:19          03:16:04
W_SALES_ORDER_LINE_F_U1 Index Creation                   610 Mb                     00:24:31          00:08:23
Total                                      29454683    14933 Mb                     15:52:28          03:51:22

The target hardware was intentionally configured on a low-range Sun server, with both the Primary and Standby databases deployed on the same server, to imitate a heavy incremental load. Modern production systems, with the primary and standby databases deployed on separate servers, are expected to deliver up to 8-10 times better Redo Apply time on a physical standby database, compared to the ETL execution time on the primary database.

The diagram below describes the Data Guard configuration with a Physical Standby database:

• The primary instance runs in "FORCE LOGGING" mode and serves as the target database for routine incremental ETL and for any maintenance activities such as patching or upgrade.
• The Physical Standby instance runs in read-only mode during ETL execution on the Primary database.
• When the incremental ETL load into the Primary database is over, the DBA schedules the downtime or blackout window on the Standby database for applying the redo logs.
• The DBA shuts down the OBIEE tier and switches the Physical Standby database into 'RECOVERY' mode.
• The DBA starts Redo Apply in Data Guard to apply the generated redo logs to the Physical Standby Database.
• The DBA opens the Physical Standby Database in read-only mode and starts the OBIEE tier:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE OPEN;
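One full refresh cycle on the standby side can be sketched as follows. This is an assumed command sequence based on standard Data Guard practice for this database release; it is not spelled out verbatim in this document, so validate it against your own Data Guard configuration:

```sql
-- On the standby: restart into MOUNT and begin Redo Apply in the background
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

-- Once the redo generated by the ETL window has been applied,
-- stop recovery and reopen the database read-only for OBIEE queries:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;
```

The DISCONNECT FROM SESSION clause lets Redo Apply run in the background so the DBA session is not blocked while the logs are applied.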

Since the Physical Standby instance remains in read-only mode during ETL execution on the Primary database, customers can also consider switching OBIEE from the Standby to the Primary and only then start applying redo logs to the Standby instance. In such a configuration the downtime can be minimized to two short switchovers:
• Switch OBIEE from Standby to Primary after the ETL completion into the Primary database and before starting Redo Apply into the Standby database.
• Switch OBIEE from Primary to Standby before starting another ETL.

Additional considerations for deploying Oracle Data Guard with Physical Standby for Oracle BI Applications:
1. The Primary database has to be running in ARCHIVELOG mode to capture all REDO changes.
2. 'FORCE LOGGING' mode would increase the incremental load time into the Primary database, since Oracle would also log the index rebuild DDL queries.
3. Such a deployment results in a more complex configuration; it also requires additional hardware to keep two large-volume databases and to store daily archived logs.

However, it offers these benefits:
1. A high availability solution for the Oracle BI Applications Data Warehouse
2. Disaster recovery and complete data protection
3. A reliable backup solution

Easy-to-manage switchover and failover capabilities in Oracle Data Guard allow quick role reversals between the primary and the standby.

ORACLE BI APPLICATIONS ETL PERFORMANCE BENCHMARKS

The execution time for each ETL run depends on such factors as hardware specifications, tier configurations, DB tier load, source data volumes, and so on. Important!!! The following numbers apply only to the specific hardware configurations and source data sets. They are provided for reference purposes within the context of each specific configuration.

Oracle BI Applications 7.9.6, Siebel CRM 8.0 Adapter

Environment configuration:

Tier     Model          CPU            RAM    Storage           OS            Software
Source   IBM 9117-570   8 x 1.9 GHz    64Gb   1.5Tb iSCSI       IBM AIX 5.3   Siebel CRM 8.0 / Oracle 10.2.0.3
Target   IBM 9117-570   8 x 1.9 GHz    64Gb   1.5Tb iSCSI       IBM AIX 5.3   Oracle 11.1.0.6 64-bit
ETL      IBM 9115-505   4 x 1.6 GHz    8Gb    500Gb Local HDD   IBM AIX 5.3   Informatica 8.6 SP4 / OBIEE 10.1.3.3

ETL Load type: Full Load of two years of historic data. ETL run time: 11 hours 26 min.

The following table contains the execution details for the longest mappings in the full ETL run:

Session Name                                       Run Time   Success Rows   Read rows/sec   Write rows/sec
SDE_PartyPersonDimension                            2:01:37       21294734            3943             4480
SIL_ResponseFact_Full                               1:57:28       32245373            4594             5746
SIL_PartyPersonDimension_Full                       1:21:29       21294735            4657             4696
SDE_ResponseFact                                    0:54:44       32245373           10334            11959
SIL_PartyPersonDimension_UpdateCampaignOfferInfo    0:53:20        4109910            1416             1397
SIL_ActivityFact_Full                               0:52:16       10505208            3395             4426
SDE_ResponseDimension                               0:48:25       32271503           11753            12509
SIL_ResponseDimension_Full                          0:46:11       32271504           11909            12318
SDE_ActivityFact                                    0:45:03       10505208            3948             4728
SIL_PartyDimension_Person                           0:44:28       21294734            8100             9877
SIL_RevenueFact_Full                                0:42:20        5626059            2251             2923
SIL_PersonFact_Full                                 0:42:12        9151737            3671             6740
SIL_CampaignHistoryFact_Full                        0:37:45       10490109            4711             5867

Oracle BI Applications 7.9.6.1, Oracle EBS R12 Projects Adapter

Environment configuration:

Tier     Model         CPU                                RAM    Storage                                      OS              Software
Source   Sun E6500     16 x 900Mhz UltraSparc II CPUs     32Gb   16Tb NetApp Network Attached Storage (NAS)   Sun Solaris 9   Oracle EBS R12 / Oracle 10.2.0.4
Target   Dell PE6850   2 x quad-core 3.6 Ghz Intel Xeon   16Gb   2Tb Netapp NAS Storage                       Windows 2003    Oracle 11.1.0.7 64-bit
ETL      Dell PE2850   2 x dual-core 3.4 Ghz Intel Xeon   4Gb    200Gb Local HDD                              Windows 2003    Informatica 8.6.1

ETL Load type: Full Load of seven years of historic data.

The following table contains the execution details for the longest Projects mappings in the full ETL run:

Session Name                      Run Time   Success Rows   Read rows/sec   Write rows/sec
SIL_ProjectCostLine_Fact          24:31:59      128723375            1459             1496
SIL_ProjectExpLine_Fact           14:21:31      117891397            2283             2363
SIL_ProjectRevenueLine_Fact       11:51:12       68597993            1609             1685
SDE_ORA_ProjectRevenueLine         8:16:54       68648978            2339             2884
SDE_ORA_ProjectCostLine            7:40:36      129439913            4690             7102
SDE_ORA_ProjectExpLine             4:38:22      117918424            7074             7161
SIL_ProjectTaskDimension           1:08:45        7187362            1757             1783
SIL_ProjectRevenueHdr_Fact_Full    0:56:05        1740225             523             1697
SDE_ORA_ProjectInvoiceLine_Fact    0:54:16        3076049             955              957
SDE_ORA_Project_Tasks              0:38:44        7187362            3142             7704

Oracle BI Applications 7.9.6, Oracle EBS 11i10 Enterprise Sales Adapter

Environment configuration:

1.0. rows / sec rows / sec 61076675 1729 3357 44797448 1423 2194 44912891 1428 2204 61076675 2194 2494 44797448 2130 2482 44797448 2910 5363 44912891 3071 3397 44797448 3967 3968 4377020 521 471 6101903 627 783 6101903 997 1494 7128617 911 5690 2519900 419 8630 2620885 664 665 2246234 978 74875 2620886 2186 2170 Oracle BI Applications 7.3 64-bit Oracle 11.0.1.0.4 OBIEE 10.7 64-bit / Linux Informatica 8.4 OBIEE 10.Source Sun E6500 16 x 400Mhz UltraSparc 16Tb NetApp Network 20Gb II CPUs Attached Storage (NAS) Sun Solaris Oracle EBS R12 / Oracle 9 10. The following table contains the execution details for the longest Sales mappings in the full ETL run: Session Name SIL_SalesInvoiceLinesFact_Full SIL_SalesOrderLinesFact_Full SIL_SalesScheduleLinesFact_Full SDE_ORA_SalesInvoiceLinesFact_Full SDE_ORA_SalesOrderLinesFact_Full PLP_SalesCycleLinesFact_Load_Full SDE_ORA_SalesScheduleLinesFact_Full SIL_SalesBookingLinesFact_Load_OrderLine_Debt PLP_SalesOrderLinesFact_RollupAmt_Update_Full SDE_ORA_SalesPickLinesFact_Full SIL_SalesPickLinesFact_Full PLP_SalesOrderLinesAggregate_Load_Full PLP_SalesInvoiceLinesAggregate_Load_Full SDE_ORA_SalesProductDimension_Full SDE_ORA_SalesCycleLinesFact_HoldDurationExtract SIL_SalesProductDimension_Full Run Time 9:50:02 8:45:51 8:45:08 7:44:06 5:52:47 4:17:19 4:04:34 3:08:54 2:48:34 2:43:05 1:42:34 2:11:12 1:40:52 1:06:16 0:48:23 0:22:17 Success Rows Read Write Throughput.86 Ghz 16Gb 1Tb Netapp NAS Storage & ETL PE2950 Intel Xeon CPUs ETL Load type: Full Load of seven years of historic data.3.9.3 64-bit Oracle 11. rows / sec rows / sec 82351451 2371 2464 69 . Throughput.1.7 64-bit / Linux Informatica 8.1. 
Oracle BI Applications 7.9.6, Oracle EBS 11i10 Supply Chain Adapter

Environment configuration:

Tier           Model         CPU                                 RAM    Storage                                      OS                    Software
Source         Sun E6500     16 x 400Mhz UltraSparc II CPUs      20Gb   16Tb NetApp Network Attached Storage (NAS)   Sun Solaris 9         Oracle EBS 11i10 / Oracle 10.2.0.4
Target & ETL   Dell PE2950   2 x quad-core 1.86 Ghz Intel Xeon   16Gb   1Tb Netapp NAS Storage                       RedHat Linux 64-bit   Oracle 11.1.0.7 64-bit / Informatica 8.6 SP4 / OBIEE 10.1.3.3

ETL Load type: Full Load of seven years of historic data.

The following table contains the execution details for the longest Supply Chain mappings in the full ETL run:

Session Name                          Run Time   Success Rows   Read rows/sec   Write rows/sec
SIL_APInvoiceDistributionFact_Full     9:40:13       82351451            2371             2464

SDE_ORA_APInvoiceDistributionFact_Full             7:55:10       82351451            2894             4063
SDE_ORA_EmployeeExpenseFact_Full                   5:17:33       40789178            2145             2394
SIL_ExpenseFact_Full                               5:12:05       40789180            2189             2500
SIL_ProductTransactionFact_Full                    4:37:47       19955307            1202             1985
SDE_ORA_ProductTransactionFact_Full                4:20:57       19955307            1280             3531
SDE_ORA_PartyContactStaging_Full                   3:06:30       27123133            2439             8032
SDE_ORA_PurchaseReceiptFact_Full                   1:58:32        3147105             450             2565
SDE_ORA_CustomerLocationDimension_Full             1:32:53        4224659             761              915
SDE_ORA_InventoryProductDimension_Full             1:31:25        5241770            5553             6138
SIL_PurchaseCostFact_Full                          1:17:32        2922581             632              806
SDE_ORA_CustomerFinancialProfileDimension_Full     1:09:07        4678902            1309             1775
SIL_PurchaseScheduleLinesFact_Full                 0:59:24        2837469             803              941
SDE_ORA_PurchaseCostFact_Full                      0:52:47        2922581             931             1400
SIL_PurchaseOrderFact_Full                         0:52:18        2859692             923             1216
SDE_ORA_PurchaseRequisitionLinesFact_Full          0:37:10        2520984            1145             1970
SIL_RequisitionLinesCostFact_Full                  0:31:15        2581337            1410             1980
SDE_ORA_PurchaseScheduleLinesFact_Full             0:30:13        2837469            1586             2395
SDE_ORA_RequisitionLinesCostFact_Full              0:26:31        2581337            1663             2934
SDE_ORA_PurchaseOrderFact_Full                     0:26:14        2859692            1868             2449
SIL_InventoryProductDimension_Full                 0:22:42        2620886            1976             2143
SIL_PurchaseRequisitionLinesFact_Full              0:19:19        2520984            2257             2697
SIL_CustomerFinancialProfileDimension_Full         0:18:52        2855450            2640             2684
SIL_PurchaseReceiptFact_Full                       0:18:25        3147105            2953             3470

CONCLUSION

This document consolidates the best practices and recommendations for improving the performance of Oracle Business Intelligence Applications Version 7.9.6. This list of areas for performance improvement is not complete. If you observe any performance issues with your Oracle BI Applications implementation, make sure you trace the various components and carefully benchmark any recommendations or solutions discussed in this article or other sources before implementing the changes in the production environment.

Oracle Business Intelligence Applications Version 7.9.6.x Performance Recommendations
April 2011

Oracle Corporation
World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065
U.S.A.

Worldwide Inquiries:
Phone: +1.650.506.7000
Fax: +1.650.506.7200
oracle.com

Copyright © 2011, Oracle. All rights reserved.
This document is provided for information purposes only and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.
Oracle, JD Edwards, PeopleSoft, and Siebel are registered trademarks of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.