The paper assesses key Database In-Memory characteristics and restrictions, with the goal of helping you to utilize the option to
enhance Oracle E-Business Suite performance. This approach shows where it may help (such as with full table scans), and where it
may not (such as row-by-row functions). Examples are provided to show how performance can be improved by simply placing objects
into the In-Memory column store, and how you can go on to significantly improve performance by employing SQL profiles and hints.
The examples in this document take a step away from the classic headline feature of analytical reports that you might have been
expecting. Instead, they show how Database In-Memory can be applied to Oracle Forms queries and DML statements, and
consequently how you can use it to maximize business flexibility in Oracle E-Business Suite forms and screens without needing to
create a large number of indexes that would otherwise be required, for example, to enable querying an arbitrary set of form fields.
In addition, this document shows you how to selectively circumvent the SQL hints that are used extensively across Oracle E-Business
Suite, in such a way that you achieve maximum performance in a particular application without needing to drop indexes that would
inevitably degrade performance elsewhere.
Oracle Database In-Memory shouldn’t be considered as a generic tuning option, but one of a number of possible methods that can be
deployed when dealing with performance issues or scalability concerns. This paper is best used in conjunction with other resources such
as the Oracle Database In-Memory white paper.
Introduction
The Oracle E-Business Suite database uses the conventional row format, which is the most easily understood and efficient structure for
OLTP. In this format, data is maintained on a row-by-row basis using insert, update, and delete DML operations.
The Database In-Memory column store (IM column store) is a new static pool in the Oracle Database System Global Area (SGA). It is
contained within the Database In-Memory Area, as shown in Figure 1. The IM column store supplements the buffer cache, allowing data
such as groups of columns, tables, partitions, and materialized views to be stored in memory in both row and column formats. This
storage differs from other parts of the SGA in that the data is compressed, and is not aged out or displaced.
These row and column formats are most easily understood by mapping the data to a spreadsheet. The rows of the spreadsheet
correspond to individual records, while analytic functions such as sum or average are typically applied to individual columns. Analytic-style
queries typically perform complex aggregation and summary functions on a small subset of columns, but access the majority of the rows
in the table. The IM column store transparently enables the database to perform scans, joins, and aggregates much faster than when
exclusively using the row format.
In Oracle E-Business Suite, particularly with Oracle Business Intelligence and other reporting applications, the value proposition can be
increased by populating only essential columns or partitions (rather than entire tables) and using one of the available compression
options, so that more data fits in the IM column store. The highest efficiencies occur when only a few columns are selected, but the query
accesses a large portion of the data.
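As an illustrative sketch (the schema, table, and column names below are hypothetical, not standard Oracle E-Business Suite objects), a reporting table could be populated at a higher compression level while excluding large descriptive columns that reports never reference:

SQL> ALTER TABLE XXREP.SALES_SUMMARY INMEMORY MEMCOMPRESS FOR QUERY HIGH
     NO INMEMORY (LONG_DESCRIPTION, INTERNAL_NOTES);

Only the remaining columns are then candidates for population, reducing the space required in the IM column store.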
Oracle E-Business Suite includes several multi-purpose screens that can be functionally used in many different ways. Some of these
screens have many queryable fields, and providing an out-of-the-box index for every column would lead to an unnecessary maintenance
overhead. Ideally, you will create custom indexes to suit the specific way your users work. Even then, you may only be able to cater for
the most common operational scenarios or modus operandi, with the indexes being ignored by the optimizer when a query needs to
access most or all of the data via a full table scan. This type of query - where adding indexes is in principle desirable but in practice not
feasible - is the kind that will typically benefit most when the associated objects are resident in the IM column store.
The advantage of having data in both formats is that the optimizer will analyze the query and decide which will provide the best
performance. As a general rule, the optimizer will choose a full table scan (via the IM column store) for analytical operations that access
the majority of rows in the table, but it will choose an index access for more selective queries if an index exists.
Another common scenario that needs to be considered is that Oracle E-Business Suite implementations tend to have several lifecycle
phases, not only throughout the month, but also (for example) at month end, quarter end, and year end. You may not realize the full
effect of an application change until you experience a significant issue with a component that was not included in your testing. For this
reason, consider using SQL profiles (as described in Appendix B) to modify individual query execution plans. As you will see later in this
document, these provide another useful tool that can help improve the performance of Database In-Memory queries, in a very controlled
manner, especially where (as is common with Oracle E-Business Suite) you may not want to, or should not, change the core objects or
application code.
This section also reviews the various compression options that can be applied to increase data density. Once you have this
information, you will be in a good position to estimate the total amount of space that you will need. This is essential if you plan to procure
additional memory or reallocate machine resources, even on a test system. Many Database In-Memory objects are likely to be large
transient transaction tables containing thousands or millions of rows. It is therefore also important that you include projected data growth.
There are two ways to reduce the amount of memory that you need. First, you may only need to populate a subset of columns, rather
than the whole of an object, into the IM column store. This is described further in the next section. Second, with Oracle E-Business Suite
it is important to remember that there are usually several specific business phases. If particular programs or functions are exclusive to a
specific part of the business cycle, then clearly they do not require space in the IM column store at other times. While these phases may
not be clearly defined within your business model, almost every organization will have very specific month/quarter/year-end processing
periods. Identifying them accurately will enable you to release the space for use by other objects during the rest of the month.
» Note: The initial population of the IM column store is a CPU and disk intensive activity and may affect the performance of other
concurrent workloads.
» Note: Refer to Using Oracle Database In-Memory with Oracle E-Business Suite (MOS Doc ID 2025309.1), which lists the required patches.
The IM column store is particularly intended to benefit the following types of operations:
» A query that scans many rows but only references or accesses a small subset of columns from a table, partition, or materialized
view.
» A query that scans many rows, and applies filters that use operators such as =, <, >, and IN.
» A query that uses aggregation functions such as sum or average.
» A query that processes data in groups such as 'sales per calendar month', uses buckets, or involves very large tree-walks.
» A query that uses a nested loop join.
» An analytical query with a high aggregation cost.
» Accelerating joins by converting predicates on small dimension tables into filters on a large fact table.
» Delete and update statements that are in turn based on sub-queries that perform full table scans of large tables.
» Note: In summary, look for statements that have a high aggregate cost but a low to moderate frequency of execution.
Almost all objects in the database are eligible to be populated into the IM column store, but there are some exceptions. A database
object is not eligible to be populated into the IM column store if it is:
» An object owned by the SYS user and stored in the SYSTEM or SYSAUX tablespace.
» An object smaller than 64KB, as these would waste a considerable amount of space (In-Memory space is allocated in 1MB chunks).
» An Index Organized Table (IOT), such as those implemented by Advanced Queuing (AQ$ objects) and Oracle Text Search index
modules (DR$ objects). IOTs are fundamentally row-based.
» A clustered table. However, Oracle E-Business Suite does not use any of these at present.
» An object that includes either LONGs (deprecated since Oracle 8) or out-of-line LOBs. An example is FND_LOBS, which stores
information about all LOBs managed by the Generic File Manager (GFM).
» A “busy” object. Issuing the In-Memory ALTER TABLE command will result in an ORA-00054 error (resource busy and acquire with
NOWAIT specified) if the object is locked by DML. This will occur with OE_ORDER_LINES_ALL if the Order-to-Cash flow is running.
Although not a good candidate anyway, this also occurs with FND_CONCURRENT_REQUESTS if the concurrent managers are running.
» Note: Avoid using the Oracle E-Business Suite Vision database for testing as many tables are below the 64KB threshold (refer to the
Enabling Database In-Memory section for further information) and some also contain LONGs. Small databases and test environments
should be considered unsuitable, as they are unlikely to have similar data distribution and skew as your production database. It is
essential to test with large data volumes in order to generate appropriate execution plans, check the level of compression that can be
achieved, and review scalability.
Most systems will have insufficient memory to store an entire production database, but may have enough for a particular schema. The
IM column store can be enabled at the tablespace level, so that all tables and materialized views in the tablespace are automatically
marked as candidates to be populated into the IM column store. If you have enough space, all the objects will be populated into the IM
column store, with the exception of any excluded by the list of conditions above.
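For example, the following statement (the tablespace name is illustrative) marks every eligible table and materialized view in a tablespace as an In-Memory candidate with the default attributes:

SQL> ALTER TABLESPACE APPS_TS_TX_DATA DEFAULT INMEMORY MEMCOMPRESS FOR QUERY LOW;

Individual objects in the tablespace can still override these defaults with their own INMEMORY clauses.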
You might, for example, plan to populate all the General Ledger objects into the IM column store and then test the relative performance
of month-end reports. Although this provides a fast approach for testing, it may be far from representative, and you may not see much
benefit as the individual processes may not be suitable.
Even if an object exists in the Database IM column store, it may not be used, for a variety of reasons. To make the most effective use of
the available memory, consider the following guidelines:
» Only populate the minimum set of columns directly used in the select list and in the where-clause predicates of the
SQL statement.
» Consider including all of the columns that are referenced across any reports or concurrent processes that use the same object,
but can be run with a range of parameters, options, or arguments.
» For dynamic SQL, consider including all the columns that could be accessed across the range of functional permutations.
» If an object is already partitioned, only populate the minimum number of necessary partitions into the IM column store.
Importantly, for this to be effective with Oracle E-Business Suite, ensure that the Application SQL includes predicates that match
the partition key. This is likely to be the case where tables are supplied with partitions already defined, but needs careful review if
implementing a new partitioning, or sub-partitioning scheme.
» Test performance at various levels of compression.
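As a sketch of the partition-level approach, assuming a custom range-partitioned table with one partition per period (the table and partition names are hypothetical), only the current period would be populated:

SQL> ALTER TABLE XXGL.XX_BALANCES_PART MODIFY PARTITION P_2016_06 INMEMORY PRIORITY HIGH;
SQL> ALTER TABLE XXGL.XX_BALANCES_PART MODIFY PARTITION P_2016_05 NO INMEMORY;

Queries whose predicates match the partition key can then be satisfied from the IM column store for the current period, while older periods do not consume memory.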
Each of the six compression levels and their relative optimizations are shown in Table 1.
NO MEMCOMPRESS Data is populated without any compression.
MEMCOMPRESS FOR DML Minimal compression, optimized for DML performance.
MEMCOMPRESS FOR QUERY LOW Optimized for query performance (the default).
MEMCOMPRESS FOR QUERY HIGH Optimized for query performance as well as space saving.
MEMCOMPRESS FOR CAPACITY LOW Balanced, with a greater bias towards space saving.
MEMCOMPRESS FOR CAPACITY HIGH Optimized for space saving.
By default, data is compressed using the FOR QUERY LOW option, which provides the best performance for queries. The FOR
CAPACITY options apply additional compression, which has a larger penalty on decompression as each entry must be decompressed
before WHERE clause predicates can be applied.
Oracle provides a utility called the Compression Advisor (DBMS_COMPRESSION), which has been enhanced to support Database In-
Memory and can be used to predict IM column store requirements. The Advisor uses a sample of the table data and provides an
estimate of the compression ratio that can be achieved using MEMCOMPRESS. For further information about this utility, refer to the
Oracle Database documentation, the Oracle Technology Network site, and the Oracle Database In-Memory white paper.
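The following PL/SQL sketch shows one way the Advisor could be invoked to estimate the In-Memory compression ratio for a table; the scratch tablespace name is an assumption, so check the DBMS_COMPRESSION documentation for your release:

SET SERVEROUTPUT ON
DECLARE
  l_blkcnt_cmp    PLS_INTEGER;
  l_blkcnt_uncmp  PLS_INTEGER;
  l_row_cmp       PLS_INTEGER;
  l_row_uncmp     PLS_INTEGER;
  l_cmp_ratio     NUMBER;
  l_comptype_str  VARCHAR2(100);
BEGIN
  DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
    scratchtbsname => 'TEMP_SCRATCH',  -- scratch tablespace (illustrative)
    ownname        => 'ONT',
    objname        => 'OE_ORDER_LINES_ALL',
    subobjname     => NULL,
    comptype       => DBMS_COMPRESSION.COMP_INMEMORY_QUERY_LOW,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  DBMS_OUTPUT.PUT_LINE('Estimated compression ratio: ' || l_cmp_ratio);
END;
/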
Table 2 shows the original table size (on disk), the Database In-Memory size, and the corresponding compression ratio for a set of
tables that were used during an Order Management test. The default compression option is FOR QUERY LOW. The compression ratio
depends on the type of columns and number of duplicate values. In this test, the entire tables were populated into the IM column store
(of course, less space would have been used with only the actual subset of columns essential to the queries).
(Table 2 compares the results for NO MEMCOMPRESS, FOR DML, FOR QUERY LOW (the default), FOR CAPACITY LOW, and FOR CAPACITY HIGH; the measured figures are not reproduced in this extract.)
For tables that have already been populated into the IM column store, use the following query to calculate the compression ratio:
SQL> SELECT SEGMENT_NAME,ROUND(SUM(BYTES)/1024/1024/1024,2) "ORIG. BYTES GB",
ROUND(SUM(INMEMORY_SIZE)/1024/1024/1024,2) "IN-MEMORY GB",
ROUND(SUM(BYTES-BYTES_NOT_POPULATED)*100/SUM(BYTES),2) "% BYTES IN-MEMORY",
ROUND(SUM(BYTES-BYTES_NOT_POPULATED)/SUM(INMEMORY_SIZE),2) "COMPRESSION RATIO"
FROM V$IM_SEGMENTS
GROUP BY OWNER,SEGMENT_NAME
ORDER BY SUM(BYTES) DESC;
The Oracle Database In-Memory Advisor can be used to analyze your workload and provide:
» A list of the candidate objects that should be populated into the Database IM column store.
» The recommended compression factor for each object, and estimated performance benefits.
» An estimate of the object In-Memory size.
Refer to the Oracle Database In-Memory Advisor Best Practices white paper for further sizing information and best practices.
Using Priority
Populating the IM column store is both CPU and I/O intensive, which can adversely affect the performance of other concurrent
workloads and operations. If you do not have enough capacity in the IM column store for all of the objects (as may happen over time
when objects grow), using priority enables you to specify which objects should be populated first, or take preference. These will usually
correspond to functions or programs that are used the most, or return the greatest benefit in terms of releasing system resources for
other operations.
There are five priority levels. In order, they are: CRITICAL, HIGH, MEDIUM, LOW, and NONE. The default is NONE, in which an object
is populated only after it is first scanned (for example, by a full table scan). When the database is opened, objects assigned a priority are
automatically populated into the IM column store, starting with those in the highest priority band and proceeding until either all the
objects have been added or no more space is left in the IM column store.
» Note: The intended population sequence will be overridden if an object (that may have a lower priority) is scanned before population
completes, as this will trigger its population into the IM column store.
» Populate objects into the Database IM column store using the following command:
SQL> ALTER <TABLE/MATERIALIZED VIEW> <TABLE NAME/ MATERIALIZED VIEW NAME> INMEMORY <PRIORITY>;
For example:
SQL> ALTER TABLE AP.AP_INVOICES_ALL INMEMORY PRIORITY CRITICAL;
Note that the IM column store is not a cache: when it is full, space is not freed by removing the least recently used objects. You need
to free space manually with commands such as:
SQL> ALTER <TABLE/MATERIALIZED VIEW> <TABLE NAME/ MATERIALIZED VIEW NAME> NO INMEMORY;
For example:
SQL> ALTER TABLE AP.AP_INVOICES_ALL NO INMEMORY;
There are three parameters in particular that need to be considered, which are discussed in the next section.
The Database In-Memory area is a static pool within the SGA, and is defined by the INMEMORY_SIZE database initialization parameter
(minimum value: 100MB). As it is a static pool, automatic memory management cannot extend or shrink this area. As noted earlier,
changing the size of the In-Memory area requires a database restart. Therefore, it is important to establish how much memory to allocate
on a system where downtime must be kept to a minimum. When defining the IM column store, ensure that SGA_TARGET is large
enough for other database structures, but not sized so large that it causes system performance issues such as paging.
Note: The ultimate goal is to choose a size for the IM column store that is large enough to contain all the objects that were identified
during your analysis, and if using a more granular approach, that it is sufficient for the subset of objects used during any of the Oracle E-
Business Suite application phases. For further information regarding the SGA_TARGET and PGA_TARGET, refer to Oracle Database
In-Memory Option (DBIM) Basics and Interaction with Data Warehousing Features (MOS Doc ID 1903683.1).
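As a sketch, sizing (or resizing) the IM column store requires setting the parameter in the spfile and restarting the database; the 20GB figure below is purely illustrative:

SQL> ALTER SYSTEM SET INMEMORY_SIZE=20G SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP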
For more details on setting this profile option, refer to MOS Doc ID 1121043.1.
» INMEMORY_MAX_POPULATE_SERVERS
This parameter limits the maximum number of background worker processes used to populate the IM column store, so that they do not
overload the system. The default is the lower of half the effective CPU thread count and the PGA_AGGREGATE_TARGET value
divided by 512MB. When sizing the system, allow for the CPU cores that will be consumed by In-Memory background population.
» Note: If you set the value of this parameter too high (close to the number of cores on your system), you may have insufficient CPU
for the rest of the system to run. If you set the value too low (and so do not have enough worker processes), the (re)population
speed of the IM column store may be slower. In order to size these background processes properly, you need to perform pre-
production tests that simulate the actual production environment in terms of job types and distribution. Ensure that you include
reports and concurrent processes that use both Database In-Memory and parallel query/DML, and specifically focus on resource-
intensive combinations such as overnight batch jobs and month-end processes.
Refer to the Oracle Database In-Memory white paper for further information about Database In-Memory specific database
parameters.
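As a sketch, the number of population workers can be adjusted dynamically (the value of 4 is illustrative), and the progress of (re)population then monitored:

SQL> ALTER SYSTEM SET INMEMORY_MAX_POPULATE_SERVERS=4;
SQL> SELECT SEGMENT_NAME, POPULATE_STATUS, BYTES_NOT_POPULATED
     FROM V$IM_SEGMENTS;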
The following example enables Database In-Memory for the Order Entry header and line tables, using critical priority and the default
compression:
SQL> ALTER TABLE ONT.OE_ORDER_HEADERS_ALL INMEMORY PRIORITY CRITICAL MEMCOMPRESS FOR QUERY LOW;
SQL> ALTER TABLE ONT.OE_ORDER_LINES_ALL INMEMORY PRIORITY CRITICAL MEMCOMPRESS FOR QUERY LOW;
For more details on the optimizations used with Database In-Memory, including In-Memory Scans, In-Memory Joins with bloom filters,
and In-Memory Aggregation with the Vector Group By transformation, refer to the Oracle Database In-Memory white paper.
Oracle decides how objects are distributed across the IM column stores in the cluster, but this behavior can be overridden using one of
the following two options. DISTRIBUTE (available on all systems) specifies how data is distributed across the Oracle RAC nodes.
DUPLICATE ALL (only available on Oracle Engineered Systems) specifies if and how data is duplicated across the Oracle RAC nodes;
this option can be used to provide fault tolerance by mirroring the IM column stores.
DISTRIBUTE
With the DISTRIBUTE clause, each Oracle RAC node in the cluster stores only a portion of the data. It is important to note that In-
Memory Compression Units (IMCUs) cannot be shipped across the RAC interconnect, even though more than one Database IM column
store may need to be accessed to satisfy a Database In-Memory query. This means that a process can only directly access the subset
of data in the IM column store on the node where it is running. Serial query execution will therefore result in logical and physical I/O to
read any data that is not available in the IM column store on that node.
When data is populated into Database In-Memory in an Oracle RAC environment, it is affinitized to a specific Oracle RAC instance. This
means that parallel server processes need to be employed to execute queries that access the Database In-Memory objects with a
degree equal to at least the number of nodes involved in the distributed population.
» AUTO (Default). Oracle decides the best way to distribute the object across the cluster.
» BY ROWID RANGE to distribute by rowid range.
» BY PARTITION to distribute partitions to different nodes.
» BY SUBPARTITION to distribute sub-partitions to different nodes.
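For example, distribution by rowid range could be requested explicitly for a non-partitioned table:

SQL> ALTER TABLE ONT.OE_ORDER_LINES_ALL INMEMORY DISTRIBUTE BY ROWID RANGE;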
DUPLICATE/DUPLICATE ALL
This clause specifies if and how data is duplicated across Oracle RAC instances; it can provide fault tolerance by mirroring the
IM column store. It has the following options:
» NO DUPLICATE (Default). Data is not duplicated across instances.
» DUPLICATE. A mirrored copy of the data is also populated into the IM column store of one other Oracle RAC instance.
» DUPLICATE ALL. The data is duplicated across the IM column stores of all Oracle RAC instances.
This approach may be further complicated if you have also affinitized Oracle Forms and self service. Refer to the following documents
for further information about how to implement affinity:
» Configuring and Managing Oracle E-Business Suite Release 12.1.x Application Tiers for Oracle RAC (Doc ID 1311528.1)
» Configuring and Managing Oracle E-Business Suite Release 12.2.x Forms and Concurrent Processing for Oracle RAC (Doc ID
2029173.1)
The DUPLICATE ALL In-Memory option (only available on Engineered Systems) replicates the IM column store across each of the
Oracle RAC nodes, which means that In-Memory is easily integrated into your existing system, regardless of whether or not you
affinitize your Oracle E-Business Suite workload. On a Non-Engineered System, this limitation means that it is not possible to duplicate
the objects in the IM column stores on each of the nodes. When you have more than one IM column store, by default the objects are
distributed across them. This results in an In-Memory scan of the subset of data populated into the IM column store on the local node
(where the query coordinator is running), with the remainder of the data accessed via the buffer cache (and disk), regardless of whether
it exists in the IM column stores on other nodes.
One approach on a Non-Engineered System to address the limitation of not being able to use DUPLICATE ALL is to override the default
DISTRIBUTE mechanism; populate a complete object into a single IM column store on one node; and affinitize the associated workload
to that node. This means that you can take full advantage of In-Memory, and also benefit from the reduced interconnect traffic between
the Oracle RAC nodes.
There are some issues with just having objects in a single IM column store (on a single node) that need to be considered:
» High availability and Fault Tolerance: On Engineered Systems, switching or failing over to another node does not cause a
problem when using Oracle Database In-Memory as DUPLICATE ALL replicates the objects across all of the Oracle RAC nodes
with an IM column store. On Non-Engineered Systems, ensure that your DR procedures include what to do in the event of a
failure of the node hosting the IM column store(s).
» Job Serialization: Scheduling multiple jobs within a batch window on a single node may become a concern.
» Affinitize Workload: Redirect all of the applications workload that benefits from Database In-Memory to the node hosting the IM
column store that contains the table(s).
When using affinity with Oracle E-Business Suite, you will need to implement Oracle RAC services to populate objects into specific IM
column stores and then configure PARALLEL_INSTANCE_GROUP to restrict parallel query operations to (a set of) specific instances.
This approach becomes more complex when using affinity with multiple IM column stores; it works best when there are no overlaps on
the objects used by the concurrent processes and batch programs that are affinitized to each node.
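A minimal sketch of this configuration follows; the database, service, and instance names are purely illustrative:

$ srvctl add service -db VIS -service EBS_IM -preferred VIS1
$ srvctl start service -db VIS -service EBS_IM
SQL> ALTER SYSTEM SET PARALLEL_INSTANCE_GROUP='EBS_IM' SID='VIS1';

Objects populated via connections to the EBS_IM service will then reside in the IM column store on instance VIS1, and parallel query operations initiated there are restricted to that instance.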
The ultimate goal is to choose a size for the IM column store that is large enough to contain all the objects that were identified during
your analysis. When defining the IM column store, ensure that SGA_TARGET is large enough for other database structures, but not
sized so large that it causes system performance issues such as paging. If you don't have enough PGA when executing joins and
aggregations, the work will spill to the TEMP tablespace, which is on disk; any benefit achieved by accessing the data in memory will be
negated by the disk I/O.
» Setting PARALLEL_DEGREE_POLICY=AUTO allows Oracle to decide how many parallel server processes should be created to
service a query. When Automatic DOP is set at the system/session level, the parallel query coordinator is aware of the home locations
of the IMCUs and ensures that at least one parallel slave is allocated to each node with an IM column store. This ensures that a query
will be serviced by Database In-Memory. However, setting Automatic DOP at the system level is not generally recommended for
Oracle E-Business Suite as the optimizer is likely to choose parallelism for all queries (not only Database In-Memory queries)
whenever full table scans are performed. An excessive number of parallel processes may degrade overall system performance, even
though this may be mitigated to some extent by statement queuing (waiting for parallel servers to become available) and by increasing
PARALLEL_MIN_TIME_THRESHOLD. However, when using statement queuing, the execution time for a statement can vary
significantly between runs, depending on the concurrent workload and the availability of parallel servers.
» Use DUPLICATE ALL on Oracle Engineered Systems so that the objects are replicated across all of the Database In-Memory column
stores. If you use application affinity and do not set DUPLICATE ALL, you will need to populate individual objects into specific IM
column stores. This has the advantage of increasing the number of objects that can be populated into Database In-Memory across
your system. Refer to Appendix D: Using Oracle E-Business Suite Application Affinity for an example of configuring and using
services.
» Using DISTRIBUTE/DUPLICATE without parallelism will result in an In-Memory scan of the portion of the objects populated into the
IM column store on the local node; it will perform physical reads from disk for the remaining data, regardless of whether it exists in the
IM column stores on other nodes as IMCUs cannot be shipped across the RAC interconnect.
» If Using Oracle E-Business Suite Application Affinity:
» This approach enables you to take full advantage of Database In-Memory without requiring a lot of monitoring and manual
intervention to control the system load when setting Automatic DOP at the system level.
» Set PARALLEL_FORCE_LOCAL=TRUE to restrict the parallel server processes to only run on a single node. Avoid Automatic
DOP by setting PARALLEL_DEGREE_POLICY= MANUAL at the system level.
» Implement Automatic DOP by embedding a /*+ PARALLEL(AUTO) */ SQL hint in the query - this will compute the degree of
parallelism for the objects on the node where the query is being serviced (Note that, as mentioned, this hint does not honor the
IMCU home location and therefore will lose the benefits of using Database In-Memory parallelism if the object(s) are not in the
local IM column store).
» Use Oracle RAC services and PARALLEL_INSTANCE_GROUP (as discussed in Appendix D: Using Oracle E-Business Suite
Application Affinity) to control which node is used when objects are populated into an IM column store.
» If not Using Oracle E-Business Suite Application Affinity:
» Set PARALLEL_FORCE_LOCAL=FALSE so that the parallel server processes will run on all of the nodes.
» The situation is more complex when using DISTRIBUTE or DUPLICATE. If Automatic DOP is not set at the system or session level,
the parallel query coordinator is not aware of the IMCU home locations (even when Automatic DOP is set at the query level). While
this is not an issue with DUPLICATE ALL, it is likely to be an issue with DISTRIBUTE or DUPLICATE, as it will result in physical
reads for objects that are not fully populated into the IM column store on the node where the query is being serviced. For this
reason, if you decide to use DISTRIBUTE or DUPLICATE, you will need to set Automatic DOP at the system level
(PARALLEL_DEGREE_POLICY=AUTO), even though this is not recommended with Oracle E-Business Suite. You will need to
be prepared to proactively monitor and manage the system load.
DISTRIBUTE/DUPLICATE Examples
The tests in this section show the important relationship between DISTRIBUTE and DUPLICATE on a four-node Oracle RAC
Engineered System. They are based on a 13.3GB non-partitioned OE_ORDER_LINES_ALL table, which has parallel processing
disabled at the table level. Table 3 shows a summary of the results.
The disk metric shows a full table scan, with overall execution taking 18.90 seconds.
The following query confirms that the table is evenly distributed across all four Oracle RAC nodes:
SQL> SELECT INST_ID,SEGMENT_NAME, POPULATE_STATUS, INMEMORY_DISTRIBUTE FROM GV$IM_SEGMENTS;
» Note: The INMEMORY_DISTRIBUTE column shows that AUTO was used. As this is a non-partitioned table, AUTO will result in
DISTRIBUTE BY ROWID RANGE.
The following query was then run:
SQL> SELECT COUNT(*) FROM (SELECT HEADER_ID, COUNT(LINE_ID)
FROM OE_ORDER_LINES_ALL GROUP BY HEADER_ID);
Even though an INMEMORY FULL TABLE SCAN was performed, the serial execution resulted in disk reads for the data not present in
the IM column store on this node. Consequently, there was a very large increase in the execution time, to 136.15 seconds. This
demonstrates the importance of the next test.
In order to see the cursor execution plan, run the following query:
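One common form of that query, shown here purely as an illustration (the SQL_ID is a placeholder to be obtained from V$SQL):

SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('<sql_id>', NULL, 'TYPICAL'));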
Note
-----
- automatic DOP: Computed Degree of Parallelism is 4 because of degree limit
- parallel scans affinitized for inmemory
This test only took 10.28 seconds. The execution plan shows an INMEMORY FULL TABLE SCAN, and the parallel scans were
affinitized for Database In-Memory. Performance improved 1.8x over the baseline. Recall that this is not recommended for Oracle E-
Business Suite even though this results in the fastest time.
The following query confirms that the table has been duplicated across all four Oracle RAC nodes:
SQL> SELECT INST_ID,SEGMENT_NAME,POPULATE_STATUS FROM GV$IM_SEGMENTS;
This test took about the same time as on the single node with no parallelism test (Test 2).
It should be noted that for each of the cases that showed an improvement, there were several programs that did not. Reasons for this
include no clear high-load statements in a trace file due to application logic; row-by-row processing; the presence of embedded hints that
preclude Database In-Memory usage; and use of data import or other non In-Memory functions.
The examples in this section take a step away from the classic headline feature of analytical reports that you might have been
expecting. Instead, they show how Database In-Memory can be applied to Oracle Forms queries and DML statements, and how to
provide maximum business flexibility in Oracle E-Business Suite without needing to create a huge number of indexes to allow any form
field to be queried. The Receiving Transaction Processor, one of the core processes that spans many operations, saw performance
improve by 100x.
» Note: The same principles that apply to Oracle Forms queries also apply to Oracle Self-Service screens and other modules.
Background
The Order Organizer, shown in Figure 4, enables users to process orders and returns. Navigate to it in a Vision instance by selecting
the Order Management Super User, Vision Operations (USA) responsibility, then Orders and Returns | Order Organizer. This form can
show recent orders, orders past their requested shipment date, orders that are on hold, order lines with priority shipments, or orders for
a particular customer. It can also be used to make changes to a range of orders, for example applying holds or cancellations.
As is evident, these forms are exceptionally flexible, with several tabs as well as a huge number of fields that can be queried either
individually or collectively. While this degree of flexibility enables easy mapping to business functions, it will typically result in
performance issues if a user queries only a non-indexed column, or a column with poor selectivity. If the Order Organizer were always
used and queried in a particular way, associated indexes could be added. However, as mentioned earlier, adding indexes for every
queryable field to allow maximum functionality is inadvisable due to the maintenance overhead: while it may speed up queries in this
form, it could have a significant negative impact on associated processes.
From a user perspective, having some fields indexed and others not manifests as sporadic response times, because users have no
knowledge of the underlying schema. A very common problem that we see is a user report that says “The system is slow
today”. It may well be the case that they actually ran a different set of queries using different combinations of fields, some of which are
indexed and some of which are not. This is one of the main reasons why an Oracle E-Business Suite user reports that the Order
Organizer Find Window occasionally “freezes”.
In this case it is easy to see how Database In-Memory provides a very useful alternative to a prohibitive number of indexes. The result of
a query based on a non-indexed column is typically a full table scan on what is potentially a very large table. Most of the time will be
spent performing physical and/or logical reads, depending on whether the table is already in the buffer cache. Populating the associated
tables into the IM column store is unlikely to affect the response times for queries based on indexed columns, but should result in a
substantial performance improvement especially when a full table scan is performed.
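Populating a table into the IM column store is a simple ALTER TABLE operation. The following is a sketch; the ONT schema owner and the compression and priority clauses shown are assumptions for illustration:

SQL> ALTER TABLE ONT.OE_ORDER_HEADERS_ALL INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY NONE;

With PRIORITY NONE, the table is populated on the next full scan; specifying any other priority causes population to start automatically.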
The end-to-end results for the baseline and Database In-Memory tests are shown in Table 5.
The top SQL accounted for approximately 80% of the elapsed time in the baseline test, with 34 seconds being used in other operations.
This top SQL reduced to only 0.38 seconds, which is a performance increase of 424x. Some of the other operations were optimized from
34 seconds to 17.6 seconds. The end-to-end run time (for the user) was more than 10x faster.
The SQL execution plans for the baseline and Database In-Memory tests are shown below:
-- Baseline (not Database In-Memory)
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.05 0.08 0 0 0 0
Execute 1 0.01 0.02 0 0 0 0
Fetch 1387 0.86 161.66 248319 425664 0 1386
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1389 0.92 161.76 248319 425664 0 1386
-- Database In-Memory
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.05 0.11 0 0 0 0
Execute 1 0.04 0.04 0 0 0 0
Fetch 1387 0.04 0.22 32 40 0 1386
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1389 0.13 0.38 32 40 0 1386
The performance of the baseline query could also have been improved by creating an index on the shipment_priority_code column,
which would have avoided the data throwaway; however, this table is already heavily indexed. Database In-Memory provides a simpler
solution.
Background
Most people understand the concept of credit rating or credit worthiness. Related to this is the concept of credit exposure, which is
defined as the total amount of credit extended to a borrower by a lender – the size of the credit exposure indicates the risk of loss if a
borrower defaults. Depending on the transaction history, a company will have a set of rules and conditions that define how frequently it
needs to re-evaluate its credit exposure image.
Oracle Order Management enables a periodic rebuild of a credit exposure image (orders, invoices, and payments) for all customers (or
customer sites). The Initialize Credit Summaries concurrent program calculates and updates a customer’s credit exposure based on
factors such as their setup for each credit check rule. The exposure data from the credit check process is stored in a summary table,
which is generally referenced to establish a customer’s credit standing, as this is a much faster strategy than reviewing real-time
transactional data.
Test Configuration
In this test, 2.4M order lines (2400 orders with 1000 lines per order) were populated into the OE_ORDER_LINES_ALL table. The
following tables were populated into the IM column store:
» OE_ORDER_HEADERS_ALL
» OE_ORDER_LINES_ALL
» OE_PRICE_ADJUSTMENTS
» OE_PAYMENTS
» OE_PAYMENT_TYPES_ALL
» HZ_CUST_ACCOUNTS
» HZ_CUST_ACCT_SITES_ALL
» HZ_CUST_SITE_USES_ALL
» RA_INTERFACE_LINES_ALL
The associated SQL was previously tuned for performance by using LEADING, USE_NL, NO_UNNEST, and CARDINALITY hints. As
mentioned, some of these hints preclude Database In-Memory operations. The SQL is fairly lengthy, but can be simplified as follows:
-- Simplified Insert
SQL> INSERT INTO OE_INIT_CREDIT_SUMM_ ……………………
SELECT /*+ leading ( L ) use_nl (H SU S CA CA_L SU_L S_L ) */
……………………
FROM OE_ORDER_LINES_ALL ……………………
AND EXISTS ( SELECT /*+ no_unnest */ NULL FROM OE_PAYMENT_TYPES_ALL ……………………
UNION ALL SELECT /*+ no_unnest */ NULL FROM OE_PAYMENT_TYPES_ALL ……………………
UNION SELECT /*+ cardinality (rl 10) leading (rl h l) */
……………………
AND EXISTS ( SELECT /*+ no_unnest */ NULL FROM OE_PAYMENT_TYPES_ALL ……………………
UNION ALL SELECT /*+ no_unnest */ NULL FROM OE_PAYMENT_TYPES_ALL ……………………
Approximately 48% of the time was spent on direct path write temp and direct path read temp as the hash join spilled to disk (TEMP
Tablespace).
As previously mentioned, this illustrates an important point: there is a clear high-load statement on which to focus. The results of the
Database In-Memory tests are shown in Table 7. Several tests were performed, but only three have been included in this paper.
Test 2: Force full table scans (35 minutes). A SQL profile was used to remove the USE_NL hint that was causing index scans.
Removing the USE_NL hint resulted in hash joins and full table scans; however, the hash join data spilled to disk (TEMP tablespace).
Clearly, the run time would have reduced further if this had not spilled to disk.
Test 3: Further optimization (14 minutes). A SQL profile was used to add USE_HASH and PARALLEL hints. This test demonstrates
additional optimization.
The Database In-Memory tests are expanded below. Note that the SQL has been simplified throughout.
» Note: If a query shows Index Range Scans and Nested Loop Joins with a large row set, eliminating or suppressing indexes will
probably force a Database In-Memory Scan.
The changes, and the reasons for them, were as follows:
» Removal of the USE_NL hint: the USE_NL hint forces nested loop joins and index scans.
» Addition of the USE_HASH hint: this hint was added to force a hash join and full table scans.
» Addition of the PARALLEL hint: the hash join data (above 900 MB) spilled to disk. Using the PARALLEL hint resulted in the data
being divided across multiple processes; the per-process hash join data reduced and no longer spilled to disk.
The execution plan shows the Database In-Memory full scans as expected, with hash joins and parallel (PX) processes.
SQL> INSERT INTO OE_INIT_CREDIT_SUMM_ ……………………
SELECT /*+ leading(L) Parallel(L) */ ……………………
FROM OE_ORDER_LINES_ALL ……………………
AND EXISTS ( SELECT /*+ no_unnest */ NULL FROM OE_PAYMENT_TYPES_ALL ……………………
UNION ALL SELECT /*+ no_unnest */ NULL FROM OE_PAYMENT_TYPES_ALL ……………………
UNION SELECT /*+ cardinality ( rl 10 ) leading(rl h l) use_hash(rl h l ca) Parallel(rl) Parallel(h) Parallel(l) */
……………………
AND EXISTS ( SELECT /*+ no_unnest */ NULL FROM OE_PAYMENT_TYPES_ALL ……………………
UNION ALL SELECT /*+ no_unnest */ NULL FROM OE_PAYMENT_TYPES_ALL ……………………
Note: When using this SQL profile, the original SQL runs with the modified execution plan without any changes to the original Oracle E-
Business Suite code.
Populating the objects into the Database IM column store resulted in a 35 minute runtime. Further tuning reduced this to 14 minutes,
which equates to a further improvement of 2.5X.
Background
The Receiving Transaction Processor (RCVTP) processes pending or unprocessed receiving transactions. Its operating mode (on-line,
immediate, or batch) is controlled by a profile option that can be set at the site, application, responsibility, and user levels. It is
used extensively within Oracle E-Business Suite, and its functions include:
» Validating Advance Shipment Notice (ASN) and Advance Shipment and Billing Notice (ASBN) information.
» Creating receipt headers and receipt lines.
» Maintaining transaction history, inventory, and supply information.
» Accruing uninvoiced receipt liabilities.
» Maintaining a range of quantity information on purchase orders and requisitions.
» Closing purchase orders for receiving.
Performance issues tend to occur when running in batch mode. In this mode, transactions are stored in the receiving interface tables
but are not processed until the user runs the Receiving Transaction Processor. In addition to being populated by the receiving forms,
the receiving interface tables can also be loaded with large numbers of transactions from external sources. Performance tends to
decrease when there are many “lot or serial controlled items” to be processed in the batch.
Test Configuration
In this test, a Purchase Order with a serial controlled item was created and approved. Next, 18,000 receipts were populated with direct
delivery routing, thereby automatically creating and populating inventory transactions into the receiving interface tables. Finally, the
Receiving Transaction Processor was run. The following tables contained 18,000 entries each, and were populated into the IM column
store:
» RCV_HEADERS_INTERFACE
» RCV_TRANSACTIONS_INTERFACE
The Database In-Memory statistics for these tables are as follows:
OWNER NAME SIZE_MB INMEMORY_SIZE_MB COMP_RATIO INMEMORY_COMPRESSION
---------- ------------------------------ ---------- ---------------- ---------- --------------------
PO RCV_TRANSACTIONS_INTERFACE 1600 97 16.4842241 FOR DML
PO RCV_HEADERS_INTERFACE 320 50 6.35148515 FOR QUERY LOW
Notice that the RCV_TRANSACTIONS_INTERFACE table was compressed with a 16.48 compression ratio. This example also shows
the use of the FOR DML compression option, which is optimized for DML performance.
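These compression options are specified per table when the tables are marked for population. The following is a sketch of the corresponding ALTER TABLE statements; the exact clauses used in the test are assumptions:

SQL> ALTER TABLE PO.RCV_TRANSACTIONS_INTERFACE INMEMORY MEMCOMPRESS FOR DML;
SQL> ALTER TABLE PO.RCV_HEADERS_INTERFACE INMEMORY MEMCOMPRESS FOR QUERY LOW;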
Baseline Test
There are two statements, accounting for 5.9 hours (21,346 seconds) and 1.8 hours (6,944 seconds) respectively:
SQL> UPDATE RCV_TRANSACTIONS_INTERFACE SET PARENT_TRANSACTION_ID = :B3, SHIPMENT_LINE_ID = :B2
WHERE PARENT_INTERFACE_TXN_ID = :B1 AND PARENT_TRANSACTION_ID IS NULL
For this type of data and processing mode, the number of executions is high, and you can see that it corresponds to the number of rows.
Despite this, the two queries that took the longest time reduced from 7.8 hrs (471 mins) to 4.85 mins with Database In-Memory. This
represents about a 100x increase in performance.
The program still took 2.6 hours to run, and could have been reviewed again to look for other tuning opportunities. Although these two
performance issues could have been addressed by adding an index, in this case the Database In-Memory gains were achieved without
any modification.
It is always good practice to review AWR reports along with system load charts, so that you always know what is happening on the
system and can check for unexpected events or extraneous loads. Always consider taking AWR snapshots for a process that runs
longer than a few minutes. If the process spans several AWR reports (default 1 hour), always ensure that you have (and review)
individual AWR reports for the entire period, as some of the metrics will not appear until the final report when, for example, a query
completes. A very common mistake is to use a single AWR report that spans the entire process: this misses short-term spikes, and more
importantly, may mask actual problems. Consider an analogy of driving 100 miles over a total of four hours. You start by driving at
10 mph for three hours, and then at 70 mph for one hour. An overall trip report shows that you drove at 25 mph with no real issues. You
can’t see that you were limited to 10 mph for the first three hours, perhaps because of heavy traffic. The same is true of AWR reports,
and this is one of the most common reasons why performance issues are overlooked – the peaks and troughs tend to be ironed out and
so the overall picture is misleading.
There are several Database In-Memory metrics in an AWR report. These range from the SGA Memory Summary shown in Figure 5 to
the Database In-Memory Segments by Scans in Figure 6 that show you the actual usage.
» Note: The Database In-Memory Segments by Scans section of the AWR report is very useful for checking which objects in the IM
column store are being used in which Oracle E-Business Suite application cycle throughout the month.
» Note: In particular, look for high-load SQL statements that have a low or moderate number of executions. It is unlikely that statements
with a high number of executions querying only a few rows each time will benefit. For further information about how to review AWR
reports and make them as useful as possible, refer to the Review AWR Reports section in Appendix A.
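One way to identify such statements across a range of AWR snapshots is to query DBA_HIST_SQLSTAT directly. This is a sketch; the snapshot ID bind variables are placeholders:

SQL> SELECT SQL_ID, SUM(EXECUTIONS_DELTA) EXECS,
            ROUND(SUM(ELAPSED_TIME_DELTA)/1000000) ELAPSED_SECS,
            SUM(PHYSICAL_READS_DELTA) PHYS_READS
     FROM DBA_HIST_SQLSTAT
     WHERE SNAP_ID BETWEEN :BEGIN_SNAP AND :END_SNAP
     GROUP BY SQL_ID
     ORDER BY ELAPSED_SECS DESC;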
Concurrent programs typically have the highest load, and although the program name usually appears in the AWR report, you will need
to review the Concurrent Request details to establish the parameters that were used. While many concurrent programs simply create
reports, many others update data and therefore cannot be run iteratively, even in development, test, or UAT environments.
Figure 8 compares significant statistics from an AWR Load Profile without and with Database In-Memory. This set of data was created
during an Oracle Discrete Manufacturing Non-Engineered System Benchmark.
» Physical reads: 72,905 without, 11,157 with. Physical reads reduce, and logical reads increase.
» Read IO requests: 650 without, 346 with. Reduces with the amount of IM column store data accessed.
A reduction in physical reads will always have a corresponding increase in logical reads, which will inevitably have a knock-on increase
in the overall system CPU utilization. If you have a very large Database In-Memory area, you may need to look at further tuning the SQL
as a next step.
Figure 9 compares the top 12 highest-load SQL statements, ordered by physical reads. The left side is the traditional database and the
right side is with Database In-Memory. The statistics and SQL IDs for those visible on screen in both AWR reports are highlighted in
different colors.
Notice that the standard system has two very high-load merge SQL statements that are significantly in excess of any others, at 81M and
74M disk reads respectively. These were responsible for the majority of the system load, and were processing a large amount of data.
Sometimes this is a valid situation and cannot be avoided. With Database In-Memory, the highest load statement not only reduces from
81M to 5.7M disk reads, but its elapsed time also reduces from 31 minutes to 10 minutes. Note also that these tables have simply been
populated into the IM column store; the SQL has not been modified using a SQL profile.
The top SQL statements are compared in Table 11. All the SQL statements would have run, but they are either not in the subset in the
image, or they were no longer sufficiently noteworthy to be reported in the AWR report. As mentioned previously, the highest load SQL
statements reduce from 81M reads to 5.7M reads, which represents a substantial reduction in disk access.
Conclusion
This paper has shown how the Oracle Database In-Memory option is a valuable tool that can be used to address various performance
issues. However, deciding what, when and how to use Database In-Memory requires careful thought, and is further complicated by the
Oracle E-Business Suite business cycle, which typically varies throughout the month. You may need to develop a strategy whereby key
objects are placed into the IM column store specifically to speed up month-end processing, but are then replaced with other sets that
map to other business functions throughout the month.
Simply populating an object into the Database IM column store is not necessarily a panacea. It may be that there are no clear high-load
statements in a trace file, due to application logic, row-by-row processing, the presence of embedded hints that preclude Database
In-Memory usage, or the use of data import or other non-In-Memory functions. Also, operations such as sorting and classic aggregation
will still take time. If performance degrades when using Database In-Memory, then there is likely to be a reason, such as in the example
where hash joins spilled to disk.
Not only can Database In-Memory be applied to analytical reports, which is one of its headline features, but this paper has shown it can
also be applied to Oracle Forms queries and DML statements, and in addition, used as an alternative to creating multiple indexes on the
same table.
One of the biggest issues is the use of SQL hints that are used extensively across Oracle E-Business Suite. The examples have shown
how to circumvent these by using a SQL profile, and then how to perform additional tuning to improve performance further. This example
shows that combining Database In-Memory with other approaches can result in a significantly higher level of performance.
White Papers
These papers are generic and do not specifically focus on Oracle E-Business Suite.
» Oracle Database In-Memory
• This paper provides an extensive overview of Oracle Database In-Memory and is a useful precursor to this paper. It should be
considered as the source for many of the generic statements in this document. This is available on the Oracle Optimizer blog:
https://blogs.oracle.com/In-Memory/ and Oracle TechNet:
http://www.oracle.com/technetwork/database/in-memory/overview/twp-oracle-database-in-memory-2245633.html
» Oracle Database In-Memory Advisor
• The Oracle Database In-Memory Advisor (MOS Doc ID 1965343.1)
• Configuring and Managing Oracle E-Business Suite Release 12.1.x Application Tiers for Oracle RAC (MOS Doc ID 1311528.1)
• Configuring and Managing Oracle E-Business Suite Release 12.2.x Forms and Concurrent Processing for Oracle RAC (MOS
Doc ID 2029173.1)
» Adjusting SGA_TARGET and PGA_TARGET
Refer to the following document when sizing the IM column store.
• Oracle Database In-Memory Option (DBIM) Basics and Interaction with Data Warehousing Features (MOS Doc ID 1903683.1).
» Gather Statistics
Before making any changes, ensure that you have all the necessary indexes and that the system and object statistics are up to date.
This document provides an extensive overview of the Oracle database statistics gathering process and describes several methods for
collecting or setting statistics.
• Best Practices for Gathering Statistics with Oracle E-Business Suite (MOS Doc ID 1586374.1)
• Collecting Diagnostic Data for Performance Issues in Oracle E-Business Suite (MOS Doc ID 1121043.1)
» Review AWR Reports
In addition to other diagnostics, this document summarizes how to monitor performance data for Oracle Forms, modules and concurrent
reports using ASH and AWR:
• Performance Diagnosis with Automatic Workload Repository (AWR) (MOS Doc ID 1674086.1)
Blogs
The Oracle Database In-Memory blog contains a wealth of generic information, technical details, ideas and news on Oracle Database In-
Memory from the author of the Oracle White Paper on Oracle Database In-Memory.
» Oracle Database In-Memory on RAC - Part I
• This article starts with background information on how the IM column stores are populated on Oracle RAC and then discusses
how to manage parallelization.
Execution plan:
------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1069K| 43M| 4120K (1)| 00:02:41 |
| 1 | NESTED LOOPS | | 1069K| 43M| 4120K (1)| 00:02:41 |
| 2 | NESTED LOOPS | | 1069K| 38M| 4120K (1)| 00:02:41 |
|* 3 | TABLE ACCESS INMEMORY FULL | OE_ORDER_LINES_ALL | 4069K| 89M| 45164 (6)| 00:00:02 |
|* 4 | TABLE ACCESS BY INDEX ROWID| OE_ORDER_HEADERS_ALL | 1 | 15 | 1 (0)| 00:00:01 |
|* 5 | INDEX UNIQUE SCAN | OE_ORDER_HEADERS_U1 | 1 | | 0 (0)| 00:00:01 |
|* 6 | INDEX UNIQUE SCAN | HZ_CUST_SITE_USES_U1 | 1 | 5 | 0 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------
This SQL includes a USE_NL Hint which forces a nested loop join and index scan of two of the tables. Ideally we want to convert this
into full table scans for Database In-Memory. Therefore we need to create a SQL profile to do exactly that - override the existing
USE_NL hint and force a Database In-Memory full table scan and hash join.
In order to achieve the desired execution plan, use the following SQL:
-- SQL 2
SQL> SELECT /*+ LEADING(L) FULL(H) FULL(SU_L) USE_HASH(L H SU_L) */
L.ORDERED_QUANTITY, L.UNIT_SELLING_PRICE , H.INVOICE_TO_ORG_ID
FROM OE_ORDER_LINES_ALL L, OE_ORDER_HEADERS_ALL H, HZ_CUST_SITE_USES_ALL SU_L
WHERE H.HEADER_ID = L.HEADER_ID AND H.BOOKED_FLAG = 'Y' AND H.OPEN_FLAG = 'Y'
AND L.OPEN_FLAG = 'Y' AND NVL ( L.INVOICED_QUANTITY , 0 ) = 0
AND SU_L.SITE_USE_ID = L.INVOICE_TO_ORG_ID;
Execution plan:
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2656K| 108M| | 43298 (7)| 00:00:02 |
|* 1 | HASH JOIN | | 2656K| 108M| | 43298 (7)| 00:00:02 |
|* 2 | TABLE ACCESS INMEMORY FULL | OE_ORDER_HEADERS_ALL | 37262 | 545K| | 65 (48)| 00:00:01 |
|* 3 | HASH JOIN | | 4962K| 132M| 165M| 43168 (6)| 00:00:02 |
|* 4 | TABLE ACCESS INMEMORY FULL| OE_ORDER_LINES_ALL | 4963K| 108M| | 30861 (8)| 00:00:02 |
| 5 | TABLE ACCESS INMEMORY FULL| HZ_CUST_SITE_USES_ALL | 7701 | 38505 | | 9 (34)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------
SQL> SELECT SQL_ID,SQL_PROFILE, SQL_TEXT FROM V$SQL WHERE SQL_TEXT LIKE '%L.OPEN_FLAG%';
SQL_ID 9dbcsbdbf5cf1 represents the preferred execution plan for SQL 2. You can display the execution plan with the outline option
using the following command:
SQL> SELECT *
FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('9DBCSBDBF5CF1',0, 'OUTLINE'));
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------
SQL_ID 9dbcsbdbf5cf1, child number 0
-------------------------------------
SELECT /*+ LEADING(L) FULL(H) FULL(SU_L) USE_HASH(L H SU_L) */ L.ORDERED_QUANTITY,L.UNIT_SELLING_PRICE ,
H.INVOICE_TO_ORG_ID FROM OE_ORDER_LINES_ALL L, OE_ORDER_HEADERS_ALL H,HZ_CUST_SITE_USES_ALL SU_L
WHERE H.HEADER_ID = L.HEADER_ID AND H.BOOKED_FLAG = 'Y' AND H.OPEN_FLAG = 'Y' AND L.OPEN_FLAG = 'Y'
AND NVL (L.INVOICED_QUANTITY , 0 ) = 0 AND SU_L.SITE_USE_ID =L.INVOICE_TO_ORG_ID
Outline Data
-------------
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('12.1.0.2')
DB_VERSION('12.1.0.2')
OPT_PARAM('_b_tree_bitmap_plans' 'false')
OPT_PARAM('_fast_full_scan_enabled' 'false')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
FULL(@"SEL$1" "L"@"SEL$1")
FULL(@"SEL$1" "SU_L"@"SEL$1")
FULL(@"SEL$1" "H"@"SEL$1")
LEADING(@"SEL$1" "L"@"SEL$1" "SU_L"@"SEL$1" "H"@"SEL$1")
USE_HASH(@"SEL$1" "SU_L"@"SEL$1")
USE_HASH(@"SEL$1" "H"@"SEL$1")
SWAP_JOIN_INPUTS(@"SEL$1" "H"@"SEL$1")
END_OUTLINE_DATA
*/
If this is the first time you are creating a profile, connect as the SYS user and execute the following grant:
SQL> GRANT ADMINISTER SQL MANAGEMENT OBJECT to APPS;
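The profile itself can then be created from the outline hints shown earlier by using DBMS_SQLTUNE.IMPORT_SQL_PROFILE. The following is an abbreviated sketch; the hint list is truncated, and the SQL_ID used to retrieve the statement text is assumed to be that of SQL-1:

SQL> DECLARE
       l_sql_text CLOB;
     BEGIN
       -- Retrieve the full text of the original (USE_NL hinted) statement
       SELECT SQL_FULLTEXT INTO l_sql_text
       FROM V$SQL
       WHERE SQL_ID = '3npc1kr6r527z' AND CHILD_NUMBER = 0;
       DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
         sql_text    => l_sql_text,
         profile     => SYS.SQLPROF_ATTR(
                          'BEGIN_OUTLINE_DATA',
                          'IGNORE_OPTIM_EMBEDDED_HINTS',
                          'FULL(@"SEL$1" "L"@"SEL$1")',
                          'FULL(@"SEL$1" "SU_L"@"SEL$1")',
                          'FULL(@"SEL$1" "H"@"SEL$1")',
                          'USE_HASH(@"SEL$1" "SU_L"@"SEL$1")',
                          'USE_HASH(@"SEL$1" "H"@"SEL$1")',
                          'END_OUTLINE_DATA'),
         name        => 'PROFILE_3npc1kr6r527z',
         force_match => TRUE);
     END;
     /

With force_match => TRUE, the profile also matches statements that differ only in literal values.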
Execute the first statement (SQL-1) and check its execution plan to confirm that the SQL profile is being used:
SQL> SELECT /*+ LEADING (L) USE_NL(H SU_L) */ L.ORDERED_QUANTITY,L.UNIT_SELLING_PRICE,
H.INVOICE_TO_ORG_ID FROM OE_ORDER_LINES_ALL L,OE_ORDER_HEADERS_ALL H, HZ_CUST_SITE_USES_ALL SU_L
WHERE H.HEADER_ID = L.HEADER_ID AND H.BOOKED_FLAG = 'Y' AND H.OPEN_FLAG = 'Y' AND L.OPEN_FLAG = 'Y'
AND NVL ( L.INVOICED_QUANTITY , 0 ) = 0 AND SU_L.SITE_USE_ID = L.INVOICE_TO_ORG_ID;
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------
SQL_ID 3npc1kr6r527z, child number 0
-------------------------------------
SELECT /*+ LEADING (L) USE_NL(H SU_L) */ L.ORDERED_QUANTITY, L.UNIT_SELLING_PRICE , H.INVOICE_TO_ORG_ID FROM
OE_ORDER_LINES_ALL L, OE_ORDER_HEADERS_ALL H,HZ_CUST_SITE_USES_ALL SU_L
WHERE H.HEADER_ID = L.HEADER_ID AND H.BOOKED_FLAG = 'Y' AND H.OPEN_FLAG = 'Y' AND L.OPEN_FLAG = 'Y'
AND NVL (L.INVOICED_QUANTITY, 0 ) = 0 AND SU_L.SITE_USE_ID = L.INVOICE_TO_ORG_ID
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | | 20184 (100)| |
|* 1 | HASH JOIN | | 2656K| 108M| | 20184 (12)| 00:00:01 |
|* 2 | TABLE ACCESS INMEMORY FULL | OE_ORDER_HEADERS_ALL | 37262 | 545K| | 65 (48)| 00:00:01 |
|* 3 | HASH JOIN | | 4962K| 132M| 165M| 20054 (11)| 00:00:01 |
|* 4 | TABLE ACCESS INMEMORY FULL| OE_ORDER_LINES_ALL | 4963K| 108M| | 7747 (26)| 00:00:01 |
| 5 | TABLE ACCESS INMEMORY FULL| HZ_CUST_SITE_USES_ALL | 7701 | 38505 | | 9 (34)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------
1 - access("H"."HEADER_ID"="L"."HEADER_ID")
2 - inmemory(("H"."OPEN_FLAG"='Y' AND "H"."BOOKED_FLAG"='Y'))
filter(("H"."OPEN_FLAG"='Y' AND "H"."BOOKED_FLAG"='Y'))
3 - access("SU_L"."SITE_USE_ID"="L"."INVOICE_TO_ORG_ID")
4 - inmemory(("L"."OPEN_FLAG"='Y' AND NVL("L"."INVOICED_QUANTITY",0)=0))
filter(("L"."OPEN_FLAG"='Y' AND NVL("L"."INVOICED_QUANTITY",0)=0))
Note
-----
- SQL profile PROFILE_3npc1kr6r527z used for this statement
The final line shows that SQL profile - PROFILE_3npc1kr6r527z (and therefore the expected execution plan) was used when running
SQL-1.
To check the details of the columns populated into the IM column store, query V$IM_COLUMN_LEVEL.
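For example (a sketch, filtering on the PO schema used earlier):

SQL> SELECT TABLE_NAME, COLUMN_NAME, INMEMORY_COMPRESSION
     FROM V$IM_COLUMN_LEVEL
     WHERE OWNER = 'PO';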
To check the used and allocated memory for the IM column store:
SQL> SELECT POOL, ALLOC_BYTES/1024/1024 ALLOC_BYTES, USED_BYTES/1024/1024 USED_BYTES, POPULATE_STATUS, CON_ID
FROM V$INMEMORY_AREA;
The following queries identify which tables and table partitions are enabled for the IM column store:
-- For Tables
SQL> SELECT OWNER, TABLE_NAME,INMEMORY,INMEMORY_PRIORITY,INMEMORY_COMPRESSION
FROM DBA_TABLES
WHERE INMEMORY='ENABLED'; -- FOR TABLES
OWNER TABLE_NAME INMEMORY INMEMORY_PRIORITY INMEMORY_COMPRESSION
---------- -------------------- -------- ----------------- -----------------
APPLSYS FND_LOOKUP_TYPES ENABLED CRITICAL FOR QUERY LOW
-- For Partitions
SQL> SELECT TABLE_OWNER, TABLE_NAME, PARTITION_NAME, INMEMORY, INMEMORY_PRIORITY, INMEMORY_COMPRESSION
FROM DBA_TAB_PARTITIONS
WHERE INMEMORY='ENABLED';
To check which objects have been populated into the IM column store:
-- Objects Populated In The IM column store
SQL> SELECT OWNER,SEGMENT_NAME NAME, BYTES,INMEMORY_SIZE,BYTES/INMEMORY_SIZE COMP_RATIO,POPULATE_STATUS STATUS,
BYTES_NOT_POPULATED
FROM V$IM_SEGMENTS;
To check whether all the eligible objects have been populated into the IM column store, use the following SQL. Note that it will return
zero rows (as shown in the example) when all the objects have been populated.
SQL> SELECT DATAOBJ,DATABLOCKS, SUM(BLOCKSINMEM)
FROM GV$IM_SEGMENTS_DETAIL
GROUP BY DATAOBJ, DATABLOCKS HAVING SUM(BLOCKSINMEM) <> DATABLOCKS;
no rows selected
To check the time taken by each object for population into the IM column store:
SQL> SELECT INST_ID, OBJECT_NAME, NUM_ROWS, TIME_TO_POPULATE, IMH.TIMESTAMP
FROM GV$IM_HEADER IMH, DBA_OBJECTS
WHERE DATA_OBJECT_ID=IMH.OBJD;
Once all the objects have completed population, you can check the time taken for population of the entire IM column store:
SQL> SELECT MIN(TIMESTAMP), MAX(TIMESTAMP), MAX(TIMESTAMP)-MIN(TIMESTAMP) TIME_TAKEN FROM GV$IM_HEADER;
This appendix shows how to use the Oracle RAC Server Control Utility (srvctl) to add Oracle Database services and associate them with
a specific Oracle RAC node so that objects can be loaded into particular IM column stores. This approach becomes more complex when
using affinity with multiple IM column stores; it works best when there are no overlaps on the objects used by the concurrent processes
and batch programs that are affinitized to each node.
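The commands below sketch how such a service might be added and started; the database unique name (VISION) and the preferred instance (VISION1) are assumptions based on this example:

$ srvctl add service -d VISION -s SN1 -r VISION1
$ srvctl start service -d VISION -s SN1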
» Note: When using this approach, you will need to manually populate the IM column stores; it is not possible to populate the same
object into multiple IM column stores.
» Note: The table must be assigned PRIORITY NONE. Using any other priority will result in the table being automatically distributed
across all of the IM column stores.
System Setup
The example in this section consists of a three-node Oracle RAC cluster; each node has an IM column store (INMEMORY_SIZE=3GB).
Three tables were created: SOURCE_TESTn, where n corresponds to the node/instance on which they will be populated.
The recommended Oracle E-Business Suite database initialization parameters are as follows:
» PARALLEL_DEGREE_POLICY=MANUAL
» PARALLEL_FORCE_LOCAL=TRUE
Use the following command if they are not already set on your system:
SQL> ALTER SYSTEM SET PARALLEL_DEGREE_POLICY = MANUAL;
System altered.
SQL> ALTER SYSTEM SET PARALLEL_FORCE_LOCAL = TRUE;
System altered.
Add a TNS entry for each of the services based on the following example:
SN1=
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=1.1.1.1)(PORT=1521))
(ADDRESS=(PROTOCOL=tcp)(HOST=1.1.1.2)(PORT=1521))
(ADDRESS=(PROTOCOL=tcp)(HOST=1.1.1.3)(PORT=1521))
(CONNECT_DATA=
(SERVICE_NAME=SN1)
)
)
» Note: It is important to connect to the appropriate service prior to populating the table into the IM column store on a specific node.
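For example, to populate SOURCE_TEST1 into the IM column store on node 1, you might connect through the SN1 service and trigger population with a full scan. This is a sketch; the INMEMORY clause is shown for completeness:

$ sqlplus apps/apps@SN1
SQL> ALTER TABLE SOURCE_TEST1 INMEMORY PRIORITY NONE;
SQL> SELECT /*+ FULL(t) */ COUNT(*) FROM SOURCE_TEST1 t;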
This is repeated for SN2 with SOURCE_TEST2 and SN3 with SOURCE_TEST3.
The following SQL can be used to check that the tables are populated into each of the IM column stores:
SQL> SELECT INST_ID,SEGMENT_NAME,BYTES BYTES,INMEMORY_SIZE IM_SIZE,POPULATE_STATUS,INMEMORY_DISTRIBUTE,INMEMORY_DUPLICATE
FROM GV$IM_SEGMENTS ORDER BY SEGMENT_NAME,INST_ID;
Simply connect to the SN1 service and execute the query. This can be repeated for each of the other services and objects, with the
same result. The test shows that an In-Memory full table scan occurs (and there are no physical disk reads). If you inadvertently query
an object that does not exist in the local IM column store, you will see a radical difference in performance, which will be accounted for by
physical disk reads.
The following extract shows that an In-Memory full table scan occurs for the object in the local IM column store.
$ sqlplus apps/apps@SN1
SQL> SET AUTOTRACE ON TIMING ON LINESIZE 200 PAGES 120
SQL> SELECT COUNT(*) FROM (SELECT /*+ PARALLEL(AUTO) */ LINE, COUNT(OBJ#) FROM SOURCE_TEST1 GROUP BY LINE);
-----------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | | 23929 (6)| 00:00:01 | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | | | |
| 2 | PX COORDINATOR | | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10001 | 1 | | | | | Q1,01 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | | | | | Q1,01 | PCWP | |
| 5 | VIEW | | 319K| | | 23929 (6)| 00:00:01 | Q1,01 | PCWP | |
| 6 | HASH GROUP BY | | 319K| 1558K| 875M| 23929 (6)| 00:00:01 | Q1,01 | PCWP | |
| 7 | PX RECEIVE | | 319K| 1558K| | 23929 (6)| 00:00:01 | Q1,01 | PCWP | |
| 8 | PX SEND HASH | :TQ10000 | 319K| 1558K| | 23929 (6)| 00:00:01 | Q1,00 | P->P | HASH |
| 9 | HASH GROUP BY | | 319K| 1558K| 875M| 23929 (6)| 00:00:01 | Q1,00 | PCWP | |
| 10 | PX BLOCK ITERATOR | | 76M| 363M| | 1918 (9)| 00:00:01 | Q1,00 | PCWC | |
| 11 | TABLE ACCESS INMEMORY FULL|SOURCE_TEST1 | 76M| 363M| | 1918 (9)| 00:00:01 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------------------------------------
Note
-----
- dynamic statistics used: dynamic sampling (level=AUTO)
- automatic DOP: Computed Degree of Parallelism is 4 because of degree limit
- parallel scans affinitized for inmemory
Statistics
----------------------------------------------------------
86 recursive calls
0 db block gets
248 consistent gets
0 physical reads
0 redo size
544 bytes sent via SQL*Net to client
552 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
This test shows what happens when you access all three tables using a service that is associated with one of the nodes. In this
example an In-Memory full table scan occurs for SOURCE_TEST1 on Node-1 (VISION1), but the data for SOURCE_TEST2 and SOURCE_TEST3,
which are populated in the IM column stores on the other nodes, is accessed via physical disk reads.
Note that there is a problem with both Autotrace and the cursor execution plan (shown below): they display TABLE ACCESS INMEMORY FULL
for all of the tables, whereas there were actually 8964283 physical disk reads (as shown in the autotrace statistics).
$ sqlplus apps/apps@SN1
SQL> SELECT COUNT(1) FROM (SELECT /*+ PARALLEL(AUTO) USE_HASH(T1 T2 T3) */ T1.*
FROM SOURCE_TEST1 T1, SOURCE_TEST2 T2, SOURCE_TEST3 T3
WHERE T1.LINE = T2.LINE AND T1.LINE= T3.LINE);
Execution Plan
----------------------------------------------------------
Plan hash value: 3990536709
-----------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 15 | | 1538K (24)| 00:01:01 | | | |
| 1 | SORT AGGREGATE | | 1 | 15 | | | | | | |
| 2 | PX COORDINATOR | | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10003 | 1 | 15 | | | | Q1,03 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 15 | | | | Q1,03 | PCWP | |
|* 5 | HASH JOIN | | 254G| 3560G| 75M| 1538K (24)| 00:01:01 | Q1,03 | PCWP | |
| 6 | PX RECEIVE | | 18M| 89M| | 435 (7)| 00:00:01 | Q1,03 | PCWP | |
| 7 | PX SEND HASH | :TQ10000 | 18M| 89M| | 435 (7)| 00:00:01 | Q1,00 | P->P | HASH |
| 8 | PX BLOCK ITERATOR | | 18M| 89M| | 435 (7)| 00:00:01 | Q1,00 | PCWC | |
| 9 | TABLE ACCESS INMEMORY FULL | SOURCE_TEST3 | 18M| 89M| | 435 (7)| 00:00:01 | Q1,00 | PCWP | |
|* 10 | HASH JOIN | | 4407M| 41G| 75M| 27692 (24)| 00:00:02 | Q1,03 | PCWP | |
| 11 | PX RECEIVE | | 18M| 89M| | 434 (7)| 00:00:01 | Q1,03 | PCWP | |
| 12 | PX SEND HASH | :TQ10001 | 18M| 89M| | 434 (7)| 00:00:01 | Q1,01 | P->P | HASH |
| 13 | PX BLOCK ITERATOR | | 18M| 89M| | 434 (7)| 00:00:01 | Q1,01 | PCWC | |
| 14 | TABLE ACCESS INMEMORY FULL| SOURCE_TEST2 | 18M| 89M| | 434 (7)| 00:00:01 | Q1,01 | PCWP | |
| 15 | PX RECEIVE | | 76M| 363M| | 1918 (9)| 00:00:01 | Q1,03 | PCWP | |
| 16 | PX SEND HASH | :TQ10002 | 76M| 363M| | 1918 (9)| 00:00:01 | Q1,02 | P->P | HASH |
| 17 | PX BLOCK ITERATOR | | 76M| 363M| | 1918 (9)| 00:00:01 | Q1,02 | PCWC | |
| 18 | TABLE ACCESS INMEMORY FULL| SOURCE_TEST1 | 76M| 363M| | 1918 (9)| 00:00:01 | Q1,02 | PCWP | |
-----------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("T1"."LINE"="T3"."LINE")
  10 - access("T1"."LINE"="T2"."LINE")
Note
-----
- automatic DOP: Computed Degree of Parallelism is 4 because of degree limit
- parallel scans affinitized for inmemory
Autotrace Statistics
----------------------------------------------------------
2364112 consistent gets
8964283 physical reads
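Since the plan output cannot be trusted in this case, a more reliable cross-check is to read the session I/O counters directly from the standard V$MYSTAT and V$STATNAME views. The following is a sketch; exact statistic names can vary slightly by release:
SQL> SELECT sn.NAME, ms.VALUE
  2  FROM V$MYSTAT ms, V$STATNAME sn
  3  WHERE ms.STATISTIC# = sn.STATISTIC#
  4  AND sn.NAME IN ('physical reads', 'IM scan rows');
A large 'physical reads' value alongside a comparatively small 'IM scan rows' value confirms that most of the data was read from disk rather than from the local IM column store.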
CONNECT WITH US
blogs.oracle.com/oracle
facebook.com/oracle
twitter.com/oracle
oracle.com

Copyright © 2015, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only, and the
contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other
warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or
fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are
formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any
means, electronic or mechanical, for any purpose, without our prior written permission.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and
are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are
trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.