
Using Oracle Database In-Memory

with Oracle E-Business Suite

ORACLE WHITE PAPER | AUGUST 2015


Table of Contents
Executive Overview 1
Introduction 1
How to Approach Tuning 2
Why Not Just Use Another Index? 2
Planning Your Approach 2
Planning the Implementation of Database In-Memory 3
Collating Candidate SQL 3
Database In-Memory Object Criteria 4
Database In-Memory Query Criteria 4
Collating System Performance Issues 5
Predicting Database In-Memory Requirements - Compression 5
Oracle Database In-Memory Advisor 7
Using Priority 7
Sizing the Database In-Memory Column Store 8
Enabling Database In-Memory 8
Strategy for Specifying Database In-Memory Parameters 8
Populate the Database In-Memory Column Store 9
Check the Size and Population Status 10
Check Database In-Memory Object Usage 10
Oracle RAC Support for Database In-Memory - DISTRIBUTE/DUPLICATE 11
Degree of Parallelism 12
Oracle E-Business Suite Application Affinity 12
In-Memory Best Practices with Oracle E-Business Suite 13
DISTRIBUTE/DUPLICATE Examples 14
Database In-Memory System Load Considerations 18
Oracle E-Business Suite Use Cases and Examples 18
Example 1: Order Organizer Form 19
Example 2: Initialize Credit Summaries 21
Example 3: Receiving Transaction Processor 24
Example 4: Using AWR Reports – The System Perspective 26
Conclusion 29
Appendix A: Related Documentation 30
Oracle Database Documentation 30
White Papers 30
My Oracle Support Knowledge Documents 30
Blogs 31
Appendix B: Implementing Plan Changes Using SQL Profiles 32
SQL Profile Advantages 32
Creating a SQL Profile 32
Appendix C: Diagnostic Queries 35
Database In-Memory Views 35
Database In-Memory Data Dictionary Reference 35
Appendix D: Using Oracle E-Business Suite Application Affinity 37

USING ORACLE DATABASE IN-MEMORY WITH ORACLE E-BUSINESS SUITE


Executive Overview
This paper provides recommendations and guidelines for effective use of the Oracle Database In-Memory option with Oracle E-Business
Suite Release 12.1.3 and above. Database In-Memory is one of a number of options that can be deployed as a tailored solution to
address performance concerns and scalability requirements. It can be used with Oracle E-Business Suite OLTP and reporting, without
the need for any application changes.

The paper assesses key Database In-Memory characteristics and restrictions, with the goal of helping you to utilize the option to
enhance Oracle E-Business Suite performance. This approach shows where it may help (such as with full table scans), and where it
may not (such as row-by-row functions). Examples are provided to show how performance can be improved by simply placing objects
into the In-Memory column store, and how you can go on to significantly improve performance by employing SQL profiles and hints.

The examples in this document take a step away from the classic headline feature of analytical reports that you might have been
expecting. Instead, they show how Database In-Memory can be applied to Oracle Forms queries and DML statements, and
consequently how you can use it to maximize business flexibility in Oracle E-Business Suite forms and screens without needing to
create a large number of indexes that would otherwise be required, for example, to enable querying an arbitrary set of form fields.

In addition, this document shows you how to selectively circumvent the SQL hints that are used extensively across Oracle E-Business
Suite, in such a way that you achieve maximum performance in a particular application without needing to drop indexes that would
inevitably degrade performance elsewhere.

Oracle Database In-Memory shouldn’t be considered as a generic tuning option, but one of a number of possible methods that can be
deployed when dealing with performance issues or scalability concerns. This paper is best used in conjunction with other resources such
as the Oracle Database In-Memory white paper.

Introduction
The Oracle E-Business Suite database uses the conventional row format, which is the most easily understood and efficient structure for
OLTP. In this format, data is maintained on a row-by-row basis using insert, update, and delete DML operations.

The Database In-Memory column store (IM column store) is a new static pool in the Oracle Database System Global Area (SGA). It is
contained within the Database In-Memory Area, as shown in Figure 1. The IM column store supplements the buffer cache, allowing data
such as groups of columns, tables, partitions, and materialized views to be stored in memory in both row and column formats. This
storage differs from other parts of the SGA in that the data is compressed, and is not aged out or displaced.

Figure 1: The Database In-Memory Area within the SGA

These row and column formats are most easily understood by mapping the data to a spreadsheet. The rows of the spreadsheet
correspond to individual records, while analytic functions such as sum and average are typically applied to individual columns. Analytic-style
queries typically perform complex aggregation and summary functions on a small subset of columns, but access the majority of the rows
in the table. The IM column store transparently enables the database to perform scans, joins, and aggregates much faster than when
exclusively using the row format.



This dual-format architecture, shown in Figure 2, provides the best performance for row-based OLTP transactions and DML operations
(via the buffer cache), and columnar operations for analytical queries typically used in reporting (via the IM column store). Both formats
are simultaneously active and mutually consistent.

Figure 2: The Row and Column Dual-Format Architecture

In Oracle E-Business Suite, particularly with Oracle Business Intelligence and other reporting applications, the value proposition can be
increased by storing more data in the IM column store by only populating essential columns or partitions (rather than entire tables), and
using one of the available compression options. The highest efficiencies occur when only a few columns are selected, but the query
accesses a large portion of the data.

How to Approach Tuning


When addressing performance, before tracing the application and reviewing SQL Trace files and SQL execution plans, always review
the way that the application is being used. Next, check that object statistics are both up to date and representative of the underlying data
(refer to the Gather Statistics section of Appendix A for further information). There are several ways to optimize the applications that you
use, and the IM column store should be thought of as just one of the available optimization methods or tools, as are (for example),
partitioning and indexes.

Why Not Just Use Another Index?


Most OLTP systems use indexes to improve query performance for particular columns. These will only help up to a point. They don’t
tend to benefit analytic functions, especially when most of the data is being accessed and a full table scan (FTS) is therefore likely to be
performed. Also, even if indexes help performance, remember that their deployment introduces or increases the maintenance overhead
for every corresponding DML operation.

Oracle E-Business Suite includes several multi-purpose screens that can be functionally used in many different ways. Some of these
screens have many queryable fields, and providing an out-of-the-box index for every column would lead to an unnecessary maintenance
overhead. Ideally, you will create custom indexes to suit the specific way your users work. Even then, you may only be able to cater for
the most common operational scenarios or modus operandi, with the indexes being ignored by the optimizer when a query needs to
access most or all of the data via a full table scan. This type of query - where adding indexes is in principle desirable but in practice not
feasible - is the kind that will typically benefit most when the associated objects are resident in the IM column store.

The advantage of having data in both formats is that the optimizer will analyze the query and decide which will provide the best
performance. As a general rule, the optimizer will choose a full table scan (via the IM column store) for analytical operations that access
the majority of rows in the table, but it will choose an index access for more selective queries if an index exists.
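The optimizer's choice is visible in the execution plan: an in-memory scan appears as TABLE ACCESS INMEMORY FULL. The following sketch (AP_INVOICES_ALL is used purely as an illustration, and assumes the table has already been marked INMEMORY and populated):

SQL> EXPLAIN PLAN FOR
  2  SELECT org_id, SUM(invoice_amount)
  3  FROM   ap.ap_invoices_all
  4  GROUP BY org_id;

SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
...
|   2 |   TABLE ACCESS INMEMORY FULL| AP_INVOICES_ALL | ...

A more selective query (for example, one filtering on INVOICE_ID) would instead show a conventional index access path in the plan.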

Planning Your Approach


Understanding the Oracle E-Business Suite functionality and application lifecycle is an essential precursor to any tuning or performance
exercise. As a part of your analysis and testing, you could populate an object into the IM column store and then, if necessary, force the
optimizer to use a columnar operation by marking an index as invisible as opposed to dropping it. Using this approach means that if
performance doesn’t improve, you can restore the system by simply altering the index to make it visible again, rather than having to
rebuild the index from scratch.
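For example (the index name here is illustrative), a minimal test sequence might be:

SQL> ALTER TABLE ap.ap_invoices_all INMEMORY;
SQL> ALTER INDEX ap.ap_invoices_n1 INVISIBLE;
    -- ...run and time the problem query...
SQL> ALTER INDEX ap.ap_invoices_n1 VISIBLE;

While the index is invisible, the optimizer ignores it (unless OPTIMIZER_USE_INVISIBLE_INDEXES is set to TRUE), but the index continues to be maintained by DML, so making it visible again takes effect immediately.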



However, while this may improve the performance of a particular query or module, it may have a detrimental effect on other Oracle E-
Business Suite OLTP transactions, concurrent processes, or other queries that use or reference the same object in other parts of the
application. The module interaction and other functional complexities make it difficult to test every permutation or plan for every
eventuality. For this reason, whenever making system changes, you should always consider your most critical business processes that
use or refer to particular objects to ensure that they are not adversely affected.

Another common scenario that needs to be considered is that Oracle E-Business Suite implementations tend to have several lifecycle
phases, not only throughout the month, but also (for example), at month end, quarter end, and year end. You may not realize the full
effect of an application change until you experience a significant issue with a component that was not included in your testing. For this
reason, consider using SQL profiles (as described in Appendix B) to modify individual query execution plans. As you will see later in this
document, these provide another useful tool that can help improve the performance of Database In-Memory queries, in a very controlled
manner, especially where (as is common with Oracle E-Business Suite) you may not want to, or should not, change the core objects or
application code.

Planning the Implementation of Database In-Memory


This section starts by walking you through how to build a list of the processes that you need to investigate. As part of this task, you will
need to establish a set of criteria for reviewing the objects referenced by the SQL statements to check that they are suitable candidates
to be populated into the IM column store.

The section then goes on to review the various compression ratios that can be applied to increase the data density. Once you have this
information, you will be in a good position to estimate the total amount of space that you will need. This is essential if you plan to procure
additional memory or reallocate machine resources, even on a test system. Many Database In-Memory objects are likely to be large
transient transaction tables containing thousands or millions of rows. It is therefore also important that you include projected data growth.

There are two ways to reduce the amount of memory that you need. First, you may only need to populate a subset of columns, rather
than the whole of an object, into the IM column store. This is described further in the next section. Second, with Oracle E-Business Suite
it is important to remember that there are usually several specific business phases. If particular programs or functions are exclusive to a
specific part of the business cycle, then clearly they do not require space in the IM column store at other times. While these phases may
not be clearly defined within your business model, almost every organization will have very specific month/quarter/year-end processing
periods. Identifying them accurately will enable you to release the space for use by other objects during the rest of the month.
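For example, objects used only at month end could be populated at the start of the close and released afterwards (GL_BALANCES is used here purely as an illustration):

SQL> -- At the start of month-end processing:
SQL> ALTER TABLE gl.gl_balances INMEMORY PRIORITY HIGH;
SQL> -- Once the close is complete, release the space:
SQL> ALTER TABLE gl.gl_balances NO INMEMORY;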

» Note: The initial population of the IM column store is a CPU and disk intensive activity and may affect the performance of other
concurrent workloads.
» Note: Refer to Using Oracle Database In-Memory with Oracle E-Business Suite (MOS Doc ID 2025309.1), which lists the patches
required to use Database In-Memory with Oracle E-Business Suite.

Collating Candidate SQL


As mentioned, Database In-Memory helps provide the maximum return on investment by only requiring you to populate a subset of
essential columns that are directly accessed by, or otherwise essential to, queries or DML operations. Later in this paper you will see
that Database In-Memory can also benefit insert, update, or delete statements where subqueries perform full table scans on large tables.

The inputs to your analysis will typically include:


» AWR/ADDM/ASH reports. Review high-load queries that take a long time. You need to review at least 30 days to identify
specific business cycles where, for example, particular reports only run at a particular time during the month. The IM column
store could be better utilized for other objects (or a higher proportion of those objects) during the rest of the month.
» User performance complaints. Identify the subset of these relating to high-load SQL or system load issues that in turn may be
caused by other concurrent SQL loads. A SQL statement that causes an unacceptable delay for a user may not be sufficiently
large to show on an AWR report. Customer-facing user problems should generally be assigned the highest priority.
» Concurrent Processing batch window overflow. As systems grow, the amount of data inevitably increases and eventually
programs no longer fit into their scheduled batch windows. Processing then encroaches on other work, including, for example,
critical system events such as cross-system synchronization, and overruns into the online working day.
» Oracle Database In-Memory Advisor. Refer to the Oracle Database In-Memory Advisor section.
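One way to obtain an initial list of high-load candidates from AWR data is a query along the following lines (a sketch; it assumes an AWR retention of at least 30 days and a Diagnostics Pack license):

SQL> SELECT s.sql_id, s.module,
  2         ROUND(SUM(s.elapsed_time_delta)/1e6) elapsed_secs,
  3         SUM(s.executions_delta) execs
  4  FROM   dba_hist_sqlstat s, dba_hist_snapshot n
  5  WHERE  n.snap_id = s.snap_id
  6  AND    n.dbid = s.dbid
  7  AND    n.instance_number = s.instance_number
  8  AND    n.begin_interval_time > SYSDATE - 30
  9  GROUP  BY s.sql_id, s.module
 10  ORDER  BY 3 DESC;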



Once you have collated a list of the candidates for Database In-Memory, use SQL Trace and TKPROF to review high-load SQL
statements that individually take a long time and access large amounts of data.

Database In-Memory Object Criteria


The first step is to decide on the objects to populate into the IM column store. This may require substantial analysis: Oracle E-Business
Suite has a huge number of highly customizable applications, and it is unlikely that any two customers will set up or use the applications
in exactly the same way. This flexibility means that it is not possible to provide a “one size fits all” list of candidate objects.

The IM column store is particularly intended to benefit the following types of operations:

» A query that scans many rows but only references or accesses a small subset of columns from a table, partition, or materialized
view.
» A query that scans many rows, and applies filters that use operators such as =, <, >, and IN.
» A query that uses aggregate functions such as SUM or AVG.
» A query that processes data in groups such as 'sales per calendar month', uses buckets, or involves very large tree-walks.
» A query that uses a nested loop join.
» An analytical query with a high aggregation cost.
» Accelerating joins by converting predicates on small dimension tables into filters on a large fact table.
» Delete and update statements that are in turn based on sub-queries that perform full table scans of large tables.
» Note: In summary, look for statements that have a high aggregate cost but a low to moderate frequency of execution.
Almost all objects in the database are eligible to be populated into the IM column store, but there are some exceptions. A database
object is not eligible to be populated into the IM column store if it is:

» An object owned by the SYS user and stored in the SYSTEM or SYSAUX tablespace.
» An object smaller than 64KB, as such objects would waste a considerable amount of space (In-Memory storage is allocated in 1MB chunks).
» An Index Organized Table (IOT), such as those implemented with Advanced Queuing (AQ$ objects) and Oracle Text Search index
modules (DR$ objects). IOTs are fundamentally row-based.
» A clustered table. However, Oracle E-Business Suite does not use any of these at present.
» An object that includes either LONGs (deprecated since Oracle 8) or out-of-line LOBs. An example is FND_LOBS, which stores
information about all LOBs managed by the Generic File Manager (GFM).
» A “busy” object. Issuing the In-Memory alter table command will result in an ORA-54 error (resource busy and acquire with
NOWAIT) if the object is locked due to DML. This will occur with OE_ORDER_LINES_ALL if the Order-to-Cash flow is running.
Although not a good candidate, this also occurs with FND_CONCURRENT_REQUESTS if the concurrent managers are running.
» Note: Avoid using the Oracle E-Business Suite Vision database for testing as many tables are below the 64KB threshold (refer to the
Enabling Database In-Memory section for further information) and some also contain LONGs. Small databases and test environments
should be considered unsuitable, as they are unlikely to have similar data distribution and skew as your production database. It is
essential to test with large data volumes in order to generate appropriate execution plans, check the level of compression that can be
achieved, and review scalability.
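If you do hit the ORA-54 error on a busy object, one option (verify the parameter behavior for your release) is to allow the DDL to wait for the lock rather than fail immediately:

SQL> ALTER SESSION SET DDL_LOCK_TIMEOUT = 60;  -- wait up to 60 seconds
SQL> ALTER TABLE ont.oe_order_lines_all INMEMORY;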
Most systems will have insufficient memory to store an entire production database, but may have enough for a particular schema. The
IM column store can be enabled at the tablespace level, so that all tables and materialized views in the tablespace are automatically
marked as candidates to be populated into the IM column store. If you have enough space, all the objects will be populated into the IM
column store, with the exception of any excluded by the list of conditions above.
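For example, to mark every eligible object in an Oracle E-Business Suite transaction-data tablespace as an In-Memory candidate (the tablespace name is illustrative):

SQL> ALTER TABLESPACE apps_ts_tx_data
  2    DEFAULT INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY LOW;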

You might, for example, plan to populate all the General Ledger objects into the IM column store and then test the relative performance
of month-end reports. Although this provides a fast approach for testing, it may be far from representative, and you may not see much
benefit as the individual processes may not be suitable.

Database In-Memory Query Criteria


When considering candidate objects for Database In-Memory, review queries or sub-queries that perform full table scans, or query large
amounts of data but only return or use a small subset of rows. Full table scans of large tables with many “throw-away” rows are slow,
and excessive I/O reads may cause problems that affect the entire system. Disk I/O tends to be one of the slowest operations, so if the



full table scan cannot be avoided, populating these objects into the IM column store should improve performance for those SQL
operations and subsequently reduce the composite system I/O load.

Even if an object exists in the Database IM column store, it may not be used for a variety of reasons. The IM column store does not
typically improve performance for queries that:

» Contain complex predicates.


» Select or reference a relatively large number of columns.
» Have multiple large table joins.
» Are CPU-bound.
» Have SQL profiles or baselines that preclude a Database In-Memory operation.
» Contain a hint that precludes a Database In-Memory operation.
» Are indexed and only return a small amount of the data.
» Perform row-by-row functions.
Oracle E-Business Suite queries are typically optimized using embedded SQL hints that instruct the optimizer to use particular indexes
and thereby negate the use of the IM column store objects. For example, using an INDEX hint will prevent the expected full table scan
via the In-Memory column store. For further information, refer to Appendix B, which describes using SQL profiles with hints.
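As an illustrative sketch (the index name is hypothetical), an embedded hint such as /*+ INDEX(l oe_order_lines_n1) */ forces an index access path; overriding it with a FULL hint - typically applied through a SQL profile rather than by editing application code - allows the in-memory scan:

SQL> SELECT /*+ FULL(l) */ line_id, ordered_quantity
  2  FROM   ont.oe_order_lines_all l
  3  WHERE  flow_status_code = 'AWAITING_SHIPPING';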

Collating System Performance Issues


At the system level, use the Oracle Database In-Memory Advisor to identify good candidates (refer to the Oracle Database In-Memory
Advisor section in Appendix A for further information). Within Oracle E-Business Suite, more granular performance strategies include:
» Slow user experience: Start by performing all the usual checks, including (for example) other intervening factors such as the
network. Also check that the form or screen is being used effectively, without open queries or returning unnecessarily large result
sets. Use the standard approach of tracing the problematic part of the transaction to identify the query and database objects.
» Slow programs or reports: Reviewing a trace file will enable you to compile a list of candidate objects to populate into the IM
column store. Sort the TKPROF output file using sort=exeela, prsela, fchela. If the program runs for a long time, also review
AWR and ASH reports as they will provide the system perspective.
» High system load: Use AWR and ASH reports to identify the SQL and related modules that perform large amounts of Disk I/O.
Always look for hot segments with high Disk I/O that are being used concurrently and are consuming excessive resources. It is
important to remember that SQL from concurrent programs may not always be the same and will depend on the program
parameters that were used. Similarly, data access methods within a program or report may change depending on the parameter
ranges. Always review the data from each of the distinct application phases and throughout the month; ensure that the report
retention spans more than 30 days so that anomalies can be quickly identified.
As mentioned, Oracle E-Business Suite customers typically have several application phases, each of which will have different
operational characteristics. For each phase, identify reports and processes:
» By the amount of data processed: Review reports and processes that access very large amounts of data and process
information in a columnar format. Although you may normally associate these with month/period end, there may be many other
candidates from other parts of the applications business cycle.
» By function: Review analytical and summary reports and processes that would be expected to benefit most with Database In-
Memory. Examples include Business Intelligence and General Ledger reports. Several reports within the same module generally
access the same underlying data. Many customers group these within a dedicated Concurrent Manager and some use node-
affinity with Oracle RAC. There are issues when using Database In-Memory that you need to be aware of. Refer to the
Parallelization Best Practices section regarding DISTRIBUTE/DUPLICATE for restrictions.

Predicting Database In-Memory Requirements - Compression


The traditional row-based approach using the buffer cache needs to scan the entire table or segment. In contrast, while analytic queries
examine all the entries in a column, they typically only access a subset of columns. Furthermore, data is automatically compressed as it
is populated into the IM column store, and so will need less space than on disk.



This suggests the following key techniques for increasing the number of objects that fit in the Database IM column store:

» Only populate the minimum set of columns that are directly used in the SELECT list and the WHERE clause predicates of the
SQL statement.
» Consider including all of the columns that are referenced across any reports or concurrent processes that use the same object,
but can be run with a range of parameters, options, or arguments.
» For dynamic SQL, consider including all the columns that could be accessed across the range of functional permutations.
» If an object is already partitioned, only populate the minimum number of necessary partitions into the IM column store.
Importantly, for this to be effective with Oracle E-Business Suite, ensure that the Application SQL includes predicates that match
the partition key. This is likely to be the case where tables are supplied with partitions already defined, but needs careful review if
implementing a new partitioning, or sub-partitioning scheme.
» Test performance at various levels of compression.
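For example, the following commands (using illustrative table, column, and partition names) exclude rarely used columns from population and populate a single partition at a higher compression level:

SQL> ALTER TABLE ont.oe_order_lines_all
  2    INMEMORY MEMCOMPRESS FOR QUERY LOW
  3    NO INMEMORY (attribute1, attribute2);

SQL> ALTER TABLE ar.ra_customer_trx_lines_all
  2    MODIFY PARTITION p_2015_q2
  3    INMEMORY MEMCOMPRESS FOR CAPACITY LOW;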
Each of the six compression levels and their relative optimizations are shown in Table 1.

Compression Level Optimization


NO MEMCOMPRESS Data is populated without any compression.

MEMCOMPRESS FOR DML Minimal compression optimized for DML performance.

MEMCOMPRESS FOR QUERY LOW Optimized for query performance (default).

MEMCOMPRESS FOR QUERY HIGH Optimized for query performance as well as space saving.

MEMCOMPRESS FOR CAPACITY LOW Balanced with a greater bias towards space saving.

MEMCOMPRESS FOR CAPACITY HIGH Optimized for space saving.

Table 1: Compression Levels

By default, data is compressed using the FOR QUERY LOW option, which provides the best performance for queries. The FOR
CAPACITY options apply additional compression, which has a larger penalty on decompression as each entry must be decompressed
before WHERE clause predicates can be applied.

Oracle provides a utility called the Compression Advisor (DBMS_COMPRESSION), which has been enhanced to support Database In-
Memory and can be used to predict IM column store requirements. The Advisor uses a sample of the table data and provides an
estimate of the compression ratio that can be achieved using MEMCOMPRESS. For further information about this utility, refer to the
Oracle Database documentation, the Oracle Technology Network site, and the Oracle Database In-Memory white paper.
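A minimal invocation for an In-Memory estimate might look like the following (a sketch; the scratch tablespace and object names are assumptions, and the procedure signature should be checked against your database version):

SQL> SET SERVEROUTPUT ON
SQL> DECLARE
  2    l_blkcnt_cmp   PLS_INTEGER; l_blkcnt_uncmp PLS_INTEGER;
  3    l_row_cmp      PLS_INTEGER; l_row_uncmp    PLS_INTEGER;
  4    l_cmp_ratio    NUMBER;      l_comptype_str VARCHAR2(100);
  5  BEGIN
  6    DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
  7      scratchtbsname => 'APPS_TS_TX_DATA',
  8      ownname        => 'ONT',
  9      objname        => 'OE_ORDER_LINES_ALL',
 10      subobjname     => NULL,
 11      comptype       => DBMS_COMPRESSION.COMP_INMEMORY_QUERY_LOW,
 12      blkcnt_cmp     => l_blkcnt_cmp,  blkcnt_uncmp => l_blkcnt_uncmp,
 13      row_cmp        => l_row_cmp,     row_uncmp    => l_row_uncmp,
 14      cmp_ratio      => l_cmp_ratio,   comptype_str => l_comptype_str);
 15    DBMS_OUTPUT.PUT_LINE('Estimated In-Memory compression ratio: ' || l_cmp_ratio);
 16  END;
 17  /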

Table 2 shows the original table size (on disk), the Database In-Memory size, and the corresponding compression ratio for a set of
tables that were used during an Order Management test. The default compression option is FOR QUERY LOW. The compression ratio
depends on the type of columns and number of duplicate values. In this test, the entire tables were populated into the IM column store
(of course, less space would have been used with only the actual subset of columns essential to the queries).
                                          NO MEM-                 FOR QUERY        FOR QUERY    FOR CAPACITY  FOR CAPACITY
                              Table Size  COMPRESS    FOR DML     LOW (Default)    HIGH         LOW           HIGH
Table Name                    (MB)        Size/Ratio  Size/Ratio  Size/Ratio       Size/Ratio   Size/Ratio    Size/Ratio

RA_INTERFACE_LINES_ALL        1536        649 / 2     100 / 15     33 / 46          25 / 60      13 / 115       9 / 164
OE_PRICE_ADJUSTMENTS           537        240 / 2     135 / 4     142 / 4           84 / 6       73 / 7        68 / 8
OE_PAYMENTS                    279        146 / 2      51 / 5      39 / 7           36 / 8       36 / 8        36 / 8
OE_ORDER_LINES_ALL            1308        771 / 2     610 / 2     257 / 5          239 / 5      198 / 7       165 / 8

Table 2: Example Compression Levels (In-Memory size in MB / compression ratio for each MEMCOMPRESS option)



Notice that the average default compression ratio for the main application tables is approximately 6, whereas it was 46 for the interface
table (RA_INTERFACE_LINES_ALL), which may have had sparsely populated blocks.

For tables that have already been populated into the IM column store, use the following query to calculate the compression ratio:
SQL> SELECT SEGMENT_NAME,ROUND(SUM(BYTES)/1024/1024/1024,2) "ORIG. BYTES GB",
ROUND(SUM(INMEMORY_SIZE)/1024/1024/1024,2) "IN-MEMORY GB",
ROUND(SUM(BYTES-BYTES_NOT_POPULATED)*100/SUM(BYTES),2) "% BYTES IN-MEMORY",
ROUND(SUM(BYTES-BYTES_NOT_POPULATED)/SUM(INMEMORY_SIZE),2) "COMPRESSION RATIO"
FROM V$IM_SEGMENTS
GROUP BY OWNER,SEGMENT_NAME
ORDER BY SUM(BYTES) DESC;

SEGMENT_NAME Orig. Bytes GB In-Memory GB % bytes In-Memory compression ratio


--------------------- -------------- ------------ ----------------- -----------------
OE_ORDER_LINES_ALL 1 0 100 5
OE_PRICE_ADJUSTMENTS 1 0 100 4

Oracle Database In-Memory Advisor


The Oracle Database In-Memory Advisor (My Oracle Support Knowledge Document 1965343.1) is licensed as part of the Oracle
Database Tuning Pack and can be run on Oracle Database 11.2.0.3 and above. It analyzes SQL plan cardinality, use of parallel query, and other
statistics in Active Session History (ASH) and AWR data, to identify analytic workloads that will benefit from Database In-Memory.

The Advisor produces a report that includes the following recommendations:

» A list of the candidate objects that should be populated into the Database IM column store.
» The recommended compression factor for each object, and estimated performance benefits.
» An estimate of the object In-Memory size.
Refer to the Oracle Database In-Memory Advisor Best Practices white paper for further sizing information and best practices.

An example of the report is shown in Figure 3.

Figure 3: An example of the Database In-Memory Advisor report

Using Priority
Populating the IM column store is both CPU and I/O intensive, which can adversely affect the performance of other concurrent
workloads and operations. If you do not have enough capacity in the IM column store for all of the objects (as may happen over time
when objects grow), using priority enables you to specify which objects should be populated first, or take preference. These will usually
correspond to functions or programs that are used the most, or return the greatest benefit in terms of releasing system resources for
other operations.

There are five priority levels. In order, they are: CRITICAL, HIGH, MEDIUM, LOW, and NONE. The default is NONE, in which case an object
is populated only after it is first scanned (that is, via a full table scan). When the database is opened, objects assigned a priority are automatically
populated into the IM column store, starting with those in the highest priority band and proceeding until either all the objects have been added
or no more space is left in the IM column store.
» Note: The intended population sequence will be overridden if an object (that may have a lower priority) is scanned before population
completes, as this will trigger its population into the IM column store.
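For example, to assign a priority and then review the current settings (the dictionary column names are as documented for Oracle Database 12c):

SQL> ALTER TABLE ap.ap_invoices_all INMEMORY PRIORITY HIGH;

SQL> SELECT table_name, inmemory_priority, inmemory_compression
  2  FROM   dba_tables
  3  WHERE  inmemory = 'ENABLED';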



Sizing the Database In-Memory Column Store
Sizing the IM column store as a whole is a critical activity, as changing its size requires a database restart. In contrast, enabling or
disabling objects for In-Memory and populating objects into the IM column store can be done while the system is fully operational. The
database optimizer will decide on the appropriate action when dealing with objects that are in the process of being populated. Sizing
can be particularly challenging if you do not have a full-size production clone, and this is especially true for Oracle E-Business Suite
customers with multi-terabyte databases. The compression that you achieve depends on your particular data. Use the Oracle
Compression Advisor to determine the Database In-Memory requirements. Alternatively, while there are no guarantees (due to data
skew), it is not unreasonable to extrapolate from the compression ratio achieved on a smaller set of representative data.
» Note: If you find that objects are not fully populated into the IM column store, try allocating an additional 20% to the projected
cumulative size. Refer to Appendix C: Diagnostic Queries for the associated queries.
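The following queries (column names are as documented for the Oracle Database 12c dynamic performance views) show overall IM column store usage and any segments that are only partially populated:

SQL> SELECT pool, ROUND(alloc_bytes/1024/1024) alloc_mb,
  2         ROUND(used_bytes/1024/1024) used_mb, populate_status
  3  FROM   v$inmemory_area;

SQL> SELECT owner, segment_name, populate_status, bytes_not_populated
  2  FROM   v$im_segments
  3  WHERE  bytes_not_populated > 0;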

Enabling Database In-Memory


Enabling or disabling Database In-Memory for objects does not require any downtime. At a high level, you set the size of the IM column
store, and then configure the tables or partitions to be In-Memory as shown in the following example commands.

» Configure and initialize Database In-Memory using the following commands:


SQL> ALTER SYSTEM SET SGA_TARGET=10G SCOPE=SPFILE;
SQL> ALTER SYSTEM SET INMEMORY_SIZE=4G SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

» Populate objects into the Database IM column store using the following command:
SQL> ALTER <TABLE/MATERIALIZED VIEW> <TABLE NAME/MATERIALIZED VIEW NAME> INMEMORY <PRIORITY>;

For example:
SQL> ALTER TABLE AP.AP_INVOICES_ALL INMEMORY PRIORITY CRITICAL;

Note that the IM column store is not a cache, so when it is full, space is not freed by removing the least recently used objects. You need
to free space manually with commands such as:
SQL> ALTER <TABLE/MATERIALIZED VIEW> <TABLE NAME/ MATERIALIZED VIEW NAME> NO INMEMORY;

For example:
SQL> ALTER TABLE AP.AP_INVOICES_ALL NO INMEMORY;
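
To review which tables are currently enabled for In-Memory, and with what priority and compression, you can query the In-Memory
columns of DBA_TABLES (a minimal sketch; the output depends on your own settings):

SQL> SELECT OWNER, TABLE_NAME, INMEMORY_PRIORITY, INMEMORY_COMPRESSION
     FROM DBA_TABLES
     WHERE INMEMORY = 'ENABLED'
     ORDER BY OWNER, TABLE_NAME;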

Strategy for Specifying Database In-Memory Parameters


Prior to setting any parameters, start by analyzing the applications that you use to understand which tables may benefit from In-Memory
processing. Your analysis will include identifying the SQL and applications module; reviewing the execution plans; identifying which of
the underlying objects to populate into the IM column store; and determining the best compression type to use. Next, populate these
using the correct priority and appropriate compression, analyze the performance increase, and tune the application iteratively.

There are three parameters in particular that need to be considered, which are discussed in the next section.

Setting Database In-Memory Parameters

The Database In-Memory area is a static pool within the SGA, and is defined by the INMEMORY_SIZE database initialization parameter
(minimum value: 100MB). As it is a static pool, automatic memory management cannot extend or shrink this area. As noted earlier,
changing the size of the In-Memory area requires a database restart. Therefore, it is important to establish how much memory to allocate
on a system where downtime must be kept to a minimum. When defining the IM column store, ensure that SGA_TARGET is large
enough for other database structures, but not sized so large that it causes system performance issues such as paging.

Note: The ultimate goal is to choose a size for the IM column store that is large enough to contain all the objects that were identified
during your analysis, and if using a more granular approach, that it is sufficient for the subset of objects used during any of the Oracle E-
Business Suite application phases. For further information regarding the SGA_TARGET and PGA_TARGET, refer to Oracle Database
In-Memory Option (DBIM) Basics and Interaction with Data Warehousing Features (MOS Doc ID 1903683.1).



The following command shows example Database In-Memory parameters:
SQL> show parameter inmemory
NAME TYPE VALUE
------------------------------------------- ----------- ------------------------------
inmemory_clause_default string
inmemory_force string DEFAULT
inmemory_max_populate_servers integer 2
inmemory_query string ENABLE
inmemory_size big integer 20G
inmemory_trickle_repopulate_servers_percent integer 1
optimizer_inmemory_aware boolean TRUE

There are two particularly noteworthy parameters:


» INMEMORY_QUERY (default: ENABLE)
By default, the Oracle Optimizer automatically directs any queries it determines will benefit from the Database In-Memory column
format to objects populated in the IM column store. Rather than having to iteratively populate or remove objects, you can disable this
redirection by setting INMEMORY_QUERY to DISABLE either at the session or system level. This can be useful when testing the
performance of Oracle E-Business Suite code, as it provides a quick way to toggle this functionality with the same data and set of
system conditions that might otherwise be difficult to replicate exactly. This enables very quick comparisons to be made for tests that
you may not have previously baselined.
» Note: You can set up Oracle E-Business Suite session parameters using the 'Initialization SQL Statement - Custom' profile option
for a particular User or Responsibility as follows:
BEGIN
FND_CTL.FND_SESS_CTL('','','TRUE','TRUE','','ALTER SESSION SET INMEMORY_QUERY=DISABLE');
END;

For more details about setting this profile option, refer to MOS Doc ID 1121043.1.
» INMEMORY_MAX_POPULATE_SERVERS
This parameter limits the maximum number of background worker processes used to populate the IM column store, so that they do not
overload the system. The default is the lower of half the effective CPU thread count and the PGA_AGGREGATE_TARGET value
divided by 512MB. Plan to reserve a portion of the total CPU capacity for In-Memory background population.
» Note: If you set the value of this parameter too high (close to the number of cores on your system), you may have insufficient CPU
for the rest of the system to run. If you set the value too low (and so do not have enough worker processes), the (re)population
speed of the IM column store may be slower. In order to size these background processes properly, you need to perform pre-
production tests that simulate the actual production environment in terms of job types and distribution. Ensure that you include
reports and concurrent processes that use both Database In-Memory and parallel query/DML, and specifically focus on resource-
intensive combinations such as overnight batch jobs and month-end processes.
Refer to the Oracle Database In-Memory white paper for further information about Database In-Memory specific database
parameters.

The Oracle Database In-Memory area is sub-divided into two pools:


» A 1MB pool that stores the actual column formatted data populated into the IM column store
» A 64K pool that stores metadata about the objects that are populated into the IM column store
The following query shows the amount of memory allocated to each of these pools:
SQL> SELECT POOL,ALLOC_BYTES/1024/1024 ALLOC_BYTES_MB, USED_BYTES/1024/1024 USED_BYTES_MB, POPULATE_STATUS
FROM V$INMEMORY_AREA;

POOL ALLOC_BYTES_MB USED_BYTES_MB POPULATE_STATUS


-------------------------- -------------- ------------- --------------------------
1MB POOL 16382 1293 DONE
64KB POOL 4080 40.875 DONE

Populate the Database In-Memory Column Store


As previously mentioned, the default is for all the columns in an object with the INMEMORY attribute to be populated into the IM column
store, either automatically (if they have the PRIORITY attribute set) or when they are first scanned. Oracle E-Business Suite typically has
a large number of very large multi-purpose tables, so you may be able to save a significant amount of space by only populating the
columns that are used by a particular query, or sub-set of queries.
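
As a sketch of this column-level control (the descriptive flexfield columns excluded here are purely illustrative; choose columns that
your queries genuinely never reference):

-- Enable the table for In-Memory, but exclude unused flexfield columns
SQL> ALTER TABLE ONT.OE_ORDER_LINES_ALL
     INMEMORY MEMCOMPRESS FOR QUERY LOW
     NO INMEMORY (ATTRIBUTE1, ATTRIBUTE2, ATTRIBUTE3);

-- Verify the per-column settings
SQL> SELECT TABLE_NAME, COLUMN_NAME, INMEMORY_COMPRESSION
     FROM V$IM_COLUMN_LEVEL
     WHERE TABLE_NAME = 'OE_ORDER_LINES_ALL';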

The following example shows the before and after Database In-Memory enablement state for the Order Entry Header and Line tables,
using critical priority and the default compression.
-- Before population into the IM column store

SQL> SELECT SEGMENT_NAME, BYTES/1024/1024 SIZE_IN_MB, INMEMORY, INMEMORY_COMPRESSION
     FROM DBA_SEGMENTS
     WHERE OWNER = 'ONT'
     AND SEGMENT_NAME IN ('OE_ORDER_HEADERS_ALL', 'OE_ORDER_LINES_ALL');

SEGMENT_NAME SIZE_IN_MB INMEMORY INMEMORY_COMPRESS


-------------------- --------------- -------- -----------------
OE_ORDER_HEADERS_ALL 14.5 DISABLED
OE_ORDER_LINES_ALL 7017.875 DISABLED

-- After population into the IM column store

SQL> ALTER TABLE ONT.OE_ORDER_HEADERS_ALL INMEMORY PRIORITY CRITICAL MEMCOMPRESS FOR QUERY LOW;
SQL> ALTER TABLE ONT.OE_ORDER_LINES_ALL INMEMORY PRIORITY CRITICAL MEMCOMPRESS FOR QUERY LOW;

SEGMENT_NAME SIZE_IN_MB INMEMORY INMEMORY_COMPRESS


-------------------- --------------- -------- -----------------
OE_ORDER_HEADERS_ALL 14.5 ENABLED FOR QUERY LOW
OE_ORDER_LINES_ALL 7017.875 ENABLED FOR QUERY LOW

Check the Size and Population Status


Use the following query to check the population status and IM column store characteristics:
SQL> SELECT OWNER, SEGMENT_NAME, BYTES/1024/1024 SIZE_IN_MB,
INMEMORY_SIZE/1024/1024 INMEMORY_SIZE_MB,
BYTES/INMEMORY_SIZE COMP_RATIO, BYTES_NOT_POPULATED, POPULATE_STATUS
FROM V$IM_SEGMENTS;

OWNER SEGMENT_NAME SIZE_IN_MB INMEMORY_SIZE_MB COMP_RATIO BYTES_NOT_POPULATED POPULATE_STATUS


------ -------------------- ------------ ---------------- ---------- ------------------- ---------------
ONT OE_ORDER_HEADERS_ALL 14.5 13.1875 1.1 0 COMPLETED
ONT OE_ORDER_LINES_ALL 7017.875 1303.1875 5.4 0 COMPLETED

Check Database In-Memory Object Usage


The easiest way to check whether IM column store objects are being used as expected is to review an execution plan and look for
entries with the INMEMORY keyword, as in the following example.
-- Run the Query

SQL> VARIABLE B1 VARCHAR2(12);


SQL> EXEC :B1 := 'STANDARD';
SQL> SELECT COUNT(*) FROM
(
SELECT HEADER_ID
FROM APPS.OE_ORDER_HEADERS OE_ORDER_HEADERS_V
WHERE EXISTS
( SELECT 'X'
FROM APPS.OE_ORDER_LINES OE_ORDER_LINES_V
WHERE ( OE_ORDER_LINES_V.HEADER_ID=OE_ORDER_HEADERS_V.HEADER_ID
AND SHIPMENT_PRIORITY_CODE=:B1))
);

-- DATABASE IN-MEMORY Plan


------------------------------------------------------------------------------------------------------
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 12506 (100)| |
| 1 | SORT AGGREGATE | | 1 | 12 | | |
|* 2 | HASH JOIN RIGHT SEMI | | 6982 | 83784 | 12506 (28)| 00:00:01 |
| 3 | JOIN FILTER CREATE | :BF0000 | 7252 | 50764 | 12449 (28)| 00:00:01 |
|* 4 | TABLE ACCESS INMEMORY FULL| OE_ORDER_LINES_ALL | 7252 | 50764 | 12449 (28)| 00:00:01 |
| 5 | JOIN FILTER USE | :BF0000 | 94009 | 459K| 55 (20)| 00:00:01 |
|* 6 | TABLE ACCESS INMEMORY FULL| OE_ORDER_HEADERS_ALL | 94009 | 459K| 55 (20)| 00:00:01 |
------------------------------------------------------------------------------------------------------



-- NO IN-MEMORY Plan (for comparison):
-------------------------------------------------------------------------------------------------------------
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 189K(100)| |
| 1 | SORT AGGREGATE | | 1 | 12 | | |
| 2 | NESTED LOOPS SEMI | | 6982 | 83784 | 189K (1)| 00:00:08 |
| 3 | INDEX FULL SCAN | OE_ORDER_HEADERS_U1 | 94009 | 459K| 232 (2)| 00:00:01 |
|* 4 | TABLE ACCESS BY INDEX ROWID BATCHED| OE_ORDER_LINES_ALL | 539 | 3773 | 3 (0)| 00:00:01 |
|* 5 | INDEX RANGE SCAN | OE_ORDER_LINES_N1 | 3 | | 2 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------

For more details on the optimizations used with Database In-Memory, including In-Memory Scans, In-Memory Joins with bloom filters,
and In-Memory Aggregation with the Vector Group By transformation, refer to the Oracle Database In-Memory white paper.
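
Another way to confirm In-Memory usage (an illustrative check, not a substitute for reviewing the plan) is to inspect the session
statistics immediately after running the query; non-zero 'IM scan' statistics indicate that the IM column store serviced the scan:

SQL> SELECT N.DISPLAY_NAME, M.VALUE
     FROM V$MYSTAT M, V$STATNAME N
     WHERE M.STATISTIC# = N.STATISTIC#
     AND N.DISPLAY_NAME LIKE 'IM scan%'
     AND M.VALUE > 0;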

Oracle RAC Support for Database In-Memory - DISTRIBUTE/DUPLICATE


In an Oracle RAC environment, each node has its own IM column store. By default, objects will be distributed across all of the IM column
stores to provide the ability to scale out Database In-Memory across a RAC cluster. Importantly, this results in a “share-nothing”
Database In-Memory architecture in an Oracle RAC environment. On Oracle Engineered Systems, objects can be duplicated across the
In-Memory column stores on two or more of the nodes.

Oracle decides how objects are distributed across the IM column stores in the cluster, but this can be overridden using one of the following
two options. DISTRIBUTE (available on all systems) specifies how data is distributed across the Oracle RAC nodes. DUPLICATE (only
available on Oracle Engineered Systems) specifies if and how data is duplicated across the Oracle RAC nodes; this option can be used
to provide fault tolerance by mirroring the IM column stores.

DISTRIBUTE
With the DISTRIBUTE clause, each Oracle RAC node in the cluster only stores a portion of the data. It is important to note that In-
Memory Compression Units (IMCUs) cannot be shipped across the RAC interconnect, even though more than one IM column
store may need to be accessed to satisfy a Database In-Memory query. This means that a process can only directly access the subset
of data in the IM column store on the node where it is running. Serial query execution will therefore incur logical and physical I/O to read
any data that is not available in the local IM column store.

When data is populated into Database In-Memory in an Oracle RAC environment, it is affinitized to a specific Oracle RAC instance. This
means that parallel server processes need to be employed to execute queries that access the Database In-Memory objects with a
degree equal to at least the number of nodes involved in the distributed population.

DISTRIBUTE has the following options:

» AUTO (Default). Oracle decides the best way to distribute the object across the cluster.
» BY ROWID RANGE to distribute by rowid range.
» BY PARTITION to distribute partitions to different nodes.
» BY SUBPARTITION to distribute sub-partitions to different nodes.

DUPLICATE/DUPLICATE ALL
This clause specifies if and how data is duplicated across Oracle RAC instances; this option can provide fault tolerance by mirroring the
IM column store. It has the following options:

» NO DUPLICATE: (default) Data is not duplicated.
» DUPLICATE: Data is duplicated to a second IM column store on one other Oracle RAC instance. When this option is specified,
the database uses the DISTRIBUTE AUTO setting and ignores the DISTRIBUTE clause.
» DUPLICATE ALL: Data is duplicated across all Oracle RAC instances (only available on Engineered Systems).
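
These clauses can be combined with the other INMEMORY sub-clauses. For example (illustrative commands against a table used
elsewhere in this paper):

-- Distribute the table by rowid range across all IM column stores in the cluster
SQL> ALTER TABLE ONT.OE_ORDER_LINES_ALL INMEMORY PRIORITY HIGH DISTRIBUTE BY ROWID RANGE;

-- Duplicate the table in every IM column store (Engineered Systems only)
SQL> ALTER TABLE ONT.OE_ORDER_LINES_ALL INMEMORY PRIORITY HIGH DUPLICATE ALL;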



Degree of Parallelism
By default, Database In-Memory queries do not automatically execute in parallel, just as with queries running against the
remainder of the database. There are two main aspects of parallelism that need to be considered with Database In-Memory:

» PARALLEL_FORCE_LOCAL controls parallel execution across Oracle RAC nodes:


• FALSE: (default) Allows multi-node execution.
• TRUE: only allows single node execution (recommended for Oracle E-Business Suite).
» Parallel query execution can be enabled by setting the automatic degree of parallelism at the instance level (not recommended
for Oracle E-Business Suite), at the object level, or by using the /*+ PARALLEL(AUTO) */ SQL hint in the query.
PARALLEL_FORCE_LOCAL should be set to TRUE for Oracle E-Business Suite. When set to FALSE, parallel query slave processes
will be spawned across several or all Oracle RAC nodes in the cluster. When set to TRUE, the parallel server processes are restricted
and can only operate on the Oracle RAC node where the query coordinator resides (the node on which the SQL was executed). This
means that the query will be restricted to perform an In-Memory scan of the local IM column store only, with the remainder of the query
being serviced via the buffer cache or disk storage, which will impact the performance of these queries.

The PARALLEL_DEGREE_POLICY parameter can be set to the following:


» AUTO: This enables automatic degree of parallelism, statement queuing, and Database In-Memory parallel execution.
» MANUAL: (default) This disables the automatic degree of parallelism, statement queuing, and Database In-Memory parallel
execution (recommended for Oracle E-Business Suite).
» LIMITED: This enables the automatic degree of parallelism for some statements, but statement queuing and Database In-
Memory parallel execution are disabled.
When using automatic degree of parallelism, Oracle automatically decides whether or not a statement should execute in parallel and
what degree of parallelism the statement should use. The parallel query coordinator ensures that at least one parallel server process is
allocated to each node with an IM column store, and the optimizer limits the degree of parallelism used (capped by
PARALLEL_DEGREE_LIMIT) to ensure parallel server processes do not flood the system.

The following parameters need to be configured:


» PARALLEL_MIN_TIME_THRESHOLD: This specifies the minimum execution time a statement should have before it is
considered for automatic degree of parallelism. The default is 10 seconds. If you have spare system resources, you can reduce
this time so that more plans qualify for parallel evaluation. Note that the automatic degree of parallelism is only enabled
if PARALLEL_DEGREE_POLICY is set to AUTO.
» PARALLEL_DEGREE_LIMIT: This specifies the maximum DOP that can be used per statement. By default this is set to CPU (a
limit derived from the number of CPUs on the system), but it can also be set to the maximum I/O bandwidth or a specific integer.
When automatic degree of parallelism (Automatic DOP) is set at the instance level, the optimizer is likely to choose parallelism for all
queries (not only Database In-Memory queries) whenever full table scans are performed. This may create a significant overhead on the
system, especially if multiple statements end up running in parallel; an excessive number of processes will degrade overall system
performance.
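
As a small illustration of enabling Automatic DOP for a single statement rather than instance-wide (a sketch; the statement itself is
arbitrary):

-- Instance-wide Automatic DOP remains disabled (PARALLEL_DEGREE_POLICY=MANUAL);
-- the hint requests an automatically computed DOP for this statement only
SQL> SELECT /*+ PARALLEL(AUTO) */ COUNT(*)
     FROM ONT.OE_ORDER_LINES_ALL;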
The following is an example of the definition of a SALES table range partitioned by year, using MEMCOMPRESS FOR QUERY HIGH
compression; PRIORITY HIGH, meaning that the data will automatically be populated into the IM column store without needing to be
scanned; and the DISTRIBUTE AUTO option, where the data will be distributed across all nodes in the cluster that have an IM column store.
SQL> CREATE TABLE SALES (TRXN_NO INT PRIMARY KEY, SALES_YEAR DATE)
SEGMENT CREATION IMMEDIATE PARTITION BY RANGE (SALES_YEAR)
(PARTITION P_OLD_SALES VALUES LESS THAN (TO_DATE ('01072014','DDMMYYYY')),
PARTITION P_CURRENT_SALES VALUES LESS THAN (TO_DATE ('01082014','DDMMYYYY'))
INMEMORY MEMCOMPRESS FOR QUERY HIGH PRIORITY HIGH DISTRIBUTE AUTO NO DUPLICATE);

Oracle E-Business Suite Application Affinity


Application or node affinity with Oracle E-Business Suite is typically implemented by specializing jobs (via Inclusion/Exclusion rules) to
run on a concurrent manager that is assigned to run on a specific node. The simplest approach is to group batch and concurrent jobs
that access the same schema objects. However, affinity can be more complex. Consider an example where the Order Import concurrent
process runs on Standard Manager1, which is set to run only on Oracle RAC Node1. The associated Workflow Background Engine,
which processes order related item types (OEOH and OEOL), should also be configured to run on the same node, as this will avoid
having to ship OM blocks between the RAC nodes. Another example is grouping Pick Release (Inventory) and Order Import (Order
Management). The Pick Release program finds and releases eligible delivery lines that meet specific release criteria, which are then
processed by the Order Import program.

This approach may be further complicated if you have also affinitized Oracle Forms and self service. Refer to the following documents
for further information about how to implement affinity:
» Configuring and Managing Oracle E-Business Suite Release 12.1.x Application Tiers for Oracle RAC (Doc ID 1311528.1)
» Configuring and Managing Oracle E-Business Suite Release 12.2.x Forms and Concurrent Processing for Oracle RAC (Doc ID
2029173.1)
The DUPLICATE ALL In-Memory option (only available on Engineered Systems) replicates the IM column store across each of the
Oracle RAC nodes, which means that In-Memory is easily integrated into your existing system, regardless of whether you affinitize your
Oracle E-Business Suite workload or not. On a non-Engineered System this option is not available, so the objects cannot be duplicated
in the IM column stores on each of the nodes. When you have more than one IM column store, by default the objects are distributed
across the IM column stores. This results in an In-Memory scan of the subset of data populated into the IM column store on the local
node (where the query coordinator is running), with the remainder of the data accessed via the buffer cache (and disk), regardless of
whether it exists in the IM column stores on other nodes.

One approach on a Non-Engineered System to address the limitation of not being able to use DUPLICATE ALL is to override the default
DISTRIBUTE mechanism; populate a complete object into a single IM column store on one node; and affinitize the associated workload
to that node. This means that you can take full advantage of In-Memory, and also benefit from the reduced interconnect traffic between
the Oracle RAC nodes.

There are some issues with just having objects in a single IM column store (on a single node) that need to be considered:

» High availability and Fault Tolerance: On Engineered Systems, switching or failing over to another node does not cause a
problem when using Oracle Database In-Memory as DUPLICATE ALL replicates the objects across all of the Oracle RAC nodes
with an IM column store. On Non-Engineered Systems, ensure that your DR procedures include what to do in the event of a
failure of the node hosting the IM column store(s).
» Job Serialization: Scheduling multiple jobs within a batch window on a single node may become a concern.
» Affinitize Workload: Redirect all of the applications workload that benefits from Database In-Memory to the node hosting the IM
column store that contains the table(s).
When using affinity with Oracle E-Business Suite, you will need to implement Oracle RAC services to populate objects into specific IM
column stores and then configure PARALLEL_INSTANCE_GROUP to restrict parallel query operations to (a set of) specific instances.
This approach becomes more complex when using affinity with multiple IM column stores; it works best when there are no overlaps on
the objects used by the concurrent processes and batch programs that are affinitized to each node.
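
A minimal sketch of that configuration follows; the database name (EBSDB), service name (EBS_IM), and instance name (EBSDB1)
are hypothetical, and Appendix D: Using Oracle E-Business Suite Application Affinity describes the full procedure.

-- From the operating system, create and start a service that runs on one instance only:
srvctl add service -db EBSDB -service EBS_IM -preferred EBSDB1
srvctl start service -db EBSDB -service EBS_IM

-- Restrict In-Memory population and parallel execution to the instance(s) running the service:
SQL> ALTER SYSTEM SET PARALLEL_INSTANCE_GROUP='EBS_IM' SCOPE=BOTH SID='EBSDB1';

-- Tables subsequently enabled for In-Memory are then populated into the IM column store on EBSDB1 only:
SQL> ALTER TABLE ONT.OE_ORDER_LINES_ALL INMEMORY PRIORITY CRITICAL;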

In-Memory Best Practices with Oracle E-Business Suite

Recommended Parallelization Parameters for Oracle E-Business Suite


On all Systems:
Set PARALLEL_DEGREE_POLICY = MANUAL
Set PARALLEL_FORCE_LOCAL = TRUE
Use Oracle E-Business Suite Application Affinity
On Engineered Systems:
Use DUPLICATE ALL to replicate the objects across all of the Database In-Memory column stores.
Alternatively, use Oracle E-Business Suite Application Affinity.
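
These recommendations translate into the following example commands (shown for illustration; adjust the scope and SID clauses to
your environment):

SQL> ALTER SYSTEM SET PARALLEL_DEGREE_POLICY = MANUAL SCOPE=BOTH SID='*';
SQL> ALTER SYSTEM SET PARALLEL_FORCE_LOCAL = TRUE SCOPE=BOTH SID='*';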

The ultimate goal is to choose a size for the IM column store that is large enough to contain all the objects that were identified during
your analysis. When defining the IM column store, ensure that SGA_TARGET is large enough for other database structures, but not
sized so large that it causes system performance issues such as paging. If you don't have enough PGA when executing joins and
aggregations, the work will spill to the TEMP tablespace, which is on disk; any benefit achieved by accessing the data in memory will be
negated by the disk I/O.



The following best practices for parallelization were derived from extensive testing:

» Setting PARALLEL_DEGREE_POLICY=AUTO allows Oracle to decide how many parallel server processes should be created to
service a query. When Automatic DOP is set at the system/session level, the parallel query coordinator is aware of the home locations
of the IMCUs and ensures that at least one parallel server is allocated to each node with an IM column store. This ensures that a query
will be serviced by Database In-Memory. However, setting Automatic DOP at the system level is not generally recommended for
Oracle E-Business Suite, as the optimizer is likely to choose parallelism for all queries (not only Database In-Memory queries)
whenever full table scans are performed. An excessive number of parallel processes may degrade overall system performance, even
though this may be mitigated to some extent by statement queuing (waiting for parallel servers to become available) and by increasing
PARALLEL_MIN_TIME_THRESHOLD. Note also that when statement queuing is used, the execution time for a statement can vary
significantly between runs depending on the concurrent workload and the availability of the parallel servers.
» Use DUPLICATE ALL on Oracle Engineered Systems so that the objects are replicated across all of the Database In-Memory column
stores. If you use application affinity and do not set DUPLICATE ALL, you will need to populate individual objects into specific IM
column stores. This has the advantage of increasing the number of objects that can be populated into Database In-Memory across
your system. Refer to Appendix D: Using Oracle E-Business Suite Application Affinity for an example of configuring and using
services.
» Using DISTRIBUTE/DUPLICATE without parallelism will result in an In-Memory scan of the portion of the objects populated into the
IM column store on the local node; it will perform physical reads from disk for the remaining data, regardless of whether it exists in the
IM column stores on other nodes as IMCUs cannot be shipped across the RAC interconnect.
» If Using Oracle E-Business Suite Application Affinity:
» This approach enables you to take full advantage of Database In-Memory without requiring a lot of monitoring and manual
intervention to control the system load when setting Automatic DOP at the system level.
» Set PARALLEL_FORCE_LOCAL=TRUE to restrict the parallel server processes to only run on a single node. Avoid Automatic
DOP by setting PARALLEL_DEGREE_POLICY= MANUAL at the system level.
» Implement Automatic DOP by embedding a /*+ PARALLEL(AUTO) */ SQL hint in the query - this will compute the degree of
parallelism for the objects on the node where the query is being serviced (Note that, as mentioned, this hint does not honor the
IMCU home location and therefore will lose the benefits of using Database In-Memory parallelism if the object(s) are not in the
local IM column store).
» Use Oracle RAC services and PARALLEL_INSTANCE_GROUP (as discussed in Appendix D: Using Oracle E-Business Suite
Application Affinity) to control which node is used when objects are populated into an IM column store.
» If not Using Oracle E-Business Suite Application Affinity:
» Set PARALLEL_FORCE_LOCAL=FALSE so that the parallel server processes will run on all of the nodes.
» The situation is more complex when using DISTRIBUTE or DUPLICATE. When Automatic DOP isn't set at the system/session
level, the parallel query coordinator isn't aware of the IMCU home locations (even if Automatic DOP is set at the query level via a
hint). While this isn't an issue when using DUPLICATE ALL, it is likely to be an issue with DISTRIBUTE or DUPLICATE, as it will
result in physical reads for objects that are not fully populated into the IM column store on the node where the query is being
serviced. For this reason, if you decide to use DISTRIBUTE or DUPLICATE you will need to set Automatic DOP at the system level
(PARALLEL_DEGREE_POLICY=AUTO), even though this is not recommended with Oracle E-Business Suite, and you will need to
be prepared to proactively monitor and manage the system load.

DISTRIBUTE/DUPLICATE Examples
The tests in this section show the important relationship between DISTRIBUTE/DUPLICATE and parallelism on a 4-node Oracle RAC
Engineered System. They are based on a 13.3GB non-partitioned OE_ORDER_LINES_ALL table, which has parallel processing
disabled at the table level. Table 3 shows a summary of the results.

Test     Detail                                              Elapsed (secs)
------   -------------------------------------------------   --------------
Test 1   Baseline - No Database In-Memory (no parallelism)   18.90
Test 2   Single Node (no parallelism)                        14.45
Test 3   DISTRIBUTE (no parallelism)                         136.15
Test 4   DISTRIBUTE with parallelism                         10.28
Test 5   DUPLICATE ALL (no parallelism)                      14.20

Table 3: Results Summary



Notice the similar timings between test 2 and test 5, as the IM column stores would have been used in the same manner across both
tests. Also note that setting up DISTRIBUTE without setting parallelism can severely impact not only the query (as shown in Test 3), but
also other concurrent system activities.

The basic query used in this example is as follows:


SQL> SELECT COUNT(*) FROM (SELECT HEADER_ID, COUNT(LINE_ID) FROM OE_ORDER_LINES_ALL GROUP BY HEADER_ID);

Test 1: Baseline: No Database In-Memory


The IM column store is not used in this test run. The runtime execution plan is as follows:
SQL> SELECT COUNT(*) FROM (SELECT HEADER_ID, COUNT(LINE_ID) FROM OE_ORDER_LINES_ALL GROUP BY HEADER_ID)

call count cpu elapsed disk query current rows


------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 18.26 18.89 1738383 1738398 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 18.26 18.90 1738383 1738398 0 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation


---------- ---------- ---------- ---------------------------------------------------
1 1 1 SORT AGGREGATE (cr=1738398 pr=1738383 pw=0 time=18898543 us)
120954 120954 120954 VIEW (cr=1738398 pr=1738383 pw=0 time=19026092 us cost=195674 size=0 card= ….
120954 120954 120954 HASH GROUP BY (cr=1738398 pr=1738383 pw=0 time=18933294 us cost=195674 size= ….
223091287 223091287 223091287 TABLE ACCESS STORAGE FULL OE_ORDER_LINES_ALL (cr=1738398 pr=1738383 pw=0 ….

Elapsed times include waiting on following events:


Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
cell single block physical read 2 0.00 0.00
cell smart table scan 926 0.00 0.61

The disk metric shows a full table scan, with overall execution taking 18.90 seconds.

Test 2: Single Node (no parallelism)


In this test, OE_ORDER_LINES_ALL was populated into the IM column store on a single node only; refer to Appendix D: Using Oracle
E-Business Suite Application Affinity for an example of configuring and using services. The table was populated using the command:
SQL> ALTER TABLE OE_ORDER_LINES_ALL INMEMORY PRIORITY CRITICAL;

Querying GV$IM_SEGMENTS shows that table OE_ORDER_LINES_ALL is populated only on Node 1:


SQL> SELECT INST_ID,SEGMENT_NAME, POPULATE_STATUS FROM GV$IM_SEGMENTS;

INST_ID SEGMENT_NAME POPULATE_STATUS


------- ------------------- ---------------
1 OE_ORDER_LINES_ALL COMPLETED

The execution plan and statistics are as follows:


SQL> SELECT COUNT(*) FROM (SELECT HEADER_ID, COUNT(LINE_ID) FROM OE_ORDER_LINES_ALL GROUP BY HEADER_ID);

call count cpu elapsed disk query current rows


------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 14.39 14.44 2 4 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 14.40 14.45 2 4 0 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation


---------- ---------- ---------- ---------------------------------------------------
1 1 1 SORT AGGREGATE (cr=4 pr=2 pw=0 time=14447630 us)
120954 120954 120954 VIEW (cr=4 pr=2 pw=0 time=14572317 us cost=75895 size=0 card=121016)
120954 120954 120954 HASH GROUP BY (cr=4 pr=2 pw=0 time=14483104 us cost=75895 size=726096 card=121016)
223091287 223091287 223091287 TABLE ACCESS INMEMORY FULL OE_ORDER_LINES_ALL (cr=4 pr=2 pw=0 time=1409953 us ….

Elapsed times include waiting on following events:


Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
cell single block physical read 2 0.00 0.00
********************************************************************************



The execution was entirely performed in-memory and took 14.45 seconds, which was a 1.3x improvement over the baseline.

Test 3: DISTRIBUTE no parallelism


In this example, DISTRIBUTE uses AUTO (the default), so Oracle decides the best way to distribute the object across the
cluster. PARALLEL_DEGREE_POLICY=MANUAL was also set, which disables Automatic DOP.
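The Automatic DOP policy is an instance-level setting; assuming ALTER SYSTEM privileges, the switches used across these tests can be
sketched as follows:

SQL> ALTER SYSTEM SET PARALLEL_DEGREE_POLICY = MANUAL;  -- disables Automatic DOP (this test)
SQL> ALTER SYSTEM SET PARALLEL_DEGREE_POLICY = AUTO;    -- enables Automatic DOP (Test 4)
SQL> ALTER SYSTEM SET PARALLEL_FORCE_LOCAL = FALSE;     -- allows PX servers to run on all nodes (Test 4)

Note that PARALLEL_DEGREE_POLICY can also be changed for an individual session with ALTER SESSION.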

The table was populated as follows:


SQL> ALTER TABLE OE_ORDER_LINES_ALL INMEMORY DISTRIBUTE PRIORITY CRITICAL;

The following query confirms that the table is evenly distributed across all four Oracle RAC nodes:
SQL> SELECT INST_ID,SEGMENT_NAME, POPULATE_STATUS, INMEMORY_DISTRIBUTE FROM GV$IM_SEGMENTS;

INST_ID SEGMENT_NAME          POPULATE_STATUS   INMEMORY_DISTRIBUTE
------- --------------------- ----------------- --------------------
1 OE_ORDER_LINES_ALL COMPLETED AUTO
2 OE_ORDER_LINES_ALL COMPLETED AUTO
3 OE_ORDER_LINES_ALL COMPLETED AUTO
4 OE_ORDER_LINES_ALL COMPLETED AUTO

» Note: The INMEMORY_DISTRIBUTE column shows that AUTO was used. As this is a non-partitioned table, AUTO will result in
DISTRIBUTE BY ROWID RANGE.
The results are as follows:
SQL> SELECT COUNT(*) FROM (SELECT HEADER_ID, COUNT(LINE_ID)
FROM OE_ORDER_LINES_ALL GROUP BY HEADER_ID);

call     count      cpu    elapsed       disk      query    current       rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 78.23 136.14 1493970 1493976 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 78.23 136.15 1493970 1493976 0 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
1 1 1 SORT AGGREGATE (cr=1493976 pr=1493970 pw=0 time=136145257 us)
120954 120954 120954 VIEW (cr=1493976 pr=1493970 pw=0 time=136273119 us cost=383895 size=0 card= ….
120954 120954 120954 HASH GROUP BY (cr=1493976 pr=1493970 pw=0 time=136180700 us cost=383895 size= ….
223091287 223091287 223091287 TABLE ACCESS INMEMORY FULL OE_ORDER_LINES_ALL (cr=1493976 pr=1493970 pw=0 time=….

Elapsed times include waiting on following events:


Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
cell single block physical read 209798 0.03 55.07
cell multiblock physical read 10060 0.12 15.31

Even though an INMEMORY FULL TABLE SCAN was performed, the serial execution resulted in disk reads for the data not present in
the IM column store on this node. Consequently, there was a very large increase in the execution time, to 136.15 seconds. This
demonstrates the importance of the next test.
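The proportion of the segment that is missing from a given instance's IM column store can be checked directly; a minimal sketch using
the BYTES_NOT_POPULATED column of GV$IM_SEGMENTS is:

SQL> SELECT INST_ID, SEGMENT_NAME,
            ROUND(100 * BYTES_NOT_POPULATED / BYTES, 1) PCT_NOT_POPULATED
       FROM GV$IM_SEGMENTS
      WHERE SEGMENT_NAME = 'OE_ORDER_LINES_ALL';

For a segment distributed by rowid range across four nodes, each instance would be expected to report roughly 75% of the segment as
not populated locally.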

Test 4: DISTRIBUTE with Parallelism


The table was populated using the DISTRIBUTE clause as in the previous test. The difference, in this case, is that Automatic DOP is set
at the system level (PARALLEL_DEGREE_POLICY=AUTO) and PARALLEL_FORCE_LOCAL=FALSE. The results are as follows:
SQL> SELECT COUNT(*) FROM (SELECT HEADER_ID, COUNT(LINE_ID)
FROM OE_ORDER_LINES_ALL GROUP BY HEADER_ID);

call     count      cpu    elapsed       disk      query    current       rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.01 1 2 0 0
Execute 1 0.02 9.01 0 7 0 0
Fetch 2 0.08 1.25 0 0 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.13 10.28 1 9 0 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
1 1 1 SORT AGGREGATE (cr=7 pr=0 pw=0 time=10271280 us)

16 16 16 PX COORDINATOR (cr=7 pr=0 pw=0 time=10264250 us)
0 0 0 PX SEND QC (RANDOM) :TQ10001 (cr=0 pr=0 pw=0 time=0 us)
0 0 0 SORT AGGREGATE (cr=0 pr=0 pw=0 time=0 us)
0 0 0 VIEW (cr=0 pr=0 pw=0 time=0 us cost=3649 size=0 card=118560)
0 0 0 HASH GROUP BY (cr=0 pr=0 pw=0 time=0 us cost=3649 size=711360 card=118560)
0 0 0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us cost=3649 size=711360 card=118560)
0 0 0 PX SEND HASH :TQ10000 (cr=0 pr=0 pw=0 time=0 us cost=3649 size=711360 card= ….
0 0 0 HASH GROUP BY (cr=0 pr=0 pw=0 time=0 us cost=3649 size=711360 card=118560)
0 0 0 PX BLOCK ITERATOR (cr=0 pr=0 pw=0 time=0 us cost=1128 size=1338547722 card= ….
0 0 0 TABLE ACCESS INMEMORY FULL OE_ORDER_LINES_ALL (cr=0 pr=0 pw=0 time=0 us ….

Elapsed times include waiting on following events:


Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
PX Deq: Parse Reply 32 5.32 5.76
PX Deq: Execute Reply 431 0.03 1.19
********************************************************************************

In order to see the cursor execution plan, run the following query:

SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('GQSQKYWHUQ930'));


-----------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | | 3649 (100)| | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | | | |
| 2 | PX COORDINATOR | | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10001 | 1 | | | | | Q1,01 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | | | | | Q1,01 | PCWP | |
| 5 | VIEW | | 118K| | | 3649 (4)| 00:00:01 | Q1,01 | PCWP | |
| 6 | HASH GROUP BY | | 118K| 694K| 2563M| 3649 (4)| 00:00:01 | Q1,01 | PCWP | |
| 7 | PX RECEIVE | | 118K| 694K| | 3649 (4)| 00:00:01 | Q1,01 | PCWP | |
| 8 | PX SEND HASH | :TQ10000 | 118K| 694K| | 3649 (4)| 00:00:01 | Q1,00 | P->P | HASH |
| 9 | HASH GROUP BY | | 118K| 694K| 2563M| 3649 (4)| 00:00:01 | Q1,00 | PCWP | |
| 10 | PX BLOCK ITERATOR | | 223M| 1276M| | 1128 (2)| 00:00:01 | Q1,00 | PCWC | |
|* 11 | TABLE ACCESS INMEMORY FULL|OE_ORDER_LINES_ALL | 223M| 1276M| | 1128 (2)| 00:00:01 | Q1,00 | PCWP | |
|----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):


---------------------------------------------------

11 - inmemory(:Z>=:Z AND :Z<=:Z)

Note
-----
- automatic DOP: Computed Degree of Parallelism is 4 because of degree limit
- parallel scans affinitized for inmemory

This test took only 10.28 seconds. The execution plan shows an INMEMORY FULL TABLE SCAN, and the parallel scans were
affinitized for Database In-Memory. Performance improved 1.8x over the baseline. Recall that this configuration is not recommended
for Oracle E-Business Suite, even though it delivers the fastest time.

Test 5: DUPLICATE ALL


The table was populated using the following command:
SQL> ALTER TABLE OE_ORDER_LINES_ALL INMEMORY DUPLICATE ALL PRIORITY CRITICAL;

The following query confirms that the table has been duplicated across all four Oracle RAC nodes:
SQL> SELECT INST_ID,SEGMENT_NAME,POPULATE_STATUS FROM GV$IM_SEGMENTS;

INST_ID SEGMENT_NAME            POPULATE_STATUS
------- ----------------------- -----------------------
1 OE_ORDER_LINES_ALL COMPLETED
2 OE_ORDER_LINES_ALL COMPLETED
3 OE_ORDER_LINES_ALL COMPLETED
4 OE_ORDER_LINES_ALL COMPLETED
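The duplication setting itself can also be confirmed from the same view; a sketch using the INMEMORY_DUPLICATE column of
GV$IM_SEGMENTS is:

SQL> SELECT INST_ID, SEGMENT_NAME, INMEMORY_DUPLICATE
       FROM GV$IM_SEGMENTS
      WHERE SEGMENT_NAME = 'OE_ORDER_LINES_ALL';

Each instance should report DUPLICATE ALL for this configuration.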



The execution plan and statistics are as follows:
SQL> SELECT COUNT(*)FROM (SELECT HEADER_ID, COUNT(LINE_ID)
FROM OE_ORDER_LINES_ALL GROUP BY HEADER_ID);

call     count      cpu    elapsed       disk      query    current       rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 14.12 14.19 2 4 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 14.13 14.20 2 4 0 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
1 1 1 SORT AGGREGATE (cr=4 pr=2 pw=0 time=14194194 us)
120954 120954 120954 VIEW (cr=4 pr=2 pw=0 time=14326012 us cost=285758 size=0 card=118560)
120954 120954 120954 HASH GROUP BY (cr=4 pr=2 pw=0 time=14228351 us cost=285758 size=711360 card=118560)
223091287 223091287 223091287 TABLE ACCESS INMEMORY FULL OE_ORDER_LINES_ALL (cr=4 pr=2 pw=0 time=1459845 us …

This test took about the same time as the single-node test with no parallelism (Test 2).

Database In-Memory System Load Considerations


There are two system load factors that need to be taken into account when deploying Database In-Memory: the time to populate the data
into the IM column store, and the increase in CPU usage:
» Time to Populate the Data: Population time depends on the total amount of data, the type of disk system, the network between
the storage and database servers, and the speed of the servers. It will therefore vary from system to system, depending on the
architecture, so you will need to benchmark your own environment. It does not matter whether you are populating only a few
columns or many: the full row has to be read from disk in either case, and this read is the predominant bottleneck.
» CPU Increase: The Discrete Manufacturing Costing Benchmark was used to perform some high-level Database In-Memory
testing. In this test, the most important schema objects were simply populated into the IM column store. You would expect a
change in the system profile as the load migrates from disk I/O to CPU. The results of this test are shown in Table 4.

                             Avg User CPU   Avg System CPU

Without Database In-Memory   17.7771        6.01351

With Database In-Memory      27.8292        6.6404

Table 4: Discrete Manufacturing Costing Benchmark CPU
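Population progress can be monitored while it is under way; a minimal sketch follows, in which POPULATE_STATUS remains STARTED and
BYTES_NOT_POPULATED decreases towards zero until population completes:

SQL> SELECT SEGMENT_NAME, POPULATE_STATUS, BYTES_NOT_POPULATED
       FROM V$IM_SEGMENTS
      ORDER BY SEGMENT_NAME;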

Oracle E-Business Suite Use Cases and Examples


The Database In-Memory Object Criteria section provides guidance about the objects that could be populated into the IM column store,
and the Database In-Memory Query Criteria section provides guidance about the types of query that could use it. This section picks up
from the Collating Candidate SQL section, and provides some real-world examples.

It should be noted that for each of the cases that showed an improvement, there were several programs that did not. Reasons for this
include the absence of clear high-load statements in a trace file due to application logic; row-by-row processing; the presence of
embedded hints that preclude Database In-Memory usage; and the use of data import or other non-In-Memory functions.

The examples in this section take a step away from the classic headline feature of analytical reports that you might have been
expecting. Instead, they show you how Database In-Memory can be applied to Oracle form queries and DML statements, and how to
provide maximum business flexibility in Oracle E-Business Suite without needing to create a huge number of indexes that allow any form
field to be queried. The Receiving Transaction Processor, which is one of the core processes that spans many operations, saw
performance improve by 100x.
» Note: The same principles that apply to Oracle Forms queries also apply to Oracle Self-Service screens and other modules.



A major issue addressed here is the use of SQL hints, which occur extensively across Oracle E-Business Suite. One of the
examples shows how to circumvent these hints using a SQL profile, so that you can realize the full benefits of the Database In-Memory
option; it then goes on to show how, in this example, additional tuning improved performance by a further 2.5x. These
enhancements not only optimize end-user and batch performance, but also improve overall system scalability.

Example 1: Order Organizer Form


You would normally expect to optimize Oracle E-Business Suite reports or programs, but this first example is particularly interesting
because it shows that the same approach can be applied to almost any performance issue: in this case, an Order Management form.

Background
The Order Organizer, shown in Figure 4, enables users to process orders and returns. Navigate to it in a Vision Instance by selecting
Order Management Super User Vision Operations USA | Orders and Returns | Order Organizer. This Form can show recent orders,
orders past their requested shipment date, orders that are on hold, order lines with priority shipments, or orders for a particular customer.
It can also be used to make changes to orders including, for example, applying holds or placing cancellations, to a range of orders.

Figure 4: The Order Organizer Form

As is evident, these forms are exceptionally flexible, with several tabs as well as a huge number of fields that can be queried either
individually or collectively. While this degree of flexibility enables easy mapping to business functions, it will typically result in
performance issues if a user queries only a non-indexed column, or a column with poor selectivity. If the Order Organizer is always used
and queried in a particular way, associated indexes could be added. However, as mentioned earlier, adding indexes for every queryable
field to allow maximum functionality is inadvisable due to the maintenance overhead: while it may speed up queries in this form, it
could have a significant negative impact on associated processes.

From a user perspective, having some fields indexed and others not manifests as sporadic response times, since users have no
knowledge of the underlying schema. A very common problem that we see is a user report that says "The system is slow
today". It may well be the case that they actually ran a different set of queries using different combinations of fields, some of which are
indexed and some of which are not. This is one of the main issues when an Oracle E-Business Suite user reports that the Order Organizer
Find Window occasionally "freezes".

In this case it is easy to see how Database In-Memory provides a very useful alternative to a prohibitive number of indexes. The result of
a query based on a non-indexed column is typically a full table scan on what is potentially a very large table. Most of the time will be
spent performing physical and/or logical reads, depending on whether the table is already in the buffer cache. Populating the associated
tables into the IM column store is unlikely to affect the response times for queries based on indexed columns, but should result in a
substantial performance improvement especially when a full table scan is performed.



Test Configuration
In this test, 2.4M order lines (2400 orders with 1000 lines per order) were populated into the OE_ORDER_LINES_ALL table, and then
populated into the IM column store using the syntax:
SQL> ALTER TABLE <SCHEMA.TABLE_NAME> INMEMORY;

Tests and Results


The following shows the query that uses “Standard Priority” in the Shipment Priority field, as shown in Figure 4:
-- Query
SQL> SELECT HEADER_ID
FROM OE_ORDER_HEADERS OE_ORDER_HEADERS_V
WHERE EXISTS
(SELECT 'X'
FROM OE_ORDER_LINES OE_ORDER_LINES_V
WHERE (OE_ORDER_LINES_V.HEADER_ID = OE_ORDER_HEADERS_V.HEADER_ID
AND SHIPMENT_PRIORITY_CODE = :SHIPMENT_PRIORITY_CODE))

The end-to-end results for the baseline and Database In-Memory tests are shown in Table 5.

                          End-to-End        Top SQL
                          Run Time (secs)   Elapsed (secs)   Comment

Baseline Test             195               161              The top SQL accounted for 80% of the elapsed time.

Database In-Memory Test   18                0.38             A significant improvement was observed.

Table 5: Baseline and Database In-Memory Test Results

The top SQL accounted for approximately 80% of the elapsed time in the baseline test, with 34 seconds being used in other operations.
This top SQL reduced to only 0.38 seconds, a performance increase of 424x. Some of the other operations were optimized from
34 seconds to 17.6 seconds. The end-to-end run time (for the user) was more than 10x faster.

The SQL execution plans for the baseline and Database In-Memory tests are shown below:
-- Baseline (not Database In-Memory)
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.05 0.08 0 0 0 0
Execute 1 0.01 0.02 0 0 0 0
Fetch 1387 0.86 161.66 248319 425664 0 1386
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1389 0.92 161.76 248319 425664 0 1386

Rows    Row Source Operation
------- ---------------------------------------------------
1386 HASH JOIN SEMI (cr=425664 pr=248319 pw=0 time=19782933 us cost=18877 size=5360 card=268)
1386 NESTED LOOPS SEMI (cr=425664 pr=248319 pw=0 time=19776507 us cost=18877 size=5360 card=268)
30634 STATISTICS COLLECTOR (cr=1281 pr=1228 pw=0 time=696889 us)
30634 TABLE ACCESS FULL OE_ORDER_HEADERS_ALL (cr=1281 pr=1228 pw=0 time=3823427 us cost=349 size=20763 ….
1386 TABLE ACCESS BY INDEX ROWID BATCHED OE_ORDER_LINES_ALL (cr=424383 pr=247091 pw=0 time=160837245 us ….
2284726 INDEX RANGE SCAN OE_ORDER_LINES_N1 (cr=65828 pr=8792 pw=0 time=9922539 us cost=2 size=0 card=35)….
0 TABLE ACCESS FULL OE_ORDER_LINES_ALL (cr=0 pr=0 pw=0 time=0 us cost=8 size=341 card=31)

-- Database In-Memory
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.05 0.11 0 0 0 0
Execute 1 0.04 0.04 0 0 0 0
Fetch 1387 0.04 0.22 32 40 0 1386
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1389 0.13 0.38 32 40 0 1386

Rows    Row Source Operation
------- ---------------------------------------------------
1386 HASH JOIN RIGHT SEMI (cr=40 pr=32 pw=0 time=217530 us cost=3599 size=5360 card=268)
2373 JOIN FILTER CREATE :BF0000 (cr=37 pr=31 pw=0 time=83896 us cost=3558 size=2948 card=268)
2373 TABLE ACCESS INMEMORY FULL OE_ORDER_LINES_ALL (cr=37 pr=31 pw=0 time=79098 us cost=3558 size=2948 ….
1566 JOIN FILTER USE :BF0000 (cr=3 pr=1 pw=0 time=34183 us cost=40 size=20763 card=2307)
1566 TABLE ACCESS INMEMORY FULL OE_ORDER_HEADERS_ALL (cr=3 pr=1 pw=0 time=25918 us cost=40 size=20763 ….

The performance of the baseline query could have been improved by creating an index on the SHIPMENT_PRIORITY_CODE column, which
would have avoided the data throwaway, but this table is already heavily indexed. Database In-Memory provides a simpler solution.



» Note: Notice the use of a hash join with a bloom filter being pushed down (i.e. JOIN FILTER CREATE/USE) into the in-memory scan
of the OE_ORDER_HEADERS_ALL table. This results in a very fast and efficient query execution. There are similar situations
throughout the Oracle E-Business Suite where Database In-Memory can provide the same kinds of optimizations.
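Whether a statement was actually satisfied from the IM column store can also be confirmed from the session statistics; a minimal
sketch, run in the same session immediately after the query, is:

SQL> SELECT n.NAME, s.VALUE
       FROM V$MYSTAT s, V$STATNAME n
      WHERE s.STATISTIC# = n.STATISTIC#
        AND n.NAME IN ('IM scan rows', 'IM scan CUs columns accessed');

Non-zero values for the IM scan statistics indicate that the in-memory scan was used.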

Example 2: Initialize Credit Summaries


This example shows how Database In-Memory is just one of the tools that can be used to improve performance. It is unusual in that the
SQL is based on an INSERT statement, as opposed to the standard query that you would typically expect to optimize. In addition, the
main statement has several embedded performance hints that preclude Database In-Memory operations; as mentioned previously,
these can be quite common across Oracle E-Business Suite. This example shows you how to circumvent these types of issues, and
then apply additional tuning to further improve performance. In other words, the optimal solution is not simply a case of populating
the main tables into the IM column store. This example also shows the problem of hash joins spilling to TEMP, which is on disk, when
insufficient PGA is allocated: when this happens, any benefit achieved by accessing the data in memory is negated by the disk I/O.
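Hash-join spills of this kind can be confirmed from the shared SQL area; a hedged sketch, where &sql_id is a placeholder for the
statement under investigation, is:

SQL> SELECT OPERATION_TYPE, POLICY,
            ONEPASS_EXECUTIONS, MULTIPASSES_EXECUTIONS,
            LAST_TEMPSEG_SIZE
       FROM V$SQL_WORKAREA
      WHERE SQL_ID = '&sql_id';

Non-zero one-pass or multi-pass executions, or a non-null LAST_TEMPSEG_SIZE, show that the work area spilled to the TEMP tablespace.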

There are three stages in this test:


1. Baseline with the original code.
2. Database In-Memory with the original code.
3. Database In-Memory with further optimization.

Background
Most people understand the concept of credit rating or credit worthiness. Related to this is the concept of credit exposure, defined
as the total amount of credit extended to a borrower by a lender; the size of the credit exposure indicates the risk of loss if the
borrower defaults. Depending on the transaction history, a company will have a set of rules and conditions that define how frequently
it needs to re-evaluate its credit exposure image.

Oracle Order Management enables a periodic rebuild of a credit exposure image (orders, invoices and payments) for all customers (or
customer sites). The Initialize Credit Summaries concurrent program calculates and updates a customer's credit exposure based on
factors such as their setup for each credit check rule. The exposure data used by the credit check process is stored in a summary
table, which is generally referenced to establish a customer's credit standing, as this is much faster than reviewing real-time
transactional data.

Test Configuration
In this test, 2.4M order lines (2400 orders with 1000 lines per order) were populated into the OE_ORDER_LINES_ALL table. The
following tables were populated into the IM column store:

OE_ORDER_HEADERS_ALL HZ_CUST_ACCOUNTS
OE_ORDER_LINES_ALL HZ_CUST_ACCT_SITES_ALL
OE_PRICE_ADJUSTMENTS HZ_CUST_SITE_USES_ALL
OE_PAYMENTS RA_INTERFACE_LINES_ALL
OE_PAYMENT_TYPES_ALL

The associated SQL was previously tuned for performance by using LEADING, USE_NL, NO_UNNEST, and CARDINALITY hints. As
mentioned, some of these hints preclude Database In-Memory operations. The SQL is fairly lengthy, but can be simplified as follows:
-- Simplified Insert
SQL> INSERT INTO OE_INIT_CREDIT_SUMM_ ……………………
SELECT /*+ leading ( L ) use_nl (H SU S CA CA_L SU_L S_L ) */
……………………
FROM OE_ORDER_LINES_ALL ……………………
AND EXISTS ( SELECT /*+ no_unnest */ NULL FROM OE_PAYMENT_TYPES_ALL ……………………
UNION ALL SELECT /*+ no_unnest */ NULL FROM OE_PAYMENT_TYPES_ALL ……………………
UNION SELECT /*+ cardinality (rl 10) leading (rl h l) */
……………………
AND EXISTS ( SELECT /*+ no_unnest */ NULL FROM OE_PAYMENT_TYPES_ALL ……………………
UNION ALL SELECT /*+ no_unnest */ NULL FROM OE_PAYMENT_TYPES_ALL ……………………

Approximately 48% of the time was spent on direct path write temp and direct path read temp as the hash join spilled to disk (TEMP
Tablespace).



Tests and Results
As this example is long and in-depth, the following tables summarize the procedure and the results from each step. The end-to-end
results for the baseline and Database In-Memory tests are shown in Table 6.

                                       End-to-End        Top SQL
                                       Run Time (mins)   Elapsed (mins)   Comment

Baseline Test                          51                43               The top SQL accounted for 84% of the elapsed time.

Database In-Memory Test with           14                12.6             A significant improvement was observed.
additional tuning (see Test 3)

Table 6: Overall End-to-End Run Times

As previously mentioned, an important point is that there is a single, clearly identifiable statement on which to focus. The results of
the Database In-Memory tests are shown in Table 7. Several tests were performed, but only three have been included in this paper.

Database In-Memory Test                                  Elapsed (mins)   Results and Observations

Test 1: Original High Load SQL.                          27               Most of the tables are accessed via indexes due to the
                                                                          USE_NL and NO_UNNEST SQL hints. Database In-Memory
                                                                          helped performance only with full table scans.

Test 2: Force full table scans, using a SQL profile to   35 **            Removing the USE_NL hint resulted in hash joins and full
remove the USE_NL hint that was causing index scans.                      table scans. HASH JOIN data spilled to disk (TEMP
                                                                          tablespace).
                                                                          ** Clearly, the run time would have been lower if this
                                                                          had not spilled to disk.

Test 3: Further optimization, using a SQL profile to     14               This test demonstrates additional optimization.
add USE_HASH and PARALLEL hints.

Table 7: Database In-Memory Test Results

The Database In-Memory tests are expanded below. Note that the SQL has been simplified throughout.

Test 1: Original High Load SQL


As mentioned, the USE_NL hint resulted in nested loop joins and index scans. These are highlighted in the simplified execution plan of
the SQL during the baseline run.
Rows Row Source Operation
------- ---------------------------------------------------
0 LOAD TABLE CONVENTIONAL OE_INIT_CREDIT_SUMM_GTT (cr=41107716 pr=3460544 pw=3019845 time=550960116 us)
………………………………………………………
2086962 UNION-ALL (cr=41074294 pr=3460544 pw=3019845 time=362459116 us)
………………………………………………………
2081253 NESTED LOOPS (cr=17756241 pr=246476 pw=0 time=348731897 us cost=7881764 size=109810166 card=673682)
2081253 NESTED LOOPS (cr=15674988 pr=246476 pw=0 time=329997804 us cost=7881764 size=109810166 card=673682)
2081253 NESTED LOOPS (cr=15674602 pr=246476 pw=0 time=312903650 us cost=7207284 size=103073346 card=673682)
2081253 NESTED LOOPS (cr=13575801 pr=246476 pw=0 time=281593199 us cost=6532803 size=96336526 card=673682)
2081253 NESTED LOOPS (cr=11494152 pr=246396 pw=0 time=250085926 us cost=5858323 size=89599706 card=673682)
2081253 NESTED LOOPS (cr=9395300 pr=246335 pw=0 time=218489936 us cost=5183842 size=82862886 card=673682)
2081253 NESTED LOOPS (cr=7295992 pr=246335 pw=0 time=186803178 us cost=4508747 size=76195448 card=674296)
2259054 NESTED LOOPS (cr=4785018 pr=246207 pw=0 time=282632495 us cost=2305310 size=151869432 card=2233374)
………………………………………………………
2259054 TABLE ACCESS BY INDEX ROWID HZ_CUST_SITE_USES_ALL (cr=2277489 pr=97 pw=0 time=25058918 us cost=1 size=10 card=1)
………………………………………………………
2081253 TABLE ACCESS BY INDEX ROWID OE_ORDER_HEADERS_ALL (cr=2510974 pr=128 pw=0 time=26435035 us cost=1 size=45 card=1)
………………………………………………………
2081253 TABLE ACCESS BY INDEX ROWID HZ_CUST_SITE_USES_ALL (cr=2099308 pr=0 pw=0 time=21393412 us cost=1 size=10 card=1)
………………………………………………………
2081253 TABLE ACCESS BY INDEX ROWID HZ_CUST_ACCT_SITES_ALL (cr=2098852 pr=61 pw=0 time=21338562 us cost=1 size=10 card=1)
………………………………………………………
2081253 TABLE ACCESS BY INDEX ROWID HZ_CUST_ACCOUNTS (cr=2081649 pr=80 pw=0 time=21232619 us cost=1 size=10 card=1)
………………………………………………………
2081253 TABLE ACCESS BY INDEX ROWID HZ_CUST_ACCT_SITES_ALL (cr=2098801 pr=0 pw=0 time=21081238 us cost=1 size=10 card=1)
………………………………………………………
2081253 TABLE ACCESS BY INDEX ROWID HZ_CUST_ACCOUNTS (cr=2081253 pr=0 pw=0 time=8641494 us cost=1 size=10 card=1)
………………………………………………………
2311740 TABLE ACCESS BY INDEX ROWID BATCHED OE_ORDER_HEADERS_ALL (cr=1103 pr=88 pw=0 time=30784013 us cost=2
………………………………………………………
16183880 TABLE ACCESS BY INDEX ROWID OE_ORDER_LINES_ALL (cr=20808797 pr=606 pw=0 time=116276794 us cost=8 size=288
………………………………………………………
0 TABLE ACCESS BY INDEX ROWID HZ_CUST_SITE_USES_ALL (cr=0 pr=0 pw=0 time=0 us cost=23 size=10 card=1)
………………………………………………………
0 TABLE ACCESS BY INDEX ROWID HZ_CUST_ACCT_SITES_ALL (cr=0 pr=0 pw=0 time=0 us cost=14 size=10 card=1)
………………………………………………………
0 TABLE ACCESS BY INDEX ROWID HZ_CUST_SITE_USES_ALL (cr=0 pr=0 pw=0 time=0 us cost=1 size=10 card=1)
………………………………………………………
0 TABLE ACCESS BY INDEX ROWID HZ_CUST_ACCT_SITES_ALL (cr=0 pr=0 pw=0 time=0 us cost=1 size=10 card=1)
………………………………………………………
0 TABLE ACCESS BY INDEX ROWID HZ_CUST_ACCOUNTS (cr=0 pr=0 pw=0 time=0 us cost=1 size=10 card=1)
………………………………………………………



Test 2: Force full table scans
Test 2 was actually a transitional test, and its results are not directly pertinent to this discussion. The key reason for including
them is to show that populating an object into the IM column store and forcing a full table scan degraded overall performance from 27
to 35 minutes. Clearly this was a system issue rather than a Database In-Memory issue, but it is something to bear in mind if you
experience a similar reduction in performance. The SQL profile is discussed at length in Test 3.

» Note: If a query shows Index Range Scans and Nested Loop Joins with a large row set, eliminating or suppressing indexes will
probably force a Database In-Memory Scan.

Test 3: Further Optimization with Database In-Memory Operations


Removing the USE_NL hint resulted in hash joins and full table scans. About 62% of the time was spent on direct path write temp and
direct path read temp, as the HASH JOIN data above 900 MB spilled to disk (TEMP tablespace), reducing performance. Clearly, the
run time would have been lower had this not happened.
» Note: Forcing full Database In-Memory table scans should have been quicker. This test shows that if performance is slower when
using Database In-Memory, there is likely to be an identifiable reason (in this case, hash joins spilling to disk).
This test was then specifically engineered to enhance Database In-Memory operations. There are three main changes, shown in Table
8, which were implemented using a SQL profile as opposed to modifying the SQL.

Change                      Reason

Removal of USE_NL hint      The USE_NL hint forces nested loop joins and index scans.

Addition of USE_HASH hint   This hint was added to force hash joins and full table scans.

Addition of PARALLEL hint   The hash join data above 900 MB spilled to disk. The PARALLEL hint divided the data across multiple
                            processes; the per-process hash join data shrank and no longer spilled to disk.

Table 8: Additional Tuning
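The changes in Table 8 can be attached to the statement via a SQL profile, leaving the application code untouched. A simplified sketch
using DBMS_SQLTUNE.IMPORT_SQL_PROFILE follows; the hint strings, profile name, and truncated SQL text are illustrative only, and
Appendix B describes the full procedure:

SQL> DECLARE
       l_hints SYS.SQLPROF_ATTR;
     BEGIN
       -- Replacement hints: force hash joins and parallel execution
       l_hints := SYS.SQLPROF_ATTR('USE_HASH(rl h l ca)', 'PARALLEL');
       DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
         sql_text    => 'INSERT INTO OE_INIT_CREDIT_SUMM_ ...',  -- full original statement text
         profile     => l_hints,
         name        => 'INIT_CREDIT_SUMM_PROFILE',              -- illustrative name
         force_match => TRUE);
     END;
     /

With force_match => TRUE, the profile also matches statements that differ only in literal values.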

The execution plan shows the Database In-Memory full scans as expected, with hash joins and parallel (PX) processes.
SQL> INSERT INTO OE_INIT_CREDIT_SUMM_ ……………………
SELECT /*+ leading(L) Parallel(L) */ ……………………
FROM OE_ORDER_LINES_ALL ……………………
AND EXISTS ( SELECT /*+ no_unnest */ NULL FROM OE_PAYMENT_TYPES_ALL ……………………
UNION ALL SELECT /*+ no_unnest */ NULL FROM OE_PAYMENT_TYPES_ALL ……………………
UNION SELECT /*+ cardinality ( rl 10 ) leading(rl h l) use_hash(rl h l ca) Parallel(rl) Parallel(h) Parallel(l) */
……………………
AND EXISTS ( SELECT /*+ no_unnest */ NULL FROM OE_PAYMENT_TYPES_ALL ……………………
UNION ALL SELECT /*+ no_unnest */ NULL FROM OE_PAYMENT_TYPES_ALL ……………………

Rows    Row Source Operation
------- ---------------------------------------------------
0 LOAD TABLE CONVENTIONAL OE_INIT_CREDIT_SUMM_GTT (cr=33741 pr=0 pw=0 time=757262385 us)
………………………………………………………
2081253 PX COORDINATOR (cr=84 pr=0 pw=0 time=51810054 us)
………………………………………………………
0 PX BLOCK ITERATOR (cr=0 pr=0 pw=0 time=0 us cost=5 size=31230 card=3123)
0 TABLE ACCESS INMEMORY FULL HZ_CUST_ACCOUNTS (cr=0 pr=0 pw=0 time=0 us cost=5 size=31230 card=3123)
0 HASH JOIN (cr=0 pr=0 pw=0 time=0 us cost=4667 size=103061106 card=673602)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us cost=4 size=39980 card=3998)
………………………………………………………
0 TABLE ACCESS INMEMORY FULL HZ_CUST_ACCT_SITES_ALL (cr=0 pr=0 pw=0 time=0 us cost=4 size=39980 card=3998)
0 HASH JOIN (cr=0 pr=0 pw=0 time=0 us cost=4653 size=96325086 card=673602)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us cost=5 size=31230 card=3123)
………………………………………………………
0 TABLE ACCESS INMEMORY FULL HZ_CUST_ACCOUNTS (cr=0 pr=0 pw=0 time=0 us cost=5 size=31230 card=3123)
0 HASH JOIN (cr=0 pr=0 pw=0 time=0 us cost=4636 size=89589066 card=673602)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us cost=4 size=39980 card=3998)
………………………………………………………
0 TABLE ACCESS INMEMORY FULL HZ_CUST_ACCT_SITES_ALL (cr=0 pr=0 pw=0 time=0 us cost=4 size=39980 card=3998)
0 HASH JOIN (cr=0 pr=0 pw=0 time=0 us cost=4622 size=82853046 card=673602)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us cost=7 size=77010 card=7701)
………………………………………………………
0 TABLE ACCESS INMEMORY FULL HZ_CUST_SITE_USES_ALL (cr=0 pr=0 pw=0 time=0 us cost=7 size=77010 card=7701)
0 HASH JOIN (cr=0 pr=0 pw=0 time=0 us cost=4604 size=76186408 card=674216)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us cost=46 size=954315 card=21207)
………………………………………………………
0 TABLE ACCESS INMEMORY FULL OE_ORDER_HEADERS_ALL (cr=0 pr=0 pw=0 time=0 us cost=46 size=954315 card=21207)
0 HASH JOIN (cr=0 pr=0 pw=0 time=0 us cost=4527 size=151851276 card=2233107)
………………………………………………………



2259101 TABLE ACCESS INMEMORY FULL OE_ORDER_LINES_ALL (cr=37 pr=0 pw=0 time=2320084 us cost=4446 size=129538070
………………………………………………………
0 TABLE ACCESS INMEMORY FULL HZ_CUST_SITE_USES_ALL (cr=0 pr=0 pw=0 time=0 us cost=7 size=77010 card=7701)
………………………………………………………
0 TABLE ACCESS INMEMORY FULL OE_PAYMENT_TYPES_ALL (cr=0 pr=0 pw=0 time=0 us cost=4 size=17296 card=1081)
………………………………………………………
0 TABLE ACCESS INMEMORY FULL HZ_CUST_ACCT_SITES_ALL (cr=0 pr=0 pw=0 time=0 us cost=4 size=39980 card=3998)
0 HASH JOIN (cr=0 pr=0 pw=0 time=0 us cost=6660 size=701316 card=4554)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us cost=7 size=77010 card=7701)
………………………………………………………
0 TABLE ACCESS INMEMORY FULL HZ_CUST_SITE_USES_ALL (cr=0 pr=0 pw=0 time=0 us cost=7 size=77010 card=7701)
0 HASH JOIN (cr=0 pr=0 pw=0 time=0 us cost=6651 size=656352 card=4558)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us cost=5 size=31230 card=3123)

Note: When using this SQL profile, the original SQL runs with the modified execution plan without any changes to the original Oracle E-
Business Suite code.

Populating the objects into the Database IM column store resulted in a 35 minute runtime. Further tuning reduced this to 14 minutes,
which equates to a further improvement of 2.5X.

Example 3: Receiving Transaction Processor


This example shows two main performance problems that could have been addressed by adding an index, but in this case Database In-
Memory provides an alternative approach.

Background
The Receiving Transaction Processor (RCVTP) processes pending or unprocessed receiving transactions. The mode in which it operates
(on-line, immediate, or batch) is controlled by a profile option that can be set at the site, application, responsibility, and user levels. It is
used extensively within Oracle E-Business Suite, and it:

» Validates Advance Shipment Notice (ASN) and Advance Shipment and Billing Notice (ASBN) information.
» Creates receipt headers and receipt lines.
» Maintains transaction history, inventory, and supply information.
» Accrues uninvoiced receipt liabilities.
» Maintains a range of quantity information on purchase orders and requisitions.
» Closes purchase orders for receiving.
Performance issues tend to occur when running in batch mode. In this mode, transactions are stored in the receiving interface tables but
are not processed until the user runs the Receiving Transaction Processor. In addition to being populated by the receiving forms, the
interface tables can also be loaded with large numbers of transactions through the receiving open interface. Performance tends to
decrease when there are many “lot or serial controlled items” to be processed in the batch.

Test Configuration
In this test, a Purchase Order with a serial controlled item was created and approved. Next, 18,000 receipts were populated with direct
delivery routing, thereby automatically creating and populating inventory transactions into the receiving interface tables. Finally, the
Receiving Transaction Processor was run. The following tables contained 18,000 entries each, and were populated into the IM column
store:
» RCV_HEADERS_INTERFACE
» RCV_TRANSACTIONS_INTERFACE
The Database In-Memory statistics for these tables are as follows:
OWNER NAME SIZE_MB INMEMORY_SIZE_MB COMP_RATIO INMEMORY_COMPRESSION
---------- ------------------------------ ---------- ---------------- ---------- --------------------
PO RCV_TRANSACTIONS_INTERFACE 1600 97 16.4842241 FOR DML
PO RCV_HEADERS_INTERFACE 320 50 6.35148515 FOR QUERY LOW

Notice that the RCV_TRANSACTIONS_INTERFACE table achieved a 16.48x compression ratio. This example also shows the use of the
FOR DML compression option, which is optimized for DML performance.
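
As a sketch, tables such as these could be assigned the compression options shown above using the standard INMEMORY syntax (the
table names follow the statistics listing; the choice of clause for each table is this test's configuration, not a general recommendation):

```sql
-- Optimize the heavily updated interface table for DML performance.
ALTER TABLE po.rcv_transactions_interface
  INMEMORY MEMCOMPRESS FOR DML;

-- Use query-optimized compression for the headers table.
ALTER TABLE po.rcv_headers_interface
  INMEMORY MEMCOMPRESS FOR QUERY LOW;
```

The tables are then populated into the IM column store on first access, or immediately if a PRIORITY clause is also specified.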

Tests and Results
The end-to-end results for the baseline and Database In-Memory tests are shown in Table 9.

Test                      End-to-End Run Time   Top SQL Elapsed   Comment
-----------------------   -------------------   ---------------   -----------------------------------------------
Baseline Test             10.2 hours            5.9 hours         These two SQL statements account for the
                                                1.8 hours         majority of the run time.
Database In-Memory Test   2.6 hours             1.05 mins         A significant improvement was observed.
                                                3.8 mins

Table 9: Overall End-to-End Run Times

Baseline Test
There are two statements that account for 5.9 hours (21346 seconds) and 1.8 hours (6944 seconds):
SQL> UPDATE RCV_TRANSACTIONS_INTERFACE SET PARENT_TRANSACTION_ID = :B3, SHIPMENT_LINE_ID = :B2
WHERE PARENT_INTERFACE_TXN_ID = :B1 AND PARENT_TRANSACTION_ID IS NULL

call count cpu elapsed disk query current rows


------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 18000 21245.10 21346.61 147359 3841632008 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 18001 21245.10 21346.61 147359 3841632008 0 0

Rows (1st) Rows (avg) Rows (max) Row Source Operation


---------- ---------- ---------- ---------------------------------------------------
0 0 0 UPDATE RCV_TRANSACTIONS_INTERFACE (cr=213424 pr=147359 pw=0 time=45452985 us)
0 0 0 TABLE ACCESS FULL RCV_TRANSACTIONS_INTERFACE (cr=213424 pr=147359 pw=0 time=…….

SQL> SELECT COUNT(*)


FROM RCV_HEADERS_INTERFACE RHI WHERE RHI.RECEIPT_NUM = :B3 AND
(RHI.SHIP_TO_ORGANIZATION_ID = :B2 OR RHI.SHIP_TO_ORGANIZATION_CODE = :B1 )
AND RHI.PROCESSING_STATUS_CODE IN ('PENDING','RUNNING')

call count cpu elapsed disk query current rows


------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 18000 0.44 0.44 0 1 0 0
Fetch 18000 6501.67 6944.16 794863010 795484520 0 18000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 36001 6502.12 6944.60 794863010 795484521 0 18000

Rows (1st) Rows (avg) Rows (max) Row Source Operation


---------- ---------- ---------- ---------------------------------------------------
1 1 1 SORT AGGREGATE (cr=44299 pr=44142 pw=0 time=418937 us)
0 0 0 TABLE ACCESS FULL RCV_HEADERS_INTERFACE (cr=44299 pr=44142 pw=0 time=…….

Database In-Memory Test


The Database In-Memory run took 2.61 hours in total. The problematic queries identified in the baseline run completed in 4.8 minutes
when the tables were populated into the IM column store.
SQL> UPDATE RCV_TRANSACTIONS_INTERFACE
SET PARENT_TRANSACTION_ID = :B3, SHIPMENT_LINE_ID = :B2
WHERE PARENT_INTERFACE_TXN_ID = :B1 AND PARENT_TRANSACTION_ID IS NULL

call count cpu elapsed disk query current rows


------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 0.00 0.00 0 0 0 0
Execute 18000 57.67 63.87 0 5955192 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 18002 57.67 63.87 0 5955192 0 0

Rows (1st) Rows (avg) Rows (max) Row Source Operation


---------- ---------- ---------- ---------------------------------------------------
0 0 0 UPDATE RCV_TRANSACTIONS_INTERFACE (cr=903 pr=0 pw=0 time=10444 us)
0 0 0 TABLE ACCESS INMEMORY FULL RCV_TRANSACTIONS_INTERFACE (cr=903 pr=0 pw=0 time=…..

SQL> SELECT COUNT(*)
FROM RCV_HEADERS_INTERFACE RHI WHERE RHI.RECEIPT_NUM = :B3 AND
(RHI.SHIP_TO_ORGANIZATION_ID = :B2 OR RHI.SHIP_TO_ORGANIZATION_CODE = :B1 )
AND RHI.PROCESSING_STATUS_CODE IN ('PENDING','RUNNING')

call count cpu elapsed disk query current rows


------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 18000 0.48 0.45 0 1 0 0
Fetch 18000 226.89 227.74 1439 124670054 0 18000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 36001 227.37 228.20 1439 124670055 0 18000

Rows (1st) Rows (avg) Rows (max) Row Source Operation


---------- ---------- ---------- ---------------------------------------------------
1 1 1 SORT AGGREGATE (cr=11433 pr=0 pw=0 time=13677 us)
0 0 0 TABLE ACCESS INMEMORY FULL RCV_HEADERS_INTERFACE (cr=11433 pr=0 pw=0 time=…….

For this type of data and processing mode, the number of executions is high, and you can see that it corresponds to the number of rows
processed. Despite this, the combined elapsed time of the two longest-running queries was reduced from 7.8 hrs (471 mins) to 4.85 mins
with Database In-Memory, which represents about a 100x improvement in performance.

The program still took 2.6 hours to run, and could have been reviewed again to look for other tuning opportunities. Although these two
performance issues could have been addressed by adding an index, in this case the Database In-Memory gains were achieved without
any modification.

Example 4: Using AWR Reports – The System Perspective


This example approaches tuning from a system load and DBA perspective: rather than looking at individual, isolated performance
issues, it takes a holistic view of the system using Automatic Workload Repository (AWR) reports. This approach can be used, for
example, to identify the most resource-intensive jobs run during overnight batch runs or month-end processing. Note that a DBA may
need help from a system administrator to identify the programs and any associated parameters for concurrent requests. Start your
analysis by reviewing the AWR reports corresponding to the most resource-intensive periods.

It is always good practice to review AWR reports along with system load charts, so that you always know what is happening on the
system and can check for unexpected events or extraneous loads. Always consider taking AWR snapshots for a process that runs
longer than a few minutes. If the process spans several AWR reports (default 1 hour), always ensure that you have (and review)
individual AWR reports for the entire period, as some of the metrics will not appear until the final report when, for example, a query
completes. A very common mistake is to use a single AWR report that spans the entire process. This misses short-term spikes and, more
importantly, may mask actual problems. Consider an analogy of driving 100 miles over a total of four hours. You start by driving at
10 mph for three hours, and then at 70 mph for one hour. An overall trip report shows that you drove at 25 mph with no real issues. You
can’t see that you were limited to 10 mph for the first three hours, perhaps because of heavy traffic. The same is true of AWR reports,
and this is one of the most common reasons why performance issues are overlooked – the peaks and troughs tend to be ironed out and
so the overall picture is misleading.
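
Where a process runs longer than a few minutes, bracketing it with manual snapshots gives AWR reports that cover exactly the interval
of interest. As a sketch (assuming a user with the privileges to run the DBMS_WORKLOAD_REPOSITORY package):

```sql
-- Take a snapshot immediately before starting the process.
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- ... run the concurrent program or batch process under investigation ...

-- Take a snapshot immediately after the process completes.
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- Then generate the AWR report between the two snapshots:
-- @?/rdbms/admin/awrrpt.sql
```

For longer processes, keep the default hourly snapshots as well, so that each interval can be reviewed individually.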

There are several Database In-Memory metrics in an AWR report. These range from the SGA Memory Summary shown in Figure 5 to
the Database In-Memory Segments by Scans in Figure 6 that show you the actual usage.

Figure 6: Database In-Memory Segments by Scans

Figure 5: SGA Memory Summary

» Note: The Database In-Memory Segments by Scans section of the AWR report is very useful for checking which objects in the IM
column store are being used in which Oracle E-Business Suite application cycle throughout the month.

Returning to the example, always start by reviewing the Load Profile example as shown in Figure 7.

Figure 7: Example Load Profile

» Note: In particular, look for high-load SQL statements that have a low or moderate number of executions. It is unlikely that statements
with a high number of executions querying only a few rows each time will benefit. For further information about how to review AWR
reports and ensure they are as useful, refer to the Review AWR Reports section in Appendix A.
Concurrent programs typically have the highest load, and although the program name usually appears in the AWR report, you will need
to review the Concurrent Request details to establish the parameters that were used. While some concurrent programs simply create
reports, many update data and therefore cannot be rerun iteratively, even in development, test, or UAT environments.
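
One way to find such statements is to query the AWR history directly. The following sketch assumes the Oracle Diagnostics Pack
license (required for the DBA_HIST views) and a snapshot range of interest; the bind variables and the threshold of 100 executions are
illustrative placeholders:

```sql
-- High-load SQL with a low-to-moderate execution count, from AWR history.
SELECT sql_id,
       SUM(executions_delta)                executions,
       ROUND(SUM(elapsed_time_delta)/1e6)   elapsed_secs,
       SUM(physical_reads_delta)            physical_reads
FROM   dba_hist_sqlstat
WHERE  snap_id BETWEEN :begin_snap AND :end_snap
GROUP  BY sql_id
HAVING SUM(executions_delta) < 100
ORDER  BY SUM(elapsed_time_delta) DESC
FETCH FIRST 10 ROWS ONLY;
```

The resulting SQL IDs can then be cross-checked against the high-load SQL sections of the AWR reports.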

Figure 8 compares significant statistics from an AWR Load Profile without and with Database In-Memory. This set of data was created
during an Oracle Discrete Manufacturing Non-Engineered System Benchmark.

Figure 8: Comparing AWR Load Profiles


The workloads were broadly comparable, although as you can see from the metrics, the AWR snapshots were taken at slightly different
times during the benchmark. The IM Scan Rows metric clearly shows that the right-hand statistics are for a system using Database In-
Memory. The key statistics are shown in Table 10.

Statistic          Without Database   With Database   Comment
                   In-Memory          In-Memory
Logical reads      75,206             128,182         Logical reads increases with the amount of IM column store data.
Physical reads     72,905             11,157          Physical reads reduces, and Logical reads increases.
Read IO requests   650                346             Reduces with the amount of IM column store data accessed.
Read IO (MB)       569                87              Reduces as less data is being read.

Table 10: Key AWR statistics

A reduction in physical reads will always have a corresponding increase in logical reads, which will inevitably have a knock-on increase
in the overall system CPU utilization. If you have a very large Database In-Memory area, you may need to look at further tuning the SQL
as a next step.

Figure 9 compares the top 12 highest-load SQL statements, ordered by physical reads. The left side is the traditional database and the
right side is with Database In-Memory. The statistics and SQL IDs for those visible on screen for both AWRs are highlighted with
different colours.

Notice that the standard system has two very high-load merge SQL statements that are significantly in excess of any others, at 81M and
74M disk reads respectively. These were responsible for the majority of the system load, and were processing a large amount of data.
Sometimes this is a valid situation and cannot be avoided. The highest load statement (81M disk reads) reduces to 5.7M reads with
Database In-Memory, and its elapsed time reduces from 31 minutes to 18 minutes. Note also that these tables were simply populated
into the IM column store, and the SQL has not been modified using a SQL profile.

Figure 9: High-load SQL ordered by physical reads

The top SQL statements are compared in Table 11. All the SQL statements would have run, but they are either not in the subset in the
image, or they were no longer sufficiently noteworthy to be reported in the AWR report. As mentioned previously, the highest load SQL
statements reduce from 81M reads to 5.7M reads, which represents a substantial reduction in disk access.

                  Physical Reads                              Elapsed Time (secs)
SQL ID            Traditional    In-Memory    Improvement    Traditional    In-Memory    Improvement
fub242tyghqpc     81,328,268     5,730,368    14.2           1,861.85       1,088.28     1.7
4h6hyrbd1u7q6     74,458,141     -            -              941.66         -            -
fzmc8a5ntgwhp     8,421,612      479,227      17.6           4,332.73       3,312.47     1.3
bxayry55ncf8s     6,936,216      570,505      12.2           2,859.95       2,758.05     1.0
5zqc7xg7sz3gt     6,109,482      115,786      52.8           4,223.34       86.20        49.0
2wnnscaa8d6t9     5,913,515      -            -              4,247.36       -            -
4s547krr802nq     3,811,712      -            -              596.86         -            -
chh2mpbttcskx     3,766,927      4,402,862    0.9            166.70         184.80       0.9
gfn64sh588pd3     3,463,244      69,641       49.7           2,085.61       402.01       5.2
gk1w3s3zwn57w     3,411,278      -            -              32.21          -            -
0ug4j5p5zg6rs     3,289,881      128,734      25.6           674.98         163.70       4.1
96wdcm40dd1s0     3,171,146      -            -              1,648.88       -            -
7dyu0z17agfbs     1,650,828      253,547      6.5            711.76         339.06       2.1
Table 11: Physical Reads Compared

Conclusion
This paper has shown how the Oracle Database In-Memory option is a valuable tool that can be used to address various performance
issues. However, deciding what, when and how to use Database In-Memory requires careful thought, and is further complicated by the
Oracle E-Business Suite business cycle, which typically varies throughout the month. You may need to develop a strategy whereby key
objects are placed into the IM column store specifically to speed up month-end processing, but are then replaced with other sets that
map to other business functions throughout the month.

Simply populating an object into the Database IM column store is not necessarily a panacea. It may be that there are no clear high-load
statements in a trace file, due to the application logic, row-by-row processing, the presence of embedded hints that preclude Database
In-Memory usage, or the use of data import or other non-IM functions. Also, operations such as sorting and classic aggregation will still
take time. If performance degrades when using Database In-Memory, there is likely to be a specific reason, such as in the earlier
example where hash joins spilled to disk.

Not only can Database In-Memory be applied to analytical reports, which is one of its headline features, but this paper has shown it can
also be applied to Oracle Forms queries and DML statements, and in addition, used as an alternative to creating multiple indexes on the
same table.

One of the biggest issues is the extensive use of SQL hints across Oracle E-Business Suite. The examples have shown how to
circumvent these by using a SQL profile, and then how to perform additional tuning to improve performance further. Combining
Database In-Memory with other approaches in this way can result in a significantly higher level of performance.

Appendix A: Related Documentation
This appendix lists important Oracle core documentation, My Oracle Support Knowledge Documents, and blog references, which
provide useful information.

Oracle Database Documentation


This section contains generic information for Database In-Memory and is not specific to Oracle E-Business Suite:
» Oracle Database Reference 12c Release 1 (12.1) E41527-1
• For Oracle Database initialization parameters and generic Database In-Memory syntax.
» Oracle Database Concepts 12c Release 1 (12.1) E41396-11
• For overview information on the IM column store, and the row and columnar formats.
» Oracle Database Administrator's Guide 12c Release 1 (12.1) E41484-10
• For information on compression methods, general database parameters, and using the IM column store.

» Oracle Database In-Memory Data Sheet: http://www.oracle.com/technetwork/database/options/database-in-memory-ds-2210927.pdf

White Papers
These papers are generic and do not specifically focus on Oracle E-Business Suite.
» Oracle Database In-Memory
• This paper provides an extensive overview of Oracle Database In-Memory and is a useful precursor to this paper. It should be
considered as the source for many of the generic statements in this document. This is available on the Oracle Optimizer blog:
https://blogs.oracle.com/In-Memory/ and Oracle TechNet: http://www.oracle.com/technetwork/database/in-
memory/overview/twp-oracle-database-in-memory-2245633.html
» Oracle Database In-Memory Advisor
• The Oracle Database In-Memory Advisor (MOS Doc ID 1965343.1)

• Oracle Database In-Memory Advisor Best Practices white paper

My Oracle Support Knowledge Documents


This section links to essential documents that will assist your understanding throughout this paper.
» Using Oracle E-Business Suite Affinity
Refer to the following documents for further information about how to implement affinity.

• Configuring and Managing Oracle E-Business Suite Release 12.1.x Application Tiers for Oracle RAC (MOS Doc ID 1311528.1)

• Configuring and Managing Oracle E-Business Suite Release 12.2.x Forms and Concurrent Processing for Oracle RAC (MOS
Doc ID 2029173.1)
» Adjusting SGA_TARGET and PGA_TARGET
Refer to the following document when sizing the IM column store.
• Oracle Database In-Memory Option (DBIM) Basics and Interaction with Data Warehousing Features (MOS Doc ID 1903683.1).
» Gather Statistics
Before making any changes, ensure that you have all the necessary indexes and that the system and object statistics are up to date.
This document provides an extensive overview of the Oracle database statistics gathering process and describes several methods for
collecting or setting statistics.

• Best Practices for Gathering Statistics with Oracle E-Business Suite (MOS Doc ID 1586374.1)

» Collate Diagnostic Data
This document outlines the most common performance diagnostic data used by Oracle Development for analyzing and resolving
performance issues in Oracle E-Business Suite. It describes how to collect this data. This will help you evaluate whether Database In-
Memory might provide an appropriate solution.

• Collecting Diagnostic Data for Performance Issues in Oracle E-Business Suite (MOS Doc ID 1121043.1)
» Review AWR Reports
In addition to other diagnostics, this document summarizes how to monitor performance data for Oracle Forms, modules and concurrent
reports using ASH and AWR:

• Performance Diagnosis with Automatic Workload Repository (AWR) (MOS Doc ID 1674086.1)

Blogs
The Oracle Database In-Memory blog contains a wealth of generic information, technical details, ideas and news on Oracle Database In-
Memory from the author of the Oracle White Paper on Oracle Database In-Memory.
» Oracle Database In-Memory on RAC - Part 1
• This article starts with background information on how the IM column stores are populated on Oracle RAC and then discusses
how to manage parallelization.

» Oracle Database In-Memory on RAC - Part 2


• This article explains how Oracle RAC services can be used to control how data is populated.
» Oracle Database In-Memory on RAC - Part 3
• This article reviews the DUPLICATE and DUPLICATE ALL sub-clauses.

Appendix B: Implementing Plan Changes Using SQL Profiles
As many Oracle E-Business Suite modules are closely interlinked, when creating, changing or dropping indexes you must check the
performance across all other parts of the application. Something that had previously never been a problem may now become a
business-critical issue, possibly affecting customer-facing operations. This has further support ramifications, as Oracle Support will ask
you to check for the existence of the correct indexes when you report a performance problem; any Oracle E-Business Suite performance
issues will need to be replicated in a standard environment.

SQL Profile Advantages


A SQL profile can be used to force Database In-Memory specific code changes such as adding new hints to existing SQL, and has the
advantage that it can be implemented without modifying the actual Oracle E-Business Suite code or indexes. Creating a SQL profile can
take some time, even for the most adept. Dropping and recreating indexes may also be non-trivial with large amounts of data. Making
indexes invisible may be a useful technique for quick tests before suppressing them using a SQL profile.
» Note: When considering your index strategy (which to make invisible or even drop), try to limit your choices to those used in an
analytical context. When considering indexes that are used in the transactional context, care should be taken to evaluate the impact of
dropping such indexes on other parts of that or another application.
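
As a sketch of the invisible-index technique (the index name here is hypothetical), an index can be hidden from the optimizer and
quickly re-enabled without rebuilding it:

```sql
-- Hide a candidate index from the optimizer (hypothetical index name).
ALTER INDEX po.rcv_transactions_interface_n1 INVISIBLE;

-- Optionally let the current session still use invisible indexes
-- while comparing execution plans.
ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES = TRUE;

-- Re-enable the index once testing is complete.
ALTER INDEX po.rcv_transactions_interface_n1 VISIBLE;
```

While invisible, the index is still maintained by DML, so making it visible again is immediate.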

Creating a SQL Profile


It can be difficult to know the exact hints to embed into a SQL profile. This example walks you through an approach that simplifies this
process. Consider the following problematic SQL:
-- SQL 1
SQL> SELECT /*+ LEADING(L) USE_NL(H SU_L) */
L.ORDERED_QUANTITY, L.UNIT_SELLING_PRICE , H.INVOICE_TO_ORG_ID
FROM OE_ORDER_LINES_ALL L, OE_ORDER_HEADERS_ALL H, HZ_CUST_SITE_USES_ALL SU_L
WHERE H.HEADER_ID = L.HEADER_ID AND H.BOOKED_FLAG = 'Y' AND H.OPEN_FLAG = 'Y'
AND L.OPEN_FLAG = 'Y' AND NVL ( L.INVOICED_QUANTITY , 0 ) = 0
AND SU_L.SITE_USE_ID = L.INVOICE_TO_ORG_ID;

Execution plan:
------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1069K| 43M| 4120K (1)| 00:02:41 |
| 1 | NESTED LOOPS | | 1069K| 43M| 4120K (1)| 00:02:41 |
| 2 | NESTED LOOPS | | 1069K| 38M| 4120K (1)| 00:02:41 |
|* 3 | TABLE ACCESS INMEMORY FULL | OE_ORDER_LINES_ALL | 4069K| 89M| 45164 (6)| 00:00:02 |
|* 4 | TABLE ACCESS BY INDEX ROWID| OE_ORDER_HEADERS_ALL | 1 | 15 | 1 (0)| 00:00:01 |
|* 5 | INDEX UNIQUE SCAN | OE_ORDER_HEADERS_U1 | 1 | | 0 (0)| 00:00:01 |
|* 6 | INDEX UNIQUE SCAN | HZ_CUST_SITE_USES_U1 | 1 | 5 | 0 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------
This SQL includes a USE_NL hint, which forces a nested loop join and index scans of two of the tables. Ideally, we want to convert these
into full table scans for Database In-Memory. Therefore, we need to create a SQL profile to do exactly that: override the existing
USE_NL hint and force Database In-Memory full table scans and hash joins.

In order to achieve the desired execution plan, use the following SQL:
-- SQL 2
SQL> SELECT /*+ LEADING(L) FULL(H) FULL(SU_L) USE_HASH(L H SU_L) */
L.ORDERED_QUANTITY, L.UNIT_SELLING_PRICE , H.INVOICE_TO_ORG_ID
FROM OE_ORDER_LINES_ALL L, OE_ORDER_HEADERS_ALL H, HZ_CUST_SITE_USES_ALL SU_L
WHERE H.HEADER_ID = L.HEADER_ID AND H.BOOKED_FLAG = 'Y' AND H.OPEN_FLAG = 'Y'
AND L.OPEN_FLAG = 'Y' AND NVL ( L.INVOICED_QUANTITY , 0 ) = 0
AND SU_L.SITE_USE_ID = L.INVOICE_TO_ORG_ID;

Execution plan:
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2656K| 108M| | 43298 (7)| 00:00:02 |
|* 1 | HASH JOIN | | 2656K| 108M| | 43298 (7)| 00:00:02 |
|* 2 | TABLE ACCESS INMEMORY FULL | OE_ORDER_HEADERS_ALL | 37262 | 545K| | 65 (48)| 00:00:01 |
|* 3 | HASH JOIN | | 4962K| 132M| 165M| 43168 (6)| 00:00:02 |
|* 4 | TABLE ACCESS INMEMORY FULL| OE_ORDER_LINES_ALL | 4963K| 108M| | 30861 (8)| 00:00:02 |
| 5 | TABLE ACCESS INMEMORY FULL| HZ_CUST_SITE_USES_ALL | 7701 | 38505 | | 9 (34)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------

» Note: Many Oracle customers already use SQL profiles to address performance issues in their environments, and so you should
check that there are no existing SQL profiles for these statements.
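A quick way to check for existing profiles is to query the standard dictionary view, for example:

```sql
-- List existing SQL profiles with their status and matching mode.
SELECT name, status, force_matching, created
FROM   dba_sql_profiles
ORDER  BY created DESC;
```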
Both SQL statements will exist in V$SQL and have a SQL_ID, which you will need.

SQL> SELECT SQL_ID,SQL_PROFILE, SQL_TEXT FROM V$SQL WHERE SQL_TEXT LIKE '%L.OPEN_FLAG%';

SQL_ID SQL_PROFILE SQL_TEXT


-------- ------------ -------------------------------------------------
3npc1kr6r527z SELECT /*+ LEADING (L) USE_NL(H SU_L) */
L.ORDERED_QUANTITY , L.UNIT_SELLING_PRICE,
H.INVOICE_TO_ORG_ID FROM OE_ORDER_LINES_ALL L,
OE_ORDER_HEADERS_ALL H,
................

9dbcsbdbf5cf1 SELECT /*+ LEADING(L) FULL(H) FULL(SU_L) USE_HASH(L H SU_L)*/


L.ORDERED_QUANTITY, L.UNIT_SELLING_PRICE, H.INVOICE_TO_ORG_ID
FROM OE_ORDER_LINES_ALL L,OE_ORDER_HEADERS_ALL H,
................

SQL_ID 9dbcsbdbf5cf1 represents the preferred execution plan for SQL 2. You can display the execution plan with the outline option
using the following command:
SQL> SELECT *
FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('9DBCSBDBF5CF1',0, 'OUTLINE'));

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------
SQL_ID 9dbcsbdbf5cf1, child number 0
-------------------------------------
SELECT /*+ LEADING(L) FULL(H) FULL(SU_L) USE_HASH(L H SU_L) */ L.ORDERED_QUANTITY,L.UNIT_SELLING_PRICE ,
H.INVOICE_TO_ORG_ID FROM OE_ORDER_LINES_ALL L, OE_ORDER_HEADERS_ALL H,HZ_CUST_SITE_USES_ALL SU_L
WHERE H.HEADER_ID = L.HEADER_ID AND H.BOOKED_FLAG = 'Y' AND H.OPEN_FLAG = 'Y' AND L.OPEN_FLAG = 'Y'
AND NVL (L.INVOICED_QUANTITY , 0 ) = 0 AND SU_L.SITE_USE_ID =L.INVOICE_TO_ORG_ID

Plan hash value: 2536060714


--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | | 25009 (100)| |
|* 1 | HASH JOIN | | 2656K| 108M| | 25009 (10)| 00:00:01 |
|* 2 | TABLE ACCESS INMEMORY FULL | OE_ORDER_HEADERS_ALL | 37262 | 545K| | 65 (48)| 00:00:01 |
|* 3 | HASH JOIN | | 4962K| 132M| 165M| 24878 (10)| 00:00:01 |
|* 4 | TABLE ACCESS INMEMORY FULL| OE_ORDER_LINES_ALL | 4963K| 108M| | 12571 (17)| 00:00:01 |
| 5 | TABLE ACCESS INMEMORY FULL| HZ_CUST_SITE_USES_ALL | 7701 | 38505 | | 9 (34)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------

Outline Data
-------------
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('12.1.0.2')
DB_VERSION('12.1.0.2')
OPT_PARAM('_b_tree_bitmap_plans' 'false')
OPT_PARAM('_fast_full_scan_enabled' 'false')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
FULL(@"SEL$1" "L"@"SEL$1")
FULL(@"SEL$1" "SU_L"@"SEL$1")
FULL(@"SEL$1" "H"@"SEL$1")
LEADING(@"SEL$1" "L"@"SEL$1" "SU_L"@"SEL$1" "H"@"SEL$1")
USE_HASH(@"SEL$1" "SU_L"@"SEL$1")
USE_HASH(@"SEL$1" "H"@"SEL$1")
SWAP_JOIN_INPUTS(@"SEL$1" "H"@"SEL$1")
END_OUTLINE_DATA
*/

Predicate Information (identified by operation id):


---------------------------------------------------
1 - access("H"."HEADER_ID"="L"."HEADER_ID")
2 - inmemory(("H"."OPEN_FLAG"='Y' AND "H"."BOOKED_FLAG"='Y'))
filter(("H"."OPEN_FLAG"='Y' AND "H"."BOOKED_FLAG"='Y'))
3 - access("SU_L"."SITE_USE_ID"="L"."INVOICE_TO_ORG_ID")
4 - inmemory(("L"."OPEN_FLAG"='Y' AND NVL("L"."INVOICED_QUANTITY",0)=0))

Using the generated outline, add the relevant hints to the first SQL statement, thereby changing its execution plan to use hash joins and
Database In-Memory full scans.

If this is the first time you are creating a profile, connect as the SYS user and execute the following grant:
SQL> GRANT ADMINISTER SQL MANAGEMENT OBJECT to APPS;

Create the SQL profile as follows:


SQL> DEFINE SQL_ID = '3NPC1KR6R527Z';
DECLARE
CLSQL_TEXT CLOB;
BEGIN
SELECT SQL_FULLTEXT INTO CLSQL_TEXT FROM V$SQLAREA WHERE SQL_ID = '&SQL_ID';
DBMS_SQLTUNE.IMPORT_SQL_PROFILE(SQL_TEXT => CLSQL_TEXT,
PROFILE=> SQLPROF_ATTR('USE_HASH(@"SEL$1" H L SU_L)FULL(H@SEL$1)FULL(L@SEL$1) FULL(SU_L@SEL$1)'),
NAME=>'PROFILE_&SQL_ID',
FORCE_MATCH=>TRUE);
END;
/

Execute the first statement (SQL-1) and check its execution plan to confirm that the SQL profile is being used:
SQL> SELECT /*+ LEADING (L) USE_NL(H SU_L) */ L.ORDERED_QUANTITY,L.UNIT_SELLING_PRICE,
H.INVOICE_TO_ORG_ID FROM OE_ORDER_LINES_ALL L,OE_ORDER_HEADERS_ALL H, HZ_CUST_SITE_USES_ALL SU_L
WHERE H.HEADER_ID = L.HEADER_ID AND H.BOOKED_FLAG = 'Y' AND H.OPEN_FLAG = 'Y' AND L.OPEN_FLAG = 'Y'
AND NVL ( L.INVOICED_QUANTITY , 0 ) = 0 AND SU_L.SITE_USE_ID = L.INVOICE_TO_ORG_ID;

SET LINESIZE 130 PAGESIZE 0

SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('3npc1kr6r527z', 0));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------
SQL_ID 3npc1kr6r527z, child number 0
-------------------------------------
SELECT /*+ LEADING (L) USE_NL(H SU_L) */ L.ORDERED_QUANTITY, L.UNIT_SELLING_PRICE , H.INVOICE_TO_ORG_ID FROM
OE_ORDER_LINES_ALL L, OE_ORDER_HEADERS_ALL H,HZ_CUST_SITE_USES_ALL SU_L
WHERE H.HEADER_ID = L.HEADER_ID AND H.BOOKED_FLAG = 'Y' AND H.OPEN_FLAG = 'Y' AND L.OPEN_FLAG = 'Y'
AND NVL (L.INVOICED_QUANTITY, 0 ) = 0 AND SU_L.SITE_USE_ID = L.INVOICE_TO_ORG_ID

Plan hash value: 2536060714

--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | | 20184 (100)| |
|* 1 | HASH JOIN | | 2656K| 108M| | 20184 (12)| 00:00:01 |
|* 2 | TABLE ACCESS INMEMORY FULL | OE_ORDER_HEADERS_ALL | 37262 | 545K| | 65 (48)| 00:00:01 |
|* 3 | HASH JOIN | | 4962K| 132M| 165M| 20054 (11)| 00:00:01 |
|* 4 | TABLE ACCESS INMEMORY FULL| OE_ORDER_LINES_ALL | 4963K| 108M| | 7747 (26)| 00:00:01 |
| 5 | TABLE ACCESS INMEMORY FULL| HZ_CUST_SITE_USES_ALL | 7701 | 38505 | | 9 (34)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - access("H"."HEADER_ID"="L"."HEADER_ID")
2 - inmemory(("H"."OPEN_FLAG"='Y' AND "H"."BOOKED_FLAG"='Y'))
filter(("H"."OPEN_FLAG"='Y' AND "H"."BOOKED_FLAG"='Y'))
3 - access("SU_L"."SITE_USE_ID"="L"."INVOICE_TO_ORG_ID")
4 - inmemory(("L"."OPEN_FLAG"='Y' AND NVL("L"."INVOICED_QUANTITY",0)=0))
filter(("L"."OPEN_FLAG"='Y' AND NVL("L"."INVOICED_QUANTITY",0)=0))

Note
-----
- SQL profile PROFILE_3npc1kr6r527z used for this statement

The final line shows that the SQL profile PROFILE_3npc1kr6r527z (and therefore the expected execution plan) was used when running
SQL-1.
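
If the profile is no longer required (for example, because the underlying issue has been resolved), it can be dropped with DBMS_SQLTUNE; the profile name shown here matches the example above:
SQL> EXEC DBMS_SQLTUNE.DROP_SQL_PROFILE(NAME => 'PROFILE_3npc1kr6r527z');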

34 | USING ORACLE DATABASE IN-MEMORY WITH ORACLE E-BUSINESS SUITE


Appendix C: Diagnostic Queries
This appendix provides a range of diagnostic queries for use with Database In-Memory.

Database In-Memory Views


These views can be used to check the Database In-Memory status of an object.
» V$IM_SEGMENTS
» V$INMEMORY_AREA
» V$IM_SEGMENTS_DETAIL
» DBA_TABLES
» DBA_TAB_PARTITIONS

Database In-Memory Data Dictionary Reference


The following columns have been added to *_TABLES (e.g. DBA_TABLES, USER_TABLES) to hold the In-Memory attributes: INMEMORY,
INMEMORY_PRIORITY, INMEMORY_DISTRIBUTE, INMEMORY_DUPLICATE, and INMEMORY_COMPRESSION.
When using the following examples, for single node systems, use the V$ views; for Oracle RAC systems, use the GV$ views.
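
For example, on an Oracle RAC system a per-instance view of the IM column store area can be obtained by including INST_ID (this is a sketch; the columns are the same as in the V$INMEMORY_AREA query later in this appendix):
SQL> SELECT INST_ID, POOL, ALLOC_BYTES/1024/1024 ALLOC_MB, USED_BYTES/1024/1024 USED_MB, POPULATE_STATUS
FROM GV$INMEMORY_AREA
ORDER BY INST_ID, POOL;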

» To check which objects are in the IM column store:
V$IM_SEGMENTS and V$IM_USER_SEGMENTS

» To check the details of the columns populated into the IM column store:
V$IM_COLUMN_LEVEL

» To check the space allocation within the INMEMORY_AREA:
V$INMEMORY_AREA

To check the used and allocated memory for the IM column store:
SQL> SELECT POOL, ALLOC_BYTES/1024/1024 ALLOC_BYTES, USED_BYTES/1024/1024 USED_BYTES, POPULATE_STATUS, CON_ID
FROM V$INMEMORY_AREA;

POOL ALLOC_BYTES USED_BYTES POPULATE_STATUS CON_ID
-------------------------- ----------- ---------- -------------------------- ----------
1MB POOL 4095 2254 POPULATING 0
64KB POOL 1008 57.375 POPULATING 0

The following queries identify which tables and table partitions are enabled for the IM column store:
-- For Tables
SQL> SELECT OWNER, TABLE_NAME,INMEMORY,INMEMORY_PRIORITY,INMEMORY_COMPRESSION
FROM DBA_TABLES
WHERE INMEMORY='ENABLED';
OWNER TABLE_NAME INMEMORY INMEMORY_PRIORITY INMEMORY_COMPRESSION
---------- -------------------- -------- ----------------- -----------------
APPLSYS FND_LOOKUP_TYPES ENABLED CRITICAL FOR QUERY LOW

-- For Partitions
SQL> SELECT TABLE_OWNER, TABLE_NAME, PARTITION_NAME, INMEMORY, INMEMORY_PRIORITY, INMEMORY_COMPRESSION
FROM DBA_TAB_PARTITIONS
WHERE INMEMORY='ENABLED';

TABLE_OWNER TABLE_NAME PARTITION_NAME INMEMORY INMEMORY_PRIORITY INMEMORY_COMPRESS
----------- ------------------------- -------------- -------- ----------------- -----------------
APPLSYS WF_ITEM_ACTIVITY_STATUSES WF_ITEM1 ENABLED CRITICAL FOR QUERY LOW

To check which objects have been populated into the IM column store:
-- Objects Populated In The IM column store
SQL> SELECT OWNER,SEGMENT_NAME NAME, BYTES,INMEMORY_SIZE,BYTES/INMEMORY_SIZE COMP_RATIO,POPULATE_STATUS STATUS,
BYTES_NOT_POPULATED
FROM V$IM_SEGMENTS;

OWNER NAME BYTES INMEMORY_SIZE COMP_RATIO STATUS BYTES_NOT_POPULATED
---------- -------------------- ---------- ------------- ---------- --------- -------------------
ONT OE_ORDER_LINES_ALL 1371537408 269221888 5.094 COMPLETED 0



To check progress of object population into the IM column store:
SQL> SELECT OWNER,SEGMENT_NAME NAME, BYTES,INMEMORY_SIZE,BYTES/INMEMORY_SIZE COMP_RATIO,POPULATE_STATUS STATUS,
BYTES_NOT_POPULATED
FROM V$IM_SEGMENTS
WHERE BYTES_NOT_POPULATED<>0;

OWNER NAME BYTES INMEMORY_SIZE COMP_RATIO STATUS BYTES_NOT_POPULATED
---------- -------------------- ---------- ------------- ---------- --------- -------------------
ONT OE_ORDER_LINES_ALL 1371537408 327680 4185.6 STARTED 1365164032

SQL> SELECT OWNER,SEGMENT_NAME NAME, BYTES,INMEMORY_SIZE,BYTES/INMEMORY_SIZE COMP_RATIO,POPULATE_STATUS STATUS,
BYTES_NOT_POPULATED
FROM V$IM_SEGMENTS
WHERE POPULATE_STATUS <> 'COMPLETED';

OWNER NAME BYTES INMEMORY_SIZE COMP_RATIO STATUS BYTES_NOT_POPULATED
---------- -------------------- ---------- ------------- ---------- --------- -------------------
ONT OE_ORDER_LINES_ALL 1371537408 46858240 29.26 STARTED 1171439616

To check whether all of the eligible objects have been populated into the IM column store, use the following query. Note that this SQL
will return zero rows (as shown in the example) when all of the objects have been fully populated.
SQL> SELECT DATAOBJ,DATABLOCKS, SUM(BLOCKSINMEM)
FROM GV$IM_SEGMENTS_DETAIL
GROUP BY DATAOBJ, DATABLOCKS HAVING SUM(BLOCKSINMEM) <> DATABLOCKS;

no rows selected

To check the time taken by each object for population into the IM column store:
SQL> SELECT INST_ID, OBJECT_NAME, NUM_ROWS, TIME_TO_POPULATE, IMH.TIMESTAMP
FROM GV$IM_HEADER IMH, DBA_OBJECTS
WHERE DATA_OBJECT_ID=IMH.OBJD;

INST_ID OBJECT_NAME NUM_ROWS TIME_TO_POPULATE TIMESTAMP
---------- ----------------- ---------- ---------------- -----------------------------------
1 OE_PC_CONSTRAINTS 228 29 5/20/2015 12:01:57.065846 AM -07:00

Once all the objects have completed population, you can check the time taken for population of the entire IM column store:
SQL> SELECT MIN(TIMESTAMP), MAX(TIMESTAMP), MAX(TIMESTAMP)-MIN(TIMESTAMP) TIME_TAKEN FROM GV$IM_HEADER;

MIN(TIMESTAMP) MAX(TIMESTAMP) TIME_TAKEN
----------------------------------- ----------------------------------- -----------
19-MAY-15 12.25.35.457857 AM -07:00 25-MAY-15 12.24.08.399687 AM -07:00 +000000005 23:58:32.94183



Appendix D: Using Oracle E-Business Suite Application Affinity
Recall that a query can only scan the subset of data populated into the IM column store on the local node (where the query coordinator
is running) with the remainder of the data accessed via the buffer cache (and storage), regardless of whether it exists in the IM column
stores on other nodes. In order for Oracle E-Business Suite node affinity to work optimally, the best approach is to override the default
distribution behavior and populate an entire object into a single IM column store.

This appendix shows how to use the Oracle RAC Server Control Utility (srvctl) to add Oracle Database services and associate them with
a specific Oracle RAC node so that objects can be loaded into particular IM column stores. This approach becomes more complex when
using affinity with multiple IM column stores; it works best when there are no overlaps on the objects used by the concurrent processes
and batch programs that are affinitized to each node.
» Note: When using this approach, you will need to manually populate the IM column stores; it is not possible to populate the same
object into multiple IM column stores.
» Note: The table must be assigned PRIORITY NONE. Using any other priority will result in the table being automatically distributed
across all of the IM column stores.

System Setup
The example in this section consists of a three-node Oracle RAC cluster; each node has an IM column store (INMEMORY_SIZE=3GB).
Three tables were created: SOURCE_TESTn, where n corresponds to the node/instance on which the table will be populated.

The recommended Oracle E-Business Suite database initialization parameters are as follows:
» PARALLEL_DEGREE_POLICY=MANUAL
» PARALLEL_FORCE_LOCAL=TRUE
Use the following commands if these parameters are not already set on your system:
SQL> ALTER SYSTEM SET PARALLEL_DEGREE_POLICY = MANUAL;
System altered.
SQL> ALTER SYSTEM SET PARALLEL_FORCE_LOCAL = TRUE;
System altered.

Create and Start the Services


Create a service for each of the instances as follows:
$ srvctl add service -d VISION -s SN1 -r VISION1 -a VISION2,VISION3 -P NONE -e Select -m BASIC -w 10 -z 5 -q TRUE
$ srvctl start service -db VISION -service "SN1"
$ srvctl add service -d VISION -s SN2 -r VISION2 -a VISION1,VISION3 -P NONE -e Select -m BASIC -w 10 -z 5 -q TRUE
$ srvctl start service -db VISION -service "SN2"
$ srvctl add service -d VISION -s SN3 -r VISION3 -a VISION1,VISION2 -P NONE -e Select -m BASIC -w 10 -z 5 -q TRUE
$ srvctl start service -db VISION -service "SN3"
Each of the services is used to populate individual IM column stores on each of the nodes.
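
Before proceeding, you can verify that each service is running on its intended (preferred) instance, for example:
$ srvctl status service -db VISION -service SN1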

Add a TNS entry for each of the services based on the following example:
SN1=
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=1.1.1.1)(PORT=1521))
(ADDRESS=(PROTOCOL=tcp)(HOST=1.1.1.2)(PORT=1521))
(ADDRESS=(PROTOCOL=tcp)(HOST=1.1.1.3)(PORT=1521))
(CONNECT_DATA=
(SERVICE_NAME=SN1)
)
)
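
Once the TNS entries are in place, you can optionally confirm that each alias resolves and the listener is reachable, for example:
$ tnsping SN1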

Associate the services with a Parallel Instance Group

Use the following commands to assign each service to a parallel instance group:
$ sqlplus apps/apps@sn1
SQL> ALTER SYSTEM SET PARALLEL_INSTANCE_GROUP=SN1 SCOPE=BOTH SID='VISION1';
$ sqlplus apps/apps@sn2
SQL> ALTER SYSTEM SET PARALLEL_INSTANCE_GROUP=SN2 SCOPE=BOTH SID='VISION2';
$ sqlplus apps/apps@sn3
SQL> ALTER SYSTEM SET PARALLEL_INSTANCE_GROUP=SN3 SCOPE=BOTH SID='VISION3';

» Note: It is important to connect to the appropriate service prior to populating the table into the IM column store on a specific node.



Populate the Tables into Each IM Column Store
Connect to each node via the respective service, and populate one table into each of the IM column stores. The following example
shows connecting to the SN1 service and populating the table SOURCE_TEST1 into the IM column store on VISION1:
$ sqlplus apps/apps@sn1
SQL> SELECT INSTANCE_NAME FROM V$INSTANCE;
INSTANCE_NAME
----------------
VISION1

SQL> ALTER TABLE SOURCE_TEST1 INMEMORY PRIORITY NONE;


SQL> EXEC DBMS_INMEMORY.POPULATE(SCHEMA_NAME=>'APPS',TABLE_NAME=>'SOURCE_TEST1');

This is repeated for SN2 with SOURCE_TEST2 and SN3 with SOURCE_TEST3.
The following SQL can be used to check that the tables are populated into each of the IM column stores:
SQL> SELECT INST_ID,SEGMENT_NAME,BYTES BYTES,INMEMORY_SIZE IM_SIZE,POPULATE_STATUS,INMEMORY_DISTRIBUTE,INMEMORY_DUPLICATE
FROM GV$IM_SEGMENTS ORDER BY SEGMENT_NAME,INST_ID;

INST_ID SEGMENT_NAME BYTES IM_SIZE POPULATE_STATUS INMEMORY_DISTRI INMEMORY_DUPLICATE
---------- -------------------- ---------- ---------- --------------- --------------- ------------------
1 SOURCE_TEST1 8399093760 2609119232 COMPLETED AUTO NO DUPLICATE
2 SOURCE_TEST2 1214251008 612564992 COMPLETED AUTO NO DUPLICATE
3 SOURCE_TEST3 1214251008 617873408 COMPLETED AUTO NO DUPLICATE

Test 1: Standard IM Column Store Functionality

This test shows that a query can be fulfilled entirely from the IM column store when it is run on a single Oracle RAC node and the local
IM column store contains the entire table.

Simply connect to the SN1 service and execute the query. This can be repeated for each of the other services and objects, with the
same result. The test shows that an In-Memory full table scan occurs (and there are no physical disk reads). If you inadvertently query
an object that does not exist in the local IM column store, you will see a marked drop in performance, accounted for by physical disk
reads.
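
One way to quantify this is to compare the relevant session statistics before and after executing the query (a sketch; run from the same session as the test query):
SQL> SELECT SN.NAME, MS.VALUE
FROM V$MYSTAT MS, V$STATNAME SN
WHERE MS.STATISTIC# = SN.STATISTIC#
AND SN.NAME IN ('IM scan rows', 'physical reads');
If the object was scanned from the local IM column store, 'IM scan rows' will increase while 'physical reads' remains largely unchanged.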

The following extract shows that an In-Memory full table scan occurs for the object in the local IM column store.
$ sqlplus apps/apps@SN1
SQL> SET AUTOTRACE ON TIMING ON LINESIZE 200 PAGES 120
SQL> SELECT COUNT(*) FROM (SELECT /*+ PARALLEL(AUTO) */ LINE, COUNT(OBJ#) FROM SOURCE_TEST1 GROUP BY LINE);

-----------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | | 23929 (6)| 00:00:01 | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | | | |
| 2 | PX COORDINATOR | | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10001 | 1 | | | | | Q1,01 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | | | | | Q1,01 | PCWP | |
| 5 | VIEW | | 319K| | | 23929 (6)| 00:00:01 | Q1,01 | PCWP | |
| 6 | HASH GROUP BY | | 319K| 1558K| 875M| 23929 (6)| 00:00:01 | Q1,01 | PCWP | |
| 7 | PX RECEIVE | | 319K| 1558K| | 23929 (6)| 00:00:01 | Q1,01 | PCWP | |
| 8 | PX SEND HASH | :TQ10000 | 319K| 1558K| | 23929 (6)| 00:00:01 | Q1,00 | P->P | HASH |
| 9 | HASH GROUP BY | | 319K| 1558K| 875M| 23929 (6)| 00:00:01 | Q1,00 | PCWP | |
| 10 | PX BLOCK ITERATOR | | 76M| 363M| | 1918 (9)| 00:00:01 | Q1,00 | PCWC | |
| 11 | TABLE ACCESS INMEMORY FULL|SOURCE_TEST1 | 76M| 363M| | 1918 (9)| 00:00:01 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------------------------------------
Note
-----
- dynamic statistics used: dynamic sampling (level=AUTO)
- automatic DOP: Computed Degree of Parallelism is 4 because of degree limit
- parallel scans affinitized for inmemory
Statistics
----------------------------------------------------------
86 recursive calls
0 db block gets
248 consistent gets
0 physical reads
0 redo size
544 bytes sent via SQL*Net to client
552 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed



Test 2: Node Affinity
This test shows how a query is restricted to an In-Memory scan of the available data in the local IM column store only, with the
remainder of the query being serviced by the buffer cache and disk.

This test shows what happens when you access all three tables using a service that is associated with one of the nodes. In this
example an In-Memory full table scan occurs for SOURCE_TEST1 on Node-1 (VISION1), but the data for SOURCE_TEST2 and
SOURCE_TEST3, which are populated in the IM column stores on the other nodes, is accessed via physical disk reads.

Note that there is a problem with both Autotrace and the cursor execution plan (shown below): they display INMEMORY FULL for all of
the tables, whereas there were actually 8,964,283 physical disk reads (as shown in the Autotrace statistics).
$ sqlplus apps/apps@SN1

SQL> SELECT COUNT(1) FROM (SELECT /*+ PARALLEL(AUTO) USE_HASH(T1 T2 T3) */ T1.*
FROM SOURCE_TEST1 T1, SOURCE_TEST2 T2, SOURCE_TEST3 T3
WHERE T1.LINE = T2.LINE AND T1.LINE= T3.LINE);

Execution Plan
----------------------------------------------------------
Plan hash value: 3990536709
-----------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 15 | | 1538K (24)| 00:01:01 | | | |
| 1 | SORT AGGREGATE | | 1 | 15 | | | | | | |
| 2 | PX COORDINATOR | | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10003 | 1 | 15 | | | | Q1,03 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 15 | | | | Q1,03 | PCWP | |
|* 5 | HASH JOIN | | 254G| 3560G| 75M| 1538K (24)| 00:01:01 | Q1,03 | PCWP | |
| 6 | PX RECEIVE | | 18M| 89M| | 435 (7)| 00:00:01 | Q1,03 | PCWP | |
| 7 | PX SEND HASH | :TQ10000 | 18M| 89M| | 435 (7)| 00:00:01 | Q1,00 | P->P | HASH |
| 8 | PX BLOCK ITERATOR | | 18M| 89M| | 435 (7)| 00:00:01 | Q1,00 | PCWC | |
| 9 | TABLE ACCESS INMEMORY FULL | SOURCE_TEST3 | 18M| 89M| | 435 (7)| 00:00:01 | Q1,00 | PCWP | |
|* 10 | HASH JOIN | | 4407M| 41G| 75M| 27692 (24)| 00:00:02 | Q1,03 | PCWP | |
| 11 | PX RECEIVE | | 18M| 89M| | 434 (7)| 00:00:01 | Q1,03 | PCWP | |
| 12 | PX SEND HASH | :TQ10001 | 18M| 89M| | 434 (7)| 00:00:01 | Q1,01 | P->P | HASH |
| 13 | PX BLOCK ITERATOR | | 18M| 89M| | 434 (7)| 00:00:01 | Q1,01 | PCWC | |
| 14 | TABLE ACCESS INMEMORY FULL| SOURCE_TEST2 | 18M| 89M| | 434 (7)| 00:00:01 | Q1,01 | PCWP | |
| 15 | PX RECEIVE | | 76M| 363M| | 1918 (9)| 00:00:01 | Q1,03 | PCWP | |
| 16 | PX SEND HASH | :TQ10002 | 76M| 363M| | 1918 (9)| 00:00:01 | Q1,02 | P->P | HASH |
| 17 | PX BLOCK ITERATOR | | 76M| 363M| | 1918 (9)| 00:00:01 | Q1,02 | PCWC | |
| 18 | TABLE ACCESS INMEMORY FULL| SOURCE_TEST1 | 76M| 363M| | 1918 (9)| 00:00:01 | Q1,02 | PCWP | |
-----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

5 - access("T1"."LINE"="T3"."LINE")
10 - access("T1"."LINE"="T2"."LINE")

Note
-----
- automatic DOP: Computed Degree of Parallelism is 4 because of degree limit
- parallel scans affinitized for inmemory

Autotrace Statistics
----------------------------------------------------------
2364112 consistent gets
8964283 physical reads



Oracle Corporation, World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065, USA

Worldwide Inquiries
Phone: +1.650.506.7000
Fax: +1.650.506.7200

CONNECT WITH US
blogs.oracle.com/oracle
facebook.com/oracle
twitter.com/oracle
oracle.com

Copyright © 2015, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only, and the
contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other
warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or
fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are
formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any
means, electronic or mechanical, for any purpose, without our prior written permission.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and
are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are
trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.

Using Oracle Database In-Memory with Oracle E-Business Suite


August 2015
Authors: Andy Tremayne, Deepak Bhatnagar
Contributing Authors: Nitin Shrivastava, Srujan Vejendla, Mohammed Saleem, Pradeep Arni, Samer Barakat
