Supply Chain Intelligence
Performance Tuning Guide
2013
This documentation, as well as the software described in it, is furnished under license and may be used or copied
only in accordance with the terms of such license. The information in this documentation is furnished for informational
use only, is subject to change without notice, and should not be construed as a commitment by Manhattan
Associates, Inc. (“Manhattan”). No third party patent liability is assumed with respect to the use of the information
contained herein. While every precaution has been taken in the preparation of this documentation, Manhattan
assumes no responsibility for errors or omissions.
EXCEPT WHERE EXPRESSLY PROVIDED OTHERWISE, ALL CONTENT, MATERIALS AND INFORMATION,
ARE PROVIDED ON AN "AS IS" AND "AS AVAILABLE" BASIS. MANHATTAN EXPRESSLY DISCLAIMS ALL
WARRANTIES OF ANY KIND, WHETHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-
INFRINGEMENT.
Except as permitted by license, no part of this documentation may be reproduced, stored in a retrieval system, or
transmitted, in any form by any means, electronic, mechanical, recording, or otherwise, without the prior written
permission of Manhattan Associates.
All other brands, products, or services are trademarks, registered trademarks, or service marks of their respective
companies or organizations.
Contact Address:
http://www.manh.com/
Europe, Middle East, and Africa: +44 (0) 1344 318400 (UK), +31 (0)30 214 3400 (NL)
emeacustomersupport@manh.com
Unsure whom to contact? Call +1 404.965.4025 and you will be routed to the appropriate support group.
Table of Contents
Overview
Cognos BI Server Optimization
    Set the HTTP config to cache static items
    Optimize CPU hardware configuration to support Cognos reports running in parallel
Governors Settings
    Set Governors
    Governors Settings detail
ETL Optimization
    Extraction
    Transformation
    Loading
Cube Optimization
    Ulimit Settings
    Enabling Parallelized Cube Processing
    Set maximum parallel processes to be run at a time
    Set Auto Summarization
    Periodically Clean up Models
    Time based partitioning techniques using cube groups
    Correct Indexing strategy based on cube queries
    Other settings
Overview
The objective of this guide is to provide performance tuning guidelines for the following SCI components: the Cognos BI server (including governor settings), the ETL, and the cubes.
Cognos BI Server Optimization
Set the HTTP config to cache static items
• Edit httpd.conf and add expiry rules so that static content is cached by the browser. The expiry
values below are examples only; tune them to your deployment:
ExpiresActive On
<FilesMatch "\.(ico|gif|jpg|jpeg|png|flv|swf|mov|mp3|wmv|ppt)$">
ExpiresDefault A604800
</FilesMatch>
<FilesMatch "\.(pdf)$">
ExpiresDefault A86400
</FilesMatch>
<FilesMatch "\.(php|cgi|pl)$">
ExpiresDefault A0
</FilesMatch>
• Edit httpd.conf and remove the “#” from these lines (this enables mod_expires and mod_headers):
#LoadModule expires_module modules/mod_expires.so
#LoadModule headers_module modules/mod_headers.so
Optimize CPU hardware configuration to support Cognos reports running in parallel
• Navigate to the Tuning folder: IBM Cognos Administration > System > Servers > (select a server) >
click the arrow next to the dispatcher for the selected server > Select “Set properties” > Select the
“Settings” tab > Select category “Tuning”.
• Enter the following values in the text fields, based on the CPU configuration.
Report Service
Setting: The number of low affinity connections for the batch report service
Default value: 2
Recommended value: 4 for each BIBus process
Description: This setting indicates the number of threads available per batch report server (BIBus
process) to handle low affinity requests. This setting must be considered along with the high affinity
connections setting.
Governors Settings
Set Governors
Use governors to reduce system resource requirements and improve performance. You set governors
before you create packages to ensure the metadata in the package contains the specified limits. All
packages that are subsequently published use the new settings.
The governor settings that take precedence are the ones that apply to the model that is currently open
(whether it is a parent model or a child model).
In a new project the governors do not have values defined in the model. You must open
the Governors window and change the settings if necessary. When you save the values in
the Governors window by clicking OK, the values for the governors are set. You can also set
governors in Report Studio. The governor settings in Report Studio override the governor settings in the
model.
Based on our test results, we recommend the following governor settings.
Maximum Number of Tables
You can control the number of tables that a user can retrieve in a query or report. When a table is
retrieved, it is counted each time it appears in the query or report. The limit is not the number of unique
tables. If the query or report exceeds the limit set for the number of tables, an error message appears and
the query or report is shown with no data.
A setting of zero (0) means no limit is set.
Note: This governor is not used in dynamic query mode.
Maximum Number of Rows Retrieved
You can set data retrieval limits by controlling the number of rows that are returned in a query or report.
Rows are counted as they are retrieved.
When you run a report and the data retrieval limit is exceeded, an error message appears and the query
or report is shown with no data.
You can also use this governor to set limits to the data retrieved in a query subject test or the report
design mode.
A setting of zero (0) means no limit is set.
If you externalize a query subject, this setting is ignored when you publish the model.
Note: This governor is not used in dynamic query mode.
Query Execution Time Limit
You can limit the time that a query can take. An error message appears when the preset number of
seconds is reached.
A setting of zero (0) means no limit is set.
Note: This governor is not used in dynamic query mode.
BLOB Character Length Limit
You can control the character length of BLOBs (binary large objects) that a user can retrieve in a query or
report. When the character length of the BLOB exceeds the set limit, an error message appears, and the
query or report is shown with no data.
Outer Joins
You can control whether outer joins can be used in your query or report. An outer join retrieves all rows in
one table, even if there is no matching row in another table. This type of join can produce very large,
resource-intensive queries and reports.
Governors are set to deny outer joins by default. For example, outer joins are not automatically generated
when you test a query item in Framework Manager.
If you keep the setting as Deny, you are notified only if you create a relationship in the Diagram tab that
includes outer joins. You are not notified if you create a relationship in a data source query subject that
includes outer joins.
If you set the governor to Allow, dimension to fact relationships are changed from inner joins to outer
joins.
The outer joins governor does not apply in these circumstances:
• SQL that is generated by other means. If you set this governor to Deny, it does not apply to the
permanent SQL found in a data source query subject, whether the SQL was generated on
import, manually entered, or based on existing objects.
• Framework Manager needs to generate an outer join to create a stitched query. A stitched query is
a query that locally combines the results of two or more sub-queries by using a locally processed
outer join.
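As a sketch of the difference (table and column names are hypothetical):

```sql
-- Inner join: only publishers that have at least one matching book
SELECT p.name, b.title
FROM publishers p
INNER JOIN books b ON b.publisher_id = p.id;

-- Left outer join: every publisher, with NULL titles where nothing matches;
-- on large tables this can produce far larger, more resource-intensive results
SELECT p.name, b.title
FROM publishers p
LEFT OUTER JOIN books b ON b.publisher_id = p.id;
```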
Cross-Product Joins
You can control whether cross-product joins can be used in your query or report. A cross-product join
retrieves data from tables without joins. This type of join can take a long time to retrieve data.
The default value for this governor is Deny. Select Allow to allow cross-product joins.
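A cross-product join results when tables are listed with no join predicate; the row count is the product of the table sizes (table names are hypothetical):

```sql
-- Returns count(products) × count(stores) rows
SELECT p.product_code, s.store_code
FROM products p, stores s;
```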
Shortcut Processing
You can control how shortcuts are processed by IBM Cognos software.
When you open a model from a previous release, the Shortcut Processing governor is set
to Automatic. With the Automatic setting, a shortcut that exists in the same folder as its target behaves
as an alias, or independent instance, whereas a shortcut existing elsewhere in the model behaves as a
reference to the original. When you create a new model, the Shortcut Processing governor is always
set to Explicit.
If you set the governor to Explicit, the shortcut behavior is taken from the Treat As property. If
the Shortcut Processing governor is set to Automatic, we recommend that you verify the model and,
when repairing, change the governor to Explicit. This changes all shortcuts to the correct value from
the Treat As property based on the rules followed by the Automatic setting.
The Shortcut Processing governor takes priority over the Treat As property. For example, if the
governor is set to Automatic, the behavior of the shortcut is determined by the location of the shortcut
relative to its target, regardless of the setting of the Treat As property.
SQL Join Syntax
You can control how SQL is generated for inner joins in a model by selecting one of the following settings:
If the governor is set to Server determined, the CQEConfig.xml file is used to determine the governor
value. If there is no active CQEConfig.xml file or no parameter entry for the governor in the
CQEConfig.xml file, then the Implicit setting is used.
• The Implicit setting joins the tables in the where clause. For example (table and column names are
illustrative),
SELECT
publishers.name, publishers.id
FROM publishers, books
WHERE publishers.id = books.publisher_id
• The Explicit setting uses the from clause with the keywords inner join in an on predicate.
For example,
SELECT
publishers.name, publishers.id
FROM publishers INNER JOIN books
ON publishers.id = books.publisher_id
You can set the join type on the query property in Report Studio to override the value of this
governor.
Regardless of the setting you use for this governor, the Explicit setting is used for left outer
joins, right outer joins, and full outer joins.
Aggregation of Measure Attributes
If the governor is set to Server determined, the CQEConfig.xml file is used to determine the governor
value. If there is no active CQEConfig.xml file or no parameter entry for the governor in the
CQEConfig.xml file, then the Disabled setting is used.
The Disabled setting prevents aggregation of the measure for the attributes. This is the default behavior.
For example,
select
measure
from ...
The Enabled setting allows aggregation of the measure for the attributes.
Note: This is the default behavior for IBM Cognos Framework Manager versions prior to 8.3.
select
Product.Product_line_code as Product_line_code,
Order_method.Order_method_code as Order_method_code,
XSUM(Sales.Quantity for Order_method.Order_method_code, Product.Product_line_code) as Quantity
//aggregated measure
from ...
SQL Generation for Level Attributes
You can control the use of the minimum aggregate in SQL generated for attributes of a level (member
caption).
If the governor is set to Server determined, the CQEConfig.xml file is used to determine the governor
value. If there is no active CQEConfig.xml file or no parameter entry for the governor in the
CQEConfig.xml file, then the Minimum setting is used.
The Minimum setting generates the minimum aggregate for the attribute. This setting ensures data
integrity if there is a possibility of duplicate records. For example,
select
XMIN(Product.Product_line for Product.Product_line_code) as Product_line,
Product.Product_line_code as Product_line_code
from
(...) Product
The Group By setting adds the attributes of the level in the group by clause, with no aggregation for the
attribute. The distinct clause indicates a group by on all items in the projection list. The Group By setting
is recommended if the data has no duplicate records. It can enhance the use of materialized views and
may result in improved performance. For example,
select distinct
Product.Product_line as Product_line,
Product.Product_line_code as Product_line_code
from
(...) Product
SQL Generation for Determinant Attributes
You can control the use of the minimum aggregate in SQL generated for attributes of a determinant with
the group by property enabled.
If the governor is set to Server determined, the CQEConfig.xml file is used to determine the governor
value. If there is no active CQEConfig.xml file or no parameter entry for the governor in the
CQEConfig.xml file, then the Minimum setting is used.
The Minimum setting generates the minimum aggregate for the attribute. This setting ensures data
integrity if there is a possibility of duplicate records. For example,
select
PRODUCT_LINE.PRODUCT_LINE_CODE as Product_line_code,
XMIN(PRODUCT_LINE.PRODUCT_LINE for PRODUCT_LINE.PRODUCT_LINE_CODE) as Product_line //attribute
from
great_outdoors_sales..GOSALES.PRODUCT_LINE PRODUCT_LINE
group by
PRODUCT_LINE.PRODUCT_LINE_CODE //key
The Group By setting adds the attributes of the determinants in the group by clause with no aggregation
for the attribute. This setting is recommended if the data has no duplicate records. It can enhance the use
of materialized views and may result in improved performance. For example,
select
PRODUCT_LINE.PRODUCT_LINE_CODE as Product_line_code,
PRODUCT_LINE.PRODUCT_LINE as Product_line //attribute
from
great_outdoors_sales..GOSALES.PRODUCT_LINE PRODUCT_LINE
group by
PRODUCT_LINE.PRODUCT_LINE_CODE,
PRODUCT_LINE.PRODUCT_LINE
SQL Parameter Syntax
This governor specifies whether generated SQL uses parameter markers or literal values.
If the governor is set to Server determined, the CQEConfig.xml file is used to determine the governor
value. If there is no active CQEConfig.xml file or no parameter entry for the governor in the
CQEConfig.xml file, then the Marker setting is used.
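The two syntaxes can be sketched as follows (the column and value are illustrative):

```sql
-- Marker setting: the value is bound as a parameter at execution time
SELECT * FROM ORDERS WHERE COUNTRY_CODE = ?

-- Literal setting: the value is embedded directly in the SQL text
SELECT * FROM ORDERS WHERE COUNTRY_CODE = 'CA'
```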
Enhanced Model Portability at Run Time
This governor is selected upon initial upgrade of a Cognos ReportNet® 1.x model. It prevents rigid
enforcement of data types so that an IBM Cognos model can function as a ReportNet® 1.x model until
you update the data types in the metadata. After you have verified that the model has been upgraded
successfully, clear this governor.
Other than for initial upgrade, there are limited uses for this governor. For example, you have created a
model for use with a data source and you want to run it against a different data source. The new data
source must be structurally similar to the original data source, and the database schema must be the
same between the two data sources. If you select this governor, IBM Cognos BI retrieves metadata from
the data source and caches it instead of using the metadata already cached in the model. When you have
completed modifying and testing the model against the new data source, clear this governor.
If you do not use this governor, you must ensure that the following metadata is the same in the original
and new data sources:
• collation level
• character set
• nullability
• precision
• scale
• column length
• data type
Use of Local Cache
Select this governor to specify that all reports based on this model will use cached data. For a new model,
this governor is enabled by default.
This setting affects all reports that use the model. Use Report Studio if you want a report to use a different
setting than the model.
Dynamic Generation of Dimension Information
This governor is selected only upon initial upgrade of a ReportNet® 1.x model. This governor allows
consistent behavior with ReportNet® 1.x by deriving a form of dimension information from the
relationships, key information, and index information in the data source.
Use of the With Clause
You can choose to use the With clause with IBM Cognos SQL if your data source supports it.
The With clause is turned on for models created in IBM Cognos BI. For upgraded models, it is turned off
unless it was explicitly turned on in the Cognos ReportNet® model prior to upgrading.
Suppress Null Values
You can control whether or not nulls are suppressed by any report or analysis that uses the published
package. The governor is also applied to test results during the current Framework Manager session. It is
supported for SAP BW data sources only.
Some queries can be very large because null values are not filtered out. Null suppression removes a row
or column for which all of the values in the row or column are null (empty). Null suppression is performed
by SAP BW. This reduces the amount of data transferred to the IBM Cognos client products and
improves performance.
By default, null values are suppressed. If you clear this governor, null values are not suppressed.
There is a property called Suppress in Report Studio that overrides this governor. If
the Suppress property is set to None, null values are included in the result set even if the governor is set
to suppress null values.
Note: This governor is not applied when creating CSV files; therefore, CSV files include null
values if they exist in the data.
Publish the Entire Model
A published package includes the model objects selected when the package was created. In addition,
those model objects are analyzed in order to identify and include dependent objects in the package.
In a complex or very large model, this analysis can take considerable time. To shorten the publish time,
set this governor to skip the analysis step and have the entire model written to the content store. The
resulting package may be larger because the entire model is published instead of only the required
objects; however, the time required to publish should be reduced.
External Data File Count
To use external data, report users import their data into an existing package. This governor controls the
number of external data files that can be imported.
The default is 1.
For more information about external data sources, see the IBM Cognos Report Studio User Guide.
External Data File Size
To use external data, report users import their data into an existing package. This governor controls the
size of each external data file.
By default, the maximum file size that report users can import is 2560 KB.
For more information about external data sources, see the IBM Cognos Report Studio User Guide.
External Data Row Count
To use external data, report users import their data into an existing package. This governor controls the
number of rows that can exist in each external data file.
By default, the maximum number of rows that report users can import is 20000.
For more information about external data sources, see the IBM Cognos Report Studio User Guide.
ETL Optimization
This section describes how to optimize all three sub-processes of the ETL: Extraction, Transformation,
and Loading.
Extraction
SCI uses the Import builds to perform extraction. Modifications to the Import builds can improve the
extraction processing times.
Queries
• Native queries are always preferred over Cognos SQL for speed, unless a database-specific
function is used.
• Use an Oracle SQL hint for parallel record fetch. There are two versions of this:
1. Let Oracle decide the number of parallel threads to be created.
2. Assign the number of parallel threads to be created.
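The queries themselves are not reproduced above; as a sketch (the table name and alias are hypothetical), the two hint forms look like this:

```sql
-- Version 1: let Oracle choose the degree of parallelism
SELECT /*+ PARALLEL(T) */ * FROM IMPORT_SOURCE_TABLE T;

-- Version 2: request a fixed degree of parallelism (here, 4)
SELECT /*+ PARALLEL(T, 4) */ * FROM IMPORT_SOURCE_TABLE T;
```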
Merge Technique
Transformation
Transformation, also referred to as Staging, is the ETL process of preparing data to be loaded into the
data warehouse.
Normal dimension builds create records using the builds provided by Data Manager. Data Manager is
designed to support source dimensional data from configuration tables. Generally, SCI dimension tables
contain data from the source system’s configuration tables, and not from its transactional tables (which
have a significantly larger volume of data).
Because it is designed to support configuration data, Data Manager loads the entire data set into memory.
After the data is loaded, it processes the records.
If the volume of data is too large, a dimension build can throw out-of-memory errors. Memory errors can
be managed in the following ways:
• Switch to a degenerate dimension. If the customer wants to see the measures based on this
dimension then we should move this to the Fact build and populate it as an attribute (degenerate
dimension).
• Reduce the record counts.
• Build the dimension using a fact build.
A dependent dimension is a slowly changing dimension. The SCI ETL uses fact builds to implement
dimensions to accommodate the following scenarios:
• A subset of dimensions, like Pickticket Line and Outbound LPN Line, has a large volume of data
which can’t be handled with normal dimension builds.
• To track slowly changing attributes, SCI creates dimensions using multi-stage fact builds.
If the data is coming from a transaction table with a large volume of data, the best approach is to convert
it into a FACT attribute, which is called a degenerate dimension.
There are two scenarios where there could be performance issues with dependent dimensions:
The Lookup to the Fact table is running out of memory. This can be addressed in the following ways:
Dimension Breaking
One method to manage large data volume is dimension breaking. Dimension breaking processes data in
chunks by splitting the “sorted” data based on a particular column. This is possible as the dimension build
pulls the data into memory.
Selecting the dimensions on which to break requires a mixture of analysis, experience, and the evaluation
of different options. A dimension is suitable for breaking if it meets the following conditions:
• It is not aggregated, or it is aggregated through few levels, with relatively few aggregated members
generated.
Process
1. Choose the grain of the dimension build. The grain is the column that uniquely identifies a
dimension record. Sort the data based on this column in the query.
2. Double-click the Merge and Breaking tab on the data build transformation model.
3. The Build Properties window opens. Navigate to the “Breaks” tab.
a. Add the grain column to the “Break on:” list.
b. You can add multiple columns where there is a composite key.
c. Select the checkbox “Perform Break Processing Every:”.
d. Enter 75% in the text field.
Note: This value can vary based on performance testing.
Domain Size
A Domain Size defines internal transaction sizes when a fact build is executed. It applies for both
dimension elements and derived dimension elements. By default, Data Manager estimates the domain
size by referring to the reference structure associated with the dimension or derived dimension.
The system defined domain size can be inaccurate when the following conditions exist:
In these scenarios, you can provide a better custom domain size manually.
Note: To merge dimension elements that are not associated with a reference dimension, you must
set the domain size manually. Set the domain size to be greater than the maximum number of
distinct domain members for the dimension.
Process
4. In the Domain size box, type the required domain size. This value should be greater than the
maximum number of distinct values for the dimension.
5. In the Domain type box, click the type of domain to use. Remember, for delivery of an aggregation
exception for the dimension, you must select Reference domain.
6. Click OK.
Note: This new domain size is applied when the build is next executed. If the domain size is too
small, the build process will generate a message to increase the size.
For more information on dynamic domains, refer to the IBM documentation on domain settings.
Loading
During the loading process, SCI compares the stage fact build data to the fact table data in the Data Mart.
SCI uses the normal fact build to pull the data from the stage build to the final fact table. The load build
includes two activities: data loading and data merging.
Data Loading
If data is taking too long to load into the FACT table, there might be a significant amount of data
accumulated in the FACT table. To decrease processing time:
• Check the indexes on the FACT table. Refer to the section below on database guidelines for Data Manager.
• Use Oracle SQL*Loader to bulk load the data into the fact table. Refer to the section on Oracle SQL*Loader.
Data Merging
If the join between the stage table and the fact table is very slow and is returning data at a very low rate,
there might be a problem with the join. To decrease processing time:
• Check the joins between the tables. Refer to the section below on database guidelines for Data Manager.
• Use the data merging technique, described below.
Data merging is a better mechanism for handling data coming from multiple tables than join
queries between the tables. It loads data into memory and processes it there, which is faster than joins.
In addition, it merges data efficiently by handling duplicates. SCI can use the merge technique to merge
the stage table’s data with the Data Mart tables.
Process
2. For the FACT_GRAIN, identify a column in the incoming data sources which uniquely identifies a row
in the table. This is used for joins between the stage and the fact tables and to break the records
while processing them in memory during transformation.
In the first data source, pull the records from the stage table, adding the /*+ PARALLEL */ hint for
parallel record fetch.
a. In the second data source, add a join between the Stage table and the FACT table to pull the existing
rows, sorted with ORDER BY T1.FACT_GRAIN.
b. Add literals for each data source to mark the rows as NEW or EXISTING.
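The two data sources and their literals can be sketched as follows (table and column names are assumptions; the literal column feeds the data stream variable described in the next step):

```sql
-- First data source: incoming stage rows, flagged NEW
SELECT /*+ PARALLEL(S) */
       S.FACT_GRAIN, S.QUANTITY, 'NEW' AS ROW_STATUS
FROM STAGE_FACT S
ORDER BY S.FACT_GRAIN;

-- Second data source: rows already in the fact table, flagged EXISTING
SELECT T1.FACT_GRAIN, T2.QUANTITY, 'EXISTING' AS ROW_STATUS
FROM STAGE_FACT T1
INNER JOIN FACT_TABLE T2 ON T2.FACT_GRAIN = T1.FACT_GRAIN
ORDER BY T1.FACT_GRAIN;
```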
c. Merge the records in the data stream. Create a variable in the data stream items named
“EXISTS” and merge the literals to this.
In this stage, define the data delivery for inserting new rows and for updating the existing rows.
e. Right-click the data source and go to table properties to configure the delivery settings.
Database Guidelines for Data Manager
Indexing can make the Data Manager build faster. Below are some procedures for indexing in Data
Manager to improve performance.
1. Drop indexes on the tables before inserting. This is done using a procedure node from the DM job
stream, from which a stored procedure can be invoked to drop the indexes on the tables.
2. Add the indexes back using the same strategy. The index is created mainly on the grain of the fact,
i.e., a key which uniquely identifies a row in the FACT table.
Note: Composite indexes can be created for cubes and reporting purposes.
Indexes should also be added to business ID columns in dimensions, because Data Manager uses the
columns selected as business IDs to update the dimensions.
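The drop/recreate pattern can be sketched in SQL (all object names are illustrative):

```sql
-- Procedure node before the load: drop the fact index
DROP INDEX IDX_FACT_GRAIN;

-- ... bulk load the fact table ...

-- Procedure node after the load: recreate the index on the fact grain
CREATE INDEX IDX_FACT_GRAIN ON FACT_TABLE (FACT_GRAIN_ID);
```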
Oracle SQL*Loader
The import builds pull the data from the source system and append it to the Import tables. SCI
doesn’t update any records in the destination table; it uses Oracle SQL*Loader, which appends the
data in bulk.
4. Go to table properties and update the entries accordingly.
The number of parallel nodes to be processed for import, dimension, and fact builds can be altered.
Parallel processing helps run multiple builds in a shorter period of time.
If there are no CPU-expensive jobs on the SCI server other than the ETL, set the number of parallel
nodes to 4. If there are other big processes running on the same machine as the ETL, set the value to 2
or 1.
To manage the parallel nodes for import, dimension, and fact builds, update the field
CONFIG_TYPE_CONFIG_OPTION_VALUE in the table
TBL_CONFIG_TYPE_CONFIG_OPTION corresponding to each.
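As a sketch (the name column and row value used in the WHERE clause are assumptions; check the actual rows in your installation):

```sql
-- Allow 4 parallel nodes for the fact builds
UPDATE TBL_CONFIG_TYPE_CONFIG_OPTION
SET CONFIG_TYPE_CONFIG_OPTION_VALUE = '4'
WHERE CONFIG_TYPE_CONFIG_OPTION_NAME = 'FACT_PARALLEL_NODES';
COMMIT;
```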
Cube Optimization
Ulimit Settings
The ulimit values on the server for the SCI user should be set to at least the following:
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 4194304
memory(kbytes) unlimited
coredump(blocks) unlimited
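On Linux, one way to apply these limits persistently is through /etc/security/limits.conf; a sketch, assuming the SCI service account is named sciuser:

```
# /etc/security/limits.conf (the account name sciuser is an assumption)
sciuser  -  cpu    unlimited   # time (note: cpu is expressed in minutes here)
sciuser  -  fsize  unlimited   # file (blocks)
sciuser  -  data   unlimited   # data (kbytes)
sciuser  -  stack  4194304     # stack (kbytes)
sciuser  -  rss    unlimited   # memory (kbytes)
sciuser  -  core   unlimited   # coredump (blocks)
```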
To process the cube faster with optimal utilization of space, change the Cognos configuration file for
Transformer:
1. Navigate to COG_ROOT/configuration.
2. Open the file cogtr.xml.
3. Add the following lines:
<Section Name="Transformer">
<Preference Name="MultiFileCubeThreshold"
Type="int" Value="50000">
</Preference>
<Preference Name="WorkFileSortBufferSize"
Type="int" Value="160000000"></Preference>
<Preference Name="ModelWorkDirectory"
Type="string"
Value="<SCI_HOME_DIR>/cubes/output/temp/mdl"></Preference>
</Section>
Enabling Parallelized Cube Processing
You can change how Data Manager handles the parallelization of independent processes through a
setting applied through the catalog. You can choose whether or not to run independent processes in
parallel.
By default, a condition node, procedure node, SQL node, alert node, or email node is executed inline,
that is, run within the Job Stream process. The implication of running inline is that the nodes are run in
series even if the Job Stream design has parallel flows. However, for procedure nodes and SQL nodes
there may be instances where the nodes take a long time to process, so parallel execution would be
desirable. To facilitate this, you can specify that IBM Cognos Data Manager should create a separate
process for a node (procedure or SQL) by selecting the check box “Run as separate process” for that
node.
This will make the processes run in parallel. The number of parallel processes is further based on the
parameter setting “-N” when using the Data Manager functions.
Note: Executing a node as a separate process uses more memory than executing inline.
Set Maximum Parallel Processes to Be Run at a Time
The parameter is “-N?”, where “?” is replaced with the number of parallel nodes to be processed (for
example, -N4).
Set Auto Summarization
• Auto Summarize - This setting depends upon the fact data being read. If the fact data is not
consolidated, then this can be checked so that the group by happens at the database end. Less data
then needs to be transferred from the database. Consider the effect this setting has upon the
generated SQL of the data source, as it introduces summary functions.
• Review the SQL again to ensure appropriate grouping and summary functions are being applied.
• If the auto summarize option is not available, then ensure the fact query is consolidated. That is, it
only brings in one row of data for each unique key combination.
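For reference, a consolidated fact query groups at the database so that only one row per key combination is returned (table and column names are hypothetical):

```sql
-- One row per unique key combination; aggregation pushed to the database
SELECT ORDER_DATE_KEY,
       PRODUCT_KEY,
       SUM(QUANTITY) AS QUANTITY
FROM STAGE_SALES_FACT
GROUP BY ORDER_DATE_KEY, PRODUCT_KEY;
```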
Periodically Clean up Models
• Delete or exclude records from the source data if they are no longer needed.
• To improve cube creation time, try changing the order of your structural data sources. Start with the
structural data sources that contain the hierarchical data for the dimensions. Then add transactional
data sources to supply measures for the model, using the minimum number of columns needed to
reference those dimensions.
• Minimize the number of categories in a cube; this is the default option for models. Transformer
adds only categories that are referenced in the source data or specifically designated to be included.
• Ensure that your data does not have any uniqueness violations, if you are using level uniqueness.
Allocate extra time for data source processing to verify that all categories are unique within a level, or
eliminate this step if it is not necessary by clearing the Verify Category Uniqueness option on the
relevant data source property sheet.
Time Based Partitioning Techniques Using Cube Groups
Report users can view each cube independently, or access the entire collection of child cubes as a single,
time-based virtual cube. This means that reports can be viewed across the entire time dimension, or
across only one level in the time dimension, such as a specific month.
To more easily manage time-based partitioned cubes, you can gather them into like-structured groups.
Process
1. To set up the cube group, while executing the option “Insert Power Cube”, on the Power
Cube property sheet, click the Cube Group tab, select the Enable time-based partitioning check
box, and click OK.
2. To ensure that the child cubes in your group cover a distinct level, for each one, specify the
appropriate level from your time dimension, such as Quarter or Month.
Note: You can open the .vcd file later, in any text editor, and manually include or exclude cubes.
For example, to improve performance, try adding entries in the .vcd file for cubes that are at a
higher level of the time hierarchy, such as the Quarter or Year level.
Other settings
Further optimize cube creation by specifying one of the following processing methods, accessed from the
Processing tab of the Power Cube property sheet:
• Auto-partition. Enables the Auto-Partition tab, where you can set the parameters for
Transformer to devise a partitioning scheme.
• Data Passes. Optimizes the number of passes through the temporary working files during the
creation of a cube. This option is beneficial only if the more efficient alternative, auto-partitioning,
is not used (that is, your model implements features not supported with auto-partitioning).