by
Amit Sharma
http://learnhyperion.wordpress.com
mail to aloo_a2@yahoo.com
Viewing Essbase Server/Database Information
MaxL: display application
ESSCMD: GETAPPSTATE, GETPERFSTATS
ESSCMD: GETAPPINFO, GETDBINFO
ESSCMD: LISTLOCKS, UNLOCKOBJECT
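As a hedged sketch of the MaxL side of the commands above (the Sample/Basic application is illustrative):

```maxl
/* Show status information for all applications, then for one database */
display application all;
display database sample.basic;
```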
Monitoring User Sessions and Requests
MaxL: display session, alter system

alter system logout session by user 'admin' on application sample force;
ESSCMD: UNLOCKOBJECT 1 "sample" "basic" "basic"
Index Cache
MaxL: alter database set index_cache_size
When you request a data block, the index is used to find its location on disk. If the block location is not found in the index cache, the index page that has the block entry is pulled into memory (into the index cache) from the disk. If the index cache is full, the least recently used index page in memory (in the index cache) is dropped to make room for the new index page.
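A setting of the index cache might look like this in MaxL (the database name and the 100 MB value are purely illustrative; MaxL cache sizes are given in bytes):

```maxl
/* Give Sample.Basic a 100 MB index cache */
alter database sample.basic set index_cache_size 104857600;
```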
http://learnobiee.wordpress.com aloo_a2@yahoo.com for all Hyperion video tutorial/Training/Certification/Material
Optimizing Database Caches
Block Numbering
Index:
100-10, New York -> block 1
100-20, New York -> block 2
100-10, Massachusetts -> block 20
100-20, Massachusetts -> block 21
100-30, Massachusetts -> block 22
Data Cache

Data blocks can reside on physical disk and in RAM. The amount of memory allocated for blocks is called the data cache.
When a block is requested, the data cache is searched. If the block is found in the data cache, it is accessed immediately. If the block is not found in the data cache, the index is searched for the appropriate block number. The block's index entry is then used to retrieve the block from the proper data file on disk.
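The data cache is sized the same way as the index cache; a hedged MaxL sketch (database name and 300 MB value are illustrative, sizes in bytes):

```maxl
/* Set the Sample.Basic data cache */
alter database sample.basic set data_cache_size 314572800;
```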
Calculator Cache

The calculator cache is a buffer in memory that Essbase uses to create and track data blocks during calculation operations. Essbase can create a bitmap, whose size is controlled by the size of the calculator cache, to record and track data blocks during a calculation. Determining which blocks exist using the bitmap is faster than accessing the disk to obtain the information, particularly if calculating a database for the first time or calculating a database when the data is very sparse.
Dynamic Calculator Cache

The dynamic calculator cache is a buffer in memory that Essbase uses to store all of the blocks needed for a calculation of a Dynamic Calc member in a dense dimension (for example, for a query). Essbase uses a separate dynamic calculator cache for each open database. The DYNCALCCACHEMAXSIZE setting in the essbase.cfg file specifies the maximum size of each dynamic calculator cache on the server.
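The essbase.cfg entry mentioned above might look like this (the 50M value is purely illustrative; the server must be restarted for essbase.cfg changes to take effect):

```text
DYNCALCCACHEMAXSIZE 50M
```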
• To prevent fragmentation, optimize data loads by sorting load records based upon sparse dimension members.
• To remove fragmentation, perform an export of the database, delete all data in the database with CLEARDATA, and reload the export file.
The average clustering ratio database statistic indicates the fragmentation level of the data (.pag) files. The maximum value, 1, indicates no fragmentation.
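The export/clear/reload cycle described above can be sketched in MaxL (the Sample.Basic target and file names are illustrative; the reset below is the MaxL counterpart of ESSCMD's CLEARDATA):

```maxl
/* Remove fragmentation: export all data, clear the database, reload */
export database sample.basic all data to data_file 'expbasic.txt';
alter database sample.basic reset data;
import database sample.basic data from data_file 'expbasic.txt' on error abort;
```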
Step 1: The Starting Line: Model Analysis
Minimize the number of dimensions. Do not ask for everything in one model.
Minimize complexity of individual dimensions. Consider UDAs and attribute dimensions in order to reduce the size of some of the dimensions.
Examine the level of granularity in the dimensions.
Step 2: Order the Outline: Hourglass Model
Dense: largest to smallest
Sparse: smallest to largest

Dense dimensions from largest to smallest. Small and large is measured simply by counting the number of stored members in a dimension. The effect of sparse dimension ordering is much greater than dense dimension ordering.
Sparse dimensions from smallest to largest. This relates directly to how the calculator cache functions.
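An hourglass-ordered outline might look like this (dimension names and stored-member counts are illustrative, not from the source):

```text
Measures   (Dense,  40 stored members)
Year       (Dense,  12 stored members)
Scenario   (Sparse,  2 stored members)
Product    (Sparse, 19 stored members)
Market     (Sparse, 25 stored members)
```

Dense dimensions run largest to smallest, then sparse dimensions smallest to largest, giving the outline its hourglass shape.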
Step 3: Evaluate Dense/Sparse Settings
Finding the optimal configuration for the dense/sparse settings is the most important step in tuning a database.
Optimize the block size. This varies per operating system, but in choosing the best dense/sparse configuration keep in mind that blocks over 100 KB tend to yield poorer performance. In general, Analytic Services runs optimally with smaller block sizes.
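Block size can be estimated directly from the dense dimensions, since each stored cell in a block occupies 8 bytes. For example, with two dense dimensions of 12 and 40 stored members (illustrative counts):

```text
block size = 12 x 40 cells x 8 bytes/cell = 3,840 bytes  (well under 100 KB)
```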
Step 4: System Tuning
System tuning is dependent on the type of hardware and operating system.
Keep memory size higher.
Ensure there is no conflict for resources with other applications.
Step 5: Cache Settings
The actual cache settings recommended are strongly dependent on your specific situation.
To measure the effectiveness of the cache settings, keep track of the time taken to do a calculation and examine the hit ratio statistics in your database information.
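One way to see the hit ratio statistics is the GETPERFSTATS command shown earlier; a minimal ESSCMD script session might look like this (host and credentials are placeholders):

```text
LOGIN "localhost" "admin" "password";
SELECT "Sample" "Basic";
GETPERFSTATS;
```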
Step 6: Optimize Data Loads
Know your database configuration settings (which dimensions are dense and sparse).
Organize the data file so that it is sorted on sparse dimensions. The most effective data load is one which makes the fewest passes on the database. Hence, by sorting on sparse dimensions, you are loading a block fully before moving to the next one.
Load data locally on the server. If you are loading from a raw data file dump, make sure the data file is on the server. If it is on the client, you may bottleneck on the network.
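A server-side load of a presorted file can be sketched in MaxL (the data file and rules file names are illustrative):

```maxl
/* Load a file that already resides on the server, capturing rejects */
import database sample.basic data
  from server data_file 'sorted.txt'
  using server rules_file 'ldall'
  on error write to 'load.err';
```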
Step 7: Optimize Retrievals
Increase the retrieval buffer size. This helps if retrievals are affected by dynamic calculations and attribute dimensions.
Increase the retrieval sort buffer size if you are performing queries involving sorting or ranking.
Smaller block sizes tend to give better retrieval performance. Logically, this makes sense because it usually implies less I/O.
Smaller reports retrieve faster.
Attribute dimensions may impact calculation performance, which usually has a higher importance from a performance standpoint.
If you have a lot of dynamic calculations or attribute dimensions, higher index cache settings may help performance, since blocks are found more quickly.
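The two retrieval buffers can be raised in MaxL; a hedged sketch (the database name and 50 KB values are illustrative, sizes in bytes):

```maxl
alter database sample.basic set retrieve_buffer_size 51200;
alter database sample.basic set retrieve_sort_buffer_size 51200;
```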
Step 8: Optimize Calculations
Unary calculations are the fastest. Try to put everything in the outline and perform a Calc All when possible.
You should FIX on sparse dimensions, IF on dense dimensions. FIX statements on sparse dimensions only bring into memory blocks with those sparse combinations which the calc has focused on. IF statements on dense dimensions operate on blocks as they are brought into memory.
Use the Two-Pass Calculation tag. Try to avoid multiple passes on the database in the case where the calculation is a CALC ALL.
Use Intelligent Calc in the case of simple calc scripts.
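The FIX-on-sparse / IF-on-dense guidance might look like this in a calc script (members assume Sample.Basic, where Market and Product are sparse and Year is dense; the 10% uplift is purely illustrative):

```text
/* Only blocks for East x Colas combinations are brought into memory */
FIX (@DESCENDANTS("East"), @DESCENDANTS("Colas"))
   "Sales" (
      IF (@ISMBR("Jan"))              /* dense test, evaluated inside each block */
         "Sales" = "Sales" * 1.1;
      ENDIF
   );
ENDFIX
```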
Step 9: Defragmentation
Fragmentation occurs over time as data blocks are updated. As the data blocks are updated, they grow (assuming you are using compression) and the updated blocks are appended to the page file. This tends to leave small free-space gaps in the page file.
Time - The longer you run your database without clearing and reloading, the more likely it is that it has become fragmented.
Incremental Loads - This usually leads to lots of updates for blocks.
Many Calculations/Many Passes On The Database - Incremental calculations or calculations that pass through the data blocks multiple times lead to fragmentation.
Step 10: Partition
By breaking up one large database into smaller pieces, calculation performance may be optimized. Because this adds a significant layer of complexity to administration, this is the last of the optimization steps we list. However, this does not mean that it has the least impact.