ASO Optimization
We have an ASO cube with 11 stored dimensions and 24 measures, 5 of which have formulas. It
contains about 20 million records after data load. I have created the default aggregate views. The
performance of the cube is not bad now, but we are facing performance issues when drilling up/down
on some dimensions in Web Analysis reports.
If anyone has experience with ASO optimization, please share it with us.
Hypadmin
Hi,
1) Use the CALCPARALLEL setting: multiple threads run in parallel while creating aggregate views or
fetching data.
2) Increase the sizes of the data retrieval buffer and the data sort buffer (this method is more
related to the retrieval side).
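As a sketch of where those two settings live: the parallelism setting goes in essbase.cfg, while the retrieval buffers can be set per database in MaxL. The application/database names and the sizes below are placeholders for illustration, not values from this thread:

```
;; essbase.cfg -- allow up to 4 threads for aggregate view builds (placeholder value)
CALCPARALLEL 4
```

```maxl
/* MaxL -- enlarge the retrieval and retrieval-sort buffers (sizes in KB, placeholder values) */
alter database 'Sample'.'Basic' set retrieve_buffer_size 100;
alter database 'Sample'.'Basic' set retrieve_sort_buffer_size 100;
```

The essbase.cfg change requires an Essbase server restart to take effect; the MaxL buffer settings apply per database.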
If your performance issues are in zoom-in/zoom-out operations, what will help you is increasing the
aggregations. Doing so puts more intermediate points in place for reference. This can be done from EAS or from
MaxL. One thing you can also do is turn on query tracking and then use what you find from it to optimize
where aggregations are made.
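One way to increase the aggregations from MaxL is to let Essbase pick additional views under a size cap; this is a hedged sketch where 'Sample'.'Basic' and the 1.5 growth ratio are placeholders, not values from this thread:

```maxl
/* Build default aggregate views, letting the aggregate data grow to at most
   1.5 times the size of the input-level data (placeholder stopping value) */
execute aggregate process on database 'Sample'.'Basic'
    stopping when total_size exceeds 1.5;
```

A larger stopping value generally trades more disk space and build time for faster drill operations.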
Hi Mohit,
We have already set these parameters. Because of the volume of data, zoom-in/zoom-out still takes
time. We are trying to build aggregation points for the views.
Hi Glenn,
Exactly: the performance issue is during zoom-in/zoom-out. We are taking more intermediate points for
reference, which are being used for the analysis. Regarding query tracking: the first retrieval will
take time, and after that it will aggregate based on the queries. But after each data load these aggregate
views are removed, and the same process has to be followed to create them again.
Hello,
As Glenn mentioned, query tracking could help improve your query times.
As he said, enable it, let a batch of the typical queries your users run execute, then save the
query tracking results to a view file.
By using the MaxL command "execute aggregate build on database $app.$db using view_file
$cubeagg_file" you can reuse your saved view file every time you rebuild the cube.
Still, there can be limitations, such as loading a new range of data for a new month; in practice,
you will sometimes need to regenerate the query tracking file.
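The enable/capture/rebuild cycle described above can be sketched in MaxL as follows, assuming placeholder names ('Sample'.'Basic', 'trackedviews') rather than anything from this environment:

```maxl
/* 1. Turn on query tracking, then let users run their typical queries */
alter database 'Sample'.'Basic' enable query_tracking;

/* 2. Select views based on the tracked query data and save them to a view file */
execute aggregate selection on database 'Sample'.'Basic'
    based on query_data dump to view_file 'trackedviews';

/* 3. After each rebuild/data load, re-materialize the saved views */
execute aggregate build on database 'Sample'.'Basic'
    using view_file 'trackedviews';
```

Step 2 must run while the tracked query data is still present, since a data load or database restart clears it.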
On a personal note, I just went through this in 7.1.5. It took me a while to wrap my brain around
query tracking, saving the view_file, and re-aggregating, but once I did, it was a major success.
Like everyone says above, make sure query tracking is turned on; in MaxL it's "alter database
'XXXX'.'XXX' enable query_tracking;".
Make sure it stays turned on: do not re-load or restart the database. In 7.1.5, DO NOT depend on EAS to
give you an accurate read on whether query tracking is turned on; instead use MaxL: "query database
'XXXX'.'XXX' get cube_size_info;".
Let users cut loose on the database. It was a huge benefit to me that there were canned Hyperion
Reports I could run to capture the most inefficient queries and set up an aggregation based on those
queries.
While ensuring that query tracking is still turned on, save those aggregations into a view_file stored in a .csc
file (not the same as a calc script in BSO). Use that file to materialize the new aggregations. I used EAS to
perform this task; it can also be done in MaxL with ease.
Going forward, set up a MaxL aggregation such as "execute aggregate build on database
'XXXX'.'XXX' using view_file 'ZZZZZ';" as stated above.
I want to emphasize that this can be an iterative process: you can add views that are captured to the
view_file whenever you like, as your users run more complicated queries. Also, as has been said
before, if your outline or cube changes drastically, you can repeat the process or build a new view_file
from scratch.
This is only my personal experience; I'm sure there are other uses for query tracking, and for removing
obsolete aggregations from the view_file.
John's blog: http://www.jasonwjones.com/?p=184