
Hi All, We have an ASO cube with 11 stored dimensions and 24 measures, 5 of which have formulas.

It contains about 20 million records (data load). I have created the default views. The performance of the cube is not bad now, but we are facing performance issues when drilling up/down on some dimensions in WA reports. If anyone has experience with ASO optimization, please share it with us. Your response will be much appreciated. Thanks & regards, Hypadmin

Hi, There are several options for this:
1) Use the CALCPARALLEL option: multiple threads run in parallel while creating aggregate views or fetching data.
2) Compression: perform compression on a dimension which is dynamic.
3) Increase the buffer sizes of the "Data retrieval buffer" and "Data sort buffer" (this method is more related to the retrieval side). A sketch of options 1) and 3) follows below.
4) Use query hints to build the aggregate views.
Hope these will help you. Thanks and Regards, Mohit Jain
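For options 1) and 3), a minimal sketch of where these settings live. The application/database names and sizes are made-up examples, and you should verify the units and limits your Essbase version expects before copying anything:

    /* essbase.cfg (server configuration file, not MaxL): allow up to 4
       threads when materializing aggregate views for this app/db;
       requires a restart to take effect */
    CALCPARALLEL AppName DbName 4

    /* MaxL: enlarge the retrieval and retrieval sort buffers
       (values in KB on the versions I have seen; verify for yours).
       This only helps the query/retrieval side, as noted above. */
    alter database 'AppName'.'DbName' set retrieve_buffer_size 50;
    alter database 'AppName'.'DbName' set retrieve_sort_buffer_size 50;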

If your performance issues are in zoom-in/zoom-out operations, what will help you is increasing the aggregations. Doing so puts more intermediate points in place for reference. This can be done from EAS or from MaxL. One thing you can also do is turn on query tracking and then use what you find from that to optimize where the aggregations are made.
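As a rough sketch of the MaxL route (the names and the size ratio below are placeholders, not recommendations):

    /* Keep materializing aggregate views, chosen by Essbase's default
       selection, until the database reaches 1.5x its pre-aggregation size */
    execute aggregate process on database 'AppName'.'DbName'
        stopping when total_size exceeds 1.5;

A higher ratio pre-aggregates more intermediate levels, trading disk space and build time for faster drill operations.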

Hi Mohit, We have already set these parameters. Exactly: the performance issue is during zoom-in/zoom-out, and because of the volume of data it is taking time. Hi Glenn, as you said, we are taking more intermediate points for reference, which are then used for the analysis. As Glenn mentioned, the first retrieval will take time, and after that it will aggregate based on the query. But after a data load these aggregate views will be removed, and the same process has to be followed to create them again. Is there any alternative to achieve the same? Thanks & Regards,
Edited by: Hypadmin on Feb 10, 2009 5:28 PM

Hello, Query tracking could help to improve your query time. Regarding query tracking: enable it and let a batch of the typical queries your users run go through, trying to build aggregation points for the views, then save the query tracking file that is generated. By using the MaxL command " execute aggregate build on database $app.$db using view_file $cubeagg_file " you could use your query tracking file every time you rebuild the cube. Still, there can be some limitations, like having a new range of data for a new month; in practice you will sometimes need to renew the query tracking file.
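Putting the suggestion above in one place, a possible end-to-end MaxL sketch of the capture-and-rebuild cycle (the application, database, and view file names here are invented):

    /* 1. Record which intersections users actually query */
    alter database 'AppName'.'DbName' enable query_tracking;

    /* 2. ...run a batch of the users' typical zoom-in/zoom-out
          queries while tracking is on... */

    /* 3. Select aggregate views based on the tracked queries and save
          the selection to a view file for later reuse */
    execute aggregate selection on database 'AppName'.'DbName'
        based on query_data save to view_file 'trackagg';

    /* 4. Materialize the saved views; rerun this step after each data
          load that wipes the aggregate views */
    execute aggregate build on database 'AppName'.'DbName'
        using view_file 'trackagg';

Because step 4 only reads the saved view file, it can be appended to the regular load script, which addresses the "views removed after data load" problem above.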

On a personal note, it took me a while to wrap my brain around the query tracking, saving the view_file, and re-aggregating, but once I did it was a major success. This is only my personal experience. Like everyone says above, make sure query tracking is turned on. While ensuring that query tracking is turned on, have users cut loose on the database, and save those aggregations into a view_file stored in a .csc file (not like a calc script in BSO). I used EAS to perform this task, but it can also be done in MaxL with ease; in MaxL it's " alter database 'XXXX'.'XXX' enable query_tracking; ". Make sure it stays turned on: do not re-load or re-start the DB. In 7.1.5, DO NOT depend on EAS to give you an accurate read as to whether query tracking is turned on; instead use MaxL " query database 'XXXX'.'XXX' get cube_size_info; ". Typically, as what has been said before, set up a MaxL aggregation such as " execute aggregate build on database 'XXXX'.'XXX' using view_file 'ZZZZZ'; " and use that file to materialize the new aggregations. Going forward, you can add views that are captured to the view_file when you like, as your users run more complicated queries. I want to emphasize that this can be a repetitive process: if your outline or cube has drastic changes, you can repeat the process or build a new view_file from scratch. It was also a huge benefit to me that there were canned Hyperion Reports I could run to capture the most inefficient queries and set up an aggregation based on those queries. I'm sure there are other uses for query tracking, or for removing obsolete aggregations from the view_file.
Edited by: user651301 on Feb 10, 2009 1:13 PM
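A small sketch of the checking step the poster describes, with placeholder names; get cube_size_info reports size statistics for input-level and aggregate data, which also makes it easy to spot when a load has dropped the materialized views:

    /* Inspect cube size statistics (input-level vs. aggregate data) */
    query database 'AppName'.'DbName' get cube_size_info;

    /* If the aggregate views are gone after a load, rebuild them from
       the previously saved selection */
    execute aggregate build on database 'AppName'.'DbName'
        using view_file 'ZZZZZ';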

JOHNS BLOG
http://www.jasonwjones.com/?p=184

Some performance observations changing an ASO hierarchy from Dynamic to Stored

There are numerous ASO cubes among my flock. Usually the choice to use ASO was not arrived at lightly — it was/is for very specific technical reasons. Typically, the main reason I have for using ASO is to get the fast data loads and the ability to load oodles of noodles… I mean data. The downsides (with version 7.x) are that I'm giving up calc scripts, native Dynamic Time Series, incremental loading (although this has been somewhat addressed in later versions), and some flexibility with my hierarchies, and I have to have just one database per application (you… uhh… were doing that already, right?).

I have a set of four cubes that are all very similar, except for different Measures dimensions; these cubes are all generated from the same EIS metaoutline but with different filters. The Measures dimensions range from 10,000 to 40,000 members, and they all have Dynamic hierarchies in the Accounts/Measures dimension. This isn't huge, but in conjunction with the sizes of the other dimensions there is an incredible “maximum possible blocks” potential (sidenote for EAS: one of the most worthless pieces of information to know about your cube. Really, why?), so due to the sparsity of much of the data and the nature of the aggregations in the cubes, trying to use BSO would result in a very unwieldy cube in this particular instance. All of the other dimensions have Stored hierarchies or are set to Multiple Hierarchies (such as the MDX calcs in the Time dimension to get me some Year-To-Date members). The performance of these cubes is generally pretty acceptable (considering the amount of data), but occasionally user Excel queries (especially with attribute dimensions) really pound the server and take awhile to come back. So I started looking into ways to squeeze out a little more performance.

Actually, it turns out that all of these cubes have Measures dimensions that make it prohibitively difficult to set Measures to Stored instead of Dynamic, except for one. This is due to using the minus (-) operator, some label only stuff, and shared members, all of which ASO is very particular with, especially in this version. Still, even though I would need to spin off a separate EIS metaoutline in order to build the dimension differently, it might be worth it if I can get some better performance on retrieves to this cube — particularly when the queries start to put some of the attribute dimensions or other MDX calcs into play.

So, I set up one cube as normal, loaded it up with data, and materialized 1 gig worth of aggregations. Prior to this I had also copied the cube within EAS, made the tweak to Measures to change it from Dynamic to Stored, loaded the data, and did a gig of aggregations. At this point I had two cubes with identical data, but one with a Dynamic hierarchy (Measures with 10,000 or so members) and one with Stored.

What I needed was some sort of method to test the performance of some typical retrieves against the two variants of the cube, so I cooked up some report scripts, MaxL scripts, and some batch files (see the sketch below). The batch file loads a configuration file which specifies which database to hit and which report to run. It then runs the report against the database, sets a timestamp before and after it runs, and dumps it all to a text file. I wrote these report scripts kind of randomly; it's not an exact science, but in theory it'll give me somewhat of an idea as to whether making the hierarchy Stored is going to help my users' retrieval operations.
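The following is a sketch of that harness idea, not the author's actual scripts; the server, credentials, report name, and file names are all invented. Each MaxL run executes one report script against one cube variant, while the calling batch file stamps the time before and after the call:

    /* Run one report script against one database variant; report output
       goes to a text file, and the session log is spooled for reference */
    login 'admin' 'password' on 'essbase-server';
    spool on to 'report_01_dynamic.log';
    export database 'AppName'.'DbName' using server report_file 'report_01'
        to data_file 'report_01_dynamic.txt';
    spool off;
    logout;

Subtracting the batch file's before/after timestamps for each run yields the Duration column in the results below.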
Time to compare. And without further ado, here are the results:

[Results table, first run, started Tue 01/20/2009 10:11:08.35: start and finish times, durations, and a Winner column for report_01 through report_05 run against the Dynamic and the Stored cube. Dynamic takes most of the tests; Stored takes report_02.]

[Results table, second run, started Tue 01/20/2009 10:30:55: the same five reports against both cubes, with the same pattern: Dynamic wins most of the tests, Stored wins report_02, and one test is a tie.]

So, interestingly enough, the Dynamic dimension comes out on top, at least for most of the tests I wrote. There is one of the tests, though (report_02), that seems to completely smoke the Dynamic hierarchy. I ran this very limited number of tests numerous times and got essentially the same results. This isn't to say that Dynamic is necessarily better than Stored or vice versa, so I definitely need to do some more testing, but in the meantime I think I feel better about using a Dynamic hierarchy. I am curious, however, to go back and look at report_02 and see what it is about that particular report that is apparently so conducive to Stored hierarchies.

Since the ASO aggregation method for these cubes is simply to process aggregations until the database size is a certain multiple of its original size, one of the next steps I could look at for query optimization would be to enable query tracking, stuff the query statistics by running some reports, and then use those stats to design the aggregations. In any case, this goes to show that there isn't really a silver bullet for optimization and that experimentation is always a good way to go (except on your production servers, of course). I'm glad I am looking at some actual data rather than just blindly implementing a change and hoping for the best.
