
1. What are the main changes between BW 3.5 and BI 7.0?
Ans:
1. InfoSets can now include InfoCubes as well.
2. The Remodeling transaction helps you add new key figures and characteristics and handles historical data as well without much hassle. This is only for InfoCubes.
3. The BI Accelerator (for now only for InfoCubes) helps reduce query runtime by roughly a factor of 10 to 100. The BI Accelerator is a separate box and costs extra; the vendors are HP or IBM.
4. Monitoring has been improved with a new portal-based cockpit, which means you need an EP (Enterprise Portal) resource on the project to implement the portal.
5. Search functionality has improved: you can search for any object, unlike in 3.5.
6. Transformations are in and the old-style routines are passé, although you can still revert to the old transactions.
7. The Data Warehousing Workbench replaces the Administrator Workbench.
8. Functional enhancements have been made for the DataStore object: a new type of DataStore object and enhanced settings for performance optimization of DataStore objects.
9. The transformation replaces the transfer and update rules.
10. New authorization objects have been added.
11. Remodeling of InfoProviders supports you in Information Lifecycle Management.
12. The DataSource: there is a new object concept for the DataSource, options for direct access to data have been enhanced, and remote activation of DataSources in SAP source systems is possible from BI.
13. There are functional changes to the Persistent Staging Area (PSA).
14. BI supports real-time data acquisition.
15. SAP BW is now formally known as BI (part of NetWeaver 2004s) and implements Enterprise Data Warehousing (EDW).
16. ODS objects have been renamed DataStore objects.
17. Inclusion of the write-optimized DataStore object, which does not have a change log and whose requests do not need any activation.
18. Introduction of the end routine and the expert routine.
19. Push of XML data into the BI system (into the PSA) without Service API or delta queue.
20. Loading through the PSA has become mandatory; you cannot skip it, and there is no IDoc transfer method in BI 7.0. The DTP (Data Transfer Process) replaces the transfer and update rules, and in the transformation you can now write start, expert and end routines during the data load.
21. New data flow capabilities such as the Data Transfer Process (DTP) and Real-time Data Acquisition (RDA).
22. Enhanced and graphical transformation capabilities such as drag-and-relate options.
23. A new, more flexible user management concept for analysis authorizations.

2. What are the roles and responsibilities in an implementation project and in a support project?
Ans: In a fresh implementation of BI (or of SAP in general), the first step is requirements gathering from the client. Based on the requirements you create a business blueprint of the project, which describes the entire process from the start to the end of the implementation. After the blueprint phase is signed off, the realization phase starts, where the actual development happens. After installing the necessary software and patches for BI, you discuss with the end users who are going to use the system to collect inputs such as how they want a report to look and what the key performance indicators (KPIs) for the reports are; basically it is a question-and-answer session with the business users. After collecting this information, the development happens in the development system. When development is complete, the same objects are tested in the quality system for bugs and errors. When all tests pass, the objects are transported to the production environment and tested again to confirm that everything works.

At go-live, actual postings are made by the users and reports are generated from those inputs, giving management analytical reports on which to base decisions. The responsibilities vary depending on the requirement: initially the business analyst interacts with the end users and managers, then the consultants develop against the requirements, the testers test, and finally go-live happens.

What activities do we perform in a production support project?
In production support, most projects work on monitoring the loads (from R/3 or non-SAP systems to the BW data targets). Depending on the project, some use process chains and some use event chains, so it varies from project to project.

What are the different transactions that we use frequently in a production support project? Please explain them in detail.
In a production support project we check the loads using RSMO to monitor them and rectify errors using step-by-step analysis. The consultant needs access to the following transactions in R/3:

1. ST22 – ABAP dump analysis
2. SM37 – job execution
3. SM58 – execute LUWs, tRFC errors
4. SM51 – checks whether the servers are logged on to the message server
5. RSA7 – delta queue
6. SM13 – update tables

Authorizations for the following transactions are required in BW:
1. RSA1 – Data Warehousing Workbench
2. ST22 – the ABAP dump analysis lists the ABAP runtime errors that have occurred in an ABAP system
3. SE38 – to display and write programs
4. SE37 – function modules
5. SM12 – locks
6. RSRV – analysis and repair of BW objects
7. RSKC – maintenance of extra permitted characters
8. RSMO, or RSRQ if the request is known – the upload monitor
9. RSPC – process chain maintenance, used to define, change and view process chains
10. ST03 – the workload monitor shows important overall key performance indicators (KPIs) for system performance
11. ST04 – the database monitor checks important performance indicators in the database, such as database size, database buffer quality and database indices
12. ST05 – the SQL trace records all activities on the database and lets you check long runtimes on a DB table or several similar accesses to the same data
13. ST06 – the OS monitor gives an overview of the current CPU, memory, I/O and network load on an application server instance
14. SE30 – ABAP runtime analysis
15. RSRCACHE (also reachable from RSRT) – the cache monitor shows, among other things, the cache size and the currently cached queries; the Export/Import shared buffer determines the cache size and should be at least 40 MB

What is the maximum number of key figures in an InfoCube? 233.
What is the maximum number of characteristics? 248.

1. What are the DSO and DTP types in BI?
Ans:
Standard DSO: data is loaded via a standard DTP; SIDs are generated; it consists of three tables (activation queue, active data table and change log); requests need an activation step, during which data records with the same key are aggregated/overwritten. Using the change log means that all changes are also written and are available as delta uploads for connected data targets. It is completely integrated in the staging process.
Write-optimized DSO: data is loaded via a standard DTP; no SIDs are generated; it consists of the active data table only; there is no need to activate requests.
Direct update DSO: data is loaded via APIs (direct update); no SIDs are generated; it consists of the active data table only; there is no need to activate requests. It is generally used for external applications and the APD. Because it can only be loaded via APIs (and read via BAPI), it is not easily integrated in the staging process; it corresponds to the old 3.x transactional ODS concept.
SIDs are generated only for the standard DSO, not for the other two types, and only the standard DSO provides delta images via its change log; it also has overwrite functionality.
The different DTP types are: standard DTP, DTP for direct access (direct update DSO), error stack DTP (error DTP) and DTP for real-time data acquisition.

2. What are the DSO changes between BW 3.5 and BI 7.0?
Ans: In BI 7.0 the ODS object was renamed DataStore object (DSO). With this new object, three types are available: the standard DSO, which is similar to the old ODS and has three tables (activation queue, active data table and change log); the write-optimized DSO, which has only an active data table; and the direct update DSO, which also consists of an active data table only and is filled using APIs. Write-optimized DSOs are used for an efficient and targeted warehouse layer of the architecture.
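The record handling described in questions 1 and 2 can be summarised outside BW. The following is a minimal Python sketch, not BW code: field names (doc_no, item, qty), the request ID and the simplified change log are invented for illustration only.

def activate_standard_dso(active_table, new_request):
    """Standard DSO: activation overwrites records with the same semantic key
    and writes before/after images to a change log (simplified)."""
    change_log = []
    for rec in new_request:
        key = (rec["doc_no"], rec["item"])           # semantic key
        if key in active_table:                       # existing record -> before image
            change_log.append({**active_table[key], "recordmode": "X"})
        active_table[key] = rec                       # overwrite with after image
        change_log.append({**rec, "recordmode": ""})
    return change_log                                 # delta for connected targets

def load_write_optimized_dso(active_table, new_request, request_id):
    """Write-optimized DSO: inserts only; uniqueness comes from the technical
    key (request, packet, record number); no activation, no change log."""
    for rec_no, rec in enumerate(new_request):
        active_table[(request_id, 1, rec_no)] = rec   # technical key

standard, wo = {}, {}
req = [{"doc_no": "4711", "item": 10, "qty": 5},
       {"doc_no": "4711", "item": 10, "qty": 7}]
print(activate_standard_dso(standard, req))  # three change-log rows
load_write_optimized_dso(wo, req, "REQU_1")
print(len(standard), len(wo))                # 1 active record vs 2 stored records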

3. When do we use a write-optimized DSO and when do we use a direct update DSO?
Ans: A write-optimized DSO is used to pull large volumes of data quickly; a direct update DSO is used for APD purposes. Typical situations for the write-optimized DSO:
a. Where fast loads are essential, for example multiple loads per day or short source system access times (worldwide system landscapes). You would then want a write-optimized DataStore object as the first stage in BI and pull the delta request into a cube from there.
b. As a temporary storage area for large sets of data when executing complex transformations on this data before it is written to a further DataStore object; you only have to create the complex transformations once for all incoming data, and business rules are only applied when the data is updated to additional InfoProviders.
c. As a preliminary landing area (staging layer) for incoming data from different sources; in this case you may not need a PSA archive, and the data can subsequently be updated to further InfoProviders.
d. If the DataSource is not delta enabled and you want to retain history at request level, you might use a write-optimized DataStore first and then feed the data into a standard DataStore object.
e. If you want to report on daily refreshed data without an activation step.
f. If multidimensional analysis is not required and you only want operational reports, it can be used in the reporting layer with an InfoSet or MultiProvider.
Load performance improves because there is no activation step.

Functionality of the write-optimized DataStore object:
- Only an active data table; the DSO key is the technical key (request ID, data packet number and record number).
- No change log table and no activation queue; only inserts are made, every record gets a new technical key, so the technical key is unique and the data is stored at request level, like a PSA table.
- No SID generation: BEx reporting is switched off by default (reporting is still possible via an InfoSet or MultiProvider, but you need to make sure performance is acceptable); parallel loads are allowed, and the DSO can be included in a process chain without an activation step.
- Fully integrated in the data flow: it can be used as a data source and as a data target, and it can export into InfoProviders via request delta.
- Can be included in an InfoSet or MultiProvider; archiving is supported; the size of the DataStore is maintainable.
- Uniqueness of data: there is a checkbox "Do not check uniqueness of data". If this indicator is set, the active table of the DataStore object can contain several records with the same key.

10. How do you decide whether a DSO should be part of the standard BI flow?
Ans: Depending on the delta mechanism (delta image) of the DataSource: we check the ROOSOURCE and RODELTAM tables, and depending on the image we decide.

11. I have a standard DSO as my final data target, with non-SAP and SAP source systems, and I want to generate reports on that DSO. To generate reports we would check the SID generation option, but if I check it I am going to lose load performance. How do you model this situation?
Ans: I will uncheck the SID generation option and load the data. Afterwards, to generate reports on this DSO, I will create an InfoSet using this DSO. So both requirements (load performance and reporting) will be satisfied.

12. If you are going to install objects from Business Content, what is the prerequisite?
Ans: We need to select the relevant source system. By default all source systems are checked, so we need to uncheck the unnecessary source systems before installing.

13. What is the difference between MultiProviders and InfoSets?
Ans: A MultiProvider can be built on basic InfoCubes, DSOs, InfoObjects or InfoSets, whereas an InfoSet can contain only DSOs and InfoObjects. MultiProviders use a union operation, while InfoSets use a join. Both objects are logical definitions that do not store any data themselves.
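The union-versus-join distinction from question 13 can be illustrated outside BW. A minimal Python sketch with invented records and field names:

sales   = [{"customer": "C1", "revenue": 100},
           {"customer": "C2", "revenue": 250}]
returns = [{"customer": "C1", "returned": 20}]

# Union (MultiProvider): every record from every provider is kept; key figures
# not delivered by a provider simply stay 0 for its rows.
union = [{**r, "returned": 0} for r in sales] + \
        [{"revenue": 0, **r} for r in returns]

# Inner join (InfoSet): only customers present in BOTH providers survive,
# and their fields are combined into one row.
join = [{**s, **r} for s in sales for r in returns
        if s["customer"] == r["customer"]]

print(len(union))  # 3 rows: C1 sales, C2 sales, C1 returns
print(join)        # [{'customer': 'C1', 'revenue': 100, 'returned': 20}]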

14. What is the difference among start routines, end routines and expert routines?
Ans: In BI 7.0 we have the following types of routines in a transformation:

Start routine: the start routine is run for each data package at the start of the transformation, before the transformation rules, and has a table in the format of the source structure as input and output parameter. You can modify or delete data in the data package, so it is generally used for filtering records. It can also be used to perform preliminary calculations and store them in a global data structure or table that can be accessed from the other routines.

Routine for key figures or characteristics: this routine is available as a rule type; you can define the routine as a transformation rule for a key figure or a characteristic. The input and output values depend on the selected field in the transformation rule. It is generally used for customizing rules, i.e. updating data based on existing data.

End routine: an end routine is a routine with a table in the target structure format as input and output parameters. It runs after the transformation rules and can be used to post-process the data after the transformation on a package-by-package basis, for example to delete records that are not to be updated or to perform data checks.

Expert routine: this type of routine is only intended for special cases; it is triggered without any transformation rules. You can use the expert routine if there are not sufficient functions in the standard rule types to perform a transformation. It should be used as an interim solution until the necessary functions are available in the standard routines.

15. What is the impact on existing routines if we create an expert routine?
Ans: The existing routines are deleted (deactivated) automatically; whenever we write an expert routine, all existing rules are deleted.

The routine coding can be found in these tables:
RSAABAP – ABAP code in routines (accessed via the code ID)
RSAABAPINV – inverse routines
RSLDPRULESH and RSLDPRULE – coding done in the scheduler (InfoPackage)

16. If we write a transfer routine at characteristic level, how does it behave in transformations?
Ans: When you create a transfer routine on an InfoObject, it is valid globally for the characteristic and is included in all the transformation rules that contain the InfoObject; it is independent of the DataSource. However, the transfer routine is only run in a transformation with a DataSource as a source. During data transfer, the logic stored in the individual transformation rule is executed first, and then the transfer routine for the value of the corresponding field is executed for each InfoObject that has a transfer routine. In this way, the transfer routine can store InfoObject-dependent coding that only needs to be maintained once but is valid for all transformation rules. The transfer routine is used to correct data before it is updated in the characteristic.
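To make the execution order described in question 14 concrete, here is a minimal Python sketch of where the routines sit in a transformation. It only illustrates the sequence (the real routines are written in ABAP inside the transformation); the field names and the filter condition are invented.

# Conceptual order in a BI 7.0 transformation: start routine -> rules -> end routine.
# An expert routine would replace all three steps with one hand-written step.

def start_routine(source_package):
    # runs once per data package BEFORE the rules: typically filters records
    return [r for r in source_package if r["status"] != "D"]    # invented filter

def field_rules(record):
    # per-record transformation rules (constant, direct assignment, routine, ...)
    return {"material": record["matnr"],
            "quantity": record["qty"],
            "load_flag": "X"}                                    # constant rule

def end_routine(result_package):
    # runs once per data package AFTER the rules, on the target format
    for r in result_package:
        r["quantity"] = max(r["quantity"], 0)
    return result_package

package = [{"matnr": "M1", "qty": -5, "status": "A"},
           {"matnr": "M2", "qty": 3,  "status": "D"}]
print(end_routine([field_rules(r) for r in start_routine(package)]))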

17. What are start routines, transfer routines and update routines (3.x)?
Ans:
- Start routines: the start routine is run for each data package after the data has been written to the PSA and before the transfer rules (or update rules) are executed. It has no return value. Its purpose is to execute preliminary calculations and store them in global data structures or tables that can be accessed from the other routines; for example, you can delete records that are not to be updated or perform data checks.
- Transfer / update routines: they are defined at the InfoObject level. Like the start routine, they can be used to define global data and global checks; they allow complex computations for a key figure or a characteristic, and the entire data package in the transfer structure format is used as a parameter for the routine.

18. What are the rule types in transformations?
Ans: Constant, Direct Assignment, Formula, Read Master Data, No Transformation, Routine.

19. What error handling mechanisms are available?
Ans: In 7.0 there is the error stack / error DTP: in the DTP, on the Update tab, you find the Error Handling push button with the different error handling options. In 3.x, in the InfoPackage you find the Update tab; under "Data update types in the data targets" the error handling options are:
a. No update, no reporting.
b. Update valid records only, no reporting (request red).
c. Update valid records only, reporting possible (request green).

20. What is the difference between a DTP and an InfoPackage in BI 7.0?
Ans: The key difference between 3.x and 7.x is that in 3.x an InfoPackage loaded data from a single DataSource to multiple data targets (InfoCubes, ODS objects etc.), so if the source was sending a delta, the load to the targets had to be done as a single data load to all of them: a delta had to be loaded to all targets at the same time, and you could not load them at different times. In 7.x the InfoPackage loads only up to the PSA, and the DTP maintains a separate delta queue from the PSA to each target, enabling you to load the delta to each target independently of the others. This is a big change in 7.x and helps in load distribution.

21. What is the importance of semantic groups in a DTP?
Ans: Semantic groups specify how you want to build the data packages that are read from the source (DataSource or InfoProvider). To do this, you define key fields; data records that have the same key are combined in a single data package. This setting is only relevant for DataStore objects with data fields that are overwritten. The setting also defines the key fields for the error stack: by defining the key of the error stack, you ensure that the data can be updated in the target in the correct order once the incorrect data records have been corrected.
A very simple example: say there are two records in the input stream of a DTP, with key fields Product and Material:
Product  Material  ...
P1       M1        XYZ
P1       M1        PQR
If the data gets divided into multiple packets while being processed, these two records might end up in separate data packages. If you have defined a semantic group in the DTP on Product and Material, the system will always put these two records into the same package. This is sometimes required when such records need to be processed together, for example in a start routine.

22. How do you enable the error stack?
Ans: DTP -> Update tab -> Error Handling -> "Update valid records, no reporting (request red)".
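A minimal Python sketch of the packaging behaviour described in question 21. This is not the actual DTP algorithm; the field layout and the package size are invented, and it only illustrates grouping by a semantic key before splitting into packages.

from itertools import groupby

def build_packages(records, semantic_key, package_size):
    """Group records by the semantic key, then fill packages group by group,
    so records sharing a key never end up in different packages."""
    records = sorted(records, key=semantic_key)
    groups = [list(g) for _, g in groupby(records, key=semantic_key)]
    packages, current = [], []
    for group in groups:
        if current and len(current) + len(group) > package_size:
            packages.append(current)
            current = []
        current.extend(group)
    if current:
        packages.append(current)
    return packages

data = [("P1", "M1", "XYZ"), ("P2", "M9", "AAA"), ("P1", "M1", "PQR")]
for pkg in build_packages(data, semantic_key=lambda r: r[:2], package_size=2):
    print(pkg)
# the two ('P1', 'M1', ...) records stay together even though the package size is 2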

23. I have installed 0PLANT from Business Content; initially its length was 15 characters and later I changed it to 20. Now, for another requirement, I need to install 0PLANT from Business Content again. Is there any impact on the existing 0PLANT, and if so, how do we overcome it?
Ans: By selecting the Match (X) / Copy option. Match (X) means that when you are installing Business Content and that particular object already exists in your system, a tick mark can be seen against it. If you set the indicator (X), the system merges the properties and takes care that the object is not simply overwritten, so your enhancements are not lost. If you do not select Match (X) / Copy, the installation simply overwrites the object.
For example, you installed the InfoObject 0MATERIAL and added some Z attributes to it. Later you install the InfoCube 0IC_C03, which contains 0MATERIAL. If you do not select Match (X) / Copy during the installation, it will simply overwrite 0MATERIAL and you will lose the Z attributes; if you select Match (X) / Copy, it will merge the properties and you keep the Z attributes.
Length example: the delivered length is 20; 0MATERIAL was installed and its length changed to 40. If you install 0IC_C03 again without Match (X) / Copy, 0MATERIAL is overwritten and its length goes back to 20; with Match (X) / Copy the properties are merged.
Example of Match from SAP: additional customer-specific attributes have been added to an InfoObject in the A (active) version. In the D (delivery) version, two additional attributes have been delivered by SAP that are not contained in the customer version. In order to be able to use the additional attributes, the delivery version has to be installed from Business Content again; at the same time, the customer-specific attributes are to be retained. To achieve this you have to set the Match indicator (X) in the checkbox: after installing the Business Content, the additional attributes are available and the customer-specific enhancements have been retained automatically. If you had not checked the Match field, the customer-specific enhancements in the A version would be lost.

24. I have 10 aggregates on one particular InfoCube. How do you identify which are the most useful?
Ans: By using the valuation shown in aggregate maintenance.

25. Material is coming from two plants and is stored in 0PLANT, which is a compounded characteristic, with data arriving from a US system and a UK system. How do you identify which material belongs to which plant/source?
Ans: Using the source system ID (0SOURSYSTEM) as part of the compounding.

26. What is compounding, and how can we use a compound attribute in reporting?
Ans: A compounding attribute lets you derive unique data records in reporting. Suppose you have cost centers and cost accounts like this and you want to maintain the proper relation:
Cost centers: 1000, 1001, 1002
Cost accounts: 9001, 9002, 9003
The cost accounts are not unique across cost centers, so the master data would be overwritten and the cost accounts could not be differentiated across cost centers. When you add the cost center as a compounding characteristic, a unique record is kept for each combination. After compounding, the records look unique in reporting:
9001/1000, 9002/1000, 9003/1000
9001/1001, 9002/1001, 9003/1001
9001/1002, 9002/1002, 9003/1002
Thus each cost account is differentiated uniquely across cost centers (see also the short sketch after question 29 below).

27. What are compounding objects and what is their purpose?
Ans: A compound attribute differentiates a characteristic to make the characteristic uniquely identifiable. You sometimes need to compound InfoObjects in order to map the data model: some InfoObjects cannot be defined uniquely without compounding. For example, if storage location A for plant B is not the same as storage location A for plant C, you can only evaluate the characteristic Storage Location in connection with Plant; in this case you compound the characteristic Storage Location to Plant. On the Compounding tab page you determine whether you want to compound the characteristic to other InfoObjects, so that the characteristic is unique in combination with them.
One particular option with compounding is compounding a characteristic to the source system ID. You may need this if there are identical characteristic values for the same characteristic in different source systems, but these values indicate different objects; you do this by setting the "Master data is valid locally for the source system" indicator.
Do not try to display hierarchical links through compounding; use hierarchies instead.
Recommendation: using compounded InfoObjects extensively can influence performance, particularly if you include a lot of InfoObjects in compounding.
Note: a maximum of 13 characteristics can be compounded for an InfoObject. Characteristic values can have a maximum of 60 characters, meaning the concatenated length of the characteristics in compounding plus the length of the characteristic itself.

Reference InfoObjects: if an InfoObject has a reference InfoObject, it takes its technical properties from it. For characteristics these are the data type and length as well as the master data (attributes, texts and hierarchies); for key figures these are the key figure type, data type and the definition of the currency and unit of measure. These properties can only be maintained on the reference InfoObject. Several InfoObjects can use the same reference InfoObject; such InfoObjects automatically have the same technical properties and master data. The operational semantics, that is properties such as description, display, text selection, relevance to authorization, person responsible, constant and "attribute only", are maintained on the characteristics that are based on one reference characteristic. A referencing key figure can have a different aggregation. Example: the characteristic Sold-to Party is based on the reference characteristic Customer and therefore has the same values, attributes and texts. More than one characteristic can have the same reference characteristic: the characteristics Sending Cost Center and Receiving Cost Center both have the reference characteristic Cost Center.

28. What is a compounding attribute, and in which scenario can we use it?
Ans: It is like a composite key in a table: normally there is only one primary key for a table, but if you want two or more fields to act together as the primary key, the compounding key helps. Technically, the compounding key is like a superior key: only with the combination of some keys can you recognize a unique record. For example, only if I combine Material and Batch do I get a unique record in the material master, so here Batch is used as the compounding object. Scenario: we have materials coming from different plants, and if you want to analyze which material is coming from which plant, you go for a compounding key; here Plant is the superior key for Material.
Another example: typically in an organization the employee IDs are allocated serially, say 101, 102 and so on. Now suppose the organization introduces a new employee ID scheme where the ID for each location starts at 101. The employee ID for India would be india/101 and for the UK would be uk/101; note that india/101 and uk/101 are different employees. If someone has to contact employee 101, he needs to know the location, without which he cannot uniquely identify the employee. Hence in this case Location is the compounding attribute.

29. How do you design a dimension table, and on what basis do you create a line item dimension?
Ans: By analyzing the 1:m and m:n relations between characteristics, and afterwards checking with the SAP_INFOCUBE_DESIGNS program. For example, I have 0MATERIAL with 10 attributes, 0CUSTOMER with 15 attributes and 0DOC_NUMBER. If a dimension table's size is more than 20% of the fact table size, I will go for a line item dimension. When a line item dimension is created, the SID table of the characteristic is connected directly to the fact table.

30. Data loading with 1:N and M:N relationships?
Ans: In the case of master data, the characteristic InfoObjects with master data attributes take care of the 1:N relation; for example, for material master data the material number is the key based on which the data is loaded for each new material. In the case of transaction data with a 1:N relation, you may use a DSO in which the primary keys are made the key fields, so repeating combinations are overwritten. An InfoCube can be used for an M:N relation. For a union operation between two InfoProviders a MultiProvider may be used; you use an InfoSet when there is a requirement for an inner join (intersection) or a left outer join between master data/DSO objects.

31. How will you know that a dimension can be a line item dimension before loading data into the cube?
Ans: If you display your cube and click on the Dimensions tab, you can see all the dimensions; in 3.x there is a Line Item Dimension checkbox beside the dimension table name. In 7.0 you need to look at the properties of the dimensions using the context menu.

32. What is the difference between "activate master data" and an attribute change run?
Ans: Both functions are used for activating master data. On the Attribute Change Run screen you can activate the master data for multiple objects at a time, and it schedules a background job; there is also a corresponding process type, so the attribute change run can be used in process chains (generally, when you load a master data object in a process chain, the system proposes the attribute change run as the dependent process after the InfoPackage). With the Master Data Activation option (right-clicking on the object and choosing activate), it is possible to activate the master data for a single object only.

33. What is the mirror field on the R/3 side for 0RECORDMODE?
Ans: ROCANCEL.

34. How can we do partitioning?
Ans: There are two types of partitioning: physical partitioning, which is done at database level, and logical partitioning, which is done at data target level. Logical partitioning means splitting the InfoCube, i.e. dividing it into different cubes and creating a MultiProvider on top of them. InfoCube partitioning on the time characteristics 0CALMONTH or 0FISCPER is physical partitioning: in the InfoCube maintenance choose Extras -> DB Performance -> Partitioning, specify the value range and, where necessary, limit the maximum number of partitions (the partitions are created on the F and E fact tables).
Note: you can only change the value range when the InfoCube does not contain any data. In 3.5 you can drop empty partitions using the program SAP_DROP_EMPTY_FPARTITIONS.
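A minimal Python sketch of the idea behind range partitioning on 0CALMONTH. This is illustrative only: the actual partitioning is done by the database, and the value range and partition count here are invented.

def months_since(calmonth):
    return int(calmonth[:4]) * 12 + int(calmonth[4:])

def partition_for(calmonth, low="200701", high="200912", n_partitions=36):
    """Route a record to a partition by its 0CALMONTH value. Values outside the
    chosen range fall into two catch-all partitions, which is why the value
    range should match the data actually being loaded."""
    if calmonth < low:
        return "P_BELOW_RANGE"
    if calmonth > high:
        return "P_ABOVE_RANGE"
    span = months_since(high) - months_since(low) + 1
    offset = months_since(calmonth) - months_since(low)
    return f"P_{offset * n_partitions // span:03d}"

for m in ["200612", "200703", "200911", "201005"]:
    print(m, partition_for(m))
# 200612 -> P_BELOW_RANGE, 200703 -> P_002, 200911 -> P_034, 201005 -> P_ABOVE_RANGE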

35. How is data archived in SAP BI?
Ans: Archiving is used to move data that is no longer required online from the SAP database to a storage system; the archived data can still be read offline whenever a user requires it. Archiving helps to keep the database size under control, improves system performance and is cost effective for the client with respect to hardware. We can archive master data and transaction data, and archiving is used in various SAP application areas: master data such as customer, material, vendor and batch master data, and transaction data such as sales orders, purchase orders, purchase requisitions, delivery documents, shipment documents, billing documents, transfer orders, production orders, accounts payable, accounts receivable and so on.
The following steps are followed for archiving in 7.0:
1. Go to transaction RSDAP, give the InfoProvider name and type, and continue.
2. On the General Settings tab choose the archiving type (ADK) and specify the logical file name.
3. On the Selection Profile tab select a field (the time-slice characteristic).
4. On the Attributes tab give a name, then save and activate; make sure you copy the generated archiving object name.
5. Go to transaction SARA and give your archiving object name.
6. Choose the write request, create a variant and click Maintain; on the Further Restrictions tab give a value.
7. Under the processing options click on production mode.
8. Give the start date and print parameters, save, and execute the job.

36. Can you explain the project life cycle in brief?
Ans: When people refer to the full life cycle, it is the ASAP methodology which SAP recommends for all its projects:
1. Project Preparation: the decision makers define clear project objectives and an efficient decision-making process; a project charter is issued, a project plan is drawn up and an implementation strategy is outlined. Project managers are mainly involved in this phase.
2. Business Blueprint: a detailed documentation of the company's requirements, based on discussions with the client about his needs and requirements; a fit-gap analysis is carried out against these requirements. In this phase you get a functional specification and, based on that, a technical specification.
3. Realization: the actual implementation of the project takes place (development of the objects etc.); the objects to be developed are adjusted depending on the client's requirements. We are involved in the project from this phase onwards. If it is a support project, you come into the picture only after successful deployment.
4. Final Preparation: the final preparation before going live, i.e. testing, end user training and pre-go-live activities. End user training is given at the client site so that the users can work with the new environment, as they are new to the technology.
5. Go-Live & Support: after testing, the solution is deployed to production, i.e. the project goes live; postings are made by the users and the project team supports them.

37. What are the five ASAP methodology phases?
Ans: 1. Project Preparation, 2. Business Blueprint, 3. Realization, 4. Final Preparation, 5. Go-Live & Support.

38. Give examples of errors while loading data and how you resolve them.
Ans: Typical errors and their resolutions:
1. Time stamp error - solution: activate the DataSource, replicate the DataSource and load again.
2. Data error in the PSA - solution: modify the erroneous data in the PSA and load from the PSA.
3. RFC connection failed - solution: raise a ticket to the BASIS team to provide the connection.
4. Short dump error - solution: delete the request and load once again.
Typical causes of load failures:
a) Invalid characters in the data.
b) A deadlock in the system.
c) A previous load failure, if the load is dependent on other loads.
d) Erroneous records.
e) RFC connection problems - raise a ticket to the BASIS team to provide the connection.
f) Missing master data.
g) No data found in the source system.
h) Invalid characters while loading: when you load data you may get special characters like @#$%, and BW then throws an "invalid characters" error. In that case go to transaction RSKC, enter all the invalid characters that should be permitted and execute; they are stored in table RSALLOWEDCHAR, and you will no longer get the error because these are now eligible characters. Alternatively, correct the data in the PSA and reload.
i) ALEREMOTE user is locked: normally ALEREMOTE gets locked because an SM59 RFC destination entry has an incorrect password. You can get a list of all SM59 RFC destinations using ALEREMOTE by using transaction SE16 to search field RFCOPTIONS for the value "*U=ALEREMOTE". You also need to check this in any external R/3 instances that call the instance in which ALEREMOTE is getting locked.
j) Lower case letters not allowed: look at the InfoObject description and the "Lowercase letters allowed" setting.
k) Extraction job aborted in R/3: it might have been cancelled for running longer than expected, or cancelled by R/3 users because it was hampering performance.
l) DataSource not replicated: if a new DataSource is created in the source system, you should replicate it in dialog mode; during the replication you can decide whether it should be replicated as a 3.x DataSource or as the new DataSource. If you do not run the replication in dialog mode, the DataSource is not replicated.
m) ODS activation errors, which occur mainly for the following reasons: 1. invalid characters (# like characters), 2. invalid data values for units/currencies etc., 3. invalid values for the data types of characteristics and key figures, 4. errors in generating SID values for some data.
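The invalid-character situation in point h) can be reproduced with a simple check. A minimal Python sketch; the permitted-character set below is invented, whereas in BW the real list is maintained in RSKC / table RSALLOWEDCHAR.

# Records whose characteristic values contain characters outside the permitted
# set would fail the load; in BW you either extend RSKC or fix them in the PSA.
PERMITTED = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 -_./")   # illustrative only

def invalid_records(records, field):
    return [r for r in records
            if any(ch not in PERMITTED for ch in str(r[field]).upper())]

psa = [{"material": "MAT-001"}, {"material": "MAT@002"}, {"material": "mat_003"}]
print(invalid_records(psa, "material"))   # [{'material': 'MAT@002'}]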

39. What is an index and how do you increase performance using indexes?
Ans: Indexes are used to improve the performance of data retrieval when executing queries or workbooks: when we execute a query and place values in the selection criteria, the indexes act as retrieval points for the data, so it can be fetched faster. A simple analogy is the index of a book: it gives the exact location of each topic, so you can go straight to the relevant page. In the same way, on the data level the indexes work on the BW side.

40. What is reconciliation?
Ans: Reconciliation is the comparison of the values in the BW targets with the source system data (R/3, ECC, SCM, SRM, JD Edwards, Oracle, etc.). In general this is done in three places: comparing the InfoProvider data with the R/3 data, comparing the query output with the R/3 or ODS data, and checking the key figure values available in the InfoProvider against the PSA key figure values.

41. I want to know which users are executing a particular report.
Ans: You can look into the tables RSZCOMPDIR or V_CMP_JOIN, where you can find details of the query name and the user who ran it; you can build a generic DataSource on these tables, load the data into a DSO and then build a query on it. Alternatively, use the BW Statistics cubes: see the technical content DataSources in RSA5, install them, and use the cubes starting with 0TCT_* for those DataSources, as well as 0BWTC_C02, 0BWTC_C03, 0BWTC_C04, 0BWTC_C05, 0BWTC_C09 and 0BWTC_C11.

42. Please tell me the disadvantages of the following and how to rectify them: (1) aggregates, (2) compression, (3) InfoCube partitioning, (4) indexes, (5) line item dimensions.
Ans:
(1) Aggregates: even though aggregates are used for performance, too many aggregates waste memory and slow the system down when they are maintained; until the rollup takes place, queries will not hit the aggregate. Their main disadvantage is that they store data physically in a redundant form, and maintenance/deletion is difficult.
(2) Compression: once a cube is compressed, the request numbers are removed, so deletion by request ID is no longer possible; a compressed request cannot be brought back to normal.
(3) InfoCube partitioning: handling several thousand partitions usually impacts DB performance; in 3.x, partitioning cannot be done after data has been loaded, although repartitioning is possible in BI 7.0.
(4) Indexes: if you do not drop the indexes before loading, the data load will be slow; if you do not (re)create the indexes before reporting, the reporting will be slow; for large data volumes, creating and deleting indexes consumes a lot of time.
(5) Line item dimensions: a line item dimension can be set only when there is a single characteristic in the dimension, so using many line item dimensions reduces the number of characteristics you can accommodate in the cube's dimensions.

43. How can we do selective deletion in data targets?
Ans: Using the program DELETE_FACTS: enter DELETE_FACTS in the command field, press Enter, give the InfoCube name, select the "Generate selection program" option and execute.

44. In process chains, what is the difference between Repair and Repeat?
Ans: Suppose a DTP is triggered in a process chain and, after some of the records are processed, it ends with an error. In the process chain you will then have the option to repair (not repeat). Repair continues with the same instance; Repeat creates a new instance. The repair/repeat option per process type is maintained in table RSPROCESSTYPES.

45. What are the views in a process chain?
Ans: There are three views in a process chain: the check view, the log view and the job overview. Earlier, in 3.0, there was a planning view instead of the check view; now we have these three views only.

46. What is the purpose of "no marker update"?
Ans: In a non-cumulative cube there is a marker, which is updated during compression; it is like a checkpoint giving the snapshot of the stock on the date it was updated, and it acts as the reference point during query execution. "No marker update" is selected when compressing requests whose movements are already reflected in the reference stock (for example historical movements loaded after the opening stock balance), so that the marker is not updated a second time.

47. What is the snapshot scenario?
Ans:
1) In simple terms, a "snapshot" scenario stores the data as of a given point in time. Suppose you want to see the monthly stock level: you upload the data every month; if you load data for the month of May it is the "May snapshot", and if you load again in June the data is as of June. Stock movements are not loaded; instead the status of the stock at that particular time is loaded. Because the value is stored directly in the cube, no calculation is required at query execution time; such cubes are mainly used for comparing figures between two time periods.
2) The alternative is a non-cumulative key figure, which internally consists of two cumulative key figures storing the inflow and outflow of stock movements. If you include a non-cumulative key figure in a cube, its value is not stored in the fact table; instead, all incoming stock is updated in the inflow key figure and stock moving out in the outflow key figure (in the update rules or transformation you therefore see these two key figures, not the non-cumulative one). At query execution time, the value of the non-cumulative key figure is calculated by taking the marker as the reference, adding the inflow and subtracting the outflow. Hence, if a lot of stock movement has happened, query performance suffers because a lot of calculation is involved at report time; the marker, which is updated during compression, is used to limit this.
More information on non-cumulative key figures: http://help.sap.com/saphelp_nw04/helpdata/en/80/1a62dee07211d2acb80000e829fbfe/frameset.htm
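A minimal Python sketch of the calculation described in question 47, with invented movement data; the marker value stands for the reference stock set at compression.

def stock_at(calday, marker, movements):
    """Non-cumulative logic: reference stock (marker) plus inflow minus outflow
    of all movements up to the requested day."""
    inflow = sum(m["qty"] for m in movements if m["type"] == "IN" and m["day"] <= calday)
    outflow = sum(m["qty"] for m in movements if m["type"] == "OUT" and m["day"] <= calday)
    return marker + inflow - outflow

moves = [{"day": "20240105", "type": "IN",  "qty": 40},
         {"day": "20240110", "type": "OUT", "qty": 15},
         {"day": "20240201", "type": "IN",  "qty": 10}]
print(stock_at("20240131", marker=100, movements=moves))  # 125
print(stock_at("20240228", marker=100, movements=moves))  # 135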

48. How and where can I control whether a repeat delta is requested?
Ans: Via the status of the last delta in the BW request monitor. If the request is red, the next load will be of type "repeat". If you need to repeat the last load for certain reasons, set the request in the monitor to red manually. For the contents of the repeat, see the question on delta repetition below. Note that delta requests set to red although their data has already been updated lead to duplicate records in the subsequent repeat if they have not been deleted from the data targets concerned beforehand.

49. If the delta load fails, how do I do the delta repetition?
Ans: A repeat delta is used when the previous delta load has failed; it picks up the previous delta as well as the current delta. Steps:
1. Set the technical status of the failed request to red.
2. Delete the red request from the data target.
3. Trigger/schedule the InfoPackage again.
4. When the InfoPackage is triggered, a pop-up asks whether you want to repeat the delta; read the pop-up message and click on "Request Again".

50. Where are the data packet size settings maintained?
Ans: This can be done in two ways: one setting is global (all loads) and the other is specific to one load.
Global: transaction RSCUSTV6. The packet size refers to the number of data records delivered within one packet with every upload from a flat file; the basic setting should be between 5000 and 20000, depending on how many data records you want to load.
Specific to one load: go to the InfoPackage, click on Scheduler in the menu at the top, select "DataS. Default Data Transfer", enter the maximum size of a data packet and the number of data packets, and click on Save.

If the data is coming from different DataSources, how do you identify from which DataSource a particular request came?
Ans: Even if the request is red, you can simply check the InfoPackage for that request in the monitor; there you can find the DataSource name.

51. If the data is coming from two DataSources, how are you going to separate them in a report?
Ans: If we are taking data from two DataSources and want to separate them at report level, we define an indicator during the data load which populates a constant value, say D1 for one DataSource and D2 for the other, in the data target. In the report we can then use a selection on this indicator.

52. What are a reference object, a template and a compound attribute?
Ans:
Reference object: you create a new InfoObject, but the master data tables of the InfoObject are those of the referenced InfoObject. Example: I have the InfoObject 0COUNTRY and I need to create ZCOUNTRY; I create ZCOUNTRY with reference to 0COUNTRY, so its attributes are the same as those of 0COUNTRY and are stored in the master data tables of the referenced InfoObject. This way it is not necessary to maintain the attributes of ZCOUNTRY separately, which makes administration of the country attributes easy.
Template: creates a copy, i.e. a new InfoObject copied from another InfoObject.
Compound attribute: a method to expand the key of the InfoObject. Example: 0COSTCENTER (cost center); we can have the same cost center in different controlling areas, which means the cost center has a dependency on the controlling area. To represent this in BI we add the controlling area as a compounding characteristic of the cost center.
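A minimal Python sketch of the indicator approach from question 51; the field name SOURCE_IND and the values D1/D2 are simply the example values used above, not a BW standard.

def load(records, indicator):
    """During the load from each DataSource, stamp every record with a constant
    indicator value (what a constant rule or a small routine would do in the transformation)."""
    return [{**r, "SOURCE_IND": indicator} for r in records]

target = load([{"doc": 1, "amount": 100}], "D1") + \
         load([{"doc": 2, "amount": 250}], "D2")

# In the report, a selection on the indicator separates the two sources:
print([r for r in target if r["SOURCE_IND"] == "D2"])
# [{'doc': 2, 'amount': 250, 'SOURCE_IND': 'D2'}]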

53. SAP BW query performance and aggregates.
Ans: The aspects of SAP BW query performance are a sound data model (dimensional modelling, logical partitioning, physical partitioning) and the BW reporting performance features (aggregates, pre-calculated web templates, the OLAP cache).

Definition: aggregates are materialized subsets of InfoCube data, where the data is pre-aggregated and stored in an InfoCube structure. Purpose: to accelerate the response time of queries by reducing the amount of data that must be read from the database for a navigation step. We build aggregates mainly when the database time is more than 30% of the query runtime; you can get this time from ST03.

OLAP processor - query splitter: the split of a query is rule-based. Parts of the query on different aggregation levels are split; parts on different hierarchy levels, or parts using different hierarchies, are split; parts with different selections on a characteristic are combined. After the split, the OLAP processor searches for an optimal aggregate for each part; parts that use the same aggregate are combined again (in some cases it is not possible to combine them).

Aggregates can be created for basic InfoCubes on dimension characteristics, on navigational attributes, on hierarchy levels, using time-dependent navigational attributes (as of BW 3.x) and using hierarchy levels where the structure is time-dependent (as of BW 3.x). When defining aggregates, the options per characteristic are: * (group according to characteristic or attribute value), H (group according to the nodes of a hierarchy level) and F (filter according to a fixed value). Aggregation using hierarchies: time-independent hierarchies are stored outside the dimension, in this example in a table called /BI0/ICOUNTRY.

Aggregate suggestions: suggestions can be based on the last entry of the database tables RSDDSTAT/RSDDSTATAGGRDEF for the current user, on the database tables RSDDSTAT/RSDDSTATAGGRDEF in general, or on the InfoCube BW Statistics; proposals can be restricted to queries with a minimum runtime, and you can choose the period to be used for the proposals. Aggregates suggested from BW Statistics get the name STAT <n>, or MIN <n> and MAX <n>.

Building good aggregates - tips for building (and maintaining) good aggregates: they should be relatively small compared to the parent InfoCube (try for summarization ratios of 10 or higher); find good subsets of data that are frequently accessed; build on some hierarchy levels, not all; not too specific and not too general, so that they serve many different query navigations; consider "component" aggregates; they should be frequently used and used recently (except basis aggregates).

Building bad aggregates - characteristics of bad aggregates: too many very similar aggregates; aggregates not small enough compared to the parent cube; too many "for a specific query" aggregates and not enough general ones; old aggregates that have not been used recently, or infrequently used and unused aggregates. Exceptions: a large aggregate containing navigational attributes may benefit performance despite its size (but remember the trade-off), and a basis aggregate may be large and not used for reporting but still be useful for maintenance.

Analysis tools: with the workload monitor (ST03) you can find the queries with the worst performance and try to optimize them. Useful features: expert mode, BW System Load -> analysis of table RSDDSTAT. Check the queries with the highest runtimes and check where most of the time has been consumed (OLAP, init, DB, OLAP front end), and check the ratio of selected to transferred records.

How to tell if an aggregate will help:
1. Call the query or InfoCube overview in the technical content or in ST03.
2. Sort by mean overall time to find the queries/InfoCubes with the highest runtimes.
3. Calculate the KPI "aggregation ratio" = number of records read from the database / number of records transferred.
4. Check the quota of database time against the total runtime.
As a rule of thumb, an aggregate will be helpful if the query statistics show a summarization ratio greater than 10 (i.e. ten times more records are read than are displayed) and a percentage of database time greater than 30%, i.e. the time spent on the database is a substantial part of the whole query runtime (a short numeric check of this rule is shown after question 54 below).

Overview of the reporting performance analysis tools: table RSDDSTAT, the queries of BW Statistics, using table RSDDSTAT as an InfoSource, ST03 (collecting information from table RSDDSTAT) and the function module RSDDCVER_RFC_BW_STATISTICS (BW workload analysis in ST03).

Creation of aggregates: right-click the cube and choose Maintain Aggregates; in the pop-up window select "Create by yourself"; on the left-hand side you find the InfoObjects, drag and drop them to the right-hand side and click on activate. In a process chain you can add the aggregate rollup under data target administration ("Roll up of filled aggregates").

To improve query performance, always remember: 1. your line item dimensions should be around 15% of your fact table; 2. compress your cube immediately; 3. aggregates should be built; 4. the cube should be partitioned.

54. What is the difference between migration and upgradation projects?
Ans: Upgradation generally refers to upgrading from an old version to a higher version of BW/BI itself, for example from BW 3.5 to BI 7.0. Migration refers to migrating from a different technology to BW, for example from an Oracle EDW to BW.
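Back to the aggregate rule of thumb under question 53: a minimal Python check with invented statistics values (in the system these figures come from RSDDSTAT / the technical content).

def aggregate_candidate(records_read_db, records_transferred, db_time, total_time):
    """Apply the rule of thumb: summarization ratio > 10 and DB share > 30%."""
    ratio = records_read_db / records_transferred
    db_share = db_time / total_time
    return ratio > 10 and db_share > 0.30, ratio, db_share

ok, ratio, share = aggregate_candidate(records_read_db=1_200_000,
                                        records_transferred=4_000,
                                        db_time=42.0, total_time=60.0)
print(ok, round(ratio), round(share, 2))   # True 300 0.7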

55. What is referential integrity?
Ans: Referential integrity is a feature provided by relational database management systems (RDBMSs) that prevents users or applications from entering inconsistent data. Most RDBMSs have various referential integrity rules that you can apply when you create a relationship between two tables. For example, suppose table B has a foreign key that points to a field in table A: referential integrity would prevent you from adding a record to table B that cannot be linked to table A. In addition, the rules might specify that whenever you delete a record from table A, any records in table B that are linked to it are also deleted (cascading delete), and that whenever you modify the value of a linked field in table A, all records in table B that are linked to it are modified accordingly (cascading update).
In SAP BW we use it during flexible updating, to check the transaction data records against the master data before loading, i.e. to check whether the load will post cleanly. We tick the corresponding option in the maintenance of the InfoSource -> communication structure. When you load data through a flexible InfoSource into a target, it is possible to enable referential integrity. With referential integrity enabled, after each transfer rule and just before passing the data to the communication structure, the system checks whether the master data exists: when you load a record like 0PLANT = XYZ, 0MATERIAL = 4711, the system checks whether 0PLANT XYZ and 0MATERIAL 4711 exist as master data; if not, the load fails with a referential integrity error. This check is normal, since BW needs a SID (surrogate ID) for each of the master data IDs before posting to a data target.
With no referential integrity enabled, the data passes through the communication structure, and just before posting the data in the target the system checks again whether the corresponding master data IDs exist for these InfoObjects; the SIDs of the master data are posted, as are the entries in the navigational attribute SID tables (/BI*/X* tables). If your InfoPackage is set to create master data while loading, the system creates the master data IDs with their corresponding SIDs; if not, the load fails again. There are therefore two spots for referential integrity checking: 1. in the communication structure of a flexible InfoSource (object by object), and 2. at InfoPackage level, globally.
All this makes sense: 1. if you do not have all attributes and texts for 0MATERIAL 4711 loaded beforehand, what is the point of posting it to a target and reporting on it? 2. When the system has to create master data IDs and SIDs during the load, it increases the loading time. Note that the check also prevents "garbage" from being posted: if a value is not expected (even from R/2), BW will complain. In summary, master data should always be loaded before transaction data and in the right order: if 0MATL_GROUP is an attribute of 0MATERIAL, it should be loaded prior to 0MATERIAL. This is the whole challenge of data warehousing, and doing it right is a key success factor of your project.

56. When is IDoc data transfer used?
Ans: IDocs are used for communication between logical systems like SAP R/3, R/2 and non-SAP systems using ALE, and for communication between an SAP R/3 system and a non-SAP system via an EDI interface. In BW, an IDoc is a data container for data exchange between SAP systems or between SAP systems and external systems. IDocs support a limited record size of 1000 bytes, so the IDoc transfer method is used only when the record size is less than 1000 bytes; IDocs are not used when loading data into the PSA, since the data there is more detailed.

57. What is the importance of the table ROIDOCPRMS?
Ans: It is the IDoc parameter table in the source system. This table contains the details of the data transfer, such as the source system of the data, the data packet size, the maximum number of lines in a data packet, and so on. The data packet size can be changed through the control parameters option in SBIW, i.e. the contents of this table can be changed there.
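A minimal Python sketch of how the packet-size parameter from question 57 translates into the number of data packets for a load; the numbers are invented, and the real values come from ROIDOCPRMS or the InfoPackage scheduler settings.

import math

def number_of_packets(total_records, max_lines_per_packet):
    """Each data packet carries at most max_lines_per_packet records."""
    return math.ceil(total_records / max_lines_per_packet)

print(number_of_packets(total_records=123_456, max_lines_per_packet=20_000))  # 7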

How to know in which table (SAP BW) contains Technical Name / Description and creation data of a particular Reports. The distribution of data becomes clear through central monitoring from the distribution status in the BW system. Char1 is an attribute of Char2 already and data is existed. Can we create the cube without time characteristic ? Ans:. 65. Record type(0RECORDTP). but what is the use?. You will find your information about technical names and description about queries in the following tables.For every load System will generates DataPackets Id so it will store in the Dimensional Dable of DataPackage. It is System defiend we can't change it. attri for Char2 from display attribute. 61. Using this.Yes. What is open hub service? Ans: . 59.Ans: . So after the change what else have to do or check because Char2 has been exited in many other InfoProviders . The central object for the export of data is the Infospoke. With this. you can ensure controlled distribution using several systems. 60. It will have the following information Change Run ID (0CHNGID). analytical applications. Directory of all reports (Table RSRREPDIR) and Directory of the reporting component elements (Table RSZELTDIR) for workbooks and the connections to queries check Where. Now as request Char1 has to be change to Nav. Can we create DSO without DATAFIELDS? Ans:. but no use. What are the characteristics in DATAPACKET ID DIMENTION TABLE? Ans:. you can define the object from which the data comes and into which target it is transferred. Reports that are created using BEx Analyzer.There is no such table in BW if you want to know such details while you are opening a particular query press properties button you will come to know all the details that you wanted.Yes.used list for reports in workbooks (Table RSRWORKBOOK) Titles of Excel Workbooks in InfoCatalog (Table RSRWBINDEXT) 62. Ans:.SOME DATA IS UPLOADED TWICE INTO INFOCUBE. You can't analyze the data in Cube.It allows us to select a particular value of a particular field and delete its contents.But how is it possible? If you load it manually twice.The open hub service enables you to distribute data from an SAP BW system into external data marts. and other applications. Through the open hub service. SAP BW becomes a hub of an enterprise data warehouse. then you can delete it by requestID. HOW TO CORRECT IT? Ans: . 63. Request ID(0REQUID) 64.

Ans: When you change an attribute from display to navigational, you are essentially changing the InfoObject Char2, so the object will go inactive. Mark Char1 as a navigational attribute of Char2 and activate the object; a warning will appear saying that Char2 exists in many InfoProviders - you can ignore it and activate. When you reactivate the InfoObject, the tables are recreated according to the new structure: the X table of Char2 will now contain the CHAR1 field, and data is also copied from the P table to the X table, so you do not need to perform any reloads. If Char2 already contains data, this is exactly how the system maintains the X table for Char1 even though that field did not exist there before the modification. A navigational attribute is never physically stored in the cube / InfoProvider, so this change is reflected in all InfoProviders that use CHAR2: you will see the attribute as CHAR2__CHAR1 in the navigational attributes section of each InfoProvider. By default these attributes are not switched on; for the InfoProviders where you want to use the attribute, check the checkbox next to it in the navigation attributes section (in edit mode of the InfoProvider, open the Navigation Attributes tree and mark CHAR2__CHAR1 as a navigation attribute), and on a MultiProvider also do the identification / assignment to specify from which provider the value will come. So essentially you need not reload or change any data. Hope this answers your question.

66. If I need to switch on one navigational attribute in a DSO (i.e. Char1 changed from display to navigational attribute of Char2), what else do I have to take care of, since a lot of data is already stored in this DSO?
Ans: Go to the InfoProvider for which you need CHAR2__CHAR1 as a navigational attribute: if you have a DSO, mark it as navigational in the DSO, and if there is a MultiProvider on top of the DSO, mark it as navigational in the MultiProvider as well (it depends on where you want to use it / where your query sits). Since there is no physical change in the data, no reload is required. Hope this helps.

67. How do you create a global query and a local query?
Ans: You can change the view in the report, i.e. between global and local, as in BW 3.5.

68. In the cube key figure properties, what is "Display"?
Ans: It controls how you want to display the values in reports, e.g. the decimal places and other display properties.

69. How many tables make up the fact table?
Ans: Two: the E fact table and the F fact table.

70. What happens when you compress the cube?
Ans: Once you compress the cube, the data moves from the F fact table to the E fact table.

71. What is the currency translation table name?
Ans: TCURR. You can carry out currency translation in various places: transfer rules / update rules / transformations, and at report level.

72. How much customization was done on the InfoCubes you have implemented?
Ans: In some cases we added navigational attributes (e.g. material type on the inventory cube); in other cases there was an extractor enhancement to add an additional characteristic, and this additional characteristic was also added to the cube (e.g. sales channel in the sales overview cube).

73. What is a key performance indicator (KPI)?
Ans: A performance indicator or key performance indicator (KPI) is a measure of performance. Such measures are commonly used to help an organization define and evaluate how successful it is, typically in terms of making progress towards its long-term organizational goals - you want to see how well it is doing. KPIs can be specified by answering the question "What is really important to different stakeholders?". KPIs may be monitored using Business Intelligence techniques to assess the present state of the business and to assist in prescribing a course of action; the act of monitoring KPIs in real time is known as business activity monitoring (BAM). KPIs are frequently used to "value" activities that are difficult to measure, such as the benefits of leadership development, engagement, service and satisfaction. KPIs are typically tied to an organization's strategy using concepts or techniques such as the Balanced Scorecard.
KPIs differ depending on the nature of the organization and the organization's strategy. They help to evaluate the progress of an organization towards its vision and long-term goals, especially toward difficult-to-quantify knowledge-based goals. A KPI is a key part of a measurable objective, which is made up of a direction, KPI, benchmark, target and time frame. For example: "Increase Average Revenue per Customer from £10 to £15 by EOY 2008" - here 'Average Revenue per Customer' is the KPI. KPIs should not be confused with Critical Success Factors: for the example above, a critical success factor would be something that needs to be in place to achieve that objective, for example an attractive new product. A KPI can follow the SMART criteria: the measure has a Specific purpose for the business, it is Measurable so you can really get a value for the KPI, the defined norms have to be Achievable, the KPI has to be Relevant to measure (and thereby to manage), and it must be Time-phased, which means the value or outcome is shown for a predefined and relevant period.
Performance indicators also differ from business drivers and aims (or goals): a school might consider the failure rate of its students as a KPI that helps it understand its position in the educational community, whereas a business might consider the percentage of income from return customers as a potential KPI. It is necessary for an organization to at least identify its KPIs. The key environments for identifying KPIs are: having a pre-defined business process (BP); having requirements for the business processes; having a quantitative/qualitative measurement of the results and a comparison with set goals; and investigating variances and tweaking processes or resources to achieve short-term goals.
In BW, in most cases the key figures are the KPIs; otherwise we create them as restricted or calculated key figures (RKF or CKF). For example, you want to see the performance (sales) of store 9999 for month 001.2010, so you create a report which shows the sales of the store for month 001.2010 - this sales figure is nothing but a KPI.

74. What is realignment?
Ans: Once you load master data, you need to run the attribute change run, so that the dependent data is adjusted as per the changes.

75. How many line item dimensions can we create?
Ans: There is no fixed limit beyond the number of dimensions - you can have up to 13 line item dimensions - but having too many line item dimensions leads to performance issues. We used a maximum of 4 in our project.

76. What is the difference between LO and FI DataSources, and what is the difference regarding delta?
Ans: LO transfers the data from the update queue to the delta queue and then to BW, but FI does not have this kind of scenario. For LO you need to fill the setup tables for the init loads and schedule the V3 job for the deltas.

77. What is the Analysis Process Designer (APD) T-code?
Ans: RSANWB.

78. In an ODS I deleted one request. Can I get it back again?
Ans: Yes - if you have the complete data in the PSA, you can load it from the PSA to the ODS again.

79. What is the difference between document date, posting date and invoice date?

Ans: Posting date: the date used when entering the document in Financial Accounting or Controlling; the posting date can differ from both the entry date and the document date. Document date: the date on which the original document was issued; in sales, it is the date the sales document was created (VA01). Invoice date: usually the date when goods are shipped; it is the billing date on which the order is billed, and payment dates are set relative to the invoice date. Examples are invoice date, bill date, etc. (for the sales flow, check the schedule lines for the material in VA03).

80. Exception aggregation
Ans: There are two ways to calculate and display such values: one is to calculate and store the values during the data load, the other is to calculate them at runtime. Calculating and storing during the load is not always possible, either due to the conditions on the calculation or due to constantly changing data, and it is typically very expensive to implement at data load level (practically unthinkable if your data volumes are very high), especially when you are looking at data at a higher level than invoice and line item and for variable time periods. There are two levels of exception aggregation. One typical example: consider sales for a customer - if today's sales for the customer are zero then take the previous day's sales, else take today's sales, and if both are zero, take zero. Since this cannot reasonably be done during the load, it should ideally be done at query level. Another example: calculate the average sales for a customer as the invoice amount divided by the number of distinct lines sold across invoices. If the customer has bought 10 distinct lines across 20 bills, then average sales = invoice amount across all these bills / 10, even though the total number of lines may be 40 with repeats across invoices. This would be an exception aggregation at the key figure level.


(Or) Exception aggregation is an extremely powerful concept when developing a query in BW. It can help aggregate data to the level that you want, irrespective of the level at which the data is stored. What happens when such an exception aggregation is executed: the data is fetched from the cube / DSO at the most detailed level, including the characteristic on which the exception aggregation is done, which amounts to huge volumes of data for very large cubes / DSOs. The calculation is not done while the data is fetched: the data is fetched first and the calculation is then done by the OLAP processor, meaning you will not see any of these formulas in the execution plan / SQL query in RSRT. Ideally, for very large cubes / DSOs, do not use exception aggregation unless you have no choice - try to use data load calculations if possible. If it cannot be done without exception aggregation, then follow these thumb rules:
1. Compress the cube regularly.
2. Update statistics regularly, preferably after each data load.
3. Watch the statistics very closely for DB time and OLAP time.
4. Observe the query costs in RSRT by going through the execution plan.
5. If possible, cache the query so that the data fetch time is reduced.

For example, you may have an InfoProvider at order item level and you may want to report on both the number of items and the number of orders in that InfoProvider. Assuming that the lowest level of detail in this DSO is the order item, how do you achieve this without creating another InfoProvider at order level? The answer is simple - follow these steps:
1. Create a new calculated key figure in the Query Designer.
2. Edit the key figure.

3. In the Reference Characteristic box, select the order characteristic. Then click on the Aggregation tab and set the exception aggregation to a counter over that reference characteristic. This key figure will then output the number of unique orders, thus allowing you to report on the number of orders in this DSO even though the DSO is at order item level.

4. Add this key figure to your query and run it.
You can also use exception aggregation to report on first, last, maximum and minimum values.

81. The DataSource is in production and deltas are running. You enhance the DataSource with a new field - how do you transport that DataSource, and what happens to the data and the loads?
Ans: Nothing happens to the data, because we are transporting only the structure. For the new field, historical data is not available; after moving the change to production, you will get data for that field only from then onwards. If you need the historical data for that new field as well, you have to delete the data and reload it.

82. How do you find all the inactive InfoObjects in the BW system, and how do you activate all of them?
Ans: a) Table RSDIOBJ with OBJVERS = 'M' gives you the modified objects, 'A' the active objects and 'D' the delivered objects. The inactive InfoObjects are the ones whose icons have a disabled appearance. Make sure you take only the M versions of those objects that do not have an A version, since some objects exist in both the A and the M state.
b) You can activate all of them using the program RSDG_IOBJ_ACTIVATE.
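Following up on the answer above, here is a minimal ABAP sketch that lists InfoObjects existing only in the modified (M) version and therefore still needing activation. The report name is a placeholder, and the RSDIOBJ field names should be verified in SE11 on your release:

  REPORT zbw_list_inactive_iobj.

  DATA: lt_m      TYPE TABLE OF rsdiobj-iobjnm,
        lt_a      TYPE TABLE OF rsdiobj-iobjnm,
        lv_iobjnm TYPE rsdiobj-iobjnm.

  " All InfoObjects that have a modified (M) version
  SELECT iobjnm FROM rsdiobj INTO TABLE lt_m WHERE objvers = 'M'.
  " All InfoObjects that have an active (A) version
  SELECT iobjnm FROM rsdiobj INTO TABLE lt_a WHERE objvers = 'A'.
  SORT lt_a.

  LOOP AT lt_m INTO lv_iobjnm.
    READ TABLE lt_a WITH KEY table_line = lv_iobjnm
         TRANSPORTING NO FIELDS BINARY SEARCH.
    IF sy-subrc <> 0.    " M version without an A version -> inactive object
      WRITE: / lv_iobjnm.
    ENDIF.
  ENDLOOP.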

83. What are the performance tools and performance areas?
Ans: Tools: SQL Trace (ST05), RSRT, RSRTRACE, RSRV, ABAP Trace (SE30), plus the DB and Basis (buffer) parameters.
OLAP: 1. Data model 2. Query definition (including OLAP features) 3. Aggregates 4. OLAP cache 5. Virtual key figures / characteristics 6. Authorizations.
Database: 1. Data model 2. Query definition 3. Aggregates 4. OLAP cache 5. Pre-calculated web templates 6. Compressing 7. Indices 8. DB statistics 9. DB and Basis (buffer) parameters.
Frontend: 1. Network 2. WAN and BEx 3. Client hardware 4. VBA / Java 5. Documents 6. Formatting 7. ODBO / 3rd-party SQL.

85. What is a rule group?
Ans: A rule group is a group of transformation rules. It contains one transformation rule for each key field of the target, and a transformation can contain multiple rule groups.
For example: the source contains three date characteristics - order date, delivery date and invoice date - while the target only contains one general date characteristic which, depending on the key figure, is filled from the different date characteristics in the source. You create three rule groups which, depending on the key figure, update the order date, delivery date or invoice date to the target.
Rule type "Initial" can be set only for key fields: the field is not filled, it remains empty. Rule type "No Update" can be set only for non-key fields: the key figure is not updated in the InfoProvider.

86. What is the technical name of the PSA in BI 7?
Ans: Right-click on the DataSource --> Manage; at the top of the window it will show you the PSA table name. Or you can check the RSTSODS table, giving the DataSource name suffixed by *. Or check in transaction RSA1OLD (as in BW 3.5): select any InfoCube, right-click and choose "Show data flow" - you will get the data flow on the right-hand side; switch "Technical name ON/OFF" and "Zoom in" to see both the technical names and the table names.

87. Cube design
Ans: We should design a cube based on our requirement, not on the number of InfoObjects. Once you know the basic rules of an InfoCube you will get a better picture; here is an overview:
1. An InfoCube can contain a maximum of 16 dimensions. Out of these 16, 3 dimensions are there by default: time, data packet and unit. So we can create 13 dimensions of our own.
2. Each dimension can contain 248 characteristics, so you can have 13 * 248 characteristics linked to your fact table (which is a lot of InfoObjects).
3. When you put characteristics into an InfoCube, you should club them into a dimension in such a way that they have a parent-child relationship, that is, a 1 : N relationship.
4. Most importantly, you should not put characteristics that have an M : N relationship together in the same dimension. For example, you should not put customer and material in the same dimension. This is because a

. You will get lots of documents in net. We have 2 types of Indices. There are much more facts to be taken care of when designing an infocube.. So there exists M : N relationship. Rule of thumb is that the size of dimension table should not exceed 10 to 15 % of the size of the fact table. Hope this will help you... 3 Leave one dimension for future enhancements. What is high cardinality flag? When we have to use? Ans:- Normally when the dimension table space occupies more than 20% of the fact table then that dimension can be declared as line item dimension.if the relation is 1 to many or many to one or 1 to 1. u can find the RSDEW_INFOCUBE_DESIGNS program and give the name of info cube ... you first check the relationships between the info objects.there will be mainly there types of relationships that info objects have there are 1. and my questions are.. 88... then u can’t know each dimension how much space occupied based on that u can declare line item dimension at that time u can use the high cardinality 89. else dimension table's size will grow more than 20% of Fact table. (or) We have only use 13 dimensions among the 16 dimensions of info cube.the criteria of selecting different info objects in to different dimensions.. 1) Primary Index 2) Secondary Index Primary Index: Automatically System generated Secondary Index: we have 2 types in Secondary Index a) Bitmap Index b) B-Tree Index When the cardinality of Dimension table is less than the 20% of the Fact table we go for Bitmap Index When the cardinality of Dimension table is more than the 20% of the Fact table we go for B-tree Index am i right? if i am wrong please correct me...customer can buy many materials and a material can be bought by many customers. In your case just analyse your infoobjects and include them in dimensions according to the rules specified above.then you can insert those info objects in to a single dimension 2. As a result dimension table will grow rapidly and it will result in bad performance.if the relationships is many to many then you must put those info objects in to different dimensions.

Ans: To find the fact table vs. dimension table ratio you can use: 1. the ABAP program SAP_INFOCUBE_DESIGNS, 2. transaction RSRV, 3. transaction LISTSCHEMA. LISTSCHEMA shows you all the tables involved in an InfoCube; from there you can manually find the number of records in the fact tables (the E and F tables) and in the dimension tables associated with them.
Indices are used to locate needed records in a database table quickly. BW uses two types of indices: B-tree indices for regular database tables and bitmap indices for fact tables and aggregate tables. Bitmap indices can dramatically improve query performance when table columns contain few distinct values. However, bitmap indices cannot handle inserts, updates and deletions well, which is why we delete the indexes before loading and rebuild them after loading. Line item dimensions use B-tree indices; apart from line item dimensions, the other dimensions use bitmap indices. If you use B-tree indices there is no need to delete the indexes before a load, as a B-tree can handle changes very well.
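As a quick manual cross-check of the fact vs. dimension table sizes discussed above, you can also count the rows directly. The cube name ZSALES below is purely hypothetical; /BIC/F, /BIC/E and /BIC/D<n> is the standard naming convention for customer InfoCubes:

  DATA: lv_tab  TYPE tabname,
        lv_rows TYPE i.

  " F fact table, E fact table and first customer dimension of a hypothetical cube ZSALES
  lv_tab = '/BIC/FZSALES'.
  SELECT COUNT(*) FROM (lv_tab) INTO lv_rows.
  WRITE: / lv_tab, lv_rows.

  lv_tab = '/BIC/EZSALES'.
  SELECT COUNT(*) FROM (lv_tab) INTO lv_rows.
  WRITE: / lv_tab, lv_rows.

  lv_tab = '/BIC/DZSALES1'.
  SELECT COUNT(*) FROM (lv_tab) INTO lv_rows.
  WRITE: / lv_tab, lv_rows.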

90. What joins are available? What is a temporal join?
Ans:
• Inner join: a record can only be in the result set if there are entries in both joined tables. Example: master data is loaded for 0MATERIAL and a cube holds transactional data with material; with an inner join only the common records (the intersection) are returned, i.e. the materials that exist both in the 0MATERIAL master data and in the cube.
• Left outer join: if there is no corresponding record in the right table, the record from the left table is still part of the result set. Example: with a left outer join on DSO 1, you will see all the common customer IDs with all fields populated, plus the customer IDs that exist only in DSO 1 with only the fields available in DSO 1 populated (the fields belonging to the right table have initial values).
• Temporal join: a join is called temporal if at least one member is time-dependent. For example, a join contains the time-dependent InfoObjects Cost center (0COSTCENTER) and Profit center (0PROFIT_CTR), each with its own "Valid from" / "Valid to" interval (in addition to other objects that are not time-dependent). Where the two time-intervals overlap is known as the valid time-interval of the temporal join, meaning the validity area that the InfoObjects have in common.
• Self join: the same object is joined with itself.

91. Why do we delete the setup tables (LBWG) and fill them (OLI*BW)?
Ans: Initially we don't delete the setup tables, but when we change the extract structure we do, because the change means there are newly added fields that were not there before. The extraction setup reads the dataset that you want to process (for example customer orders, from tables like VBAK and VBAP) and fills the relevant communication structure with the data; the data is stored in cluster tables, from where it is read when the initialization is run. So to get exactly the data that is required and to avoid redundancy, we delete and then refill the setup tables, refreshing the statistical data. It is important that during the initialization phase no one creates or modifies application data, at least until the setup tables have been filled.

92. What is the DataSource migration T-code?
Ans: RSDS.

93. What steps do we do after the initialization in LO extraction to pick up the delta?
Ans: After an init without data transfer, you need to run the V3 job (if data is already in RSA7, load the delta first and then set up the V3 job), then create an InfoPackage in BW, select Delta and run it. The InfoPackage doesn't know about delta by itself, so you need to select it. You can also run the V3 job manually; that is not a problem.

LO Extraction Steps: 1. Go to Transaction LBWE (LO Customizing Cockpit) Select Logistics Application  SD Sales BW  Extract Structures 2. Select the desired Extract Structure and deactivate it first 3. Give the Transport Request number and continue
4. Click on `Maintenance' to maintain such Extract Structure Select the fields of your choice and continue  Maintain DataSource if needed

5. Activate the extract structure 6. Give the Transport Request number and continue 7. Delete the content of Setup tables (T-Code LBWG) 8. Filling the Setup tables -SD Sales Orders – Perform Setup (T-Code OLI7BW) 9. Check the data in Setup tables at RSA3 10. Replicate the DataSource 11. Go to transaction LBWE and make sure the update mode for the corresponding DataSource is serialized V3 update 12. Go to BW system and create infopackage and under the update tab select the initialize delta process. And schedule the package. Now all the data available in the setup tables are now loaded into the data target 13. Now for the delta records go to LBWE in R/3 and change the update mode for the corresponding DataSource to Direct/Queue delta. By doing this record will bypass SM13 and directly go to RSA7. Go to transaction code RSA7 there you can see green light # Once the new records are added immediately you can see the record in RSA7 14. Go to BW system and create a new infopackage for delta loads. Double click on new infopackage. Under update tab you can see the delta update radio button. 15. Now you can go to your data target and see the delta record

101. How to link fields from cube to R/3 table/field? Ans:If it is LO dataSources you can see it in LBWE and click on Maintain and see the Table-Field names. Check in Following Tables.

DataSource (= OLTP Source) - Table ROOSOURCE Header Table for SAP BW DataSources (SAP Source System/BW System) RODELTAM BW Delta Procedure (SAP Source System) RSOLTPSOURCE Replication Table for DataSources in BW Mapping - Tables RSISOSMAP Mapping Between InfoSources and DataSources (=OLTP Sources) RSOSFIELDMAP Mapping Between DataSource Fields and InfoObjects

96. What tables are available in RSA7?

Ans: In RSA7 there are three tables: 1. TRFCQOUT (sender, receiver, delta queue name, user), 2. ARFCDATA (the LUWs, i.e. the actual delta data), 3. ARFCSTAT (the link between the two tables above). It is not practical to read them directly; if you want to read the delta queue, use function module RSC2_QOUT_READ_DATA.

97. Can we create a hierarchy DataSource using generic extraction?
Yes we can Generally, hierarchy sets are created (ECC) using tcode GS01. For this, DataSource created using tcode BW07. In BW07 give Set leaf table name as table for set field name is whatever you defined for set. And give hierarchy name for User's self-defined datasource name once you created datasources replicate it in BW and use it as normal datasources. In data selection of infopackage of Datasource you have to select your hierarchy from the list.

98.

How does generic delta works?

C:\SAP KM\SAP BI\ BW\Extraction\Genaric\How to Create Generic Delta.pdf

100. Generic steps:
We opt for generic extraction whenever the desired datasource is not available in business content or if it is already used and we need to

regenerate it . when u want to extract the data from table, view, infoset, function module we use generic extraction. In generic we create our own datasource and activate it . Steps: 1.Tcode is RSO2 2.Give the datasource name and designate it to a particular application component. 3. Give the short medium and long desc mandatory. 4. Give the table / Fm /view /infoset. 5. and continue which leads to a detail screen in which you can always hide, select ,inversion , field only options are available. HIDE is used to hide the fields .. it will not transfer the data from r/3 to BW SELECT -- the fields are available in the selection screen of the info package while u schedule it . INVERSION is for key figs which will operate with '-1' and nullify the value . Once the datasource is generated you can extract the data using it . And now to speak abt the delta ... we have .. 0calday , Numeric pointer , time stamp. 0calday -- is to be run only once a day that to at the end of the day and with a range of 5 min. Numeric Pointer -- is to be used for the tables where it allows only appending of records ana no change .. eg: CATSDB HRtime managent table . Timestamp: using this you can always delta as many times as possible with a upeprlimit . Whenever there 1:1 relation you use the view and 1:m you use FM. 101. Generic delta datasource with numeric pointer?

Ans:Delta in Generic extraction is based on Timestamp, Calday, and Numeric pointer. If you do not find Timestamp/calday then we can use Numeric pointer, again it depends on the requirement. Sequentially incrementing field like Sales document number is a good example for numeric pointer. Lets say 1000 is the staring number, 1001 , 1002 , changes will be collected, as a delta . Also, Numeric pointer allows only newly added records but not changed records. Check this one 1. scenario in which we use Numeric Pointer option in Generic Delta ? Ans- whenever you require delta based on some fields other than time reference you can use it .ex- employee id, gl account etc..

1999 is read.  These standard extractors use ALE pointers to enable Delta.when Numeric Pointer is used why we go for safety interval ? safety inteval is to avoid any loss of delta. 103. Display the datasource. the numeric pointer will not pick them. The default value for the key date is the date on which the query is executed. You can also select a variable key date: . Choose a date from the calendar. 2. Numeric Pointer picks only added records . Only if there is a change in those fields will it take effect as delta. . Go to RSO2 in the source system. time-dependent data for 01..1999. ie its intense is to not to miss any record because the last numeric pointer might not loaded the last changed records.. .. What is the importance of key date? Ans:Key Date Every query has a key date.Is there any specific types for Delta-specific field when we use Numeric Pointer ? I dont think so. that is < today>. Choose .01. you can use it for any filed. Go to Menu -> Datsource -> ALE Delta . 3. 102. You will need to check what are the fields that are enabled for delta. If you select 01. . for example. 3. and if u modify any record. .01. the key date determines the time for which the data is selected. 1.Lo extarctors basically works on Queue management but the the FI extarctors works on Time stamp.2. For time-dependent data.. Choose OK. WHAT IS THE DIFFERENCE BETWEEN LO & FI DATASOURCES? AND WHAT ARE THE DIFFERENCE BETWEEN ABOUT DELTA? Ans:. Check what is the table name and the change document object and the status of the fields of the table if these are enabled for the change document to get triggered. It only picks increasing records only. The Select Values for Date dialog box appears.

Choose OK.1. the 2. The variable editor appears and you can change the variable. The key date only applies to time-dependent master data. Choose Display Key/Text to show the key. If you want to change a variable. 3. For the selection. If you want to create a new variable. 104. select the variable and choose Change Variable. Select a variable.What is the difference between RKF and CKF? . The variable editor appears and you can create a new variable. choose New Variable. The new variable is displayed. From the context menu you access using the black arrow next to icon. choose Entry of Variables. you may need to know the technical names of the variables as well as their descriptions.

No of employees in south region. New Selection is also local and similar to Restricted keyfigure . boolean fucntions are available and other functions too. 2.What is the diff between CKF and free chars? A calculated key figure is a formula consists of basic. Once you save this Restricted and Calculated keyfigures they are saved globally and available in all quries on that infoprovider.e. restricted.The calculated key figure is valid only for the query in question.The calculated key figure is used in all queries that are based on the same InfoProvider. but that is local formula. Query level . 105. New formula is similar function to Calculated keyfigure. InfoProvider level . With free characteristics the user will be able to navigate i. drill across and the data will be displayed by the means by navigational steps.What things you will keep in mind before designing reports? .Restricted keyfigure is to create a keyfigure values based on some value of Charecteristic. percentage fuctions. other Calculated key figures available in the info provider stored in the Metadata repository for reuse in multiple queries. Calculated keyfigure own formulas can be created based on existing keyfigure with the available functions at formula or Caculated keyfigures. including drill down. mathe matical functions.. 106. Calculated key figures are defines at both: 1. Ex.

6. User Input date 1)1 April-5 April 2)6 April-10 April . In a paper or web-based report you cannot rely on color alone to convey differences in charts or tables. 8. Sketch tables and charts and plan the order of information that is included. Charts and graphs should add value and convey a single message. hard to understand reports. important background facts.1. headings. Then a real report can be designed. Use page numbers and specify restrictions on distribution and confidentiality. We define a RKF for this where the Date-Variable and Quantity are used. Educate people who create reports about effective reporting. Make decisions about titles. Don't create complex. Make the context of the report obvious to anyone who sees the report. Just because it is easy to create charts doesn't mean you need them. It is true that a chart often conveys a lot of of information in a decision compelling way. Discuss report design with the person who requests the report and be willing to help end users who are creating ad hoc reports. Decide what data to put in each report section and decide how to arrange the detail data. Use a text box at the beginning of the report to quickly state reporting objectives. but a chart needs annotation. Some managers want all of the detail in a report. Don't overload the user with too many numbers. Make a plan. 7. Always include in the header the date and title of the report. discrepancies or major indicators. When possible use color. 107. shading or graphics like arrows to highlight key findings.Have you used any time text variables? Ans:One of the Scenarios we have used text variable is Age Wise (Bucket Report) 1)The User will input the Dates as input variable and that should be displayed in the Columns for that KF. Create and follow report design guidelines. and data formats. That's usually because they don't trust the accuracy of the summarized data. Talk to managers/users. and limitations of the data. descriptive title and labels. Keep it simple and short! Shorter is better and for performance monitoring a one page report is ideal. Pages of detailed data is not a report. 4. 2. authorization. After some period of time. the manager may gain trust in the data and request more summarization. 3. 5.

108.3)11April-15 April So the report will be like Quantity (1 April-5 April) | Quantity (6 April-10 April)|Quantity (11 April-15 April) So here we have defined the Date as Text Variables such that we displayed the input Dates in the Report Column heading.How do you create global structures? . Variables for characteristic vales 2. Variables for Hierarchies and Hierarchy nodes 4. Variables for Text 109.What are the types of variable are there? Ans:1. Variables for formals 3.What are the variable processing types? Ans:User entry/default values Replacement path SAP Exit Customer exit Authorization object 110. How do you broadcast reports to portal? Ans:Using javaStack we will connect BI reports to portal 111.

expand it structure are two types. rather than the structure. Another advantage is that you have flexible use of restricted and calculated key figures. if you want to make any changes to this structure for only one query without effecting globally than. Defining local formulae in a structure prevents reuse of that formula as a standalone CKF / RKF in another query. so sometimes the best option available is to create CKF on the InfoProvider.Ans:Query designer --> right click on rows or columns --> create structure --> Add your elements in that structure and save the structrue with a technical name. 1) Structure with key figures 2) Structure without key figures in your created structure if you have key figures. the structure elements will be converted into objects available in the universe. So it would be global to that Multiprovider/ Infoprovider. then expand the Folder (structure with Keyfigures). you can just drag the structure to columns/rows like any other objects. including variables and formula variables. . If you are using BEx Queries as the basis for a universe. these structures will be available and valid for all the queries which you create on the same infoProvider. Disadvantages of structures are that they are not dynamic . Note: what ever the changes you made to these structures will effect globally. in the query panel remove the reference by right click-->remove reference.you need other means (heirarchy variables) in reports that have varying numbers of rows / columns. Create a Structure and save as by giving technical name and description. Local formulae defined in a structure do not always aggregate how you would want them to. now you can use this structure locally. in Query Designer top left side you can see a folder name as Structure.

. there are two rows and two columns with simple values. when there are two different formulae in rows (say: Summation) and column (say: Multiplication) structures. In reporting. it is unclear to the system how to calculate the formulas at the point where both formulas intersect.112. The following example clarifies the concept of formula collision: Column 1 Row 1 Row 2 Row 1 + Row 2 Value A Value C A+C Column 2 Value B Value D B+D Column 1 x Column 2 AxB CxD ? Formula Collision? In this example. In query definition. How may structures we can create in reports? How many default structures will be created? Ans:We can create maximum two structures one is characteristic structures and only one can be a key figure structure allowed in a BEx Report. (Or) Formula Collision The Formula Collision function is offered ONLY in the formulas property window. we have to give which formula we have to take into consideration for the cell in which formula collision occurs. the third row is a simple summation formula and the third column is a simple multiplication. In the cell in which the row and column formulas meet. formula collision comes into picture.What is formula collision? Ans:Formula collision means conflict of formulae. it is not clear which calculation should be made. 113. By default one structure created. When you define two structures. which both contain formulas.

you can determine which formula is used in the calculation. You can make the following settings in the Formula Collision field:
· Nothing defined: if you do not make a setting, the formula that was defined and saved last takes priority in a formula collision.
· Result of this formula: the result of this formula has priority in a collision.
· Result of competing formula: the result of the competing formula has priority in a collision.
In the example above, if you calculate according to the row formula in that cell, the cell contains (A+C)x(B+D); if you calculate according to the column formula, it contains (AxB)+(CxD) - the results give different values. Collisions always occur when point and dash calculations (multiplication/division and addition/subtraction) or functions are mixed in competing formulas. If there is only dash calculation or only point calculation in both formulas, both calculation directions give the same result and no setting is required for formula collision.

The differences between a formula and a calculated key figure (CKF):
1) A formula is local, whereas a calculated key figure is global: a CKF is available across all queries on the same InfoProvider, a formula only in the query where it is defined.
2) A formula does not have a technical name, whereas a CKF must have one.
3) All the formula functions from the formula builder are available when creating a formula, whereas only a subset of the functions is available for a CKF.
4) A formula is created in the Query Designer rows/columns, whereas a CKF is created in the left-side panel of the Query Designer (from the context menu of the Calculated Key Figures tree).
I want to create one formula that is valid only for a single query. How do I create it?
Ans: Create a local formula and not a CKF; when you create a formula, it is available only for that query.
If you are using BEx queries as the basis for a universe, the structure elements will be converted into objects available in the universe; in the query panel you can remove the reference by right-clicking and choosing "remove reference".

115. Can we create variables while defining exceptions or conditions?

Ans: You can also use formula variables as the reference value of a condition.

116. How do you delete a BEx query that is in the production system?
Ans: Use transaction RSZDELETE.

117. How do you place the company logo in reports?
Ans: Import the image into the SE80 MIME Repository under /sap/bw/Customer/Images. Then, assuming you are using WAD for your reporting, right-click on the template, choose Insert -> Image, and from the pop-up dialog select the image imported through SE80.

118. What are BW Statistics and what are they used for?
Ans: They are a group of Business Content InfoCubes used to measure performance for query and load monitoring. They also show the usage of aggregates, OLAP and warehouse management.

119. Sales documents and billing documents should appear in the same line in the report. How do you model this?
Ans: Creating a sales order cube and a billing cube with a MultiProvider on top will generate multiple records - you will not get the result in one line, because you cannot identify the billing document number from the sales cube, so billing forms another line in the report. To achieve this scenario, you have to consolidate both sales and billing DSOs into one consolidation DSO, which has the sales document number and billing document number as its key. First you load all billing data from the billing DSO into the consolidation DSO; all fields are mapped directly. Next, while loading from the sales DSO into the consolidation DSO, it is again direct mapping (you also get the sales document number in the source), and you write an end routine that reads the billing documents for the respective sales documents and writes them to the consolidation DSO. In the consolidation DSO you then get both sets of data in a single record and can report on it. Other options are using an InfoSet or using the "constant selection" option, but the choice mostly depends on the business need.
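A minimal sketch of the end routine mentioned above, placed in the transformation from the sales DSO into the consolidation DSO. All names used here (the active table /BIC/AZBILL00 of the billing DSO and the fields /BIC/ZBILLDOC and DOC_NUMBER) are assumptions for illustration only; in a real system you would use the names generated into the routine's result structure:

  * Body of the generated end routine method of a BI 7 transformation.
  * RESULT_PACKAGE and the generated type _ty_s_TG_1 are provided by the framework.
    FIELD-SYMBOLS <result_fields> TYPE _ty_s_tg_1.

    LOOP AT result_package ASSIGNING <result_fields>.
      " Look up the billing document for this sales document in the active
      " table of the (hypothetical) billing DSO ZBILL.
      SELECT SINGLE /bic/zbilldoc
        FROM ('/BIC/AZBILL00')
        INTO <result_fields>-/bic/zbilldoc
        WHERE doc_number = <result_fields>-doc_number.
    ENDLOOP.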

(Or) We can develop two DSOs, one each for sales and billing (using the VAITM and VDITM DataSources). Above that we can build a cube that takes all the data from the billing DSO and populates the corresponding sales order from the sales DSO using a lookup.

120. I would like to display on the report the date the data was uploaded. Usually we load the transactional data nightly. Is there an easy way to include this information on the report for the users, so that they know the validity of the report?
Ans: If I understand your requirement correctly, you want to display the date on which data was loaded into the data target from which the report is being executed. If so, configure your workbook to display the text elements of the report; among them is the data relevance field, which is the date on which the data load took place.

121. What are the different types of attributes?
Ans: Navigational attributes, display attributes, time-dependent attributes, compounding attributes, transitive attributes and currency attributes.

122. What are transitive attributes?
Ans: Navigational attributes that themselves have navigational attributes; these second-level navigational attributes are called transitive attributes.

How many calculated key figures can we create in one report?
Ans: There is no limit; you can go for any number.

Can we create a variable on a display attribute?
Ans: You can't, unless it is a navigational attribute.

123. What are I_STEP = 1, 2 and 3?
Ans: The variable exit may not be executed, or false data may be selected, when executing a query that contains customer exit variables with a replacement path that are filled dependent on the entry-ready variables. As a preemptive measure, you can control the dependencies with the parameter I_STEP. The enhancement RSR00001 (BW: Enhancements for Global Variables in Reporting) is called up several times during the execution of a report, and the parameter I_STEP specifies when the enhancement is called. The following values are valid for I_STEP:

· I_STEP = 1: the call takes place directly before variable entry. The code is executed once before the variable screen pops up; only after that is the variable screen displayed. For example, if we want to populate constant or default values in the code, we use this step.
· I_STEP = 2: the call takes place directly after variable entry. This step is only started when the same variable is not input-ready and could not be filled at I_STEP = 1. When the values have been entered on the variable screen, the code is executed; for example, we use this step if we want to calculate a value based on the values entered on the variable screen.
· I_STEP = 3: in this call you can check the values of the variables; after the user clicks Execute, the code is executed, so this step is typically used for data validation. Triggering an exception (RAISE) causes the variable screen to appear once more; afterwards, I_STEP = 2 is also called again.
· I_STEP = 0: the enhancement is not called from the variable screen; the call can come from the authorization check or from the monitor.

Values of other variables: when calling the enhancement RSR00001, the system transfers the currently available values of the other variables in the table I_T_VAR_RANGE. The table type is RRS0_T_VAR_RANGE and the row type RRS0_S_VAR_RANGE references the structure RRRANGEEXIT, which has the following fields:
VNAM - variable name
IOBJNM - InfoObject name
SIGN - (I)ncluding or (E)xcluding
OPT - operators: EQ, BT, LE, GE, GT, LT, CP and so on
LOW - characteristic value
HIGH - upper-limit characteristic value for intervals, or the node InfoObject for hierarchy nodes
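To make the I_STEP handling concrete, below is a minimal sketch of a customer exit variable written in include ZXRSRU01 of enhancement RSR00001 (function exit EXIT_SAPLRRS0_001). The variable names ZCE_PERIOD and ZDATE_IN, and the idea of deriving a period from a user-entered date, are assumptions for illustration only:

  DATA: l_s_range LIKE LINE OF e_t_range,      " result range for the exit variable
        l_s_var   LIKE LINE OF i_t_var_range.  " values of the other variables

  CASE i_vnam.
    WHEN 'ZCE_PERIOD'.                  " hypothetical customer exit variable
      IF i_step = 2.                    " only after the variable screen is filled
        READ TABLE i_t_var_range INTO l_s_var
             WITH KEY vnam = 'ZDATE_IN'.        " hypothetical entry-ready variable
        IF sy-subrc = 0.
          l_s_range-sign = 'I'.
          l_s_range-opt  = 'EQ'.
          l_s_range-low  = l_s_var-low(6).      " derive YYYYMM from the entered date
          APPEND l_s_range TO e_t_range.
        ENDIF.
      ENDIF.
  ENDCASE.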

Activities: a variable that is to be filled dependent on an entry-ready variable must never be filled in step I_STEP = 1, because at that point you are at a position prior to the variable entry and values have not yet been entered for the input-ready variables. You can insert the following statements to force the variable to be processed with I_STEP = 2 and not I_STEP = 1:
  CASE i_vnam.
    ...
    IF i_step <> 2.
      RAISE no_processing.
    ENDIF.
    ...
  ENDCASE.

124. Is an aggregate DB index required for query performance?
Ans: Yes, it improves report performance. Using the Check Indexes button you can check whether indexes already exist and whether the existing indexes are of the correct type (bitmap indexes). Yellow status display: there are indexes of the wrong type. Red status display: no indexes exist, or one or more indexes are faulty. You can also list missing indexes using transaction DB02, pushbutton Missing Indexes. If a lot of indexes are missing, it can be useful to run the ABAP reports SAP_UPDATE_DBDIFF and SAP_INFOCUBE_INDEXES_REPAIR.

125. Where is the 0RECORDMODE InfoObject used?
Ans: It is used in delta management; an ODS/DSO uses the 0RECORDMODE InfoObject for delta loads. 0RECORDMODE has values such as X, D and R: in a delta, X means the row is to be skipped, while D and R stand for deletion and removal/reversal of rows.

126. Why do you create aggregates? What is the valuation type in aggregates - there are some ++ and minus signs on that screen?
Ans: Aggregates are mini-cubes. When you execute a query on a cube (which, as you know, can hold very large volumes of data), the query has to search the entire cube for the information it requires, so reporting takes a long time. To get better reporting performance, aggregates are one way: here we aggregate the data in the cube as per the requirements of

0? But I know in Bw3. SKB1 Master DataBSIS and BSAS are the Transaction Data 2. SAP valuates how well your aggregates are used. 4. Account Receivables. We can aggregate as MAX. the one which is having background colour as exception reporting are with exceptions... Which will take less time for query execution as the query execution will have to search limited data base. which has less significance for reports.related VendorAll the MM related documents data when transfered to FI these are createdRelated Tables BSIK and BSAKAll the above six tables data is present in BKPF and BSEG tablesYou can link these tables with the hlp of BELNR and GJAHR and with Dates also. which the query don't have to search the entire database. you would not be able to judge whether the query is having any exception. Open queries in the BEX Query Designer. After creation of aggregates they should be made active.related to CustomerAll the SD related data when transfered to FI these are created. First it will search the aggregates for the related data. Asset ManagmentIn CO there are Profit center AccountingCost center Accounting will be there 128) How to create condition and exceptions in Bi. Special Purpose Ledger. If you are finding exception tab at the right side of filter and rows/column tab. There are some cases when you create 3 aggregates on a Cube. So the remaining 2 aggregates are a waste of space as well the waste of time to fill these aggregates.. the query is having exception. There may be a case when a aggregate is just a subset of a larger aggregate.2.. GL Accounting -related tables are SKA1.the query. 1. . Execute queries one by one. Account Payables . once the data finds in the aggregates the query will be executed in less time.? From a query name or description. There are two ways of finding exception against a query:1..5 version.So when a query executes.MIN etc. In such cases you will have minus sign. which is rarely used. It’s a SAP way of saying that your aggregates are either not properly built or not used.7. The required data will be maintained as smaller cubes. 5. If the sign is -ve you need to check what's going wrong. Related Tables BSID and BSAD 3. case studies or scenarios FI FlowBasically there are 5 major topics/areas in FI. 127) The FI Business Flow related to BW. but most of the queries are run on the first aggregate.

Like in above case putting filter Fiscyear>2006 willmake data from cube for yeaers 2001. In every query you have one structure for key figures.. Cell-specific definitions allow you to define explicit formulas..showing sales in 2007.Within that data. solving the errors related to data load. chC/% = chA/% + chB/% then:kfA kfB %chA 6 4 66%chB 10 2 20%chC 8 4 86% Manager Round Review Questions.. other than this you will also be doing some enhancements to the present cubes and master data but that done on requirement. .Suppose for example.. Then having two structures. These cells are not displayed and serve as containers for help selections or help formulas. In addition.2003. . 2. 131) Production support In production support there will be two kind jobs which you will be doing mostly 1.So query is only left with data to be shown from 2007 and 2008.. looking into the data load errors..Sales in 2007 is one RKF which is defined on keyfigure Sales restricted by Fiscyear = 2007Similarly. Now to meet your requirement.2002. 130) what is the use of Define cell in BeX & where it is useful? Cell in BEX: Use * When you define selection criteria and formulas for structural components and there are two structural components of a query.you can design your RKF to show only 2007 or something like that. solving the tickets raised by the user. then you have to do another structure with selections or formulas inside.You have got a key figure called Sales in your cube Now you will put global restriction at query level by putting Fiscyear > 2006 in the Filter. Then in cell editor you are enable to write a formula specifically for that cell as sum of the two cell before. RKF is restriction applied on a keyfigure. the cross among them results in a fix reporting area of n rows * m columns. 2004.2005 .Sales in 2008 is one RKF which is defined on Keyfigure Sales restricted by Fiscyear = 2008Now i think u understood the differenceFilter will make the restriction on query level. you want to analyze data only after 2006.2006 unavailable to the query for showing up.... 2008 against Materials. This function allows you to design much more detailed queries... generic cell definitions are created at the intersection of the structural components that determine the values to be presented in the cell. and selection conditions for cells and in this way. you can define cells that have no direct relationship to the structural components. Data loading involves monitoring process chains. The cross of any row with any column can be defined as formula in cell editor. This is useful when you want to any cell had a different behavior that the general one described in your query defininion. along with implicit cell definition.129) what is the difference between filter & Restricted Key Figures? Examples & Steps in BI? Filter restriction applies to entire query. This will make only data which have fiscyear >2006 available for query to process or show.like belowMaterial Sales in 2007 Sales in 2008 M1 200 300M2 400 700You need to create two RKF's. you need two structures to enable cell editor in bex.For example imagine you have the following where % is a formula kfB/KfA *100.kfA kfB %chA 6 4 66%chB 10 2 20%chC 8 4 50%Then you want that % for row chC was the sum of % for char and % chB. to override implicitly created cell values.

what is the technical of the . Regression testing when version/patch upgrade is done. what is the approach. 1. Normally the production support activities include * Scheduling * R/3 Job Monitoring * B/W Job Monitoring * Taking corrective action for failed data loads.helpline activities 3.Data Loading . Monitoring Process Chains Daily/weekly/ monthly 3. -SAP ABAP programming with BWData modeling.User will raise a ticket when they face any problem with the query. Coordinate and manage business / user testing Deliver training to key users Coordinate and manage product ionization and rollout activities Track CIP (continuous improvement) requests. Fiscal year and Fiscal Version – certain Info Objects should be available in the system. If available. Check Aggr's Rollup. to resolve each of the line items listed above. Let’s say Functional Spec says. 4. report designs Translate requirements into design specifications( report specs. Perform Change run Hirerachy 4. The calculations or formulas for the report will be displayed in precision of one decimal point.if the system response is slow or if the queries run time is high. 2. master data. Fiscal Version. 6. 1.could be using process chains or manual loads. Fiscal Year.so that they are used as user entry variable. business key users) Liase with key users to agree reporting requirements. 4. * Working on some tickets with small changes in reports or in AWB objects. source system developer. Creating adhoc hierarchies. work with users to prioritize. Resolving urgent user issues . The Company variable should be defaulted to USA but then if the user wants to change it. Monitoring Dataload failures thru RSMO 2. the user should be able to enter the Key date. they can check the drop down list and choose other countries. Technical Specs translate these requirements in a technical fashion. plan and manage CIP An SAP BW technical consultant is responsible for:SAP BW extraction using standard data extractor and available development tools for SAP and non-SAP data sources. To give the option of key date. The activities in a typical Production Support would be as follows: 1. Functional specs are also called as Software requirements. We can perform the daily activities in Production 1. Creating aggregates in Prod system 5. The report should return values for 12 months of data depending on the fiscal year that the user enters Or it should display in quarterly values. like report showing wrong values incorrect data etc. To create any variables. 2. functional specs) Write and execute test plans and scripts . 3. where do you do it. ODS and cube design in BWData loading process and procedures (performance tuning)Query and report development using Bex Analyzer and Query DesignerWeb report development using Web Application. Modifying BW reports as per the need of the user. star schema. many of which are executed by resources not directly managed by the project leader (central BW development team. Now from this Technical Spec follows. 132) An SAP BW functional consultant is responsible for the following: Key responsibilities include Maintain project plans Manage all project activities. 133) Give me one example of a Functional Specification and explain what information we will get from that? Functional Specs are requirements of the business user. then should we create any variables for them . data mapping / translation.

Then we are going to say the data flow and behavior of the data load (either delta or full) also we can tell the duration of the cube activation or creation.. then you don’t want to create any info providers or you don’t want to enhance any thing in the existing BW Business Content. Functional Specification: Here we will describe the business requirements. 4.. MM or FI. In my first project we implemented for Solution Manager BW implementation. For example. MM and FI etc. For that we have taken only existing info objects and created new info objects which are not there in the business content.. How will you get the 12 months of data. This document is going to mingle with both Function Consultants and Business Users.Security requirements to prevent unauthorized use . But the source system has new scenarios for message escalation. 3.Source systems that are involved and the scope of information needed from each.. 2. Same explanation goes for the rest. According their business scenario we couldn’t use standard business content.Any major transformation that is needed in order to provide the information. What will be the technical and display name of the report. data sources.objects you'll use. Pure BW technical things are available in this document. If your source system other that R3 then you should go with customization of your all objects.Intended audience and stakeholders and their analysis needs. info sources and info providers).. These functional requirements represent the scope of analysis needs and expectations (both now and in the future) of the end user. What changes in properties will do to get the precision. Because surely they should have included their new business scenario or new enhancements. This is not for End users document. etc. But 99% this is not possible. 134) who used to make the Technical and Functional Specifications? Technical Specification: Here we will mention all the BW objects (info objects. This document is applicable for end users also. There we have activated all the business content in CRM. 135) How do we decide what cubes has to be created? Its depends on your project requirement. then we are going to tell the KPI and deliverable reports detail to the users.Business reasons for the project and business questions answered by the implementation. Customized cubes are not mandatory for all the projects. ageing calculation etc. Normally your BW customization or creation of new info providers all are depending on your source system. If your business requirement is differs from given scenario (BI content cubes) then only we will opt for customized cubes. etc are clearly specified in the technical specs. If your source system is R3 and your users are using only R3 standard business scenarios like SD. what'll be the technical name of the objects you'll create as a result of this report. After that we have created custom data source to info providers as well as reports. That means here we are going to say which are all business we are implementing like SD. who'll be authorized to run this report. How do you set up the variable. These typically involve all of the following:. 136) How do we gather the requirements for an Implementation Project? One of the biggest and most important challenges in any implementation is gathering and understanding the end user and process team functional requirements.Critical success factors for the implementation.

137) What do we do in the Business Blueprint stage?
SAP has defined a business blueprint phase to help extract the pertinent information about your company that is necessary for the implementation. These blueprints are in the form of questionnaires that are designed to probe for information that uncovers how your company does business; as such, they also serve to document the implementation. Each business blueprint document essentially outlines your future business processes and business requirements. The kinds of questions asked are germane to the particular business function, as seen in the following sample questions: 1) What information do you capture on a purchase order? 2) What information is required to complete a purchase order?
Accelerated SAP question and answer database: the question and answer database (QADB) is a simple although aging tool designed to facilitate the creation and maintenance of your business blueprint. This database stores the questions and the answers and serves as the heart of your blueprint. Customers are provided with a customer input template for each application that collects the data. The question and answer format is standard across applications to facilitate easier use by the project team.
Issues database: another tool used in the blueprinting phase is the issues database. This database stores any open concerns and pending issues that relate to the implementation. Centrally storing this information assists in gathering and then managing issues to resolution, so that important matters do not fall through the cracks. You can track the issues in the database, assign them to team members, and update the database accordingly. Absolutely, this will save a lot of precious time.

138) What is a dashboard?
A dashboard can be created using the Web Application Designer (WAD) or the Visual Composer (VC). A dashboard is just a collection of reports, views, links, etc. in a single view; iGoogle, for example, is a dashboard. It is a graphical reporting interface which displays KPIs (Key Performance Indicators) as charts and graphs, and it acts as a performance management system. When we look at all of an organization's measures and how they are performing from a helicopter view, we need a report that shows the trend in a graphical display quickly. We could still report these measures individually, but by keeping all measures on a single page we create a single access point for users to view all the information available to them. This gives clarity on the decisions that need to be taken and helps the users understand the trend of the measures along with the business flow. These reports are called dashboard reports.
Creating a dashboard (dashboards can be built with Visual Composer and the WAD) in BW involves the following steps:
(1) Create all BEx queries with the required variants and tune them perfectly.
(2) Differentiate table queries and graph queries.
(3) Choose the graph type that meets your requirement.
(4) Draw the layout of how the dashboard page should look.
(5) Create a web template that has the navigational block / selection information.
(6) Keep the navigational block fields common across the measures.
(7) Include the relevant web items in the web template.
(8) Deploy the URL/iView to users through the portal/intranet.

The steps to be followed in the creation of a dashboard using the WAD are summarized below:
1) Open a new web template in the WAD.
2) Define the tabular layout as per the requirements so as to embed the necessary web items.
3) Place the appropriate web items in the appropriate tabular grids.
4) Assign queries to the web items (a query assigned to a web item is called a data provider).
5) Care should be taken to ensure that the navigation block's selection parameters are common across all the BEx queries of the affected data providers.
6) Set the properties of the individual web items as per the requirements; they can be modified in the Properties window or in the HTML code.
7) The URL generated when this web template is executed should be used in the portal/intranet.

139) Tell me about web templates?
You get information on where the web template details are stored from the following tables:
RSZWOBJ - storage of the web objects
RSZWOBJTXT - texts for templates/items/views
RSZWOBJXREF - structure of the BW objects in a template
RSZWTEMPLATE - header table for BW HTML templates
You can check these tables and search for your web template entry. If I understand your question correctly, you will then have to open the template in the WAD and make the corrections there.

140) Why do we have to construct setup tables?
The R/3 database structure for accounting is much easier than the logistics structure. Once you post to a ledger, that posting is done; you can correct it, but the correction just gives another posting. BI can therefore get its information directly out of this (relatively) simple database structure. In LO, however, you can have an order with multiple deliveries to more than one delivery address, and the payer can also be different. When one item (order line) changes, this can have its reflection on the order, supply, delivery, invoice, etc. Therefore a special record structure is built for logistical reports, and this structure is now used for BI. In order to have this special structure filled with your starting position, you must run a setup. From that moment on, R/3 will keep filling this LO database. If you did not run the setup, BI would only get data from the moment you start filling LO (with the logistics cockpit).
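As an illustration only (not part of the original answer), a quick way to see whether a setup table already holds records is a simple count in ABAP. The table name below is an assumption (MC11VA0ITMSETUP, the setup table belonging to the sales order item extract structure MC11VA0ITM); substitute the setup table of your own application. This is roughly what the interactive RSA3 check in the next answer verifies.

    * Illustrative check: does the setup table already contain records?
    * MC11VA0ITMSETUP is only an example name; setup tables are generally
    * named <extract structure> + 'SETUP' (verify the name in your system).
    DATA lv_count TYPE i.

    SELECT COUNT(*) INTO lv_count FROM mc11va0itmsetup.

    IF lv_count > 0.
      WRITE: / 'Setup table already contains', lv_count, 'records.',
             / 'Delete it via LBWG before a new setup run, if required.'.
    ELSE.
      WRITE: / 'Setup table is empty - run the statistical setup (e.g. OLI7BW).'.
    ENDIF.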

. 2.x and RSDDSTAT_DM for BI 7. Go to transaction code RSA7 there you can see green light # Once the new records are added immediately you can see the record in RSA7. Now all the data available in the setup tables are now loaded into the data target. Go to BW system and create a new info package for delta loads. the documents can be entered again. The cause of the termination should be investigated and the problem solved. 4. if problems that result in the termination of the statistics update occur. In OLI*** (for example OLI7BW for Statistical setup for old documents : Orders) give the name of the run and execute. 142) How can you decide the query performance is slow or fast? You can check that in RSRT tcode. V2 Update: V2 Update starts a few seconds after V1 Update and in this update the values get into Statistical Tables. Double click on new info package.sm59: Choose the . v3 jobs in extraction? V1 Update: when ever we create a transaction in R/3(e. Now for the delta records go to LBWE in R/3 and change the update mode for the corresponding DataSource to Direct/Queue delta. Now you can go to your data target and see the delta record. Go to transaction SBIW --> Settings for Application Specific Datasource --> Logistics --> Managing extract structures --> Initialization --> Filling the Setup table --> Application specific setup of statistical data --> perform setup (relevant application) 3. Go to BW system and create info package and under the update tab select the initialize delta process. Under update tab you can see the delta update radio button. 9. While updating.) and this takes place in V1 Update. 7.. the original documents are NOT saved. 6.DataSource.0 and press enter you can view all the details about the query like time taken to execute the query and the timestamps 143) Difference between v1. Go to transaction RSA3 and check the data. V3 Update: Its purely for BW extraction. By doing this record will bypass SM13 and directly go to RSA7. You need to maintain the login information for the logical system. VBAP. 8. Now all the available records from R/3 will be loaded to setup tables. V2 and V3 updates means? 144) what are statistical update and document update? Synchronous Updating (V1 Update) The statistics update is made synchronously with the document update. V1. If data is there in RSA3 then go to transaction code LBWG (Delete Setup data) and delete the data by entering the application name.g. v2. from where we do the extraction into BW. But in the Document below. And schedule the package. execute the query in RSRT and after that follow the below steps Goto SE16 and in the resulting screen give table name as RSDDSTAT for BW 3. Go to transaction LBWE and make sure the update mode for the corresponding DataSource is serialized V3 update. 5.Sales Order) then the entries get into the R/3 Tables(VBAK. Subsequently. V2 and V3 are defined in a different way. Radio button: V2 updating 145) How we do the SD and MM configuration for BW? You need to activate the data sources in R3 system. Can You please explain me in detial what exactly V1..

145) How do we do the SD and MM configuration for BW?
1. Activate the DataSources in the R/3 system.
2. In SM59, choose the RFC destination for the BW system and, under Logon & Security, maintain the user credentials (you need to maintain the login information for the logical system).
3. Maintain the control parameters for data transfer.
4. Fill in the setup tables via SBIW.
I feel that these are certain prerequisites. Next you need to understand the process flow that has been implemented at the client's place. From an SD perspective, you as a BW consultant should first understand the basic SD process flow on the R/3 side: how the SD data flows and what the integration points with other modules are, as well as how that integration happens. (Search the forum for "SD process flow" and you will get a wealth of information on the flow and the tables as well as the transactions involved in SD.) This knowledge is essential when modeling your BW design. From a BW perspective you need to first know all the SD extractors and what information they bring; next, look at all the cubes and ODS objects for SD.

146)
125) Shall I put a CKF in an RCKF?
126) What is the t-code for RRI?
127) What is the t-code for sales order and general ledger creation?
128) Can we use a CKF in an RKF? Yes.
129) Open hub and InfoSpoke?
130) Between an InfoCube and a DSO, which one is better suited for reporting? Explain, and what are the drawbacks of each one?
131) Early delta initialization?
132) I have loaded the data using V3 unserialized into a DSO. What happened in the background?
133) What is the "row count", and in which InfoProviders is it available?
134) In a write-optimized DSO do we have 0RECORDMODE?
135) If I put a key figure in the key fields area of a DSO, what happens?