1. Different kinds of extractors: LO Cockpit extractors are SAP standard / pre-defined extractors (DataSources) for loading data to BW. CO-PA is a customer-generated, application-specific DataSource; when we create a CO-PA DataSource we get different field selections, and there are no BI cubes delivered for CO-PA. Generic extractors: we create generic extractors from tables, views, queries / InfoSet queries, or function modules.

2. What's the difference between the extraction structure and the table in a DataSource?
a) The extraction structure is just a technical definition; it does not hold any physical data on the database. The reason you have it in addition to the table/view is that you can hide or deselect fields here, so that not the complete table needs to be transferred to BW.
b) In short: the extract structure defines the fields that will be extracted, and the table contains the records in that structure.
c) The table holds data but the extract structure doesn't. The extract structure is formed based on the table, and here we have the option to select the fields that are required for extraction, so the extract structure tells which fields are used for extraction.

3. Define V3 update (serialized and unserialized), direct delta and queued delta.
a) Direct delta: when the number of document changes between two delta extractions is small, you go for direct delta. The recommended limit is 10,000, i.e. if the number of document changes (creating, changing and deleting) between two successive delta runs is within 10,000, direct delta is recommended. Here the number of LUWs is higher, as the changes are not bundled into one LUW.
b) Queued delta: used if the number of document changes is high (more than 10,000). Here data is written into an extraction queue and from there it is moved to the delta queue; up to 10,000 document changes are cumulated into one LUW.
c) Unserialized V3 update: used only when it is not important that the data is transferred to BW in exactly the same sequence in which it was generated in R/3.
d) Serialized V3 update: this is the conventional update method, in which the document data is collected in the sequence of attachment and transferred to BW by a batch job. The sequence of the transfer does not always match the sequence in which the data was created. The basic difference is in the sequence of data transfer: in queued delta it is the same as the one in which the documents were created, whereas in serialized V3 update it is not always the same.

4. Difference between costing-based and account-based CO-PA
There are many differences between them, in areas like extraction, reporting, analysis, administration and so forth.
- Costing-based CO-PA is derived from value fields; account-based CO-PA is tied to GL account postings. CO-PA account-based accounting is based on account numbers, whereas cost accounting is based on cost centers.
- Account-based would be more exact to tie out to the GL. Costing-based is not easy to balance to the GL; it is more analytical, and you should expect differences if you do not pay attention to this while implementing costing-based CO-PA.
- Costing-based offers some added revaluation costing features. Implementing costing-based is much more work, but it also gives much more reporting possibilities, especially focused on margin analysis.
- Account-based CO-PA is for viewing at an abstract level, whereas costing-based gives the detailed level. 90% of the time we go for costing-based only; you get account-based along with it.
- CO-PA tables: the account-based CO-PA tables are COEJ, COEP, COSS and COSP.

5. What is the difference between SAP BW 3.0B and SAP BW 3.1C? The best answer here is Business Content. There is additional Business Content provided with BW 3.1C that wasn't found in BW 3.0B. SAP has a pretty decent reference library on their Web site that documents the additional objects found with 3.1C; for a detailed description, please refer to the documentation given on help.sap.com.

6. What is the difference between SAP BW 3.5 and 7.0? SAP BW 7.0 is called SAP BI and is one of the components of SAP NetWeaver 2004s.
1. No update rules or transfer rules (not mandatory in the data flow); instead of update rules and transfer rules, a new concept called transformations was introduced. Transformations are in and routines are passe! Yes, you can always revert to the old transactions too.
2. ODS is renamed as DataStore to meet the global data warehousing (EDW) standards. New DataStore types were introduced in addition to the standard and transactional ones.
3. In InfoSets you can now include InfoCubes as well.
4. The re-modeling transaction helps you add new key figures and characteristics, and handles historical data as well without much hassle. This facility is available only for InfoCubes.
5. The BI accelerator (for now only for InfoCubes) helps in reducing query run time by almost a factor of 10-100. This BI accelerator is a separate box and would cost more; vendors for these would be HP or IBM.
6. The monitoring has been improved with a new portal-based cockpit, which means you would need an EP person in your project for implementing the portal! :)
7. Search functionality has improved!! You can search any object.
8. And a lot more changes in the functionality of the BEx Query Designer, WAD etc.

7. What is data integrity? Data integrity is about eliminating duplicate entries in the database - data integrity means no duplicate data.
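The duplicate-elimination idea behind data integrity can be illustrated with a small Python sketch. The record layout and the (doc_no, item) primary key are hypothetical, not from any SAP table:

```python
# Sketch: enforcing data integrity by dropping duplicate entries.
# The (doc_no, item) key is a hypothetical primary key.

def deduplicate(records):
    """Keep the first occurrence of each primary key; drop repeats."""
    seen = set()
    unique = []
    for rec in records:
        key = (rec["doc_no"], rec["item"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

loads = [
    {"doc_no": "4711", "item": 10, "amount": 100.0},
    {"doc_no": "4711", "item": 20, "amount": 50.0},
    {"doc_no": "4711", "item": 10, "amount": 100.0},  # duplicate entry
]
clean = deduplicate(loads)  # two unique records remain
```

In a real load this check is enforced by the target's key definition rather than by hand-written code; the sketch only shows what "no duplicate data" means.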
8. What are the differences between ODS and InfoCube?
An ODS holds transactional-level data. It has three tables:
1. Active data table (a table containing the active data).
2. Activation queue table (for saving ODS data records that are to be updated but that have not yet been activated; the data is deleted after the records have been activated).
3. Change log table (contains the change history for delta updating from the ODS object into other data targets, such as ODS objects or InfoCubes, for example).
A cube, by contrast, holds aggregated data which is not as detailed as the ODS. The cube is based on the multidimensional model, which is useful in making strategic decisions.

9. Difference between display attribute and navigational attribute?
The basic difference between the two is that navigational attributes can be used to drill down in a BEx report, whereas display attributes cannot. Navigational attributes can be used for navigation in queries, like filtering, drill-down etc., as is possible for characteristics: a navigational attribute functions more or less like a characteristic within a cube. You can also use hierarchies on navigational attributes, as is possible for characteristics. An extra feature is that there is a possibility to change your history: if a navigational attribute changes for a characteristic, it is changed for all records in the past (please look at the relevant time scenarios). A disadvantage is also a slowdown in performance. To enable these features of a navigational attribute, the attribute needs to be made navigational in the cube, apart from in the master data InfoObject.

What is the use of a process chain? The use of a process chain is to automate the data load process - all processes including data loads and all administrative tasks like index creation/deletion, cube compression etc. Highly controlled data loading.

What are KPIs (Key Performance Indicators)? (1) Predefined calculations that render summarized and/or aggregated information, which is useful in making strategic decisions. (2) Also known as performance measures or performance metrics. KPIs are put in place and visible to an organization to indicate the level of progress and status of change efforts in an organization. KPIs are industry-recognized measurements on which to base critical business decisions. In SAP BW, Business Content KPIs have been developed based upon input from customers, partners, and industry experts to ensure that they reflect best practices.

If there is duplicate data in cubes, how would you fix it? Delete the request ID, fix the data in the PSA or ODS, and re-load again from the PSA / ODS.

What is an index? Indexes are used to locate needed records in a database table quickly. BW uses two types of indexes: B-tree indexes for regular database tables, and bitmap indexes for fact tables and aggregate tables.
An ODS is a flat structure, just a flat table; it is not based on the multidimensional model. A cube is composed of multiple tables arranged in a star schema, joined by SIDs. In an ODS we can delete / overwrite the data load, but in a cube only adding is possible.

10. Difference between InfoSet and MultiProvider
a) The operation in a MultiProvider is "union", whereas in an InfoSet it is either "inner join" or "outer join".
b) You can add InfoCubes, ODS objects and InfoObjects to a MultiProvider, whereas in an InfoSet you can only have ODS objects and InfoObjects. These InfoProviders are connected to one another by a union operation.
d) A union operation is used to combine the data from these objects into a MultiProvider. Here, the system constructs the union set of the data sets involved; in other words, all values of these data sets are combined.

11. What is the T-code for data archival and what is its advantage? SARA. Advantage: to minimize space, and to improve query performance and load performance.

12. What is the use of the change log table? The change log is used for delta updates to the target: it stores all changes per request and updates the target.

13. What are the data loading tuning measures from R/3 to BW, and flat file to BW?
b) Watch out for the ABAP code in transfer and update rules; this might slow down performance.
c) If you have several extraction jobs running concurrently, make sure you schedule these jobs judiciously.
d) If you have multiple application servers, try to do load balancing by distributing the load among the different servers.
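The overwrite-versus-add distinction, and the change log's role in bridging it, can be sketched in Python. The before-image/after-image convention (old value with reversed sign, then the new value) mirrors how an ODS change log feeds additive delta updates; the field names are invented:

```python
# Sketch: ODS overwrite semantics producing change-log entries.
# A before image (old value, sign reversed) plus an after image lets a
# downstream cube, which can only add, still arrive at the right total.

active = {}      # active data table: key -> key figure
change_log = []  # change history, per request

def load_to_ods(key, value):
    if key in active:
        change_log.append((key, -active[key]))  # before image, sign reversed
    change_log.append((key, value))             # after image
    active[key] = value                         # the ODS overwrites

load_to_ods("order1", 100)
load_to_ods("order1", 120)  # a correction overwrites in the ODS

# The cube is additive only: summing all change-log images still gives 120.
cube_total = sum(v for _, v in change_log)
```

This is only a conceptual model of the mechanism, not the actual table layout of an ODS change log.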
Most of the time you use an ODS for line item data; then you aggregate this data into an InfoCube. One major difference is the manner of data storage: in an ODS, data is stored in flat tables - by flat I mean ordinary transparent tables; it is just one table that contains all the data - whereas a cube's purpose is to do multidimensional reporting, with no overwrite.

c) An InfoSet is an InfoProvider that joins data from ODS objects and InfoObjects (with master data), whereas a MultiProvider can be created on all types of InfoProviders - cubes, ODS objects, InfoObjects. As a comparison: InfoSets are created using joins. These joins only combine values that appear in both tables; in contrast to a union, joins form the intersection of the tables. The join may be an outer join or an inner join.

13 (continued):
a) If you have enhanced an extractor, check your code in user exit RSAP0001 for expensive SQL statements and nested selects, and rectify them.
With several extraction jobs running concurrently, there probably are not enough system resources to dedicate to any single extraction job.
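The union-versus-join contrast described above can be sketched with plain Python lists standing in for InfoProviders; the provider contents and the customer key are invented:

```python
# Sketch: MultiProvider = union (all rows from all providers);
# InfoSet inner join = intersection (only keys present in both sets).

sales = [("C1", 100), ("C2", 200)]   # rows from one provider
returns = [("C2", 20), ("C3", 5)]    # rows from another provider

# Union: every row of every provider is kept.
multiprovider = sales + returns

# Inner join on the customer key: only C2 appears in both data sets.
returns_by_key = dict(returns)
infoset = [(k, v, returns_by_key[k]) for k, v in sales if k in returns_by_key]
```

Note how the union keeps all four rows, while the inner join keeps only the one key common to both sides - exactly the intersection behaviour described above.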
e) Build secondary indexes on the underlying tables of a DataSource to correspond to the fields in the selection criteria of the DataSource (indexes on source tables).
f) Try to increase the number of parallel processes so that packages are extracted in parallel instead of sequentially (use the "PSA and data target in parallel" option in the InfoPackage).
g) Buffer the SID number ranges if you load a lot of data at once.
h) Load master data before loading transaction data.
i) Use SAP-delivered extractors as much as possible.
j) If your source is not an SAP system but a flat file, make sure that this file is housed on the application server and not on the client machine. Files stored in ASCII format are faster to load than those stored in CSV format.

14. Performance monitoring and analysis tools in BW
a) System trace: transaction ST01 lets you do various levels of system trace, such as authorization checks, SQL traces, RFC trace and buffer trace. It is a general Basis tool but can be leveraged for BW.
b) Workload analysis: you use transaction ST03.
c) Database performance analysis: transaction ST04 gives you all that you need to know about what's happening at the database level.
d) Performance analysis: transaction ST05 enables you to do performance traces in different areas, namely SQL trace, enqueue trace and table/buffer trace etc.
e) BW technical content analysis: SAP standard Business Content 0BWTCT, which needs to be activated. It contains several InfoCubes, ODS objects and MultiProviders with a variety of performance-related information.
f) BW monitor: you can get to it independently of an InfoPackage by running transaction RSMO, or via an InfoPackage. An important feature of this tool is the ability to retrieve important IDoc information.
g) ABAP runtime analysis tool: use transaction SE30 to do a runtime analysis of a transaction, program or function module. It is a very helpful tool if you know the program or routine that you suspect is causing a performance bottleneck.

15. Difference between transfer rules and update rules
a) Transfer rules: when we maintain the transfer structure and the communication structure, we use the transfer rules to determine how we want the transfer structure fields to be assigned to the communication structure InfoObjects. We can arrange for a 1:1 assignment, and we can also fill InfoObjects using routines, formulas, or constants. Update rules specify how the data (key figures, time characteristics, characteristics) is updated into data targets from the communication structure of an InfoSource; you are therefore connecting an InfoSource with a data target.
b) Transfer rules are linked to an InfoSource; update rules are linked to an InfoProvider (InfoCube, ODS). Transfer rules are source-system dependent, whereas update rules are data-target dependent. The number of transfer rules would be equal to the number of source systems for a data target. If you have several InfoCubes or ODS objects connected to one InfoSource, you can, for example, adjust the data according to each of them using update rules.
c) Using transfer rules you can assign DataSource fields to the corresponding InfoObjects of the InfoSource. Transfer rules give you the possibility to cleanse data before it is loaded into BW - transfer rules are mainly for data cleansing and data formatting - whereas in the update rules you would write the business rules for your data target. Update rules describe how the data is updated into the InfoProvider from the communication structure of an InfoSource.
Only in update rules:
a. You can use return tables, which would split an incoming data package record into multiple ones.
b. Currency translations are possible in update rules; currency conversion is not possible in transfer rules.
c. If you have a key figure that is calculated using the base key figures, you would do the calculation only in the update rules.

16. What is OSS? OSS is the Online Support System run by SAP to support its customers. You can access it by entering transaction OSS1, or visit Service.SAP.Com and access it by providing your user name and password.

17. How to unlock objects in the Transport Organizer? To unlock a transport, go to SE03 --> Request/Task --> Unlock Objects. Enter your request, select unlock and execute. This will unlock the request.

18. How to transport BW objects? Follow these steps:
i. RSA1 > Transport connection.
ii. In the right window there is a category "all objects according to type".
iii. Select the required object you want to transport.
iv. Expand that object; there is "select objects"; click on it and go with the selection.
v. Double-click on this and you will get the objects; select the required objects you want to transport and continue.
vi. There is an icon Transport Objects (truck symbol). Click it; it will create one request. Note down this request.
vii. Go to the Transport Organizer (T-code SE01).
viii. In the display tab, enter the request and go with display.
ix. Check whether your transport request contains the required objects or not; if not, go with edit; if yes, "Release" that request.
x. That's it; your coordinator/Basis person will move this request to Quality or Production.
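The division of labour just described - cleansing and formatting on the way in, business rules per data target - can be sketched as two stages in Python. Everything concrete here (the field names, the margin rule) is an invented example, not an SAP structure:

```python
# Sketch: transfer-rule-style cleansing followed by an
# update-rule-style business calculation for one data target.

def transfer_rules(record):
    """Cleanse and format: trim/normalise text, type the key figures."""
    return {
        "material": record["material"].strip().upper(),
        "revenue": float(record["revenue"]),
        "cost": float(record["cost"]),
    }

def update_rules(record):
    """Business rule for this target: derive a calculated key figure."""
    record["margin"] = record["revenue"] - record["cost"]
    return record

raw = {"material": " abc ", "revenue": "100.0", "cost": "60.0"}
loaded = update_rules(transfer_rules(raw))
```

The point of the two-stage shape: if a second target needed a different margin rule, only its update-rule stage would differ, while the cleansing stage would be shared - matching the source-system-dependent vs data-target-dependent split described above.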
19. What is an InfoPackage group? An InfoPackage group is a collection of InfoPackages.

20. Differences between InfoPackage groups and process chains
i. InfoPackage groups are used to group only InfoPackages (automation of a group of InfoPackages, only for data loads), whereas process chains are used to automate all the processes, including data loads and all administrative tasks like index creation/deletion, cube compression etc. Highly controlled data loading.
ii. InfoPackage groups: used to group all relevant InfoPackages in a group; it is possible to sequence the loads in order.
iii. In process chains we can use ABAP programs and a lot of additional features, like ODS activation and sending emails to users based on the success or failure of data loads.
InfoPackage groups / event chains are the older methods of scheduling/automation; process chains are newer and provide more capabilities.

21. What is the use of setup tables in LO extraction? The use of setup tables is to store your historical data in them before updating it to the target system. Once you fill up the setup tables with the data, you need not go to the application tables again and again, which in turn will increase your system performance.

22. Difference between start routine and conversion routine: in the start routine you can modify data packages during data loading, whereas a conversion routine usually refers to routines bound to InfoObjects (or data elements) for conversion between internal and display format.

23. What is a conversion routine?
a) Conversion routines are used to convert data types from internal format to external/display format or vice versa.
b) These are function modules.
c) There are many such function modules; they are of type CONVERSION_EXIT_XXXX_INPUT / CONVERSION_EXIT_XXXX_OUTPUT. Example: CONVERSION_EXIT_ALPHA_INPUT, CONVERSION_EXIT_ALPHA_OUTPUT.

24. What are the critical issues you faced and how did you solve them? Find your own answer based on your experience.

25. The R/3 to ODS delta update is good, but the ODS to cube delta is broken. How to fix it?
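The ALPHA conversion exits named above pad purely numeric values with leading zeros on input and strip them again for display. A Python approximation of that behaviour (the real exits are ABAP function modules, and the 10-character length here is just an example):

```python
# Sketch of CONVERSION_EXIT_ALPHA_INPUT / _OUTPUT behaviour.

def alpha_input(value, length=10):
    """External -> internal: right-justify digits, pad with leading zeros;
    non-numeric values are left-justified instead."""
    value = value.strip()
    return value.zfill(length) if value.isdigit() else value.ljust(length)

def alpha_output(value):
    """Internal -> external/display: strip the leading zeros again."""
    return value.lstrip("0") or "0"

internal = alpha_input("4711")
display = alpha_output(internal)
```

This is why a customer number looks like 0000004711 in the database but 4711 in a report: the INPUT exit runs when data is stored, the OUTPUT exit when it is displayed.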
i. Check the monitor (RSMO) for the error explanation; based on the explanation, we can check the reason.
ii. Check the timings of the delta loads from R/3 - ODS - cube, whether they conflict after the ODS load.
iii. Check the mapping of the transfer/update rules.
iv. Check whether it fails in the RFC connection.
v. Check whether BW is not set as a source system.
vi. Check whether there is an error load before the last one, and so on.

26. What is a short dump and how to rectify it?
A short dump indicates that an ABAP runtime error has occurred and the error messages have been written to the R/3 database tables. You can view the short dump through transaction ST22 - you can use ST22 in both R/3 and BW - and even analyze it there. You get short dumps because of runtime errors, which could have many reasons: time out, SQL errors, full table space, an IDoc not received correctly, and so on. Alternatively, to call an analysis method, choose Tools --> ABAP Workbench --> Test --> Dump Analysis from the SAP Easy Access menu. If the short dump is due to the termination of a background job, you can give the job's technical name and your user ID; it will show the status of the jobs in the system.
SAP BW Questions - Some Real Questions
1. Differences b/w 3.0 and 3.5
2. Differences b/w 3.5 and BI 7.0
3. Can you explain a life cycle in brief?
4. Difference b/w table & structure
5. Steps of LO
6. Steps of LIS
7. Steps of generic
8. What is an index and how do you increase performance using them?
9. How do you load deltas into ODS and cube?
10. Example of errors while loading data and how do you resolve them
11. How do you maintain work history until a ticket is closed?
12. What is reconciliation?
13. What is the methodology you use before implementation?
14. What are the roles & responsibilities when you are in implementation, and also while in support?

Major differences between SAP BW 3.0/3.5 & SAP BI 7.0:
1. In InfoSets you can now include InfoCubes as well.
2. The remodeling transaction helps you add new key figures and characteristics and handles historical data as well without much hassle. This is only for InfoCubes.
3. The BI accelerator (for now only for InfoCubes) helps in reducing query run time by almost a factor of 10-100. This BI accelerator is a separate box and would cost more. Vendors for these would be HP or IBM.
4. The monitoring has been improved with a new portal-based cockpit, which means you would need an EP person in your project for implementing the portal! :)
5. Search functionality has improved!! You can search any object. Not like 3.5.
6. Transformations are in and routines are passe! Yes, you can always revert to the old transactions too.
7. The Data Warehousing Workbench replaces the Administrator Workbench.
8. Functional enhancements have been made for the DataStore object: a new type of DataStore object, and enhanced settings for performance optimization of DataStore objects.
9. The transformation replaces the transfer and update rules.
10. New authorization objects have been added.
11. Remodeling of InfoProviders supports you in Information Lifecycle Management.
12. The DataSource: there is a new object concept for the DataSource. From BI, remote activation of DataSources is possible in SAP source systems.
13. BI supports real-time data acquisition (RDA).
14. There are functional changes to the Persistent Staging Area (PSA).
15. SAP BW is now known formally as BI (part of NetWeaver 2004s). It implements Enterprise Data Warehousing (EDW). Performance optimization includes the new BI accelerator feature.
16. Options for direct access to data have been enhanced.

It looks like we would not have the option to bypass the PSA? Yes, load through PSA has become mandatory; you can't skip this, and there is also no IDoc transfer method in BI 7.0 (I am not too sure about this). The DTP (Data Transfer Process) replaced the transfer and update rules.

New features in BI 7 compared to earlier versions:
i. One level of transformation. This replaces the transfer rules and update rules. Also, in the transformation we can now write a start routine, expert routine and end routine.
ii. New data flow capabilities, such as the Data Transfer Process (DTP) and Real-time Data Acquisition (RDA).
iii. Enhanced and graphical transformation capabilities, such as drag-and-relate options.
iv. User management (includes a new concept for analysis authorizations) for more flexible BI end-user authorizations.
v. Load through PSA has become a must.

==============================
2A. The new features / major differences include:
a) ODS renamed as DataStore.
b) Inclusion of the write-optimized DataStore, which does not have any change log and whose requests do not need any activation.
c) Unification of transfer and update rules.
d) Introduction of the "end routine" and "expert routine".
e) Push of XML data into the BI system (into the PSA) without the Service API or delta queue.
f) Introduction of the BI accelerator, which significantly improves performance.
g) Load through PSA has become a must.
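The "one level of transformation with start, expert and end routines" described above can be sketched as a single pipeline. The routine contents (the quantity filter, the rounding) are invented examples of what such routines might do:

```python
# Sketch: BI 7-style transformation as one step with start and end
# routines, replacing the separate transfer-rule and update-rule layers.

def start_routine(package):
    """Runs once per data package before field mapping: drop unwanted rows."""
    return [r for r in package if r["qty"] > 0]

def field_mapping(record):
    """Rule per target field (direct assignment or formula)."""
    return {"material": record["mat"].upper(), "quantity": record["qty"]}

def end_routine(package):
    """Runs on the whole mapped result: final adjustments."""
    for r in package:
        r["quantity"] = round(r["quantity"], 3)
    return package

def transformation(package):
    return end_routine([field_mapping(r) for r in start_routine(package)])

result = transformation([{"mat": "abc", "qty": 2.5}, {"mat": "x", "qty": 0}])
```

Contrast this single pass with the old two-layer flow (transfer rules into the communication structure, then update rules per target): in 7.0 the package-level hooks and the field rules live in one object.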
3A. When they refer to the full life cycle, it's the ASAP methodology which SAP recommends following for all its projects: 1. Project preparation, 2. Business blueprint (including fit-gap analysis), 3. Realization, 4. Final preparation, 5. Go-live. Normally, in the first phase all the management people will sit in a discussion. In the second phase you will get a functional spec and, basing on that, you will have a technical spec. In the third phase you will actually implement the project and finally, after testing, you will deploy it for production, i.e. go-live. You might fall under and get involved in the realization phase; if it's a support project, you will come into the picture only after successful deployment.

=========================
5A. Steps of LO: LO extraction is a one-stop shop for all the logistics extractions. LO DataSources come as part of the Business Content in BW; we need to transfer them from the Business Content and then activate them from the delivered version D to the active version A.
- Go to transaction RSA5 and transfer the desired DataSource.
- Then use LBWE to maintain the extract structure. All the fields in blue are mandatory; we can also select and hide fields here. If the required fields are not available, we go for DataSource enhancements.
- LBWG: to delete the setup tables. Why? We need to delete the setup tables since we need to delete the data that is already in them, and also because we will change the extract structure by adding the required fields from the R/3 communication structure.
- OLI*BW: statistical initialization (filling the setup tables).
- LBWQ: to delete the extractor queue; SM13: to delete the update queue; LBWF: to get the log.
- Once the initial run is over, set the delta method and switch the queued delta to a periodic collective run for further delta loads. Why? It's always recommended to have queued delta as the update method, since it uses the collective run to schedule all the changed records.

=========================
6A. Steps of LIS: LIS uses information structures to extract the data. The range of information structures is 000 to 999: 000-499 is for SAP structures and 500-999 for customer-defined ones. We need to consider the two cases: SAP-defined or customer-defined.
SAP-defined: go to transaction LBW0, give the information structure, and select the option "display settings" to know the status. Select the option "setup LIS environment"; it will create two tables and one structure with the naming convention 2LIS_<appl no>_BIW1, 2LIS_<appl no>_BIW2 and 2LIS_<appl no>_BIWS. The two tables are used to enable the delta records interchangeably - which one is active we come to know in table TMCBIW. (If you select the "generate datasource" option it will throw an error saying that you cannot create in the SAP name range.) You can use OMO1 to fill the tables, then do the full upload; after that, set up the delta by selecting "setup delta", then go to LBW1 to change the version and to LBW2 to set up the delta update and the periodic job. Now you can load the delta updates.
Customer-defined: you need to create the field catalogs (MC18 create, MC19 change, MC20 display), the information structure (MC21 create, MC22 change, MC23 display) and the update rules (MC24 create, MC25 change, MC26 display). Then go to LBW0, give the IS name and select "setup LIS environment"; it will create the two tables and the structure. Now you do the full load, and after that go to LBW1 to change the version and then to LBW2 to set up the delta update and the periodic job.
You can set whether the delta is enabled or not using the activate/deactivate delta option. In both cases, while migrating the data you need to lock the setup tables (SE14) to prevent users from entering transactions, and after completion you need to unlock them.
LIS provides information structures up to the level of detail, but LO is preferred to LIS in all aspects: enhanced performance, and deletion of the setup tables after updating, which we never do in LIS.
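The queued delta mechanics recommended above - document changes collect in an extraction queue, and a periodic collective run moves them to the delta queue, bundling many changes into one LUW - can be sketched as follows. The queue names and the bundle cap are illustrative (the documented cap is 10,000 changes per LUW):

```python
# Sketch: queued delta with a periodic collective run.

extraction_queue = []   # filled at posting time, one entry per document change
delta_queue = []        # read by the BW delta load, one entry per LUW

def post_document_change(change):
    extraction_queue.append(change)

def collective_run(max_per_luw=10000):
    """Bundle queued changes into LUWs and move them to the delta queue."""
    while extraction_queue:
        luw = extraction_queue[:max_per_luw]
        del extraction_queue[:max_per_luw]
        delta_queue.append(luw)

for i in range(25):
    post_document_change({"doc": i})
collective_run(max_per_luw=10)
# 25 changes become 3 LUWs (10 + 10 + 5); with direct delta each change
# would have been its own LUW, i.e. 25 of them.
```

This is why queued delta is preferred for high change volumes: fewer, larger LUWs instead of one LUW per document change.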
7A. Steps of generic: when you want to extract data from a table, view, InfoSet query or function module, we use generic extraction. We opt for generic extraction whenever the desired DataSource is not available in Business Content, or if it is already used and we need to regenerate it. (Whenever there is a 1:1 relation you use a view; for 1:m you use a function module. Example: CATSDB, the HR time management table.) Steps:
1. The T-code is RSO2.
2. Give the DataSource name and designate it to a particular application component.
3. Give the short, medium and long descriptions (mandatory).
4. Give the table / function module / view / InfoSet.
5. Select and continue, which leads to a detail screen in which the hide, selection, inversion and field-only options are available. HIDE is used to hide a field: it will not transfer the data from R/3 to BW. With SELECTION, the field is available in the selection screen of the InfoPackage while you schedule it. INVERSION is for key figures, which will operate with -1 and nullify the value.
6. Once the DataSource is generated, you can extract the data using it.
And now to speak about the delta, we have:
- Timestamp: using this you can run the delta as many times as needed, with an upper limit.
- 0CALDAY: the delta is to be run only once a day, at the end of the day, and with a range of 5 minutes.
- Numeric pointer: to be used for tables which allow only appending of records and no changes.

=========================
8A. INDEX: indexes are used to improve the performance of data retrieval while executing queries or workbooks. The moment we execute, once we place the values in the selection criteria, the indexes act as the retrieval point of the data and fetch the data in a faster manner. A common example: if you observe any book which has an index, the index gives the exact location of each topic; based on it we can go directly to that particular page and continue. In a similar manner, the indexes act at the data level on the BW side.

=========================
9A. Errors while loading data:
1. RFC connection failed. Sol: raise a ticket to the BASIS team to provide the connection.
2. Data error in the PSA. Sol: modify the erroneous data in the PSA and load.
3. Timestamp error. Sol: activate the DataSource, replicate the DataSource and load.
4. Short dump error. Sol: delete the request and load once again.
More generally, loads can fail because:
a) of invalid characters;
b) of a deadlock in the system;
c) of a previous load failure, if the load is dependent on other loads;
d) of erroneous records;
e) of RFC connections (sol: raise a ticket to the BASIS team to provide the connection);
f) of missing master data;
g) of no data found in the source system;
h) Invalid characters while loading: when you are loading data you may get special characters like @#$%, and then BW will throw an error like "invalid characters". You then need to go to transaction RSKC, enter all the invalid characters and execute; it will store this data in table RSALLOWEDCHAR. Then reload the data: you won't get the error any more, because these are now eligible characters.
i) ALEREMOTE user is locked: normally, ALEREMOTE gets locked due to an SM59 RFC destination entry having an incorrect password. Unfortunately it's not possible [to see this directly]. You should be able to get a list of all SM59 RFC destinations using ALEREMOTE by using transaction SE16 to search field RFCOPTIONS for a value of "*U=ALEREMOTE". You will need to look for this information in any external R/3 instances that call the instance in which ALEREMOTE is getting locked as well.
j) Lowercase letters not allowed: look at your InfoObject description, whether "Lowercase letters allowed" is set.
k) Extraction job aborted in R/3: it might have got cancelled due to running for more than the expected time, or it may have been cancelled by R/3 users if it was hampering performance.
l) DataSource not replicated: if a new DataSource is created in the source system (ECC or SCM or SRM), you should replicate this DataSource in 'dialog' mode. During the replication, you can decide whether this DataSource should be replicated as a 3.x DataSource or as the new DataSource. If you do not run the replication in 'dialog' mode, the DataSource is not replicated.
m) ODS activation error: ODS activation errors can occur mainly due to the following reasons: 1. invalid characters (# like characters); 2. invalid data values for units/currencies etc.; 3. invalid values for data types of characteristics & key figures; 4. error in generating SID values for some data.

===================================
10A. A ticket is nothing but an issue or a process error which needs to be addressed. There are two types of tickets:
- ito tickets, which are usually generated by the system automatically when a process fails; for example, when a process chain fails to run it will generate an ito ticket, which we need to address to find the fault;
- non-ito tickets, which are the issues the clients face and which are forwarded for correction or alternative action.
But this depends on the software you are using; if you're using Remedy for tickets, ask your admin.

===========================
11A. Reconciliation is nothing but the comparison of the values between the BW target data and the source system data, like R/3, Oracle, JD Edwards etc. In general this process is done at three places:
- comparing the InfoProvider data with the R/3 data;
- comparing the query display data with the R/3 or ODS data;
- checking the key figure values available in the InfoProvider against the PSA key figure values.
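The three-way reconciliation described above boils down to comparing key-figure totals between source and target. A minimal sketch - the row data, the key-figure name and the tolerance are all hypothetical:

```python
# Sketch: reconcile a BW target's key-figure total against the source.

def reconcile(source_rows, target_rows, key_figure, tolerance=0.01):
    """Return (source_total, target_total, ok) for one key figure."""
    src = sum(r[key_figure] for r in source_rows)
    tgt = sum(r[key_figure] for r in target_rows)
    return src, tgt, abs(src - tgt) <= tolerance

r3_rows = [{"amount": 100.0}, {"amount": 250.5}]   # source line items
bw_rows = [{"amount": 350.5}]                      # aggregated in the target
src, tgt, ok = reconcile(r3_rows, bw_rows, "amount")
```

In practice the same comparison is repeated at each of the three levels named above (InfoProvider vs R/3, query vs R/3/ODS, InfoProvider vs PSA), usually per characteristic combination rather than as one grand total.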
====================
12A ASAP Methodology:
1. Project Preparation, in which the project team is identified and mobilized, the project standards are defined, and the project work environment is set up.
2. Blueprint, in which the business processes are defined and the business blueprint document is designed.
3. Realization, in which the system is configured, knowledge transfer occurs, extensive unit testing is completed, and data mappings and data requirements for migration are defined.
4. Final Preparation, in which final integration testing, stress testing and conversion testing are conducted, and all end users are trained.
5. Go-Live and Support, in which the data is migrated from the legacy systems, the new system is activated, and post-implementation support is provided.
========================
13A Responsibilities of an implementation project: the responsibilities vary depending on the requirement. For example, let's say it is a fresh implementation of BI, or for that matter you are implementing SAP. First and foremost will be requirements gathering from the client; basically a question-and-answer session with the business users, in which the business analyst initially interacts with the end users, managers etc. In our example, after installing the necessary software and patches for BI, we need to discuss with the end users who are going to use the system to get inputs such as how they want a report to look and what the Key Performance Indicators (KPIs) for the reports are. Depending upon the requirements you create a business blueprint of the project, which covers the entire process from the start to the end of the implementation. After the blueprint phase is signed off, we start the realization phase, where the actual development happens: based on the requirements, the software consultants do the development in the development servers. After the development comes to an end, the same objects are tested in the quality servers for any bugs, errors etc., and the testers do the testing. When all the tests are done, we move all the objects to the production environment and test again whether everything works fine. Then the go-live of the project happens: the actual postings are made by the users, and reports are generated based on those inputs, which are available as analytical reports for the management to take decisions.

What are the objects that we perform in a production support project? In production support, most projects generally work on monitoring their loads (from R/3 or non-SAP sources to the data targets in BW). It varies from project to project, because some of them use process chains (PCs) and some use event chains.
What are the different transactions that we use frequently in a production support project? Please explain them in detail.
Generally, in a production support project we check the loads using RSMO to monitor them, and we rectify the errors by step-by-step analysis.
The consultant is required to have access to the following transactions in R/3:
1. ST22
2. SM37
3. SM58
4. SM51
5. RSA7
6. SM13
Authorizations for the following transactions are required in BW:
1. RSA1
2. SM37
3. ST22
4. ST04
5. SE38
6. SE37
7. SM12
8. RSKC
9. SM51
10. RSRV
The Upload Monitor (transaction RSMO, or RSRQ if the request is known) is used to monitor data loads.
The Workload Monitor (transaction ST03) shows important overall key performance indicators (KPIs) for the system performance.
The OS Monitor (transaction ST06) gives you an overview of the current CPU, memory, I/O and network load on an application server instance.
The database monitor (transaction ST04) checks important performance indicators in the database, such as database size, database buffer quality and database indices.
The SQL trace (transaction ST05) records all activities on the database and enables you to check long runtimes on a DB table or several similar accesses to the same data.
The ABAP runtime analysis (transaction SE30) checks the runtime of ABAP programs.
The Cache Monitor (accessible with transaction RSRCACHE or from RSRT) shows, among other things, the cache size and the currently cached queries. The Export/Import shared buffer determines the cache size; it should be at least 40 MB.
The Process Chain Maintenance (transaction RSPC) is used to define, change and view process chains.
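RSKC in the BW list above maintains the extra characters that are permitted in loaded data (kept in table RSALLOWEDCHAR, as described under the load-failure reasons). As a rough illustration of that idea in Python, not actual SAP code, a pre-check that flags records containing characters outside an allowed set could look like this (the field name `matnr` and the default allowed set are simplified assumptions):

```python
# Simplified stand-in for BW's default permitted characters:
# uppercase letters, digits and space, plus whatever extras
# have been maintained (the RSKC role).
DEFAULT_ALLOWED = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 ")

def invalid_chars(value, extra_allowed=""):
    """Return the set of characters in `value` not covered by the allowed set."""
    allowed = DEFAULT_ALLOWED | set(extra_allowed)
    return {ch for ch in value if ch not in allowed}

def precheck(records, field, extra_allowed=""):
    """Flag records whose `field` would fail the load with 'invalid characters'."""
    return [(i, sorted(invalid_chars(r[field], extra_allowed)))
            for i, r in enumerate(records)
            if invalid_chars(r[field], extra_allowed)]

rows = [{"matnr": "ABC123"}, {"matnr": "X@Y#Z"}, {"matnr": "lower"}]
print(precheck(rows, "matnr"))        # '@', '#' and the lowercase letters are flagged
print(precheck(rows, "matnr", "@#"))  # after maintaining '@' and '#' (RSKC-style), only lowercase remains
```

This also mirrors the lowercase-letters error above: a value fails until either the characters are maintained as allowed or the InfoObject permits lowercase.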
*-- Anu radha