
Learn the answers to some critical questions commonly asked in SAP BO Data Services interviews.

1. What is the use of BusinessObjects Data Services?


BusinessObjects Data Services provides a graphical interface that allows you to easily create jobs that
extract data from heterogeneous sources, transform that data to meet the business requirements of
your organization, and load the data into a single location.
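The extract-transform-load flow described above can be sketched in plain Python. This is a conceptual illustration only, not the Data Services engine: the source names (crm, erp), the fields, and the business rule are all invented for the example.

```python
# Minimal sketch of the extract-transform-load pattern that Data Services
# automates graphically. All source data here is hypothetical.

def extract(sources):
    """Pull rows from several heterogeneous sources into one stream."""
    for source in sources:
        yield from source

def transform(rows):
    """Apply a business rule: normalize names and drop inactive customers."""
    for row in rows:
        if row["active"]:
            yield {**row, "name": row["name"].strip().title()}

def load(rows, target):
    """Write the transformed rows to a single target location."""
    target.extend(rows)

crm = [{"name": " alice ", "active": True}]   # hypothetical source 1
erp = [{"name": "BOB", "active": False}]      # hypothetical source 2

warehouse = []
load(transform(extract([crm, erp])), warehouse)
print(warehouse)  # only the active, cleaned-up row remains
```

In Data Services the same three stages are drawn as a dataflow: source objects, a chain of transforms, and a target table.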

2. Define Data Services components.


Data Services includes the following standard components:

- Job Server
- Access Server
- Real-time Services
- Address Server
- Cleansing Packages, Dictionaries, and Directories
- Management Console

3. What are the steps included in the Data integration process?

Answer:
- Stage data in an operational datastore, data warehouse, or data mart.
- Update staged data in batch or real-time modes.
- Create a single environment for developing, testing, and deploying the entire data integration platform.
- Manage a single metadata repository to capture the relationships between different extraction and access methods and provide integrated lineage and impact analysis.

4. Define the terms Job, Workflow, and Dataflow.

Answer:
- A job is the smallest unit of work that you can schedule independently for execution.
- A work flow defines the decision-making process for executing data flows.
- Data flows extract, transform, and load data. Everything having to do with data, including reading sources, transforming data, and loading targets, occurs inside a data flow.

5. Arrange these objects in order by their hierarchy: Dataflow, Job, Project, and Workflow.

Answer: Project, Job, Workflow, Dataflow.

6. What are reusable objects in Data Services?

Answer: Job, Workflow, Dataflow.
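The containment hierarchy described above can be made concrete with a small nested data structure. This is just an illustration; the project, job, and workflow names are invented.

```python
# Hypothetical sketch of the object hierarchy: a Project contains Jobs,
# a Job contains Workflows, and a Workflow contains Dataflows (where all
# data movement actually happens). All names below are invented.
project = {
    "name": "SalesProject",
    "jobs": [{
        "name": "DailyLoadJob",            # smallest independently schedulable unit
        "workflows": [{
            "name": "LoadCustomersWF",     # decision-making / ordering layer
            "dataflows": ["DF_Extract_Customers", "DF_Load_DW"],
        }],
    }],
}

# Walking top-down reproduces the hierarchy: Project > Job > Workflow > Dataflow
for job in project["jobs"]:
    for wf in job["workflows"]:
        print(project["name"], ">", job["name"], ">", wf["name"], ">", wf["dataflows"])
```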

7. What is a transform?

Answer: A transform enables you to control how datasets change in a dataflow.

8. What is a Script?

Answer: A script is a single-use object that is used to call functions and assign values in a workflow.

9. What is a real-time Job?

Answer: Real-time jobs "extract" data from the body of the real-time message received and from any secondary sources used in the job.

10. What is an Embedded Dataflow?

Answer: An Embedded Dataflow is a dataflow that is called from inside another dataflow.

11. What is the difference between a datastore and a database?

Answer: A datastore is a connection to a database.

12. How many types of datastores are present in Data Services?

Answer: Three.
- Database Datastores: provide a simple way to import metadata directly from an RDBMS.
- Application Datastores: let users easily import metadata from most Enterprise Resource Planning (ERP) systems.
- Adapter Datastores: can provide access to an application's data and metadata or just metadata.

13. What is the use of Compact repository?

Answer: To remove redundant and obsolete objects from the repository tables.

14. What are Memory Datastores?

Answer: Data Services also allows you to create a database datastore using Memory as the Database type. Memory Datastores are designed to enhance processing performance of data flows executing in real-time jobs.

15. What are file formats?

Answer: A file format is a set of properties describing the structure of a flat file (ASCII). File formats describe the metadata structure. File format objects can describe files in:
- Delimited format — characters such as commas or tabs separate each field.
- Fixed width format — the column width is specified by the user.
- SAP ERP and R/3 format.

16. Which is NOT a datastore type?

Answer: File Format.

17. What is a repository? List the types of repositories.

Answer: The Data Services repository is a set of tables that holds user-created and predefined system objects, source and target metadata, and transformation rules. There are 3 types of repositories:
- A local repository
- A central repository
- A profiler repository
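The two flat-file layouts that a file format object can describe (delimited and fixed width) are easy to sketch. The field values and column widths below are hypothetical; a real file format object also carries metadata such as data types and date formats.

```python
# Sketch of the two flat-file layouts a file format object can describe.
# Field names and widths are invented for illustration.

def parse_delimited(line, sep=","):
    """Delimited format: a separator character (comma, tab, ...) splits fields."""
    return line.rstrip("\n").split(sep)

def parse_fixed_width(line, widths):
    """Fixed width format: the user specifies each column's width."""
    fields, pos = [], 0
    for w in widths:
        fields.append(line[pos:pos + w].strip())
        pos += w
    return fields

print(parse_delimited("1001,Alice,NY"))                # ['1001', 'Alice', 'NY']
print(parse_fixed_width("1001Alice   NY", [4, 8, 2]))  # ['1001', 'Alice', 'NY']
```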

18. What is the difference between a Repository and a Datastore?

Answer: A Repository is a set of tables that holds system objects, source and target metadata, and transformation rules. A Datastore is an actual connection to a database that holds data.

19. What is the difference between a Parameter and a Variable?

Answer: A Parameter is an expression that passes a piece of information to a work flow, data flow, or custom function when it is called in a job. A Variable is a symbolic placeholder for values.

20. When would you use a global variable instead of a local variable?

Answer:
- When the variable will need to be used multiple times within a job.
- When you want to reduce the development time required for passing values between job components.
- When you need to create a dependency between the job-level global variable name and job components.

21. What is a Substitution Parameter?

Answer: A value that is constant in one environment, but may change when the job is migrated to another environment.

22. List some reasons why a job might fail to execute.

Answer: Incorrect syntax, the Job Server not running, or the port numbers for Designer and Job Server not matching.

23. List factors you would consider when determining whether to run work flows or data flows serially or in parallel.

Answer: Consider the following:
- Whether or not the flows are independent of each other
- Whether or not the server can handle the processing requirements of flows running at the same time (in parallel)

24. What does a lookup function do? How do the different variations of the lookup function differ?

Answer: All lookup functions return one row for each row in the source. They differ in how they choose which of several matching rows to return.

25. Name the transform that you would use to combine incoming data sets to produce a single output data set with the same schema as the input data sets.

Answer: The Merge transform.

26. List the three types of input formats accepted by the Address Cleanse transform.

Answer: Discrete, multiline, and hybrid.
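The lookup behavior described above (one returned row per source row, with variants differing in how a single match is chosen among several candidates) can be sketched in Python. The lookup table, the field names, and the tie-breaking rules below are invented for illustration; real BODS lookups run against datastore tables inside a dataflow.

```python
# Hypothetical lookup table: several candidate rows can match one customer_id.
candidates = [
    {"customer_id": 1, "seq": 1, "city": "Boston"},
    {"customer_id": 1, "seq": 2, "city": "New York"},
]

def lookup_first_match(cid):
    """lookup()-style: return one value, here chosen as the first matching row."""
    return next((r["city"] for r in candidates if r["customer_id"] == cid), None)

def lookup_by_sequence(cid, seq):
    """lookup_seq()-style: the match is selected by an explicit sequence number."""
    return next((r["city"] for r in candidates
                 if r["customer_id"] == cid and r["seq"] == seq), None)

print(lookup_first_match(1))     # Boston
print(lookup_by_sequence(1, 2))  # New York
```

Either way each source row yields exactly one answer; only the selection rule among the matching rows changes.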

27. What are Adapters?

Answer: Adapters are additional Java-based programs that can be installed on the job server to provide connectivity to other systems such as Salesforce.com or the Java Messaging Queue. There is also a Software Development Kit (SDK) to allow customers to create adapters for custom applications.

28. List the Data Integrator transforms.

Answer:
- Data_Transfer
- Date_Generation
- Effective_Date
- Hierarchy_Flattening
- History_Preserving
- Key_Generation
- Map_CDC_Operation
- Pivot
- Reverse Pivot
- Table_Comparison
- XML_Pipeline

29. List the Data Quality transforms.

Answer:
- Global_Address_Cleanse
- Data_Cleanse
- Match
- Associate
- Country_id
- USA_Regulatory_Address_Cleanse

30. What is the difference between a Dictionary and a Directory?

Answer: Directories provide information on addresses from postal authorities. Dictionary files are used to identify, parse, and standardize data such as names, titles, and firm data.

31. What are Cleansing Packages?

Answer: These are packages that enhance the ability of Data Cleanse to accurately process various forms of global data by including language-specific reference data and parsing rules.

32. What is Data Cleanse?

Answer: The Data Cleanse transform identifies and isolates specific parts of mixed data, and standardizes your data based on information stored in the parsing dictionary, business rules defined in the rule file, and expressions defined in the pattern file.

33. Give some examples of how data can be enhanced through the Data Cleanse transform, and describe the benefit of those enhancements.

Answer:
- Gender Codes — determine gender distributions and target marketing campaigns.
- Match Standards — provide fields for improving matching results.

34. A project requires parsing names into given and family, validating address information, and finding duplicates across several systems. Name the transforms needed and the task each will perform.

Answer:
- Data Cleanse: parse names into given and family.
- Address Cleanse: validate address information.
- Match: find duplicates.

35. Describe when to use the USA Regulatory and Global Address Cleanse transforms.

Answer: Use the USA Regulatory transform if USPS certification and/or additional options such as DPV and Geocode are required. Global Address Cleanse should be used when processing multi-country data.

36. Give two examples of how the Data Cleanse transform can enhance (append) data.

Answer: The Data Cleanse transform can generate name match standards and greetings. It can also assign gender codes and prenames such as Mr. and Mrs.
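The kind of enhancement Data Cleanse performs, parsing a mixed name field and appending a gender code, prename, and greeting, can be sketched as follows. This is a toy: the real transform drives its parsing from dictionaries, rule files, and pattern files, whereas the lookup table and the naive split below are invented for illustration.

```python
# Hypothetical stand-ins for a parsing dictionary; real Data Cleanse uses
# language-specific reference data, not a two-entry dict.
GENDER_BY_GIVEN_NAME = {"mary": "F", "john": "M"}
PRENAME = {"F": "Ms.", "M": "Mr."}

def cleanse_name(raw):
    """Parse 'given family' out of a messy field and append enhancements."""
    given, family = raw.strip().title().split()          # naive given/family parse
    gender = GENDER_BY_GIVEN_NAME.get(given.lower(), "U")
    return {
        "given": given,
        "family": family,
        "gender": gender,                                 # appended gender code
        "greeting": f"Dear {PRENAME.get(gender, '')} {family}".strip(),
    }

print(cleanse_name("  mary SMITH "))
```

The appended gender code and greeting illustrate the two enhancement examples listed in the answers above.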

37. What are name match standards and how are they used?

Answer: Name match standards illustrate the multiple ways a name can be represented. They are used in the match process to greatly increase match results.

38. What are the different strategies you can use to avoid duplicate rows of data when re-loading a job?

Answer:
- Using the auto-correct load option in the target table.
- Including the Table Comparison transform in the data flow.
- Including a preload SQL statement to execute before the table loads.
- Designing the data flow to completely replace the target table during each execution.

39. What is the use of Auto Correct Load?

Answer: It prevents duplicated data from entering the target table. It works like a Type 1 slowly changing dimension: rows are inserted or updated depending on whether the incoming data is non-matching or matching, respectively.

40. What is the use of Array fetch size?

Answer: Array fetch size indicates the number of rows retrieved in a single request to a source database. Higher numbers reduce requests, lowering network traffic and possibly improving performance. The default value is 1000; the maximum value is 5000.
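The auto-correct load behavior described above, update on a matching key, insert on a new key, so re-runs never duplicate data, can be sketched by modeling the target table as a dict keyed on the primary key. The rows and key name are hypothetical.

```python
# Conceptual sketch of auto-correct (insert-else-update / "upsert") loading.
# The target is a dict keyed on the primary key; all data is invented.

def auto_correct_load(target, incoming, key):
    for row in incoming:
        target[row[key]] = row   # matching key -> update; new key -> insert

target = {}
batch = [{"id": 1, "city": "Boston"}, {"id": 2, "city": "Austin"}]
auto_correct_load(target, batch, "id")
auto_correct_load(target, batch, "id")                        # re-load: no duplicates
auto_correct_load(target, [{"id": 1, "city": "NYC"}], "id")   # Type 1 overwrite of id 1

print(len(target), target[1]["city"])  # 2 NYC
```

Running the same batch twice leaves the row count unchanged, which is exactly why auto-correct load is listed as a duplicate-avoidance strategy.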

41. What are the differences between Row-by-row select, Cached comparison table, and Sorted input in the Table Comparison transform?

Answer:
- Row-by-row select — looks up the target table using SQL every time it receives an input row. This option is best if the target table is large.
- Cached comparison table — loads the comparison table into memory. This option is best when the table fits into memory and you are comparing the entire target table.
- Sorted input — reads the comparison table in the order of the primary key column(s) using sequential read. This option improves performance because Data Integrator reads the comparison table only once. To use it, add a query between the source and the Table_Comparison transform; then, from the query's input schema, drag the primary key columns into the Order By box of the query.

42. What is the use of Rows per commit?

Answer: It specifies the transaction size in number of rows. If set to 1000, Data Integrator sends a commit to the underlying database every 1000 rows.

43. What is the use of Number of loaders in the Target Table?

Answer: Loading with one loader is known as Single loader Loading; loading when the number of loaders is greater than one is known as Parallel Loading. The default number of loaders is 1 and the maximum is 5.
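The rows-per-commit idea described above, buffer rows and commit once per batch instead of once per row, can be sketched with a counter standing in for the database round trip. The class and its names are invented; only the batch size of 1000 matches the documented default.

```python
# Conceptual sketch of "rows per commit": one commit per batch of rows.
# Commits are simulated with a counter instead of a real database.

class BatchLoader:
    def __init__(self, rows_per_commit=1000):   # 1000 is the documented default
        self.rows_per_commit = rows_per_commit
        self.pending = 0
        self.commits = 0

    def write(self, row):
        self.pending += 1
        if self.pending == self.rows_per_commit:
            self.commit()

    def commit(self):
        if self.pending:
            self.commits += 1                   # one round trip instead of many
            self.pending = 0

loader = BatchLoader(rows_per_commit=1000)
for i in range(2500):
    loader.write({"id": i})
loader.commit()                                 # flush the final partial batch
print(loader.commits)  # 3 commits for 2500 rows
```

Fewer commits means fewer round trips to the database, which is the same trade-off array fetch size makes on the source side.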

44. What is the difference between lookup(), lookup_ext(), and lookup_seq()?

Answer:
- lookup(): returns a single value based on a single condition.
- lookup_ext(): returns multiple values based on one or more conditions.
- lookup_seq(): returns multiple values based on a sequence number.

45. What is the use of the Map_Operation transform?

Answer: The Map_Operation transform allows you to change operation codes on data sets to produce the desired output. Operation codes: INSERT, UPDATE, DELETE, NORMAL, or DISCARD.

46. What is Hierarchy Flattening?

Answer: It constructs a complete hierarchy from parent/child relationships, and then produces a description of the hierarchy in vertically or horizontally flattened format. Its options include Parent Column, Child Column, Parent Attributes, and Child Attributes.

47. What is the use of the History_Preserving transform?

Answer: The History_Preserving transform allows you to produce a new row in your target rather than updating an existing row. You can indicate in which columns the transform identifies changes to be preserved. If the value of certain columns changes, this transform creates a new row for each row flagged as UPDATE in the input data set.
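The vertical flattening described above, turning parent/child pairs into one row per (ancestor, descendant) relationship with its depth, can be sketched as a small traversal. The sample hierarchy is invented, and the real transform offers many more options (horizontal format, attributes) than this shows.

```python
# Conceptual sketch of vertical hierarchy flattening: from (parent, child)
# rows, emit one row per (ancestor, descendant, depth). Sample data invented.

def flatten_vertically(pairs):
    children, nodes = {}, set()
    for parent, child in pairs:
        children.setdefault(parent, []).append(child)
        nodes.update((parent, child))

    rows = []
    def descend(ancestor, node, depth):
        for child in children.get(node, []):
            rows.append((ancestor, child, depth))
            descend(ancestor, child, depth + 1)

    for node in nodes:            # every node contributes its own subtree
        descend(node, node, 1)
    return rows

pairs = [("World", "USA"), ("USA", "NY")]
for row in sorted(flatten_vertically(pairs)):
    print(row)
```

Note that "World" reaches "NY" at depth 2: the flattened output records indirect ancestor/descendant links, not just the original direct pairs.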

48. What is the use of the Case transform?

Answer: Use the Case transform to simplify branch logic in data flows by consolidating case or decision-making logic into one transform. The transform allows you to split a data set into smaller sets based on logical branches.

49. What must you define in order to audit a data flow?

Answer: You must define audit points and audit rules when you want to audit a data flow.

50. List some factors for performance tuning in Data Services.

Answer: The following are ways you can adjust Data Integrator performance.

Source-based performance options:
- Using array fetch size
- Caching data
- Join ordering
- Minimizing extracted data

Target-based performance options:
- Loading method and rows per commit
- Staging tables to speed up auto-correct loads

Job design performance options:
- Improving throughput
- Maximizing the number of pushed-down operations
- Minimizing data type conversion
- Minimizing locale conversion

- Improving Informix repository performance
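The Case transform described earlier, routing each input row into exactly one output set based on ordered branch conditions, can be sketched as follows. The branch labels, predicates, and order data are all hypothetical.

```python
# Conceptual sketch of the Case transform: split one data set into several
# smaller sets based on logical branches; first matching branch wins.

def case_split(rows, branches, default="default"):
    """branches: ordered (label, predicate) pairs; unmatched rows go to default."""
    outputs = {label: [] for label, _ in branches}
    outputs[default] = []
    for row in rows:
        for label, predicate in branches:
            if predicate(row):
                outputs[label].append(row)
                break
        else:
            outputs[default].append(row)
    return outputs

orders = [{"amount": 50}, {"amount": 500}, {"amount": 5000}]  # invented rows
split = case_split(orders, [
    ("small",  lambda r: r["amount"] < 100),
    ("medium", lambda r: r["amount"] < 1000),
])
print({k: len(v) for k, v in split.items()})
```

Consolidating the branch predicates in one place is the point of the transform: downstream objects each consume one labeled output instead of re-testing conditions.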