BODS FAQ
Answer:
Designer
Repository
Job Server
Engines
Access Server
Adapters
Real-time Services
Address Server
Management Console
Answer:
A job is the smallest unit of work that you can schedule independently for execution.
Answer:
A work flow defines the decision-making process for executing data flows.
7. What is a transform?
Answer:
A transform is a built-in object used inside a data flow; it operates on input data sets and produces one or more output data sets.
8. What is a Script?
Answer:
A script is a single-use object that is used to call functions and assign values in a work flow.
Answer:
Real-time jobs "extract" data from the body of the real-time message received and from any secondary sources used in the job.
Answer:
Three.
Database Datastores: provide a simple way to import metadata directly from an RDBMS.
Application Datastores: let users easily import metadata from most ERP systems.
Adapter Datastores: can provide access to an application's data and metadata, or metadata only.
Answer:
File Format
Answer:
A local repository
A central repository
A profiler repository
Answer:
A Repository is a set of tables that hold system objects, source and target
metadata, and transformation rules. A Datastore is an actual connection
to a database that holds data.
20. When would you use a global variable instead of a local variable?
Answer:
When the variable will need to be used multiple times within a job.
When you want to reduce the development time required for passing values between job components.
When you need to create a dependency between the job-level global variable name and job components.
Answer:
A value that is constant in one environment but may change when a job is migrated to another environment.
22. List some reasons why a job might fail to execute.
Answer:
Incorrect syntax, Job Server not running, port numbers for Designer and
Job Server not matching.
23. List the factors you consider when determining whether to run work flows or data flows serially or in parallel.
Answer:
24. What does a lookup function do? How do the different variations of
the lookup function differ?
Answer:
All lookup functions return one row for each row in the source. They
differ in how they choose which of several matching rows to return.
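These semantics can be sketched in plain Python (illustrative only; `lookup` and its parameters are invented for the sketch, not BODS functions). Every source row yields exactly one output row, and a return policy such as MAX or MIN decides among multiple matching rows:

```python
def lookup(source_rows, lookup_table, key, result_col, policy=max):
    """Return one looked-up value per source row.

    lookup_table may hold several rows per key; the policy (here
    max/min over the result column) picks which match wins, mirroring
    the MAX/MIN return policies of a lookup function.
    """
    out = []
    for row in source_rows:
        matches = [r[result_col] for r in lookup_table if r[key] == row[key]]
        # exactly one output per input row, even with 0 or many matches
        out.append(policy(matches) if matches else None)
    return out

source = [{"id": 1}, {"id": 2}, {"id": 3}]
lkp = [{"id": 1, "price": 10}, {"id": 1, "price": 30}, {"id": 2, "price": 5}]
print(lookup(source, lkp, "id", "price"))  # [30, 5, None]
```

Note that the unmatched row still produces an output (`None` here), which is why a lookup never changes the row count of the source.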
25. List the three types of input formats accepted by the Address Cleanse
transform.
Answer:
26. Name the transform that you would use to combine incoming data
sets to produce a single output data set with the same schema as the
input data sets.
Answer:
The Merge transform.
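The Merge transform behaves like SQL UNION ALL: inputs with identical schemas are concatenated and duplicates are kept. A minimal Python sketch of that behaviour (illustrative, not BODS syntax):

```python
def merge(*datasets):
    """Combine data sets with identical schemas into a single output.

    Like SQL UNION ALL: rows are concatenated and duplicate rows are
    NOT removed.
    """
    out = []
    for ds in datasets:
        out.extend(ds)
    return out

a = [{"id": 1, "name": "Ann"}]
b = [{"id": 2, "name": "Bob"}, {"id": 1, "name": "Ann"}]
print(merge(a, b))  # 3 rows; the duplicate row survives
```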
Answer:
Data_Transfer
Date_Generation
Effective_Date
Hierarchy_Flattening
History_Preserving
Key_Generation
Map_CDC_Operation
Table_Comparison
XML_Pipeline
Answer:
Global_Address_Cleanse
Data_Cleanse
Match
Associate
Country_id
USA_Regulatory_Address_Cleanse
30. What are Cleansing Packages?
Answer:
33. Give some examples of how data can be enhanced through the data
cleanse transform, and describe the benefit of those enhancements.
Answer:
Enhancement — Benefit
Gender codes — Determine gender distributions and target
34. A project requires the parsing of names into given and family,
validating address information, and finding duplicates across several
systems. Name the transforms needed and the task they will perform.
Answer:
Data Cleanse: parse names into given and family names.
Global Address Cleanse: validate address information.
Match: find duplicates across the systems.
35. Describe when to use the USA Regulatory and Global Address Cleanse
transforms.
Answer:
Use the USA Regulatory Address Cleanse transform when you process United States address data only and require CASS-certified processing or USPS regulatory reports. Use the Global Address Cleanse transform when your data contains addresses from multiple countries.
36. Give two examples of how the Data Cleanse transform can enhance
(append) data.
Answer:
The Data Cleanse transform can generate name match standards and
greetings. It can also assign gender codes and prenames such as Mr. and
Mrs.
37. What are name match standards and how are they used?
Answer:
38. What are the different strategies you can use to avoid duplicate rows of data when re-loading a job?
Answer:
Designing the data flow to completely replace the target table during each execution.
Enabling the auto correct load option on the target table.
Including the Table_Comparison transform in the data flow.
Answer:
It prevents duplicate data from entering the target table. It behaves like a Type 1 change: rows are inserted or updated depending on whether they fail to match or match existing data, respectively.
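The insert-else-update behaviour can be sketched in Python (illustrative semantics only; the function name is invented for the sketch):

```python
def auto_correct_load(target, incoming, key="id"):
    """Upsert semantics: update the row when its key already exists in
    the target, insert it otherwise — so re-running the same load does
    not create duplicate rows."""
    by_key = {row[key]: row for row in target}
    for row in incoming:
        by_key[row[key]] = row  # matching key -> overwrite (update)
    return list(by_key.values())

target = [{"id": 1, "city": "Bonn"}]
batch = [{"id": 1, "city": "Berlin"}, {"id": 2, "city": "Paris"}]
loaded = auto_correct_load(target, batch)
print(loaded)  # id 1 updated, id 2 inserted — no duplicates
```

Running `auto_correct_load(loaded, batch)` again yields the same two rows, which is exactly the re-load safety the option provides.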
41. What is the difference between Row-by-row select, Cached comparison table, and Sorted input in the Table_Comparison transform?
Answer:
Row-by-row select — looks up the target table using SQL every time it receives an input row. This option is best if the target table is large.
Cached comparison table — loads the comparison table into memory. This option is best when the table fits into the available memory.
Sorted input — reads the comparison table in the order of the primary-key column(s) using a sequential read. This option improves performance because Data Integrator reads the comparison table only once. Add a query between the source and the Table_Comparison transform; then, from the query's input schema, drag the primary-key columns into the Order By box of the query.
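The trade-off between the first two options can be sketched in Python (illustrative only; these helpers are invented for the sketch and are not BODS APIs). Row-by-row issues one lookup per input row; the cached variant reads the whole comparison table once:

```python
def compare_row_by_row(input_rows, query_target, key):
    """One lookup against the target per input row (simulating an SQL
    select each time) — suited to a large target table."""
    return [row for row in input_rows if query_target(row[key]) != row]

def compare_cached(input_rows, target_rows, key):
    """Read the whole comparison table once into an in-memory cache,
    then compare — one read of the target instead of one query per
    input row. Best when the table fits into memory."""
    cache = {r[key]: r for r in target_rows}
    return [row for row in input_rows if cache.get(row[key]) != row]

target = [{"id": 1, "qty": 1}]
incoming = [{"id": 1, "qty": 2}, {"id": 2, "qty": 9}]
query = lambda k: next((r for r in target if r["id"] == k), None)
print(compare_row_by_row(incoming, query, "id"))  # both rows flagged
print(compare_cached(incoming, target, "id"))     # same result, one scan
```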
42. What is the use of using Number of loaders in Target Table?
Answer:
Answer:
The History_Preserving transform allows you to produce a new row in your target rather than updating an existing row. You can indicate the columns in which the transform identifies changes to be preserved. If the values of those columns change, the transform creates a new row for each row flagged as UPDATE in the input data set.
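A rough Python sketch of that behaviour (illustrative only, not the transform's real interface): a change in a preserved column inserts a new row, while a change in any other column updates in place.

```python
def history_preserving(target, update_row, preserve_cols, key="id"):
    """If the change touches a preserved column, append a new row
    (history kept); otherwise update the matching row in place."""
    for i, row in enumerate(target):
        if row[key] == update_row[key]:
            if any(row[c] != update_row[c] for c in preserve_cols):
                return target + [update_row]        # new row, old one kept
            merged = dict(row)
            merged.update(update_row)               # plain Type 1 update
            return target[:i] + [merged] + target[i + 1:]
    return target + [update_row]                    # brand-new key: insert

rows = [{"id": 1, "region": "EMEA", "phone": "111"}]
rows = history_preserving(rows, {"id": 1, "region": "APAC", "phone": "111"},
                          preserve_cols=["region"])
print(len(rows))  # 2 — the region change produced a new row
```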
Answer:
Use the Case transform to simplify branch logic in data flows by consolidating case or decision-making logic into one transform. The transform allows you to split a data set into smaller sets based on logical branches.
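The splitting behaviour can be sketched in Python (illustrative only; the function and labels are invented for the sketch). Each row is routed to the first branch whose condition holds, with an optional default branch for the rest:

```python
def case_split(rows, branches, default=None):
    """Route each row to the first branch whose condition is true,
    mirroring how the Case transform splits one input into several
    labelled outputs."""
    outputs = {label: [] for label, _ in branches}
    if default is not None:
        outputs[default] = []
    for row in rows:
        for label, cond in branches:
            if cond(row):
                outputs[label].append(row)
                break
        else:                                   # no branch matched
            if default is not None:
                outputs[default].append(row)
    return outputs

rows = [{"amount": 5}, {"amount": 50}, {"amount": 500}]
out = case_split(rows,
                 [("small", lambda r: r["amount"] < 10),
                  ("medium", lambda r: r["amount"] < 100)],
                 default="large")
print({k: len(v) for k, v in out.items()})  # {'small': 1, 'medium': 1, 'large': 1}
```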
Answer:
You must define audit points and audit rules when you want to audit a
data flow.
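The idea can be sketched in Python (illustrative only; names invented for the sketch): audit points collect statistics such as row counts at chosen places in the flow, and an audit rule is a boolean expression over those statistics.

```python
def run_with_audit(rows, keep, rule):
    """Collect counts at two audit points (before and after a
    filtering step) and evaluate an audit rule over them."""
    audit_points = {"in_count": len(rows)}      # audit point 1
    out = [r for r in rows if keep(r)]
    audit_points["out_count"] = len(out)        # audit point 2
    return out, rule(audit_points)              # rule: pass/fail

rows = [1, 2, 3, 4]
out, ok = run_with_audit(rows, lambda r: r > 0,
                         rule=lambda a: a["in_count"] == a["out_count"])
print(ok)  # True — no rows were dropped
```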
Answer:
Ways to adjust Data Integrator performance include:
Caching data
Join ordering
Improving throughput