What is a source qualifier?

What is a surrogate key?


What is difference between Mapplet and reusable transformation?
What is DTM session?
What is a Mapplet?
What is a look up function? What is default transformation for the look up function?
What is difference between a connected look up and unconnected look up?
What is update strategy and what are the options for update strategy?
What is subject area?
What is the difference between truncate and delete statements?
What kind of Update strategies are normally used (Type 1, 2 & 3) & what are the
differences?
What is the exact syntax of an update strategy?
What are bitmap indexes and how and why are they used?
What is bulk bind? How does it improve performance?
What are the different ways to filter rows using Informatica transformations?
What is a referential integrity error? How do you rectify it?
What is DTM process?
What is target load order?
What exactly is a shortcut and how do you use it?
What is a shared folder?
What are the different transformations where you can use a SQL override?
What is the difference between a Bulk and Normal mode and where exactly is it defined?
What is the difference between Local & Global repository?
What are data driven sessions?
What are the common errors while running an Informatica session?
What are worklets and what is their use?
What is change data capture?
What exactly is tracing level?
What is the difference between constraints based load ordering and target load plan?
What is a deployment group and what is its use?
When and how is a partition defined using Informatica?
How do you improve performance in an Update strategy?
How do you validate all the mappings in the repository at once?
How can you join two or more tables without using the source qualifier override SQL or
a Joiner transformation?
How can you define a transformation? What are different types of transformations in
Informatica?
How many repositories can be created in Informatica?
How many minimum groups can be defined in a Router transformation?
How do you define partitions in Informatica?
How can you improve performance in an Aggregator transformation?
How does Informatica know that the input is sorted?
How many worklets can be defined within a workflow?
How do you define a parameter file? Give an example of its use.
If you join two or more tables, pull about two columns from each table into the
source qualifier, then pull just one column from the source qualifier into an
Expression transformation, and then do a ‘generate SQL’ in the source qualifier,
how many columns will show up in the generated SQL?
In a Type 1 mapping with one source and one target table what is the minimum number
of update strategy transformations to be used?
At what levels can you define parameter files and what is the order?
In a session log file where can you find the reader and the writer details?
For joining three heterogeneous tables how many joiner transformations are required?
Can you look up a flat file using Informatica?
While running a session what default files are created?
Describe the use of Materialized views and how are they different from a normal view.

1) While running a session what default files are created? – log files (the session log by default; reject/.bad files and cache files are also created as needed)
2) Describe the use of Materialized views and how are they different
from a normal view.

A view is a logical entity. It is a SQL statement stored in the database
in the system tablespace. It can be used in much the same way as a
database table. Whenever a query is fired against it, the database takes
the stored SQL statement and builds the result as a temporary table in
memory.

A Materialized View (known as a snapshot in prior releases) is a
pre-computed table comprising aggregated or joined data from fact
and possibly dimension tables. It is also known as a summary or
aggregate table.
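
A minimal Oracle SQL sketch of the difference (the sales table and its columns are hypothetical):

-- Plain view: only the query text is stored; it is re-executed on every access.
CREATE VIEW monthly_sales_v AS
SELECT product_id, TRUNC(sale_date, 'MM') AS sale_month, SUM(amount) AS total_amount
FROM sales
GROUP BY product_id, TRUNC(sale_date, 'MM');

-- Materialized view: the result set itself is stored and refreshed on demand,
-- so queries read pre-computed rows instead of re-aggregating the fact table.
CREATE MATERIALIZED VIEW monthly_sales_mv
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS
SELECT product_id, TRUNC(sale_date, 'MM') AS sale_month, SUM(amount) AS total_amount
FROM sales
GROUP BY product_id, TRUNC(sale_date, 'MM');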

3) What is DTM session & DTM process?

– Both refer to the same thing.

• The DTM process creates threads to initialize the session, read, write, and transform data, and handle pre- and post-session operations.

• The DTM process is the second process associated with a session run.

• The primary purpose of the DTM process is to create and manage threads that carry out the session tasks.

• The DTM allocates process memory for the session and divides it into buffers. This is also known as buffer memory.

• It creates the main thread, which is called the master thread.

• The master thread creates and manages all other threads.

• If you partition a session, the DTM creates a set of threads for each partition to allow concurrent processing.

• When the Informatica Server writes messages to the session log, it includes the thread type and thread ID.

4) How do you improve performance in an Update strategy?

5) What are bitmap indexes and how and why are they used?

A bitmap index is generally used for low cardinality: if your table has
millions of records and the column on which you need to create the
index has very few distinct values, you can create a bitmap index on
that column. For example, in a population table the SEX column holds
only M or F, so a bitmap index is advisable there. Bitmap indexes are
usually preferred in data warehousing projects, where the tables are
very large and are seldom updated.

One more thing: a bitmap index is not suitable when the column is
going to have frequent updates. Frequent updates should be avoided.
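
A minimal Oracle sketch (the population table is hypothetical):

-- Low-cardinality column: a handful of distinct values across millions of rows.
CREATE BITMAP INDEX population_sex_bix
ON population (sex);

-- Queries filtering on such columns can use the bitmap efficiently:
SELECT COUNT(*) FROM population WHERE sex = 'F';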

6) What is bulk bind? How does it improve performance?

Bulk bind operations help to improve the performance of PL/SQL
operations. They reduce SQL processing overhead by cutting down the
context switches between the PL/SQL and SQL engines through the
efficient use of collections in PL/SQL code.
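
A minimal PL/SQL sketch (the emp and emp_archive tables are hypothetical):

DECLARE
  TYPE emp_tab IS TABLE OF emp%ROWTYPE;
  l_emps emp_tab;
BEGIN
  -- BULK COLLECT fetches all rows into the collection in one context switch.
  SELECT * BULK COLLECT INTO l_emps FROM emp;

  -- FORALL sends the whole batch of inserts to the SQL engine at once,
  -- instead of switching engines once per row.
  FORALL i IN 1 .. l_emps.COUNT
    INSERT INTO emp_archive VALUES l_emps(i);
END;
/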

7) What is the difference between a Bulk and Normal mode and where
exactly is it defined?

Bulk mode is meant for huge data volumes: the database bypasses its
log while loading, which is faster but prevents session recovery. Normal
mode loads rows with full database logging, so the session can be
recovered. The mode is defined in the session properties, as the target
load type for each relational target.


8) What is a look up function? What is default transformation for the
look up function?

A lookup compares the source to the target; based on the result, it
updates existing rows and inserts the new rows into the target.

The default transformation for lookup is the Source Qualifier.

9) What is referential integrity error? How do you rectify it?

A referential integrity error occurs if you are editing a dependent table
and you create a foreign key value for which there is no corresponding
entry in the parent table. To rectify it, either add the missing row to the
parent table or correct the foreign key value in the dependent table.
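
A minimal SQL sketch of the error and the fix (the dept and emp tables are hypothetical):

CREATE TABLE dept (dept_id NUMBER PRIMARY KEY);
CREATE TABLE emp (
  emp_id  NUMBER PRIMARY KEY,
  dept_id NUMBER REFERENCES dept (dept_id)
);

-- Fails with ORA-02291 (parent key not found): dept 99 has no parent row.
INSERT INTO emp (emp_id, dept_id) VALUES (1, 99);

-- Rectify by inserting the parent row first, then the dependent row.
INSERT INTO dept (dept_id) VALUES (99);
INSERT INTO emp (emp_id, dept_id) VALUES (1, 99);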

10) What are data driven sessions?

The Informatica Server follows instructions coded into Update Strategy
transformations within the session mapping to determine how to flag
records for insert, update, delete, or reject. If you do not choose the
data driven option setting, the Informatica Server ignores all Update
Strategy transformations in the mapping.

11) Which is more cost effective between a Lookup and a Joiner?

Are you looking up a flat file or a database table? Generally, a sorted
Joiner is more effective on flat files than a Lookup, because the sorted
Joiner uses a merge join and caches fewer rows, while a Lookup always
caches the whole file. If the file is not sorted, the two can be
comparable. Lookups into a database table can be effective if the
database can return sorted data fast and the amount of data is small,
because the Lookup can build the whole cache in memory. If the
database responds slowly or a large amount of data is processed,
lookup cache initialization can be really slow (the Lookup waits for the
database and stores cached data on disk). Then it can be better to use
a sorted Joiner, which passes data to the output as it reads it on the
input.

12) Setting the Target Load Order

You can configure the target load order for a mapping containing any
type of target definition. In the Designer, you can set the order in
which the Integration Service sends rows to targets in different target
load order groups in a mapping. A target load order group is the
collection of source qualifiers, transformations, and targets linked
together in a mapping. You can set the target load order if you want to
maintain referential integrity when inserting, deleting, or updating
tables that have the primary key and foreign key constraints.

The Integration Service reads sources in a target load order group
concurrently, and it processes target load order groups sequentially.

To specify the order in which the Integration Service sends data to
targets, create one source qualifier for each target within a mapping.
To set the target load order, you then determine in which order the
Integration Service reads each source in the mapping.

To set the target load order:

1. Create a mapping that contains multiple target load order groups.

2. Click Mappings > Target Load Plan.

   The Target Load Plan dialog box lists all Source Qualifier
   transformations in the mapping and the targets that receive data
   from each source qualifier.

3. Select a source qualifier from the list.

4. Click the Up and Down buttons to move the source qualifier
   within the load order.

5. Repeat steps 3 and 4 for any other source qualifiers you want to
   reorder.

6. Click OK.

7. Click Repository > Save.

13) Incremental Aggregation

The first time you run an incremental aggregation session, the
Integration Service processes the source. At the end of the session,
the Integration Service stores the aggregated data in two cache files,
the index and data cache files. The Integration Service saves the cache
files in the cache file directory. The next time you run the session, the
Integration Service aggregates the new rows with the cached
aggregated values in the cache files.

When you run a session with an incremental Aggregator
transformation, the Integration Service creates a backup of the
Aggregator cache files in $PMCacheDir at the beginning of a session
run. The Integration Service promotes the backup cache to the initial
cache at the beginning of a session recovery run. The Integration
Service cannot restore the backup cache file if the session aborts.

When you create multiple partitions in a session that uses incremental
aggregation, the Integration Service creates one set of cache files for
each partition.

In Informatica 8.1

You can recover a session that contains an incremental Aggregator
transformation. In previous versions, if a session failed unexpectedly,
the PowerCenter Server did not restore the cache file. In PowerCenter
8.0, the Integration Service restores the Aggregator transformation
backup cache when the session fails.

14) Static and Dynamic cache

Static cache: The Integration Service builds the cache when it processes the
first lookup request. It queries the cache based on the lookup condition for
each row that passes into the transformation. The Integration Service does
not update the cache while it processes the transformation.

Dynamic cache: The Integration Service builds the cache when it processes
the first lookup request. It queries the cache based on the lookup condition
for each row that passes into the transformation. When you use a dynamic
cache, the Integration Service updates the lookup cache as it passes rows to
the target.

When the Integration Service reads a row from the source, it updates the
lookup cache by performing one of the following actions:

Inserts the row into the cache. The row is not in the cache and you
specified to insert rows into the cache. You can configure the
transformation to insert rows into the cache based on input ports or
generated sequence IDs. The Integration Service flags the row as
insert.

Updates the row in the cache. The row exists in the cache and you
specified to update rows in the cache. The Integration Service flags the
row as update. The Integration Service updates the row in the cache
based on the input ports.

Makes no change to the cache. The row exists in the cache and you
specified to insert new rows only. Or, the row is not in the cache and
you specified to update existing rows only. Or, the row is in the cache,
but based on the lookup condition, nothing changes. The Integration
Service flags the row as unchanged.
