
Informatica Scenario based Interview Questions and Answers - Introduction
Informatica is a very important data warehousing tool that helps in managing large
amounts of data and reporting on the analysis of that data. Below are some questions that will
be helpful when you attend an interview on Informatica.

So you have finally found your dream job in Informatica but are wondering how to crack
the Informatica interview and what the probable Informatica Scenario based Interview
Questions could be. Every interview is different, and the scope of a job is different too.
Keeping this in mind, we have designed the most common Informatica Scenario based
Interview Questions and Answers to help you succeed in your interview.

Some of the most important Informatica Scenario based Interview Questions that are
frequently asked in an interview are as follows:

1. How to remove duplicate records in Informatica? Explain the different ways to do it.

Answer:
There are many ways of eliminating duplicates:
a) If there are duplicates in the source database, the user can use the Source Qualifier's
'Select Distinct' property: open the Source Qualifier transformation and check the 'Select
Distinct' option. Alternatively, the user can use a SQL override for the same purpose by
writing a DISTINCT query in the 'SQL Query' property (a sample override is shown after
this list).
b) A user can use an Aggregator and select ports as group-by keys to get distinct values. To
remove duplicates across the entire row, all ports should be selected as group-by keys.
c) The user can also use a Sorter with the 'Distinct' property checked to get distinct values.
d) Expression and Filter transformations can also be used to identify and remove duplicate
data. If the data is not sorted, it needs to be sorted first.
e) When the Lookup transformation is changed to use a dynamic cache, a new port
(NewLookupRow) is added to the transformation. This cache is updated as rows are read, so
if the source has duplicate records, the dynamic lookup flags them and a Router can then
pass only one distinct record to the target.
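
As a quick illustration of option (a), a SQL override in the Source Qualifier could look like the
sketch below; the table and column names (EMPLOYEE, EMP_ID, EMP_NAME, CITY) are assumed
here purely for illustration.

-- Hypothetical SQL override entered in the Source Qualifier's SQL Query property
SELECT DISTINCT EMP_ID, EMP_NAME, CITY
FROM EMPLOYEE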

2. What is the difference between Source Qualifier and Filter transformation?

Answer:
The Source Qualifier transformation represents the rows that the Integration Service reads in
a session. It is an active transformation. Using the Source Qualifier, the following tasks can be
performed:
1) When two tables from the same source database have a primary key - foreign key
relationship, both sources can be linked to a single Source Qualifier transformation to join
them.
2) Filtering rows, by having the Integration Service add a WHERE clause to the default query.
3) Specifying an outer join instead of the default inner join, by overriding the join information
in the SQL query.
4) When sorted ports are specified, the Integration Service adds an ORDER BY clause to the
default query.
5) If the user chooses Select Distinct, the Integration Service adds SELECT DISTINCT to the
default query.
The Filter transformation should be used when the data to be filtered does not come from a
relational source. It lets rows that meet the specified filter condition pass through and directly
drops the rows that do not meet it; multiple conditions can be combined in a single filter
condition.
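
For example, a Filter transformation condition is written in the Informatica expression
language; using the employee columns from question 4 below (assumed here just for
illustration), it could look like:

Salary > 20000 AND City = 'Pune'

Rows for which this expression evaluates to FALSE (or NULL) are dropped by the Filter
transformation.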

3. Design a mapping to load the last 3 rows from a flat file into the target.

Answer:
Suppose the flat file in consideration has the below data:
Column A
Aanchal
Priya
Karishma
Snehal
Nupura

Step 1: Assign a row number to each record. Generate the row numbers in an Expression
transformation by creating a variable port and incrementing it by 1 for every row, then assign
this variable port to an output port. After this, the Expression transformation ports will be:

Variable_count = Variable_count + 1
O_count = Variable_count

Also create a dummy output port in the same Expression transformation and assign 1 to it.
This dummy port will return 1 for every row.
Finally, the transformation expressions will be as follows:
Variable_count = Variable_count + 1
O_count = Variable_count
Dummy_output = 1

The output of this transformation will be:


Column A O_count Dummy_output
Aanchal 1 1
Priya 2 1
Karishma 3 1
Snehal 4 1
Nupura 5 1

Step 2: Pass the above output to an Aggregator transformation and do not specify any group-by
port. Create a new output port O_total_records in the Aggregator and assign the O_count port
to it. Because no group-by port is specified, the Aggregator returns only the last row. The final
output of this step will have the dummy port with value 1 and O_total_records holding the
total number of records in the source. The Aggregator output will be:

O_total_records Dummy_output
5 1

Step 3: Pass this output to a Joiner transformation and join on the dummy port. The 'Sorted
Input' property should be checked in the Joiner transformation; only then can both the
Expression and the Aggregator transformations (two branches of the same pipeline) be
connected to the same Joiner. The Joiner transformation condition will be as follows:
Dummy_output (port from the Aggregator transformation) = Dummy_output (port from the
Expression transformation)

The output of the Joiner transformation will be:


Column A o_count o_total_records
Aanchal 1 5
Priya 2 5
Karishma 3 5
Snehal 4 5
Nupura 5 5

Step 4: After the Joiner transformation, send this output to a Filter transformation and specify
the filter condition as O_total_records (port from the Aggregator) - O_count (port from the
Expression) <= 2

The filter condition, as a result, will be:

O_total_records - O_count <= 2

The final output of the Filter transformation will be:


Column A o_count o_total_records
Karishma 3 5
Snehal 4 5
Nupura 5 5
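
Putting the four steps together, the overall mapping flow (in the same arrow notation used in
question 4 below) would look roughly like this, with the Expression output feeding both the
Aggregator and the Joiner:

SQ –> EXP –> AGG –> JNR –> FIL –> TGT
(EXP is also connected directly to JNR)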

4. How to load only NULL records into the target? Explain using a mapping flow.

Answer:
Consider the below data as the source:
Emp_Id Emp_Name Salary City Pincode
619101 Aanchal Singh 20000 Pune 411051
619102 Nupura Pattihal 35000 Nagpur 411014
NULL NULL 15000 Mumbai 451021

The target tables have the same structure as the source. We will have two target tables: one
that will contain the NULL records and another that will not contain NULL records.
The mapping can be as:

SQ –> EXP –> RTR –> TGT_NULL/TGT_NOT_NULL


EXP – In the Expression transformation, create an output port:
O_FLAG = IIF((ISNULL(Emp_Id) OR ISNULL(Emp_Name) OR ISNULL(Salary) OR
ISNULL(City) OR ISNULL(Pincode)), 'NULL', 'NNULL')
RTR – In the Router transformation, create two groups:
Group 1 connected to TGT_NULL (group filter condition: O_FLAG = 'NULL')
Group 2 connected to TGT_NOT_NULL (group filter condition: O_FLAG = 'NNULL')
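
With the sample data above, only the third record (where Emp_Id and Emp_Name are NULL)
gets O_FLAG = 'NULL' and is routed to TGT_NULL; the first two records get O_FLAG = 'NNULL'
and are routed to TGT_NOT_NULL.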

5. Explain how the performance of the Joiner transformation can be increased.

Answer:
The performance of a Joiner transformation can be increased by following some simple steps.
1) Perform joins in the database whenever possible. When this is not possible through a
normal query, the user can create a stored procedure that joins the tables in the database (see
the sample override after this list).
2) Sort the data before applying the join whenever possible.
3) For an unsorted Joiner transformation, the source with fewer rows should be designated as
the master source.
4) For a sorted Joiner transformation, the source with fewer duplicate key values should be
designated as the master source.
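
As a minimal sketch of point 1, a join pushed down to the database through a Source Qualifier
SQL override might look like the query below; the EMP and DEPT tables and their columns are
assumed here purely for illustration.

-- Hypothetical SQL override joining two tables in the source database
SELECT E.EMP_ID, E.EMP_NAME, D.DEPT_NAME
FROM EMP E
INNER JOIN DEPT D
ON E.DEPT_ID = D.DEPT_ID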
