1. What is a transformation?
A transformation is a repository object that generates, modifies, or passes data.

2. What is an active transformation?
An active transformation is one which changes the number of rows that pass through it. Example: Filter transformation

3. What is a passive transformation?
A passive transformation is one which does not change the number of rows that pass through it. Example: Expression transformation

4. What is a connected transformation?
A connected transformation is connected to the data flow or to the other transformations in the mapping pipeline. Example: Sorter transformation

5. What is an unconnected transformation?
An unconnected transformation is not connected to other transformations in the mapping. It is called within another transformation and returns a value to that transformation. Examples: Unconnected lookup transformation, unconnected stored procedure transformation

6. What are multi-group transformations?
Transformations having multiple input and output groups are called multi-group transformations. Examples: Custom, HTTP, Joiner, Router, Union, Unstructured Data, XML Source Qualifier, XML Target definition, XML Parser, XML Generator

7. List out all the transformations which use a cache.
Aggregator, Joiner, Lookup, Rank, Sorter

8. What is a blocking transformation?
A transformation which blocks the input rows is called a blocking transformation. Examples: Custom transformation, unsorted Joiner

9. What is a reusable transformation?
A reusable transformation is one which can be used in multiple mappings. A reusable transformation is created in the Transformation Developer.

10. How do you promote a non-reusable transformation to a reusable transformation?
Edit the transformation and check the Make Reusable option.

11. How to create a non-reusable instance of a reusable transformation?
In the navigator, select an existing transformation and drag it into the mapping workspace. Hold down the Ctrl key before you release the transformation.

12. Which transformation can be created only as a reusable transformation but not as a non-reusable transformation?
External Procedure transformation.

INFORMATICA INTERVIEW QUESTIONS ON AGGREGATOR TRANSFORMATION

1. What is an aggregator transformation?
An aggregator transformation performs aggregate calculations like sum, average, count etc. It is an active transformation; it changes the number of rows in the pipeline. Unlike the expression transformation (which performs calculations on a row-by-row basis), an aggregator transformation performs calculations on a group of rows.

2. What is the aggregate cache?
The integration service creates an index cache and a data cache in memory to process the aggregator transformation; it stores the group data in the index cache and the row data in the data cache. If the integration service requires more space, it stores the overflow values in cache files.

3. How can we improve the performance of the aggregator transformation?
Use sorted input: sort the data before passing it into the aggregator. The integration service then uses memory to process the transformation and does not use the cache files. Filter the unwanted data before aggregating. Limit the number of input/output or output ports to reduce the amount of data the aggregator transformation stores in the data cache.

4. What are the different types of aggregate functions?
The different types of aggregate functions are listed below: AVG, COUNT, FIRST, LAST, MAX, MEDIAN, MIN, PERCENTILE, STDDEV, SUM, VARIANCE

5. Why can you not use both single-level and nested aggregate functions in a single aggregator transformation?
The nested aggregate function returns only one output row, whereas the single-level aggregate function returns more than one row. Since the numbers of rows returned are not the same, you cannot use both single-level and nested aggregate functions in the same transformation. If you include both the single-level and nested functions in the same aggregator, the Designer marks the mapping or mapplet as invalid, so you need to create separate aggregator transformations.

6. Up to how many levels can you nest aggregate functions?
We can nest up to two levels only. Example: MAX(SUM(ITEM))

7. What is incremental aggregation?
The integration service performs the aggregate calculations and then stores the data in a historical cache. The next time you run the session, the integration service reads only the new data and uses the historical cache to perform the new aggregate calculations incrementally.

8. Why can we not use the sorted input option for incremental aggregation?
In incremental aggregation, the aggregate calculations are stored in a historical cache on the server, and the data in this historical cache need not be in sorted order. If you give sorted input, the records come presorted for that particular run, but the data in the historical cache may not be in sorted order. That is why this option is not allowed.

9. How are NULL values handled in the aggregator?
You can configure the integration service to treat null values in aggregate functions as NULL or zero. By default the integration service treats null values as NULL in aggregate functions.
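Outside Informatica, the difference between row-by-row expression logic and group-level aggregation can be sketched in plain Python; the column names here are illustrative, not from any real mapping:

```python
from collections import defaultdict

rows = [
    {"dept": 10, "salary": 1000},
    {"dept": 10, "salary": 2000},
    {"dept": 20, "salary": 3000},
]

# Expression-style logic: one output row per input row (passive)
with_bonus = [{**r, "salary_plus": r["salary"] + 1000} for r in rows]

# Aggregator-style logic: one output row per group -- group by dept, SUM(salary)
totals = defaultdict(int)
for r in rows:
    totals[r["dept"]] += r["salary"]

print(len(with_bonus))  # 3 rows out for 3 rows in
print(dict(totals))     # {10: 3000, 20: 3000} -- fewer rows out (active)
```

The row-count change in the second step is exactly why the aggregator is classified as an active transformation.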
INFORMATICA INTERVIEW QUESTIONS ON EXPRESSION TRANSFORMATION

1. What is an expression transformation?
An expression transformation is used to calculate values in a single row. Example: salary+1000

2. How to generate sequence numbers using an expression transformation?
Create a variable port in the expression transformation and increment it by one for every row. Assign this variable port to an output port.

3. Consider the following employees data as source
Employee_id, Salary
10, 1000
20, 2000
30, 3000
40, 5000
Q1. Design a mapping to load the cumulative sum of salaries of employees into the target table. The target table data should look like:
Employee_id, Salary, Cumulative_sum
10, 1000, 1000
20, 2000, 3000
30, 3000, 6000
40, 5000, 11000

Q2. Design a mapping to get the previous row salary for the current row. If no previous row exists for the current row, then the previous row salary should be displayed as null. The output should look like:
Employee_id, Salary, Pre_row_salary
10, 1000, Null
20, 2000, 1000
30, 3000, 2000
40, 5000, 3000

4. Consider the following employees table as source
Department_no, Employee_name
20, R
10, A
10, D
20, P
10, B
10, C
20, Q
20, S

Q1. Design a mapping to load a target table with the following values from the above source?
Department_no, Employee_list
10, A
10, A,B
10, A,B,C
10, A,B,C,D
20, A,B,C,D,P
20, A,B,C,D,P,Q
20, A,B,C,D,P,Q,R
20, A,B,C,D,P,Q,R,S
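Outside Informatica, the running employee_list logic (sort by department, then keep concatenating names in a variable port) can be sketched in plain Python; the variable name mirrors the V_employee_list port used later in the solutions:

```python
rows = [(20, "R"), (10, "A"), (10, "D"), (20, "P"),
        (10, "B"), (10, "C"), (20, "Q"), (20, "S")]

# Sorter-style step: sort by department_no (and name, giving the A,B,C,D order)
rows.sort()

# Expression-style step: a "variable port" carries state across rows
v_employee_list = None
out = []
for dept, name in rows:
    v_employee_list = name if v_employee_list is None else v_employee_list + "," + name
    out.append((dept, v_employee_list))

for dept, emp_list in out:
    print(dept, emp_list)  # 10 A / 10 A,B / ... / 20 A,B,C,D,P,Q,R,S
```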
Q2. Design a mapping to load a target table with the following values from the above source?
Department_no, Employee_list
10, A
10, A,B
10, A,B,C
10, A,B,C,D
20, P
20, P,Q
20, P,Q,R
20, P,Q,R,S

INFORMATICA SCENARIO BASED QUESTIONS - PART 2

1. Consider the following employees data as source
employee_id, salary
10, 1000
20, 2000
30, 3000
40, 5000

Q1. Design a mapping to load the cumulative sum of salaries of employees into the target table. The target table data should look like:
employee_id, salary, cumulative_sum
10, 1000, 1000
20, 2000, 3000
30, 3000, 6000
40, 5000, 11000

Solution: Connect the source qualifier to an expression transformation. In the expression transformation, create a variable port V_cum_sal and in the expression editor write V_cum_sal+salary. Create an output port O_cum_sal and assign V_cum_sal to it.

Q2. Design a mapping to get the previous row salary for the current row. If no previous row exists for the current row, then the previous row salary should be displayed as null. The output should look like:
employee_id, salary, pre_row_salary
10, 1000, Null
20, 2000, 1000
30, 3000, 2000
40, 5000, 3000

Solution: Connect the source qualifier to an expression transformation. In the expression transformation, create a variable port V_count and increment it by one for each row entering the expression transformation. Also create a variable port V_salary and assign the expression IIF(V_count=1,NULL,V_prev_salary) to it. Then create one more variable port V_prev_salary and assign salary to it. Now create an output port O_prev_salary and assign V_salary to it. Connect the expression transformation to the target ports. In the expression transformation, the ports will be:
employee_id
salary
V_count=V_count+1
V_salary=IIF(V_count=1,NULL,V_prev_salary)
V_prev_salary=salary
O_prev_salary=V_salary
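The variable-port mechanics behind the cumulative-sum and previous-row solutions can be sketched outside Informatica in plain Python. Variable ports evaluate top to bottom within a row, which is why V_salary can read V_prev_salary before it is overwritten:

```python
rows = [(10, 1000), (20, 2000), (30, 3000), (40, 5000)]

# Q1: cumulative sum -- V_cum_sal behaves like a running total
v_cum_sal = 0
cumulative = []
for emp_id, salary in rows:
    v_cum_sal += salary                 # V_cum_sal = V_cum_sal + salary
    cumulative.append((emp_id, salary, v_cum_sal))

# Q2: previous-row salary -- read the old value before overwriting it
v_prev_salary = None
previous = []
for emp_id, salary in rows:
    v_salary = v_prev_salary            # IIF(V_count=1, NULL, V_prev_salary)
    v_prev_salary = salary              # V_prev_salary = salary
    previous.append((emp_id, salary, v_salary))

print(cumulative)  # ends with (40, 5000, 11000)
print(previous)    # starts with (10, 1000, None)
```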
Q3. Design a mapping to get the next row salary for the current row. If there is no next row for the current row, then the next row salary should be displayed as null. The output should look like:
employee_id, salary, next_row_salary
10, 1000, 2000
20, 2000, 3000
30, 3000, 5000
40, 5000, Null

Solution:
Step 1: Connect the source qualifier to two expression transformations. In each expression transformation, create a variable port V_count and in the expression editor write V_count+1.
Now create an output port O_count in each expression transformation. In the first expression transformation, assign V_count to O_count. In the second expression transformation, assign V_count-1 to O_count.
In the first expression transformation, the ports will be:
employee_id
salary
V_count=V_count+1
O_count=V_count
In the second expression transformation, the ports will be:
employee_id
salary
V_count=V_count+1
O_count=V_count-1
Step 2: Connect both the expression transformations to a joiner transformation and join them on the port O_count. Consider the first expression transformation as the master and the second one as the detail. In the joiner, specify the join type as Detail Outer Join. In the joiner transformation check the property Sorted Input; only then can you connect both expression transformations to the joiner transformation.
Step 3: Pass the output of the joiner transformation to a target table. From the joiner, connect the employee_id and salary obtained from the first expression transformation to the employee_id and salary ports in the target table. Then from the joiner, connect the salary obtained from the second expression transformation to the next_row_salary port in the target table.

Q4. Design a mapping to find the sum of salaries of all employees, and this sum should repeat for all the rows. The output should look like:
employee_id, salary, salary_sum
10, 1000, 11000
20, 2000, 11000
30, 3000, 11000
40, 5000, 11000

Solution:
Step 1: Connect the source qualifier to the expression transformation. In the expression transformation, create a dummy port and assign value 1 to it. In the expression transformation, the ports will be:
employee_id
salary
O_dummy=1
Step 2: Pass the output of the expression transformation to an aggregator. In the aggregator transformation, create a new port O_sum_salary and in the expression editor write SUM(salary). Do not specify group by on any port. In the aggregator transformation, the ports will be:
salary
O_dummy
O_sum_salary=SUM(salary)
Step 3: Pass the output of the expression transformation and the aggregator transformation to a joiner transformation and join on the dummy port. In the joiner transformation check the property Sorted Input; only then can you connect both the expression and the aggregator to the joiner transformation.
Step 4: Pass the output of the joiner to the target table.

2. Consider the following employees table as source
department_no, employee_name
20, R
10, A
10, D
20, P
10, B
10, C
20, Q
20, S

Q1. Design a mapping to load a target table with the following values from the above source?
department_no, employee_list
10, A
10, A,B
10, A,B,C
10, A,B,C,D
20, A,B,C,D,P
20, A,B,C,D,P,Q
20, A,B,C,D,P,Q,R
20, A,B,C,D,P,Q,R,S

Solution:
Step 1: Use a sorter transformation and sort the data using the sort key department_no, then pass the output to the expression transformation. In the expression transformation, the ports will be:
department_no
employee_name
V_employee_list = IIF(ISNULL(V_employee_list), employee_name, V_employee_list||','||employee_name)
O_employee_list = V_employee_list
Step 2: Now connect the expression transformation to a target table.

Q2. Design a mapping to load a target table with the following values from the above source?
department_no, employee_list
10, A
10, A,B
10, A,B,C
10, A,B,C,D
20, P
20, P,Q
20, P,Q,R
20, P,Q,R,S

Solution:
Step 1: Use a sorter transformation and sort the data using the sort key department_no, then pass the output to the expression transformation. In the expression transformation, the ports will be:
department_no
employee_name
V_curr_deptno=department_no
V_employee_list = IIF(V_curr_deptno != V_prev_deptno, employee_name, V_employee_list||','||employee_name)
V_prev_deptno=department_no
O_employee_list = V_employee_list
Step 2: Now connect the expression transformation to a target table.

Q3. Design a mapping to load a target table with the following values from the above source?
department_no, employee_names
10, A,B,C,D
20, P,Q,R,S

Solution: The first step is the same as in the above problem. Pass the output of the expression transformation to an aggregator transformation and specify group by on department_no. Now connect the aggregator transformation to a target table.

INFORMATICA INTERVIEW QUESTIONS ON FILTER TRANSFORMATION

1. What is a filter transformation?
A filter transformation is used to filter out rows in a mapping. The filter transformation allows the rows that meet the filter condition to pass through and drops the rows that do not meet the condition. The filter transformation is an active transformation.

2. Can we specify more than one filter condition in a filter transformation?
We can specify only one condition in the filter transformation. To specify more than one condition, we have to use a router transformation.

3. Can we concatenate ports from more than one transformation into the filter transformation?
No. The input ports for the filter must come from a single transformation.

4. In which case does a filter transformation act as a passive transformation?
If the filter condition is set to TRUE, then it passes all the rows without filtering any data. In this case, the filter transformation acts as a passive transformation.

5. How to filter out null values and spaces?
Use the ISNULL and IS_SPACES functions. Example: IIF(ISNULL(commission), FALSE, TRUE)

6. How can session performance be improved by using a filter transformation?
Keep the filter transformation as close as possible to the sources in the mapping. This allows the unwanted data to be discarded early and the integration service processes only the required rows. If the source is a relational source, use the source qualifier to filter the rows.
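The ISNULL/IS_SPACES filter check above has a direct analogue in plain Python. A sketch, with the commission column taken from the example and the row data invented for illustration:

```python
rows = [
    {"emp_id": 1, "commission": 500},
    {"emp_id": 2, "commission": None},   # NULL -- should be dropped
    {"emp_id": 3, "commission": "   "},  # only spaces -- should be dropped
    {"emp_id": 4, "commission": 300},
]

def keep(row):
    """Mimics a filter condition using ISNULL and IS_SPACES on commission."""
    c = row["commission"]
    if c is None:                                # ISNULL(commission)
        return False
    if isinstance(c, str) and c.strip() == "":   # IS_SPACES(commission)
        return False
    return True

filtered = [r for r in rows if keep(r)]
print([r["emp_id"] for r in filtered])  # [1, 4]
```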
INFORMATICA INTERVIEW QUESTIONS ON JOINER TRANSFORMATION

1. What is a joiner transformation?
A joiner transformation joins two heterogeneous sources. You can also join the data from the same source. The joiner transformation joins sources with at least one matching column. The joiner uses a condition that matches one or more pairs of columns between the two sources.

2. How many joiner transformations are required to join n sources?
To join n sources, n-1 joiner transformations are required.

3. What are the limitations of the joiner transformation?
You cannot use a joiner transformation when an input pipeline contains an update strategy transformation. You cannot use a joiner if you connect a sequence generator transformation directly before the joiner.

4. What are the different types of joins?
Normal join: In a normal join, the integration service discards all the rows from the master and detail sources that do not match the join condition.
Master outer join: A master outer join keeps all the rows of data from the detail source and the matching rows from the master source. It discards the unmatched rows from the master source.
Detail outer join: A detail outer join keeps all the rows of data from the master source and the matching rows from the detail source. It discards the unmatched rows from the detail source.
Full outer join: A full outer join keeps all rows of data from both the master and detail sources.

5. Why is the joiner a blocking transformation?
When the integration service processes an unsorted joiner transformation, it reads all master rows before it reads the detail rows. To ensure it reads all master rows before the detail rows, the integration service blocks the detail source while it caches rows from the master source. As it blocks the detail source, the unsorted joiner is called a blocking transformation. In case of a sorted joiner, the integration service reads both sources (master and detail) concurrently and builds the cache based on the master rows.

6. What is the joiner cache?
When the integration service processes a joiner transformation, it reads the rows from the master source and builds the index and data caches. Then the integration service reads the detail source and performs the join.

7. How to improve the performance of the joiner transformation?
Join sorted data whenever possible. For an unsorted joiner transformation, designate the source with fewer rows as the master source. For a sorted joiner transformation, designate the source with fewer duplicate key values as the master source.

8. What are the settings used to configure the joiner transformation?
Master and detail source
Type of join
Join condition

INFORMATICA INTERVIEW QUESTIONS ON LOOKUP TRANSFORMATION

1. What is a lookup transformation?
A lookup transformation is used to look up data in a flat file, relational table, view, or synonym.

2. What are the tasks of a lookup transformation?
The lookup transformation is used to perform the following tasks:
Get a related value: Retrieve a value from the lookup table based on a value in the source.
Perform a calculation: Retrieve a value from a lookup table and use it in a calculation.
Update slowly changing dimension tables: Determine whether rows exist in a target.
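The joiner's join types (note that Informatica's master/detail naming is inverted relative to SQL's left/right) can be sketched with two small row lists; the keys and values here are made up for illustration:

```python
master = [(1, "M1"), (2, "M2"), (3, "M3")]
detail = [(2, "D2"), (3, "D3"), (4, "D4")]

detail_by_key = {k: v for k, v in detail}

# Normal join: only keys present on both sides survive
normal = [(k, m, detail_by_key[k]) for k, m in master if k in detail_by_key]

# Detail outer join: keep ALL master rows, None (NULL) where the detail is missing
detail_outer = [(k, m, detail_by_key.get(k)) for k, m in master]

print(normal)        # [(2, 'M2', 'D2'), (3, 'M3', 'D3')]
print(detail_outer)  # [(1, 'M1', None), (2, 'M2', 'D2'), (3, 'M3', 'D3')]
```

The `detail_by_key` dict plays the role of the joiner's master-side cache: build a lookup structure from one source, then stream the other source through it.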
3. How do you configure a lookup transformation?
Configure the lookup transformation to perform the following types of lookups:
Relational or flat file lookup
Pipeline lookup
Connected or unconnected lookup
Cached or uncached lookup

4. What are connected and unconnected lookup transformations?
A connected lookup transformation is connected to the other transformations in the mapping pipeline. It receives source data, performs a lookup, and returns data to the pipeline. In a connected lookup transformation, the cache includes the lookup source columns in the lookup condition and the lookup source columns that are output ports.
An unconnected lookup transformation is not connected to the other transformations in the mapping pipeline. A transformation in the pipeline calls the unconnected lookup with a :LKP expression. In an unconnected lookup transformation, the cache includes all lookup/output ports in the lookup condition and the lookup/return port.

5. What are the differences between connected and unconnected lookup transformations?
A connected lookup transformation receives input values directly from the pipeline, whereas an unconnected lookup transformation receives input values from the result of a :LKP expression in another transformation.
A connected lookup transformation can be configured with a dynamic or static cache, whereas an unconnected lookup transformation can be configured only with a static cache. If you configure dynamic caching, the integration service inserts rows into the cache or leaves it unchanged.
A connected lookup transformation can return multiple columns from the same row or insert into the dynamic lookup cache, whereas an unconnected lookup transformation can return one column from each row.
If there is no match for the lookup condition, a connected lookup transformation returns the default value for all output ports, whereas an unconnected lookup transformation returns null. A connected lookup transformation supports user-defined default values; an unconnected lookup transformation does not support user-defined default values.
A connected lookup transformation passes multiple output values to another transformation, whereas an unconnected lookup transformation passes one output value to another transformation.

6. How do you handle multiple matches in a lookup transformation? Or, what is the "Lookup Policy on Multiple Match"?
The "Lookup Policy on Multiple Match" option is used to determine which row the lookup transformation returns when it finds multiple rows that match the lookup condition. You can configure the lookup to return the first row, the last row, any matching row, or to report an error.

7. What are "Insert Else Update" and "Update Else Insert"?
These options are used when dynamic cache is enabled.
Insert Else Update applies to rows entering the lookup transformation with the row type of insert. When this option is enabled, the integration service inserts new rows in the cache and updates existing rows. When disabled, the integration service does not update existing rows.
Update Else Insert applies to rows entering the lookup transformation with the row type of update. When this option is enabled, the integration service updates existing rows and inserts a new row if it is new. When disabled, the integration service does not insert new rows.

8. What is a pipeline lookup transformation?
A pipeline lookup transformation is used to perform a lookup on application sources such as JMS, MSMQ or SAP. A pipeline lookup transformation has a source qualifier as the lookup source.

9. What is "Output Old Value on Update"?
This option is used when dynamic cache is enabled. When this option is enabled, the integration service outputs old values out of the lookup/output ports. When the integration service updates a row in the cache, it outputs the value that existed in the lookup cache before it updated the row based on the input data. When the integration service inserts a new row in the cache, it outputs null values. When you disable this property, the integration service outputs the same values out of the lookup/output and input/output ports.

10. When you use a dynamic cache, do you need to associate each lookup port with an input port?
Yes. You need to associate each lookup/output port with an input/output port or a sequence ID. The integration service uses the data in the associated port to insert or update rows in the lookup cache.

11. What are the different values returned by the NewLookupRow port?
The different values are:
0 - Integration service does not update or insert the row in the cache.
1 - Integration service inserts the row into the cache.
2 - Integration service updates the row in the cache.

12. What are cached and uncached lookup transformations?
Cached lookup transformation: The integration service builds a cache in memory when it processes the first row of data in a cached lookup transformation. It queries the cache based on the lookup condition for each row that passes into the transformation. The integration service stores condition values in the index cache and output values in the data cache.
Uncached lookup transformation: For each row that enters the lookup transformation, the integration service queries the lookup source and returns a value. The integration service does not build a cache.

13. How does the integration service build the caches for a connected lookup transformation?
The integration service builds the lookup caches for a connected lookup transformation in the following ways:
Sequential cache: The integration service builds lookup caches sequentially. It builds the cache in memory when it processes the first row of the data in a cached lookup transformation.
Concurrent caches: The integration service builds lookup caches concurrently. It does not need to wait for data to reach the lookup transformation.

14. How does the integration service build the cache for an unconnected lookup transformation?
The integration service builds caches for unconnected lookup transformations sequentially. It builds the cache when it processes the first lookup request.

15. What is a persistent cache?
If the lookup source does not change between session runs, then you can improve the performance by creating a persistent cache for the source. When a session runs for the first time, the integration service creates the cache files and saves them to disk instead of deleting them. The next time the session runs, the integration service builds the memory cache from the cache file.

16. What is a dynamic cache?
The dynamic cache represents the data in the target. The integration service builds the cache when it processes the first lookup request, and queries the cache for each row that enters the transformation. The integration service updates the lookup cache as it passes rows to the target: it either inserts the row in the cache, updates the row in the cache, or makes no change to the cache.

17. What are the options available to configure a lookup cache?
The following options can be used to configure a lookup cache:
Persistent cache
Recache from lookup source
Static cache
Dynamic cache
Shared cache
Pre-build lookup cache
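A dynamic lookup cache can be sketched as a dict keyed on the lookup condition, with the NewLookupRow codes 0/1/2 described above. This is a deliberate simplification (no cache files, one key column, one value column), and the sample data is invented:

```python
cache = {101: "Alice", 102: "Bob"}  # target data keyed by a lookup condition column

def dynamic_lookup(cache, key, value):
    """Return the NewLookupRow code: 0 = no change, 1 = insert, 2 = update."""
    if key not in cache:
        cache[key] = value   # row is new -> insert into the cache
        return 1
    if cache[key] != value:
        cache[key] = value   # row exists but changed -> update the cache
        return 2
    return 0                 # row exists and is unchanged

inserted = dynamic_lookup(cache, 103, "Carol")    # new key
updated = dynamic_lookup(cache, 101, "Alicia")    # changed value
unchanged = dynamic_lookup(cache, 102, "Bob")     # same value
print(inserted, updated, unchanged)  # 1 2 0
```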
18. What is a shared cache?
You can configure multiple lookup transformations in a mapping to share a single lookup cache. The integration service builds the cache when it processes the first lookup transformation. It uses the same cache to perform lookups for subsequent lookup transformations that share the cache.

19. What are unnamed and named caches?
Unnamed cache: When lookup transformations in a mapping have compatible caching structures, the integration service shares the cache by default. You can only share static unnamed caches.
Named cache: Use a persistent named cache when you want to share a cache file across mappings or share a dynamic and a static cache. The caching structures must match or be compatible with a named cache. You can share static and dynamic named caches.

20. How do you improve the performance of the lookup transformation?
Create an index on the columns used in the lookup condition.
Place conditions with an equality operator first.
Cache small lookup tables.
Use a persistent cache for static lookups.
Join tables in the database: if the source and the lookup table are in the same database, join the tables in the database rather than using a lookup transformation.
For flat file lookups, provide sorted files as the lookup source.
Avoid ORDER BY on all columns in the lookup source; specify explicitly the ORDER BY clause on the required columns.

INFORMATICA INTERVIEW QUESTIONS ON NORMALIZER TRANSFORMATION

1. What is a normalizer transformation?
The normalizer transformation receives a row that contains multiple-occurring columns and returns a row for each instance of the multiple-occurring data. This means it converts column data into row data. The normalizer is an active transformation.

2. Which transformation is required to process COBOL sources?
Since COBOL sources contain denormalized data, the normalizer transformation is used to normalize the COBOL sources.

3. What is VSAM?
VSAM (Virtual Storage Access Method) is a file access method for IBM mainframe operating systems. VSAM organizes records in indexed or sequential flat files.

4. What is a VSAM normalizer transformation?
The VSAM normalizer transformation is the source qualifier transformation for a COBOL source definition. A COBOL source is a flat file that can contain multiple-occurring data and multiple types of records in the same file.

5. What is a pipeline normalizer transformation?
A pipeline normalizer transformation processes multiple-occurring data from relational tables or flat files.

6. What are the occurs clause and redefines clause in the normalizer transformation?
An occurs clause is specified when the source row has multiple-occurring columns. A redefines clause is specified when the source has rows of multiple columns.

7. What are the generated key and generated column ID in a normalizer transformation?
The integration service increments the generated key sequence number each time it processes a source row. When the source row contains a multiple-occurring column or a multiple-occurring group of columns, the normalizer transformation returns a row for each occurrence, and each row contains the same generated key value. The normalizer transformation has a generated column ID (GCID) port for each multiple-occurring column. The GCID is an index for the instance of the multiple-occurring data. For example, if a column occurs 3 times in a source record, the normalizer returns a value of 1, 2 or 3 in the generated column ID.
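The generated key (one value per source row) and generated column ID (one value per occurrence) can be sketched in Python; the store/sales columns are invented for illustration:

```python
# Source rows with a column that occurs 3 times (e.g. sales per quarter)
source = [
    {"store": "S1", "sales": [100, 200, 300]},
    {"store": "S2", "sales": [400, 500, 600]},
]

normalized = []
gk = 0  # generated key: incremented once per source row
for row in source:
    gk += 1
    for gcid, value in enumerate(row["sales"], start=1):
        # one output row per occurrence; GCID indexes the occurrence (1..3)
        normalized.append({"store": row["store"], "sales": value,
                           "GK": gk, "GCID": gcid})

for r in normalized:
    print(r)  # 6 rows out for 2 rows in -- the normalizer is active
```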
INFORMATICA INTERVIEW QUESTIONS ON RANK TRANSFORMATION

1. What is a rank transformation?
A rank transformation is used to select the top or bottom rank of data. This means it selects the largest or smallest numeric value in a port or group. The rank transformation also selects the strings at the top or bottom of a session sort order. The rank transformation is an active transformation.

2. How do you specify the number of rows you want to rank in a rank transformation?
In the rank transformation properties, there is an option 'Number of Ranks' for specifying the number of rows you want to rank.

3. How to select either top or bottom ranking for a column?
In the rank transformation properties, there is an option 'Top/Bottom' for selecting the top or bottom ranking for a column.

4. Can we specify ranking on more than one port?
No. We can specify to rank the data based on only one port. In the ports tab, you have to check the R option for designating the port as a rank port, and this option can be checked only on one port.

5. What is the RANKINDEX port?
The Designer creates a RANKINDEX port for each rank transformation. The integration service uses the rank index port to store the ranking position for each row in a group.

6. What is the rank cache?
The integration service compares input rows in the data cache; if the input row out-ranks a cached row, the integration service replaces the cached row with the input row. If you configure the rank transformation to rank across multiple groups, the integration service ranks incrementally for each group it finds. The integration service stores group information in the index cache and row data in the data cache.

INFORMATICA INTERVIEW QUESTIONS ON ROUTER TRANSFORMATION

1. What is a router transformation?
A router is used to filter the rows in a mapping. Unlike the filter transformation, you can specify one or more conditions in a router transformation. The router is an active transformation.

2. How to improve the performance of a session using a router transformation?
Use a router transformation in a mapping instead of creating multiple filter transformations to perform the same task. The router transformation is more efficient in this case. When you use a router transformation in a mapping, the integration service processes the incoming data only once. When you use multiple filter transformations, the integration service processes the incoming data for each transformation.

3. What are the different groups in a router transformation?
The router transformation has the following types of groups:
Input
Output

4. How many types of output groups are there?
There are two types of output groups:
User-defined group
Default group

5. Where do you specify the filter conditions in the router transformation?
You can create the group filter conditions in the groups tab using the expression editor.

6. Can you connect ports of two output groups from a router transformation to a single target?
No. You cannot connect more than one output group to one target or to a single input group transformation.
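The rank transformation's behaviour (Top/Bottom, Number of Ranks, RANKINDEX per group) can be sketched in Python; the department and salary values are invented for illustration:

```python
from collections import defaultdict

rows = [("dept10", 900), ("dept10", 1200), ("dept10", 700),
        ("dept20", 1500), ("dept20", 1100), ("dept20", 1300)]

TOP_N = 2  # the 'Number of Ranks' property

groups = defaultdict(list)
for dept, salary in rows:
    groups[dept].append(salary)

ranked = []
for dept, salaries in groups.items():
    # Top/Bottom = Top: keep the N largest; RANKINDEX is the position in the group
    for rank_index, salary in enumerate(sorted(salaries, reverse=True)[:TOP_N], start=1):
        ranked.append((dept, salary, rank_index))

print(ranked)
# [('dept10', 1200, 1), ('dept10', 900, 2), ('dept20', 1500, 1), ('dept20', 1300, 2)]
```

Dropping the rows outside the top N is why the rank transformation changes the row count and is classified as active.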
INFORMATICA INTERVIEW QUESTIONS ON SEQUENCE GENERATOR TRANSFORMATION
1. What is a sequence generator transformation? A sequence generator transformation generates numeric values. Sequence generator transformation is a passive transformation.
2. What is the use of a sequence generator transformation? A sequence generator is used to create unique primary key values, replace missing primary key values or cycle through a sequential range of numbers.
3. What are the ports in sequence generator transformation? A sequence generator contains two output ports: NEXTVAL and CURRVAL.
4. What is the maximum number that a sequence generator can generate? The maximum value is 9,223,372,036,854,775,807.
5. When you connect both the NEXTVAL and CURRVAL ports to a target, what will be the output values of these ports? The output values are:
NEXTVAL CURRVAL
1 2
2 3
3 4
4 5
5 6
6. What will be the value of CURRVAL in a sequence generator transformation? CURRVAL is the sum of NEXTVAL and the 'Increment By' value.
7. What will be the output value if you connect only CURRVAL to the target without connecting NEXTVAL? The integration service passes a constant value for each row.
8. What is the default number of cached values for a sequence generator transformation? For non-reusable sequence generators, the number of cached values is set to zero. For reusable sequence generators, the number of cached values is set to 1000.
9. How do you configure a sequence generator transformation? The following properties need to be configured for a sequence generator transformation: Start Value, Increment By, End Value, Current Value, Cycle, Number of Cached Values.

INFORMATICA INTERVIEW QUESTIONS ON SORTER TRANSFORMATION
1. What is a sorter transformation? Sorter transformation is used to sort the data. You can sort the data either in ascending or descending order according to a specified sort key.
2. Why is sorter an active transformation? As sorter transformation can suppress the duplicate records in the source (when the distinct option is checked), it is called an active transformation.
3. How to improve the performance of a session using sorter transformation? Sort the data using sorter transformation before passing it into an aggregator or joiner transformation. As the data is sorted, the integration service uses memory to do the aggregate and join operations and does not use cache files to process the data.
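The NEXTVAL/CURRVAL relationship can be illustrated with a small Python generator. This is only a sketch of the observable behavior (CURRVAL = NEXTVAL + Increment By), not how the integration service is implemented:

```python
import itertools

def sequence_generator(start=1, increment_by=1, end=None):
    """Yield (NEXTVAL, CURRVAL) pairs: CURRVAL is NEXTVAL plus Increment By."""
    nextval = start
    while end is None or nextval <= end:
        yield nextval, nextval + increment_by
        nextval += increment_by

# First five pairs match the NEXTVAL/CURRVAL table in the answer above
pairs = list(itertools.islice(sequence_generator(), 5))
```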
INFORMATICA INTERVIEW QUESTIONS ON SOURCE QUALIFIER TRANSFORMATION
1. What is a source qualifier transformation? A source qualifier represents the rows that the integration service reads when it runs a session. Source qualifier is an active transformation.
2. Why do you need a source qualifier transformation? The source qualifier transformation converts the source data types into informatica native data types.
3. What are the different tasks a source qualifier can do? Join two or more tables originating from the same source (homogeneous sources) database, filter the rows, sort the data, select distinct values from the source, create a custom query, and specify a pre-SQL and post-SQL.
4. What is the default join in source qualifier transformation? The source qualifier transformation joins the tables based on the primary key-foreign key relationship.
5. How to create a custom join in source qualifier transformation? When there is no primary key-foreign key relationship between the tables, you can specify a custom join using the 'User-Defined Join' option in the properties tab of source qualifier.
6. How to join heterogeneous sources and flat files? Use joiner transformation to join heterogeneous sources and flat files.
7. How do you configure a source qualifier transformation? SQL Query, User-Defined Join, Source Filter, Number of Sorted Ports, Select Distinct, Pre-SQL, Post-SQL.

INFORMATICA INTERVIEW QUESTIONS ON SQL TRANSFORMATION
1. What is SQL transformation? SQL transformation processes SQL queries midstream in a pipeline, and you can insert, update, delete and retrieve rows from a database.
2. How do you configure a SQL transformation? The following options are required to configure SQL transformation: Mode: specifies the mode in which SQL transformation runs. Database type: the type of database that SQL transformation connects to. Connection type: pass database connection to the SQL transformation at run time or specify a connection object.
3. What are the different modes in which a SQL transformation runs? SQL transformation supports two modes. Script mode: the SQL transformation runs scripts that are externally located. You can pass a script name to the transformation with each input row. The SQL transformation outputs one row for each input row. Query mode: the SQL transformation executes a query that you define in a query editor. You can pass parameters to the query to define dynamic queries. You can output multiple rows when the query has a SELECT statement.
4. When you configure an SQL transformation to run in script mode, what are the ports that the designer adds to the SQL transformation? The designer adds the following ports to the SQL transformation in script mode: ScriptName: an input port; it receives the name of the script to execute for the current row. ScriptResult: an output port; it returns PASSED if the script execution succeeds for the row, otherwise it returns FAILED. ScriptError: an output port; it returns the errors that occur when a script fails for a row.
5. In which cases does the SQL transformation become a passive transformation and an active transformation? If you run the SQL transformation in script mode, it becomes a passive transformation. If you run the SQL transformation in query mode and the query has a SELECT statement, it becomes an active transformation.
6. What are the types of SQL queries you can specify in the SQL transformation when you use it in query mode? Static SQL query: the query statement does not change, but you can use query parameters to change the data. The integration service prepares the query once and runs the query for all input rows. Dynamic SQL query: the query statement can be changed. The integration service prepares a query for each input row.
7. What are the types of connections available to connect the SQL transformation to the database? Static connection: configure the connection object in the session. You must first create the connection object in workflow manager. Logical connection: pass a connection name to the SQL transformation as input data at run time. You must first create the connection object in workflow manager. Full database connection: pass the connect string, user name, password and other connection information to SQL transformation input ports at run time.
8. How do you find the number of rows inserted, updated or deleted in a table? You can enable the NumRowsAffected output port to return the number of rows affected by the INSERT, UPDATE or DELETE query statements in each input row. This NumRowsAffected option works in query mode.
9. What will be the output of NumRowsAffected port for a SELECT statement? The NumRowsAffected output is zero for the SELECT statement.
10. When you enable the NumRowsAffected output port in script mode, what will be the output? In script mode, the NumRowsAffected port always returns NULL.
11. How do you limit the number of rows returned by the select statement? You can limit the number of rows by configuring the Max Output Row Count property. To configure unlimited output rows, set Max Output Row Count to zero.

INFORMATICA INTERVIEW QUESTIONS ON STORED PROCEDURE TRANSFORMATION
1. What is a stored procedure? A stored procedure is a precompiled collection of database procedural statements. Stored procedures are stored and run within the database.
2. Give some examples where a stored procedure is used? The stored procedure can be used to do the following tasks: check the status of a target database before loading data into it, determine if enough space exists in a database, perform a specialized calculation, and drop and recreate indexes.
3. What are the parameter types in a stored procedure? There are three types of parameters in a stored procedure: IN: input passed to the stored procedure. OUT: output returned from the stored procedure. INOUT: defines the parameter as both input and output. Only Oracle supports the INOUT parameter type.
4. What is a connected stored procedure transformation? The stored procedure transformation is connected to the other transformations in the mapping pipeline.
5. In which scenarios is a connected stored procedure transformation used? Run a stored procedure every time a row passes through the mapping. Pass parameters to the stored procedure and receive multiple output parameters.
6. What is an unconnected stored procedure transformation? The stored procedure transformation is not connected directly to the flow of the mapping. It either runs before or after the session or is called by an expression in another transformation in the mapping.
7. In which scenarios is an unconnected stored procedure transformation used? Run a stored procedure before or after a session. Run a stored procedure once during a mapping, such as pre- or post-session. Run a stored procedure based on data that passes through the mapping, such as when a specific port does not contain a null value. Run nested stored procedures. Call multiple times within a mapping, such as running a calculation against an input port.
8. What are the options available to specify when the stored procedure transformation needs to be run? The following options describe when the stored procedure transformation runs: Normal: the stored procedure runs where the transformation exists in the mapping on a row-by-row basis. This is useful for calling the stored procedure for each row of data that passes through the mapping. Pre-load of the Source: before the session retrieves data from the source, the stored procedure runs. This is useful for verifying the existence of tables or performing joins of data in a temporary table. Post-load of the Source: after the session retrieves data from the source, the stored procedure runs. This is useful for removing temporary tables. Pre-load of the Target: before the session sends data to the target, the stored procedure runs. This is useful for verifying target tables or disk space on the target system. Post-load of the Target: after the session sends data to the target, the stored procedure runs. This is useful for re-creating indexes on the database. A connected stored procedure transformation runs only in Normal mode. An unconnected stored procedure transformation runs in all the above modes.
9. What is execution order in stored procedure transformation? The order in which the Integration Service calls the stored procedure used in the transformation, relative to any other stored procedures in the same mapping. It is only used when the Stored Procedure Type is set to anything except Normal and more than one stored procedure exists.
10. What is PROC_RESULT in stored procedure transformation? PROC_RESULT is a system variable, where the output of an unconnected stored procedure transformation is assigned by default.

INFORMATICA INTERVIEW QUESTIONS ON TRANSACTION CONTROL TRANSFORMATION
1. What is a transaction control transformation? A transaction is a set of rows bound by a commit or rollback of rows. The transaction control transformation is used to commit or rollback a group of rows.
2. What is the commit type if you have a transaction control transformation in the mapping? The commit type is "user-defined".
3. What are the different transaction levels available in transaction control transformation? The following are the transaction levels or built-in variables: TC_CONTINUE_TRANSACTION: the Integration Service does not perform any transaction change for this row. This is the default value of the expression. TC_COMMIT_BEFORE: the Integration Service commits the transaction, begins a new transaction, and writes the current row to the target. The current row is in the new transaction. TC_COMMIT_AFTER: the Integration Service writes the current row to the target, commits the transaction, and begins a new transaction. The current row is in the committed transaction. TC_ROLLBACK_BEFORE: the Integration Service rolls back the current transaction, begins a new transaction, and writes the current row to the target. The current row is in the new transaction. TC_ROLLBACK_AFTER: the Integration Service writes the current row to the target, rolls back the transaction, and begins a new transaction. The current row is in the rolled back transaction.
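As a rough illustration of the TC_COMMIT_BEFORE versus TC_COMMIT_AFTER semantics (which transaction the current row lands in), here is a hypothetical Python sketch, not PowerCenter code; the constant values and function name are mine:

```python
TC_CONTINUE, TC_COMMIT_BEFORE, TC_COMMIT_AFTER = 0, 1, 2

def apply_transaction_control(rows):
    """Group (row, flag) pairs into committed transactions.
    Returns (list of committed transactions, still-open transaction)."""
    committed, current = [], []
    for row, flag in rows:
        if flag == TC_COMMIT_BEFORE:
            if current:
                committed.append(current)  # commit the open transaction first
            current = [row]                # current row starts the new transaction
        elif flag == TC_COMMIT_AFTER:
            current.append(row)
            committed.append(current)      # current row is in the committed transaction
            current = []
        else:
            current.append(row)            # TC_CONTINUE: no transaction change
    return committed, current
```

Note how "b" flagged TC_COMMIT_BEFORE ends up in the new transaction, while "c" flagged TC_COMMIT_AFTER ends up inside the transaction being committed.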
INFORMATICA INTERVIEW QUESTIONS ON UNION TRANSFORMATION
1. What is a union transformation? A union transformation is used to merge data from multiple sources, similar to the UNION ALL SQL statement which combines the results from two or more SQL statements.
2. Why is union transformation an active transformation? Union is an active transformation because it combines two or more data streams into one. Though the total number of rows passing into the Union is the same as the total number of rows passing out of it, and the sequence of rows from any given input stream is preserved in the output, the positions of the rows are not preserved, i.e. row number 1 from input stream 1 might not be row number 1 in the output stream. Union does not even guarantee that the output is repeatable.
3. What are the guidelines to be followed while using union transformation? The following rules and guidelines need to be taken care of while working with union transformation: You can create multiple input groups, but only one output group. All input groups and the output group must have matching ports. The precision, datatype, and scale must be identical across all groups. You cannot use a Sequence Generator or Update Strategy transformation upstream from a Union transformation. The Union transformation does not remove duplicate rows; to remove duplicate rows, you must add another transformation such as a Router or Filter transformation. The Union transformation does not generate transactions.
4. As union transformation gives UNION ALL output, how will you get the UNION output? Pass the output of union transformation to a sorter transformation. In the properties of sorter transformation check the option Select Distinct. Alternatively you can pass the output of union transformation to aggregator transformation and in the aggregator transformation specify all ports as group by ports.

INFORMATICA INTERVIEW QUESTIONS ON UPDATE STRATEGY TRANSFORMATION
1. What is an update strategy transformation? Update strategy transformation is used to flag source rows for insert, update, delete or reject within a mapping. Based on this flagging each row will be either inserted, updated or deleted from the target. Alternatively the row can be rejected.
2. Why is update strategy an active transformation? As update strategy transformation can reject rows, it is called an active transformation.
3. What are the constants used in update strategy transformation for flagging the rows? DD_INSERT is used for inserting the rows; the numeric value is 0. DD_UPDATE is used for updating the rows; the numeric value is 1. DD_DELETE is used for deleting the rows; the numeric value is 2. DD_REJECT is used for rejecting the rows; the numeric value is 3.
4. If you place an aggregator after the update strategy transformation, how will the output of the aggregator be affected? The update strategy transformation flags the rows for insert, update, delete or reject before you perform the aggregate calculation. How you flag a particular row determines how the aggregator transformation treats any values in that row used in the calculation. For example, if you flag a row for delete and then later use the row to calculate the sum, the integration service subtracts the value appearing in this row. If the row had been flagged for insert, the integration service would add its value to the sum.
5. If you have an update strategy transformation in the mapping, what should be the value selected for the 'Treat Source Rows As' option in session properties? The value selected for the option is 'Data Driven'. The integration service then follows the instructions coded in the update strategy transformation.
6. How to update the target table without using update strategy transformation? In the session properties, there is an option 'Treat Source Rows As'. Using this option you can specify whether all the source rows need to be inserted, updated or deleted.
7. In which files will the data rejected by update strategy transformation be written? If the update strategy transformation is configured to Forward Rejected Rows, the integration service forwards the rejected rows to the next transformation and writes them to the session reject file. If you do not select the forward reject rows option, the integration service drops rejected rows and writes them to the session log file; it does not generate a reject file. If you enable row error handling, the Integration Service writes the rejected rows and the dropped rows to the row error logs.
8. If you have an update strategy transformation in the mapping and you did not select the value 'Data Driven' for the 'Treat Source Rows As' option in the session, how will the session behave? If you do not choose Data Driven when a mapping contains an Update Strategy or Custom transformation, the Workflow Manager displays a warning. When you run the session, the Integration Service does not follow the instructions in the Update Strategy transformation in the mapping to determine how to flag rows.
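A minimal Python sketch of how an aggregator downstream of an update strategy treats flagged rows when computing a sum (DD_DELETE subtracts, DD_INSERT adds). The constants mirror the numeric values listed above; the function itself is an illustration, not Informatica behavior in full:

```python
DD_INSERT, DD_UPDATE, DD_DELETE, DD_REJECT = 0, 1, 2, 3

def flagged_sum(rows):
    """Sum (value, flag) pairs the way the text describes: rows flagged
    DD_DELETE have their value subtracted, DD_INSERT rows are added."""
    total = 0
    for value, flag in rows:
        if flag == DD_INSERT:
            total += value
        elif flag == DD_DELETE:
            total -= value
    return total
```

So inserting 10 and 3 while deleting 5 yields an aggregate of 8 rather than 18.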
INFORMATICA SCENARIO BASED INTERVIEW QUESTIONS WITH ANSWERS - PART 1
I have listed the following informatica scenarios which are frequently asked in informatica interviews. These scenario based questions help you a lot in gaining confidence in interviews.

1. How to generate sequence numbers using expression transformation?
Solution: In the expression transformation, create a variable port and increment it by 1. Then assign the variable port to an output port. In the expression transformation, the ports are:
V_count=V_count+1
O_count=V_count

2. Design a mapping to load the last 3 rows from a flat file into a target?
Solution: Consider the source has the following data:
col
a
b
c
d
e
Step1: You have to assign row numbers to each record. Generate the row numbers using the expression transformation as mentioned above and call the row number generated port O_count. Create a DUMMY output port in the same expression transformation and assign 1 to that port, so that the DUMMY output port always returns 1 for each row. The output of expression transformation will be:
col, o_count, o_dummy
a, 1, 1
b, 2, 1
c, 3, 1
d, 4, 1
e, 5, 1
Step2: Pass the output of expression transformation to an aggregator and do not specify any group by condition. Create an output port O_total_records in the aggregator and assign the O_count port to it. The aggregator will return the last row by default. The output of aggregator contains the DUMMY port which has value 1 and the O_total_records port which has the value of the total number of records in the source. The output of aggregator transformation will be:
o_total_records, o_dummy
5, 1
Step3: Pass the output of expression transformation and aggregator transformation to joiner transformation and join on the DUMMY port. In the joiner transformation check the property sorted input, so that you can connect both expression and aggregator to the joiner transformation. In the joiner transformation, the join condition will be O_dummy (port from aggregator transformation) = O_dummy (port from expression transformation). The output of joiner transformation will be:
col, o_count, o_total_records
a, 1, 5
b, 2, 5
c, 3, 5
d, 4, 5
e, 5, 5
Step4: Now pass the output of joiner transformation to filter transformation and specify the filter condition as O_total_records (port from aggregator) - O_count (port from expression) <= 2. In the filter transformation, the filter condition will be O_total_records - O_count <= 2. The output of filter transformation will be:
col, o_count, o_total_records
c, 3, 5
d, 4, 5
e, 5, 5

3. Design a mapping to load the first 3 rows from a flat file into a target?
Solution: You have to assign row numbers to each record. Generate the row numbers either using the expression transformation as mentioned above or use a sequence generator transformation. Then pass the output to filter transformation and specify the filter condition as O_count <= 3.

4. Design a mapping to load the first record from a flat file into one table A, the last record from a flat file into table B and the remaining records into table C?
Solution: This is similar to the above problem; the first 3 steps are the same. In the last step, instead of using the filter transformation, you have to use a router transformation. In the router transformation create two output groups. In the first group, the condition should be O_count=1; connect the corresponding output group to table A. In the second group, the condition should be O_count=O_total_records; connect the corresponding output group to table B. The output of the default group should be connected to table C.

Q1. Consider the following products data which contain duplicate rows:
A
B
C
C
B
D
B
Design a mapping to load all unique products in one table and the duplicate rows in another table.
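The first/last/remaining routing described above (row numbers from an expression, a total row count from an aggregator, then a router on the count) can be sketched in plain Python. This is illustrative only, not Informatica syntax, and the function name is mine:

```python
def route_first_last_middle(rows):
    """Route rows: first record to table A, last record to table B,
    the remaining records to table C."""
    numbered = [(row, i + 1) for i, row in enumerate(rows)]  # expression: O_count
    total = len(rows)                                        # aggregator: O_total_records
    table_a = [r for r, c in numbered if c == 1]             # router group: O_count = 1
    table_b = [r for r, c in numbered if c == total]         # router group: O_count = O_total_records
    table_c = [r for r, c in numbered if 1 < c < total]      # default group
    return table_a, table_b, table_c
```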
Solution: Use sorter transformation and sort the products data. 1 C. the DUMMY output port always return 1 for each row. 1 D. Design a mapping to load the first record from a flat file into one table A. Design a mapping to load all unique products in one table and the duplicate rows in another table. 5 4. 3 C. Connect the output of default group to another table. 1 B. 4. 1 B. 5 d. The output of expression transformation will be Product. O_count_of_each_product A. aggregator transformation to joiner transformation and join on the products port. 2 D. 3 B. The output of aggregator will be Product. 1. 1 Now pass the output of expression transformation. the condition should be O_count=1 and connect the corresponding output group to table A. the first 3 steps are same. create one group and specify the group condition as O_dummy=O_count_of_each_product. 2 C. 3 B. Consider the following products data which contain duplicate rows. 3 C. 1 C. So that. 1 Now pass the output of joiner to a router transformation. 5. 1. The output of default group should be connected to table C. the filter condition will be O_total_records . 1.
. Pass the output to an expression transformation and create a dummy port O_dummy and assign 1 to that port. In the second group. 5 e. O_dummy A. 3. 1. Check the group by on product port. The output of joiner will be product. 2 D. 1 B. In the first group.
Q2. Design a mapping to load each product once into one table and the remaining products which are duplicated into another table. The first table should contain the following output:
A
B
C
D
The second table should contain the following output:
B
B
C
Solution: Use sorter transformation and sort the products data. Pass the output to an expression transformation and create a variable port V_curr_product and assign the product port to it. Create one more variable port V_prev_product and assign the product port to it. Then create a V_count port and in the expression editor write IIF(V_curr_product=V_prev_product, V_count+1, 1). Now create an output port O_count and assign the V_count port to it. In the expression transformation, the ports are:
Product
V_curr_product=product
V_count=IIF(V_curr_product=V_prev_product, V_count+1, 1)
V_prev_product=product
O_count=V_count
The output of expression transformation will be:
Product, O_count
A, 1
B, 1
B, 2
B, 3
C, 1
C, 2
D, 1
Now pass the output of expression transformation to a router transformation. In the router transformation, create one group and specify the condition as O_count=1. Then connect this group to one table. Connect the output of the default group to another table.

INFORMATICA SCENARIO BASED QUESTIONS - PART 3
Q1. Consider a source with the ports product_id and product_type, containing 12 records across the product types video, Audio and Movie (for example 10, video; 20, Audio; 30, Audio; 40, Audio; 50, Audio; 60, Movie; and so on). Assume that only 3 product types are available in the source, and you don't know how many products are available in each product type. Design a mapping to select 9 products in such a way that 3 products should be selected from video, 3 products should be selected from Audio and the remaining 3 products should be selected from Movie.
Solution:
Step1: Use sorter transformation and sort the data using the key as product_type.
Step2: Connect the sorter transformation to an expression transformation. In the expression transformation, the ports will be:
product_id
product_type
V_curr_prod_type=product_type
V_count=IIF(V_curr_prod_type=V_prev_prod_type, V_count+1, 1)
V_prev_prod_type=product_type
O_count=V_count
Step3: Now connect the expression transformation to a filter transformation and specify the filter condition as O_count<=3. Pass the output of filter to a target table.

Q2. In the above problem Q1, if the number of products in a particular product type is less than 3, then you won't get the total 9 records in the target table. For example, if the number of products in videos is 1, then the remaining 2 records should come from Audios or Movies. So, design a mapping in such a way that even if the number of products in a particular product type is less than 3, the total number of records in the target table is always 9.
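The group-wise top-3 logic above (sort on product_type, a compare-with-previous counter, then a filter on the counter) can be sketched in Python. This is an illustration of the algorithm only, not Informatica syntax; the function name and sample data are mine:

```python
def top_n_per_group(rows, n=3):
    """Keep at most n rows per product_type.
    rows: (product_id, product_type) pairs."""
    out, prev_type, count = [], None, 0
    for pid, ptype in sorted(rows, key=lambda r: r[1]):  # sorter keyed on product_type
        count = count + 1 if ptype == prev_type else 1   # V_count = IIF(curr=prev, V_count+1, 1)
        prev_type = ptype
        if count <= n:                                   # filter: O_count <= 3
            out.append((pid, ptype))
    return out
```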
Solution: The first two steps are the same as above. In the expression transformation, the ports will be:
product_id
product_type
V_curr_prod_type=product_type
V_count=IIF(V_curr_prod_type=V_prev_prod_type, V_count+1, 1)
V_prev_prod_type=product_type
O_count=V_count
Step3: Connect the expression transformation to a sorter transformation and sort the data using the key as O_count. The ports in sorter transformation will be:
product_id
product_type
O_count (sort key)
Step4: Discard the O_count port and connect the sorter transformation to an expression transformation. The ports in expression transformation will be:
product_id
product_type
V_count=V_count+1
O_prod_count=V_count
Step5: Connect the expression to a filter transformation and specify the filter condition as O_prod_count<=9. Connect the filter transformation to a target table.

1. Design a mapping to convert column data into row data without using the normalizer transformation. The source data looks like:
id, col1, col2, col3
10, a, b, c
20, d, e, f
The target table data should look like:
id, value
10, a
10, b
10, c
20, d
20, e
20, f
Solution: Create three expression transformations with one port each. Connect col1 from the Source Qualifier to the port in the first expression transformation. Connect col2 from the Source Qualifier to the port in the second expression transformation. Connect col3 from the Source Qualifier to the port in the third expression transformation. Create a union transformation with three input groups, and each input group should have one port. Now connect the expression transformations to the input groups and connect the union transformation to the target table.

2. Design a mapping to convert row data into column data. The source data looks like:
id, value
10, a
10, b
10, c
20, d
20, e
20, f
The target table data should look like:
id, col1, col2, col3
10, a, b, c
20, d, e, f
Solution:
Step1: Use sorter transformation and sort the data using the id port as the key. Then connect the sorter transformation to the expression transformation.
Step2: In the expression transformation, create the ports and assign the expressions as mentioned below:
id
value
V_curr_id=id
V_count=IIF(V_curr_id=V_prev_id, V_count+1, 1)
V_prev_id=id
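The column-to-row conversion (three expression transformations feeding a three-group union) can be sketched in Python. Since a union transformation does not guarantee row order, the usage below compares sorted output; the function name is illustrative:

```python
def columns_to_rows(rows):
    """Emit one (id, value) row per column, one pass per column,
    mimicking three per-column streams concatenated by a union."""
    out = []
    for col in (1, 2, 3):            # one expression transformation per column
        for id_, *cols in rows:
            out.append((id_, cols[col - 1]))
    return out
```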
Here in this table. From the tree structure. Write a query to load the target table with the below data. Design a mapping to accommodate these type of null conditions. As the element "A"
. A. c1. B. column C2 is parent of column C3. B. D. This is an extension to the problem Q1. Q2. NULL 2. E. create the ports and assign the expressions as mentioned below.NULL) O_col2= IIF(V_count=2. Then c1 becomes the parent of c4. I. You can easily derive the parentchild relationship between the elements. col1. you can easily derive the parent-child relationship between the elements. Q1. F.value. Here you need to generate sequence numbers for each element and then you have to get the parent id. then C1 becomes the parent of C3 and c3 is parent of C4. B is parent of D and E. D. A. B.O_col1= IIF(V_count=1. 4 I have provided the solution for this problem in Oracle Sql query. c1. id (specify group by on this port) O_col1 O_col2 O_col3 col1=MAX(O_col1) col2=MAX(O_col2) col3=MAX(O_col3) Stpe4: Now connect the ports id. A. col3 from aggregator transformation to the target table. C.
The above tree structure data is represented in a table as shown below. For example. column C3 is parent of column C4. 3 8. INFORMATICA SCENARIO BASED QUESTIONS . D. c4 A. H A. Let say column C2 has null for all the rows. B. Design a mapping to load the target table with the below data. C. I A.value. Q1. G. Let say both columns c2 and c3 has null for all the rows. C. D. 3 7. As the element A is root element. 1 4. element. A. The above tree structure data is represented in a table as shown below. Here you need to generate sequence numbers for each element and then you have to get the parent id. c2. D. C. F. column C1 is parent of column C2. G. column C2 is parent of column C3. column C3 is parent of column C4. B. column C1 is parent of column C2. A.PART 4 Take a look at the following tree structure diagram. 4 9. c2. NULL A. In the aggregator transforamtion. B. F. H. col2. it is at level 0. B.value.PART 3 The source data is represented in the form the tree structure. 2 6. If you are interested you can Click Here to see the solution. For example. B. it does not have any parent and its parent_id is NULL. 1 3. NULL A.NULL) O_col3= IIF(V_count=3.
id. E. NULL Here in this table.NULL) Step3: Connect the expression transformation to aggregator transformation. parent_id 1.
ROWNUM SEQ FROM (SELECT DISTINCT PARENT. c3...PARENT) LEFT OUTER JOIN T2 P ON (T3. 2. 2 5. Note: The unpivot function works in oracle 11g.SEQ Id. Solution: STEP1: Drag the source to the mapping designer and then in the Source Qualifier Transformation properties.CHILD ELEMENT.. 2. parent_id 1. D. The source data looks like as Id 1 2 3 4 5 6 7 8 . 4 9. T3. 1000 Create a workflow to load only the Fibonacci numbers in the target table. 1 3. 0.c2 as 1. It will have sequence numbers from 1 to 1000. C.PARENT = P. 3. The target table data should look like as Id 1 2 3 5 8 13 .1) OVER (PARTITION BY r ORDER BY lev) CHILD FROM (SELECT c1. 3 7. PARENT FROM T1 WHERE LEV=0 ) SELECT C. P. C. E. I. element.is at root. 1. 2 6. CHILD FROM T1 WHERE CHILD IS NOT NULL UNION ALL SELECT DISTINCT NULL.. id.
INFORMATICA SCENARIO BASED QUESTIONS . 1 4. A. ROWNUM r FROM table_name ) UNPIVOT (value FOR lev IN (c1 as 0. 2. B. The source data contains only column 'id'.CHILD = C. LEV. t2 AS ( SELECT PARENT. F.SEQ. NULL 2. LEAD(value. H.SEQ PARENT_ID FROM T3
INNER JOIN T2 C ON (T3. lev. This will sort the source data
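The sequence-number and parent-id derivation that both the mapping and the SQL aim at can be sketched in Python, assuming the (parent, child) pairs are listed level by level as in the table above. The function name is illustrative:

```python
def number_tree(edges):
    """Assign a sequence number to each distinct element in first-seen
    order (root first), then look up each element's parent sequence number.
    edges: (parent, child) pairs; the root appears only on the parent side."""
    seq, order, parents = {}, [], {}

    def visit(node):
        if node not in seq:
            seq[node] = len(seq) + 1
            order.append(node)

    for parent, child in edges:
        visit(parent)
        visit(child)
        parents[child] = parent
    # (id, element, parent_id); the root's parent_id is None (NULL)
    return [(seq[n], n, seq.get(parents.get(n))) for n in order]
```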
. 3. c2.c3 as 2. c4. 2. 4 Solution:
WITH t1 AS ( SELECT VALUE PARENT..PARENT) ORDER BY C. 1. T3 AS ( SELECT DISTINCT PARENT..LEV. LEV FROM T1 ORDER BY LEV ) ).PART 5 Q1. LEV. G. it does not have any parent and its parent_id is NULL. set the number of sorted ports to one.c4 as 3)) ).. In Fibonacci series each subsequent number is the sum of previous two numbers. Here assume that the first two numbers of the fibonacci series are 1 and 2. 3 8.
1. 3.1.'. The output should look like as below.m.'.1. INSTR(val. v_prev_val2. Q2.1. Then connect the Union Transformation to the Sorter Transformation. NULL ) STEP4: Now pass the output of Expression Transformation to the Target definition.1)+1. 1. IIF(v_sum=id.2)+1).1. SUBSTR(val. val ports of Target Definition. So that we will get the numbers in sequence as 1.
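The Fibonacci filter logic above (variable ports carrying the previous two values, an o_flag output, then a filter on o_flag=1) can be sketched in Python. This is an illustration of the algorithm, not the expression-port syntax; it assumes the ids arrive sorted ascending from 1 and that the series starts with 1 and 2, as stated in the question:

```python
def fibonacci_filter(ids):
    """Keep only the ids that belong to the Fibonacci series 1, 2, 3, 5, 8, ..."""
    out, prev1, prev2 = [], 1, 2
    for i in ids:
        if i in (1, 2):            # o_flag = IIF(id=1 or id=2, 1, ...)
            out.append(i)
        elif i == prev1 + prev2:   # v_sum = v_prev_val1 + v_prev_val2
            out.append(i)
            prev1, prev2 = prev2, i
    return out
```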
Here the "val" column contains comma delimited data and has three fields in that column.ro.'.b. Connect id. SUBSTR(val. id 1 1 1 2 2 2 3 3 3 val a b c pq m n asz ro liqt
.1. 2. INSTR(val. v_prev_val2) ) o_flag = IIF(id=1 or id=2.1)-1 ). o_val ports of Expression Transformation to the id. .in ascending order. For those who are interested to solve this problem in oracle sql. Create a workflow to split the fields in “val” column to separate rows.'. SUBSTR(val. INSTR(val.1. In the Expression Transformation.liqt
STEP1: Connect three Source Qualifier transformations to the Source Definition STEP2: Now connect all the three Source Qualifier transformations to the Union Transformation.0) ) STEP3: Now connect the Expression Transformation to the Filter Transformation and specify the Filter Condition as o_flag=1 STEP4: Connect the Filter Transformation to the Target Table. Assign the expressions to the ports as shown below. STEP3: Pass the output of Sorter Transformation to the Expression Transformation. IIF( v_sum=id.'.2)-INSTR(val.1)-1). Click Here.n asz. INSTR(val. 3.1000 STEP2: Connect the Source Qualifier Transformation to the Expression Transformation.'. v_prev_val1) ) v_prev_val2 = IIF(id=1 or id =2.1. v_sum. 2.c pq.'. The source data looks like as below id 1 2 3 val a.1.. IIF(v_sum = id. In the sorter transformation sort the data based on Id port in ascending order. 2. The ports in Expression Transformation are: id (input/output port) val (input port) v_currend_id (variable port) = id v_count (variable port) = IIF(v_current_id!=v_previous_id.'. The source table contains two columns "id" and "val".v_count+1) v_previous_id (variable port) = id o_val (output port) = DECODE(v_count.. Ports in Expression Transformation: id v_sum = v_prev_val1 + v_prev_val2 v_prev_val1 = IIF(id=1 or id=2. create three variable ports and one output port. The oracle sql query provides a dynamic solution where the "val" column can have varying number of fields in each row..'.1.'.
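The Fibonacci filter in Q1 above is easy to get wrong, so here is a plain-Python sketch (not Informatica) of the same port logic: the variable ports carry state from row to row, v_sum is evaluated before the previous values are updated (as the port order dictates), and o_flag plays the role of the Filter condition. It assumes the rows arrive sorted 1, 2, 3, ...

```python
def fibonacci_flags(max_id=1000):
    """Simulate the Expression + Filter transformations row by row."""
    loaded = []
    v_prev_val1, v_prev_val2 = 0, 0
    for row_id in range(1, max_id + 1):
        v_sum = v_prev_val1 + v_prev_val2      # top port: evaluated first
        if row_id == 1 or row_id == 2:
            v_prev_val1, v_prev_val2 = 1, 2    # seed the series with 1 and 2
            o_flag = 1
        elif v_sum == row_id:
            # row_id is the next Fibonacci number: shift the window forward
            v_prev_val1, v_prev_val2 = v_prev_val2, v_sum
            o_flag = 1
        else:
            o_flag = 0
        if o_flag == 1:                        # the Filter transformation
            loaded.append(row_id)
    return loaded

print(fibonacci_flags(100))  # [1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```

Only rows whose id equals the running sum of the previous two Fibonacci numbers pass the filter, which is exactly what the IIF expressions encode.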
SQL QUERIES INTERVIEW QUESTIONS ORACLE PART 1

As a database developer, writing SQL queries and PLSQL code is part of daily life. Having a good knowledge on SQL is really important. Here i am posting some practical examples on SQL queries.

To solve these interview questions on SQL queries, you have to create the products and sales tables in your oracle database. The "Create Table" and "Insert" statements are provided below.

CREATE TABLE PRODUCTS
(
PRODUCT_ID   INTEGER,
PRODUCT_NAME VARCHAR2(30)
);

CREATE TABLE SALES
(
SALE_ID    INTEGER,
PRODUCT_ID INTEGER,
YEAR       INTEGER,
QUANTITY   INTEGER,
PRICE      INTEGER
);

INSERT INTO PRODUCTS VALUES (100, 'Nokia');
INSERT INTO PRODUCTS VALUES (200, 'IPhone');
INSERT INTO PRODUCTS VALUES (300, 'Samsung');
INSERT INTO PRODUCTS VALUES (400, 'LG');

INSERT INTO SALES VALUES (1, 100, 2010, 25, 5000);
INSERT INTO SALES VALUES (2, 100, 2011, 16, 5000);
INSERT INTO SALES VALUES (3, 100, 2012, 8, 5000);
INSERT INTO SALES VALUES (4, 200, 2010, 10, 9000);
INSERT INTO SALES VALUES (5, 200, 2011, 15, 9000);
INSERT INTO SALES VALUES (6, 200, 2012, 20, 9000);
INSERT INTO SALES VALUES (7, 300, 2010, 20, 7000);
INSERT INTO SALES VALUES (8, 300, 2011, 18, 7000);
INSERT INTO SALES VALUES (9, 300, 2012, 20, 7000);
COMMIT;

The products table contains the below data.

SELECT * FROM PRODUCTS;

PRODUCT_ID PRODUCT_NAME
100        Nokia
200        IPhone
300        Samsung
400        LG

The sales table contains the following data.

SELECT * FROM SALES;

SALE_ID PRODUCT_ID YEAR QUANTITY PRICE
1       100        2010 25       5000
2       100        2011 16       5000
3       100        2012 8        5000
4       200        2010 10       9000
5       200        2011 15       9000
6       200        2012 20       9000
7       300        2010 20       7000
8       300        2011 18       7000
9       300        2012 20       7000

Here Quantity is the number of products sold in each year and Price is the sale price of each product. I hope you have created the tables in your oracle database. Now try to solve the below SQL queries.

1. Write a SQL query to find the products which have continuous increase in sales every year?

Solution: Here "IPhone" is the only product whose sales are increasing every year.

STEP1: First we will get the previous year sales for each product. The SQL query to do this is

SELECT P.PRODUCT_NAME,
       S.YEAR,
       S.QUANTITY,
       LEAD(S.QUANTITY, 1, 0) OVER (PARTITION BY P.PRODUCT_ID ORDER BY S.YEAR DESC) QUAN_PREV_YEAR
FROM   PRODUCTS P,
       SALES S
WHERE  P.PRODUCT_ID = S.PRODUCT_ID;

PRODUCT_NAME YEAR QUANTITY QUAN_PREV_YEAR
Nokia        2012 8        16
Nokia        2011 16       25
Nokia        2010 25       0
IPhone       2012 20       15
IPhone       2011 15       10
IPhone       2010 10       0
Samsung      2012 20       18
Samsung      2011 18       20
Samsung      2010 20       0

Here the lead analytic function will get the quantity of a product in its previous year.

STEP2: We will find the difference between the quantity of a product and its previous year's quantity. If this difference is greater than or equal to zero for all the rows, then the product is constantly increasing in sales. The final query to get the required result is

SELECT PRODUCT_NAME
FROM
(
SELECT P.PRODUCT_NAME,
       S.QUANTITY - LEAD(S.QUANTITY, 1, 0) OVER (PARTITION BY P.PRODUCT_ID ORDER BY S.YEAR DESC) QUAN_DIFF
FROM   PRODUCTS P,
       SALES S
WHERE  P.PRODUCT_ID = S.PRODUCT_ID
) A
GROUP BY PRODUCT_NAME
HAVING MIN(QUAN_DIFF) >= 0;

PRODUCT_NAME
IPhone

2. Write a SQL query to find the products which do not have sales at all?

Solution: "LG" is the only product which does not have sales at all. This can be achieved in three ways.

Method1: Using left outer join.

SELECT P.PRODUCT_NAME
FROM   PRODUCTS P
       LEFT OUTER JOIN SALES S ON (P.PRODUCT_ID = S.PRODUCT_ID)
WHERE  S.QUANTITY IS NULL;

PRODUCT_NAME
LG

Method2: Using the NOT IN operator.

SELECT P.PRODUCT_NAME
FROM   PRODUCTS P
WHERE  P.PRODUCT_ID NOT IN (SELECT DISTINCT PRODUCT_ID FROM SALES);

PRODUCT_NAME
LG

Method3: Using the NOT EXISTS operator.

SELECT P.PRODUCT_NAME
FROM   PRODUCTS P
WHERE  NOT EXISTS (SELECT 1 FROM SALES S WHERE S.PRODUCT_ID = P.PRODUCT_ID);

PRODUCT_NAME
LG
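The MIN(QUAN_DIFF) >= 0 idea in question 1 can be cross-checked with a small plain-Python sketch (not Oracle) over the same sample data: a product keeps increasing exactly when every year-over-year difference is non-negative.

```python
# Sample quantities from the SALES table above, keyed by product and year.
sales = {
    "Nokia":   {2010: 25, 2011: 16, 2012: 8},
    "IPhone":  {2010: 10, 2011: 15, 2012: 20},
    "Samsung": {2010: 20, 2011: 18, 2012: 20},
}

def increasing_products(sales):
    """Products whose quantity never drops from one year to the next.

    The SQL version also compares the earliest year against a default of 0,
    which is always >= 0, so checking consecutive diffs is equivalent.
    """
    result = []
    for product, by_year in sales.items():
        qtys = [q for _, q in sorted(by_year.items())]
        diffs = [b - a for a, b in zip(qtys, qtys[1:])]
        if min(diffs) >= 0:
            result.append(product)
    return result

print(increasing_products(sales))  # ['IPhone']
```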
3. Write a SQL query to find the products whose sales decreased in 2012 compared to 2011?

Solution: Here Nokia is the only product whose sales decreased in the year 2012 when compared with the sales in the year 2011. The SQL query to get the required output is

SELECT P.PRODUCT_NAME
FROM   PRODUCTS P,
       SALES S_2012,
       SALES S_2011
WHERE  P.PRODUCT_ID = S_2012.PRODUCT_ID
AND    S_2012.YEAR = 2012
AND    P.PRODUCT_ID = S_2011.PRODUCT_ID
AND    S_2011.YEAR = 2011
AND    S_2012.QUANTITY < S_2011.QUANTITY;

PRODUCT_NAME
Nokia

4. Write a query to select the top product sold in each year?

Solution: Nokia is the top product sold in the year 2010. Similarly, Samsung in 2011 and IPhone, Samsung in 2012. The query for this is

SELECT PRODUCT_NAME,
       YEAR,
       QUANTITY
FROM
(
SELECT P.PRODUCT_NAME,
       S.YEAR,
       S.QUANTITY,
       RANK() OVER (PARTITION BY S.YEAR ORDER BY S.QUANTITY DESC) RNK
FROM   PRODUCTS P,
       SALES S
WHERE  P.PRODUCT_ID = S.PRODUCT_ID
) A
WHERE RNK = 1;

PRODUCT_NAME YEAR QUANTITY
Nokia        2010 25
Samsung      2011 18
IPhone       2012 20
Samsung      2012 20

5. Write a query to find the total sales of each product?

Solution: This is a simple query. You just need to group the data on PRODUCT_NAME and then find the sum of sales.

SELECT P.PRODUCT_NAME,
       NVL(SUM(S.QUANTITY * S.PRICE), 0) TOTAL_SALES
FROM   PRODUCTS P
       LEFT OUTER JOIN SALES S ON (P.PRODUCT_ID = S.PRODUCT_ID)
GROUP BY P.PRODUCT_NAME;

PRODUCT_NAME TOTAL_SALES
LG           0
IPhone       405000
Samsung      406000
Nokia        245000

SQL QUERIES INTERVIEW QUESTIONS ORACLE PART 2

This is continuation to my previous post, SQL Queries Interview Questions - Oracle Part 1, where i have used the PRODUCTS and SALES tables as an example. Here also i am using the same tables. So, just take a look at the tables by going through that link and it will be easy for you to understand the questions mentioned here. Solve the below examples by writing SQL queries.
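The total-sales query above (LEFT OUTER JOIN plus NVL) can be mirrored with a short plain-Python sketch on the same sample data; initializing every product to 0 plays the role of NVL(SUM(...), 0) for products with no sales rows.

```python
products = {100: "Nokia", 200: "IPhone", 300: "Samsung", 400: "LG"}
sales = [  # (product_id, year, quantity, price) rows from the SALES table
    (100, 2010, 25, 5000), (100, 2011, 16, 5000), (100, 2012, 8, 5000),
    (200, 2010, 10, 9000), (200, 2011, 15, 9000), (200, 2012, 20, 9000),
    (300, 2010, 20, 7000), (300, 2011, 18, 7000), (300, 2012, 20, 7000),
]

def total_sales(products, sales):
    # Start every product at 0 -- the LEFT JOIN + NVL(..., 0) behaviour,
    # so LG appears in the result even though it has no sales rows.
    totals = {name: 0 for name in products.values()}
    for pid, _, qty, price in sales:
        totals[products[pid]] += qty * price
    return totals

print(total_sales(products, sales))
# {'Nokia': 245000, 'IPhone': 405000, 'Samsung': 406000, 'LG': 0}
```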
1. Write a query to compare the products sales of "IPhone" and "Samsung" in each year? The output should look like as

YEAR IPHONE_QUANT SAM_QUANT IPHONE_PRICE SAM_PRICE
2010 10           20        9000         7000
2011 15           18        9000         7000
2012 20           20        9000         7000

Solution: By using a self-join SQL query we can get the required result. The required SQL query is

SELECT S_I.YEAR,
       S_I.QUANTITY IPHONE_QUANT,
       S_S.QUANTITY SAM_QUANT,
       S_I.PRICE    IPHONE_PRICE,
       S_S.PRICE    SAM_PRICE
FROM   PRODUCTS P_I,
       SALES S_I,
       PRODUCTS P_S,
       SALES S_S
WHERE  P_I.PRODUCT_ID = S_I.PRODUCT_ID
AND    P_S.PRODUCT_ID = S_S.PRODUCT_ID
AND    P_I.PRODUCT_NAME = 'IPhone'
AND    P_S.PRODUCT_NAME = 'Samsung'
AND    S_I.YEAR = S_S.YEAR;

2. Write a query to find the ratios of the sales of a product?

Solution: The ratio of a product is calculated as the total sales price in a particular year divided by the total sales price across all years. Oracle provides the RATIO_TO_REPORT analytical function for finding the ratios. The SQL query is

SELECT P.PRODUCT_NAME,
       S.YEAR,
       RATIO_TO_REPORT(S.QUANTITY * S.PRICE) OVER (PARTITION BY P.PRODUCT_NAME) SALES_RATIO
FROM   PRODUCTS P,
       SALES S
WHERE  P.PRODUCT_ID = S.PRODUCT_ID;

PRODUCT_NAME YEAR RATIO
IPhone       2011 0.333333333
IPhone       2012 0.444444444
IPhone       2010 0.222222222
Nokia        2012 0.163265306
Nokia        2011 0.326530612
Nokia        2010 0.510204082
Samsung      2010 0.344827586
Samsung      2012 0.344827586
Samsung      2011 0.310344828

3. In the SALES table, the quantity of each product is stored in rows for every year. Now write a query to transpose the quantity for each product and display it in columns? The output should look like as

PRODUCT_NAME QUAN_2010 QUAN_2011 QUAN_2012
IPhone       10        15        20
Samsung      20        18        20
Nokia        25        16        8

Solution: Oracle 11g provides a pivot function to transpose the row data into column data. The SQL query for this is

SELECT *
FROM
(
SELECT P.PRODUCT_NAME,
       S.QUANTITY,
       S.YEAR
FROM   PRODUCTS P,
       SALES S
WHERE  P.PRODUCT_ID = S.PRODUCT_ID
) A
PIVOT ( MAX(QUANTITY) AS QUAN FOR (YEAR) IN (2010, 2011, 2012) );

If you are not running an oracle 11g database, then use the below query for transposing the row data into column data.

SELECT P.PRODUCT_NAME,
       MAX(DECODE(S.YEAR, 2010, S.QUANTITY)) QUAN_2010,
       MAX(DECODE(S.YEAR, 2011, S.QUANTITY)) QUAN_2011,
       MAX(DECODE(S.YEAR, 2012, S.QUANTITY)) QUAN_2012
FROM   PRODUCTS P,
       SALES S
WHERE  P.PRODUCT_ID = S.PRODUCT_ID
GROUP BY P.PRODUCT_NAME;
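RATIO_TO_REPORT(x) OVER (PARTITION BY p) is simply x divided by the sum of x within the partition. A plain-Python sketch (not Oracle) of the ratio question, on a subset of the sample data:

```python
sales = [  # (product, year, quantity, price) rows from the SALES table
    ("Nokia", 2010, 25, 5000), ("Nokia", 2011, 16, 5000), ("Nokia", 2012, 8, 5000),
    ("IPhone", 2010, 10, 9000), ("IPhone", 2011, 15, 9000), ("IPhone", 2012, 20, 9000),
]

def sales_ratios(sales):
    """Per-(product, year) share of that product's total sales value."""
    totals = {}
    for prod, _, qty, price in sales:
        totals[prod] = totals.get(prod, 0) + qty * price
    # Rounded to 9 decimals to match the output format shown above.
    return {(prod, year): round(qty * price / totals[prod], 9)
            for prod, year, qty, price in sales}

ratios = sales_ratios(sales)
print(ratios[("IPhone", 2012)])  # 0.444444444
print(ratios[("Nokia", 2010)])   # 0.510204082
```

Within each product, the ratios necessarily sum to 1, which is a quick sanity check on the analytic function's output.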
4. Write a query to find the number of products sold in each year?

Solution: To get this result we have to group the data on year and then find the count. The SQL query for this question is

SELECT YEAR,
       COUNT(1) NUM_PRODUCTS
FROM   SALES
GROUP BY YEAR;

YEAR NUM_PRODUCTS
2010 3
2011 3
2012 3

SQL QUERIES INTERVIEW QUESTIONS ORACLE PART 3

Here I am providing Oracle SQL Query Interview Questions. If you find any bugs in the queries, please do comment so that i will rectify them.

1. Write a query to generate sequence numbers from 1 to the specified number N?

Solution:

SELECT LEVEL FROM DUAL CONNECT BY LEVEL <= &N;

2. Write a query to display only friday dates from Jan, 2000 to till now?

Solution:

SELECT C_DATE,
       TO_CHAR(C_DATE, 'DY')
FROM
(
SELECT TO_DATE('01-JAN-2000', 'DD-MON-YYYY') + LEVEL - 1 C_DATE
FROM   DUAL
CONNECT BY LEVEL <= (SYSDATE - TO_DATE('01-JAN-2000', 'DD-MON-YYYY') + 1)
)
WHERE TO_CHAR(C_DATE, 'DY') = 'FRI';

3. Write a query to duplicate each row based on the value in the repeat column? The input table data looks like as below

Products, Repeat
A         3
B         5
C         2

Now in the output data, the product A should be repeated 3 times, B should be repeated 5 times and C should be repeated 2 times. The output will look like as below

Products, Repeat
A         3
A         3
A         3
B         5
B         5
B         5
B         5
B         5
C         2
C         2

Solution:

SELECT T.PRODUCTS,
       T.REPEAT
FROM   T,
       (SELECT LEVEL L FROM DUAL CONNECT BY LEVEL <= (SELECT MAX(REPEAT) FROM T)) A
WHERE  T.REPEAT >= A.L
ORDER BY T.PRODUCTS;

4. Write a query to display each letter of the word "SMILE" in a separate row?

S
M
I
L
E

Solution:

SELECT SUBSTR('SMILE', LEVEL, 1) A
FROM   DUAL
CONNECT BY LEVEL <= LENGTH('SMILE');
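The row-duplication trick (cross join against a CONNECT BY LEVEL generator, keeping rows where REPEAT >= L) can be sanity-checked with a plain-Python sketch (not Oracle):

```python
rows = [("A", 3), ("B", 5), ("C", 2)]  # (product, repeat) input rows

def duplicate_rows(rows):
    """Repeat each product 'repeat' times, mimicking the cross-join filter."""
    max_repeat = max(r for _, r in rows)
    out = []
    for product, repeat in rows:
        # The LEVEL <= MAX(REPEAT) row generator ...
        for level in range(1, max_repeat + 1):
            # ... filtered by WHERE T.REPEAT >= A.L
            if repeat >= level:
                out.append(product)
    return out

print(duplicate_rows(rows))
# ['A', 'A', 'A', 'B', 'B', 'B', 'B', 'B', 'C', 'C']
```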
5. Convert the string "SMILE" to Ascii values? The output should look like as 83,77,73,76,69, where 83 is the ascii value of S and so on.

The ASCII function will give the ascii value for only one character. If you pass a string to the ascii function, it will give the ascii value of the first letter in the string. Here i am providing two solutions to get the ascii values of a string.

Solution1:

SELECT SUBSTR(DUMP('SMILE'), 15) FROM DUAL;

Solution2:

SELECT WM_CONCAT(A)
FROM
(
SELECT ASCII(SUBSTR('SMILE', LEVEL, 1)) A
FROM   DUAL
CONNECT BY LEVEL <= LENGTH('SMILE')
);

SQL QUERIES INTERVIEW QUESTIONS ORACLE PART 4

1. Consider the following friends table as the source

Name,  Friend_Name
sam,   ram
sam,   vamsi
vamsi, ram
vamsi, jhon
ram,   vijay
ram,   anand

Here ram and vamsi are friends of sam; ram and jhon are friends of vamsi and so on. Now write a query to find the friends of friends of sam. For sam, the friends of friends are ram, jhon, vijay and anand. The output should look as

Name, Friend_of_Friend
sam,  ram
sam,  jhon
sam,  vijay
sam,  anand

Solution:

SELECT f1.name,
       f2.friend_name AS friend_of_friend
FROM   friends f1,
       friends f2
WHERE  f1.name = 'sam'
AND    f1.friend_name = f2.name;

2. This is an extension to the problem 1. In the above output, you can see ram displayed as a friend of friend. This is because ram is a mutual friend of sam and vamsi. Now extend the above query to exclude mutual friends. The output should look as

Name, Friend_of_Friend
sam,  jhon
sam,  vijay
sam,  anand

Solution:

SELECT f1.name,
       f2.friend_name AS friend_of_friend
FROM   friends f1,
       friends f2
WHERE  f1.name = 'sam'
AND    f1.friend_name = f2.name
AND    NOT EXISTS (SELECT 1
                   FROM   friends f3
                   WHERE  f3.name = f1.name
                   AND    f3.friend_name = f2.friend_name);
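The friends-of-friends self-join can be sketched in plain Python (not Oracle). The friends list below is illustrative sample data in the spirit of the table above, not the exact blog data; excluding mutual friends amounts to dropping any friend-of-friend who is already a direct friend.

```python
# Hypothetical (name, friend_name) rows, mirroring the friends table above.
friends = [("sam", "ram"), ("sam", "vamsi"), ("vamsi", "ram"),
           ("vamsi", "jhon"), ("ram", "vijay"), ("ram", "anand")]

def friends_of_friends(name, friends, exclude_mutual=False):
    """Friends of a person's friends; optionally drop mutual (direct) friends."""
    direct = {b for a, b in friends if a == name}          # the f1 rows
    fof = {b for a, b in friends if a in direct}           # the f2 self-join
    if exclude_mutual:
        fof -= direct | {name}                             # the NOT EXISTS filter
    return sorted(fof)

print(friends_of_friends("sam", friends))        # ['anand', 'jhon', 'ram', 'vijay']
print(friends_of_friends("sam", friends, True))  # ['anand', 'jhon', 'vijay']
```

Here ram drops out of the second result because he is both a direct friend of sam and a friend of vamsi, exactly the case the NOT EXISTS clause removes.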
3. Write a query to get the top 5 products based on the quantity sold without using the row_number analytical function? The source data in table t looks as

Products, quantity_sold, year
A         620            2009
B         455            2009
C         155            2009
D         200            2009
E         910            2010
F         810            2010
G         580            2010
H         260            2010
I         390            2010
J         109            2010

Solution: Order the rows by quantity_sold inside the inline view, then restrict with ROWNUM outside it (applying ROWNUM before the ORDER BY would pick 5 arbitrary rows).

SELECT products,
       quantity_sold,
       year
FROM
(
SELECT products,
       quantity_sold,
       year
FROM   t
ORDER BY quantity_sold DESC
) A
WHERE ROWNUM <= 5;

4. This is an extension to the problem 3. Write a query to produce the same output using the row_number analytical function?

Solution:

SELECT products,
       quantity_sold,
       year
FROM
(
SELECT products,
       quantity_sold,
       year,
       row_number() OVER (ORDER BY quantity_sold DESC) r
FROM   t
) A
WHERE r <= 5;

5. This is an extension to the problem 3. Write a query to get the top 5 products in each year based on the quantity sold?

Solution:

SELECT products,
       quantity_sold,
       year
FROM
(
SELECT products,
       quantity_sold,
       year,
       row_number() OVER (PARTITION BY year ORDER BY quantity_sold DESC) r
FROM   t
) A
WHERE r <= 5;

SQL QUERY INTERVIEW QUESTIONS - PART 5

Write SQL queries for the below interview questions:

1. Load the below products table into the target table.

CREATE TABLE PRODUCTS
(
PRODUCT_ID   INTEGER,
PRODUCT_NAME VARCHAR2(30)
);

INSERT INTO PRODUCTS VALUES (100, 'Nokia');
INSERT INTO PRODUCTS VALUES (200, 'IPhone');
INSERT INTO PRODUCTS VALUES (300, 'Samsung');
INSERT INTO PRODUCTS VALUES (400, 'LG');
INSERT INTO PRODUCTS VALUES (500, 'BlackBerry');
INSERT INTO PRODUCTS VALUES (600, 'Motorola');
COMMIT;

SELECT * FROM PRODUCTS;

PRODUCT_ID PRODUCT_NAME
100        Nokia
200        IPhone
300        Samsung
400        LG
500        BlackBerry
600        Motorola

The requirements for loading the target table are:
Select only 2 products randomly.
Do not select the products which are already loaded in the target table within the last 30 days.
The target table should always contain the products loaded in the last 30 days. It should not contain the products which were loaded prior to 30 days.
Solution: First we will create a target table. The target table will have an additional column INSERT_DATE to know when a product was loaded into the target table. The target table structure is

CREATE TABLE TGT_PRODUCTS
(
PRODUCT_ID   INTEGER,
PRODUCT_NAME VARCHAR2(30),
INSERT_DATE  DATE
);

The next step is to pick 2 products randomly and load them into the target table. While selecting, check whether the products are already there in the target table.

INSERT INTO TGT_PRODUCTS
SELECT PRODUCT_ID,
       PRODUCT_NAME,
       SYSDATE INSERT_DATE
FROM
(
SELECT PRODUCT_ID,
       PRODUCT_NAME
FROM   PRODUCTS S
WHERE  NOT EXISTS (SELECT 1
                   FROM   TGT_PRODUCTS T
                   WHERE  T.PRODUCT_ID = S.PRODUCT_ID)
ORDER BY DBMS_RANDOM.VALUE --Random number generator in oracle
) A
WHERE ROWNUM <= 2;
COMMIT;

The last step is to delete the products from the target table which were loaded 30 days back.

DELETE FROM TGT_PRODUCTS WHERE INSERT_DATE < SYSDATE - 30;
COMMIT;

2. Load the below CONTENTS table into the target table.

CREATE TABLE CONTENTS
(
CONTENT_ID   INTEGER,
CONTENT_TYPE VARCHAR2(30)
);

INSERT INTO CONTENTS VALUES (1, 'MOVIE');
INSERT INTO CONTENTS VALUES (2, 'MOVIE');
INSERT INTO CONTENTS VALUES (3, 'AUDIO');
INSERT INTO CONTENTS VALUES (4, 'AUDIO');
INSERT INTO CONTENTS VALUES (5, 'MAGAZINE');
INSERT INTO CONTENTS VALUES (6, 'MAGAZINE');
COMMIT;

SELECT * FROM CONTENTS;

CONTENT_ID CONTENT_TYPE
1          MOVIE
2          MOVIE
3          AUDIO
4          AUDIO
5          MAGAZINE
6          MAGAZINE

The requirements to load the target table are:
Load only one content type at a time into the target table.
The target table should always contain only one content type.
The loading of content types should follow round-robin style: first MOVIE, second AUDIO, third MAGAZINE and again fourth MOVIE.

Solution: First we will create a lookup table where we mention the priorities for the content types. The lookup table "Create Statement" and data are shown below.

CREATE TABLE CONTENTS_LKP
(
CONTENT_TYPE VARCHAR2(30),
PRIORITY     INTEGER,
LOAD_FLAG    INTEGER
);

INSERT INTO CONTENTS_LKP VALUES ('MOVIE', 1, 1);
INSERT INTO CONTENTS_LKP VALUES ('AUDIO', 2, 0);
INSERT INTO CONTENTS_LKP VALUES ('MAGAZINE', 3, 0);
COMMIT;

SELECT * FROM CONTENTS_LKP;

CONTENT_TYPE PRIORITY LOAD_FLAG
MOVIE        1        1
AUDIO        2        0
MAGAZINE     3        0

Here if LOAD_FLAG is 1, then it indicates which content type needs to be loaded into the target table. Only one content type will have LOAD_FLAG as 1; the other content types will have LOAD_FLAG as 0. The target table structure is the same as the source table structure.

The second step is to truncate the target table before loading the data.

TRUNCATE TABLE TGT_CONTENTS;

The third step is to choose the appropriate content type from the lookup table and load the source data into the target table.

INSERT INTO TGT_CONTENTS
SELECT CONTENT_ID,
       CONTENT_TYPE
FROM   CONTENTS
WHERE  CONTENT_TYPE = (SELECT CONTENT_TYPE FROM CONTENTS_LKP WHERE LOAD_FLAG = 1);
COMMIT;

The last step is to update the LOAD_FLAG of the lookup table so that the next run picks the next content type. The DECODE wraps the highest priority back to 1, which produces the round-robin behaviour.

UPDATE CONTENTS_LKP SET LOAD_FLAG = 0 WHERE LOAD_FLAG = 1;

UPDATE CONTENTS_LKP
SET    LOAD_FLAG = 1
WHERE  PRIORITY = (SELECT DECODE(PRIORITY, (SELECT MAX(PRIORITY) FROM CONTENTS_LKP), 1, PRIORITY + 1)
                   FROM   CONTENTS_LKP
                   WHERE  CONTENT_TYPE = (SELECT DISTINCT CONTENT_TYPE FROM TGT_CONTENTS));
COMMIT;
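The round-robin advance encoded by DECODE(PRIORITY, MAX_PRIORITY, 1, PRIORITY + 1) can be checked with a tiny plain-Python sketch (not Oracle): move to the next priority, wrapping from the highest back to 1.

```python
# Priorities from the CONTENTS_LKP lookup table above.
priorities = {"MOVIE": 1, "AUDIO": 2, "MAGAZINE": 3}

def next_content_type(current, priorities):
    """Return the content type to load on the next run (round-robin)."""
    cur = priorities[current]
    # DECODE(PRIORITY, MAX(PRIORITY), 1, PRIORITY + 1)
    nxt = 1 if cur == max(priorities.values()) else cur + 1
    return next(t for t, p in priorities.items() if p == nxt)

print(next_content_type("MOVIE", priorities))     # AUDIO
print(next_content_type("MAGAZINE", priorities))  # MOVIE
```

Starting from MOVIE, successive runs load AUDIO, then MAGAZINE, then wrap back to MOVIE, which is exactly the required loading order.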