$ prefixes are used to denote session parameters and variables, and $$ prefixes are used to denote mapping parameters and variables.
Q. What are Target Types on the Server? A. Target types are File, Relational, and ERP.
Q. How do you identify existing rows of data in the target table using a lookup transformation? A. There are two ways to look up the target table to verify whether a row exists: 1. Use a connected dynamic cache lookup and check the value of the NewLookupRow output port to decide whether the incoming record already exists in the table/cache. 2. Use an unconnected lookup, call it from an expression transformation, and check the lookup return value (Null / Not Null) to decide whether the incoming record already exists in the table.
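The second approach above can be sketched outside Informatica. This is an illustrative Python sketch, not Informatica code: a dict stands in for the target lookup cache, and a `None` return plays the role of the Null lookup result. All names (`target_rows`, `route_row`) are hypothetical.

```python
# Hypothetical target rows keyed on employee ID, standing in for the lookup cache.
target_rows = {101: "Alice", 102: "Bob"}

def lookup_target(emp_id):
    """Mimics an unconnected lookup: single return value, None on no match."""
    return target_rows.get(emp_id)

def route_row(emp_id):
    # Null lookup result -> row does not exist in target -> flag for insert;
    # otherwise flag for update (typical target-maintenance logic).
    return "INSERT" if lookup_target(emp_id) is None else "UPDATE"

flags = {emp_id: route_row(emp_id) for emp_id in (101, 103)}
```

Row 101 already exists so it is flagged for update; row 103 is unseen and flagged for insert.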
Q. What are Aggregator transformations? A. The Aggregator transformation is much like the GROUP BY clause in traditional SQL. It is a connected, active transformation that takes incoming data from the mapping pipeline, groups it based on the specified group-by ports, and calculates aggregate functions (AVG, SUM, COUNT, STDDEV, etc.) for each group. From a performance perspective, if your mapping has an Aggregator transformation, place any needed filters and sorters very early in the pipeline.
Q. What are various types of Aggregation?
A. The available aggregate functions are SUM, AVG, COUNT, MAX, MIN, FIRST, LAST, MEDIAN, PERCENTILE, STDDEV, and VARIANCE.
Q. What are Dimensions and the various types of Dimension? A. Slowly Changing Dimensions (SCDs) are classified into 3 types: 1. SCD Type 1: contains current data only (changes overwrite history). 2. SCD Type 2: contains current data plus complete historical data (a new row per change). 3. SCD Type 3: contains current data plus limited history (typically the previous value in an additional column).
Q. What are the 2 modes of data movement in the Informatica Server? A. The data movement mode determines whether the Informatica Server processes single-byte or multi-byte character data. The mode affects the enforcement of code page relationships and code page validation in the Informatica Client and Server. a) Unicode - the server allows 2 bytes for each character and uses an additional byte for each non-ASCII character (such as Japanese characters). b) ASCII - the server holds all data in a single byte. The data movement mode can be changed in the Informatica Server configuration parameters and takes effect once you restart the server.
Q. What is Code Page Compatibility? A. Compatibility between code pages is required for accurate data movement when the Informatica Server runs in Unicode data movement mode. If the code pages are identical, there is no data loss. One code page can be a subset or superset of another; for accurate data movement, the target code page must be a superset of the source code page. Superset - a code page is a superset of another when it contains all the characters encoded in the other code page plus additional characters not contained in it. Subset - a code page is a subset of another when all characters in the code page are encoded in the other code page.
Q. What is a Code Page used for? A. A code page is used to identify characters that might be in different languages. If you are importing Japanese data into a mapping, you must select the Japanese code page for the source data.
Q. What is the Router transformation? A. It differs from the Filter transformation in that you can specify multiple conditions and route the data to multiple targets depending on which condition each row satisfies.
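The Router behavior just described can be sketched as follows. This is an illustrative Python sketch, not Informatica code: each `if` plays the role of a group filter condition, and rows matching no group fall into the default group. The group names and threshold values are hypothetical.

```python
# Rows flowing out of the upstream transformation (hypothetical data).
rows = [{"amt": 50}, {"amt": 500}, {"amt": 5000}]

# One output group per condition, plus the default group for unmatched rows.
groups = {"small": [], "large": [], "DEFAULT": []}
for row in rows:
    matched = False
    if row["amt"] < 100:       # group filter condition 1
        groups["small"].append(row)
        matched = True
    if row["amt"] >= 1000:     # group filter condition 2
        groups["large"].append(row)
        matched = True
    if not matched:            # rows satisfying no condition go to DEFAULT
        groups["DEFAULT"].append(row)
```

A Filter would need one transformation per condition and would silently drop non-matching rows; the Router evaluates all conditions in one pass and keeps unmatched rows reachable via the default group.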
Q. What is the Load Manager? A. While running a workflow, the PowerCenter Server uses the Load Manager process and the Data Transformation Manager (DTM) process to run the workflow and carry out workflow tasks. When the PowerCenter Server runs a workflow, the Load Manager performs the following tasks: 1. Locks the workflow and reads workflow properties. 2. Reads the parameter file and expands workflow variables. 3. Creates the workflow log file. 4. Runs workflow tasks. 5. Distributes sessions to worker servers. 6. Starts the DTM to run sessions. 7. Runs sessions from master servers. 8. Sends post-session email if the DTM terminates abnormally.
When the PowerCenter Server runs a session, the DTM performs the following tasks: 1. Fetches session and mapping metadata from the repository. 2. Creates and expands session variables. 3. Creates the session log file. 4. Validates session code pages if data code page validation is enabled. Checks query conversions if data code page validation is disabled. 5. Verifies connection object permissions. 6. Runs pre-session shell commands. 7. Runs pre-session stored procedures and SQL. 8. Creates and runs mappings, reader, writer, and transformation threads to extract, transform, and load data. 9. Runs post-session stored procedures and SQL.
10. Runs post-session shell commands. 11. Sends post-session email.
Q. What is the Data Transformation Manager? A. After the Load Manager performs validations for the session, it creates the DTM process. The DTM process is the second process associated with the session run; its primary purpose is to create and manage the threads that carry out the session tasks.
• The DTM allocates process memory for the session and divides it into buffers; this is also known as buffer memory.
• It creates the main thread, which is called the master thread. The master thread creates and manages all other threads.
• If we partition a session, the DTM creates a set of threads for each partition to allow concurrent processing.
• When the Informatica server writes messages to the session log, it includes the thread type and thread ID.
The DTM creates the following types of threads:
Master Thread - main thread of the DTM process; creates and manages all other threads.
Mapping Thread - one thread per session; fetches session and mapping information.
Pre- and Post-Session Thread - one thread each to perform pre- and post-session operations.
Reader Thread - one thread for each partition for each source pipeline.
Writer Thread - one thread for each partition if a target exists in the source pipeline; writes to the target.
Transformation Thread - one or more transformation threads for each partition.
Q. What are Sessions and Batches? A. A session is a set of instructions that tells the Informatica Server how and when to move data from sources to targets. After creating the session, we can use either the Server Manager or the command line program pmcmd to start or stop the session. Batches provide a way to group sessions for either serial or parallel execution by the Informatica Server. There are two types of batches: 1. Sequential - runs sessions one after the other. 2. Concurrent - runs sessions at the same time.
Q. While importing the relational source definition from the database, what metadata of the source do you import? A. Source name, database location, column names, data types, and key constraints.
Q. How many ways can you update a relational source definition, and what are they? A. Two ways: 1. Edit the definition. 2. Reimport the definition.
Q. Why do we use lookup transformations? A. Lookup transformations can access data from relational tables that are not sources in the mapping. With a Lookup transformation, we can accomplish the following tasks: get a related value (e.g., get the employee name from the Employee table based on the employee ID); perform a calculation; update slowly changing dimension tables (we can use an unconnected lookup transformation to determine whether the records already exist in the target or not).
Q. What is a source qualifier? A. It represents all data queried from the source.
Q. What is a mapplet? A. A mapplet is a set of transformations whose logic can be reused. When the mapplet is displayed within a mapping, only its input and output ports are displayed, so the internal logic is hidden from the end user's point of view. A mapplet should have a mapplet Input transformation, which receives input values, and an Output transformation, which passes the final modified data back to the mapping.
Q. Where should you place a flat file to import its definition into the Designer? A. Place it in a local folder.
Q. Which transformation do you need while using COBOL sources as source definitions? A. The Normalizer transformation, which is used to normalize the data, since COBOL sources often consist of denormalized data.
Q. What is a transformation? A. It is a repository object that generates, modifies, or passes data.
Q. What are connected and unconnected transformations? A. Connected: a transformation that participates in the mapping data flow; it can receive multiple inputs and provide multiple outputs. Unconnected: a transformation that does not participate in the mapping data flow; it can receive multiple inputs but provides a single output.
Q. What are the Designer tools for creating transformations? A. Mapping Designer, Transformation Developer, and Mapplet Designer.
Q. How can you create or import a flat file definition into the Warehouse Designer? A. You can create a flat file definition in the Warehouse Designer: create a new target, select the type as flat file, and save it; you can then add columns to the created target by editing its properties. You can also import it from the Mapping Designer.
Q. What are mapping parameters and mapping variables? A. A mapping parameter represents a constant value that you can define before running a session. You declare and use the parameter in a mapping or mapplet, then define the value of the parameter in a parameter file for the session. A mapping parameter retains the same value throughout the entire session. Unlike a mapping parameter, a mapping variable represents a value that can change throughout the session. The Informatica server saves the value of a mapping variable to the repository at the end of the session run and uses that value the next time you run the session.
Q. Can you use the mapping parameters or variables created in one mapping in another mapping? A. No. We can use mapping parameters or variables in any transformation of the same mapping or mapplet in which they were created.
Q. What are reusable transformations? A. A transformation that can be reused is called a reusable transformation. They can be created using two methods: 1. Using the Transformation Developer. 2. Creating a normal one and promoting it to reusable.
Q. In how many ways can you create ports? A. Two ways: 1. Drag the port from another transformation. 2. Click the Add button on the Ports tab.
Q. How can you improve session performance in the Aggregator transformation? A. 1. Use a Sorter before the Aggregator. 2. Use sorted input: do not forget to check the option on the Aggregator that tells it the input is sorted on the same keys as the group by. The key order is also very important.
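The sorted-input idea can be sketched with Python's `itertools.groupby`, which, like a sorted-input Aggregator, only works correctly when rows arrive pre-sorted on the group key: each group can then be emitted as soon as the key changes, instead of caching every group until end of input. Illustrative sketch only; the data is hypothetical.

```python
from itertools import groupby

# Rows already sorted on the group-by key, mirroring the "sorted input" option.
rows = [("A", 10), ("A", 20), ("B", 5)]

# groupby streams one group at a time; an unsorted input would split groups,
# just as a sorted-input Aggregator produces wrong results on unsorted data.
sums = {key: sum(val for _, val in grp)
        for key, grp in groupby(rows, key=lambda r: r[0])}
```

With unsorted input, e.g. `[("A", 10), ("B", 5), ("A", 20)]`, `groupby` would emit two separate "A" groups, which is exactly why the sorted-input checkbox must match the actual sort order.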
Q. Is there an aggregate cache in the Aggregator transformation? A. Yes. The Aggregator stores data in the aggregate cache until it completes the aggregate calculations. When you run a session that uses an Aggregator transformation, the Informatica server creates index and data caches in memory to process the transformation. If the Informatica server requires more space, it stores overflow values in cache files.
Q. What are the differences between the Joiner transformation and the Source Qualifier transformation? A. You can join heterogeneous data sources in a Joiner transformation, which you cannot achieve with a Source Qualifier: two relational sources must come from the same data source to be joined in a Source Qualifier, whereas you can join relational sources coming from different sources with a Joiner. Also, you need matching keys to join two relational sources in a Source Qualifier transformation, whereas you do not need matching keys to join two sources with a Joiner.
Q. In which conditions can we not use the Joiner transformation? A. You cannot use a Joiner transformation in the following situations (according to Informatica 7.1): ♦ Either input pipeline contains an Update Strategy transformation. ♦ You connect a Sequence Generator transformation directly before the Joiner transformation.
Q. What are the settings used to configure the Joiner transformation? A. Master and detail source, type of join, and condition of the join.
Q. What are the join types in the Joiner transformation? A. Normal (default) - only matching rows from both master and detail. Master outer - all detail rows and only matching rows from master. Detail outer - all master rows and only matching rows from detail. Full outer - all rows from both master and detail (matching or non-matching).
Q. What are the Joiner caches? A. When a Joiner transformation occurs in a session, the Informatica Server reads all the records from the master source and builds index and data caches based on the master rows. After building the caches, the Joiner transformation reads records from the detail source and performs the joins.
Q. Why use the Lookup transformation? A. To perform the following tasks: Get a related value - for example, if your source table includes the employee ID but you want to include the employee name in your target table to make your summary data easier to read. Perform a calculation - many normalized tables include values used in a calculation, such as gross sales per invoice or sales tax, but not the calculated value (such as net sales). Update slowly changing dimension tables - you can use a Lookup transformation to determine whether records already exist in the target.
Q. What is meant by lookup caches? A. The Informatica server builds a cache in memory when it processes the first row of data in a cached Lookup transformation. It allocates memory for the cache based on the amount you configure in the transformation or session properties. The Informatica server stores condition values in the index cache and output values in the data cache.
Q. What are the types of lookup caches? A. Persistent cache: you can save the lookup cache files and reuse them the next time the Informatica server processes a Lookup transformation configured to use the cache. Recache from database: if the persistent cache is not synchronized with the lookup table, you can configure the Lookup transformation to rebuild the lookup cache. Static cache: a static or read-only cache, which the Informatica server creates by default; it caches the lookup table and lookup values for each row that comes into the transformation, and the Informatica server does not update the cache while it processes the Lookup transformation. Dynamic cache: if you want to cache the target table and insert new rows into the cache and the target, you can create a Lookup transformation that uses a dynamic cache; the Informatica server dynamically inserts data into the target table. Shared cache: you can share the lookup cache between multiple transformations; you can share an unnamed cache between transformations in the same mapping.
Q: What do you know about Informatica and ETL? A: Informatica is a very useful GUI-based ETL tool.
Q: Power Center / Power Mart - which products have you worked with? A: PowerCenter has global and local repositories, whereas PowerMart has only a local repository.
Q: Explain what tools you have used in Power Center and/or Power Mart. A: Designer, Server Manager, and Repository Manager.
Q: FULL and DELTA files, historical and ongoing load? A: A FULL file contains complete data as of today, including history data; a DELTA file contains only the changes since the last extract.
Q: What is a Mapping? A: A mapping represents the data flow between source and target.
Q: What components must a Mapping contain? A: Source definition, transformation, target definition, and connectors.
Q: What is a Transformation? A: A transformation is a repository object that generates, modifies, or passes data. A transformation performs a specific function. There are two types of transformations: 1. Active
transformations affect the rows passing through or can change the number of rows that pass through. E.g.: Aggregator, Source Qualifier, ERP Source Qualifier, XML Source Qualifier, Normalizer, Filter, Router, Rank, Joiner, Update Strategy, Advanced External Procedure. 2. Passive transformations do not change the number of rows that pass through. E.g.: Expression, Lookup, Sequence Generator, Stored Procedure, External Procedure, Input, Output.
Q: Which transformations can be overridden at the Server? A: Source Qualifier and Lookup transformations.
Q: What are the options/types to run a Stored Procedure? A: Normal: during a session, the stored procedure runs where the transformation exists in the mapping, on a row-by-row basis; this is useful for calling the stored procedure for each row of data that passes through the mapping, such as running a calculation against an input port. Connected stored procedures run only in normal mode. Pre-load of the Source: the stored procedure runs before the session retrieves data from the source; this is useful for verifying the existence of tables or performing joins of data in a temporary table. Post-load of the Source: the stored procedure runs after the session retrieves data from the source; this is useful for removing temporary tables. Pre-load of the Target: the stored procedure runs before the session sends data to the target; this is useful for verifying target tables or disk space on the target system. Post-load of the Target: the stored procedure runs after the session sends data to the target; this is useful for re-creating indexes on the database.
Q: What kinds of sources and targets can be used in Informatica? A: Sources may be flat files, relational databases, or XML; targets may be relational tables, XML, or flat files.
Q: Transformations - what are the different transformations you have worked with? A: Source Qualifier (XML, ERP, MQ), Joiner, Expression, Lookup, Filter, Router, Sequence Generator, Aggregator, Update Strategy, Stored Proc, External Proc, Advanced External Proc, Rank, Normalizer.
Q: What are active/passive transformations? A: Passive transformations do not change the number of rows passing through them, whereas active transformations change the number of rows passing through. Active: Filter, Source Qualifier. Passive: Expression, Lookup, Stored Proc, Seq. Generator.
Q: What are connected/unconnected transformations? A: Connected transformations are part of the mapping pipeline; their input and output ports are connected to other transformations. Unconnected transformations are not part of the mapping pipeline; they are not linked in the map with any input or output ports. In an unconnected Lookup you can pass multiple values to the transformation, but only one column of data will be returned from it. E.g. unconnected: Lookup, Stored Proc.
Q: In target load ordering, what do you order - Targets or Source Qualifiers?
A: Source Qualifiers. If there are multiple targets in the mapping, which are populated from multiple sources, then we can use target load ordering.
Q: If you have 2 files to join, which file will you use as the master file? A: Use the file with the lesser number of records as the master file.
Q: Have you used constraint-based load ordering? Where do you set this? A: Constraint-based loading can be used when you have multiple targets in the mapping and the target tables have a PK-FK relationship in the database. You have to set the source to "Treat Rows as: INSERT" and check the box "Constraint based load ordering" in the Advanced tab. It can be set in the session properties.
Q: If you have a FULL file that you have to match and load into a corresponding table, how will you go about it? Will you use a Joiner transformation? A: Use a Joiner and join the file and the Source Qualifier.
Q: Have you used the Abort and Decode functions? A: Abort can be used to abort/stop the session on an error condition. If the primary key column contains NULL and you need to stop the session from continuing, then you may use the ABORT function in the default value for the port. It can be used with the IIF and DECODE functions to abort the session.
Q: Have you used SQL Override? A: It is used to override the default SQL generated in the Source Qualifier / Lookup transformation.
Q: If you make a local transformation reusable by mistake, can you undo the reusable action? A: No.
Q: What is the difference between the Filter and Router transformations? A: A Filter can filter records based on one condition only, whereas a Router can be used to filter records on multiple conditions.
Q: If a sequence generator (with increment of 1) is connected to (say) 3 targets and each target uses the NEXTVAL port, what value will each target get? A: Each target will get values in multiples of 3.
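The NEXTVAL-to-three-targets behavior can be sketched as a counter shared by three consumers: each connection draws its own next value per row, so within any one target the values step by 3. Illustrative Python sketch only, not Informatica internals; the three-row source is hypothetical.

```python
from itertools import count

nextval = count(1)          # NEXTVAL port with start 1, increment 1
targets = [[], [], []]      # three targets wired to the same NEXTVAL port

for _row in range(3):       # three source rows flow through the mapping
    for t in targets:       # each target connection draws its own value
        t.append(next(nextval))

# Each target ends up with values in multiples of 3:
# targets -> [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
```

This is why a single target never sees consecutive sequence values when the port fans out; connecting the targets through an Expression that holds one NEXTVAL per row is the usual fix if consecutive values are needed.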
Q: Lookup transformations - cached/uncached? A: When the Lookup transformation is cached, the Informatica Server caches the data and index. If the lookup is uncached, then Informatica reads the data from the database for every record coming from the Source Qualifier. By default the Informatica server creates a static cache. You may use only equality (=) in the lookup condition. If multiple matches are found in the lookup, Informatica fails the session.
Q: What is persistent cache? A: When the lookup is configured as a persistent cache, the Informatica server does not delete the cache files after completion of the session. In the next run, the Informatica server uses the cache file from the previous session.
Q: What is the dynamic lookup strategy? A: The Informatica server compares the data in the lookup table and the cache; if no matching record is found in the cache file, then it modifies the cache file by inserting the record. The initial cache is built at the beginning of the session, before reading the first record from the source.
Q: Connected/unconnected - if there is no match for the lookup, what is returned? A: An unconnected lookup returns NULL if no matching record is found in the Lookup transformation.
Q: If you used a database when importing sources/targets that was dropped later on, will your mappings still be valid? A: No.
Q: In an expression transformation, how can you store a value from the previous row? A: By creating a variable in the transformation.
Q: Mapplets: what are the 2 transformations used only in mapplets? A: Mapplet Input and Mapplet Output.
Q: Have you used Shortcuts? A: Shortcuts may be used to refer to another mapping. Informatica refers to the original mapping, so if any changes are made to the mapping/mapplet, they are immediately reflected in the mapping where the shortcut is used.
Q: How does Informatica do variable initialization for Number/String/Date? A: Number - 0, String - blank, Date - 1/1/1753.
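The previous-row-variable technique can be sketched as follows. This is an illustrative Python sketch, not Informatica code: the local `v_prev` plays the role of a variable port, and the read-before-write ordering mirrors how output ports can read a variable port before it is reassigned for the current row. The function name and data are hypothetical; the initial value 0 matches the Number default noted above.

```python
def add_prev_salary(salaries):
    """Pair each row's salary with the previous row's salary."""
    v_prev = 0          # variable port; Number ports initialize to 0
    out = []
    for salary in salaries:
        out.append((salary, v_prev))  # output port reads the PREVIOUS value
        v_prev = salary               # variable port then stores this row's value
    return out

pairs = add_prev_salary([100, 200, 300])
```

The first row sees the initialization default (0), each later row sees its predecessor's value; reversing the two statements inside the loop would instead expose the current row's value, which is the classic port-ordering mistake.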
Q: When would you truncate the target before running the session? A: When we want to load the entire data set, including history, in one shot.
Q: Have you used the Informatica debugger? A: The debugger is used to test the mapping during development. You can set breakpoints in the mappings and analyze the data.
Q: When would you use multiple update strategies in a mapping? A: When you would like to insert and update the records in a Type 2 dimension table. A single update strategy that does only DD_INSERT cannot also perform DD_UPDATE or DD_DELETE, so separate update strategies are used to flag the inserts and the updates.
Q: Have you used an External Loader? What is the difference between normal and bulk loading? A: The external loader performs a direct data load to the table/data files, bypassing the SQL layer, and does not log the data. During a normal data load, data passes through the SQL layer and is logged to the archive log file, and as a result it is slow.
Q: Do you enable/disable decimal arithmetic in session properties? A: Disabling decimal arithmetic will improve session performance, but it converts numeric values to double, thus leading to reduced accuracy.
Q: What do you know about the Informatica server architecture - Load Manager, DTM, Reader, Writer, Transformer? A: The Load Manager is the first process started when the session runs. It checks the validity of mappings and locks sessions and other objects. The DTM process is started once the Load Manager has completed its job; it starts a thread for each pipeline. The Reader scans data from the specified sources. The Transformer performs the tasks specified in the mapping. The Writer manages the target/output data.
Q: Have you used partitioning in sessions? (not available with PowerMart) A: It is available in PowerCenter. It can be configured in the session properties.
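The Type 2 insert-and-update pattern mentioned above can be sketched in plain Python. This is an illustrative sketch of the logic only, not Informatica code: one branch plays the DD_UPDATE path (expire the current version) and the other the DD_INSERT path (add the new version). The table layout (`id`, `city`, `current`) is hypothetical.

```python
# Hypothetical Type 2 dimension: one "current" row per business key.
dim = [{"id": 1, "city": "NY", "current": True}]

def apply_scd2(dim, incoming):
    """Expire changed rows (update path) and insert new versions (insert path)."""
    for row in incoming:
        match = next((d for d in dim if d["id"] == row["id"] and d["current"]), None)
        if match is None:
            dim.append({**row, "current": True})   # insert path: brand-new key
        elif match["city"] != row["city"]:
            match["current"] = False               # update path: expire old version
            dim.append({**row, "current": True})   # insert path: new version
    return dim

apply_scd2(dim, [{"id": 1, "city": "LA"}, {"id": 2, "city": "SF"}])
```

After the run the dimension holds three rows: the expired NY version of key 1, the new LA version, and the new key 2 — which is why the mapping needs both an update-flagging and an insert-flagging strategy.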
Q: How do you use a stored procedure transformation in the mapping? A: Inside a mapping we can use a stored procedure transformation, pass input parameters, and get back the output parameters. When handled through the session, it can be invoked either in pre-session or post-session scripts.
Q: What did you do in the stored procedure? Why did you use a stored proc instead of using an expression? A:
Q: When would you use SQ, Joiner, and Lookup? A: If we are using multiple source tables and they are related at the database, then we can use a single SQ. A Joiner is used to join heterogeneous sources, e.g. flat files and relational tables. If we need to look up values in a table or update slowly changing dimension tables, then we can use a Lookup transformation.
Q: How did you handle reject data? What file does Informatica create for bad data? A: Informatica saves the rejected data in a .bad file. Informatica adds a row identifier for each rejected record indicating whether the row was rejected because of the Writer or the Target. Additionally, for every column there is an indicator specifying whether the data was rejected due to overflow, null, truncation, etc.
Q: How did you handle runtime errors? If the session stops abnormally, how were you managing the reload process?
Q: How do you create a batch load? What are the different types of batches? A: A batch is created in the Server Manager. First create sessions and then create a batch, then drag the sessions into the batch from the session list window. A batch contains multiple sessions. Batches may be sequential or concurrent: a sequential batch runs the sessions sequentially, while concurrent sessions run in parallel, thus optimizing the server resources.
Q: Have you used the pmcmd command? What can you do using this command? A: pmcmd is a command line program. Using this command you can start sessions and stop sessions.
You can also recover sessions with pmcmd.
Q: What are the two default repository user groups? A: Administrators and Public.
Q: What are the privileges of the Default Repository and Extended Repository user? A: Default repository privileges: Use Designer, Browse Repository, Create Sessions and Batches. Extended repository privileges: Session Operator, Administer Repository, Administer Server, Super User.
Q: How many different locks are available for repository objects? A: There are five kinds of locks available on repository objects: Read lock - created when you open a repository object in a folder for which you do not have write permission; also created when you open an object with an existing write lock. Write lock - created when you create or edit a repository object in a folder for which you have write permission. Execute lock - created when you start a session or batch, or when the Informatica Server starts a scheduled session or batch. Fetch lock - created when the repository reads information about repository objects from the database. Save lock - created when you save information to the repository.
Q: What is the Session Process? A: The Load Manager process starts the session, creates the DTM process, and sends post-session email when the session completes.
Q: What is the DTM process? A: The DTM process creates threads to initialize the session, read, write, and transform data, and handle pre- and post-session operations.
Q: When the Informatica Server runs a session, what tasks are handled? A: Load Manager (LM):
o LM locks the session and reads session properties.
o LM reads the parameter file.
o LM expands the server and session variables and parameters.
o LM verifies permissions and privileges.
o LM validates source and target code pages.
o LM creates the session log file.
o LM creates the DTM (Data Transformation Manager) process.
o Load Manager sends post-session email.
Data Transformation Manager (DTM):
o DTM process allocates DTM process memory.
o DTM initializes the session and fetches the mapping.
o DTM executes pre-session commands and procedures.
o DTM creates reader, transformation, and writer threads for each source pipeline. If the pipeline is partitioned, it creates a set of threads for each partition.
o DTM executes post-session commands and procedures.
o DTM writes historical incremental aggregation and lookup data to disk, and it writes persisted sequence values and mapping variables to the repository.
Q: What is a Code Page? A: A code page contains the encoding to specify characters in a set of one or more languages.
Q: What are the DTM (Data Transformation Manager) parameters? A: DTM memory parameters - default buffer block size, data and index cache sizes. Reader parameters - line sequential buffer length for flat files. General parameters - enabling lookup cache, event-based scheduling (indicator file to wait for). Others - commit interval (source and target).
Q: How do you handle performance on the server side? A: The Informatica tool has no role to play here; the server administrator will take up the issue.
1. Explain about your projects: architecture, dimension and fact tables, sources and targets,
transformations used, frequency of populating data, and database size.
2. What is dimensional modeling? Unlike the ER model, the dimensional model is very asymmetric, with one large central table called the fact table connected to multiple dimension tables. It is also called a star schema.
3. What are mapplets? Mapplets are reusable objects that represent a collection of transformations. Transformations not to be included in mapplets are: COBOL source definitions, Joiner transformations, Normalizer transformations, non-reusable Sequence Generator transformations, pre- or post-session procedures, target definitions, XML source definitions, IBM MQ source definitions, and PowerMart 3.5-style LOOKUP functions.
4. What are the active and passive transformations? An active transformation changes the number of rows that pass through the mapping: 1. Source Qualifier 2. Filter transformation 3. Router transformation 4. Rank 5. Update Strategy 6. Advanced External Procedure 7. Normalizer 8. Aggregator 9. Joiner. Passive transformations do not change the number of rows that pass through the mapping: 1. Expression 2. Lookup 3. Stored Procedure 4. External Procedure 5. Sequence Generator 6. XML Source Qualifier.
5. What are the transformations that use a cache for performance? Aggregator, Lookup, Joiner, and Rank.
6. What is a lookup transformation?
It is used to look up data in a relational table, view, or synonym. The Informatica server queries the lookup table based on the lookup ports in the transformation. It compares lookup transformation port values to lookup table column values based on the lookup condition. The result is passed to other transformations and the target.
Used to:
Get a related value
Perform a calculation
Update slowly changing dimension tables

Difference between connected and unconnected lookups. Which is better?
Connected:
Receives input values directly from the pipeline.
Can use a dynamic or static cache.
The cache includes all lookup columns used in the mapping.
Can return multiple columns from the same row.
If there is no match, it can return default values.
Default values can be specified.
Unconnected:
Receives input values from the result of a :LKP expression in another transformation.
Only a static cache can be used.
The cache includes all lookup/output ports in the lookup condition and the lookup or return port.
Can return only one column from each row.
If there is no match, it returns NULL.
Default values cannot be specified.

Explain the various caches:
Static: Caches the lookup table before executing the transformation. Rows are not added dynamically.
Dynamic: Caches the rows as and when they are passed.
Unshared: Within the mapping, if the lookup table is used in more than one transformation, the cache built for the first lookup can be used for the others. It cannot be used across mappings.
Shared: If the lookup table is used in more than one transformation/mapping, the cache built for the first lookup can be used for the others. It can be used across mappings.
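A rough Python sketch of how a cached lookup behaves — the dict stands in for the lookup cache, and the customer table, port names and helper functions are illustrative assumptions, not Informatica's API:

```python
lookup_table = [
    {"cust_id": 1, "name": "Acme", "region": "EU"},
    {"cust_id": 2, "name": "Beta", "region": "US"},
]

# Static cache: built once before rows start flowing through the pipeline.
cache = {row["cust_id"]: row for row in lookup_table}

def connected_lookup(cust_id, default=None):
    """Connected-style: can return multiple columns; default values on no match."""
    row = cache.get(cust_id)
    return (row["name"], row["region"]) if row else (default, default)

def unconnected_lookup(cust_id):
    """Unconnected-style: one return port only; NULL (None) on no match."""
    row = cache.get(cust_id)
    return row["name"] if row else None

print(connected_lookup(2))    # ('Beta', 'US')
print(unconnected_lookup(9))  # None
```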
Persistent: If the cache generated for a lookup needs to be preserved for subsequent use, a persistent cache is used. The server will not delete the index and data files. It is useful only if the lookup table remains constant.

What are the uses of the index and data caches in a lookup?
The lookup conditions are stored in the index cache and records from the lookup table are stored in the data cache.

7. Explain the aggregate transformation?
The aggregate transformation allows you to perform aggregate calculations, such as averages, sum, max, min etc. The aggregate transformation is unlike the Expression transformation, in that you can use the aggregator transformation to perform calculations on groups. The Expression transformation permits you to perform calculations on a row-by-row basis only.
Performance issues?
The Informatica server performs calculations as it reads, and stores the necessary group and row data in an aggregate cache. Create sorted input ports and pass the input records to the aggregator in sorted form, by group and then by port.
Incremental aggregation?
In the session property tab there is an option for performing incremental aggregation. When the Informatica server performs incremental aggregation, it passes new source data through the mapping and uses historical cache (index and data cache) data to perform the new aggregation calculations incrementally.
What are the uses of the index and data caches in an aggregator?
The group data is stored in index files and the row data is stored in data files.

8. Explain update strategy?
Update strategy defines how the source rows are to be flagged for insert, update, delete, and reject at the targets.
What are the update strategy constants?
DD_INSERT (0), DD_UPDATE (1), DD_DELETE (2), DD_REJECT (3).
If DD_UPDATE is defined in the update strategy and Treat Source Rows As is set to INSERT in the session, what happens?
Hint: If anything other than DATA DRIVEN is mentioned in the session, the update strategy in the mapping is ignored.
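The update-strategy constants above can be sketched as follows; the flagging rule inside `flag_row` is a hypothetical example of data-driven flagging, not a fixed Informatica behavior:

```python
# The documented constants and their numeric values.
DD_INSERT, DD_UPDATE, DD_DELETE, DD_REJECT = 0, 1, 2, 3

def flag_row(row, existing_keys):
    # Hypothetical rule: reject keyless rows, update rows whose key
    # already exists in the target, insert the rest.
    if row.get("id") is None:
        return DD_REJECT
    return DD_UPDATE if row["id"] in existing_keys else DD_INSERT

print(flag_row({"id": 7}, {7, 8}))  # 1 (DD_UPDATE)
print(flag_row({"id": 1}, {7, 8}))  # 0 (DD_INSERT)
```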
9. Explain the Joiner transformation and where it is used?
While a Source Qualifier transformation can join data originating from a common source database, the Joiner transformation joins two related heterogeneous sources residing in different locations or file systems, for example:
Two relational tables existing in separate databases
Two flat files in different file systems
Two different ODBC sources
In one transformation, how many sources can be coupled? Two sources can be coupled.

What are the three areas where the rows can be flagged for particular treatment?
In the mapping (Update strategy), in the session (Treat Source Rows As) and in the session target options.
What is the use of Forward/Reject rows in a mapping?

What are the default values for variables?
Hints: String = Null, Number = 0, Date = 1/1/1753.

10. How many ways can you filter the records?
1. Source Qualifier 2. Filter transformation 3. Router transformation 4. Ranker 5. Update strategy

11. How do you call stored procedure and external procedure transformations?
An External Procedure can be called in the pre-session and post-session tags in the Session property sheet. Stored procedures are to be called in the Mapping Designer by three methods:
1. Select the icon and add a Stored Procedure transformation
2. Select Transformation - Import Stored Procedure
3. Select Transformation - Create and then select Stored Procedure

12. Difference between Router and Filter transformation?
In a Filter transformation the records are filtered based on the condition, and rejected rows are discarded. In a Router multiple conditions are placed, and the rejected rows can be assigned to a port.

Explain the Expression transformation?
The Expression transformation is used to calculate values in a single row before writing to the target.
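The Router-versus-Filter difference can be sketched in plain Python. A real Router evaluates each group condition independently for every row, so the sketch does the same; the group names, department values and conditions are made up for illustration:

```python
rows = [{"dept": "A", "sal": 100}, {"dept": "B", "sal": 200}, {"dept": "C", "sal": 50}]

# Filter: rows failing the condition are simply dropped.
filtered = [r for r in rows if r["sal"] > 90]

# Router: each named group has its own condition; a row goes to every group
# whose condition it meets, and unmatched rows fall into a default group.
conditions = {
    "dept_a": lambda r: r["dept"] == "A",
    "high_sal": lambda r: r["sal"] > 150,
}
groups = {name: [] for name in conditions}
groups["DEFAULT"] = []
for r in rows:
    matched = False
    for name, cond in conditions.items():
        if cond(r):
            groups[name].append(r)
            matched = True
    if not matched:
        groups["DEFAULT"].append(r)

print(len(filtered))           # 2 rows survive the filter
print(len(groups["DEFAULT"]))  # 1 row matched no router condition
```

Unlike the filter, the router keeps the unmatched row reachable through its default group instead of discarding it.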
If more than two sources are to be coupled, add another Joiner in the hierarchy.

13. What are the join options?
Normal (default)
Master Outer
Detail Outer
Full Outer

14. What is the Source Qualifier transformation?
When you add a relational or flat file source definition to a mapping, you need to connect it to a Source Qualifier transformation. The source qualifier represents the records that the Informatica server reads when it runs a session. It can be used to:
Join data originating from the same source database
Filter records when the Informatica server reads the source data
Specify an outer join rather than the default inner join
Specify sorted ports
Select only distinct values from the source
Create a custom query to issue a special SELECT statement for the Informatica server to read the source data

15. Explain the Normalizer transformation?
The Normalizer transformation normalizes records from COBOL and relational sources, allowing you to organize the data according to your own needs. A Normalizer transformation can appear anywhere in a data flow when you normalize a relational source. Use a Normalizer transformation instead of the Source Qualifier transformation when you normalize a COBOL source. When you drag a COBOL source into the Mapping Designer workspace, the Normalizer transformation appears, creating input and output ports for every column in the source.

16. What is the Ranker transformation?
It filters the required number of records from the top or from the bottom.

What is the target load option?
It defines the order in which the Informatica server loads the data into the targets. This is to avoid integrity constraint violations.
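What the Normalizer does to a COBOL-style repeating group (an OCCURS clause) can be sketched like this — the record layout is an invented example:

```python
# One COBOL-style record with a repeating group (OCCURS 4 TIMES).
record = {"store": "S1", "qtr_sales": [100, 120, 90, 140]}

# Normalizing turns the N occurrences into N output rows.
normalized = [
    {"store": record["store"], "quarter": i + 1, "sales": amount}
    for i, amount in enumerate(record["qtr_sales"])
]

print(len(normalized))  # 4
print(normalized[0])    # {'store': 'S1', 'quarter': 1, 'sales': 100}
```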
17. How do you identify the bottlenecks in mappings?
Bottlenecks can occur in:
1. Targets
The most common performance bottleneck occurs when the Informatica server writes to a target database. You can identify a target bottleneck by configuring the session to write to a flat file target. If the session performance increases significantly when you write to a flat file, you have a target bottleneck.
Solution: Drop or disable indexes or constraints. Perform a bulk load (ignores the database log). Increase the commit interval (recovery is compromised). Tune the database for RBS, dynamic extension etc.
2. Sources
Set a filter transformation after each SQ and see whether the records get through. You can also identify a source problem with a Read Test Session, where we copy the mapping with the sources and SQs, remove all transformations, and connect to a file target. If the performance is the same, then there is a source bottleneck.
Using a database query: copy the read query directly from the log and execute it against the source database with a query tool. If the time it takes to execute the query and the time to fetch the first row are significantly different, then the query can be modified using optimizer hints.
Solutions: Optimize queries using hints. Use indexes wherever possible.
3. Mapping
If both the source and target are OK, then the problem could be in the mapping. Add a filter transformation before the target; if the time taken is the same, then there is a mapping problem. (Or look at the performance monitor in the Session property sheet and view the counters.) High error rows and rows in the lookup cache indicate a mapping bottleneck.
Solutions:
Optimize Single Pass Reading.
Optimize the Lookup transformation:
1. Caching the lookup table: When caching is enabled, the Informatica server caches the lookup table and queries the cache during the session. When this option is not enabled, the server queries the lookup table on a row-by-row basis; an un-cached lookup is slow, since the server issues a SELECT statement for each row passing into the lookup transformation. Caches can be Static, Dynamic, Shared, Un-shared and Persistent.
2. Optimizing the lookup condition: Whenever multiple conditions are placed, the condition with the equality sign should take precedence.
3. Indexing the lookup table: The cached lookup table should be indexed on the ORDER BY columns (the session log contains the ORDER BY statement). For an un-cached lookup, it is better to index the lookup table on the columns in the condition.

Optimize the Filter transformation:
You can improve efficiency by filtering early in the data flow. Instead of using a filter transformation halfway through the mapping to remove a sizable amount of data, move the filter transformation as close to the source qualifier as possible to remove unnecessary data early in the data flow. If it is not possible to move the filter into the SQ, use a source qualifier filter to remove those same rows at the source.

Optimize the Aggregate transformation:
1. Group by simpler columns, preferably numeric columns.
2. Use sorted input. The server assumes all input data are sorted and performs aggregate calculations as it reads. Sorted input decreases the use of aggregate caches.
3. Use incremental aggregation in the session property sheet.

Optimize the Seq. Generator transformation:
1. Try creating a reusable Seq. Generator transformation and use it in multiple mappings.
2. The Number of Cached Values property determines the number of values the Informatica server caches at one time.
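The "use sorted input" advice works because, with input pre-sorted on the group-by ports, each group can be aggregated and emitted as soon as its key changes, instead of caching every group until the end of the data. `itertools.groupby` shows the idea in plain Python (not Informatica internals):

```python
from itertools import groupby

# Input already sorted on the group-by key, as the aggregator's
# "sorted input" option requires.
rows = [("A", 10), ("A", 5), ("B", 7), ("B", 3), ("B", 1)]

totals = {
    key: sum(amount for _, amount in group)
    for key, group in groupby(rows, key=lambda r: r[0])
}
print(totals)  # {'A': 15, 'B': 11}
```

Like the aggregator with sorted input, `groupby` only ever holds one run of equal keys at a time, which is exactly why the aggregate cache shrinks.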
Optimize the Expression transformation:
1. Factor out common logic.
2. Minimize aggregate function calls.
3. Replace common sub-expressions with local variables.
4. Use operators instead of functions.

4. Sessions
If you do not have a source, target, or mapping bottleneck, you may have a session bottleneck. Small cache sizes, low buffer memory, and small commit intervals can cause session bottlenecks. You can identify a session bottleneck by using the performance details. The Informatica server creates performance details when you enable Collect Performance Data on the General tab of the session properties. Performance details display information about each Source Qualifier, target definition, and individual transformation. All transformations have some basic counters that indicate the number of input rows, output rows, and error rows. Any value other than zero in the readfromdisk and writetodisk counters for the Aggregate, Joiner, or Rank transformations indicates a session bottleneck. Low BufferInput_efficiency and BufferOutput_efficiency counters also indicate a session bottleneck.
The Informatica server uses the index and data caches for the Aggregate, Rank, Lookup and Joiner transformations. It stores group information for those transformations in the index cache, and stores the transformed data in the data cache before returning it to the data flow. If the allocated data or index cache is not large enough to store the data, the server stores the data in a temporary disk file as it processes the session data.
5. System (networks)

18. How do you improve the session performance?
1. Run concurrent sessions.
2. Partition the session (PowerCenter).
3. Tune parameters: DTM buffer pool, buffer block size, index cache size, data cache size, commit interval, tracing level (Normal, Terse, Verbose Init, Verbose Data). The session has memory to hold 83 sources and targets; if there are more, the DTM buffer can be increased.
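The tip "replace common sub-expressions with local variables" can be illustrated directly; the price/quantity expressions are hypothetical:

```python
import math

def slow(price, qty):
    # The common sub-expression math.log(price * qty) is evaluated twice.
    return (math.log(price * qty) * 2, math.log(price * qty) + 1)

def fast(price, qty):
    v = math.log(price * qty)  # evaluated once, held in a local variable
    return (v * 2, v + 1)

assert slow(10.0, 3) == fast(10.0, 3)  # same results, half the log() calls
```

In an Expression transformation the same idea applies: compute the shared piece once in a variable port and reference that port from each output expression.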
Each time the server pages to the disk, performance slows. This can be seen from the counters. Since the data cache is generally larger than the index cache, it has to be given more memory than the index cache.
4. Remove the staging area.
5. Turn off session recovery.
6. Reduce error tracing.

19. What are tracing levels?
Normal (default): Logs initialization and status information, errors encountered, and skipped rows due to transformation errors; summarizes session results, but not at the row level.
Terse: Logs initialization information, error messages, and notification of rejected data.
Verbose Init: In addition to normal tracing, it also logs additional initialization information, the names of the index and data files used, and detailed transformation statistics.
Verbose Data: In addition to Verbose Init, it records row-level logs.

20. What are mapping parameters and variables?
A mapping parameter is a user-definable constant that takes up a value before running a session. It can be used in SQ expressions, Expression transformations etc.
Steps: Define the parameter in the Mapping Designer; use the parameter in the expressions; define the values for the parameter in the parameter file.
A mapping variable is defined similarly to the parameter, except that the value of the variable is subject to change. It picks up its value in the following order:
1. From the session parameter file.
2. As stored in the repository object in the previous run.
3. As defined in the initial values in the designer.
4. Default values.

21. What are slowly changing dimensions?
Slowly changing dimensions are dimension tables that have slowly increasing data as well as updates to existing data.
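The variable-value precedence listed above can be sketched as a simple resolver — the `resolve` function and the `$$LAST_RUN` name are illustrative, not Informatica's implementation:

```python
DEFAULTS = {"string": None, "number": 0}  # datatype defaults, per the notes above

def resolve(name, param_file, repository, initial_values, datatype="number"):
    """Return the first value found, searching in precedence order:
    parameter file, repository (previous run), designer initial value,
    then the datatype default."""
    for source in (param_file, repository, initial_values):
        if name in source:
            return source[name]
    return DEFAULTS[datatype]

# Hypothetical variable $$LAST_RUN: absent from the parameter file,
# so the value persisted in the repository wins over the initial value.
print(resolve("$$LAST_RUN", {}, {"$$LAST_RUN": 42}, {"$$LAST_RUN": 0}))  # 42
```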
What are the output files that the Informatica server creates during the session run?
Informatica server log: The Informatica server (on UNIX) creates a log for all status and error messages (default name: pm.server.log). It also creates an error log for error messages. These files are created in the Informatica home directory.
Session log file: The Informatica server creates a session log file for each session. It writes information about the session into the log file, such as the initialization process, the creation of SQL commands for the reader and writer threads, errors encountered and the load summary. The amount of detail in the session log file depends on the tracing level that you set.
Session detail file: This file contains load statistics for each target in the mapping. Session detail includes information such as the table name and the number of rows written or rejected. You can view this file by double-clicking on the session in the Monitor window.
Performance detail file: This file contains information known as session performance details, which helps you determine where performance can be improved. To generate this file, select the performance detail option in the session property sheet.
Reject file: This file contains the rows of data that the writer does not write to targets.
Output file: If the session writes to a target file, the Informatica server creates the target file based on the file properties entered in the session property sheet.
Cache files: When the Informatica server creates a memory cache, it also creates cache files. The Informatica server creates index and data cache files for the following transformations: Aggregator, Joiner, Rank, Lookup.
Indicator file: If you use a flat file as a target, you can configure the Informatica server to create an indicator file. For each target row, the indicator file contains a number to indicate whether the row was marked for insert, update, delete or reject.
Post-session email: Post-session email allows you to automatically communicate information about a session run to designated recipients. You can create two different messages: one if the session completed successfully, the other if the session fails.
Control file: The Informatica server creates a control file and a target file when you run a session that uses the external loader. The control file contains the information about the target flat file, such as the data format and loading instructions for the external loader.
Q. What is the difference between the Joiner transformation and the Source Qualifier transformation?
A. You can join heterogeneous data sources in a Joiner transformation, which you cannot do in a Source Qualifier transformation.

Q. What is meant by lookup caches?
A. The Informatica server builds a cache in memory when it processes the first row of data in a cached lookup transformation. It allocates memory for the cache based on the amount you configure in the transformation or session properties. The Informatica server stores condition values in the index cache and output values in the data cache.

Q. What is target load order?
A. You specify the target load order based on the source qualifiers in a mapping. If you have multiple source qualifiers connected to multiple targets, you can define the order in which the Informatica server loads data into the targets.

Q. What is meant by parameters and variables in Informatica and how are they used?
A. Parameter: A mapping parameter represents a constant value that you can define before running a session. A mapping parameter retains the same value throughout the entire session.
Variable: A mapping variable represents a value that can change through the session. The Informatica server saves the value of a mapping variable to the repository at the end of each successful session run and uses that value the next time you run the session.