
Mapplet

When you use a mapplet in a mapping, you use an instance of the mapplet. Like a reusable transformation,
any change made to the mapplet is inherited by all instances of the mapplet.

To use a mapplet in a mapping, you must first configure it for input and output. In addition to the
transformation logic that you configure, a mapplet has the following components:

• Mapplet input
• Mapplet output
• Mapplet ports

Note:

Use the following rules and guidelines when you edit a mapplet that is used by mappings:

• Do not delete a port from the mapplet. The Designer deletes mapplet ports in the mapping when you
delete links to an Input or Output transformation or when you delete ports connected to an Input or
Output transformation.

• Do not change the datatype, precision, or scale of a mapplet port. The datatype, precision, and scale
of a mapplet port are defined by the transformation port to which it is connected in the mapplet.
Therefore, if you edit a mapplet to change the datatype, precision, or scale of a port connected to a port
in an Input or Output transformation, you change the mapplet port.

• Do not change the mapplet type. If you remove all active transformations from an active mapplet, the
mapplet becomes passive. If you add an active transformation to a passive mapplet, the mapplet
becomes active.

Mapplets help simplify mappings in the following ways:

• Include source definitions. Use multiple source definitions and source qualifiers to provide source
data for a mapping.
• Accept data from sources in a mapping. If you want the mapplet to receive data from the
mapping, use an Input transformation to receive source data.
• Include multiple transformations. A mapplet can contain as many transformations as you need.
• Pass data to multiple transformations. You can create a mapplet to feed data to multiple
transformations. Each Output transformation in a mapplet represents one output group in a
mapplet.
• Contain unused ports. You do not have to connect all mapplet input and output ports in a
mapping.

Use the following rules and guidelines when you add transformations to a mapplet:

• If you use a Sequence Generator transformation, you must use a reusable Sequence Generator transformation.
• If you use a Stored Procedure transformation, you must configure the Stored Procedure Type to be Normal.

Metadata Extension

PowerCenter allows end users and partners to extend the metadata stored in the repository by associating information with individual objects in the repository. For example, when you create a mapping, you can store the contact information with the mapping. You associate information with repository metadata using metadata extensions.

PowerCenter Client applications can contain the following types of metadata extensions:

• Vendor-defined: Third-party application vendors create vendor-defined metadata extensions. Vendor-defined metadata extensions exist within a particular vendor domain. You can view and change the values of vendor-defined metadata extensions, but you cannot create, delete, or redefine them.
• User-defined: You create user-defined metadata extensions using PowerCenter. User-defined metadata extensions exist within the User Defined Metadata Domain. You can create, edit, delete, and view user-defined metadata extensions, and you can also change their values.

Note: All metadata extensions exist within a domain. You see the domains when you create, edit, or view metadata extensions. If you use third-party applications or other Informatica products, you may see domains such as Ariba or PowerExchange for Siebel. You cannot edit vendor-defined domains or change the metadata extensions in them. When you create metadata extensions for repository objects, you add them to the User Defined Metadata Domain.

Both vendor-defined and user-defined metadata extensions can exist for the following repository objects:

1. Source definitions
2. Target definitions
3. Transformations
4. Mappings
5. Mapplets
6. Sessions
7. Tasks
8. Workflows
9. Worklets

Difference between bulk mode and normal mode property in target:

Bulk load: PowerCenter loads the data bypassing the database log. This improves session performance, but the disadvantage is that the target database cannot perform rollback/recovery from a failed session. You also need to disable/remove the key constraints on the target before loading in bulk mode.

Normal mode: The database log is not bypassed, so the target database can recover from an incomplete session. Session performance is not as high as in the case of bulk load.

Transaction Control Transformation

Transaction Control Transformation in Mapping

A Transaction Control transformation defines or redefines the transaction boundaries in a mapping. It creates a new transaction boundary or drops any incoming transaction boundary coming from an upstream active source or Transaction Control transformation (a minimal expression sketch follows this section).

A Transaction Control transformation can be effective or ineffective for the downstream transformations and targets in the mapping. It becomes ineffective for downstream transformations or targets if you use a transformation after it that drops the incoming transaction boundaries. The following transformations drop transaction boundaries:

• Aggregator transformation with Transformation scope set to "All Input"
• Joiner transformation with Transformation scope set to "All Input"
• Rank transformation with Transformation scope set to "All Input"
• Sorter transformation with Transformation scope set to "All Input"
• Custom transformation with Transformation scope set to "All Input"
• Custom transformation configured to generate transactions
• Transaction Control transformation
• A multiple input group transformation, such as a Custom transformation, connected to multiple upstream transaction control points
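For reference, the condition you enter in a Transaction Control transformation is evaluated for each row and must return one of the built-in variables TC_CONTINUE_TRANSACTION, TC_COMMIT_BEFORE, TC_COMMIT_AFTER, TC_ROLLBACK_BEFORE, or TC_ROLLBACK_AFTER. A minimal sketch, assuming a hypothetical flag port NEW_DEPT_FLAG (not from this document) set upstream whenever a new department starts:

    -- Transaction Control condition: commit the open transaction before
    -- this row when a new department begins; otherwise keep accumulating
    IIF(NEW_DEPT_FLAG = 1, TC_COMMIT_BEFORE, TC_CONTINUE_TRANSACTION)

TC_CONTINUE_TRANSACTION is the default behavior, so rows that match no commit/rollback condition simply stay inside the current transaction.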

Mapping Guidelines and Validation

Use the following rules and guidelines when you create a mapping with a Transaction Control transformation:

• If the mapping includes an XML target, and you choose to append or create a new document on commit, the input groups must receive data from the same transaction control point.
• Transaction Control transformations connected to any target other than relational, XML, or dynamic MQSeries targets are ineffective for those targets.
• You must connect each target instance to a Transaction Control transformation.
• You can connect multiple targets to a single Transaction Control transformation.
• You can connect only one effective Transaction Control transformation to a target.
• You cannot place a Transaction Control transformation in a pipeline branch that starts with a Sequence Generator transformation.
• If you use a dynamic Lookup transformation and a Transaction Control transformation in the same mapping, a rolled-back transaction might result in unsynchronized target data.
• A Transaction Control transformation may be effective for one target and ineffective for another target. If each target is connected to an effective Transaction Control transformation, the mapping is valid.
• Either all targets or none of the targets in the mapping should be connected to an effective Transaction Control transformation.

LOOKUP: Connected vs. Unconnected

• Connected: connected to the pipeline. Unconnected: not in the pipeline; receives input from a :LKP expression in any other transformation (see the sketch below).
• Connected: can use static and dynamic cache. Unconnected: can use only static cache.
• Connected: can return more than one value. Unconnected: can return only one value.
• Connected: caches all the ports. Unconnected: caches only the lookup condition port(s) and the return port.
• Connected: supports user-defined default values, which it returns when the lookup conditions are not satisfied. Unconnected: does not support user-defined default values.
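A minimal sketch of calling an unconnected Lookup. The lookup name LKP_GET_CUST_NAME and the ports are hypothetical, not from this document; :LKP is the standard prefix for invoking an unconnected Lookup from an expression, and the lookup's designated return port supplies the single result (NULL on no match):

    -- Expression transformation, variable port V_CUST_NAME:
    :LKP.LKP_GET_CUST_NAME(CUST_ID)

    -- Expression transformation, output port O_CUST_NAME
    -- (variable ports are evaluated before output ports):
    IIF(ISNULL(V_CUST_NAME), 'UNKNOWN', V_CUST_NAME)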

Types of Lookup:

• Cached Lookup: static (read-only) cache or dynamic cache
• Un-cached Lookup
• Persistent and non-persistent cache

Static cache: is not modified once it is built and remains the same while the session runs. Note: by default the Informatica cache is static.

Dynamic cache: refreshed during the session run by inserting or updating records in the cache based on the incoming data from the source. Note: a dynamic cache is synchronized with the target.

Persistent cache: Informatica retains the cache even after the session run.

Non-persistent cache: Informatica deletes the cache after completion of the session.

Note: Lookup maintains a table for the cache.

Active and Passive transformations

Active: an active transformation performs any of the following operations:
1. Changes the number of rows between the transformation input and output. Ex.: Filter
2. Changes the transaction boundary by defining commit or rollback. Ex.: Transaction Control
3. Changes the row type to insert, update, delete, or reject. Ex.: Update Strategy

Passive: does not change the number of rows passing through it, and changes neither the transaction boundary nor the row type. Ex.: Expression transformation

Difference between Router and Filter (conditions sketched after this section):

• Filter: single input, single output. Router: single input, multiple outputs.
• Filter: either passes a row or blocks it if the condition is not satisfied. Router: does not itself block any row; if a row satisfies no group condition, it is sent to the default group.
• Filter acts like a WHERE clause in SQL. Router acts like a CASE in SQL.

How can we improve the performance of the Aggregator? By passing sorted input and checking the "Sorted Input" option in the Aggregator transformation. Note: if we check "Sorted Input" in the Aggregator and the input is not sorted, the session will fail. The session also fails if the data is not sorted on the same ports that we use in the Aggregator's group by.
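A minimal sketch of the Filter/Router contrast, with hypothetical port and group names (not from this document). A Filter takes one boolean condition; a Router takes one condition per user-defined group plus an automatic default group:

    -- Filter transformation condition (rows failing it are dropped):
    SALARY > 5000

    -- Router transformation, one condition per user-defined group:
    -- Group HIGH:    SALARY > 10000
    -- Group MEDIUM:  SALARY > 5000 AND SALARY <= 10000
    -- DEFAULT group: receives every row that satisfies no group condition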

Update Override in target instance: by default the target is updated using the primary key value, but if we want to update the target using a particular column value, we can include that condition in the WHERE clause of the default query (see the sketch after this section).

When does a Sorter work like an active transformation? When we select the Distinct option in its properties.

When does a Lookup work like an active transformation? When we check the "" option.

Difference between STOP and ABORT: In Informatica, when we STOP a session it stops reading data from the source but continues processing and writing the already-read data into the target. When we ABORT a session, there is a timeout period set to 60 seconds; if the Integration Service cannot finish processing and writing the data into the target within the timeout period, it kills the DTM process and terminates the session.

How many sessions can we take in a batch? Any number of sessions, but as a best practice there should be fewer sessions in a batch, which helps at the time of migration.

Name any 4 files which the Informatica server creates when it runs a session: session log, workflow log, error log, and bad file.

How can we prevent duplicate records from loading into the target? By checking the Distinct option in the Source Qualifier.

What is Tracing Level? It decides the amount of data you want to store in the session log file. There are four levels: TERSE, NORMAL, VERBOSE INITIALIZATION, and VERBOSE (the detailed one).

List of transformations that support sorted input: Aggregator, Joiner, and Lookup support sorted input to increase session performance.

SQL Override: an option available in the Source Qualifier and Lookup transformations where we can include joins, filters, group by, and order by.
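Minimal sketches of both overrides. The table and port names (T_EMP, EMP_CODE, and so on) are hypothetical, not from this document; in a target update override, :TU.port_name refers to the value of a port in the target instance:

    -- Target update override: update by EMP_CODE instead of the primary key
    UPDATE T_EMP
    SET    EMP_NAME = :TU.EMP_NAME,
           SALARY   = :TU.SALARY
    WHERE  EMP_CODE = :TU.EMP_CODE

    -- Source Qualifier SQL override: join, filter, and ordering in one query
    SELECT e.EMP_ID, e.EMP_NAME, d.DEPT_NAME
    FROM   EMP e
    JOIN   DEPT d ON e.DEPT_ID = d.DEPT_ID
    WHERE  e.STATUS = 'ACTIVE'
    ORDER BY e.EMP_ID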

Can we return two columns using a Lookup? Yes, if the lookup is connected.

Can we return two columns using an unconnected Lookup? Yes, by concatenating the two columns, but the return port will still be only one.

Difference between Union, Joiner, and Lookup:

• Union: can join two tables without a common port.
• Joiner: can join any two heterogeneous data sources, but a common key is mandatory.
• Lookup: can join two sources using a SQL override, and can check whether a particular record exists or not.

There are 1000 rows which I am passing through an Aggregator transformation and there is no group by port in the Aggregator. How many rows will come in the output? Only the last row.

There are 10000 records in a flat file and we want to load record number 500 to record number 600 into the target. How can we do it? Use a Sequence Generator to generate a row number, then use a Filter to select row numbers from 500 to 600 (see the sketch after this section).

Why and when should a Sequence Generator be a reusable transformation? When more than one mapping uses a Sequence Generator to load data from source into target; if the Sequence Generator is reusable, it ensures that unique values are inserted for all the records in the target.

Types of SCD-2: versioning, flag, date.

Different threads in the DTM: master thread, mapping thread, reader thread, writer thread, pre-session thread, post-session thread.
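A minimal sketch of the record 500-600 technique and of the concatenation trick, with hypothetical port names (not from this document). NEXTVAL comes from the Sequence Generator, assuming Start Value 1 and Increment By 1:

    -- Filter transformation condition: keep only records 500 to 600
    NEXTVAL >= 500 AND NEXTVAL <= 600

    -- Return port of an unconnected Lookup, packing two columns into the
    -- single return value (split back with SUBSTR/INSTR in the caller)
    FIRST_NAME || '~' || LAST_NAME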

Types of scheduling options to run a session:

• Run only on demand: run manually.
• Run once: run only once, at the specified date and time.
• Run every: the Informatica server runs the session at regular intervals, as configured.
• Customized repeat: the Informatica server runs the session at the date and time specified in the Repeat dialog box.

How can we store the previous session log and prevent it from being overwritten by the current session log? Run the session in timestamp mode.

Mapplet: a set of reusable transformations for applying the same business logic.

List of different tasks in the Workflow Manager: Session task, Assignment task, Command task, Control task, Decision task, E-mail task, Event-Raise task, Event-Wait task, Timer task, Link task.