Topic Search

The content of workflow objects such as tasks, links, variables, and events can be searched in the Workflow Manager. The Workflow Manager saves the last 10 search strings in the list. You can also run an object query to search for objects.

Check in: workflow objects such as the scheduler and the session configuration can be checked in.

Check out: a check-out happens whenever an object is edited from the browser.

Version

You can view and compare versions of objects in the Workflow Manager.
1. You cannot simultaneously view multiple versions of composite objects, such as workflows and worklets.
2. You might want to view version 5 of a workflow that originally included version 3 of a session, but version 3 of the session has been purged from the repository. When you view version 5 of the workflow, version 2 of the session appears as part of the workflow.
3. You cannot view older versions of sessions if they reference deleted or invalid mappings, or if they do not have a session configuration.

Compare

1. You can compare objects across folders and repositories. You must open both folders to compare the objects.
2. You can compare a reusable object with a non-reusable object.
3. You can also compare two versions of the same object.


Aggregator

General

Functions

Ports

Nested Aggregate ports

Group By

Null Values

Default Values

Optimization

SORTED INPUT

Incremental Aggregation

Cache

Cache

Difference

Expression

General

Ports

Optimization


Remove Duplicate Records

Filter

General

Ports

Condition

Null Values

Default Values

Optimization

Troubleshooting

Router

General

Filter condition

Mapping

Not Passive

Difference

Joiner


General

Pipeline

Condition

Join Types

Join Flows

Default Values

Blocking the Source Pipelines

Working with Transactions

SORTED INPUT

Optimization

Caches

Differences

Lookup

General

Cache


Types


Active and connected

Functions

The following aggregate functions can be used: AVG, COUNT, FIRST, LAST, MAX, MEDIAN, MIN, PERCENTILE, STDDEV, SUM, and VARIANCE. Along with these aggregate functions, you can use row-level functions such as IIF, DECODE, etc.

You can include multiple single-level or multiple nested functions in different output ports in an Aggregator transformation. However, you cannot include both single-level and nested functions in an Aggregator transformation. Therefore, if an Aggregator transformation contains a single-level function in any port, you cannot use a nested function in any other port in that transformation. When you include single-level and nested functions in the same Aggregator transformation, the Designer marks the mapping or mapplet invalid. If you need to create both single-level and nested functions, create separate Aggregator transformations.

Nested Aggregate ports

You cannot aggregate a value in one port and use that value in another Aggregator port; this will invalidate the mapping.

Group By

In an Aggregator transformation, at least one port has to be selected as a group by column. The group by port indicates how to create groups and can be any input, input/output, output, or variable port. By default, the aggregator returns the last value for a port if there is more than one record for the group by column; when grouping data, the Aggregator transformation outputs the last row of each group unless otherwise specified. The aggregator also sorts the data in ascending order on the group by port.

NOTE: If the primary key column of the source is used as the group by port, the aggregator works like a sorter transformation, which means you cannot get a count(*) in the Aggregator.

Conditional clauses: Use conditional clauses in the aggregate expression to reduce the number of rows used in the aggregation. The conditional clause can be any clause that evaluates to TRUE or FALSE, for example:
SUM( COMMISSION, COMMISSION > QUOTA )

Non-aggregate functions: You can also use non-aggregate functions in the aggregate expression. The following expression returns the highest number of items sold for each item (grouped by item); if no items were sold, the expression returns 0:
IIF( MAX( QUANTITY ) > 0, MAX( QUANTITY ), 0 )

Null Values

When you configure the Integration Service, you can choose how you want it to handle null values in aggregate functions. You can choose to treat null values in aggregate functions as NULL or zero. By default, the Integration Service treats null values as NULL in aggregate functions.

Default Values

Use default values in the group by port to replace null input values. This allows the Integration Service to include null item groups in the aggregation.

SORTED INPUT

The Aggregator has a property "SORTED INPUT". If you check this property, the aggregator assumes that the data arrives already sorted on the group by ports; if it does not, the session fails at run time. Sorted input improves aggregator performance. Do not use sorted input if either of the following conditions is true: the aggregate expression uses nested aggregate functions, or the session uses incremental aggregation.

Incremental Aggregation

After you create a session that includes an Aggregator transformation, you can enable the session option Incremental Aggregation. When the Integration Service performs incremental aggregation, it passes new source data through the mapping and uses historical cache data to perform new aggregation calculations incrementally.
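Pulling these points together, here is a minimal illustrative port layout for an Aggregator (the port names DEPT_ID, QUANTITY, COMMISSION, and QUOTA are assumed for the example and are not from the original material):

DEPT_ID          input/output, Group By checked
QUANTITY         input
COMMISSION       input
QUOTA            input
MAX_QUANTITY     output   = IIF( MAX( QUANTITY ) > 0, MAX( QUANTITY ), 0 )
COMM_OVER_QUOTA  output   = SUM( COMMISSION, COMMISSION > QUOTA )

All aggregate functions here are single-level, so the transformation stays valid; adding a nested function such as MAX( COUNT( QUANTITY ) ) in another output port of the same Aggregator would invalidate the mapping.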
Cache

The Integration Service stores data in the aggregate cache until it completes the aggregate calculations. It stores group values in an index cache and row data in the data cache. When you run a session that uses an Aggregator transformation, the Integration Service creates the index and data caches in memory to process the transformation. If the Integration Service requires more space, it stores overflow values in cache files.

Note: The Integration Service uses memory to process an Aggregator transformation with sorted ports; it does not use cache memory. You do not need to configure cache memory for Aggregator transformations that use sorted ports.

Difference

Difference between the Informatica Aggregator and Oracle GROUP BY:
1. The Informatica Aggregator sorts the data on the group by ports, whereas Oracle GROUP BY does not sort the data.

Optimization

Limit the number of connected input/output or output ports to reduce the amount of data the Aggregator transformation stores in the data cache.

Expression Transformation

Passive and connected. The Expression transformation is used:
1. To perform non-aggregate calculations on a row-by-row basis.
2. To test conditional statements.

Multiple expressions can be created in one Expression transformation. As long as you enter only one expression for each output port, you can create any number of output ports in the transformation. In this way, you can use one Expression transformation rather than creating separate transformations for each calculation that requires the same set of data.

Port names used as part of an expression in an Expression transformation follow stricter rules than port names in other types of transformations: a port name must begin with a single- or double-byte letter or a single- or double-byte underscore (_), and it can contain any of the following single- or double-byte characters: a letter, number, underscore (_), $, #, or @.
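A minimal illustrative Expression transformation sketch (the port names PRICE, QUANTITY, TOTAL, and PRICE_BAND are assumed, not from the original material):

PRICE        input
QUANTITY     input
TOTAL        output   = PRICE * QUANTITY
PRICE_BAND   output   = IIF( PRICE > 100, 'HIGH', 'LOW' )

Both calculations are evaluated row by row in the same transformation, with one expression per output port.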


Remove Duplicate Records

To drop repeated values of a port such as EMP_NO, create the following ports in an Expression transformation, in this sequence only:
1) EMP_NO is your input port.
2) EMP2 (variable port) = EMP1
3) EMP1 (variable port) = EMP_NO
4) CHK (output port) = IIF( EMP2 = EMP_NO, 1, 0 )
Now forward the row only if CHK is 0, for example with a Filter transformation on the condition CHK = 0. (The data must arrive sorted on EMP_NO so that duplicate values appear on consecutive rows.)

Filter

Active and connected. All ports in a Filter transformation are input/output, and only rows that meet the condition pass through the Filter transformation. You cannot concatenate ports from more than one transformation into the Filter transformation; the input ports for the filter must come from a single transformation.

Condition

A filter condition returns TRUE or FALSE for each row that passes through the transformation, depending on whether the row meets the specified condition. Only rows that return TRUE pass through the transformation. Discarded rows do not appear in the session log or reject files. The Filter transformation can define a condition using any statement or transformation function that returns either a TRUE or FALSE value. Zero (0) is the equivalent of FALSE, and any non-zero value is the equivalent of TRUE.

Null Values

To filter out rows containing null values or spaces, use the ISNULL and IS_SPACES functions to test the value of the port. For example, if you want to filter out rows that contain NULLs in the FIRST_NAME port, use the following condition:
IIF(ISNULL(FIRST_NAME),FALSE,TRUE)
This condition states that if the FIRST_NAME port is NULL, the return value is FALSE and the row is discarded.

Default Values

The Filter transformation does not allow setting output default values.

Optimization

To maximize session performance, include the Filter transformation as close to the sources in the mapping as possible. Rather than passing rows you plan to discard through the mapping, you filter out unwanted data early in the flow of data from sources to targets.

The Source Qualifier transformation provides an alternate way to filter rows. Rather than filtering rows within a mapping, the Source Qualifier transformation filters rows when they are read from the source. The main difference is that the Source Qualifier limits the row set extracted from a source, while the Filter transformation limits the row set sent to a target. Since a Source Qualifier reduces the number of rows used throughout the mapping, it provides better performance. However, the Source Qualifier transformation only lets you filter rows from relational sources, while the Filter transformation filters rows from any type of source. Also, because the Source Qualifier filter runs in the database, you must make sure that its filter condition uses only standard SQL.

Troubleshooting

Case sensitivity: the filter condition is case sensitive, and queries in some databases do not take this into account.
Appended spaces: if a field contains additional spaces, the filter condition needs to check for spaces up to the length of the field. Use the RTRIM function to remove the additional spaces.

Router

One input group and multiple output groups. There are two types of output groups, user-defined and default. You specify the test condition for each user-defined group you create; you cannot modify or delete the default group. (See the sketch after the Filter comparison below.)

The Integration Service determines the order of evaluation for each condition based on the order of the connected output groups. The Integration Service processes user-defined groups that are connected to a transformation or a target in a mapping; it only processes user-defined groups that are not connected in a mapping if the default group is connected to a transformation or a target. If all of the conditions evaluate to FALSE, the Integration Service passes the row to the default group. If you want the Integration Service to drop all rows in the default group, do not connect it to a transformation or a target in a mapping.

Mapping

You can connect one group to one transformation or target.
You can connect one output port in a group to multiple transformations or targets.
You can connect multiple output ports in one group to multiple transformations or targets.
You cannot connect more than one group to one target or to a single input group transformation.
You can connect more than one group to a multiple input group transformation, except for Joiner transformations, when you connect each output group to a different input group.

Not Passive

The Router is not a passive transformation, although one may argue that it is passive because if only the default group is used there is no change in the number of rows. The Router is an active transformation: if a record satisfies more than one group, Informatica passes that row multiple times. Most of the time the Router is used to apply multiple conditions to records, and it is absolutely possible for a record to satisfy more than one condition, which makes the Router active. The Router is the only active transformation where the number of output records from the transformation is greater than or equal to the number of input records given to the transformation.

Difference

The Router and Filter transformations are similar in behavior, but the Router has some additional features. Informatica added the Router transformation from version 5.x onward. First, the Filter transformation:
1. The Filter transformation is an active transformation.
2. The Filter transformation allows you to filter rows in a mapping.
3. All rows that meet the filter condition pass through the transformation.
4. It has only two types of ports, input and output.
5. Use the Filter transformation as close to the source as possible for better performance; this makes sense because fewer rows then go through the other transformations.
6. You cannot merge rows from different transformations in a Filter transformation.
7. An expression can be used in the filter condition.
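As an illustrative sketch only (the port name DEPTNO and the group names are assumed, not from the original material), a Router that splits rows by department could define its groups as follows:

GRP_DEPT10   condition: DEPTNO = 10
GRP_DEPT20   condition: DEPTNO = 20
DEFAULT      created automatically; receives rows for which every condition above is FALSE

A row that satisfies the conditions of several user-defined groups is routed to each of them, which is why the Router can output more rows than it receives. Leaving the DEFAULT group unconnected drops the remaining rows.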

General

Active and connected. Use the Joiner transformation to join source data from two related heterogeneous sources (different types of sources) residing in different locations or file systems. You can also join data from the same source.

Pipeline

The two input pipelines include a master pipeline and a detail pipeline. The master pipeline ends at the Joiner transformation, while the detail pipeline continues to the target. By default, when you add ports to a Joiner transformation, the ports from the first source pipeline display as detail sources; adding the ports from the second source pipeline sets them as master sources.

You cannot use a Joiner transformation when either input pipeline contains an Update Strategy transformation, or if you connect a Sequence Generator transformation directly before the Joiner transformation.

Condition

If you use multiple ports in the join condition, the Integration Service compares the ports in the order you specify. The Designer validates the datatypes in a condition: both ports in a condition must have the same datatype. If you need to use two ports in the condition with non-matching datatypes, convert the datatypes so they match.

If you join Char and Varchar datatypes, the Integration Service counts any spaces that pad Char values as part of the string:
Char(40) = "abcd" padded with 36 blank spaces
Varchar(40) = "abcd"
The Integration Service does not join the two fields because the Char field contains trailing spaces.

The Joiner transformation does not match null values. For example, if both EMP_ID1 and EMP_ID2 contain a row with a null value, the Integration Service does not consider them a match and does not join the two rows. To join rows with null values, replace the null input with default values and then join on the default values.

Join Types

You define the join type on the Properties tab of the transformation. The Joiner transformation supports the following types of joins: Normal, Master Outer, Detail Outer, and Full Outer. Note: a normal or master outer join performs faster than a full outer or detail outer join.

Normal join: with a normal join, the Integration Service discards all rows of data from the master and detail source that do not match, based on the condition.

Master outer join example: a master outer join keeps all rows of data from the detail source and the matching rows from the master source, and discards the unmatched rows from the master source. For example, you have two sources of data for auto parts, PARTS_SIZE and PARTS_COLOR, with the following data:

PARTS_SIZE (master source)
PART_ID1  DESCRIPTION  SIZE
1         Seat Cover   Large
2         Ash Tray     Small
3         Floor Mat    Medium

PARTS_COLOR (detail source)
PART_ID2  DESCRIPTION  COLOR
1         Seat Cover   Blue
3         Floor Mat    Black
4         Fuzzy Dice   Yellow

To join the two tables by matching the PART_IDs in both sources, set the condition as follows:
PART_ID1 = PART_ID2
With a master outer join, the result set is:

PART_ID  DESCRIPTION  SIZE    COLOR
1        Seat Cover   Large   Blue
3        Floor Mat    Medium  Black
4        Fuzzy Dice   NULL    Yellow

Join Flows

You can join data from the same source in two ways: join two branches of the same pipeline, or join two instances of the same source.

Joining two branches of the same pipeline: when you branch a pipeline, you must add a transformation between the Source Qualifier and the Joiner transformation in at least one branch of the pipeline, and you must join sorted data and configure the Joiner transformation for sorted input. For example, you have a source with the ports Employee, Department, and Total Sales, and in the target you want to view the employees who generated sales greater than the average sales for their department. To do this, you create a mapping with the following transformations:
Sorter transformation: sorts the source data.
Sorted Aggregator transformation: averages the sales data grouped by department. When you perform this aggregation, you lose the data for individual employees, so you pass one branch of the pipeline to the Aggregator transformation and another branch with the same data to the Joiner transformation to preserve the original rows.
Sorted Joiner transformation: joins the sorted aggregated data with the original data.
Filter transformation: compares the average sales against the sales data for each employee and filters out employees with less than average sales.
Note: joining two branches can impact performance if the Joiner transformation receives data from one branch much later than from the other branch. The Joiner transformation caches all the data from the first branch and writes the cache to disk if the cache fills; it must then read the data from disk when it receives the data from the second branch, which can slow processing.

Joining two instances of the same source: you can also join the same source data by creating a second instance of the source and joining the pipelines from the two source instances. If you want to join unsorted data from the same source, you must create two instances of the source and join the pipelines. Note: when you join data using this method, the Integration Service reads the source data for each source instance, so performance can be slower than joining two branches of a pipeline.

Guidelines for choosing between the two methods:
Join two branches of a pipeline when you have a large source or when you can read the source data only once; for example, you can read source data from a message queue only once.
Join two branches of a pipeline when you use sorted data. If the source data is unsorted and you use a Sorter transformation to sort the data, branch the pipeline after you sort the data.
Join two instances of a source when you need to add a blocking transformation to the pipeline between the source and the Joiner transformation.
Join two instances of a source if one pipeline may process its rows much more slowly than the other pipeline.

Default Values

If the result set includes fields that do not contain data in either of the sources, the Joiner transformation populates the empty fields with null values. If you know that all the rows in a source will not contain data in one or more fields and you do not want to insert NULLs in the target, you can set a default value on the Ports tab for the corresponding port.

Blocking the Source Pipelines

When you run a session with a Joiner transformation, the Integration Service blocks and unblocks the source data based on the mapping configuration and whether you configure the Joiner transformation for sorted input.
Unsorted Joiner transformation: the Integration Service reads all master rows before it reads the detail rows. To ensure this, it blocks the detail source while it caches rows from the master source. Once it reads and caches all master rows, it unblocks the detail source and reads the detail rows.
Sorted Joiner transformation: the Integration Service blocks data based on the mapping configuration. Blocking logic is possible if the master and detail input to the Joiner transformation originate from different sources. The Integration Service uses blocking logic to process the Joiner transformation if it can do so without blocking all sources in a target load order group simultaneously; otherwise it does not use blocking logic and instead stores more rows in the cache.

Working with Transactions

When the Integration Service processes a Joiner transformation, it can apply the transformation logic to all the data in a transaction, to all incoming data, or to one row of data at a time, and it can drop or preserve transaction boundaries depending on the mapping configuration and the transformation scope.
Transaction scope (sorted input): use this scope to preserve transaction boundaries when you join two branches of the same pipeline. Use it with sorted data and any join type.
Row scope (unsorted input): use this scope to preserve transaction boundaries for the detail pipeline when you join two sources. The Integration Service processes the data one row at a time; it caches the master data and matches the detail rows against the cached master data.
All Input scope: use this scope when you want to drop transaction boundaries; the transformation logic is applied to all incoming data from both pipelines.

SORTED INPUT

You can configure the Joiner transformation for sorted input to improve Integration Service performance by minimizing disk input and output; you see the greatest performance improvement when you work with large data sets. To use sorted input, establish and maintain a sort order in the mapping so the Integration Service can use the sorted data when it processes the Joiner transformation:
Use sorted flat files.
Use sorted relational data: use sorted ports in the Source Qualifier transformation to sort columns from the source database, and configure the order of the sorted ports.
Use a Sorter transformation to sort relational or flat file data, and place it in the mapping before the Joiner transformation.
When you join sorted data, the ports you use in the join condition must match the sort origin and must be used in the same order as the sort columns.

Optimization

During a session, the Joiner transformation compares each row of the master source against the detail source; the fewer unique rows in the master, the fewer iterations of the join comparison occur, which speeds the join process. For an unsorted Joiner transformation, designate the source with fewer rows as the master source. For a sorted Joiner transformation, designate the source with fewer duplicate key values as the master source.

Performing a join in the database is faster than performing a join in the session. When possible, join the data in the database, for example by using the Source Qualifier transformation to perform the join or by creating a pre-session stored procedure to join the data; this is not possible when the sources are, for example, tables from two different databases or flat file systems. If you use the partitioning option in PowerCenter, you can also increase the number of partitions in a pipeline to improve session performance.

Caches

The Joiner transformation uses an index cache and a data cache. What it stores depends on how you configure the transformation:
Unsorted input: the index cache stores all master rows in the join condition with unique keys; the data cache stores all master rows.
Sorted input with different sources: the index cache stores 100 master rows in the join condition with unique index keys; the data cache stores the master rows that correspond to the rows in the index cache. If the master data contains multiple rows with the same key, the Integration Service stores more than 100 rows in the data cache.
Sorted input with the same source: the index cache stores all master or detail rows in the join condition with unique keys; the data cache stores the data for those rows. It stores detail rows if the Integration Service processes the detail pipeline faster than the master pipeline; otherwise it stores master rows. The number of rows it stores depends on the processing rates of the master and detail pipelines: if one pipeline processes its rows faster than the other, the Integration Service caches all the rows that have already been processed and keeps them cached until the other pipeline finishes processing its rows.

Differences

Difference between the Joiner transformation and the Source Qualifier transformation:
A. The Source Qualifier joins homogeneous sources: it joins two tables from the same source database while reading the data from the source, and it uses standard SQL. The Joiner joins heterogeneous sources: using the Joiner you can join data from two different sources, such as two flat files, or one relational source and one flat file (from PC 7.1.1 onwards).
B. Because the Source Qualifier performs the join in the database, it is usually faster than performing the join in the session with a Joiner transformation.

Difference between the Joiner transformation and the Lookup transformation:
A. The Joiner is active; the Lookup is passive.
B. The Joiner joins the rows of both pipelines based on the join condition and the join type, whereas a Lookup returns only the matching rows (columns) from the lookup source based on the lookup condition.

Lookup

General

Connected/Unconnected. Passive. The Lookup transformation is basically used for reference: you look up data based on a condition, for example when you want some data based on a target table. You can use multiple Lookup transformations in a mapping.

Cache

You can configure the Lookup transformation to use a dynamic or a static cache. By default, the lookup cache remains static and does not change during the session.
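Purely as an illustrative wiring sketch (the transformation names SQ_PARTS_SIZE, SQ_PARTS_COLOR, SRT_SIZE, SRT_COLOR, and JNR_PARTS are assumed, not from the original material), the sorted master outer join of the example sources could be configured as follows:

SQ_PARTS_SIZE  -> SRT_SIZE  (sort key: PART_ID1 ascending) -> JNR_PARTS, master input
SQ_PARTS_COLOR -> SRT_COLOR (sort key: PART_ID2 ascending) -> JNR_PARTS, detail input

JNR_PARTS properties: Join Type = Master Outer Join, Sorted Input = checked, join condition PART_ID1 = PART_ID2.

Because both pipelines arrive sorted on the ports used in the join condition, the Integration Service caches only about 100 master rows at a time instead of the entire master source.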

Connected vs. unconnected Lookup:
Connected: can use a dynamic or static cache; supports user-defined default values.
Unconnected: uses only a static cache; does not support user-defined default values.
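For illustration only (the lookup name LKP_GET_PRICE and the ports ITEM_NO and PRICE are assumed, not from the original material), an unconnected lookup is called from an expression with the :LKP reference qualifier and returns a single value through its return port:

PRICE_OUT (output) = IIF( ISNULL( PRICE ), :LKP.LKP_GET_PRICE( ITEM_NO ), PRICE )

A connected Lookup, by contrast, sits in the data flow and can return multiple columns for every input row.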


Object Query

2. Associate a query with a deployment group. When you create a dynamic deployment group, you can associate an object query with it.

