
You can view and compare versions of objects in the Workflow Manager.
Version:
1. You cannot simultaneously view multiple versions of composite objects, such as workflows and worklets.
2. You might want to view version 5 of a workflow that originally included version 3 of a session, but version 3 of the session is purged from the repository. When you view version 5 of the workflow, version 2 of the session appears as part of the workflow.
3. You cannot view older versions of sessions if they reference deleted or invalid mappings, or if they do not have a session configuration.

Search: Tasks, links, variables, or events can be searched. The Workflow Manager saves the last 10 search strings in the list. You can run an object query to search for them.

Check in: Scheduler objects or session configuration objects are checked in from the Workflow Manager.

Check out: A check out will happen whenever the object is edited from the Version browser.

Compare:
1. You can compare objects across folders and repositories. You must open both folders to compare the objects.
2. You can compare a reusable object with a non-reusable object.
3. You can also compare two versions of the same object.

Patni Confidential
Aggregator

General

Functions

Ports

Nested Aggregate ports

Group By

Null Values

Default Values

Optimization

SORTED INPUT

Incremental Aggregation

Cache

Difference

Expression

General

Ports

Optimization

Remove Duplicate Records

Filter

General

Ports

Condition

Null Values

Default Values

Optimization

Troubleshooting

Router

General

Filter condition

Mapping

Not Passive

Difference

Joiner

General

Pipeline

Condition

Join Types

Join Flows

Default Values

Blocking the Source Pipelines

Working with Transactions

SORTED INPUT

Optimization

Caches

Differences

Lookup

General

Cache

Types

Aggregator
----------
Active and connected.

The following types of functions can be used:
MIN
MAX
AVG
COUNT
FIRST
LAST
MEDIAN
PERCENTILE
STDDEV
SUM
VARIANCE

In an Aggregator transformation, at least one port has to be selected as a group by column. By default, the aggregator will return the last value for a port (if there is more than one record for the group by column). The aggregator will also sort the data in ascending order on the group by port.

NOTE: If the primary column of the source is used in the group by port, then the aggregator will work as in a sorter transformation.

Nested aggregate ports cannot be used in an Aggregator. That means you cannot get the count(*) in one port and use this value in another Aggregator port. This will invalidate the mapping.

You can include multiple single-level or multiple nested functions in different output ports in an Aggregator transformation. However, you cannot include both single-level and nested functions in an Aggregator transformation. Therefore, if an Aggregator transformation contains a single-level function in any output port, you cannot use a nested function in any other port in that transformation. When you include single-level and nested functions in the same Aggregator transformation, the Designer marks the mapping or mapplet invalid. If you need to create both single-level and nested functions, create separate Aggregator transformations.

Group By: Indicates how to create groups. The port can be any input, input/output, output, or variable port. When grouping data, the Aggregator transformation outputs the last row of each group unless otherwise specified. Along with the aggregate function, you can use other row-level functions such as IIF, DECODE, etc.

Conditional clauses: Use conditional clauses in the aggregate expression to reduce the number of rows used in the aggregation. The conditional clause can be any clause that evaluates to TRUE or FALSE.

SUM( COMMISSION, COMMISSION > QUOTA )

Non-aggregate functions: You can also use non-aggregate functions in the aggregate expression. The following expression returns the highest number of items sold for each item (grouped by item). If no items were sold, the expression returns 0.

IIF( MAX( QUANTITY ) > 0, MAX( QUANTITY ), 0 )

Null values in aggregate functions: When you configure the Integration Service, you can choose how you want the Integration Service to handle null values in aggregate functions. You can choose to treat null values in aggregate functions as NULL or zero. By default, the Integration Service treats null values as NULL in aggregate functions. Use default values in the group by port to replace null input values. This allows the Integration Service to include null item groups in the aggregation.
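The grouping behavior above (last value wins for non-aggregated ports, conditional clauses trim the rows that feed an aggregate) can be mimicked outside Informatica. This is a minimal Python sketch, not Informatica code; the dept/rep/commission/quota names are invented for illustration:

```python
def aggregate(rows, group_by, value):
    """Mimic the Aggregator: one output row per group.
    SUM(COMMISSION, COMMISSION > QUOTA) analog: only rows passing the
    condition contribute to the sum; a non-aggregated port returns the
    last value seen in the group (the Aggregator default)."""
    groups = {}
    for row in rows:
        g = groups.setdefault(row[group_by], {"sum": 0, "last": None})
        if row[value] > row["quota"]:      # conditional clause
            g["sum"] += row[value]
        g["last"] = row["rep"]             # last value for a non-group port
    return groups

rows = [
    {"dept": "A", "rep": "ann", "commission": 50, "quota": 40},
    {"dept": "A", "rep": "bob", "commission": 30, "quota": 40},
    {"dept": "B", "rep": "cal", "commission": 90, "quota": 40},
]
result = aggregate(rows, "dept", "commission")
```

Only ann's commission exceeds her quota in department A, so the conditional sum for A is 50, while the "last" port reflects the final row of the group (bob).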

The Aggregator has a property "SORTED INPUT". If you check this property, the aggregator assumes that data is coming in sorted order (on the group by ports). If it is not, the session will fail at run time. Sorted input improves aggregator performance.

Do not use sorted input if either of the following conditions is true:
1. The aggregate expression uses nested aggregate functions.
2. The session uses incremental aggregation.
If you use sorted input and do not sort data correctly, the session fails.

Note: The Integration Service uses memory to process an Aggregator transformation with sorted ports. It does not use cache memory. You do not need to configure cache memory for Aggregator transformations that use sorted ports.

Incremental Aggregation: After you create a session that includes an Aggregator transformation, you can enable the session option Incremental Aggregation. When the Integration Service performs incremental aggregation, it passes new source data through the mapping and uses historical cache data to perform the new aggregation calculations incrementally.

Cache: The Integration Service stores data in the aggregate cache until it completes the aggregate calculations. It stores group values in an index cache and row data in the data cache. When you run a session that uses an Aggregator transformation, the Integration Service creates index and data caches in memory to process the transformation. If the Integration Service requires more space, it stores overflow values in cache files.

Difference between the Informatica Aggregator and Oracle GROUP BY:
1. The Informatica Aggregator sorts the data on the group by ports, whereas Oracle does not sort the data.
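To see why sorted input is cheap, note that with data sorted on the group by ports the aggregator can emit a finished group the moment the key changes, instead of caching every group until end of input. This is a rough Python analogy with invented dept/sales fields, not Informatica internals:

```python
def sorted_aggregate(rows, key, value):
    """With sorted input, emit each group as soon as the group-by key
    changes, so at most one group is ever held in memory."""
    current_key, total = None, 0
    for row in rows:
        if row[key] != current_key:
            if current_key is not None:
                yield current_key, total   # group complete: emit it
            current_key, total = row[key], 0
        total += row[value]
    if current_key is not None:
        yield current_key, total           # flush the final group

rows = [{"dept": "A", "sales": 10}, {"dept": "A", "sales": 5},
        {"dept": "B", "sales": 7}]
totals = list(sorted_aggregate(rows, "dept", "sales"))
```

If the input were not actually sorted, a group key could reappear later and be emitted twice, which is the analog of the session failing on incorrectly sorted data.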
Limit the number of connected input/output or output ports to reduce the amount of data the Aggregator transformation stores in the data cache.

Expression
----------
Passive and connected.
The Expression transformation is used:
1. To perform non-aggregate calculations on a row-by-row basis.
2. To test conditional statements.
Multiple expressions can be created in one Expression transformation.

Port names used as part of an expression in an Expression transformation follow stricter rules than port names in other types of transformations:
A port name must begin with a single- or double-byte letter or single- or double-byte underscore (_).
It can contain any of the following single- or double-byte characters: a letter, number, underscore (_), $, #, or @.

You can enter multiple expressions in a single Expression transformation. As long as you enter only one expression for each output port, you can create any number of output ports in the transformation. In this way, use one Expression transformation rather than creating separate transformations for each calculation that requires the same set of data.
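The port-naming rule above can be approximated with a regular expression. This is an ASCII-only sketch; the actual rule also permits double-byte characters, which this pattern does not cover:

```python
import re

# Must start with a letter or underscore; subsequent characters may be
# letters, digits, underscore, $, #, or @ (ASCII approximation only).
PORT_NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9_$#@]*$")

valid = bool(PORT_NAME.match("TOTAL_$COST"))      # starts with a letter
invalid = bool(PORT_NAME.match("1st_port"))       # starts with a digit
```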

In the Expression transformation, use the following ports, evaluated in this sequence only. First create two extra ports. Suppose you have to check EMP_NO, which may be repeated:
1) EMP2 = EMP1
2) EMP_NO is your input port
3) EMP1 = EMP_NO
4) CHK = IIF(EMP2 = EMP_NO, 1, 0)
Now forward the row only if CHK is 0.

Filter
------
Active and connected.
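The four-port trick works because ports are evaluated top to bottom, so EMP2 still holds the previous row's value when the comparison runs. A Python sketch of the same idea (it assumes the input is sorted so duplicates are adjacent):

```python
def flag_duplicates(emp_nos):
    """Port-order trick from the text: EMP2 takes EMP1's value *before*
    EMP1 is refreshed with the current EMP_NO, so EMP2 always holds the
    previous row's value. CHK = 1 marks a repeat of the previous row."""
    emp1 = emp2 = None
    flags = []
    for emp_no in emp_nos:               # one row at a time, like an Expression
        emp2 = emp1                      # 1) EMP2 = EMP1 (previous row's value)
        emp1 = emp_no                    # 3) EMP1 = EMP_NO (current row's value)
        chk = 1 if emp2 == emp_no else 0 # 4) CHK = IIF(EMP2 = EMP_NO, 1, 0)
        flags.append(chk)
    return flags

# A downstream Filter would forward only the rows flagged 0.
flags = flag_duplicates([101, 101, 102, 103, 103])
```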

All ports in a Filter transformation are input/output, and only rows that meet the condition pass through the
Filter transformation.
You cannot concatenate ports from more than one transformation into the Filter transformation. The input
ports for the filter must come from a single transformation.
A filter condition returns TRUE or FALSE for each row that passes through the transformation, depending
on whether a row meets the specified condition. Only rows that return TRUE pass through this
transformation. Discarded rows do not appear in the session log or reject files.

To filter out rows containing null values or spaces, use the ISNULL and IS_SPACES functions to test the
value of the port. For example, if you want to filter out rows that contain NULLs in the FIRST_NAME port,
use the following condition:

IIF(ISNULL(FIRST_NAME), FALSE, TRUE)

This condition states that if the FIRST_NAME port is NULL, the return value is FALSE and the row should be discarded.
The Filter transformation does not allow setting output default values.

To maximize session performance, include the Filter transformation as close to the sources in the mapping
as possible. Rather than passing rows you plan to discard through the mapping, you then filter out
unwanted data early in the flow of data from sources to targets.
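The null/spaces filtering described above can be sketched as an ordinary predicate. This is a Python analogy of IIF(ISNULL(...)) plus an IS_SPACES-style check; the exact IS_SPACES behavior on empty strings is not modeled:

```python
def keep_row(row):
    """Analog of IIF(ISNULL(FIRST_NAME), FALSE, TRUE): drop rows whose
    FIRST_NAME is NULL (None here) or consists only of spaces."""
    name = row.get("FIRST_NAME")
    if name is None:                          # ISNULL analog
        return False
    if name != "" and name.strip() == "":     # IS_SPACES analog
        return False
    return True

rows = [{"FIRST_NAME": "Ann"}, {"FIRST_NAME": None}, {"FIRST_NAME": "   "}]
kept = [r for r in rows if keep_row(r)]
```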

Troubleshooting:
Case sensitivity. The filter condition is case sensitive, and queries in some databases do not take this into account.
Appended spaces. If a field contains additional spaces, the filter condition needs to check for additional spaces for the length of the field. Use the RTRIM function to remove additional spaces.

The Source Qualifier transformation provides an alternate way to filter rows. Rather than filtering rows from within a mapping, the Source Qualifier transformation filters rows when read from a source. The main difference is that the source qualifier limits the row set extracted from a source, while the Filter transformation limits the row set sent to a target. Since a source qualifier reduces the number of rows used throughout the mapping, it provides better performance.

However, the Source Qualifier transformation only lets you filter rows from relational sources, while the Filter transformation filters rows from any type of source. Also note that, since it runs in the database, you must make sure that the filter condition in the Source Qualifier transformation only uses standard SQL. The Filter transformation can define a condition using any statement or transformation function that returns either a TRUE or FALSE value.

Zero (0) is the equivalent of FALSE, and any non-zero value is the equivalent of TRUE.

Router
------
The Router is a one-input-group, multiple-output-group transformation. There are two types of output groups: user-defined and default. You specify the test condition for each user-defined group you create. You cannot modify or delete default groups.

The Integration Service determines the order of evaluation for each condition based on the order of the connected output groups. The Integration Service processes user-defined groups that are connected to a transformation or a target in a mapping. The Integration Service only processes user-defined groups that are not connected in a mapping if the default group is connected to a transformation or a target. If all of the conditions evaluate to FALSE, the Integration Service passes the row to the default group. If you want the Integration Service to drop all rows in the default group, do not connect it to a transformation or a target in a mapping.

You can connect one group to one transformation or target.
You can connect one output port in a group to multiple transformations or targets.
You can connect multiple output ports in one group to multiple transformations or targets.
You cannot connect more than one group to one target or to a single-input-group transformation.
You can connect more than one group to a multiple-input-group transformation, except for Joiner transformations, when you connect each output group to a different input group.

The Router is not a passive transformation, though some may argue that it is passive because, if only the default group is used, there is no change in the number of rows.

The Router is an active transformation. Most of the time we use a router only when we are going to apply multiple conditions to a record. It is absolutely possible for a record to satisfy more than one condition, thus making the Router active: if a record satisfies more than one group, Informatica will pass this row multiple times. The Router is the only active transformation where the number of output records from the transformation is greater than or equal to the number of input records given to the transformation.

Router and Filter transformations are similar in behavior, but the Router has some additional features. Informatica added the Router transformation from version 5.x onward. Let's first talk about the Filter transformation:
----------------------
1. The Filter transformation is an active transformation.
2. The Filter transformation allows you to filter rows in a mapping.
3. All the rows which meet the filter condition pass through the transformation.
4. It has only two types of ports, input and output.
5. Use the Filter transformation as close to the source as possible for better performance. This makes sense, because the number of rows that go through the other transformations will be smaller.
6. You cannot merge rows from different transformations in a Filter transformation.
7. An expression can be used in the filter condition.
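The Filter/Router contrast can be summarized in code: a router tests every group condition against every row, so one row can land in several groups, and unmatched rows fall through to the default group. A Python sketch with invented group names and conditions, not Informatica internals:

```python
def route(rows, groups):
    """Router analogy: each row is tested against every user-defined group
    condition; a row that satisfies several conditions is emitted to each
    of those groups (so output rows >= input rows). Rows matching nothing
    go to the default group."""
    out = {name: [] for name, _ in groups}
    out["DEFAULT"] = []
    for row in rows:
        matched = False
        for name, cond in groups:        # evaluated in connected-group order
            if cond(row):
                out[name].append(row)
                matched = True
        if not matched:
            out["DEFAULT"].append(row)
    return out

groups = [("HIGH", lambda r: r["amt"] > 100),
          ("EVEN", lambda r: r["amt"] % 2 == 0)]
routed = route([{"amt": 200}, {"amt": 3}], groups)
```

The row with amt 200 satisfies both conditions and is emitted twice, which is exactly why the Router counts as active.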

Joiner
------
Active and connected.

Use the Joiner transformation to join source data from two related heterogeneous sources (different types of sources) residing in different locations or file systems. You can also join data from the same source.

The two input pipelines include a master pipeline and a detail pipeline. The master pipeline ends at the Joiner transformation, while the detail pipeline continues to the target. By default, when you add ports to a Joiner transformation, the ports from the first source pipeline display as detail sources. Adding the ports from the second source pipeline sets them as master sources.

Condition
If you use multiple ports in the join condition, the Integration Service compares the ports in the order you specify.
The Designer validates datatypes in a condition. Both ports in a condition must have the same datatype. If you need to use two ports in the condition with non-matching datatypes, convert the datatypes so they match.
You cannot use a Joiner transformation when either input pipeline contains an Update Strategy transformation. You cannot use a Joiner transformation if you connect a Sequence Generator transformation directly before the Joiner transformation.

If you join Char and Varchar datatypes, the Integration Service counts any spaces that pad Char values as part of the string:
Char(40) = "abcd" plus 36 blank spaces
Varchar(40) = "abcd"
If a Char value is "abcd" padded with 36 blank spaces, the Integration Service does not consider the two fields a match and does not join them, because the Char field contains trailing spaces.

The Joiner transformation does not match null values. For example, if both EMP_ID1 and EMP_ID2 contain a row with a null value, the Integration Service does not consider them a match and does not join the two rows. To join rows with null values, replace the null input with default values, and then join on the default values.

Default Values
If a result set includes fields that do not contain data in either of the sources, the Joiner transformation populates the empty fields with null values. If you know that a field will return a NULL and you do not want to insert NULLs in the target, you can set a default value on the Ports tab for the corresponding port.

Join Types
You define the join type on the Properties tab in the transformation. The Joiner transformation supports the following types of joins:
Normal
Master Outer
Detail Outer
Full Outer
Note: A normal or master outer join performs faster than a full outer or detail outer join.

Normal Join
With a normal join, the Integration Service discards all rows of data from the master and detail source that do not match, based on the condition. For example, you might have two sources of data for auto parts called PARTS_SIZE and PARTS_COLOR with the following data:

PARTS_SIZE (master source)
PART_ID1  DESCRIPTION  SIZE
1         Seat Cover   Large
2         Ash Tray     Small
3         Floor Mat    Medium

PARTS_COLOR (detail source)
PART_ID2  DESCRIPTION  COLOR
1         Seat Cover   Blue
3         Floor Mat    Black
4         Fuzzy Dice   Yellow

To join the two tables by matching the PART_IDs in both sources, you set the condition as follows:
PART_ID1 = PART_ID2
The normal join result set includes the following data:
PART_ID  DESCRIPTION  SIZE    COLOR
1        Seat Cover   Large   Blue
3        Floor Mat    Medium  Black

Master Outer Join
A master outer join keeps all rows of data from the detail source and the matching rows from the master source. It discards the unmatched rows from the master source. With the same sources and condition, the result set includes the following data:
PART_ID  DESCRIPTION  SIZE    COLOR
1        Seat Cover   Large   Blue
3        Floor Mat    Medium  Black
4        Fuzzy Dice   NULL    Yellow

Detail Outer Join
A detail outer join keeps all rows of data from the master source and the matching rows from the detail source. It discards the unmatched rows from the detail source.

Full Outer Join
A full outer join keeps all rows of data from both the master and detail sources.

Joining Data from a Single Source
You can join data from the same source in the following ways:
1. Join two branches of the same pipeline.
2. Join two instances of the same source.

Joining Two Branches of the Same Pipeline
When you join data from the same source, you can create two branches of the pipeline. When you branch a pipeline, you must add a transformation between the source qualifier and the Joiner transformation in at least one branch of the pipeline. You must join sorted data and configure the Joiner transformation for sorted input.

For example, you have a source with the following ports:
Employee
Department
Total Sales
In the target, you want to view the employees who generated sales that were greater than the average sales for their departments. To do this, you create a mapping with the following transformations:
Sorter transformation. Sorts the data.
Sorted Aggregator transformation. Averages the sales data, grouped by department. When you perform this aggregation, you lose the data for individual employees. To maintain employee data, you must pass a branch of the pipeline to the Aggregator transformation and pass a branch with the same data to the Joiner transformation to maintain the original data.
Sorted Joiner transformation. Uses a sorted Joiner transformation to join the sorted aggregated data with the original data.
Filter transformation. Compares the average sales data against the sales data for each employee and filters out employees with less than above-average sales.

Joining two branches might impact performance if the Joiner transformation receives data from one branch much later than the other branch. The Joiner transformation caches all the data from the first branch and writes the cache to disk if the cache fills. The Joiner transformation must then read the data from disk when it receives the data from the second branch. This can slow processing.

Joining Two Instances of the Same Source
You can also join same-source data by creating a second instance of the source. After you create the second source instance, you can join the pipelines from the two source instances. If you want to join unsorted data, you must create two instances of the same source and join the pipelines.
Note: When you join data using this method, the Integration Service reads the source data for each source instance, so performance can be slower than joining two branches of a pipeline.

Guidelines
Use the following guidelines when deciding whether to join branches of a pipeline or join two instances of a source:
1. Join two branches of a pipeline when you have a large source or if you can read the source data only once. For example, you can read source data from a message queue only once.
2. Join two branches of a pipeline when you use sorted data. If the source data is unsorted and you use a Sorter transformation to sort the data, branch the pipeline after you sort the data.
3. Join two instances of a source when you need to join unsorted data.
4. Join two instances of a source when you need to add a blocking transformation to the pipeline between the source and the Joiner transformation.

Blocking the Source Pipelines
When you run a session with a Joiner transformation, the Integration Service blocks and unblocks the source data, based on the mapping configuration and whether you configure the Joiner transformation for sorted input.

Unsorted Joiner transformation: the Integration Service reads all master rows before it reads the detail rows. To ensure it reads all master rows first, the Integration Service blocks the detail source while it caches rows from the master source. Once it reads and caches all master rows, it unblocks the detail source and reads the detail rows.

Sorted Joiner transformation: the Integration Service blocks data based on the mapping configuration. Blocking logic is possible if master and detail input to the Joiner transformation originate from different sources. The Integration Service uses blocking logic to process the Joiner transformation if it can do so without blocking all sources in a target load order group simultaneously. Otherwise, it does not use blocking logic; instead, it stores more rows in the cache.

Working with Transactions
When the Integration Service processes a Joiner transformation, it can apply transformation logic to all data in a transaction, all incoming data, or one row of data at a time. The Integration Service can drop or preserve transaction boundaries depending on the mapping configuration and the transformation scope.
You can preserve transaction boundaries when you join the following sources:
1. You join two branches of the same source pipeline. Use the Transaction transformation scope to preserve transaction boundaries.
2. You join two sources and want to preserve the transaction boundaries for the detail source. Use the Row transformation scope to preserve transaction boundaries in the detail pipeline.
You can drop transaction boundaries when you join two sources or two branches and you want to drop transaction boundaries. Use the All Input transformation scope to apply the transformation logic to all incoming data and drop transaction boundaries for both pipelines.
Transformation scope summary:
Row -> unsorted Joiner transformation. The Integration Service processes data one row at a time and preserves incoming transaction boundaries in the detail pipeline.
Transaction -> sorted Joiner transformation, when the master and detail pipelines originate from the same transaction control point, such as two branches of the same pipeline or two output groups of one transaction generator.
All Input -> sorted or unsorted data from different sources. The Integration Service drops incoming transaction boundaries and outputs all rows as one open transaction.

SORTED INPUT
You can configure the Joiner transformation for sorted input to improve Integration Service performance by minimizing disk input and output. You see the greatest performance improvement when you work with large data sets.
To configure a mapping for sorted input, configure the sort order using one of the following methods:
1. Use sorted flat files. When the flat files contain sorted data, verify that the order of the sort columns matches in each source file.
2. Use sorted relational data. Use sorted ports in the Source Qualifier transformation to sort columns from the source database. Configure the order of the sorted ports the same in each Source Qualifier transformation.
3. Use Sorter transformations. Use a Sorter transformation to sort relational or flat file data. Place a Sorter transformation in both the master and detail pipelines. Configure each Sorter transformation to use the same order of the sort key ports and the same sort direction.

When you join sorted data, the ports in the first join condition must match the first ports at the sort origin. When you use multiple join conditions, the order of the conditions must match the order of the ports at the sort origin, and you must not skip any ports. The number of sorted ports at the sort origin can be greater than or equal to the number of ports in the join condition.
For example, you configure Sorter transformations in the master and detail pipelines with the following sorted ports:
ITEM_NO
ITEM_NAME
PRICE
Use ITEM_NO in the first join condition. If you add a second join condition, it must use ITEM_NAME, and if you want to use PRICE in a join condition, you must also use ITEM_NAME in the second join condition.

Optimization
1. For optimal performance and disk storage, designate the source with fewer rows as the master source. During a session, the Joiner transformation compares each row of the master source against the detail source.
2. For a sorted Joiner transformation, designate the source with fewer duplicate key values as the master source.
3. Performing a join in a database is faster than performing a join in the session. In some cases this is not possible, such as joining tables from two different databases or flat file systems. If you want to perform a join in a database, use one of the following options:
   Create a pre-session stored procedure to join the tables in a database.
   Use the Source Qualifier transformation to perform the join.
4. With the partitioning option in PowerCenter, you can increase the number of partitions in a pipeline to improve session performance.

Caches
When you run a session with a Joiner transformation, the Integration Service creates index and data caches:
Unsorted Joiner transformation. The Integration Service reads all master rows before it reads the detail rows, so it stores all master rows in the cache.
i) Data cache: stores all master rows.
ii) Index cache: stores all master rows in the join condition with unique index keys.
Sorted Joiner transformation with master and detail from different sources. The Integration Service caches rows for one hundred unique keys at a time.
i) Data cache: stores master rows that correspond to the rows stored in the index cache. If the master data contains multiple rows with the same key, the Integration Service stores more than 100 rows in the data cache.
ii) Index cache: stores 100 master rows in the join condition with unique index keys.
If the master source contains many rows with the same key value, the Integration Service must cache more rows, and performance can be slowed.
Sorted Joiner transformation with master and detail from the same source. If one pipeline processes its rows faster than the other, the Integration Service caches all rows that have already been processed and keeps them cached until the other pipeline finishes processing its rows.
i) Data cache: stores data for the rows that correspond to the keys stored in the index cache. If the index cache stores keys for the master pipeline, the data cache stores the data for the master pipeline; if it stores keys for the detail pipeline, the data cache stores the data for the detail pipeline.
ii) Index cache: stores all master or detail rows in the join condition with unique keys.

Differences
Difference between the Joiner transformation and a Source Qualifier join:
A. A Source Qualifier joins two tables from the same (homogeneous) relational source, whereas the Joiner transformation joins heterogeneous sources, such as two flat files, or one relational source and one flat file (from PowerCenter 7.1.1 onwards).
B. A Source Qualifier is used for reading data from the database, whereas the Joiner transformation is used for joining two data pipelines.
Difference between the Joiner transformation and the Lookup transformation:
A. The Joiner is active and the Lookup is passive.

Lookup
------
Passive, and connected or unconnected.
The Lookup transformation is basically used for reference, when you want some data based on the target.
You can use multiple Lookup transformations in a mapping.
Sometimes you can improve session performance by caching the lookup table. If you cache the lookup, you can choose to use a dynamic or static cache. By default, the lookup cache remains static and does not change during the session.
An unconnected Lookup:
Uses only a static cache.
Does not support user-defined default values.
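The four join types can be checked against the PARTS example above. This is a Python sketch, not Informatica internals; it caches the master rows in a dict the way an unsorted Joiner caches the master source, and represents NULL as None:

```python
def join(master, detail, key, how="normal"):
    """Sketch of the four Joiner join types. 'normal' keeps only matches;
    'master_outer' keeps all detail rows; 'detail_outer' keeps all master
    rows; 'full_outer' keeps everything. Unmatched columns become None."""
    index = {m[key]: m for m in master}   # unsorted Joiner caches the master
    used, out = set(), []
    for d in detail:                      # then streams the detail rows
        m = index.get(d[key])
        if m:
            used.add(d[key])
            out.append({**m, **d})
        elif how in ("master_outer", "full_outer"):
            out.append({"SIZE": None, **d})          # master side is NULL
    if how in ("detail_outer", "full_outer"):
        out += [{**m, "COLOR": None}                 # detail side is NULL
                for k, m in index.items() if k not in used]
    return out

master = [{"PART_ID": 1, "SIZE": "Large"}, {"PART_ID": 2, "SIZE": "Small"},
          {"PART_ID": 3, "SIZE": "Medium"}]
detail = [{"PART_ID": 1, "COLOR": "Blue"}, {"PART_ID": 3, "COLOR": "Black"},
          {"PART_ID": 4, "COLOR": "Yellow"}]
```

On this data the normal join returns 2 rows, the master outer join adds unmatched detail row 4 (Fuzzy Dice with a NULL size), the detail outer join adds unmatched master row 2, and the full outer join returns all 4.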

Object Query
2. Associate a query with a deployment group. When you create a dynamic deployment group, you can associate an object query with it.
