
The Sorter transformation is passive, but if the Distinct option is selected it becomes active, because it changes the number of records.

What happens if the filter condition evaluates to 0, 1, or NULL? A non-zero result (such as 1) passes the row; a result of 0 or NULL drops it.
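To make the 0 / 1 / NULL behavior concrete, here is a small Python sketch (not the Informatica engine itself, just a simulation of the Filter semantics described above, with NULL modeled as `None`):

```python
# Simulation of Filter transformation semantics:
# a row passes only when the condition result is non-zero and not NULL.
def filter_rows(rows, condition):
    kept = []
    for row in rows:
        result = condition(row)
        # NULL (None) and 0 both cause the row to be dropped.
        if result is not None and result != 0:
            kept.append(row)
    return kept

rows = [{"sal": 100}, {"sal": 0}, {"sal": None}]
# Condition: sal > 50; comparing NULL yields NULL, so that row is dropped.
cond = lambda r: None if r["sal"] is None else (1 if r["sal"] > 50 else 0)
print(filter_rows(rows, cond))  # only the sal=100 row survives
```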

Input to the Filter transformation should come from a single transformation only, not from two or three. Keep the Filter as close to the Source Qualifier as possible, so unwanted records are removed early in the flow. #performance tuning

If records can contain NULLs, use a Filter transformation, since flat files cannot take an Oracle condition at the source. To filter rows containing NULL values or spaces, use the ISNULL and IS_SPACES functions to test the value of the port. For example, to filter out rows that contain a NULL value in the FIRST_NAME port, use the condition: IIF(ISNULL(FIRST_NAME), FALSE, TRUE). This condition states that if the FIRST_NAME port is NULL, the return value is FALSE and the row is discarded; otherwise the row passes through to the next transformation.

At the session level, set the truncate target table option so the mapping loads fresh data each time, or run a DELETE operation in the pre-SQL.

Pre-SQL and post-SQL: source pre-SQL is executed at the source, and target pre-SQL is executed at the target.

Config object, Stop on errors: 1 → the session stops as soon as the first error occurs; 0 → it does not stop. Ideally it should be kept at 1. In some cases the client is OK with rejecting the record rather than failing the workflow; reject files contain the rejected records.

In an Aggregator transformation, for AVG or SUM of sales, the last value read for EMP_ID is returned in the EMP_ID column: the Aggregator returns the last record in source read sequence for non-aggregated ports.

If you give sorted input to the Aggregator, performance improves: the data is already grouped and sorted (e.g. by DEPT), so the Aggregator can emit groups as it reads. If the input is not sorted, it waits until the full cache is read and groups random data, which is slower. The sort keys in the Sorter and the group-by ports in the Aggregator must be the same sequence, otherwise the session fails.
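The "Aggregator returns the last record" point is easy to misread, so here is a hedged Python simulation (column names emp_id/dept/sal are made up for illustration) of an Aggregator that sums sal grouped by dept while passing emp_id through:

```python
# Simulation of Aggregator behavior: aggregate ports are computed per group,
# but a pass-through port (emp_id) keeps the LAST value read for that group.
from collections import OrderedDict

def aggregate(rows):
    groups = OrderedDict()
    for row in rows:
        g = groups.setdefault(row["dept"], {"dept": row["dept"], "sum_sal": 0})
        g["sum_sal"] += row["sal"]
        g["emp_id"] = row["emp_id"]  # overwritten on every row: last record wins
    return list(groups.values())

rows = [
    {"emp_id": 1, "dept": 10, "sal": 100},
    {"emp_id": 2, "dept": 10, "sal": 200},
]
print(aggregate(rows))
# → [{'dept': 10, 'sum_sal': 300, 'emp_id': 2}]  (emp_id 2 was read last)
```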

How to avoid duplicate records in the Aggregator: group by on all columns, so every record forms its own group and only unique records are returned. Use this technique for flat files.

When using an Update Strategy transformation, always keep Treat source rows as → Data driven.

Left outer join: retrieves all records from the left table.

For update strategy at the session level, set the property Update else insert; it will automatically deselect Update as update. If the record exists it is updated, otherwise a new record is inserted.

Update strategy at the session level → only one operation (delete, update, or insert). At the mapping level → more than one operation is possible.

Stop on error = 1 → the session stops on the error; the bad record is not written to the reject file.

Reject file row indicators:
D → valid or good data, rejected because of duplication or key violation
O → overflow of numeric data: precision error, value exceeds the declared precision
N → NULL value sent to a NOT NULL column
T → truncated string data

Static vs dynamic lookup; connected vs unconnected lookup.

Indirect method of loading flat files: keep one file structure for the source, create many files with the same structure as the source, and create one list file containing the paths of those files. In the session's mapping properties, set the source filetype to Indirect and set the source file to the list file that holds the paths of all the included files.

Use of parameters: a parameter's scope is limited to that workflow only. Format:
[Global]
$$variable=value

Target load plan: sets the priority in which pipelines execute.
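A fuller parameter-file sketch may help; all folder, workflow, and variable names below are hypothetical, and the section-header shapes follow the `[Global]` convention mentioned above:

```
[Global]
$$SRC_DIR=/data/incoming
$$LOAD_DATE=2024-01-01

[MyFolder.WF:wf_load_sales]
$$TGT_TABLE=SALES_STG
```

Values under `[Global]` are visible everywhere, while values under a workflow-scoped section apply only to that workflow, which matches the "scope will be up to that workflow only" note.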