Case Transform

This transform allows you to specify multiple conditions and route the data to multiple targets depending upon the condition. If you observe the Case transform icon, it shows that the transform accepts one source and produces multiple targets. Depending on the options you choose, a row is routed to one or more of the branches. Let's see what options are available:

• Label: This is nothing but the name of the condition; if the case condition is true, the data is routed to that particular target.
• Expression: Defines the cases and labels for the output paths.
• Default: Available only if the "Produce default output when all expressions are false" option is enabled.
• True for one case only: If this option is enabled, a row is passed only to the first case whose expression returns true; otherwise it is passed to all cases whose expressions return true.
• Data O/P: The connection between the Case transform and the object used for a particular case must be labeled, and each output label must be used at least once.

Design Steps:
• Design a DF that extracts records from the EMP table.
• Place a Case transform onto the workspace from the local object library.
• Connect the source to the Case transform and double-click it; the Case editor window opens.
• Add the case conditions according to your requirement.

Here my requirement is to route the records based on DEPTNO and load them into the corresponding tables:
• If DEPTNO = 10, load into TGT_10
• If DEPTNO = 20, load into TGT_20
• If DEPTNO = 30, load into TGT_30
• DEPTNO = 40 goes to the default target

After defining the case conditions, come out of the editor and link the appropriate targets.
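The routing the Case transform performs can be sketched in plain Python. This is an illustration of the semantics, not DI's API; the case labels and the DEPTNO values come from the example above.

```python
def route_by_deptno(row, true_for_one_case_only=True):
    """Return the target labels a row would be routed to, Case-transform style."""
    cases = [
        ("CASE_10", lambda r: r["DEPTNO"] == 10),
        ("CASE_20", lambda r: r["DEPTNO"] == 20),
        ("CASE_30", lambda r: r["DEPTNO"] == 30),
    ]
    matches = [label for label, cond in cases if cond(row)]
    if not matches:
        # "Produce default output when all expressions are false"
        return ["DEFAULT"]
    if true_for_one_case_only:
        # Row goes only to the first case whose expression is true
        return [matches[0]]
    # Otherwise the row goes to every case whose expression is true
    return matches

rows = [{"DEPTNO": d} for d in (10, 20, 30, 40)]
targets = {}
for row in rows:
    for label in route_by_deptno(row):
        targets.setdefault(label, []).append(row)
```

After this loop, `targets` holds one bucket per label, mirroring the four target tables in the example.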

• Now check the data in each target. It is as simple as that.

Date_Generation Transform

Notes on the Date_Generation transform: This is the ultimate transform for creating time dimension tables. It generates dates incremented as you specify.

Options:
• Start date: The DI documentation says that the start date range begins at 1900.01.01, but I did a sanity test and the date range actually starts from 1752.09.14 onwards. In 11.5, when you mention a date like this and validate the DF, DI will show an error stating that the date range starts from 1900.01.01 onwards. There is a tricky way to work around this: after designing your time dimension, do not validate the DF/job; the only thing you need to do is save the job and execute it. However, this has been resolved in later versions.
• End date: The date range ends at 9999.12.31.
• Increment: We can specify the date interval for the sequence; we can increment daily, weekly, or monthly. Instead of selecting values, we can also pass variables.
• Join rank: While constructing a join, sources will be joined based on their ranks.
• Cache: The dataset will be cached in memory to be used in later transforms.

Design Steps:
• Drag the Date_Generation transform from the object library onto the workspace, connect a Query transform in the next step, and then your target object. Your design would look like this.
• Now open the Date_Generation transform and enter the values. You can also pass variables instead of selecting values; check the image for reference.

• Now in the Query transform I applied some functions like month(), quarter(), day_in_month(), year(), week_in_year(), etc. Check the image to see how I mapped them.

• Now you're done with the design part. Save it and execute it. View the data.
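The Date_Generation plus Query combination above can be sketched in plain Python: generate the date sequence, then derive the time-dimension columns. This is an illustration of the behavior, not DI's API; the monthly branch assumes a start day of 28 or less.

```python
from datetime import date, timedelta

def generate_dates(start, end, increment="daily"):
    """Yield dates from start to end, like the Date_Generation transform."""
    step = {"daily": 1, "weekly": 7}.get(increment)
    current = start
    while current <= end:
        yield current
        if step:
            current += timedelta(days=step)
        else:
            # Monthly increment; assumes start day <= 28 so every month has it
            month = current.month % 12 + 1
            year = current.year + (current.month == 12)
            current = date(year, month, current.day)

def time_dim_row(d):
    """Derived columns like the Query transform's month(), quarter(), etc."""
    return {
        "DATE": d,
        "YEAR": d.year,
        "MONTH": d.month,
        "QUARTER": (d.month - 1) // 3 + 1,
        "DAY_IN_MONTH": d.day,
        "WEEK_IN_YEAR": d.isocalendar()[1],
    }

rows = [time_dim_row(d) for d in generate_dates(date(2024, 1, 1), date(2024, 1, 7))]
```

`rows` here is one week of a time dimension; widening the start/end range gives the full table.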

Merge Transform

Merge Transform: Combines two or more schemas into a single schema; it is equivalent to the UNION ALL operator in Oracle. The input datasets must have the same structure (same number of columns, same names, same datatypes, and same data sizes).

Design Steps:
• Place your source tables on the work area.
• Drag the Merge transform from the object library and connect each source to the Merge.
• Click on the Merge transform; there is nothing to do here, just validate the window and come out of the Merge transform.
• Now connect it to the TGT table, click Validate All, save the job, and execute it.
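The UNION ALL semantics of the Merge transform can be sketched in plain Python: concatenate the inputs after checking that they share a structure, keeping duplicates. This is an illustration, not DI's API, and it only compares column names, not datatypes or sizes.

```python
def merge(*datasets):
    """Concatenate datasets UNION ALL-style after a simple schema check."""
    out = []
    schema = None
    for ds in datasets:
        for row in ds:
            if schema is None:
                schema = list(row.keys())
            elif list(row.keys()) != schema:
                raise ValueError("input datasets must have the same structure")
            out.append(row)  # duplicates are kept, like UNION ALL
    return out

merged = merge(
    [{"EMPNO": 1, "DEPTNO": 10}],
    [{"EMPNO": 2, "DEPTNO": 20}, {"EMPNO": 1, "DEPTNO": 10}],
)
```

Note that the duplicate row survives in `merged`; a plain UNION would have removed it.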


Key_Generation Transform

When creating a dimension table in a data warehouse, we generally create the table with a system-generated key to uniquely identify each row in the dimension. This key is also known as a surrogate key.

Note on Key_Generation: To generate artificial keys in DI, we can use either the Key_Generation transform or the key_generation function. It looks into the table and fetches the maximum existing key value, which is then used as the starting value. Based on this starting value, the transform/function increments the value for each row.

Options: We have three options:
• Table name: You should provide the table name along with the datastore and owner (DATASTORE.OWNER.TABLE).
• Generated key column: The new artificial keys are inserted into this column. Remember, your key column should be of a numeric datatype (INTEGER, REAL, FLOAT, DOUBLE, DECIMAL); if it is any other datatype, DI will throw an error.
• Increment value: Specify the interval for the system-generated key values; the surrogate key is incremented based on this interval. From version 11.7 onwards, we can also pass variables.

Design Steps: Here I'm populating customer information. I have a primary key called Customer_ID in both the source and target tables, but I want to maintain a SURROGATE_KEY as well.

• Have a glance at the source data; here it is. The Key_Generation transform always expects a SURROGATE_KEY column in Schema In.

• After completion of the job execution, here is the target customers_dim data with the surrogate key values.
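The key-generation behavior described above (fetch the max existing key, then increment per row) can be sketched in plain Python. This is an illustration, not DI's API; the column names follow the example.

```python
def key_generation(rows, existing_max_key=0, increment=1,
                   key_column="SURROGATE_KEY"):
    """Fill the key column, starting after the table's current max key."""
    key = existing_max_key
    for row in rows:
        key += increment
        row[key_column] = key
    return rows

# Target table already holds keys up to 100, so new rows start at 101
new_rows = key_generation(
    [{"CUSTOMER_ID": 7}, {"CUSTOMER_ID": 8}],
    existing_max_key=100,
)
```

In the real transform, `existing_max_key` comes from querying the target table named in the Table name option.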

Map_Operation Transform

Map_Operation: It allows you to change the opcodes on your data. Before discussing this, we should understand opcodes precisely. In DI we have 5 opcodes: Normal, Insert, Update, Delete, and Discard (you'll see this last option in Map_Operation only).
• Normal: Creates a new row in the target. The data coming from the source is usually flagged with the Normal opcode.
• Insert: Does the same thing; it creates a new row in the target, and the rows are flagged as 'I' for Insert.
• Update: If the rows are flagged as 'U', it overwrites an existing row in the target.
• Delete: If the rows are flagged as 'D', those rows will be deleted from the target.

• Discard: If you select this option, those rows will not be loaded into the target.

I will share the remaining opcodes in the Table_Comparison and History_Preserving transforms.

Understanding Opcodes: Here is an example. In the figure below, I'm using the normal-to-normal, normal-to-insert, normal-to-update, and normal-to-delete opcode mappings.
• In the first flow, MO_Normal, I have selected Normal as Normal and discarded all other opcodes. I have taken the Normal opcode mainly because a Query transform always takes normal rows as input and produces normal rows as output.

• This flow inserts all records coming from the source into the target.
• In the second flow, MO_Insert, I have selected Normal as Insert and discarded all other opcodes. It does the same thing: it inserts all records into the target.
• Have a glance at both datasets before loading into the target. You will see no opcode for the Normal-as-Normal rows (1st flow), but you can see the Insert opcode indicated as 'I' for Normal-as-Insert (2nd flow).
• In the third flow, I want to update a few records in the target.
o Let's say I want to update all the records whose DEPTNO = 20.

o Now, in the Map_Operation transform I have selected Normal as Update and discarded the rest.
• Check the data; you can see the updated rows flagged as 'U'.
• In the fourth flow, I want to delete some records from the target, so in the Map_Operation transform I have selected Normal as Delete and discarded the rest.
o Let's say I want to delete the rows whose DEPTNO = 30.
o In the Query transform I have filtered out the few records that I want to delete from the target.
• You can see the data after the Map_Operation, along with the Delete opcode 'D'.

• In the target dataset, the above records will be deleted. Check the target data.
• Now, in the sub-flow, I have inserted these deleted records into another table. For this, I have added one more Map_Operation and selected row type Delete as Insert.

• Now, in the last and final flow, I do not want to load the data into the target, so I have discarded all the opcodes.
• Check the data.
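The opcode remapping the flows above perform can be sketched in plain Python: each flow is a mapping from input opcode to output opcode, with unmapped opcodes discarded. This is an illustration of the semantics, not DI's API.

```python
def map_operation(rows, mapping):
    """Remap (opcode, row) pairs; opcodes absent from mapping are discarded."""
    out = []
    for opcode, row in rows:
        new_opcode = mapping.get(opcode)  # None means Discard
        if new_opcode is not None:
            out.append((new_opcode, row))
    return out

source = [("normal", {"DEPTNO": 20, "SAL": 3000})]

# Third flow: Normal as Update, rest discarded
updated = map_operation(source, {"normal": "update"})
# Last flow: all opcodes discarded, so nothing is loaded
empty = map_operation(source, {})
```

The sub-flow that re-inserts deleted rows would simply be `map_operation(deleted_rows, {"delete": "insert"})`.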

Pivot Transform

Pivot: This creates a new row for every value in the columns you specify as pivot columns. Observe the icon: it indicates that the transform converts columns to rows.

Options:
• Pivot sequence: Creates a sequence number for each row generated from a pivoted column.
• Non-pivot columns: The columns you list here are carried through to the target as they are.
• Pivot sets: For each and every pivot set, you can define a set of columns. Each set has a header column and a data column: the header column holds the names of the pivoted columns, and the data column contains the actual data from the pivoted columns.
• Pivot columns: The set of columns swivelled into rows.

Design Steps: The source table has 5 columns (Sno, Sname, Jan_sal, Feb_sal, Mar_sal). I want to convert the salary column values into rows.

• Drag the source table and target table from the datastore object library onto the workspace. Now drag the Pivot transform and place it between your source and target; connect each object as shown in the figure below.
• Have a glance at the source data.

• Double-click the Pivot transform. Check the pivot sequence name; by default it will be "PIVOT_SEQ". If you want, you can change it, or leave it as it is. I want to load Sno and Sname as they are, so I have dragged these two columns onto the non-pivot list. Now drag all the SAL columns onto the pivot list; the default PIVOT_DATA and PIVOT_HDR names will be generated.
• Save the definition. Now you can see the (SNO, SNAME, PIVOT_SEQ, PIVOT_HDR, PIVOT_DATA) columns in Schema Out.

• Come out of the Pivot transform by pressing the Back button on the standard toolbar.
• Save the dataflow, validate it, and execute the job.
• Check out the resultant data.
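The column-to-row conversion the Pivot transform performs can be sketched in plain Python. This is an illustration of the semantics, not DI's API; the column names follow the example above.

```python
def pivot(rows, non_pivot, pivot_columns,
          hdr="PIVOT_HDR", data="PIVOT_DATA", seq="PIVOT_SEQ"):
    """Emit one output row per pivot column of each input row."""
    out = []
    for row in rows:
        for i, col in enumerate(pivot_columns, start=1):
            new_row = {c: row[c] for c in non_pivot}  # carried through as-is
            new_row[seq] = i        # pivot sequence number
            new_row[hdr] = col      # header column: the pivoted column's name
            new_row[data] = row[col]  # data column: the pivoted column's value
            out.append(new_row)
    return out

source = [{"SNO": 1, "SNAME": "Ann",
           "JAN_SAL": 100, "FEB_SAL": 110, "MAR_SAL": 120}]
result = pivot(source, ["SNO", "SNAME"], ["JAN_SAL", "FEB_SAL", "MAR_SAL"])
```

One source row with three salary columns becomes three target rows, matching the Schema Out described above.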
