
Module 11 – Basics of Datastage

Module Overview
This module introduces IBM IIS and DataStage.

Module Objective

At the end of this module, you will be able:

 To understand what IBM IIS and DataStage are.
 To understand the basic concepts of DataStage.
 To understand transformation techniques.
 To understand the various stages of DataStage.

Introduction to IBM IIS

IBM InfoSphere Information Server is also known as IBM IIS. It provides a single platform for data integration
and governance. InfoSphere Information Server is a leading data integration platform that helps you
understand the meaning, structure, and content of information across a wide variety of sources.

InfoSphere Information Server provides massively parallel processing (MPP) capabilities for a highly scalable and
flexible integration platform that handles all data volumes, big and small.

IBM InfoSphere Information Server includes the following core capabilities:
Information Governance: Improve visibility and information governance by enabling complete, authoritative
views of information with proof of lineage and quality. These views can be made widely available and reusable
as shared services, while the rules inherent in them are maintained centrally.
Data Integration: Collect, transform, and distribute large volumes of data. It has several built-in
transformation functions that reduce development time, improve scalability, and provide for flexible design.
Deliver data in real time to business applications through bulk data delivery (ETL), virtual data delivery
(federated), or incremental data delivery (CDC).
Data Quality: Standardize, cleanse, and validate information in batch processing and real time. Load cleansed
information into analytical views to monitor and maintain data quality. Reuse these views throughout your
enterprise to establish data quality metrics that align with business objectives, enabling your organization to
quickly uncover and fix data quality issues.
Link related records across systems to ensure consistency and quality of your information. Consolidate
disparate data into a single, reliable record to ensure that the best data survives across multiple sources.

IBM InfoSphere Information Server includes the following products:


 InfoSphere DataStage: InfoSphere DataStage is a data integration tool that enables users to move and
transform data between operational, transactional, and analytical target systems.

 InfoSphere QualityStage: InfoSphere QualityStage provides capabilities to create and maintain an
accurate view of data entities such as customer, location, vendors, and products throughout your
organization.
 InfoSphere Data Click: InfoSphere Data Click is a web-based tool that you can use to load data
between on-premises and off-premises data sources, including databases and cloud storage platforms
such as Amazon S3.
 InfoSphere Information Analyzer: InfoSphere Information Analyzer provides capabilities to profile and
analyze data to deliver trusted information to your organization.
 InfoSphere FastTrack: InfoSphere FastTrack provides capabilities to automate the workflow of your
data integration project. Users can track and automate multiple data integration tasks, shortening the
time between developing business requirements and implementing a solution.
 InfoSphere Information Governance Catalog: IBM InfoSphere Information Governance Catalog is a
web-based tool that provides capabilities to help you integrate, understand, and govern your
information.
 IBM InfoSphere Information Governance Dashboard: Use IBM InfoSphere Information Governance
Dashboard to measure the effectiveness of information governance initiatives by querying and
visualizing business, technical, and operational metadata from IBM InfoSphere Information Server
products.
 InfoSphere Information Services Director: InfoSphere Information Services Director provides an
integrated environment that enables users to rapidly deploy InfoSphere Information Server logic as
services.

Introduction to Datastage

InfoSphere DataStage is a data integration tool. It provides a graphical framework for developing the jobs that
move data from source systems to target systems. The transformed data can be delivered to data warehouses,
data marts, and operational data stores, real-time web services and messaging systems, and other enterprise
applications. InfoSphere DataStage supports extract, transform, and load (ETL) and extract, load, and
transform (ELT) patterns.
DataStage has the following capabilities:
 Leverage direct connectivity to enterprise applications as sources or targets
 It can integrate data from the widest range of enterprise and external data sources
 Implements data validation rules
 It is useful in processing and transforming large amounts of data
 It uses scalable parallel processing approach
 It can handle complex transformations and manage multiple integration processes
 Leverage metadata for analysis and maintenance

 Operates in batch, real time, or as a Web service
 It includes additional components and stages that enable integration between InfoSphere Information
Server and Apache Hadoop.

DataStage Architecture
DataStage follows a client-server architecture. Its components are divided into two groups:
 Client Components:
o Datastage Administrator
o Datastage Designer
o Datastage Manager
o Datastage Director
 Server Components:
o Repository
o Datastage Server
o Datastage Package Installer

DataStage Administrator
It is used for administration tasks. This component of DataStage provides a user interface for administering
projects. This includes setting up DataStage users and their privileges, setting up purging criteria, and creating
and moving projects. It also manages global settings and maintains interactions with various systems.
Datastage Designer
Designer provides a user-friendly graphical interface that is used to create DataStage applications or jobs. Each
job explicitly specifies the source of data, the required transforms, and the destination of data. These jobs
are then compiled to form executable programs that are scheduled by the Director and run by the Server.
Datastage Manager
It is the main interface to the DataStage Repository. It is used for the storage and management of reusable
metadata. Through the DataStage Manager, one can view and edit the contents of the Repository. It displays
the table and file layouts, jobs, and transform routines that are defined in the project.

Datastage Director
DataStage Director provides an interface which schedules executable programs formed by the compilation of
jobs. It is used to validate, schedule, execute and monitor DataStage server jobs and parallel jobs.

Repository
The Repository stores all the information required for building and running an ETL job.
Datastage Server
The DataStage Server runs jobs that extract, transform, and load data into the warehouse.
Datastage Package Installer
The DataStage Package Installer installs packaged projects/jobs and plug-ins.

Datastage Runtime Architecture & Various Stages

What are Stages?
IBM Infosphere Datastage job consists of stages linked together which describe the flow of data from a data
source to a data target.
A stage usually has -

 At least one data input or one data output.


 Some stages can accept more than one data input, and output to more than one stage.
 A set of predefined and editable properties. These properties are viewed or edited using stage editors.
Different types of jobs have different stage types. There are mainly three types of stages in IBM InfoSphere
DataStage jobs:

 Sequence Job Stages


 Parallel Job Stages
 Server Job Stages

Sequence Job Stages

Wait for file Stage


Waits for a specific file to appear or disappear and launches the processing.
The Wait for File stage contains the following fields:

 Filename: The full pathname of the file that the activity is to wait for.
 Wait for file to appear: Select this option if the activity is to wait for the specified file to appear.
 Wait for file to disappear: Select this option if the activity is to wait for the specified file to disappear.
 Timeout Length (hh:mm:ss): The amount of time to wait for the file to appear or disappear before the
activity times out and completes.
 Do not timeout: Select this option to specify that the activity should not timeout, that is, it will wait
for the file forever.
 Do not checkpoint run: Select this option to specify that checkpoint information for this particular
wait for file operation will not be recorded. This means that, if a job later in the sequence fails, and the
sequence is restarted, this wait for file operation will be re-executed regardless of the fact that it was
executed successfully before. This option is only available if the sequence as a whole is checkpointed.
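
The behaviour described by these fields can be pictured with a small polling loop. The sketch below is illustrative Python only, not DataStage code; the file path, timeout, and polling interval are assumptions made for the example.

import os
import time

def wait_for_file(path, appear=True, timeout_seconds=3600, poll_interval=5):
    """Illustrative equivalent of the Wait for File activity: returns True
    if the file appears (or disappears) within the timeout, else False."""
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        if os.path.exists(path) == appear:   # condition satisfied
            return True
        time.sleep(poll_interval)            # keep polling until the deadline
    return False

# Hypothetical usage: wait up to one hour for a trigger file to appear.
if wait_for_file("/data/landing/orders.trg", appear=True, timeout_seconds=3600):
    print("File arrived - downstream processing can start")
else:
    print("Timed out waiting for the file")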

Sequencer Stage
Used for synchronization of a control flow of multiple activities in a job sequence.
The Sequencer stage contains the following field:

 Mode: Choose All or Any to select the mode of operation. In All mode all of the inputs to the
sequencer must be TRUE for any of the sequencer outputs to fire. In Any mode, output triggers can be
fired if any of the sequencer inputs are TRUE.
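
The All/Any semantics of the Sequencer can be pictured as a simple boolean rule over its input triggers; the sketch below is only an illustration of that rule, not DataStage code.

def sequencer_fires(input_triggers, mode="All"):
    """Return True if the sequencer's output triggers should fire.
    In All mode every input must be TRUE; in Any mode one is enough."""
    return all(input_triggers) if mode == "All" else any(input_triggers)

print(sequencer_fires([True, True, False], mode="All"))   # False
print(sequencer_fires([True, True, False], mode="Any"))   # True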
Job Stage
Specifies a Datastage server or parallel job to execute.
The Job Activity stage contains the following fields:
 Job name: Use this field to specify the name of the job that the activity runs.
 Invocation Id Expression: Enter a name for the invocation or a job parameter that supplies the
instance name at run time. A job parameter name must be delimited by hashes (#).
 Execution Action: Use this option to specify what action the activity takes when the job runs. Choose
one of the following options from the list:
o Run (the default)
o Reset if required then run
o Validate only
o Reset only
 Do not checkpoint job run: Select this option to specify that checkpoint information will not be
recorded for this job. This option specifies that if a job later in the sequence fails, and the sequence is
restarted, this job will run again, regardless of whether it finished successfully in the original run. This
option is only available if checkpoint information is recorded for the entire sequence job.
 Parameters: Use this grid to provide values for any parameters that the job requires. The grid displays
all parameters that are expected by the job.

Terminator Stage
Permits shutting down the whole sequence once a certain situation occurs.
The Terminator stage contains the following controls and fields:
 Send STOP requests to all running jobs: Select this option to have the Terminator stage send STOP
requests to all the jobs in the sequence that are still running. You can optionally specify that the
Terminator stage should wait for all these jobs to stop before finishing itself by also selecting the "and wait
for all jobs to finish" option.
 Abort without sending STOP requests: Select this option to have the Terminator abort the job
sequence without sending any STOP requests to any running jobs.
 Final message text: If you specify some final message text, this text will be used as the text for the
sequence abort message (this is in addition to the logging text on the General page, which is output
when the activity starts). Do not enclose the message in inverted commas unless you want them to be
part of the message.

Notification Stage
Used for sending emails to user defined recipients from within Datastage.
The Notification stage includes the following fields.
 SMTP Mail server name: The name of the server or its IP address. You can specify a parameter whose
value you indicate at run time.
 Sender's email address: The email address that the notification is sent from.
 Recipient's email address: The email address that the notification is sent to. You can specify multiple
email addresses, separated by a space.
 Email subject: The text that is included in the subject line of the notification.
 Attachments: Files to be sent with the notification. Specify a path name or a comma-separated list of
pathnames, which must be enclosed in single-quotes or double-quotes. You can also specify an
expression that resolves to a pathname or comma-separated pathnames.
 Email body: The text that is included in the body of the notification.
 Include job status in email: Checkbox that specifies whether you want to include available job status
information in the message.
 Do not checkpoint run: Select this option to specify that checkpoint information for this particular
notification operation will not be recorded. This means that, if a job later in the sequence fails and the
sequence is restarted, this notification operation will be re-executed regardless of the fact that it was
executed successfully before. This option is only available if the sequence as a whole is checkpointed.

Parallel Job Stages

Parallel stages are organized into different groups:
 General
 Database
 Development/Debug
 File
 Processing
 Real Time
 Restructure
General

 Annotation: Annotation is used for adding floating datastage job notes and descriptions on a job
canvas. Annotations provide a great way to document the ETL process and help understand what a
given job does.
 Container: Containers (which can be private or shared) are used to visually simplify a complex DataStage
job design and keep the design easy to understand.
 Description Annotation: Description Annotation shows the contents of a job description field. One
description annotation is allowed in a datastage job.
 Link: Link indicates a flow of the data. There are three main types of links in Datastage: stream,
reference and lookup.

Database Stages
 JDBC connector: JDBC connector is used to connect to supported JDBC data sources and perform data
access and metadata import operations on them.
 ODBC connector: The ODBC connector is used to read, write, look up, and filter data in data sources that
are accessed through ODBC drivers, such as Microsoft Access databases or Excel spreadsheets.
 Db2 connector: Db2 connector can be used to create jobs that read, write and load data to a DB2
Database.
 Teradata Load stage: The Teradata Load stage is a passive stage that loads streams of tabular data
into tables of a target Teradata database.
 Teradata Enterprise stage: The Teradata Enterprise stage is a database stage that you can use to read
data from and write data to a Teradata database.
 Informix Enterprise stage: It is used to read data from or write data to an IBM Informix Dynamic
Server.
 Greenplum connector: It is used to read data from, write data to, or look up data in Greenplum
databases.
 Netezza Enterprise stage: Netezza Enterprise stage is used to write bulk data to Netezza Performance
Server 8000.

Development/Debug Stages
 Column Generator stage: This stage adds columns to incoming data and generates mock data for
these columns for each data row processed.
 Head stage: The Head Stage selects the first N rows from each partition of an input data set and
copies the selected rows to an output data set.
 Tail stage: The Tail Stage selects the last N records from each partition of an input data set and copies
the selected records to an output data set.
 Peek stage: The Peek stage lets you print record column values either to the job log or to a separate
output link as the stage copies records from its input data set to one or more output data sets.

File Stages

 Amazon S3 connector: It is used to connect to Amazon Simple Storage Service (S3) and perform
various read and write functions.
 Big Data File stage: It is used to access files on the Hadoop Distributed File System (HDFS) and perform
various read and write functions.
 Complex Flat File stage: It is used to read a file or write to a file, but the same stage cannot be used to
do both.
 Data set stage: The Data Set stage is a file stage that allows you to read data from or write data to a
data set.
 File connector: Use the File connector to read files from and write files to a local file system on the
engine tier.
 File set stage: The File Set stage is a file stage that allows you to read data from or write data to a file
set.
 Lookup file set stage: It is used to create a lookup file set or reference one for a lookup. When
performing lookups, Lookup File Set stages are used with Lookup stages.

Processing Stages

Copy Stage
The Copy stage copies a single input data set to a number of output data sets. Each record of the input data
set is copied to every output data set. Records can be copied without modification or you can drop or change
the order of columns. Copy lets you make a backup copy of a data set on disk while performing an operation
on another copy.

Example - Copy data from a table into three separate data sets, and in each case only copying a subset of the
columns
Input Data Set
The column names for the input data set are as follows:
BILL_TO_NUM, CUST_NAME, ADDR_1, ADDR_2, CITY, REGION_CODE, ZIP, ATTENT, COUNTRY_CODE,
TEL_NUM, FIRST_SALES_DATE, LAST_SALES_DATE, REVIEW_MONTH, SETUP_DATE, STATUS_CODE,
REMIT_TO_CODE, CUST_TYPE_CODE, CUST_VEND, MOD_DATE, MOD_USRNM, CURRENCY_CODE,
CURRENCY_MOD_DATE, MAIL_INVC_FLAG, PYMNT_CODE, YTD_SALES_AMT, CNTRY_NAME, CAR_RTE,
TPF_INVC_FLAG, INVC_CPY_CNT, INVC_PRT_FLAG, FAX_PHONE, FAX_FLAG, ANALYST_CODE, ERS_FLAG

Map the input data set columns to each output link on the Mapping tab of the Output page.

Map First Output Link


Map Second Output Link

Map Third Output Link


When the job is run, three copies of the original data set are produced, each containing a subset of the original
columns, but all of the rows. Here is some sample data from the data set on DSLink6, which gives the name
and address information:

"GC13849","JON SMITH","789 LEDBURY ROAD"," ","TAMPA","FL","12345"
"GC13933","MARY GARDENER","127 BORDER ST"," ","NORTHPORT","AL","23456"
"GC14036","CHRIS TRAIN","1400 NEW ST"," ","BRENHAM","TX","34567"
"GC14127","HUW WILLIAMS","579 DIGBETH AVENUE"," ","AURORA","CO","45678"
"GC14263","SARA PEARS","45 ALCESTER WAY"," ","SHERWOOD","AR","56789"
"GC14346","LUC TEACHER","3 BIRMINGHAM ROAD"," ","CHICAGO","IL","67890"

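To make the idea of the Copy stage concrete, the sketch below shows the same kind of operation in plain Python: one set of input rows is copied to three output links, each keeping a different subset of columns. This is illustrative only; the link names other than DSLink6 and the exact column subsets are assumptions, since in the job itself they are defined on the Mapping tab.

# Conceptual sketch of the Copy stage: every input row is copied to
# each output, but each output keeps only the columns mapped to it.
input_rows = [
    {"BILL_TO_NUM": "GC13849", "CUST_NAME": "JON SMITH",
     "ADDR_1": "789 LEDBURY ROAD", "CITY": "TAMPA", "ZIP": "12345",
     "TEL_NUM": "555-0100", "STATUS_CODE": "A"},
]

# Hypothetical column subsets for the three output links.
link_columns = {
    "DSLink6": ["BILL_TO_NUM", "CUST_NAME", "ADDR_1", "CITY", "ZIP"],
    "DSLink7": ["BILL_TO_NUM", "TEL_NUM"],
    "DSLink8": ["BILL_TO_NUM", "STATUS_CODE"],
}

outputs = {
    link: [{col: row[col] for col in cols} for row in input_rows]
    for link, cols in link_columns.items()
}

for link, rows in outputs.items():
    print(link, rows)
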
Remove Duplicates Stage


The Remove Duplicates stage takes a single sorted data set as input, removes all duplicate rows, and writes
the results to an output data set.
Removing duplicate records is a common way of cleansing a data set before you perform further processing.
Two rows are considered duplicates if they are adjacent in the input data set and have identical values for the
key column(s). A key column is any column you designate to be used in determining whether two rows are
identical.
The data set input to the Remove Duplicates stage must be sorted so that all records with identical key values
are adjacent. You can either achieve this using the in-stage sort facilities available on the Input page
Partitioning tab, or have an explicit Sort stage feeding the Remove Duplicates stage.

Example – Remove duplicate customer address records


To remove the duplicates, we first need to sort the data so that the duplicates are actually next to each other.
As with all sorting operations, there are implications around data partitions if you run the job in parallel. You
should hash partition the data using the sort keys as hash keys in order to guarantee that duplicate rows are in
the same partition.

In this example we sort on the CUSTOMER_NUMBER column, and the sample of the sorted data shows
some duplicates:

"GC13849","JON SMITH","789 LEDBURY ROAD","2/17/2007"
"GC13933","MARY GARDENER","127 BORDER ST","8/28/2009"
"GC13933","MARY GARDENER","127 BORDER ST","8/28/2009"
"GC14036","CHRIS TRAIN","1400 NEW ST","9/7/1998"
"GC14127","HUW WILLIAMS","579 DIGBETH AVENUE","6/29/2011"
"GC14263","SARA PEARS","45 ALCESTER WAY","4/12/2008"
"GC14263","SARA PEARS","45 ALCESTER WAY","4/12/2008"
"GC14346","LUC TEACHER","3 BIRMINGHAM ROAD","11/7/2010"

The next step is to set up the Remove Duplicates stage to remove rows that share the same value in the
CUSTOMER_NUMBER column. The stage will retain the first of the duplicate records.
Here is a sample of the data after the job has been run and the duplicates removed:

"GC13849","JON SMITH","789 LEDBURY ROAD","2/17/2007"
"GC13933","MARY GARDENER","127 BORDER ST","8/28/2009"
"GC14036","CHRIS TRAIN","1400 NEW ST","9/7/1998"
"GC14127","HUW WILLIAMS","579 DIGBETH AVENUE","6/29/2011"
"GC14263","SARA PEARS","45 ALCESTER WAY","4/12/2008"
"GC14346","LUC TEACHER","3 BIRMINGHAM ROAD","11/7/2010"

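The logic of the Remove Duplicates stage on sorted input can be sketched in a few lines of Python. This is a conceptual illustration, not DataStage code: it keeps the first record of each run of adjacent rows that share the same key value (the customer number).

def remove_adjacent_duplicates(sorted_rows, key_index=0):
    """Keep the first of each run of rows that share the same key value.
    Assumes the input is already sorted on the key, as the stage requires."""
    output, last_key = [], object()     # sentinel that matches nothing
    for row in sorted_rows:
        if row[key_index] != last_key:
            output.append(row)
            last_key = row[key_index]
    return output

rows = [
    ("GC13849", "JON SMITH", "789 LEDBURY ROAD", "2/17/2007"),
    ("GC13933", "MARY GARDENER", "127 BORDER ST", "8/28/2009"),
    ("GC13933", "MARY GARDENER", "127 BORDER ST", "8/28/2009"),
    ("GC14036", "CHRIS TRAIN", "1400 NEW ST", "9/7/1998"),
]
for row in remove_adjacent_duplicates(rows):
    print(row)
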
Sort stage
The Sort stage is a processing stage that is used to perform more complex sort operations than can be
provided for on the Input page Partitioning tab of parallel job stage editors.

You can specify sorting keys as the criteria on which to perform the sort. A key is a column on which to sort
the data, for example, if you had a name column you might specify that as the sort key to produce an
alphabetical list of names.
The first column you specify as a key to the stage is the primary key, but you can specify additional secondary
keys. If multiple rows have the same value for the primary key column, then InfoSphere DataStage uses the
secondary columns to sort these rows.
You can sort in sequential mode to sort an entire data set or in parallel mode to sort data within partitions.

Example – Sort a list of customers using a sequential sort and then write the sorted data into a single
partition.
This job sorts the contents of a sequential file and writes it to a data set. The data is a list of customers
ordered by customer number; we are going to sort it by customer name instead.
Sample of the input data:

"JON SMITH","789 LEDBURY ROAD","GC13849","GlobalCoUS"
"MARY GARDENER","127 BORDER ST","GC13933","GlobalCoUS"
"CHRIS TRAIN","1400 NEW ST","GC14036","GlobalCoUS"
"HUW WILLIAMS","579 DIGBETH AVENUE","GC14127","GlobalCoUS"
"SARA PEARS","45 ALCESTER WAY","GC14263","GlobalCoUS"
"LUC TEACHER","3 BIRMINGHAM ROAD","GC14346","GlobalCoUS"

The Sequential File stage runs sequentially because it only has one source file to read. The Sort stage is set to
run sequentially on the Advanced tab of the Stage page. The Sort stage properties are used to specify the
column CUST_NAME as the primary sort key.
When the job is run, the data is sorted into a single partition. Here is a sample of the sorted data:

"CHRIS TRAIN","1400 NEW ST","GC14036","GlobalCoUS"
"HUW WILLIAMS","579 DIGBETH AVENUE","GC14127","GlobalCoUS"
"JON SMITH","789 LEDBURY ROAD","GC13849","GlobalCoUS"
"LUC TEACHER","3 BIRMINGHAM ROAD","GC14346","GlobalCoUS"
"MARY GARDENER","127 BORDER ST","GC13933","GlobalCoUS"
"SARA PEARS","45 ALCESTER WAY","GC14263","GlobalCoUS"

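As a conceptual parallel, the sequential sort in this example corresponds to ordering the rows on CUST_NAME as the primary key. The minimal Python sketch below is illustrative only; using the customer number as a secondary key to break ties is an assumption added for the example.

# Minimal sketch of a sequential sort on a primary key (CUST_NAME),
# with the customer number as a secondary key to break ties.
rows = [
    ("JON SMITH", "789 LEDBURY ROAD", "GC13849", "GlobalCoUS"),
    ("MARY GARDENER", "127 BORDER ST", "GC13933", "GlobalCoUS"),
    ("CHRIS TRAIN", "1400 NEW ST", "GC14036", "GlobalCoUS"),
]
sorted_rows = sorted(rows, key=lambda r: (r[0], r[2]))
for row in sorted_rows:
    print(row)
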
Join Stage
The Join stage is a processing stage. It performs join operations on two or more data sets input to the stage
and then outputs the resulting data set.
In the Join stage, the input data sets are notionally identified as the "right" set and the "left" set, and
"intermediate" sets. You can specify which is which. It has any number of input links and a single output link.
The Join stage can perform one of four join operations:
 Inner transfers records from input data sets whose key columns contain equal values to the output
data set. Records whose key columns do not contain equal values are dropped.
 Left outer transfers all values from the left data set but transfers values from the right data set and
intermediate data sets only where key columns match. The stage drops the key column from the right
and intermediate data sets.
 Right outer transfers all values from the right data set and transfers values from the left data set and
intermediate data sets only where key columns match. The stage drops the key column from the left
and intermediate data sets.
 Full outer transfers records in which the contents of the key columns are equal from the left and right
input data sets to the output data set. It also transfers records whose key columns contain unequal
values from both input data sets to the output data set. (Full outer joins do not support more than
two input links).

Example – Left outer join an Employee table with a Department table, where DeptID is the key column that
the join is performed on.
Left Input Data Set:

EmpID   Name    DeptID
101     John    1
102     Russ    2
103     Marie   1

Right Input Data Set:

DeptID  dept_Name
1       IT
2       BS

Here is the data set that is output if you perform a left outer join on the DeptID key column.

Output Data Set:

EmpID   Name    DeptID  dept_Name
101     John    1       IT
102     Russ    2       BS
103     Marie   1       IT
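
The same left outer join can be expressed in a few lines of Python to illustrate the semantics; rows with no matching DeptID would simply get an empty dept_Name in this sketch.

employees = [
    {"EmpID": 101, "Name": "John", "DeptID": 1},
    {"EmpID": 102, "Name": "Russ", "DeptID": 2},
    {"EmpID": 103, "Name": "Marie", "DeptID": 1},
]
departments = {1: "IT", 2: "BS"}   # DeptID -> dept_Name

# Left outer join: keep every employee, attach the department name
# where the DeptID key matches, otherwise leave it empty (None).
joined = [
    {**emp, "dept_Name": departments.get(emp["DeptID"])}
    for emp in employees
]
for row in joined:
    print(row)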

Lookup Stage
The Lookup stage is a processing stage that is used to perform lookup operations on a data set read into
memory from any other Parallel job stage that can output data. The most common use for a lookup is to map
short codes in the input data set onto expanded information from a lookup table which is then joined to the
incoming data and output.
The Lookup stage is most appropriate when the reference data for all lookup stages in a job is small enough to
fit into available physical memory. Each lookup reference requires a contiguous block of shared memory. If the
Data Sets are larger than available memory resources, the JOIN or MERGE stage should be used.
Lookup stages do not require data on the input link or reference links to be sorted. Be aware, though, that
large in-memory lookup tables will degrade performance because of their paging requirements.
Lookups can also be used for validation of a row. If there is no corresponding entry in a lookup table to the
key's values, the row is rejected.
There are three types of lookup that can be performed in the Lookup stage:

 Normal Lookup: All the data from the database is read into memory, and then lookup is performed.
 Range Lookup: Range Lookup is going to perform the range checking on selected columns.
 Sparse Lookup: For each incoming row from the primary link, the SQL is fired on database at run time.

Example - Find salary of an employee by EmpID


Source:

EmpID   Name    DeptID
101     John    1
102     Russ    2
103     Marie   1

Lookup:

EmpID   Salary
101     20000
102     30000

When the job is run with EmpID as the key, the output is as below:

EmpID   Name    DeptID  Salary
101     John    1       20000
102     Russ    2       30000
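
Conceptually, a normal lookup reads the reference data into an in-memory table and probes it once per input row. The Python sketch below is illustrative only; dropping rows whose key is not found is just one of the stage's configurable lookup-failure actions, chosen here for the example.

source = [
    {"EmpID": 101, "Name": "John", "DeptID": 1},
    {"EmpID": 102, "Name": "Russ", "DeptID": 2},
    {"EmpID": 103, "Name": "Marie", "DeptID": 1},
]
# Reference data loaded into memory, keyed on EmpID (normal lookup).
salary_lookup = {101: 20000, 102: 30000}

output = []
for row in source:
    salary = salary_lookup.get(row["EmpID"])
    if salary is None:
        continue                      # lookup failure: drop the row in this sketch
    output.append({**row, "Salary": salary})

for row in output:
    print(row)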

Merge Stage
The Merge stage combines a master data set with one or more update data sets. The columns from the
records in the master and update data sets are merged so that the output record contains all the columns
from the master record plus any additional columns from each update record that are required. A master
record and an update record are merged only if both of them have the same values for the merge key
column(s) that you specify. Merge key columns are one or more columns that exist in both the master and
update records.
It can have any number of input links, a single output link, and the same number of reject links as there are
update input links.

Example - Merge master data set with 1 update data set
Table 1. Master data set

EmpID   Name    DeptID
101     John    1
102     Russ    2
103     Marie   1

Table 2. Update data set

EmpID   Salary
101     20000
102     30000
103     40000

Here is the merged data set output by the stage:

EmpID   Name    DeptID  Salary
101     John    1       20000
102     Russ    2       30000
103     Marie   1       40000
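
The merge of the master data set with the update data set can be sketched as below. This is purely illustrative Python: it indexes the update records on the EmpID merge key and extends each master record with the additional Salary column.

master = [
    {"EmpID": 101, "Name": "John", "DeptID": 1},
    {"EmpID": 102, "Name": "Russ", "DeptID": 2},
    {"EmpID": 103, "Name": "Marie", "DeptID": 1},
]
update = [
    {"EmpID": 101, "Salary": 20000},
    {"EmpID": 102, "Salary": 30000},
    {"EmpID": 103, "Salary": 40000},
]

# Index the update data set on the merge key, then extend each master
# record with the extra columns from its matching update record.
update_by_key = {u["EmpID"]: u for u in update}
merged = []
for m in master:
    u = update_by_key.get(m["EmpID"], {})
    merged.append({**m, **{k: v for k, v in u.items() if k != "EmpID"}})

for row in merged:
    print(row)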

Filter Stage
The Filter stage transfers, unmodified, the records of the input data set which satisfy the specified
requirements and filters out all other records. You can specify different requirements to route rows down
different output links. The filtered out records can be routed to a reject link, if required.

Example – Filter the input data so that only records where the employee name is John are transferred to the output.


Input Data Set:

EmpID   Name    DeptID
101     John    IT
102     Russ    BS
103     Marie   IT

To filter the records, we first need to set the Predicate property:


Predicate:
Where clause: Name = "John"

When the job is run, the output is as below:

EmpID   Name    DeptID
101     John    IT
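
The effect of the where clause can be pictured with a short Python sketch. This is illustrative only; the rejected rows are collected here simply to show how a reject link could be fed.

rows = [
    {"EmpID": 101, "Name": "John", "DeptID": "IT"},
    {"EmpID": 102, "Name": "Russ", "DeptID": "BS"},
    {"EmpID": 103, "Name": "Marie", "DeptID": "IT"},
]

# Where clause: Name = "John"; non-matching rows go to a reject list.
matched = [r for r in rows if r["Name"] == "John"]
rejected = [r for r in rows if r["Name"] != "John"]

print("output:", matched)
print("reject:", rejected)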

Aggregator Stage
The Aggregator stage is a processing stage. It classifies data rows from a single input link into groups and
computes totals or other aggregate functions for each group. The summed totals for each group are output
from the stage via an output link.
The aggregator stage gives you access to grouping and summary operations. One of the easiest ways to
expose patterns in a collection of records is to group records with similar characteristics, then compute
statistics on all records in the group. You can then use these statistics to compare properties of the different
groups. For example, records containing cash register transactions might be grouped by the day of the week
to see which day had the largest number of transactions, the largest amount of revenue, and so on.
Records can be grouped by one or more characteristics, where record characteristics correspond to column
values. In other words, a group is a set of records with the same value for one or more columns. For example,
transaction records might be grouped by both day of the week and by month. These groupings might show
that the busiest day of the week varies by season.

Example – Generate report of distance traveled and charges grouped by date and license type.
Input Data Set -

Ship Date   District  Distance  Equipment  Packing  License  Charge
2020-06-02  1         1540      D          M        BUN      1300
2020-07-12  1         1320      D          C        SUM      4800
2020-08-02  1         1760      D          C        CUM      1300
2020-06-22  2         1540      D          C        CUN      13500
2020-07-30  2         1320      D          M        SUM      6000

The stage first hash-partitions the incoming data on the license column, then sorts it on license and date on
the Partitioning tab of the Input page of the stage editor.
Next, the stage properties are used to specify the grouping and the aggregation of the data on the Properties
tab of the stage editor.
The following is a sample of the output data when the job is run:

Ship Date   License  Distance Sum  Distance Mean  Charge Sum    Charge Mean
2020-06-02  BUN      1126053.00    1563.93        20427400.00   28371.39
2020-06-12  BUN      2031526.00    2074.08        22426324.00   29843.55
2020-06-22  BUN      1997321.00    1958.45        19556450.00   19813.26
2020-06-30  BUN      1815733.00    1735.77        17023668.00   18453.02
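
The grouping and aggregation logic can be sketched in Python as below. The rows used here are a small made-up sample, not the data behind the figures above; the sketch groups by ship date and license and computes the sum and mean of distance and charge, which is the shape of the report.

from collections import defaultdict
from statistics import mean

rows = [
    {"date": "2020-06-02", "license": "BUN", "distance": 1540, "charge": 1300},
    {"date": "2020-06-02", "license": "BUN", "distance": 1320, "charge": 4800},
    {"date": "2020-06-22", "license": "CUN", "distance": 1540, "charge": 13500},
]

# Group by (date, license), then aggregate distance and charge per group.
groups = defaultdict(list)
for r in rows:
    groups[(r["date"], r["license"])].append(r)

for (date, lic), members in sorted(groups.items()):
    distances = [m["distance"] for m in members]
    charges = [m["charge"] for m in members]
    print(date, lic, sum(distances), mean(distances), sum(charges), mean(charges))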

Change Capture Stage


The Change Capture Stage is a processing stage. The stage compares two data sets and makes a record of the
differences. The Change Capture stage takes two input data sets, denoted before and after, and outputs a
single data set whose records represent the changes made to the before data set to obtain the after data set.
The compare is based on a set of key columns; rows from the two data sets are assumed to be copies of
one another if they have the same values in these key columns. You can also optionally specify change values.
If two rows have identical key columns, you can compare the value columns in the rows to see if one is an
edited copy of the other.
The stage assumes that the incoming data is key-partitioned and sorted in ascending order. The columns the
data is hashed on should be the key columns used for the data compare. You can achieve the sorting and

partitioning using the Sort stage or by using the built-in sorting and partitioning abilities of the Change Capture
stage.
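
A simplified picture of the change-capture comparison: rows matched on the key column are compared on their value columns to detect edits, and unmatched keys correspond to inserts or deletes. The Python sketch below is conceptual only and uses an insert/edit/copy/delete tagging convention that is similar in spirit to, but not the exact format of, the stage's output.

def change_capture(before, after, key="EmpID"):
    """Compare two data sets keyed on `key` and tag each row with a change code."""
    before_by_key = {r[key]: r for r in before}
    after_by_key = {r[key]: r for r in after}
    changes = []
    for k, row in after_by_key.items():
        if k not in before_by_key:
            changes.append(("insert", row))
        elif row != before_by_key[k]:
            changes.append(("edit", row))
        else:
            changes.append(("copy", row))
    for k, row in before_by_key.items():
        if k not in after_by_key:
            changes.append(("delete", row))
    return changes

before = [{"EmpID": 101, "Name": "John"}, {"EmpID": 102, "Name": "Russ"}]
after = [{"EmpID": 101, "Name": "Johnny"}, {"EmpID": 103, "Name": "Marie"}]
for code, row in change_capture(before, after):
    print(code, row)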

Encode Stage
It encodes a data set using a UNIX encoding command, such as gzip, that you supply. The stage converts a
data set from a sequence of records into a stream of raw binary data.
An encoded data set is similar to an ordinary one, and can be written to a Data Set stage. You cannot use an
encoded data set as an input to stages that perform column-based processing or reorder rows, but you can
input it to stages such as Copy.
You can view information about the data set in the data set viewer, but not the data itself. You cannot
repartition an encoded data set, and you will be warned at runtime if your job attempts to do that. As the
output is always a single stream, you do not have to define metadata for the output link.

Decode Stage
It decodes a data set using a UNIX decoding command, such as gzip, that you supply. It converts a data stream
of raw binary data into a data set. As the input is always a single stream, you do not have to define metadata
for the input link.
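
The idea behind the Encode and Decode stages (turning a sequence of records into a compressed binary stream and back) can be illustrated with gzip in Python. This is only an analogy for the concept; the stages themselves run whatever UNIX command you supply over the data set.

import gzip

records = ["GC13849,JON SMITH", "GC13933,MARY GARDENER"]

# "Encode": serialize the records and compress them into raw binary data.
encoded = gzip.compress("\n".join(records).encode("utf-8"))
print(type(encoded), len(encoded))          # opaque binary stream

# "Decode": decompress the binary stream back into records.
decoded = gzip.decompress(encoded).decode("utf-8").split("\n")
print(decoded)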
Real Time Stages
 CDC Transaction Stage: The CDC Transaction stage is used to read data that is captured by IBM
InfoSphere Change Data Capture (InfoSphere CDC) and apply the change data to a target database.
 Hierarchical data transformation: Hierarchical Data stage is used to create powerful hierarchical
transformations, parse and compose JSON/XML data, and invoke REST web services with high
performance and scalability.
 Streams connector: The InfoSphere Streams connector enables integration between InfoSphere
Streams and InfoSphere DataStage. By sending data to InfoSphere Streams from the InfoSphere
DataStage jobs, InfoSphere Streams can perform near real-time analytic processing (RTAP) in parallel
to the data being loaded into a warehouse by InfoSphere DataStage.
 Web Services Client Stage: This stage is used when you want a web service to act as either a data source
or a data target during an operation.
 XML Transformer: Use XML Transformer to convert XML documents to another XML hierarchical
format.
 Java Integration Stage: The Java Integration stage is one of several different stages that invokes Java
code. In addition to the Java Integration stage, Java Client stage and Java Transformer stage are also
available.

Restructure Stages
Column Export Stage
This stage exports data from a number of columns of different data types into a single column of data type
ustring, string, or binary. It can have a single input link, a single output link and a single rejects link.
The input data column definitions determine the order in which the columns are exported to the single output
column. Information about how the single column being exported is delimited is given in the Formats tab of
the Input page. You can optionally save reject records, that is, records whose export was rejected.

Column Import Stage
The Column Import stage imports data from a single column and outputs it to one or more columns. You
would typically use it to divide data arriving in a single column into multiple columns. The data would be fixed-
width or delimited in some way to tell the Column Import stage where to make the divisions. The input
column must be string or binary data; the output columns can be any data type.

You supply an import table definition to specify the target columns and their types. This also determines the
order in which data from the import column is written to output columns. Information about the format of the
incoming column (for example, how it is delimited) is given in the Format tab of the Output page. You can
optionally save reject records, that is, records whose import was rejected, and write them to a rejects link.
In addition to importing a column you can also pass other columns straight through the stage. So, for example,
you could pass a key column straight through.
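
Conceptually, Column Export packs several typed columns into one string and Column Import does the reverse: it splits a delimited string back into typed columns while passing the other columns straight through. The rough Python sketch below illustrates the import side; the delimiter, source column, and target columns are assumptions made for the example.

def column_import(row, source_col="payload", delimiter="|",
                  targets=(("CUST_NAME", str), ("ZIP", str), ("YTD_SALES_AMT", float))):
    """Split one delimited column into typed target columns,
    keeping the remaining columns of the row unchanged."""
    parts = row[source_col].split(delimiter)
    out = {k: v for k, v in row.items() if k != source_col}
    for (name, cast), value in zip(targets, parts):
        out[name] = cast(value)
    return out

# Hypothetical input row: a key column passed straight through plus one packed column.
row = {"BILL_TO_NUM": "GC13849", "payload": "JON SMITH|12345|1300.50"}
print(column_import(row))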

Combine Records Stage
The Combine Records stage combines records in which particular key-column values are identical, into vectors
of sub records. As input, the stage takes a data set in which one or more columns are chosen as keys. All
adjacent records whose key columns contain the same value are gathered into the same record in the form of
sub records.
The data set input to the Combine Records stage must be key partitioned and sorted. This ensures that rows
with the same key column values are located in the same partition and will be processed by the same node.
Choosing the (auto) partitioning method will ensure that partitioning and sorting is done. If sorting and
partitioning are carried out on separate stages before the Combine Records stage, InfoSphere DataStage in
auto mode will detect this and not repartition.
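
The effect of the Combine Records stage can be pictured as gathering adjacent rows that share the same key value into a single record holding a vector of subrecords. The Python sketch below is conceptual only, with made-up column names.

from itertools import groupby

rows = [
    {"EmpID": 101, "Month": "Jan", "Sales": 10},
    {"EmpID": 101, "Month": "Feb", "Sales": 12},
    {"EmpID": 102, "Month": "Jan", "Sales": 7},
]

# Adjacent rows with the same key value are combined into one record
# whose "subrec" column is a vector of the remaining columns.
combined = [
    {"EmpID": key,
     "subrec": [{k: v for k, v in r.items() if k != "EmpID"} for r in group]}
    for key, group in groupby(rows, key=lambda r: r["EmpID"])
]
print(combined)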

Split Vector Stage


The Split Vector stage promotes the elements of a fixed-length vector to a set of top-level columns named
name0 to namen, where name is the original vector's name and 0 and n are the indexes of the first and last
elements of the vector.

Server Job Stages


IBM InfoSphere DataStage has several built-in stage types for use in server jobs. These stages are used to
represent data sources, data targets, or conversion stages. Server stages are organized into different groups:
 Database
 File
 Processing
 Real Time

Database Stages

 SQL Server Load Stages: The SQL Server Load is a passive stage that bulk loads data into an SQL Server
database table.
 Sybase BCP stage: The BCPLoad stage uses the BCP (Bulk Copy Program) utility to bulk load data into a
single table in a Sybase database.
File Stages
 Folder Stage: Folder stages are used to read or write data as files in a directory located on the IBM
InfoSphere DataStage server.
 Hashed File Stages: Hashed File stages represent a hashed file, that is, a file that uses a hashing
algorithm for distributing records in one or more groups on disk.

Processing Stages

Row Merger Stages


The Row Merger stage reads data one row at a time from an input link. It merges all the columns into a single
string of a specified format. It then writes the string on a given column of the output link. The stage can have a
single input link and a single output link.
In normal operation of the Row Merger stage, each input row with multiple columns results in an output row
of a single column. The stage also offers concatenation facilities, however. These facilities allow you to
concatenate the result of each input row into a single string which is output when the stage detects an end-of-
data (EOD) or end-of-transmission (EOT) signal (that signifies no more input rows are expected).

Row Splitter Stages


The Row Splitter stage reads data one row at a time from an input link. It splits the data fields contained in a
string into a number of columns. It then writes the columns to the output link. The stage can have a single
input link and a single output link.
In normal operation of the Row Splitter stage, each input string processed results in an output row of multiple
columns. In some cases, however, a single input string can represent several rows of input data. In this case
the stage can deconcatenate these into separate rows for output.
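
Row Merger and Row Splitter are effectively inverse string operations. The sketch below is an illustrative Python analogy: it merges the columns of a row into one delimited string and then splits that string back into columns; the delimiter and column names are assumptions.

columns = ["EmpID", "Name", "DeptID"]
row = {"EmpID": "101", "Name": "John", "DeptID": "1"}

# Row Merger: all columns of the input row become one delimited string.
merged_string = ",".join(row[c] for c in columns)
print(merged_string)                      # "101,John,1"

# Row Splitter: the string is split back into the named columns.
split_row = dict(zip(columns, merged_string.split(",")))
print(split_row)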

InterProcess Stages
An InterProcess (IPC) stage is a passive stage which provides a communication channel between IBM
InfoSphere DataStage processes running simultaneously in the same job. It allows you to design jobs that run
on SMP systems with great performance benefits. To understand the benefits of using IPC stages, you need to
know a bit about how InfoSphere DataStage jobs actually run as processes.
The output link connecting IPC stage to the stage reading data can be opened as soon as the input link
connected to the stage writing data has been opened.
You can use InterProcess stages to join passive stages together. For example, you could use them to speed up
data transfer between two data sources.

In this example, the job will run as two processes: one handling the communication from the Sequential
File stage to the IPC stage, and one handling communication from the IPC stage to the ODBC stage. As soon as
the Sequential File stage has opened its output link, the IPC stage can start passing data to the ODBC stage. If
the job is running on a multiprocessor system, the two processors can run simultaneously so the transfer will
be much faster.

Real Time Stages


IBM WebSphere MQ connector
The IBM WebSphere MQ connector reads messages from and writes messages to message queues in IBM
WebSphere MQ enterprise applications.
You can use the IBM WebSphere MQ connector in any of the following ways:
 As an intermediary that enables applications to communicate by exchanging messages
 As a path for the transmission of older data to a message queue
 As a message queue reader for transmission to a non-messaging target

Datastage Runtime Architecture


The IBM InfoSphere DataStage and QualityStage Designer client creates IBM InfoSphere DataStage jobs that
are compiled into parallel job flows, and reusable components that execute on the parallel Information Server
engine.
The Designer client allows you to develop job flows for extracting, cleansing, transforming, integrating, and loading
data into target files or target systems.

When you have finished developing a server or a parallel job, you need to compile it before you can actually
run it.
After the job is successfully compiled, the Designer generates the OSH (Orchestrate Shell script) and C++ code
for any Transformer stages used.
To run a DataStage job, you can use either the command line or the Director client.

Job Execution flow


When you run a job, the generated OSH and the contents of the configuration file are used to build the execution
plan (score). At runtime, IBM InfoSphere DataStage identifies the degree of parallelism and node assignments
for each operator, and inserts sorts and partitioners as needed to ensure correct results. It also defines the
connection topology (virtual data sets/links) between adjacent operators/stages, and inserts buffer operators
to prevent deadlocks (for example, in fork-joins). It also defines the number of actual OS processes. Multiple
operators/stages are combined within a single OS process as appropriate, to improve performance and
optimize resource requirements.
Processing begins after the job score and processes are created. Job processing ends when either the last row
of data is processed by the final operator, a fatal error is encountered by any operator, or the job is halted by
DataStage Job Control.

Job scores are divided into two sections:


 data sets
 operators

Both sections identify sequential or parallel processing. The execution manages control and message flow
across processes and consists of the conductor node and one or more processing nodes. Actual data flows
from player to player; the conductor and section leaders are only used to control process execution through
control and message channels.

Job Runtime Architecture

Conductor

The Conductor is the initial framework process. It creates the Section Leader processes (one per node), consolidates messages to the DataStage log, and manages orderly shutdown. The Conductor node runs the start-up process and also communicates with the Section Leaders.

Section Leader

A Section Leader is a process that forks player processes (one per stage) and manages up/down communication. Section Leaders communicate only between the Conductor and the player processes. For a given parallel configuration file, one Section Leader is started for each logical node.

Players

Players are the actual processes associated with the stages. Each player sends stderr and stdout to its Section Leader, establishes connections to other players for data flow, and cleans up on completion. Each player must be able to communicate with every other player. There are separate communication channels (pathways) for control, errors, messages, and data. The data channel does not go through the Section Leader or Conductor, as this would limit scalability; data flows directly from upstream operator to downstream operator.

Platform Architecture

IBM IIS (IBM InfoSphere Information Server) provides a unified architecture that works with all types of
information integration. Its architecture has three core components:
 Common services
 Unified Metadata
 Unified Parallel Processing
The architecture of IBM IIS is a service-oriented architecture, which enables it to connect the individual
product modules of InfoSphere Information Server very easily.
The following diagram shows the InfoSphere Information Server architecture.

Unified user interface
InfoSphere Information Server provides rich client interfaces for highly detailed development work and thin
clients that run in web browsers for administration. The IBM InfoSphere Information Server console and the IBM
InfoSphere Information Server Web console provide a common interface, visual controls, and user experience
across products. Common functions such as catalog browsing, metadata import, query, and data browsing all
expose underlying common services in a uniform way.

Common services
InfoSphere Information Server is built entirely on a set of shared services that centralize core tasks across the
platform. These include administrative tasks such as security, user administration, logging, and reporting.
Shared services allow these tasks to be managed and controlled in one place, regardless of which suite
component is being used. The common services also include the metadata services, which provide standard
service-oriented access and analysis of metadata across the platform. In addition, the common services tier
manages how services are deployed from any of the product functions, allowing cleansing and transformation
rules or federated queries to be published as shared services within an SOA, using a consistent and easy-to-
use mechanism.
InfoSphere Information Server products can access three general categories of service:
 Design: Design services help developers create function-specific services that can also be shared. For
example, InfoSphere Information Analyzer calls a column analyzer service that was created for
enterprise data analysis but can be integrated with other parts of InfoSphere Information Server
because it exhibits common SOA characteristics.
 Execution: Execution services include logging, scheduling, monitoring, reporting, security, and web
framework.
 Metadata: Metadata services enable metadata to be shared across tools so that changes made in one
InfoSphere Information Server component are instantly visible across all of the suite components.
Metadata services are integrated with the metadata repository. Metadata services also enable you to
exchange metadata with external tools.

Unified parallel processing engine


Much of the work that InfoSphere Information Server does takes place within the parallel processing engine.
The engine handles data processing needs as diverse as performing analysis of large databases for IBM
InfoSphere Information Analyzer, data cleansing for IBM InfoSphere QualityStage, and complex
transformations for IBM InfoSphere DataStage. This parallel processing engine is designed to deliver the
following benefits:
 Parallelism and data pipelining to complete increasing volumes of work in decreasing time windows
 Scalability by adding hardware (for example, processors or nodes in a grid) with no changes to the
data integration design
 Optimized database, file, and queue processing to handle large files that cannot fit in memory all at
once or with large numbers of small files

Unified metadata
InfoSphere Information Server is built on a unified metadata infrastructure that enables shared understanding
between business and technical domains. This infrastructure reduces development time and provides a
persistent record that can improve confidence in information. All functions of InfoSphere Information Server
share the same metamodel, making it easier for different roles and functions to collaborate.
A common metadata repository provides persistent storage for all InfoSphere Information Server suite
components. All of the products depend on the repository to navigate, query, and update metadata. The
repository contains two kinds of metadata:
 Dynamic: Dynamic metadata includes design-time information.
 Operational: Operational metadata includes performance monitoring, audit and log data, and data
profiling sample data.

Common connectivity

InfoSphere Information Server can connect to various sources, whether they are structured, unstructured,
applications or the mainframe. Metadata-driven connectivity is shared across the suite components, and
connection objects are reusable across functions.
Connectors provide design-time importing of metadata, data browsing and sampling, runtime dynamic
metadata access, error handling, and high functionality and high performance runtime data access.

Standard Data Transformation Techniques

Data transformation is a technique of conversion as well as mapping of data from one format to another. The
tools and techniques used for data transformation depend on the format, complexity, structure and volume of
the data.
There are various methods of data transformation as follows:
 Data Smoothing
 Aggregation
 Discretization
 Generalization

 Attribute construction
 Normalization
 Manipulation

Data Smoothing
Data smoothing is used to remove noise from a dataset. Noise refers to distorted and meaningless data within a dataset. Smoothing uses algorithms to allow important patterns to stand out more clearly in the data. Once the noise is removed, smaller changes in the data that reveal meaningful patterns become easier to detect.
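For illustration, the following short Python sketch applies a simple trailing moving average to a small made-up series; the window size and values are assumptions chosen only to show how a noisy spike is damped while the overall trend is preserved.

def moving_average(values, window=3):
    """Smooth a numeric series with a simple trailing moving average."""
    smoothed = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]  # trailing window of up to `window` values
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# 45 is a noisy spike; after smoothing, it no longer dominates the series.
print(moving_average([10, 12, 45, 11, 13, 12, 14]))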
Aggregation
Aggregation is the process of collecting data from a variety of sources and storing it in a single format. Here, data is collected, stored, analyzed, and presented in a report or summary format. Aggregation helps in gathering more information about a particular data cluster and in handling vast amounts of data. This is a crucial step, as the accuracy and quantity of the data are important for proper analysis.
Discretization
Data discretization is the process of converting continuous data into a set of intervals and associating a specific data value with each interval. This method is also called a data reduction mechanism, as it transforms a large dataset into a set of categorical data.
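As a small illustration, the Python sketch below maps a continuous attribute (age) to labelled intervals; the interval boundaries and labels are assumptions made up for the example.

def discretize(value, bins):
    """Return the label of the interval that contains the given value."""
    for low, high, label in bins:
        if low <= value <= high:
            return label
    return "unknown"

age_bins = [(0, 17, "child"), (18, 44, "young adult"),
            (45, 64, "middle-aged"), (65, 120, "senior")]
print([discretize(age, age_bins) for age in [3, 17, 25, 42, 68]])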
Generalization
Data generalization is the process of summarizing data by replacing relatively low-level values with higher-level concepts. Data generalization can be divided into two approaches: the data cube (OLAP) approach and the attribute-oriented induction (AOI) approach.
Attribute construction
In attribute construction or feature construction of data transformation, new attributes are created from an
existing set of attributes.
Normalization
Data normalization, also referred to as data pre-processing, is a basic element of data mining. Its main purpose is to minimize or even eliminate duplicated data. It means transforming the data, namely converting the source data into another format that allows it to be processed effectively.
Manipulation

Data manipulation is the process of changing or altering data to make it more readable and organized. Data manipulation tools help identify patterns in the data and transform it into a usable form to generate insights into financial data, customer behavior, and so on.

Transforming Data

IBM IIS Datastage has capabilities to transform data to satisfy both simple and complex data integration tasks.
To perform any type of transformation you need to create a job. DataStage provides three types of jobs:
 Server Jobs: Run on the DataStage Server.
 Mainframe Jobs: Available only if you have installed Enterprise MVS Edition; these jobs are uploaded to a mainframe, where they are compiled and run.
 Parallel Jobs: Available only if you have installed the Enterprise Edition and run on DataStage servers
that are SMP, MPP, or cluster systems.

How to create a job


Steps to create a job in Datastage:
Step 1
Define optional project-level environment variables in DataStage Administrator.
Step 2
Import or create table definitions, if they are not already available.
Step 3
Add stages and links to the job to indicate data flow.
Step 4
Edit source and target stages to designate data sources, table definitions, file names, and so on.
Step 5
Edit Transformer and processing stages to perform various functions, such as filtering data, creating lookups, and using expressions.
Step 6

Save, compile, troubleshoot, and run the job.

Transform data with expressions


By using the Transformer stage you can create transformations to apply to your data. These transformations
can be applied to individual columns in your data. For example, you can trim data by removing leading and
trailing spaces, concatenate data, perform operations on dates and times, perform mathematical operations,
and apply conditional logic.
Transformations are specified by using a set of functions. You can define the transformations by using the
expression editor.

Transform data with expressions

Defining transformations in the expressions editor


In the example above, the sample job includes several transformations: the customer number is trimmed, the country code is replaced with United States if it is US, the vendor and type codes are concatenated to create a vendor type code, and the current date is inserted.
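The short Python sketch below mirrors the row-level logic of that sample job (trimming, conditional replacement, concatenation, and a current-date column). It is only an illustration of the derivations; in DataStage itself you would define them in the Transformer expression editor, and the column names used here are assumed for the example.

from datetime import date

def transform_row(row):
    """Illustrative derivations mirroring the sample Transformer job."""
    return {
        "customer_number": row["customer_number"].strip(),          # trim leading/trailing spaces
        "country": "United States" if row["country_code"] == "US"
                   else row["country_code"],                        # conditional replacement
        "vendor_type_code": row["vendor_code"] + row["type_code"],  # concatenation
        "load_date": date.today().isoformat(),                      # current date
    }

print(transform_row({"customer_number": "  C1001 ", "country_code": "US",
                     "vendor_code": "V12", "type_code": "T7"}))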

Aggregate data by common characteristics


The Aggregator stage is used to summarize data. Records can be grouped by one or more characteristics. For
example, you can group sales data both by day of the week and by month, and compute totals.
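To make the grouping idea concrete, here is a minimal Python sketch that sums hypothetical sales amounts by day of the week and by month; the input records are assumptions made up for the example, and a real job would read them from a source stage and use the Aggregator stage instead.

from collections import defaultdict
from datetime import datetime

# Hypothetical (date, amount) sales records.
sales = [("2023-01-02", 120.0), ("2023-01-09", 80.0), ("2023-02-06", 50.0), ("2023-02-07", 70.0)]

totals_by_weekday = defaultdict(float)
totals_by_month = defaultdict(float)
for day, amount in sales:
    d = datetime.strptime(day, "%Y-%m-%d")
    totals_by_weekday[d.strftime("%A")] += amount    # group by day of the week
    totals_by_month[d.strftime("%Y-%m")] += amount   # group by month

print(dict(totals_by_weekday))
print(dict(totals_by_month))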

Summarize data by day

Join data from multiple flat files


You can read data from multiple flat files and write the data to a single target file. You can combine the
records from the files by using the Join stage and use techniques like key-based partitioning and in-memory
sorting to accelerate processing. For example, you can combine customer records with order details and write
the customer order to the target file.
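A minimal Python sketch of the same idea follows: two flat files keyed on a common column are combined into a single output file. The file names and column layouts are assumptions for the example; in DataStage this is done with Sequential File and Join stages rather than hand-written code.

import csv

# Assumed inputs: customers.csv (customer_id, customer_name) and
# orders.csv (order_id, customer_id, amount).
with open("customers.csv", newline="") as f:
    customers = {row["customer_id"]: row for row in csv.DictReader(f)}

with open("orders.csv", newline="") as f, \
     open("customer_orders.csv", "w", newline="") as out:
    writer = csv.DictWriter(out, fieldnames=["customer_id", "customer_name", "order_id", "amount"])
    writer.writeheader()
    for order in csv.DictReader(f):                     # inner join on customer_id
        customer = customers.get(order["customer_id"])
        if customer:
            writer.writerow({"customer_id": order["customer_id"],
                             "customer_name": customer["customer_name"],
                             "order_id": order["order_id"],
                             "amount": order["amount"]})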

Join data from flat files

Pivot data in a table


You can map a set of columns in an input row to a single column in multiple output rows (a horizontal pivot) by using the Pivot Enterprise stage. You can also perform vertical pivots, which combine related data from multiple rows into a single row with repeating field types.
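The Python sketch below shows a horizontal pivot on a single assumed input row: four quarterly sales columns become four output rows with one sales column each.

# One input row with quarterly columns (assumed layout) ...
row = {"customer_id": "C1001", "q1": 100, "q2": 150, "q3": 90, "q4": 120}

# ... becomes four output rows with a single "sales" column (horizontal pivot).
pivoted = [{"customer_id": row["customer_id"], "quarter": quarter, "sales": row[quarter]}
           for quarter in ("q1", "q2", "q3", "q4")]

for output_row in pivoted:
    print(output_row)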

Pivot data

Join heterogeneous sources


In Datastage you can collect, standardize, and consolidate data from a wide array of data sources and data
structures, and create a single, accurate, and trusted source of information. In the below example, the job
extracts data from two different types of databases and loads the results into another type of database in the
appropriate format. The job uses a Join stage to combine the data and Transformer stages to trim and format
the data.

Join heterogeneous sources

Find deltas from yesterday's file


The Change Capture stage is used to compare yesterday's file with today's file and make a record of the changes.
The Change Capture stage takes two input data sets, denoted before and after, and outputs a single data set.
The output records represent the changes that were made to the before data set to obtain the after data set.
In this sample job, the changes between two sequential files are loaded to a Teradata database for analysis.
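The Python sketch below shows the underlying idea: compare a before data set with an after data set keyed on an identifier and emit a change record for each insert, edit, or delete. The keys, columns, and change labels are assumptions for the illustration; the Change Capture stage produces equivalent change codes for you.

# Before and after data sets keyed on "id"; the values are made up for the example.
before = {"1": {"id": "1", "name": "Marcel", "city": "Chicago"},
          "2": {"id": "2", "name": "Anna", "city": "Boston"}}
after = {"1": {"id": "1", "name": "Marcel", "city": "Seattle"},
         "3": {"id": "3", "name": "Ravi", "city": "Austin"}}

changes = []
for key, row in after.items():
    if key not in before:
        changes.append({"change": "insert", **row})    # new in the after data set
    elif row != before[key]:
        changes.append({"change": "edit", **row})      # key exists but values changed
for key, row in before.items():
    if key not in after:
        changes.append({"change": "delete", **row})    # removed from the after data set

for change in changes:
    print(change)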

Identify changes with the Change Capture stage

Metadata in the Parallel Framework

Metadata is information about data. It describes the data flowing through your job in terms of column
definitions.
InfoSphere DataStage has two ways of handling metadata:
 Through table definitions: This is the default strategy. Parallel stages derive their metadata from the columns defined on the Columns tab of the Outputs or Inputs page of your stage editor.
 Through schema files: In some cases you already have a schema in your metadata repository. In such cases you can specify that the stage uses a schema file instead, by explicitly setting a property on the stage editor and specifying the name and location of the schema file.

Runtime column propagation


InfoSphere DataStage is also flexible about metadata. It can handle the situation where metadata is not completely defined. You can define part of your schema and specify that, if your job encounters extra columns that are not defined in the metadata when it actually runs, it will adopt these extra columns and propagate them through the rest of the job.
This is known as runtime column propagation (RCP). It can be enabled for a project via the Administrator client, and set for individual links via the Output page Columns tab for most stages, or the Output page General tab for Transformer stages. Note that if you want to use a schema file to define column metadata, you should ensure that runtime column propagation is turned on.

Specifying column metadata
Column metadata can be defined by using the Columns tab of the Outputs or Inputs page of the stage editor.
Steps to define column metadata:
Step 1
Double-click the stage. The stage editor opens to the Properties tab of the Input or Output page.
Step 2
Click the Columns tab and enter the column metadata, such as the column name and its type.
Step 3
In the Save Table Definition window, enter the following information:

 Data source type


 Data source name
 Table/file name
 Long description
Step 4
Click OK to specify the location where you want to save the table definition.
Step 5
Click Save to save the column definitions that you specified as a table definition object in the repository. The
definitions can then be reused in other jobs.

Importing metadata
Importing metadata from source files reduces the incidence of error and simplifies the mapping of metadata.
Importing metadata into repository –
Steps to import CSV file metadata into repository
Step 1
In the Designer client, click Import > Table Definitions > Sequential File Definitions.
Step 2
In the Import Meta Data window, import the csv file.
 In the Directory field, click the browse icon to navigate to the directory where the source files exist.
 In the File Type drop-down menu, select Comma Separated (*.csv).
 In the Files list, select the csv file.
 In the To folder field, click the browse icon. In the Select Folder window, select the Table definitions
folder that you created in specifying column metadata, and then click OK.
 In the Import Meta Data window, click Import to import metadata from the csv file.
Step 3
In the Define Sequential Metadata window, select the First line is column names check box.
Step 4

Click OK to close the Define Sequential Metadata window.


Step 5
Click Close to close the Import Meta Data window.

Loading column metadata from the repository


Steps to load column metadata from the repository
Step 1
Double-click the stage. The stage editor opens to the Properties tab of the Output page.
Step 2
Click on the Columns tab
Step 3
In the Columns tab, click Load.
Step 4
In the Table Definitions window, select the CSV file table definition that you created earlier, then click OK.

Managing metadata
The metadata repository of IBM InfoSphere Information Server stores metadata from suite tools and external
tools and databases and enables sharing among them.
You can import metadata into the repository from various sources, export metadata by various methods, and
transfer metadata assets between design, test, and production repositories.

The single metadata repository provides users of each suite tool with a common understanding of the
structure of the data that flows through the tools of the InfoSphere Information Server suite. With a shared
repository, changes that are made in one suite tool are automatically and instantly visible throughout the
suite.
The metadata repository shares, stores, and reconciles a comprehensive spectrum of metadata:

 Business metadata: Business metadata includes glossary terms, stewardship, and examples
 Operational metadata: Describes the runs of IBM InfoSphere DataStage and QualityStage jobs,
including rows written and read, and the database table or data files that are affected
 Technical metadata: Implemented data resources, including host computers, databases and data files,
and their contents. Profiling, quality, and ETL processes, projects, and users, including jobs and
projects that are created in InfoSphere DataStage and QualityStage.

Importing and exporting metadata


IBM IIS offers many methods of importing metadata assets into the metadata repository. Some methods
include the ability to export metadata from the repository to other tools, files, or databases.
Connectors, operators, and plug-ins
InfoSphere DataStage and QualityStage use connectors, operators, and plug-ins to connect to various
databases to extract, transform, and load data. In all cases, metadata about the implemented data resources,
including host, database, schemas, tables, and columns, is stored in the metadata repository for use by other
suite tools.
InfoSphere Metadata Integration Bridges
Bridges let you import metadata into the metadata repository from external applications, databases, and files, including design tools and BI tools.

Explain and Create Schemas

Schemas are an alternative way to specify column definitions for the data used by parallel jobs.
Most parallel job stages take their metadata from the Columns tab, which contains table definitions, supplemented, where necessary, by format information from the Format tab.
For some stages, you can specify a property that causes the stage to take its metadata from a specified schema file instead. Some stages also allow you to specify a partial schema. This allows you to describe only those columns that a particular stage is processing and ignore the rest.

The schema file is a plain text file. A partial schema has the same format as a full schema file.

Schema
A schema contains a record (or row) definition. This describes each column (or field) that will be encountered
within the record, giving column name and data type.
Schema format
The following is an example of a record schema:

record (
  id: int32;
  name: not nullable string[255];
  dob: date;
  address: nullable string[255];
)

The format of each line describing a column is:

 column_name: This is the name that identifies the column. Names must start with a letter or an
underscore (_), and can contain only alphanumeric or underscore characters. The name is not case
sensitive. The name can be of any length.
 Nullability: You can optionally specify whether a column is allowed to contain a null value, or whether
this would be viewed as invalid. If the column can be null, insert the word 'nullable'. By default
columns are not nullable.
 You can also include 'nullable' at record level to specify that all columns are nullable, and then override the setting for individual columns by specifying 'not nullable'. For example:

record nullable (
  name: not nullable string[255];
  address: string[255];
  dob: date;
)

 Datatype: This is the data type of the column. This uses the internal data types (see Data Types), not the SQL data types used on the Columns tabs in stage editors.

Syntax for defining columns of different data types


Integer columns

record (n:int32;) // 32-bit signed integer

record (n:nullable int64;) // nullable, 64-bit signed integer

record (n[10]:int16;) // fixed-length vector of 16-bit signed integer

record (n[]:uint8;) // variable-length vector of 8-bit unsigned int

Date columns

record (dateField1:date; ) // single date

record (dateField2[10]:date; ) // 10-element date vector

record (dateField3[]:date; ) // variable-length date vector

record (dateField4:nullable date;) // nullable date

String Columns

record (var1:string[];) // variable-length string

record (var2:string;) // variable-length string; same as string[]

record (var3:string[80];) // fixed-length string of 80 bytes

record (var4:nullable string[80];) // nullable string

record (var5[10]:string;) // fixed-length vector of strings

record (var6[]:string[80];) // variable-length vector of strings

Raw columns

record (var1:raw[];) // variable-length raw field

record (var2:raw;) // variable-length raw field; same as raw[]

record (var3:raw[40];) // fixed-length raw field

record (var4[5]:raw[40];)// fixed-length vector of raw fields

Partial schemas

Some parallel job stages allow you to use a partial schema. This means that you need only define column definitions for those columns that you are actually going to operate on.

You specify a partial schema using the Intact property on the Format tab of the stage together with the
Schema File property on the corresponding Properties tab. To use this facility, you need to turn Runtime
Column Propagation on, and provide enough information about the columns being passed through to enable
InfoSphere DataStage to skip over them as necessary.

In the file defining the partial schema, you need to describe the record and the individual columns. Describe
the record as follows:

 Intact: This property specifies that the schema being defined is a partial one. You can optionally
specify a name for the intact schema here as well, which you can then reference from the intact
property of the Format tab.
 record_length: The length of the record, including record delimiter characters.
 record_delim_string: String giving the record delimiter as an ASCII string in single quotes.

Describe the columns as follows:

 position: The position of the starting character within the record.


 delim: The column trailing delimiter, can be any of the following:
o ws to skip all standard whitespace characters (space, tab, and newline) trailing after a field.
o end to specify that the last field in the record is composed of all remaining bytes until the end
of the record.
o none to specify that fields have no delimiter.
o null to specify that the delimiter is the ASCII null character.
 text: specifies the data representation type of a field as being text rather than binary. Data is
formatted as text by default.

Example -

Suppose a sequential file defines rows comprising six fixed-width columns, and you are interested in only the last two. You know that the first four columns together contain 80 characters. Your partial schema definition would be as follows:

record { intact=details, record_delim_string = '\r\n' }
(
  colstoignore: string [80];
  name: string [20] { delim=none };
  income: uint32 { delim = ",", text };
)

SCD in Datastage

The Slowly Changing Dimension (SCD) stage is a processing stage that works within the context of a star
schema database. The SCD stage has a single input link, a single output link, a dimension reference link, and a
dimension update link.

The SCD stage reads source data on the input link, performs a dimension table lookup on the reference link,
and writes data on the output link. The output link can pass data to another SCD stage, to a different type of
processing stage, or to a fact table. The dimension update link is a separate output link that carries changes to
the dimension. You can perform these steps in a single job or a series of jobs, depending on the number of
dimensions in your database and your performance requirements.

Type-1 SCD: The Type-1 SCD methodology overwrites the old data (records) with the new data and therefore does not maintain historical information. It is typically used for correcting misspelled names and for other small updates to customer records.

Type-2 SCD: The Type-2 SCD methodology tracks complete historical information by creating multiple records for a given natural key (primary key) in the dimension table, each with a separate surrogate key or a different version number. History is preserved without limit, because a new record is inserted each time a change is made.
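As a minimal sketch of that logic, the Python fragment below expires the current dimension row for a natural key and inserts a new version with an incremented version number and a new effective date. The in-memory structure and column names are assumptions; a real DataStage job implements this with lookups, transformers, and database stages, as shown in the worked example later in this section.

from datetime import date

# Assumed in-memory dimension keyed by the natural key customer_id.
dimension = [{"surrogate_key": 1, "customer_id": "C1", "location": "Chicago",
              "rec_version": 1, "rec_effdt": "2023-01-01", "current_ind": "Y"}]

def apply_scd2(dimension, change):
    """Expire the current record for the natural key and insert a new version."""
    new_version = 1
    for row in dimension:
        if row["customer_id"] == change["customer_id"] and row["current_ind"] == "Y":
            row["current_ind"] = "N"                     # expire the old record
            new_version = row["rec_version"] + 1
    dimension.append({"surrogate_key": max(r["surrogate_key"] for r in dimension) + 1,
                      "customer_id": change["customer_id"],
                      "location": change["location"],
                      "rec_version": new_version,
                      "rec_effdt": date.today().isoformat(),
                      "current_ind": "Y"})

apply_scd2(dimension, {"customer_id": "C1", "location": "Seattle"})
for row in dimension:
    print(row)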

Type-3 SCD: The Type-3 SCD methodology maintains partial historical information.

Example for Type 1:

SCD type 1 methodology is used when there is no need to store historical data in the dimension table. This
method overwrites the old data in the dimension table with the new data. It is used to correct data errors in
the dimension.

The customer table contains the following data:

surrogate_key customer_id customer_name location


1 1 Mracell Illions

Here the customer name is misspelled: it should be Marcel instead of Mracell. With the Type 1 method, the data is simply overwritten. The updated table will contain:

surrogate_key customer_id customer_name location


1 1 Marcel Illions

The advantages of Type 1 are ease of maintenance and less space occupied. The disadvantage is that no historical data is kept in the data warehouse.

Example for Type 3:

In the Type 3 method, only the current status and the previous status of the row are maintained in the table. To track these changes, two separate columns are created in the table. The customer dimension table in the Type 3 method will look as follows:

surrogate_key customer_id customer_name current_location previous_location


1 1 Marcel Illions NULL

Let's say the customer moves from Illions to Seattle; the updated table will look as follows:

surrogate_key customer_id customer_name current_location previous_location
1 1 Marcel Seattle Illions

Now, if the customer again moves from Seattle to New York, the updated table will be:

surrogate_key customer_id customer_name current_location previous_location


1 1 Marcel New York Seattle

The Type 3 method keeps only limited history, which depends on the number of columns you create.

Example: Implementation of SCD Type2

Table Name- D_CUSTOMER_SCD2


CUST_ID CUST_NAME CUST_GROUP_ID CUST_TYPE_ID CUST_COUNTRY_ID REC_VERSION REC_EFFDT REC_CURRENT_IND
DRBOUA7 Dream Basket EL S PL 1 2006-10-01 Y
ETIMAA5 ETL tools info BI C FI 1 2006-09-29 Y
FAMMFA0 Fajatso FD S CD 1 2006-09-27 Y
FICILA0 First Pactonic FD C IT 1 2006-09-25 Y
FRDXXA2 Frasir EL C SK 1 2006-09-23 Y
GAMOPA9 Ganpa LTD. FD C US 1 2006-09-21 Y
GGMOPA9 GG electronics EL S RU 1 2006-09-19 Y
GLMFIA6 Glasithklini FD S PL 1 2006-09-17 Y
GLMPEA9 Globiteleco TC S FI 1 2006-09-15 Y
GONDWA5 Goli Airlines BN S GB 1 2006-09-13 Y

 The dimension table with customers is refreshed daily, and one of the data sources is a text file. For the purpose of this example, the record with CUST_ID=ETIMAA5 differs from the one stored in the database, and it is the only record with changed data. The file has the following structure and data:

SCD 2 - Customers file extract

 There is a hashed file (Hash_NewCust) which handles a lookup of the new data coming from the text
file.
 A T001_Lookups transformer does a lookup into a hashed file and maps new and old values to
separate columns.

SCD 2 lookup transformer

 A T002_Check_Discrepacies_exist transformer compares old and new values of records and passes
through only records that differ.

SCD 2 check discrepancies transformer

 A T003 transformer handles the UPDATE and INSERT actions for a record. The old record is updated with the current indicator flag set to no, and the new record is inserted with the current indicator flag set to yes, the record version increased by 1, and the current date.
SCD 2 insert-update record transformer

 ODBC Update stage (O_DW_Customers_SCD2_Upd) - update action 'Update existing rows only'; the selected key columns are CUST_ID and REC_VERSION, so they will appear in the WHERE clause of the constructed SQL statement.
 ODBC Insert stage (O_DW_Customers_SCD2_Ins) - insert action 'insert rows without clearing'; the key column is CUST_ID.

CUST_ID CUST_NAME CUST_GROUP_ID CUST_TYPE_ID CUST_COUNTRY_ID REC_VERSION REC_EFFDT REC_CURRENT_IND
DRBOUA7 Dream Basket EL S PL 1 2006-10-01 Y
ETIMAA5 ETL tools info BI C FI 1 2006-09-29 N
FAMMFA0 Fajatso FD S CD 1 2006-09-27 Y
FICILA0 First Pactonic FD C IT 1 2006-09-25 Y
FRDXXA2 Frasir EL C SK 1 2006-09-23 Y
GAMOPA9 Ganpa LTD. FD C US 1 2006-09-21 Y
GGMOPA9 GG electronics EL S RU 1 2006-09-19 Y
GLMFIA6 Glasithklini FD S PL 1 2006-09-17 Y
GLMPEA9 Globiteleco TC S FI 1 2006-09-15 Y
GONDWA5 Goli Airlines BN S GB 1 2006-09-13 Y
ETIMAA5 ETL-Tools.info BI C ES 2 2006-12-02 Y

