
RR ITEC

RR ITEC ODI 11g/12c Classroom Notes

By Ram Reddy

Confidential
Contents

1. Data Warehouse Concepts
2. Introduction to Oracle Data Integrator
3. ODI 12c Installation and Architecture
4. Hands On 1: Creating Master and Work Repository
5. Hands On 2: Creating Master and Work Repositories Using RCU
6. Hands On 3: Configuring RRITEC Database
6.1 Configuring Source Database
6.2 Configuring Target Database
7. Hands On 4: Creating and Managing Topology
7.1 Creating Physical Architecture
7.2 Creating Logical Architecture
7.3 Creating Contexts
8. Hands On 5: Creating Model
9. Hands On 6: Creating Project
10. Hands On 7: Markers
11. Hands On 8: Create a Mapping using Expression
12. Hands On 9: Create a Mapping using Flat File and Filter
13. Hands On 10: Create a Mapping using Split
14. Hands On 11: Create a Mapping using Joiner
15. Hands On 12: Create a Mapping using Lookup
16. Hands On 13: Create a Mapping using Sort
17. Hands On 14: Create a Mapping using Aggregate
18. Hands On 15: Create a Mapping using Distinct
19. Hands On 16: Create a Mapping using Set Component
20. Hands On 17: User Functions
21. Hands On 18: Variables
22. Hands On 19: Sequences
23. Hands On 20: Procedures
24. Hands On 21: Packages
25. Hands On 22: Scenarios
26. Hands On 23: Target Load Plan
27. Hands On 24: Agents

RR ITEC #209,Nilagiri Block,Adithya Enclave,Ameerpet @8801408841,8790998182 2

28. Hands On 25: Version Control
29. Hands On 26: Modifying Knowledge Modules
30. Hands On 27: Change Data Capture (CDC)
31. Hands On 28: Migration (Exporting and Importing)
32. Hands On 29: Security
33. Hands On 30: CKM
34. Hands On 31: Slowly Changing Dimension (SCD2)

1. Data Warehouse Concepts

Data

1. Any meaningful information is called data


2. Data is of two types:
1. Transactional Data
2. Analytical Data

Transactional Data

1. Is run-time or day-to-day data
2. Is current and detailed
3. Is useful to run the business
4. Is stored in an OLTP (Online Transaction Processing) system
5. The source of transactional data is applications
6. Examples: ATM transactions, share market transactions, etc.

Transaction Example Diagram:

Analytical Data

1. Is useful to analyze the business
2. Is historical and summarized
3. Is stored in an OLAP (Online Analytical Processing) system or DW (Data Warehouse)
4. The source of analytical data is OLTP

DW Architecture

DW Tools

1. DW tools are divided into two types, ETL and Reporting. Some of those tools are:

   ETL                 Reporting
   Informatica/ODI     OBIEE
   DataStage           Cognos
   SSIS                SSRS, SSAS
   BODI                SAP BO

OBIA

1. OBIA stands for Oracle Business Intelligence Applications.
2. OBIA is a predefined set of ETL and Reporting content.
3. Some of the important OBIA components are:

   1. SDE (Source Dependent Extraction)  OLTP to staging area
   2. SIL (Source Independent Loading)  staging area to DW
   3. DAC (Data Warehouse Administration Console)  scheduling tool for the ETLs
      (SDE and SIL)
   4. OBAW (Oracle Business Analytics Warehouse)  data model (a set of around
      1,000 tables)

   5. Prebuilt semantic layer  RPD
   6. Prebuilt reports and dashboards  Web Catalog

OLTP Vs OLAP

    OLTP                                          OLAP
1.  Is useful to store transactional data     1.  Is useful to store analytical data
2.  Is useful to run the business             2.  Is useful to analyze the business
3.  The nature of data is current and         3.  The nature of data is historical and
    detailed                                      summarized
4.  Supports CRUD (Create, Read,              4.  Supports only read
    Update, Delete)
5.  It is an application-oriented DB          5.  It is a subject-oriented DB
6.  It is volatile                            6.  It is nonvolatile
7.  Data storage time is fixed                7.  Data storage time is variant
8.  OLTP DBs are isolated per application     8.  OLAP is integrated per subject area
9.  Number of users is larger                 9.  Number of users is smaller (MM + HM)
    (customers + employees)
10. Uses a normalized schema                  10. Uses a denormalized schema

Schemas

1. A group of tables is called a schema. Schemas are of three types:

1. Star
2. Snow Flake
3. Constellation or mixed

1. Star Schema

1. Organizes data into a central fact table with surrounding dimension tables
2. Each dimension row has many associated fact rows
3. Dimension tables do not directly relate to each other
4. All dimension tables are denormalized
5. Optimized for reading data
6. User friendly and easy to understand
7. In the OBIEE BMM layer, only star schemas are used

Star Schema diagram taken from OBIEE Tool

Star Schema Fact

1. Contains business measures or metrics


2. Data is often numerical
3. Is the central table in the star

Star Schema Dimension

1. Contains attributes or characteristics about the business


2. Data is often descriptive (alphanumeric)
3. Qualifies the fact data
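The fact/dimension split above can be sketched in SQL. This is an illustrative example only — the table and column names (FACT_SALES, DIM_PRODUCT, etc.) are hypothetical, not part of the lab schema:

```sql
-- Dimension: descriptive, denormalized attributes that qualify the facts
CREATE TABLE DIM_PRODUCT (
  PRODUCT_KEY   NUMBER PRIMARY KEY,
  PRODUCT_NAME  VARCHAR2(50),
  CATEGORY      VARCHAR2(30)
);

-- Fact: numeric measures, the central table of the star
CREATE TABLE FACT_SALES (
  PRODUCT_KEY  NUMBER REFERENCES DIM_PRODUCT (PRODUCT_KEY),
  SALE_DATE    DATE,
  QUANTITY     NUMBER,
  AMOUNT       NUMBER
);

-- Typical star query: join the dimension to the fact and aggregate the measures
SELECT d.CATEGORY, SUM(f.AMOUNT) AS TOTAL_AMOUNT
FROM   FACT_SALES f
JOIN   DIM_PRODUCT d ON d.PRODUCT_KEY = f.PRODUCT_KEY
GROUP BY d.CATEGORY;
```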

Star Schema with Sample Data

Star schema is user friendly and easy to understand

Snow Flake Schema

1. Normalized tables are used
2. Is also called an extended star schema
3. Two dimension tables can be joined directly to each other
4. Like the star schema, it has only one fact table
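In a snowflake schema, a dimension is normalized into related tables, so two dimension tables join directly. An illustrative sketch (the names are hypothetical, not from the lab schema):

```sql
-- The CATEGORY attributes are split out of the product dimension
CREATE TABLE DIM_CATEGORY (
  CATEGORY_KEY   NUMBER PRIMARY KEY,
  CATEGORY_NAME  VARCHAR2(30)
);

CREATE TABLE DIM_PRODUCT (
  PRODUCT_KEY   NUMBER PRIMARY KEY,
  PRODUCT_NAME  VARCHAR2(50),
  CATEGORY_KEY  NUMBER REFERENCES DIM_CATEGORY (CATEGORY_KEY)
);

-- Dimension-to-dimension join: the defining feature of the snowflake
SELECT p.PRODUCT_NAME, c.CATEGORY_NAME
FROM   DIM_PRODUCT p
JOIN   DIM_CATEGORY c ON c.CATEGORY_KEY = p.CATEGORY_KEY;
```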

Snow Flake Schema diagram from OBIEE tool

Snow Flake Schema detail diagram

Mixed Schema

1. It contains more than one fact table with some common dimensions (conformed dimensions)
2. It is a combination of star schemas, snowflake schemas, or both

Conformed Dimensions

1. A dimension table shared by two or more fact tables is called a conformed dimension
2. The OBIA data model is created using conformed dimensions
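A conformed dimension can be sketched as follows (table names are illustrative, not from the lab schema): the same dimension table is referenced by two different fact tables, so both stars can be analyzed on identical attributes:

```sql
-- DIM_DATE is shared by two facts, making it a conformed dimension
CREATE TABLE DIM_DATE (
  DATE_KEY  NUMBER PRIMARY KEY,
  CAL_DATE  DATE,
  CAL_YEAR  NUMBER
);

CREATE TABLE FACT_SALES (
  DATE_KEY  NUMBER REFERENCES DIM_DATE (DATE_KEY),
  AMOUNT    NUMBER
);

CREATE TABLE FACT_INVENTORY (
  DATE_KEY  NUMBER REFERENCES DIM_DATE (DATE_KEY),
  ON_HAND   NUMBER
);
```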

2. Introduction to Oracle Data Integrator

1. A widely used data integration software product, Oracle Data Integrator provides a new
   declarative design approach to defining data transformation and integration processes,
   resulting in faster and simpler development and maintenance.
2. Based on a unique E-LT architecture (Extract-Load-Transform), Oracle Data
   Integrator not only guarantees the highest level of performance possible for the
   execution of data transformation and validation processes but is also the most
   cost-effective solution available today.
3. Oracle Data Integrator provides a unified infrastructure to streamline data and
   application integration projects.

The Business Problem

1. In today's increasingly fast-paced business environment, organizations need to use
   more specialized software applications; they also need to ensure the coexistence of
   these applications on heterogeneous hardware platforms and systems and guarantee
   the ability to share data between applications and systems.
2. Projects that implement these integration requirements need to be delivered on-spec,
   on-time and on-budget.

A Unique Solution

1. Oracle Data Integrator employs a powerful declarative design approach to data
   integration, which separates the declarative rules from the implementation details.
2. Oracle Data Integrator is also based on a unique E-LT (Extract - Load Transform)
architecture which eliminates the need for a standalone ETL server and proprietary
engine, and instead leverages the inherent power of your RDBMS engines. This
combination provides the greatest productivity for both development and maintenance,
and the highest performance for the execution of data transformation and validation
processes.
3. Here are the key reasons why companies choose Oracle Data Integrator for their data
   integration needs:

1. Faster and simpler development and maintenance: The declarative rules
   driven approach to data integration greatly reduces the learning curve of the
   product and increases developer productivity while facilitating ongoing
   maintenance. This approach separates the definition of the processes from their
   actual implementation, and separates the declarative rules (the "what") from the
   data flows (the "how").
2. Data quality firewall: Oracle Data Integrator ensures that faulty data is
automatically detected and recycled before insertion in the target application.
This is performed without the need for programming, following the data integrity
rules and constraints defined both on the target application and in Oracle Data
Integrator.
3. Better execution performance: traditional data integration software (ETL) is
based on proprietary engines that perform data transformation row by row thus
limiting performance. By implementing an E-LT architecture, based on your
existing RDBMS Engines and SQL, you are capable of executing data
transformations on the target server at a set-based level, giving you much higher
performance
4. Simpler and more efficient architecture: The E-LT architecture removes the
   need for an ETL server sitting between the sources and the target server. It
   utilizes the source and target servers to perform complex transformations,
   most of which happen in batch mode when the server is not busy processing end-
   user queries.
5. Platform Independence: Oracle Data Integrator supports many platforms,
hardware and OSs with the same software.
6. Data Connectivity: Oracle Data Integrator supports many RDBMSs including
   leading data warehousing platforms such as Oracle, Exadata, Teradata, IBM
   DB2, Netezza, Sybase IQ and numerous other technologies such as flat files,
   ERPs, LDAP, XML.
7. Cost-savings: The elimination of the ETL Server and ETL engine reduces both
the initial hardware and software acquisition and maintenance costs. The
reduced learning curve and increased developer productivity significantly reduce
the overall labor costs of the project, as well as the cost of ongoing
enhancements.
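The E-LT approach described in item 3 can be sketched in plain SQL (table and column names are illustrative): the source rows are first loaded unchanged into a staging table on the target server, and the transformation then runs there as one set-based statement instead of row by row in a separate engine:

```sql
-- Transform step, executed inside the target RDBMS after the load step
INSERT INTO TGT_ORDERS (ORDER_ID, CUST_NAME, TOTAL_AMT)
SELECT ORDER_ID,
       UPPER(CUST_NAME),             -- transformation pushed down to the RDBMS
       NVL(AMOUNT, 0) + NVL(TAX, 0)  -- computed for the whole set at once
FROM   STG_ORDERS;
```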

3. ODI 12c Installation and Architecture

1. Install Oracle Database 11.2.0.3/4
   http://rritec.blogspot.in/2014/09/02-oracle-database-installation.html
2. Install JDK 7
3. Download ODI 12c
   http://www.oracle.com/technetwork/middleware/data-integrator/downloads/index.html
4. Install ODI 12c
   http://rritec.blogspot.in/2014/12/odi-12c-installation-install-oracle-11.html

Architecture

Repositories:

1. Oracle Data Integrator Repository is composed of one Master Repository and several
Work Repositories.
2. ODI objects developed or configured through the user interfaces (ODI Studio) are stored
in one of these repository types.
3. The master repository stores the following information:

1. Security information including users, profiles and rights for the ODI platform
2. Topology information including technologies, server definitions, schemas,
contexts, languages and so forth.
3. Versioned and archived objects.
4. The work repository stores the following information:
   1. The work repository is the one that contains the actual developed objects. Several
      work repositories may coexist in the same ODI installation (for example, a Dev
      work repository, a Prod work repository, etc.)
   2. Models, including schema definitions, datastore structures and metadata, fields
      and columns definitions, data quality constraints, cross references, data lineage
      and so forth.
3. Projects, including business rules, packages, procedures, folders, Knowledge
Modules, variables and so forth.
4. Scenario execution, including scenarios, scheduling information and logs.
5. When the Work Repository contains only the execution information (typically for
production purposes), it is then called an Execution Repository.

ODI Studio and User Interface:

1. Administrators, Developers and Operators use the Oracle Data Integrator


Studio to access the repositories.
2. ODI Studio provides four Navigators for managing the different aspects and
steps of an ODI integration project:

1. Designer Navigator: is used to design data integrity checks and to build


transformations

2. Operator Navigator: is the production management and monitoring tool. It is
designed for IT production operators. Through Operator Navigator, you can
manage your mapping executions in the sessions, as well as the scenarios in
production.

3. Topology Navigator: is used to manage the data describing the information


system's physical and logical architecture. Through Topology Navigator you can
manage the topology of your information system, the technologies and their
data types, the data servers linked to these technologies and the schemas they
contain, the contexts, the languages and the agents, as well as the repositories.
The site, machine, and data server descriptions will enable Oracle Data
Integrator to execute the same mappings in different physical environments.

4. Security Navigator: is the tool for managing the security information in Oracle
Data Integrator. Through Security Navigator you can create users and profiles
and assign user rights for methods (edit, delete, etc) on generic objects (data
server, data types, etc.), and fine tune these rights on the object instances
(Server 1, Server 2, etc).

4. Hands on 1: Creating Master and Work Repository

Create the Master Repository

Step1: Creating Repository Schema in Database.

1. Open SQL PLUS  Type / as sysdba press enter


2. Create a user by executing the below commands:
   a. CREATE USER ODIMR IDENTIFIED BY RRitec123;
   b. GRANT DBA TO ODIMR;
   c. CONN ODIMR@ORCL
   d. Password: RRitec123
   e. SELECT COUNT(*) FROM tab;

3. Please Note that no tables are available in this schema.

Step2: Creating Master Repository.

1. Start ODI Studio

Navigate to below location and double click on ODI.exe


C:\Oracle\Middleware\Oracle_Home\odi\studio\odi.exe

2. Select the New button in the Studio tool bar

3. Select the Master Repository Creation Wizard click on ok

4. Provide below information

5. Select the Master Repository Creation Wizard click on ok
6. Click on test connection click on ok Click on next

7. Provide username and password as SUPERVISOR click on next

8. Select internal password Storage ->Click on Finish

9. Once the process is completed

10. Go back to SQL*Plus, execute "select count(*) from tab" and notice that 67
    tables are created

Create the Work Repository

Step1: Creating Work Repository Schema in Database.

1. Open SQL PLUS  Type / as sysdba press enter


2. Create a user by executing the below commands:
   a. CREATE USER ODIWR IDENTIFIED BY RRitec123;
   b. GRANT DBA TO ODIWR;
   c. CONN ODIWR@ORCL
   d. Password: RRitec123
   e. SELECT * FROM tab;

3. Please Note that no tables are available in this schema.

Step2: Creating Work Repository.

1. Start ODI Studio


2. Click on ODI menu  click on connect  Click on new
3. Provide below information

4. Click on test  click on ok  click on ok
5. Click on Topology  expand repositories section
6. Right click on work repositories  click on new work repository

7. Provide below information  Click on next

8. In the confirmation window click on Yes

9. Provide below information


1. Name : RRITEC_DEV_WORKREP
2. Password : RRitec123
3. Work Repository Type : Development

10. In the confirmation window click on No

11. In the ODIWR schema observe that 153 tables are created

Exercise: please complete \02 ODI\04 ORACLE REFERENCE\ Creating Master and
Work Repository by RCU.docx (Please do not run drop RCU Section)

5. Hands On 2: Creating master and work repositories using RCU

1. Navigate to C:\Oracle\Middleware\Oracle_Home\oracle_common\bin
2. Double click on the rcu.bat file  Click on Run
3. In welcome screen click on next
4. Select create  click on next

5. Select all below options  Click on next  Click on ignore  Click on ok

6. Select oracle data integrator

7. Click on next click on ok

8. Provide password and confirm password as RRitec123  Click on next

9. Provide the master and work repository passwords

10. Click on next  next  ok  ok  create  finish

6. Hands On 3: Configuring RRITEC database
6.1 Configuring source database

Step 1: Creating user SDBU and loading source tables into the SDBU schema

1. Open SQL PLUS  Type / as sysdba press enter


2. Create a user by executing the below commands:
   a. CREATE USER SDBU IDENTIFIED BY RRitec123;
   b. GRANT DBA TO SDBU;
   c. CONN SDBU@ORCL
   d. Password: RRitec123
   e. SELECT COUNT(*) FROM tab;
3. Go to the RRITEC lab copy  lab data folder  take the full path of Source.sql  execute it

6.2 Configuring Target database

Step 1: Creating user TDBU and loading tables into the TDBU schema

1. Open SQL PLUS  Type / as sysdba  press enter
2. Create a user by executing the below commands:
   a. CREATE USER TDBU IDENTIFIED BY RRitec123;
   b. GRANT DBA TO TDBU;
   c. CONN TDBU@ORCL
   d. Password: RRitec123
   e. SELECT COUNT(*) FROM tab;
3. Go to the RRITEC lab copy  lab data folder  take the full path of target.sql  execute
   it as shown below

7. Hands On 4: Creating and Managing Topology

The Oracle Data Integrator Topology is the physical and logical representation of the
Oracle Data Integrator architecture and components.

7.1 Creating Physical Architecture

1. The physical architecture defines the different elements of the information system, as
well as their characteristics taken into account by Oracle Data Integrator.
2. Each type of database (Oracle, DB2, etc.), file format (XML, flat file), or
   application software is represented in Oracle Data Integrator by a technology.
3. The physical components that store and expose structured data are defined as
   data servers.

4. A data server is always linked to a single technology. A data server stores


information according to a specific technical logic which is declared into physical
schemas attached to this data server.

Process:

1. Open ODI Studio


2. Go to ODI menu  Click on connect  Select login name RRITEC  Click on
   edit  Under the work repository section select work repository
   RRITEC_DEV_WORKREP  Click on test  Click on ok  Click on ok  Click on
   ok

3. Go to Physical Architecture  Right click on Oracle  Click on New Data Server

4. In Definition tab provide below information

5. In jdbc tab provide below information

6. Click on test connection Click on test click on ok

7. Close

Creating Physical Schema:

1. Right Click on RRITEC_ORCL  Click on New Physical Schema

2. Provide below information

Note: using a dedicated schema as the work schema is a best practice

3. Click on Save click on ok  close the window


4. Similarly create one more physical schema with TDBU schema

7.2 Creating Logical Architecture:

1. The logical architecture allows a user to identify, as a single logical schema, a group
   of similar physical schemas (that is, containing data stores that are structurally
   identical) but located in different physical locations.
2. Logical schemas, like their physical counterparts, are attached to a technology.
3. All the components developed in Oracle Data Integrator are designed on top of the
   logical architecture. For example, a data model is always attached to a logical schema.

Process:

1. Under logical Architecture  Right click on oracle  Click on New Logical


Schema

2. Provide below information save and close the window

3. Similarly create one more new logical schema with the name TARGET_TDBU

7.3 Creating Contexts

1. Contexts bring together:

   1. Components of the physical architecture (the real architecture) of the
      information system

   with

   2. Components of the Oracle Data Integrator logical architecture (the
      architecture on which the user works).

2. For example, contexts may correspond to different execution environments
   (Development, Test and Production) or different execution locations (Boston site,
   New York site, and so forth).

Process (optional):

1. By default, one context is created with the installation of ODI, with the name Global
2. We already mapped logical and physical schemas while creating the logical
   schemas  For confirmation, right click on the Global context  open  observe the mapping

Agents:

1. Oracle Data Integrator run-time agents orchestrate the execution of jobs. The agent
   executes jobs on demand and starts the execution of scenarios according to a
   schedule defined in Oracle Data Integrator.

Languages:

1. Languages define the languages and language elements available when editing
   expressions at design time.

8. Hands on 5: Creating Model

1. Models are the objects that will store the metadata in ODI.
2. They contain a description of a relational data model. A model is a group of datastores
   (known as tables) stored in a given schema on a given technology.
3. A model typically contains metadata reverse-engineered from the "real" data model
   (database, flat file, XML file, COBOL copybook, LDAP structure, etc.)
4. Database models can be designed in ODI. The appropriate DDLs can then be generated
by ODI for all necessary environments (development, QA, production)
5. Reverse engineering is an automated process to retrieve metadata to create or update
a model in ODI.
6. Reverse engineering is also known as RKM (Reverse-engineering Knowledge Module)
7. Reverse engineering is of two types:
1. Standard reverse-engineering
i. Uses JDBC connectivity features to retrieve metadata, then writes it to the
ODI repository.
ii. Requires a suitable driver
2. Customized reverse-engineering
   i. Reads metadata from the application/database system repository, then
      writes this metadata to the ODI repository
   ii. Uses a technology-specific strategy, implemented in a Reverse-engineering
       Knowledge Module (RKM)

8. Some other methods of reverse engineering are
1. Delimited format reverse-engineering
i. File parsing built into ODI.
2. Fixed format reverse-engineering
i. Graphical wizard, or through COBOL copybook for Mainframe files.
3. XML file reverse-engineering (Standard)
i. Uses JDBC driver for XML
9. Reverse engineering is incremental: new metadata is added and deleted metadata is removed.

Process:

1. Go to Designer Navigator  Click on New Model Folder Name it as


RRITEC_MODEL_FOLDER

2. Right click on RRITEC_MODEL_FOLDER  Click on New Model Name it as


RRITEC_MODEL

3. Provide below information  Click on save Click on reverse engineer

4. Observe all tables

5. Similarly create one more data model with name of RRITEC_TARGET_MODEL

9. Hands on 6: Creating Project

1. A project is a collection of ODI objects created by users for a particular functional
   domain. Those objects are folders, variables, etc.
2. A folder is a hierarchical grouping beneath a project and can contain other folders and
   objects.
3. Every package, mapping, reusable mapping and procedure must belong to a folder.
4. Objects cannot be shared between projects, except global objects (global variables,
   sequences, and user functions).
5. Objects within a project can be used in all folders.
6. A knowledge module is a code template containing the sequence of commands
   necessary to carry out a data integration task.
7. There are different knowledge modules for loading, integration, checking, reverse
   engineering, service and journalizing.
8. All knowledge module code is executed at run time.
9. There are six types of knowledge modules:


10. Some of the KMs are

1. LKM File to Oracle (SQLLDR)


i. Uses Jython to run SQL*LOADER via the OS
ii. Much faster than basic LKM File to SQL

2. LKM Oracle to Oracle (DBLINK)
i. Loads from Oracle data server to Oracle data server
ii. Uses Oracle DB Link
3. CKM Oracle
i. Enforces logical constraints during data load
ii. Automatically captures error records

10. Hands on 7: Markers

1. A marker is a tag that you can attach to any ODI object, to help organize your project.
2. Markers can be used to indicate progress, review status, or the life cycle of an object.
3. Graphical markers attach an icon to the object, whereas nongraphical markers attach
   numbers, strings, or dates.
4. Markers can be crucial for large teams, allowing communication among developers from
within the tool.
1. Review priorities.
2. Review completion progress.
3. Add memos to provide details on what has been done or has to be done.
4. This can be particularly helpful for geographically dispersed teams.
5. Project markers:
1. Are created in the Markers folder under a project
2. Can be attached only to objects in the same project
6. Global markers:
1. Are created in the Global Markers folder in the Others view
2. Can be attached to models and global objects

Process:

1. Under the Designer Navigator  under the project  Right click on Markers  New Marker
   Group  name it RRITEC  order 0  Add two markers as shown

2. Click on save

11. Hands On 8: Create a Mapping using expression

In this exercise, the student will create a mapping that represents the data flow between the
EMPLOYEE source and the ODS_EMPLOYEE target.

A mapping represents the dataflow between sources and targets. The instructions defined in
the mapping tell the ODI server how to read, load and transform the data.

Step1: Create a mapping

1. Under RRITEC folder Right click on mappings  Click on new mapping  name it
as m_ODS_EMPLOYEE  Click on ok
2. In Logical tab of the mapping  drag and drop employee table from
RRITEC_SOURCE_MODEL
3. Drag and drop ods_employee table from RRITEC_TARGET_MODEL

Step2: Create an Expression Transformation

An Expression transformation is useful for all calculations except aggregate calculations
(e.g., SUM, AVG, etc.)

1. From Components pane drag and drop Expression Transformation


2. Drag and drop all columns from employee source to expression Transformation
3. Select Expression transformation  In expression properties pane expand Attributes
4. Select last attribute click on new attribute  name it as FULL_NAME  data type as
varchar
5. Click on expression …

6. Develop below expression

7. Click on ok connect all columns from expression to target ods_employee

8. Click on Validate  save
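The expression screenshot in step 6 is not reproduced here. Assuming the EMPLOYEE source exposes FIRST_NAME and LAST_NAME columns (an assumption, since the screenshot is unavailable), a typical Oracle expression for the FULL_NAME attribute would be:

```sql
-- Concatenate first and last name with a space (column names assumed)
EMPLOYEE.FIRST_NAME || ' ' || EMPLOYEE.LAST_NAME
```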

Step3: Run the mapping

1. From toolbar click on run  click on ok  again ok

2. Go to operator navigator observe the session status

1. Create and populate the below table from the EMP table:


CREATE TABLE EMP_TOTAL_SAL (EMPNO NUMBER, ENAME VARCHAR2 (40),
SAL NUMBER, COMM NUMBER, TOTALSAL NUMBER)
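One way to populate EMP_TOTAL_SAL from the standard SCOTT.EMP table is sketched below; the NVL call is needed because COMM is NULL for non-salesman rows, and SAL + NULL would otherwise yield NULL:

```sql
-- Load EMP_TOTAL_SAL; NVL treats a missing commission as zero
INSERT INTO EMP_TOTAL_SAL (EMPNO, ENAME, SAL, COMM, TOTALSAL)
SELECT EMPNO, ENAME, SAL, COMM, SAL + NVL(COMM, 0)
FROM   SCOTT.EMP;
COMMIT;
```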

12. Hands On 9: Create a Mapping using Flat File and Filter

1. Flat files are of two types:


1. Fixed
2. Delimited
2. A filter is a SQL clause applied to the columns of one source data store. Only records
matching this filter are processed by the Mapping.

Step1: Create Flat File

1. Go to the path C:\Oracle\Middleware\Oracle_Home\odi\demo\RRITEC (if this path is not
   available, create it)

2. Create a notepad file with below data and save it as emp.txt Close file

Empno, sal,deptno

101,1000,10

102,2000,20

103,3000,30

Step2: Create Physical File Data Server and Physical Schema

1. Go to topology  Under physical Architecture Right click on File Click on New


Data Server

2. In Definition and JDBC tab provide below information Click on save

3. Right click on the RRITEC_FLATFILES schema  Click on New Physical Schema 
   Provide below information  Click on save

Directory (Schema): C:\Oracle\Middleware\Oracle_Home\odi\demo\RRITEC

Directory (Work Schema): C:\Oracle\Middleware\Oracle_Home\odi\demo\RRITEC

Step3: Create Logical File Schema

1. Go to topology  Under Logical Architecture Right click on File Click on New


Logical Schema  Provide name and context Click on Save

Step4: Create Model based on File Logical Schema

1. Go to Designer  Right Click on RRITEC_MODEL_FOLDER  Click on New Model


2. Provide Name ,Technology and Logical Schema  Click on Save

3. Right click on the model RRITEC_FLATFILES  Click on New Data Store

4. In the Definition tab provide Name, Alias, Resource Name  In the Files tab provide File
   Format, Heading (number of rows), Field Separator

5. In the Attributes tab  click on Reverse Engineering  Save

Step5: Create Target Table and adding to model

1. Create target table in TDBU schema

Create table temp as select empno,sal,deptno from scott.emp where 1=2;

2. Open Model RRITEC_TARGET_MODEL Click on Reverse Engineer  Save

Step6: Create Mapping

1. In Designer  under project RRITEC_PROJECT  Right Click on Mapping  New


Mapping  name it as m_FLATFILE_TO_TABLE Click on ok
2. Drag and drop emp from RRITEC_FLATFILES
3. Drag and drop Temp from RRITEC_TARGET_MODEL

4. Drag and drop Filter Transformation from Components pane
5. Drag and drop Deptno from emp to filter
6. Connect empno,sal,deptno from emp to temp

7. Select the Filter object → in the Filter Properties pane → develop the filter condition to keep deptno 10 and 20

8. Click on save  Run  observe Output
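The filter in step 7 keeps only rows whose deptno is 10 or 20. As a rough illustration of what that condition does to the emp.txt rows (a Python sketch, not code ODI generates):

```python
import csv
import io

# Simplified copy of the emp.txt data from Step 1
data = "Empno,sal,deptno\n101,1000,10\n102,2000,20\n103,3000,30\n"
rows = list(csv.DictReader(io.StringIO(data)))

# Equivalent of the filter condition EMP.deptno IN (10, 20)
kept = [r for r in rows if int(r["deptno"]) in (10, 20)]
print([r["Empno"] for r in kept])  # ['101', '102']
```

Row 103 (deptno 30) is dropped, matching what you should see in the target table.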

Exercise: please complete \02 ODI\04 ORACLE REFERENCE\ Flat File to a Table.docx

13. Hands On 10: Create a Mapping using Split

1. A Split is a Selector component that divides a flow into two or more flows based on if-
then-else conditions.
2. It is a new feature in ODI 12C
3. It is equivalent to ROUTER transformation in Informatica
4. It can be used in place of multiple filter conditions. The same row may be passed to multiple
flows.
5. We can capture rejected records into a table by selecting the Remainder option
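The routing behaviour described above can be sketched in plain Python: each row is tested against every output condition, so one row may reach several flows, and rows matching nothing fall into the remainder. The flow names and sample rows here are illustrative only.

```python
rows = [
    {"name": "A", "type_code": "SALES"},
    {"name": "B", "type_code": "FIN"},
    {"name": "C", "type_code": "TEMP"},
]

# One if-then-else style condition per output flow
conditions = {
    "sales":     lambda r: r["type_code"] == "SALES",
    "non_sales": lambda r: r["type_code"] in ("FIN", "ADMIN", "MGR"),
}

flows = {name: [] for name in conditions}
flows["remainder"] = []  # rejected records, as with the Remainder option
for r in rows:
    matched = False
    for name, cond in conditions.items():
        if cond(r):
            flows[name].append(r)
            matched = True
    if not matched:
        flows["remainder"].append(r)

print({k: [r["name"] for r in v] for k, v in flows.items()})
# {'sales': ['A'], 'non_sales': ['B'], 'remainder': ['C']}
```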

Exercise 1: Splitting Sales Non-Sales and Rookies data

The exercises in this lab are designed to walk the student through the process of using a Split
transformation.

Objectives

After completing the lab, the student will be able to:

 Understand the functionality of the Split transformation.


 Apply the Split in a mapping where multiple filter conditions are required.

Summary

In this lab, the student will develop a mapping where a row can be sent to one of three different
target tables. The mapping will use a Split transformation to group employee records based
on various criteria, then route each record to the appropriate target table.

SOURCE: ODS_EMPLOYEE

TARGETS: ODS_EMPLOYEE_SALES

ODS_EMPLOYEE_NON_SALES
ODS_EMPLOYEE_ROOKIE

The conditions for routing the records are as follows:

ODS_EMPLOYEE_SALES should contain records for employees where the TYPE_CODE
column contains the value ‘SALES’.

ODS_EMPLOYEE_NON_SALES should contain records for employees where the
TYPE_CODE column contains the value ‘FIN’, ‘ADMIN’ or ‘MGR’.

ODS_EMPLOYEE_ROOKIE should contain records for employees whose start date


(DATE_HIRED column) is less than 2 years from today’s date.

Process:

1. Create a new mapping with the name of m_Split


2. Drag and drop ODS_EMPLOYEE from RRITEC_TARGET_MODEL
3. Drag and drop ODS_EMPLOYEE_SALES, ODS_EMPLOYEE_NON_SALES,
ODS_EMPLOYEE_ROOKIE from RRITEC_TARGET_MODEL
4. Drag and drop Split Component
5. Drag and Drop Type_code Column from ODS_EMPLOYEE table to Split
6. Select Split transformation under output connector ports change names ,Connected to
and Expression as shown below

7. Drag and Drop all the columns one by one from ODS_EMPLOYEE table to
ODS_EMPLOYEE_SALES
8. Drag and Drop all the columns one by one from ODS_EMPLOYEE table to
ODS_EMPLOYEE_NON_SALES
9. Drag and Drop all the columns one by one from ODS_EMPLOYEE table to
ODS_EMPLOYEE_ROOKIE

10. Click on Validate  Save Run

Exercise: From the emp table, load deptno 10, 20 and 30 data into different tables. Please do
as shown below

14. Hands On 11: Create a Mapping using Joiner

1. A join is a selector component that creates a join between multiple flows. The attributes
of all flows are combined as the attributes of the join component.
2. The following join types can be created.

1. Inner Join (No check boxes selected)


2. Left Outer Join (Left data store checkbox selected)
3. Right Outer Join (Right data store checkbox selected)
4. Full Outer Join (Both data store checkboxes selected)
5. Cross Join (Cross checkbox selected)
6. Natural Join (Natural checkbox selected)

3. A NATURAL JOIN is a JOIN operation that creates an implicit join clause for you based
on the common columns in the two tables being joined. Common columns are columns
that have the same name in both tables.

Example : SELECT * FROM EMP NATURAL JOIN DEPT

4. NATURAL JOIN was introduced in ODI 11G.


5. To create a natural join we must select the Generate ANSI Syntax option
6. A natural join does not require any join condition
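The join types above differ in how unmatched rows are handled. A small Python sketch of inner versus left outer join on DEPTNO (sample rows are invented for illustration):

```python
# (empno, job, sal, deptno); deptno 99 has no match in dept
emp = [(7369, "CLERK", 800, 20), (7499, "SALESMAN", 1600, 30), (9999, "TEMP", 500, 99)]
dept = {20: "RESEARCH", 30: "SALES", 40: "OPERATIONS"}

# Inner join: emp rows with no matching deptno are dropped
inner = [e + (dept[e[3]],) for e in emp if e[3] in dept]

# Left outer join: unmatched emp rows are kept with a NULL (None) dname
left_outer = [e + (dept.get(e[3]),) for e in emp]

print(len(inner), len(left_outer))  # 2 3
```

Right and full outer joins behave symmetrically for unmatched DEPT rows.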

Exercise 1: Join EMP and DEPT tables

Step1: Create Target Table

1. Go to SQL Developer login as TDBU user


2. Create a table

CREATE TABLE JOINER_EMP
(
  EMPNO NUMBER,
  JOB VARCHAR2(20),
  SAL NUMBER,
  DEPTNO NUMBER,
  DNAME VARCHAR2(20)
);

3. Reverse Engineer JOINER_EMP table into RRITEC_TARGET_MODEL

Step2: Create Mapping

1. Create a mapping with the name of m_JOINER_EMP

2. Drag and drop EMP, DEPT from RRITEC_SOURCE_MODEL
3. Drag and drop JOINER_EMP from RRITEC_TARGET_MODEL
4. Drag and drop the Join component into the work area
5. Drag and drop deptno from the EMP and DEPT tables to the Join component
6. Connect Empno, Job and Sal from the EMP table to the JOINER_EMP target table
7. Connect Deptno, Dname from the DEPT table to the JOINER_EMP target table

8. Click on the Join component → select the required type of join (by default it is an Inner Join)

9. Click on Validate and save


10. Run the mapping and observe output

Exercise 2: Join EMP and DEPT tables using Right outer Join

1. In above mapping select Joiner component  Select Dept Check box

2. Select target table Select Truncate_Target_table option as true

3. Validate and save


4. Run the mapping and observe output.

15. Hands On 12: Create a Mapping using Lookup

1. Use a lookup component with one source as the driving table to look up values in another Data
Store.
2. Lookups appear as compact graphical objects in the mapping diagram.
3. Choose how each lookup is generated
a. SQL left outer join in the FROM clause: The lookup source will be left-outer-
joined with the driving source in the FROM clause of the generated SQL.
b. SQL expression in the SELECT clause: A SELECT-FROM-WHERE statement
for the lookup source will be embedded into the SELECT clause for the driving
source.
4. Lookup Works similar to Join
5. Benefits of using lookups
a. Simplifies the design and readability of interfaces using lookups
b. Enables optimized code for execution
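Functionally, the default lookup behaves like a left outer join from the driving table: every driving row survives, and a missing lookup match comes back as NULL. A rough Python sketch (sample data invented):

```python
# DEPT as the lookup data store, keyed on deptno
dept = {10: ("ACCOUNTING", "NEW YORK"), 20: ("RESEARCH", "DALLAS")}

# EMP is the driving table; deptno 99 has no match in the lookup
emp = [(7369, "CLERK", 800, 20), (9999, "TEMP", 500, 99)]

# Each driving row is kept; unmatched lookups yield (None, None)
result = [e + dept.get(e[3], (None, None)) for e in emp]
print(result[1][4:])  # (None, None) — driving row kept, lookup columns NULL
```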

Exercise 1: Lookup Dname and Loc from Dept Table

Step1: Create Target Table

1. Go to SQL Developer login as TDBU user


2. Create a table

CREATE TABLE LKP_EMP_DNAME_LOC
(
  EMPNO NUMBER,
  JOB VARCHAR2(20),
  SAL NUMBER,
  DEPTNO NUMBER,
  DNAME VARCHAR2(20),
  LOC VARCHAR2(30)
);

3. Reverse Engineer LKP_EMP_DNAME_LOC table into RRITEC_TARGET_MODEL

Step2: Create Mapping

1. Create a mapping with the name of m_LOOKUP_DNAME_LOC


2. Drag and drop Emp,Dept from RRITEC_SOURCE_MODEL
3. Drag and drop LKP_EMP_DNAME_LOC from RRITEC_TARGET_MODEL
4. Drag and drop Lookup from the components pane
5. Drag and drop deptno from the driving table EMP onto the Lookup
6. Drag and drop deptno from the lookup table DEPT onto the Lookup
7. Connect columns Empno, Job, Sal, Deptno from EMP to target LKP_EMP_DNAME_LOC
8. Connect columns Dname, Loc from DEPT to target LKP_EMP_DNAME_LOC

9. Click on Validate → Click on Save → Click on Run → observe output

16. Hands On 13: Create a Mapping using Sort

1. A sort is a selector component that sorts the incoming flow
based on a list of attributes.
2. The functionality is equivalent to the SQL ORDER BY clause.
3. Sort condition fields can be added by dragging source attributes onto the sort
component.
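The sort in the exercise below is equivalent to ORDER BY deptno, empno. In Python terms (sample rows invented):

```python
# (empno, deptno) pairs, unsorted
emp = [(7499, 30), (7369, 20), (7934, 10), (7566, 20)]

# ORDER BY deptno, empno — keys applied in the order they were dropped on Sort
emp_sorted = sorted(emp, key=lambda r: (r[1], r[0]))
print(emp_sorted)  # [(7934, 10), (7369, 20), (7566, 20), (7499, 30)]
```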

Exercise 1: Sorting emp data based on Deptno and empno

Step1: Create Target Table using ODI

1. Expand RRITEC_SOURCE_MODEL  Right Click on EMP table Click on Duplicate


2. Drag and drop Copy of EMP(EMP) onto RRITEC_TARGET_MODEL
3. Right click on Copy of EMP(EMP)  Click on open  provide name, Alias and
Resource Name as emp_sort

4. Click on save Close

Step2: Create Mapping

1. Create a mapping with the name of m_SORT_EMP


2. Drag and drop Emp from RRITEC_SOURCE_MODEL
3. Drag and drop EMP_SORT from RRITEC_TARGET_MODEL
4. Drag and drop Sort Component into work area
5. Drag and drop Deptno ,empno from EMP table to SORT
6. Connect all the ports one by one from EMP table to SORT_EMP

7. Click on Physical tab  Select target table EMP_SORT Select
CREATE_TARGET_TABLE value as True

8. Validate  Save RunObserve output

17. Hands On 14: Create a Mapping using Aggregate

1. Use to group and aggregate attributes using aggregate functions, such as average,
count, maximum, sum, and so on.
2. ODI automatically selects attributes without aggregation functions to be used as group-
by attributes.
3. You can override this by using the Is Group By Column and Manual Group By Clause
properties.
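The exercise below computes SUM(SAL) grouped by DEPTNO. The same logic in Python (sample salaries invented):

```python
from collections import defaultdict

# (deptno, sal) rows
emp = [(10, 2450), (10, 5000), (20, 800), (20, 3000), (30, 1600)]

# DEPTNO has no aggregate expression, so it becomes the group-by key;
# SAL carries the aggregate SUM(EMP.SAL)
totals = defaultdict(int)
for deptno, sal in emp:
    totals[deptno] += sal

print(sorted(totals.items()))  # [(10, 7450), (20, 3800), (30, 1600)]
```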

Exercise 1: Calculate deptno wise sal expenditure

Step1: Create Target Table

1. Go to SQL Developer login as TDBU user


2. Create a table

CREATE TABLE AGG_EMP_DEPTNO_WISE_SAL
(
  DEPTNO NUMBER,
  SAL NUMBER
);

3. Reverse Engineer AGG_EMP_DEPTNO_WISE_SAL table into RRITEC_TARGET_MODEL

Step2: Create Mapping

1. Create a mapping with the name of m_AGG_EMP_DEPTNO_WISE_SAL


2. Drag and drop Emp from RRITEC_SOURCE_MODEL
3. Drag and drop AGG_EMP_DEPTNO_WISE_SAL from RRITEC_TARGET_MODEL
4. Drag and drop Sort and Aggregate Components into work area

5. Drag and drop Deptno from EMP table to SORT
6. Drag and drop Deptno,sal from EMP table to Aggregate
7. Select the Aggregate component and change the SAL expression to SUM(EMP.SAL)

8. Drag and drop Deptno, sal from the Aggregate component to AGG_EMP_DEPTNO_WISE_SAL

9. Validate  Save RunObserve output

18. Hands On 15: Create a Mapping using Distinct

1. A distinct component is a projector component that projects a subset of attributes in the


flow.
2. The values of each row have to be unique; the behavior follows the rules of the SQL
DISTINCT clause.
3. It is a new feature in ODI 12C
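Using the Region_country data from the exercise below, DISTINCT reduces repeated (COUNTRY_ID, COUNTRY) pairs to one row each — sketched here in Python:

```python
rows = [
    (20, "South", 1, "USA"), (21, "West", 1, "USA"),
    (22, "East Coast", 1, "USA"), (23, "Mid West", 1, "USA"),
    (24, "South India", 2, "India"), (25, "North India", 2, "India"),
]

# Keep only the first occurrence of each (country_id, country) pair
seen, distinct = set(), []
for region_id, region, country_id, country in rows:
    key = (country_id, country)
    if key not in seen:
        seen.add(key)
        distinct.append(key)

print(distinct)  # [(1, 'USA'), (2, 'India')]
```

Six source rows collapse to two target rows, which is what the mapping should load.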

Exercise 1: Capturing country code and name

Step1: Create Flat File Source Data

1. Go to path C:\Oracle\Middleware\Oracle_Home\odi\demo\RRITEC (if this path not

Available create it)

2. Create a notepad file with below data and save it as Region_country.txt Close file

REGION_ID,REGION,COUNTRY_ID,COUNTRY

20,South,1,USA

21,West,1,USA

22,East Coast,1,USA

23,Mid West,1,USA

24,South India,2,India

25,North India,2,India

3. Right Click on Model RRITEC_FLATFILES  Click on New Data Store
4. In Definition tab provide Name ,Alias ,Resource Name Click on save  In Files Tab
Provide File Format ,Heading(No of Rows) ,Field Separator

5. In Attributes tab  click on Reverse Engineering Save

Step2: Create Target Table

1. Go to SQL Developer login as TDBU user


2. Create a table

CREATE TABLE DISTINCT_COUNTRY
(
  COUNTRY_ID NUMBER,
  COUNTRY_NAME VARCHAR2(50)
);

3. Reverse Engineer DISTINCT_COUNTRY table into RRITEC_TARGET_MODEL

Step3: Create Mapping

1. Create a mapping with the name of m_DISTINCT_COUNTRY

2. Drag and drop Region_Country from RRITEC_FLATFILE_MODEL
3. Drag and drop DISTINCT_COUNTRY from RRITEC_TARGET_MODEL
4. Drag and drop the DISTINCT component into the work area
5. Drag and drop Country_id, Country from the Region_Country table to DISTINCT
6. Drag and drop Country_id, Country from the DISTINCT component to DISTINCT_COUNTRY

7. Validate  Save RunObserve output

19. Hands On 16: Create a Mapping using Set Component

1. A set component is a projector component that combines multiple input flows into one
output flow by using set operation such as UNION, INTERSECT, EXCEPT, MINUS, and
others.
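The difference between UNION and UNION ALL is whether duplicate rows survive — which matters in the exercise below, since EMP1020 and EMP2030 both contain deptno 20 rows. A Python sketch:

```python
emp1020 = [(10, "KING"), (20, "FORD")]
emp2030 = [(20, "FORD"), (30, "BLAKE")]

union_all = emp1020 + emp2030           # UNION ALL keeps the duplicate (20, 'FORD')
union = list(dict.fromkeys(union_all))  # UNION removes duplicates, preserving order

print(len(union_all), len(union))  # 4 3
```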

Exercise 1: Combine using UNION set operator

Step1: Create Tables to Utilize as Source Data

1. Connect to source database Schema Scott

CREATE TABLE EMP1020 AS SELECT DEPTNO,ENAME,JOB, SAL FROM EMP


WHERE DEPTNO IN ( 10,20 );

CREATE TABLE EMP2030 AS SELECT DEPTNO,ENAME,JOB, SAL FROM EMP


WHERE DEPTNO IN ( 20,30 );

2. Reverse Engineer emp1020 and emp2030 tables into RRITEC_SOURCE_MODEL

Step2: Create Target Table

1. Go to SQL Developer login as TDBU user


2. Create a table

CREATE TABLE EMP_UNION102030 AS

SELECT DEPTNO,ENAME,JOB, SAL FROM SCOTT.EMP WHERE 1 = 2 ;

3. Reverse Engineer EMP_UNION102030 table into RRITEC_TARGET_MODEL

Step3: Create Mapping


1. Create a mapping with the name of m_EMP_UNION102030
2. Drag and drop emp1020 and emp2030 from RRITEC_SOURCE_MODEL
3. Drag and drop EMP_UNION102030 from RRITEC_TARGET_MODEL
4. Drag and drop SET Components into work area
5. Select SET and change properties as shown

6. Drag and drop DEPTNO,ENAME,JOB, SAL from EMP1020 table to SET


7. Drag and drop DEPTNO,ENAME,JOB, SAL from EMP2030 table to corresponding
columns of SET
8. Drag and drop DEPTNO, ENAME, JOB, SAL from the SET component to EMP_UNION102030

9. Validate  Save RunObserve output

20. Hands on 17: User Functions

1. A user function is a cross-technology macro defined in a lightweight syntax, used to create an
alias for a recurrent piece of code
2. Parameters are case sensitive
3. Functions are two Types
a. Global functions can be used anywhere in any project.
b. Project functions can be used within their project.
4. A simple Example formula:
a. If <param1> is null then <param2> else <param1> end if
i. Can be implemented differently in different technologies:
1. Oracle
a. nvl(<param1>, <param2>)
2. Other technologies:
a. case when <param1> is null then <param2> else
<param1> end
b. And could be aliased to:
i. RemoveNull(<param1>, <param2>)
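A user function is essentially a per-technology text substitution. A minimal Python sketch of how the RemoveNull alias above could expand — the $(param) marker style mirrors ODI's syntax, but the expansion code itself is a hypothetical illustration:

```python
# One implementation per technology, as in the RemoveNull example above
implementations = {
    "oracle": "nvl($(param1), $(param2))",
    "generic": "case when $(param1) is null then $(param2) else $(param1) end",
}

def expand(tech, p1, p2):
    # Parameter names are case sensitive, as noted above
    sql = implementations[tech]
    return sql.replace("$(param1)", p1).replace("$(param2)", p2)

print(expand("oracle", "COMM", "0"))  # nvl(COMM, 0)
```

At code-generation time ODI picks the implementation matching the target technology and substitutes the arguments.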

Exercise 1: Converting dname Descriptions into codes

Step1: Create User Function

1. Under RRITEC_PROJECT  right click on user functions  Click on New User


Function

2. Provide Name ,Group and Syntax

3. Click on Implementations  Click on Add Implementation Provide below syntax and


select technology Click on ok Save

DECODE($(DNAME),'ACCOUNTING','ACC','RESEARCH','R and
D','OPERATIONS','OPT','SALES')

Step2: Creating Target Table

1. Expand RRITEC_SOURCE_MODEL Duplicate dept table Drag and drop copy of


dept onto RRITEC_TARGET_MODEL Right click on copy of dept open Change
name ,alias and resource name as DEPT_CODES

Step 3: Utilizing User Function in a mapping

1. Create a mapping with the name of m_user_function


2. Drag and drop dept as source and dept_codes as target
3. Connect deptno ,loc of source and target
4. Select dname column of target click on expression and develop as shown below

5. Click on physical tab  select target table under integration knowledge module 
select create table property as true
6. Click on validate save  run and observe output

Exercise 2: Converting given Phone Number in to smart phone format

Step1: Creating Flat File and reverse engineer into Source Flat File Model

1. Develop a flat file as shown and save as PHONE.txt

2. Reverse engineer into source flat file model

Step2: Creating Target table and reverse engineer into Target Model

1. Develop a target table as shown below

create table RR_PHONE_FORMAT(ENAME VARCHAR2(20),PHONE VARCHAR2(20))

2. Reverse engineer into target model

Step3: Creating project level user function

1. Under RRITEC_PROJECT right click user functions  new user function


2. Provide name ,group and syntax as shown below

3. Click on Implementations tab → click on Add Implementation → provide below formula and
select technology

'(' || SUBSTR($(PHONE),1,3)||') - ' ||SUBSTR($(PHONE),4,3) || '-' ||

SUBSTR($(PHONE),7,6)

4. Click on save and close
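The SUBSTR formula above can be checked outside ODI. A Python re-implementation (remember that SQL SUBSTR positions are 1-based):

```python
def substr(s, pos, length):
    # SQL SUBSTR with 1-based positions
    return s[pos - 1:pos - 1 + length]

def phone_format(phone):
    return ("(" + substr(phone, 1, 3) + ") - "
            + substr(phone, 4, 3) + "-" + substr(phone, 7, 6))

print(phone_format("4081234567"))  # (408) - 123-4567
```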

Step4: using user function in interface

1. Create a mapping with the name of m_ PHONE_FORMAT


2. Drag and drop phone source table from RRITEC_FILE_MODEL
3. Drag and drop RR_PHONE_FORMAT target table from RRITEC_TARGET_MODEL
4. Connect all corresponding columns

5. Connect Ename of source to ENAME of target
6. Select target phone column and call user function

7. Select proper LKM and IKM


8. Save  run

21. Hands on 18: Variables

1. Variable – An ODI object which stores a typed value, such as a number, string or date.
2. Variables are used to customize transformations as well as to implement control
structures, such as if-then statements and loops, into packages.
3. Variables are two types
a. Global
b. Project Level
4. To refer to a variable, prefix its name according to its scope:
a. Global variables: GLOBAL.<variable_name>
b. Project variables: <project_code>.<variable_name>
5. Variables are used either by string substitution or by parameter binding.
a. Substitution: #<project_code>.<variable_name>
b. Binding: :<project_code>.<variable_name>
6. Variable steps used in packages are of four types
a. Declare
b. Set
c. Evaluate
d. Refresh
7. Variables can be used in packages in several different ways, as follows:
a. Declaration: When a variable is used in a package (or in certain elements of the
topology which are used in the package), it is strongly recommended that you
insert a Declare Variable step in the package. This step explicitly declares the
variable in the package.
b. Refreshing: A Refresh Variable step allows you to re-execute the command or
query that computes the variable value.
c. Assigning: A Set Variable step of type Assign sets the current value of a
variable.
d. Incrementing: A Set Variable step of type Increment increases or decreases a
numeric value by the specified amount.
e. Conditional evaluation: An Evaluate Variable step tests the current value of a
variable and branches depending on the result of the comparison.
8. Variables also can be used in expressions of mappings, procedures and so forth.
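The substitution style (#) rewrites the command text before it is sent to the database, whereas binding (:) passes the value as a query parameter. A sketch of the substitution case, using a variable name from the exercises below:

```python
# Variable values as ODI would hold them at run time
variables = {"RRITEC_PROJECT.DEPTNO10": 10}

template = "SELECT * FROM EMP WHERE deptno = #RRITEC_PROJECT.DEPTNO10"

# Substitution: the marker is replaced by the value before execution
sql = template
for name, value in variables.items():
    sql = sql.replace("#" + name, str(value))

print(sql)  # SELECT * FROM EMP WHERE deptno = 10
```

With binding, the database would instead receive the text with a placeholder plus the value 10 as a separate parameter.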

Exercise 1: Filter DEPTNO10 using default value/static assignment

Step 1: Create a static variable

1. Under RRITEC_PROJECT Right click on Variables  new variable  name it as


DEPTNO10 Data type as Numeric and default value as 10

2. Click on Save  Click on Close

Step 2: Using static variable as filter

1. Open m_FLATFILE_TO_TABLE mapping select filter condition change condition as


EMP.deptno =#RRITEC_PROJECT.DEPTNO10

2. Truncate target table


3. Click on validate  Click on save click on Run

Exercise 2: Filter MIN DEPTNO using SQL Query result

Step 1: Create a refreshable variable

1. Under RRITEC_PROJECT Right click on Variables  new variable  name it as


MINDEPTNO Data type as Numeric
2. Click on refreshing tab  under sql query type select min(deptno) from scott.emp 
select schema as source Click on test query

3. Save close

Step 2: Using variable as filter

1. Open m_FLATFILE_TO_TABLE mapping select filter condition change condition


as EMP.deptno =#RRITEC_PROJECT.MINDEPTNO

2. Truncate target table
3. Click on validate  Click on save click on Run
4. Notice that no data is loaded into the target because the variable has no value. From this we
understand that the variable's refresh query does not execute as part of the mapping run

Exercise 3: Filter MIN DEPTNO using refresh variable type of package

Step 1: Creating package

1. Under RRITEC_PROJECT Right click on package  new package  name it as


PKG_VARIABLE_MAPPING
2. Drag and drop MINDEPTNO variable and m_FLATFILE_TO_TABLE mapping
3. From the toolbar click on next step on success drag and drop from variable to
mapping

4. Select variable select type as refresh variable


5. Click on save  click on run  and observe that target table loaded with 10 deptno data

22. Handson 19: Sequences

1. A Sequence is a variable that increments itself automatically each time it is used.


2. It is mainly useful to generate surrogate Keys
3. It is equivalent to Sequence generator transformation of informatica
4. A sequence can be created as a global sequence or in a project. Global sequences are
common to all projects, whereas project sequences are only available in the project
where they are defined.
5. Oracle Data Integrator supports three types of sequences:
1. Standard sequences: whose current values are stored in the Repository.
2. Specific sequences: whose current values are stored in an RDBMS table cell.

Oracle Data Integrator reads the value, locks the row (for concurrent updates)
and updates the row after the last increment.

3. Native sequence: that maps a RDBMS-managed sequence.


6. We can also use a database sequence directly, without using the ODI sequence types
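Conceptually, a sequence is just a counter consumed once per row to mint surrogate keys. A Python sketch of assigning ROW_WID values during a load:

```python
import itertools

# Auto-incrementing counter standing in for a sequence (start with 1, increment by 1)
seq = itertools.count(start=1)

rows = ["SMITH", "ALLEN", "WARD"]
loaded = [(next(seq), name) for name in rows]

print(loaded)  # [(1, 'SMITH'), (2, 'ALLEN'), (3, 'WARD')]
```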

Exercise 1: Create Sequence by using directly database sequence

Step1: Create a sequence in database

1. Open SQL Developer → connect to TDBU schema → execute below code

Create sequence EMPID_SEQUENCE
minvalue 1
maxvalue 9999
start with 1
increment by 1;
Step2: Create Target Table

1. Open SQL Developer → connect to TDBU schema → execute below code

CREATE TABLE SEQ_EMP AS

SELECT 1 AS ROW_WID , EMP.* FROM SCOTT.EMP WHERE 1=2

2. Reverse Engineer and import table into RRITEC_TARGET_MODEL

Step3: Create Mapping

1. Create a mapping with the name of m_DATABASE_SEQ


2. Drag and drop emp from RRITEC_SOURCE_MODEL
3. Drag and drop SEQ_EMP from RRITEC_TARGET_MODEL
4. Connect all corresponding columns from EMP table to SEQ_EMP
5. Select ROW_WID and provide below expression

6. Validate  Save RunObserve output

Exercise 2: Create Sequence by using native sequence

Step1: Create Native Sequence

1. Under RRITEC_PROJECT  Right click on sequence  Click on New Sequence

2. Provide Name: NAT_SEQ_EMP_ROW_WID
3. Select Native sequence  select Schema target_tdbu Select native sequence name
as EMPID_SEQUENCE

4. Click on save

Step2: Using Native Sequence

1. In the above mapping m_DATABASE_SEQ, select the ROW_WID column


2. Change expression as shown below

3. Click on save  Click on Run observe output

23. Handson 20: Procedures

1. Procedure is a sequence of commands/Tasks executed by database engines or the


operating system, or using ODI Tools.
2. A procedure can have options that control its behavior.
3. Procedures are reusable components that can be inserted into packages.

Procedure Examples:

1. Email Administrator procedure:


a. Uses the “OdiSendMail” ODI tool to send an administrative email to a user. The
email address is an option.
2. Clean Environment procedure:
a. Deletes the contents of the /temp directory using the “OdiFileDelete” tool
b. Runs DELETE statements on these tables in order: CUSTOMER, CITY,
REGION, and COUNTRY
3. Create and populate RDBMS table:
a. Run SQL statement to create an RDBMS table.
b. Run SQL statements to populate table with records.
4. Initialize Drive procedure:
a. Connect to a network drive using either a UNIX or Windows command
(depending on an option).
b. Create a /work directory on this drive.
5. Email Changes procedure:
a. Wait for 10 rows to be inserted into the INCOMING table.
b. Transfer all data from the INCOMING table to the OUTGOING table.
c. Dump the contents of the OUTGOING table to a text file.
d. Email this text file to a user.

Commands or ODI objects that can be used in ODI procedures:

1. SQL Statements
2. OS Commands
3. ODI Tools
4. JYTHON Programs
5. Variables
6. Sequences
7. User Functions
8. …etc

Using ODI Tools

1. Example of ODI tools are


1. FILES Related
i. OdiFileAppend,OdiFileCopy,OdiFileDelete,OdiFileMove,OdiFileWait,Odi
MkDir,OdiOutFile,OdiSqlUnload,OdiUnZip,OdiZip
2. Internet Related
i. OdiSendMail,OdiFtpGet,OdiFtpPut,OdiReadMail,OdiScpGet,OdiScpPut,
OdiSftpGet,OdiSftpPut
3. .. etc (For more ODI TOOLS see package screen)

Let us see about OdiFileCopy

OdiFileCopy Syntax

OdiFileCopy -DIR=<dir> -TODIR=<dest_dir> [-OVERWRITE=<yes|no>] [-


RECURSE=<yes|no>] [-CASESENS=<yes|no>]

OdiFileCopy -FILE=<file> -TOFILE=<dest_file>|-TODIR=<dest_dir> [-


OVERWRITE=<yes|no>] [-RECURSE=<yes|no>] [-CASESENS=<yes|no>]

Examples

1. Copy the file "hosts" from the directory /etc to the directory /home:
a. OdiFileCopy -FILE=/etc/hosts -TOFILE=/home/hosts
2. Copy all *.csv files from the directory /etc to the directory /home and overwrite:
a. OdiFileCopy -FILE=/etc/*.csv -TODIR=/home -OVERWRITE=yes
3. Copy all *.csv files from the directory /etc to the directory /home while changing their
extension to .txt:
a. OdiFileCopy -FILE=/etc/*.csv -TOFILE=/home/*.txt -OVERWRITE=yes
4. Copy the directory C:\odi and its sub-directories into the directory C:\Program Files\odi
a. OdiFileCopy -DIR=C:\odi "-TODIR=C:\Program Files\odi" -
RECURSE=yes
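A rough Python equivalent of example 2 (copy all *.csv files to another directory, overwriting any existing copies), using temporary directories so the sketch is self-contained:

```python
import glob
import os
import shutil
import tempfile

src_dir = tempfile.mkdtemp()
dst_dir = tempfile.mkdtemp()
with open(os.path.join(src_dir, "a.csv"), "w") as f:
    f.write("empno,sal\n")

# Equivalent of: OdiFileCopy -FILE=<src_dir>/*.csv -TODIR=<dst_dir> -OVERWRITE=yes
for path in glob.glob(os.path.join(src_dir, "*.csv")):
    shutil.copy(path, dst_dir)  # shutil.copy overwrites an existing destination file

print(os.listdir(dst_dir))  # ['a.csv']
```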

Exercise 1: Delete target table data

1. Under RRITEC_PROJECT Right Click on procedure Click on new procedure


2. Name it as Delete target table data Click on Tasks/Details
3. Click on add  Provide below information

1. Name : Delete EMP10
2. LogCounter :Delete
3. Technology :Oracle
4. Context : Global
5. Schema : Target_TDBU
6. Command : delete from <%=snpRef.getObjectName("L","EMP10","D")%>

Note:

"L": uses the local object mask to build the complete path of the object.

"D": returns the complete name of the object in the physical catalog and the data physical
schema that corresponds to the specified tuple (context, logical schema).

4. Similarly Create two more tasks to delete from EMP20 and EMP30

5. Click on save Run observe that the tables data deleted

Exercise 2: Execute/Call a PL/SQL Procedure

Step1: Creating Table

1. Open SQL Developer  create EMP_PROC table in TDBU schema

CREATE TABLE EMP_PROC
(
  EMPNO NUMBER(10,0),
  ENAME VARCHAR2(30),
  SAL NUMBER(10,0)
);

Step 2: Create PL/SQL procedure

CREATE OR REPLACE PROCEDURE INSERT_RECORD_PROC

AS

BEGIN

INSERT INTO EMP_PROC VALUES (101,'RAM',5000);

END;

Step3: Creating ODI Procedure

1. Under RRITEC_PROJECT Right Click on procedure Click on new procedure


2. Name it as INSERT_RECORD_USING_PROCEDURE → Click on Tasks
3. Click on add  Provide below information
a. Name : INSERT RECORD
b. LogCounter :Insert
c. Technology :Oracle
d. Context : Global
e. Schema : Target_TDBU
f. Command : BEGIN TDBU.INSERT_RECORD_PROC(); END;

4. Click on save → Run → observe that the record is inserted in table EMP_PROC

Exercise: Create a target table with the name EMP_TOTALSAL_TAX with columns empno,
ename, sal, comm, totalsal, tax and populate the totalsal and tax columns using a stored
procedure

Totalsal = sal + nvl(comm, 0)

Tax = totalsal * 0.1

Exercise 3: Copy files from one folder to another folder using OdiFileCopy

Step1: Understanding Procedure Options and Disable PL/SQL procedure

1. Open the above procedure INSERT_RECORD_USING_PROCEDURE


2. Click on Options → click on Add Option → provide the name as shown below

(In 11g, in the Projects window, right click on the procedure → New Option. In 11g the
Boolean option type is not available, hence select the Check Box option.)

3. Click on save click on tasksuncheck always Execute

Step2: Create a task to copy files

1. Create one more task with the name of CopyFile


2. In Target Command section select technology as ODI TOOLS
3. Provide below expression in command section

OdiFileCopy -FILE=/Oracle/Middleware/Oracle_Home/odi/demo/RRITEC/*.txt -
TODIR=/Oracle/Middleware/Oracle_Home/odi/demo -OVERWRITE=yes

4. Click on yes runobserve files are moved or not

24. Handson 21: Packages

1. A package is an organized sequence of steps that makes up a workflow. Each step
performs a small task, and the steps are combined together to make the package.
2. A simple package example:
a. This package executes three mappings, and then archives some files.
b. If one of the four steps fails, an email is sent to the administrator.

Exercise 1: Running two mappings and archiving .txt files

1. Under RRITEC_PROJECT  Right click on package Click on new package


2. Name it as pkg_MAPPINGS_ARCHIVE_FILES
3. Drag and drop two mappings (m_ODS_EMPLOYEE , m_FLATFILE_TO_TABLE) and
OdiFileMove
4. Connect all steps as shown below

5. Select OdiFileMove and change Filename and Target Directory as shown below

6. Save and run the package

25. Handson 22: Scenarios

1. A scenario is designed to put a source component (mapping, package, procedure,


variable) into production.
2. When a component's development is finished and tested, you can generate the scenario
corresponding to its actual state. This operation takes place in Designer Navigator.
3. The scenario code (the language generated) is frozen, and all subsequent modifications
of the components which contributed to creating it will not change it in any way.

4. It is possible to generate scenarios for packages, procedures, mappings, or variables.


5. Scenarios generated for procedures, mappings, or variables are single step scenarios

that execute the procedure, mapping, or refresh the variable.

6. Once generated, the scenario is stored inside the work repository. The scenario can be
exported and then imported into another repository (remote or not) and used in different
contexts. A scenario can only be created from a development work repository, but can
be imported into both development and execution work repositories.

Exercise 1: Creating Scenario, exporting Scenario and importing scenario

1. Right click on package pkg_MAPPINGS_ARCHIVE_FILES  click on generate


scenario
2. Select scenario PKG_MAPPINGS_ARCHIVE_FILES version 001  Click on export 
save on to desktop
3. Assume that by accident we deleted the scenario (or that we need to import it into
another environment)
4. Then right click on Scenarios → click on Import → point to the previously exported file → click on ok

Note: Regenerate is useful to update the scenario with the new code of the ODI object

Synchronization mode of the scenario:

1 - Synchronous (default). Execution of the session waits until the scenario has terminated;

2 - Asynchronous. Execution of the session continues without waiting for the called session.
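For example, when a package calls this scenario through the OdiStartScen tool, the mode is set with the SYNC_MODE parameter — a sketch using the scenario generated above (see the ODI tools reference for the full parameter list):

```text
OdiStartScen "-SCEN_NAME=PKG_MAPPINGS_ARCHIVE_FILES" "-SCEN_VERSION=001" "-SYNC_MODE=2"
```

With SYNC_MODE=2 the calling session continues immediately; with the default SYNC_MODE=1 it waits for the scenario to finish.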

26. Handson 23: Load Plan

1. This feature was introduced in ODI 11.1.1.5


2. A Load Plan is an executable object in Oracle Data Integrator that can contain a hierarchy of
steps that can be executed conditionally, in parallel and in series. The leaves of this hierarchy
are Scenarios.
3. Packages, interfaces, variables, and procedures can be added to Load Plans for execution in the
form of scenarios
4. Load Plans can be scheduled using the run-time agent's built-in scheduler or an external scheduler

Differences between Packages and Load Plans

1. A Load Plan is the largest executable object in Oracle Data Integrator. It uses Scenarios in its
steps. When an executable object is used in a Load Plan, it is automatically converted into a
scenario. For example, a package is used in the form of a scenario in Load Plans. Note that Load
Plans cannot be added to a Load Plan. However, it is possible to add a scenario in form of a Run
Scenario step that starts another Load Plan using the OdiStartLoadPlan tool.
2. Load plans are not substitutes for packages but are used to organize at a higher level the
execution of packages.
3. Unlike packages, Load Plans provide native support for parallelism, restartability and exception
handling. Load plans are moved to production as is, whereas packages are moved in the form of
scenarios. Load Plans can be created in Production environments.
4. Unlike Load Plan, Packages are useful to handle loops.

Process
Creating Load Plan
1. In Designer Navigator → click on Load Plans and Scenarios → click on New Load Plan → name it
as LP → click on Steps → go to new step → add a parallel step → drag and drop the
m_totalsal_tax and m_flatfile_to_table interfaces
2. To understand serial mode → select root_step → add new step → select serial step → drag and
drop any two interfaces/packages, etc.

4. Save → click on run (note: without an agent we cannot run a load plan)
5. We can monitor load plans from the Operator tab.


Nested load plans

Step1 : Create a package to call load plan


1. In Designer → right click on Packages → New Package → name it as calling load plan → click on
the Diagram tab → in the Toolbox → expand ALL → click on OdiStartLoadPlan → click on the work area →
click on the normal cursor → select the object OdiStartLoadPlan → in Load Plan Name select the
load plan LP → click on save
Step2 : Create another load plan

1. In Designer Navigator → click on Load Plans and Scenarios → click on New
Load Plan → name it as LP1 → click on Steps → drag and drop the package calling
load plan onto root_step → save and run it
Agents

1. Agents are useful to schedule load plans, scenarios, etc.

2. Agents are of three types:
a. Completely standalone agent (no WLS installed; this is the type used in
these notes.)
b. Standalone collocated agent (WLS is installed, but is not being used. The
agent is started as its own binary.)
c. JEE agent (WLS installed and used. The domain, admin server, managed
servers, node managers, and so on are started in the normal WLS way. The
agent is one of many possible JEE apps running in the domain. )

Exercise 1 : Creating Stand alone Agent

Step 1 : Creating Physical Agent


1. open ODI studio and connect using SUPERVISOR user
2. Go to topology tab → in physical Architecture right click on Agents → Click on new
agent name it as OracleDIAgent → click on save → Click on close


Step 2 : Creating Logical Agent and Mapping Physical and logical agents in context
1. Go to topology tab → in LogicalArchitecture right click on Agents → Click on new
agent name it as OracleDIAgent → under physical agent select OracleDIAgent
opposite to development context → click on save → Click on close


Step 3 : Modifying parameter file odiparams.bat


1. Take backup of file C:\OBIEE_HOME\Oracle_ODI1_agent\oracledi\agent\bin
\odiparams.bat
2. Change Repository Connection Information
a. Let us say Master repository database name : ORCL
b. username : B109ODIMR
c. password : RRitec123
3. Encode the password of the master repository
4. Open CMD → and execute as shown

6. Change the odiparams.bat file repository section as shown

8. Change User credentials for agent startup program
9. Let us say ODI username : SUPERVISOR and password : RRitec123
10. As we already encoded the RRitec123 password, we can use the same encoded value
11. Change the odiparams.bat file user credentials section as shown

13. Change work repository section as shown

15. save and close
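As a sketch, the encode call and the resulting odiparams.bat entries look roughly as follows. The variable names are from ODI 11g's odiparams.bat and may differ slightly between versions; the placeholders in angle brackets stand for your own encoded output and repository name:

```bat
REM Run from the agent's bin directory to encode a password
encode RRitec123

REM Paste the encoded output into the matching odiparams.bat entries:
set ODI_MASTER_DRIVER=oracle.jdbc.OracleDriver
set ODI_MASTER_URL=jdbc:oracle:thin:@localhost:1521:ORCL
set ODI_MASTER_USER=B109ODIMR
set ODI_MASTER_ENCODED_PASS=<encoded password>
set ODI_SUPERVISOR=SUPERVISOR
set ODI_SUPERVISOR_ENCODED_PASS=<encoded password>
set ODI_SECU_WORK_REP=<work repository name>
```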

Step 4 : Running and Testing Agent

1. Go to location C:\OBIEE_HOME\Oracle_ODI1\oracledi\agent\bin
2. double click on agent.bat file
3. Observe cmd prompt

5. If the agent is not running, debug using the log file


27. Handson 25: Version Control

1. Oracle Data Integrator provides a comprehensive system for managing and


safeguarding changes.
2. It also allows these objects to be backed up as stable checkpoints, and later restored
from these checkpoints.
3. These checkpoints are created for individual objects in the form of versions, and for
consistent groups of objects in the form of solutions.

Note: Version management is supported for master repositories installed on database engines
such as Oracle, Hypersonic SQL, and Microsoft SQL Server.

4. A version is a backup copy of an object. It is checked in at a given time and may be


restored later.
5. Versions are saved in the master repository. They are displayed in the Version tab of the
object window.
6. The following objects can be checked in as versions

1. Projects, Folders
2. Packages, Scenarios
3. Mappings (including Reusable Mappings), Procedures, Knowledge Modules
4. Sequences, User Functions, Variables
5. Models, Model Folders
6. Solutions
7. Load Plans

Exercise 1: Creating version on package/procedure...Etc, modifying, creating one


more version, restoring to first version

1. Right click on package pkg_MAPPINGS_ARCHIVE_FILES Click on version  click on


Create version click on ok
2. Open the package pkg_MAPPINGS_ARCHIVE_FILES  change filename as shown
below

3. Save close
4. Right click on package pkg_MAPPINGS_ARCHIVE_FILES Click on version  click on
Create version click on ok
5. Now if you want to restore the initial version (1.0.0.0)  Right click on package
pkg_MAPPINGS_ARCHIVE_FILES Click on Restore  Select 1.0.0.0 version click
on ok
6. Open package observe the initial code

Comparing versions

1. Right click on m_ODS_EMPLOYEE mapping  click on version  Click on create


version click on ok
2. Open the mapping → select the expression → change the full name formula by introducing “-”
3. Save  close
4. Right click on m_ODS_EMPLOYEE → Version → Compare with version → click on ok
5. Uncheck the Deleted, New, and Unchanged check boxes and observe the differences

28. Handson 26: Modifying Knowledge Modules

1. Knowledge Modules are templates of code that define integration patterns and their
implementation
2. They are usually written to follow Data Integration best practices, but can be modified for
project specific requirements
3. There are six types:

4. RKM (Reverse Knowledge Modules) are used to perform a customized Reverse


engineering of data models for a specific technology. These KMs are used in data
models.
5. LKM (Loading Knowledge Modules) are used to extract data from source systems (files,
middleware, database, etc.). These KMs are used in Mappings.
6. JKM (Journalizing Knowledge Modules) are used to create a journal of data
Modifications (insert, update and delete) of the source databases to keep track of the
changes. These KMs are used in data models and used for Changed Data Capture.
7. IKM (Integration Knowledge Modules) are used to integrate (load) data to the target
tables. These KMs are used in Mappings.

8. CKM (Check Knowledge Modules) are used to check that constraints on the sources
and targets are not violated. These KMs are used in data model’s static check and
interfaces flow checks.
9. SKM (Service Knowledge Modules) are used to generate the code required for creating
data services. These KMs are used in data models.

Commonly used KMs

1. When processing happens between two data servers, a data transfer KM is required.
a. Before integration (Source  Staging Area)

 Requires an LKM, which is always multi-technology

b. At integration (Staging Area  Target)

 Requires a multi-technology IKM

2. When processing happens within a data server, it is entirely performed by the server.
a. A single-technology IKM is required.
3. LKMs and IKMs can be used in four possible ways

4. Normally we never create new KMs, but sometimes we may need to modify the existing
KMs
5. While modifying KMs, duplicate existing steps and modify them. This prevents typos in
the syntax of the odiRef methods.

Exercise 1: Incremental Loading

1. Nonexistent rows are inserted; already existing rows are updated.

2. If we delete a record in the source, it won’t be deleted in the target
3. For incremental update, the target must mandatorily have a unique/primary key
column
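Conceptually, the result of an incremental update is the same as a SQL MERGE keyed on the target's primary key. The sketch below uses the EMP source and EMP_INCR_MKT target of this exercise; note that the actual IKM works through an I$ flow table and separate insert/update steps, not a single MERGE:

```sql
MERGE INTO EMP_INCR_MKT t
USING (SELECT EMPNO, ENAME, SAL FROM EMP) s
ON (t.EMPNO = s.EMPNO)
WHEN MATCHED THEN
  UPDATE SET t.ENAME = s.ENAME, t.SAL = s.SAL
WHEN NOT MATCHED THEN
  INSERT (EMPNO, ENAME, SAL) VALUES (s.EMPNO, s.ENAME, s.SAL);
-- Note: rows deleted from EMP are NOT removed from EMP_INCR_MKT
```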

Step1: Create target table

1. Open SQL Developer  create EMP_INCR_MKT table in TDBU schema

CREATE TABLE EMP_INCR_MKT
(
  EMPNO NUMBER PRIMARY KEY,
  ENAME VARCHAR2(30),
  SAL   NUMBER
);

2. Reverse Engineer and Import table EMP_INCR_MKT into RRITEC_TARGET_MODEL

Step2: Importing Required KM

1. Under RRITEC_PROJECT  Under Knowledge Modules Right Click on


integration(IKM)
2. Click on import knowledge Modules  provide xml reference path and select IKM
Oracle Incremental Update

3. Click on ok

Step3: Create a mapping for incremental loading

1. Create a mapping with the name m_EMP_INCR_LOADING


2. Drag and drop EMP from RRITEC_SOURCE_MODEL
3. Drag and drop EMP_INCR_MKT from RRITEC_TARGET_MODEL
4. Connect corresponding columns
5. In the Logical tab → select the target table → select the integration type as Incremental Update

6. Click on the Physical tab → select the target table EMP_INCR_MKT → select the Integration
Knowledge Module as IKM Oracle Incremental Update → set
FLOW_CONTROL=False

7. Click on validate click on save Click on run


8. To understand incremental update, make the below changes:
a. Add one new record into the source table
b. Modify an existing record of the source table
c. Delete a record of the source table
d. Run the mapping
e. Notice that
i. the new record is added
ii. the updated record is modified
iii. the deleted record is not deleted from the target

Exercise 2: Modifying a Knowledge Module and adding an audit table

1. Under RRITEC_PROJECT  Under Knowledge Modules Expand integration(IKM)


2. Right-click the IKM Oracle Incremental Update Knowledge Module → select Duplicate Selection.
3. Open the copy of IKM Oracle Incremental Update and rename it to IKM Oracle
Incremental Update Audit

4. The objective here is to add two steps that will:


a. Create an audit table (and only generate a warning if the table already exists).
Name the table after the target table; simply add _audit at the end of the table
name.
b. In the audit table, insert three columns, the primary key of the record being
processed by the mapping, a time stamp, and an indicator that will tell us what
operation was done on the record (‘I’= Insert ,‘U’ = Update).
5. Click on tasks Click on add  name it as Create Audit table  select ignore errors
6. Select target technology as oracle  provide target command as shown below ok

Create table <%=odiRef.getTable("L", "TARG_NAME", "A")%>_AUDIT
(
<%=odiRef.getColList("", "[COL_NAME]\t[DEST_CRE_DT]",",\n\t", "", "PK")%>,
AUDIT_DATE DATE,
AUDIT_INDICATOR VARCHAR2(1)
)

7. Similarly create one more task to Insert records Click on add  name it as Insert into
Audit table  select ignore errors
8. Select target technology as oracle  provide target command as shown below ok

Insert into <%=odiRef.getTable("L", "TARG_NAME", "A")%>_AUDIT
(
<%=odiRef.getColList("", "[COL_NAME]", ",\n\t", "", "PK")%>,
AUDIT_DATE,
AUDIT_INDICATOR
)
select <%=odiRef.getColList("", "[COL_NAME]", ",\n\t", "", "PK")%>,
sysdate,
IND_UPDATE
from <%=odiRef.getTable("L","INT_NAME","W")%>

Note: To reduce typing, you can copy the code from a similar step and modify as
needed.

Note: These substitution methods use the following parameters:

GetTable:

a. “L”: Local naming convention. For example, in Oracle that would be schema.table
(versus “R” for remote: schema.table@server).
b. “A”: Automatic. It enables ODI to determine which physical schema to use (the
Data schema [“D”] or the Staging schema [“W”])

getColList:

1. Notice the “PK” parameter. If it is used, only the columns that are part of the
primary key are included.
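For illustration only, against the EMP_INCR_MKT target (EMPNO being the primary key) the two template steps would resolve to something like the following. Schema prefixes are omitted and the flow-table name is indicative; the actual generated code depends on your topology:

```sql
Create table EMP_INCR_MKT_AUDIT
(
  EMPNO NUMBER,
  AUDIT_DATE DATE,
  AUDIT_INDICATOR VARCHAR2(1)
);

Insert into EMP_INCR_MKT_AUDIT (EMPNO, AUDIT_DATE, AUDIT_INDICATOR)
select EMPNO, sysdate, IND_UPDATE
from I$_EMP_INCR_MKT;  -- the IKM's integration (flow) table
```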

9. Save it  Verify that your new knowledge module IKM Oracle Incremental Update
Audit appears in the Knowledge Modules tree.
10. Select the Create Audit Table task and move it above the Commit transaction task
11. Select the Insert into Audit Table task and move it above the Commit transaction task

12. Modify the m_EMP_INCR_LOADING mapping to be executed with your newly
created knowledge module. Change the IKM entry to use IKM Oracle Incremental Update
Audit

Exercise 3: Add an Option to your KM

Step1: Create a option

1. To make your KM more user-friendly, you can add an option that lets the end user
choose when he/she wants to generate audits:

2. In the above audit KM → click on Options → add an option → name the option
AUDIT_CHANGES → set the type to Boolean → set the default value to True
3. Save the Option

Step2: Link the option to your tasks

1. In the above audit KM → click the two tasks one by one (Create Audit table and Insert into
Audit table) → unselect Always Execute → select AUDIT_CHANGES

2. Save and close the IKM. The execution of these steps can now be set by the end user.
3. In the Physical tab of your mapping, click on the target table
4. Verify that the AUDIT_CHANGES option is set to True.
5. Run the mapping, and check the behavior in Operator Navigator.
6. Change the value to False, run the mapping again, and compare the generated code in
Operator Navigator.

29. Handson 27: Change Data Capture (CDC)

1. The purpose of Changed Data Capture is to allow applications to process changed data
only
2. Loads will only process changes since the last load
3. The volume of data to be processed is dramatically reduced
4. CDC is extremely useful for near real time implementations, synchronization, Master
Data Management
5. In general, CDC techniques are of four types:
1. Trigger based – ODI will create and maintain triggers to keep track of the
changes
2. Logs based – for some technologies, ODI can retrieve changes from the
database logs. (Oracle, AS/400)
3. Timestamp based – If the data is time stamped, processes written with ODI can
filter the data comparing the time stamp value with the last load time. This
approach is limited as it cannot process deletes. The data model must have
been designed properly.
4. Sequence number – if the records are numbered in sequence, ODI can filter the
data based on the last value loaded. This approach is limited as it cannot
process updates and deletes. The data model must have been designed
properly.
6. CDC in ODI is implemented through a family of KMs: the Journalization KMs
7. These KMs are chosen and set in the model
8. Once the journals are in place, the developer can choose from the interface whether he
will use the full data set or only the changed data
9. Changed Data Capture (CDC), also referred to as Journalizing
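For example, the timestamp-based technique (type 3 above) is just a filter on the extraction query — a sketch assuming the source has a LAST_UPDATED column and a hypothetical ETL_LOAD_LOG table that stores the last load time:

```sql
SELECT *
FROM   EMP
WHERE  LAST_UPDATED > (SELECT MAX(LOAD_END_TIME) FROM ETL_LOAD_LOG);
-- Cannot detect deletes: a deleted row simply no longer appears
```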

Journalizing Components

1. Journals: Contain references to the changed records

2. Capture processes: Captures the changes in the source datastores either by creating
triggers on the data tables, or by using database-specific programs to retrieve log data
from data server log files
3. Subscribers (applications, integration processes, and so on): That use the changes
tracked on a datastore or on a consistent set

CDC Infrastructure in ODI

1. CDC in ODI depends on a Journal table


2. This table is created by the KM and loaded by specific steps of the KM
3. This table has a very simple structure:
1. Primary key of the table being checked for changes
2. Timestamp to keep the change date
3. A flag to allow for a logical “lock” of the records
4. A series of views is created to join this table with the actual data
5. When other KMs will need to select data, they will know to use the views instead of the
tables
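As an illustration of this infrastructure, a trigger-based JKM (such as the JKM Oracle Simple used later in this chapter) generates objects along these lines — a simplified sketch, not the exact generated code:

```sql
-- Journal table: subscriber, flag, date, plus the journalized table's PK
CREATE TABLE J$EMP
(
  JRN_SUBSCRIBER VARCHAR2(400),
  JRN_FLAG       VARCHAR2(1),   -- 'I' = insert/update, 'D' = delete
  JRN_DATE       DATE,
  EMPNO          NUMBER
);

-- Trigger capturing every change to EMP into the journal
CREATE OR REPLACE TRIGGER T$EMP
AFTER INSERT OR UPDATE OR DELETE ON EMP
FOR EACH ROW
BEGIN
  INSERT INTO J$EMP (JRN_SUBSCRIBER, JRN_FLAG, JRN_DATE, EMPNO)
  VALUES ('SUNOPSIS',
          CASE WHEN DELETING THEN 'D' ELSE 'I' END,
          SYSDATE,
          CASE WHEN DELETING THEN :OLD.EMPNO ELSE :NEW.EMPNO END);
END;
/
```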

Using CDC

1. Set a JKM in your model


2. For all the following steps, right-click on a table to process just that table, or right-click on
the model to process all tables of the model:
3. Add the table to the CDC infrastructure: Right-click on a table and select Changed Data
Capture / Add to CDC
4. Oracle Data Integrator supports two journalizing modes:
1. Simple Journalizing tracks changes in individual datastores in a model.
2. Consistent Set Journalizing tracks changes to a group of the model's data
stores, taking into account the referential integrity between these datastores. The
group of datastores journalized in this mode is called a Consistent Set.
5. Simple vs. Consistent Set Journalizing
Simple Journalizing enables you to journalize one or more datastores. Each journalized
datastore is treated separately when capturing the changes.

This approach has a limitation, illustrated in the following example: You want to process
changes in the ORDER and ORDER_LINE datastores (with a referential integrity
constraint based on the fact that an ORDER_LINE record should have an associated
ORDER record). If you have captured insertions into ORDER_LINE, you have no
guarantee that the associated new records in ORDERS have also been captured.
Processing ORDER_LINE records with no associated ORDER records may cause
referential constraint violations in the integration process.

Consistent Set Journalizing provides the guarantee that when you have an
ORDER_LINE change captured, the associated ORDER change has been also
captured, and vice versa. Note that consistent set journalizing guarantees the
consistency of the captured changes. The set of available changes for which
consistency is guaranteed is called the Consistency Window. Changes in this window
should be processed in the correct sequence (ORDER followed by ORDER_LINE) by
designing and sequencing integration interfaces into packages.

Although consistent set journalizing is more powerful, it is also more difficult to set up. It
should be used when referential integrity constraints need to be ensured when capturing
the data changes. For performance reasons, consistent set journalizing is also
recommended when a large number of subscribers are required.

It is not possible to journalize a model (or datastores within a model) using both
consistent set and simple journalizing.

6. For consistent CDC, arrange the datastores in the appropriate order (parent/child
relationship) : in the model definition, select the Journalized tables tab and click the
Reorganize button
7. Add the subscriber (The default subscriber is SUNOPSIS) Right-click on a table and
select Changed Data Capture / Add subscribers
8. Start the journals: Right-click on a table and select Changed Data Capture / Start
Journal

Exercise 1: Create a simple Journalizing

Step 1: Import Journalizing KM

1. Expand Project  Right Click on JKM click on import KM

2. Provide the KM xml reference path

C:\Oracle\Middleware\Oracle_Home\odi\sdk\xml-reference

3. Select JKM Oracle Simple

4. Click on OK

Step 2: Assigning KM to Model

1. Right click on RRITEC_MODEL_SCOTT click on open  select Journal  select
Journalizing mode as simple
2. Select KM as JKM oracle Simple

3. Click on save close model

Step 3: Adding a table to CDC

1. open RRITEC_MODEL_SCOTT Right click on EMP table click on add to CDC

2. Make sure Emp table is appearing under journalized tables of the model

Step 4: Start Journal

1. Right Click on EMP table Changed data capture  Click on start Journal

2. Select SUNOPSIS Subscriber click on ok  select proper context click on ok ok


3. Go to the Operator tab → observe the code of each step

4. By observing this, you will notice that the below objects are created as part of CDC:
1. One J$<TABLE> created
2. One Subscriber table created SNP_SUBSCRIBERS
3. Two Views created JV$<TABLE>,JV$D<TABLE>
4. One trigger created T$<TABLE>

Step 5: To understand CDC, create the target table in the TDBU schema

1. Create the table using the below syntax:

CREATE TABLE CDC_EMP
(
  EMPNO  NUMBER(10) PRIMARY KEY,
  ENAME  VARCHAR2(30),
  JOB    VARCHAR2(30),
  SAL    NUMBER(10),
  DEPTNO NUMBER(2)
);

2. Import into target model

Step 6: Create a mapping

1. Right click on mapping folder  create mapping  name it as m_Simple_CDC


2. Drag and drop EMP source table and CDC_EMP target table into work area
3. Connect all corresponding columns
4. In the Logical tab → select the target table CDC_EMP → select the integration type as Incremental
Update

5. Click on Physical tab  select target table  select IKM as IKM oracle incremental
update

6. Select flow_control as false


7. Click on save run

Note: The first time, we load all the data into the target table

Step 7: Understanding CDC

1. Go to the EMP table, update any one record, and notice that this change is registered in
the J$EMP table
2. In the next run we need to load only the records available in the J$EMP table; for that, go to the
mapping → select the source table → select Journalized Data Only


3. Make sure the proper subscriber name is used in the journalizing condition

4. Save  run
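With Journalized Data Only selected, the generated extraction query reads from the journalizing view instead of the base table — roughly like the sketch below (view and column names simplified; the JV$D view joins J$EMP back to EMP):

```sql
SELECT EMPNO, ENAME, JOB, SAL, DEPTNO
FROM   JV$DEMP
WHERE  JRN_SUBSCRIBER = 'SUNOPSIS';
```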

30. Handson 28: Migration (Exporting and Importing)

1. Copying or deploying objects from one environment to another is called deployment
or migration
1. deploying from development to testing
2. deploying from testing to production
2. Oracle Data Integrator 12c introduces the use of globally-unique object identifiers.
3. Export
1. Object wise
i. On any object (mapping, package, etc.) we can right click and export it
in XML format.
ii. In this process we miss dependent objects

2. Smart export
i. We can click on the Connect Navigator and export the required objects by using a
simple drag-and-drop method
ii. We can export in two formats: XML and ZIP file
iii. Even dependencies are also exported
iv. It is the recommended method

4. Import:
1. Duplication
i. This mode creates a new object (with a new GUID and internal ID) in the
target Repository, and inserts all the elements of the export file.
ii. Note that this mode is designed to insert only 'new' elements.
2. Synonym
i. Synonym Mode INSERT
1. Tries to insert the same object (with the same GUID) into the
target repository. The original object GUID is preserved.
2. If an object of the same type with the same internal ID already

exists then nothing is inserted.

ii. Synonym Mode UPDATE


1. Tries to modify the same object (with the same GUID) in the

repository.

2. This import type updates the objects already existing in the target
repository with the content of the export file.
3. If the object does not exist, the object is not imported.


iii. Synonym Mode INSERT_UPDATE
1. If no ODI object exists in the target Repository with an identical
GUID, this import type will create a new object with the content of

the export file. Already existing objects (with an identical GUID)

will be updated; the new ones, inserted.

2. Existing child objects will be updated, non-existing child objects

will be inserted, and child objects existing in the repository but not

in the export file will be deleted.

3. This import type is not recommended when the export was done
without the child components, as this will delete all sub-components
of the existing object.
4. If the export file contains dependencies, then this method is
recommended

3. Smart
i. This method is always given first priority

Exercise 1: Migrating from development to test work repository

Step1: Exporting From Development environment

1. Connect to work repository  Click on Connect Navigator  Select Export Select


Smart Export
2. Drag and drop project RRITEC_PROJECT on to object to be exported Click on
export

3. Select desktop and create a folder and save it.

Step2: Creating Testing work repository

Create the Work Repository similar to before (refer to page 24)

Step 3: Creating user TDBU_TEST and load tables into TDBU _TEST schema

1. Open SQL PLUS  Type / as sysdba press enter


2. Create a user by executing the below commands
a. Create user TDBU_TEST identified by RRitec123;
b. Grant DBA to TDBU_TEST;
c. Conn TDBU_TEST@ORCL
d. Enter the password RRitec123
e. Select count(*) from tab;
3. Go to the RRITEC labcopy labdata folder, take the full path of the driver, and execute it as
shown below

Step4: Creating a physical schema with testing data base

Step5: Creating a testing context and mapping the source and target schemas

Step6: Importing into testing environment

1. Connect to testing work repository  Click on Connect Navigator


2. Select Import → select Smart Import → point to the previously exported file → click on
Next → Finish → click on Close

Step7: Running and loading data into test data base

1. Right click on m_ODS_EMPLOYEE mapping  click on run select testing context


ok

2. Observe data in TDBU_TEST schema ODS_EMPLOYEE table

Exercise 2: Migrating from test work repository to Execute Work Repository

Step1: Exporting Scenarios from testing environment

1. Right click on m_ODS_EMPLOYEE Click on generate scenario Click on ok

2. Right click on scenario m_ODS_EMPLOYEE Version 001  click on export
3. Select export path  provide export file name click on export close

Step2: Creating Execution work repository

Create the Work Repository similar to before (refer to page 24); however, please take care
with the below step

Step 3: Creating user TDBU_PROD and load tables into TDBU _PROD schema

4. Open SQL PLUS  Type / as sysdba press enter


5. Create a user by executing the below commands
a. Create user TDBU_PROD identified by RRitec123;
b. Grant DBA to TDBU_PROD;
c. Conn TDBU_PROD@ORCL
d. Enter the password RRitec123
e. Select count(*) from tab;
6. Go to the RRITEC labcopy labdata folder, take the full path of the target.sql file, and execute it

Step4: Creating a physical schema with Production data base

Step5: Creating a Production context and mapping the source and target schemas

Step6: Importing into production environment

1. Connect to RRITEC_EWR work repository  Go to operator tab  Under load Plans
and Scenarios click on import scenario

Step7: Running and loading data into production data base

1. Right click on the m_ODS_EMPLOYEE scenario → click on run → select the production context →
click on ok
2. Observe the data in the TDBU_PROD schema ODS_EMPLOYEE table

31. Handson 29: Security

1. All the security information for Oracle Data Integrator is stored in the master repository
2. Security consists of three object types:
a. Object
b. Profiles
c. Users
3. An Object is a representation of a design-time or run-time component
4. For example, agents, models, data stores, scenarios, interfaces/Mappings and even
repositories are objects.
5. A Profile contains a set of privileges for working with Oracle Data Integrator. One or
more profiles can be assigned to a user to grant the sum of these privileges to this user.
6. A profile method is an authorization granted to a profile on a method of an object type.
Each granted method allows a user with this profile to perform an action on an instance
of an object type.

Creating User

1. Go to the Security navigator → in the Users window, click on New User → name it as
B107USER1 → click on Enter a password → provide RRitec123
2. Click on Save → Close

Generic vs. Non generic Profiles

1. Generic profiles have the generic privilege option selected for all object methods. This
implies that a user, or role, with such a profile is by default authorized for all methods of
all instances of an object type to which the profile is authorized.
2. Non-generic profiles are not by default authorized for all methods on the instances,
because the generic privilege option is not selected for all object methods. You must
grant the user, or role, the rights on the methods for each instance.
3. If you want a user, or role, to have the rights on no instance by default, but want to grant
the rights on a per-instance basis, the user or role must be given a non-generic profile.
4. If you want a user or role to have the rights on all instances of an object type by default,
you must give the user or role a generic profile.
5. The predefined profiles are:
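The generic vs. non-generic distinction above can be sketched as a small simulation. This is illustrative logic only; the class and function names (Profile, is_authorized) are hypothetical, not ODI APIs.

```python
# Illustrative sketch of generic vs. non-generic profile checks.
# Names here are hypothetical, not actual ODI APIs.

class Profile:
    def __init__(self, name, generic, methods):
        self.name = name
        self.generic = generic       # True: authorized on ALL instances
        self.methods = set(methods)  # methods granted on the object type

def is_authorized(profile, method, instance_id, instance_grants):
    """A generic profile needs only the method grant; a non-generic
    profile also needs an explicit grant on that instance."""
    if method not in profile.methods:
        return False
    return profile.generic or instance_id in instance_grants.get(method, set())

designer = Profile("DESIGNER", generic=True, methods={"view", "edit"})
ng_designer = Profile("NG_DESIGNER", generic=False, methods={"view", "edit"})
grants = {"view": {"m_ODS_EMPLOYEE"}}  # per-instance grants for NG profiles

print(is_authorized(designer, "view", "m_ANY_MAPPING", grants))      # True
print(is_authorized(ng_designer, "view", "m_ODS_EMPLOYEE", grants))  # True
print(is_authorized(ng_designer, "view", "m_ANY_MAPPING", grants))   # False
```

The generic profile passes on any instance; the non-generic one passes only on the instance it was explicitly granted.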

Create a custom profile

1. Go to the Security navigator → in the Profiles window, click on New Profile → name it as
RRITEC → click on Authorization → select the required objects and methods
2. Click on Save → Close

32. Handson 30: CKM

Check Knowledge Modules (CKM)

1. Data quality control is essential in ensuring the overall consistency of the data in
your information system's applications.
2. Application data is not always valid for the constraints and declarative rules imposed
by the information system. You may, for instance, find orders with no customer, or
order lines with no product, etc. In addition, such incorrect data may propagate via
integration flows.
3. The CKM can be used in two ways:
a. Check the consistency of existing data. This can be done on any data store or
within interfaces/mappings, by setting the STATIC_CONTROL option to "Yes".
b. Check the consistency of the incoming data before loading the records to a target data
store. This is done by using the FLOW_CONTROL option. In this case, the CKM simulates
the constraints of the target data store on the resulting flow prior to writing to the target.
4. In summary: the CKM can check either an existing table or the temporary "I$" table
created by an IKM.
5. The CKM accepts a set of constraints and the name of the table to check. It creates
an "E$" error table to which it writes all the rejected records. The CKM can also
remove the erroneous records from the checked result set.
6. The following figures show how a CKM operates in both STATIC_CONTROL and
FLOW_CONTROL modes.
Check Knowledge Module (STATIC_CONTROL)
Check Knowledge Module (STATIC_CONTROL)

In STATIC_CONTROL mode, the CKM reads the constraints of the table and checks them
against the data of the table. Records that don't match the constraints are written to the
"E$" error table in the staging area.

Check Knowledge Module (FLOW_CONTROL)

In FLOW_CONTROL mode, the CKM reads the constraints of the target table of the
Interface. It checks these constraints against the data contained in the "I$" flow table of the
staging area. Records that violate these constraints are written to the "E$" table of the
staging area.

In both cases, a CKM usually performs the following tasks:

1. Create the "E$" error table in the staging area. The error table should contain the
same columns as the data store, as well as additional columns to trace error
messages, check origin, check date, etc.
2. Isolate the erroneous records in the "E$" table for each primary key, alternate key,
foreign key, condition, or mandatory column that needs to be checked.
3. If required, remove the erroneous records from the table that has been checked.
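The tasks above can be sketched as a simple in-memory simulation. This is illustrative logic only; a real CKM generates SQL (e.g. INSERT ... SELECT into the E$ table) rather than looping over rows in application code.

```python
# Simplified simulation of CKM static control: rows violating any
# declared constraint are copied to an E$-style error list, and
# optionally removed from the checked data set.

def static_check(rows, constraints, remove_errors=False):
    errors, kept = [], []
    for row in rows:
        # collect the names of every constraint this row violates
        failed = [name for name, check in constraints.items() if not check(row)]
        if failed:
            errors.append({**row, "ERR_MESS": ", ".join(failed)})  # E$ row
        else:
            kept.append(row)
    if remove_errors:
        rows[:] = kept  # mimic "delete errors from the checked table"
    return errors

# A COMM "mandatory" check, like the one in Exercise 1 below.
emp = [{"EMPNO": 7839, "COMM": None}, {"EMPNO": 7844, "COMM": 0}]
constraints = {"COMM_MANDATORY": lambda r: r["COMM"] is not None}
e_table = static_check(emp, constraints, remove_errors=True)
print(e_table)  # the EMPNO 7839 row, tagged with the failed check name
```

After the check, the violating row sits in the E$-style list and, because remove_errors was set, the checked data set contains only the valid row.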

Exercise 1: Static Control on Source table


1. The CKM is a model-level property, hence we need to import a CKM and assign it to the
model.
STEP 1: Importing the CKM
1. Open the project → right click on Check (CKM) → click on Import Knowledge Module

2. Select the CKM as CKM Oracle → click on OK

STEP 2: Assigning CKM KM to Model


1. Right click on the source model → click on Open → select the Control tab → select the KM as
CKM Oracle

2. Save → Close
STEP 3: Checking the COMM column for not-null values

1. Create a table using the below command
a. create table B107EMP as select * from emp;
2. Reverse engineer this table into the source model
3. Expand the table → expand Attributes → open the COMM column → select the Mandatory and
Static options
4. Save → Close
5. Right click on this table → click on Control → click on Check → select the context properly
→ click on OK → again OK
6. Go to the database and notice that two tables were created
a. SNP_CHECK_TAB
b. E$_B107EMP
7. Observe that all records with a null COMM are copied into the E$_B107EMP table

Exercise 2: Static Control using package on data store


1. Create a package with the name CKM_STATIC_CONTROL
2. Drag and drop the B107EMP data store into the package
3. Select the B107EMP data store and select the type Datastore Check
4. Check "Delete Errors from the Checked Table"

5. Save → Run
6. Observe the output

Exercise 3: Flow Control Understanding

Step 1: Prepare source data

1. Open SQL Developer → connect to the source connection → double click on the
B107EMP table name → go to the Data tab
2. Insert the records as shown → COMMIT

INSERT INTO "SCOTT"."B107EMP" (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO)
VALUES ('5555', 'RRITEC', 'Training', '7698', TO_DATE('08-SEP-81', 'DD-MON-RR'), '1500', '100', '20');
INSERT INTO "SCOTT"."B107EMP" (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO)
VALUES ('5555', 'RRITEC', 'Training', '7698', TO_DATE('08-SEP-81', 'DD-MON-RR'), '1500', '100', '20');
INSERT INTO "SCOTT"."B107EMP" (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO)
VALUES ('12', 'John', 'Manager', '7782', TO_DATE('08-SEP-81', 'DD-MON-RR'), '1500', '100', '20');
INSERT INTO "SCOTT"."B107EMP" (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO)
VALUES ('6666', 'Nancy', 'Manager', '7782', TO_DATE('08-SEP-81', 'DD-MON-RR'), '1500', '100', '60');

Note: the first two rows deliberately share EMPNO 5555 (a duplicate key), and the last row uses DEPTNO 60, which has no parent department, so flow control can reject them later.

Step 2: CREATE TARGET TABLE


1. Open the target schema B107TDBU and execute the below script

create table TGT_B107EMP as select * from scott.emp where 1=2;

3. Reverse engineer it into the target model
Step 3: Creating Constraints
1. Open the target model and navigate to the target table TGT_B107EMP
2. Right click on Constraints → click on New Key

3. In the description, name it as PK_EMPNO and select the Primary Key option

4. Click on the Attributes tab → select EMPNO → click on >

5. Save → Close
6. Right click on Constraints → click on New Reference

7. In the Definition tab, provide the below information

8. In the Columns tab, select DEPTNO as shown below

9. Save → Close
10. Again right click on Constraints → select New Condition

11. Provide the below information

12. Save → Close
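To see what flow control will do with the constraints just declared, here is a rough simulation. The parent keys and the rejected rows mirror the test data inserted in Step 1; this is illustrative logic only, not the SQL the CKM actually generates.

```python
# Rough simulation of FLOW_CONTROL: incoming I$ rows are checked
# against the target's primary key and foreign key before the load;
# violators are routed to an E$-style list instead of the target.

DEPT_KEYS = {10, 20, 30, 40}  # parent keys of SCOTT.DEPT

def flow_check(i_rows):
    seen, good, errors = set(), [], []
    for r in i_rows:
        errs = []
        if r["EMPNO"] in seen:
            errs.append("PK_EMPNO")    # duplicate primary key
        if r["DEPTNO"] not in DEPT_KEYS:
            errs.append("FK_DEPTNO")   # no matching department
        if errs:
            errors.append({**r, "ERR_MESS": ", ".join(errs)})
        else:
            good.append(r)
            seen.add(r["EMPNO"])
    return good, errors

# Mirrors the Step 1 inserts: 5555 twice, 12, and 6666 with DEPTNO 60.
rows = [{"EMPNO": 5555, "DEPTNO": 20}, {"EMPNO": 5555, "DEPTNO": 20},
        {"EMPNO": 12, "DEPTNO": 20}, {"EMPNO": 6666, "DEPTNO": 60}]
good, errors = flow_check(rows)
print(len(good), len(errors))  # 2 good rows, 2 rejected rows
```

The duplicate EMPNO 5555 row and the DEPTNO 60 row end up in the E$-style list; only the two valid rows would reach the target.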

Step 4: Creating Mapping


1. Create a mapping with the name m_CKM_FLOW_CONTROL
2. Drag and drop the source B107EMP and the target TGT_B107EMP
3. Map the corresponding columns

4. In the Flow tab, select the IKM and mark FLOW_CONTROL as TRUE

5. Select the Control tab and observe the conditions

6. Click on RUN → observe the output

33. Handson 31: Slowly Changing Dimension (SCD2)

1. Although dimension tables are typically static lists, most dimension tables do change
over time.
2. Since these changes are smaller in magnitude compared to changes in fact tables,
these dimensions are known as slowly growing or slowly changing dimensions.
3. If we load the first time

4. Second Time load

5. Third time load
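The load behaviour illustrated above can be sketched as a small simulation. This is simplified illustrative logic, not the IKM's actual SQL: here CURR_FLG = 1 marks the current row, and only SAL is treated as a type-2 (versioned) column.

```python
# Simplified sketch of SCD type-2 logic: when a tracked column changes,
# the current row is end-dated and a new current version is inserted.
import datetime

def scd2_merge(dim, src, today):
    # find the current version of this employee, if any
    current = next((r for r in dim
                    if r["EMPNO"] == src["EMPNO"] and r["CURR_FLG"] == 1), None)
    if current and current["SAL"] == src["SAL"]:
        return                                   # unchanged: do nothing
    if current:                                  # changed: close old version
        current["CURR_FLG"] = 0
        current["EFF_TO_DT"] = today
    dim.append({"EMPNO": src["EMPNO"], "SAL": src["SAL"], "CURR_FLG": 1,
                "EFF_FROM_DT": today, "EFF_TO_DT": None})

dim = []
scd2_merge(dim, {"EMPNO": 1, "SAL": 10000}, datetime.date(2024, 1, 1))  # first load
scd2_merge(dim, {"EMPNO": 1, "SAL": 30000}, datetime.date(2024, 2, 1))  # second load
print(len(dim), sum(r["CURR_FLG"] for r in dim))  # 2 versions, 1 current
```

After the second load the dimension holds two versions of the same employee: the old row end-dated with CURR_FLG = 0, and a new current row carrying the changed salary.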

STEP 1 : Creating and Reverse Engineering Source


1. We can use the EMP table of the SCOTT schema as the source
2. Reverse engineer the EMP table into the SOURCE MODEL
STEP 2 : Creating and Reverse Engineering the Target

1. Create the target table in the target schema using the below script

CREATE TABLE TGT_SCD2_EMP


( SURR_EMPNO NUMBER PRIMARY KEY,
EMPNO NUMBER(4,0),
ENAME VARCHAR2(10 BYTE),
JOB VARCHAR2(9 BYTE),
SAL NUMBER(7,2),
DEPTNO NUMBER(2,0),
CURR_FLG NUMBER,
EFF_FROM_DT DATE,
EFF_TO_DT DATE
)

3. Reverse engineer into TARGET MODEL

STEP 3 : Configuration for SCD


1. Right click on the table TGT_SCD2_EMP → click on Open → select the OLAP Type as
Slowly Changing Dimension → Save → Close

2. Expand the table → expand the columns → right click on the columns one by one and set
them as shown below

3. Save and close them one by one

STEP 4 : Creating the Interface/Mapping

1. Create a mapping with the name m_SCD2

2. Click on the Mapping tab → drag and drop the source (EMP) and the target
(TGT_SCD2_EMP) into their respective locations
3. Map the columns as shown
S.No | Target Column Name | Expression               | Comments
-----|--------------------|--------------------------|----------------------------------------
1    | SURR_EMPNO         | :NAT_SEQ_SCD_NEXTVAL     | Create a new sequence or map to an existing sequence
2    | EMPNO              | EMP.EMPNO                |
3    | ENAME              | EMP.ENAME                |
4    | JOB                | EMP.JOB                  |
5    | SAL                | EMP.SAL                  |
6    | DEPTNO             | EMP.DEPTNO               |
7    | CURR_FLG           | 0                        |
8    | EFF_FROM_DT        |                          |
9    | EFF_TO_DT          |                          |

4. Click on the Flow tab

5. Select the target and select the IKM Oracle Slowly Changing Dimension
6. Set FLOW_CONTROL = False

7. Click on SAVE
8. Click on RUN
9. Change a SAL value in the source data and run again
10. Again change the source SAL and run (in my case, I changed Ram Reddy's SAL from
10000 to 30000 and then from 30000 to 35000)

