Solution for Informatica problems

>> Scenario based solutions


Diff between Primary key and unique key
Scenario: What is the difference between a primary key and a unique key?

Solution:

A unique key enforces uniqueness but still allows NULL values, whereas a primary key enforces uniqueness and does not allow NULLs. Whenever you create a primary key constraint, Oracle by default creates a unique index on the key and marks the columns NOT NULL.
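A minimal sketch illustrating the difference (the table and constraint names here are invented for the example):

CREATE TABLE dept_demo (
  dept_id    NUMBER       CONSTRAINT dept_demo_pk PRIMARY KEY,  -- NOT NULL enforced, backed by a unique index
  dept_code  VARCHAR2(10) CONSTRAINT dept_demo_uk UNIQUE        -- unique, but NULLs are allowed
);

INSERT INTO dept_demo VALUES (1, 'SALES');    -- succeeds
INSERT INTO dept_demo VALUES (2, NULL);       -- succeeds: the unique key column accepts NULL
INSERT INTO dept_demo VALUES (NULL, 'HR');    -- fails: the primary key column cannot be NULL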


SQL Transformation with examples
Use: SQL Transformation is a connected transformation used to process SQL queries in the midstream of a pipeline. We can insert, update, delete and retrieve rows from the database at run time using the SQL transformation. Use the SQL transformation in script mode to run DDL (data definition language) statements like creating or dropping tables. The following SQL statements can be used in the SQL transformation:
 Data Definition statements (CREATE, ALTER, DROP, TRUNCATE, RENAME)
 Data Manipulation statements (INSERT, UPDATE, DELETE, MERGE)
 Data Retrieval statements (SELECT)
 Data Control Language statements (GRANT, REVOKE)
 Transaction Control statements (COMMIT, ROLLBACK)

Scenario: Let's say we want to create a temporary table in a mapping, while the workflow is running, for some intermediate calculation. We can use the SQL transformation in script mode to achieve this. Below we will see how to create a SQL transformation in script mode, with an example where we create a table from a mapping and insert some rows into that table.

Solution:

Step 1: Create two text files in the $PMSourceFileDir directory with some SQL queries (you can have multiple SQL queries in a file, separated by semicolons).
1. sql_script.txt contains:
create table create_emp_table (emp_id number, emp_name varchar2(100))
2. sql_script2.txt contains:
insert into create_emp_table values (1, 'abc')
These are the script files to be executed by the SQL transformation on the database server.

Step 2: We need a source which contains the above script file names with their complete paths. So, create another file in the $PMSourceFileDir directory to store these script file names, Sql_script_input.txt. The file contains the list of script files with their complete paths:
E:\softs\Informatica\server\infa_shared\SrcFiles\sql_script.txt
E:\softs\Informatica\server\infa_shared\SrcFiles\sql_script2.txt

Step 3: Now we will create a mapping to execute the script files using the SQL transformation. Go to the mapping designer's source analyzer and choose Import from file, then create the source definition by selecting the file Sql_script_input.txt located at E:\softs\Informatica\server\infa_shared\SrcFiles. Similarly create a target definition: go to the target designer and create a target flat file with result and error ports. (The source and target definition screenshots are not reproduced here.)

Step 4: Go to the mapping designer and create a new mapping. Drag the flat file source into the mapping. Go to Transformation in the toolbar, choose Create, select the SQL transformation, enter a name and click Create. Select the SQL transformation option Script mode, set the DB type to Oracle, and click OK.

The SQL transformation is created with the default ports. Connect the source qualifier transformation ports to the SQL transformation input port, drag the target flat file into the mapping, and connect the SQL transformation output ports to the target. Save the mapping. (The mapping flow screenshot is not reproduced here.)

Go to the workflow manager and create a new workflow and session. Edit the session: for the source and target files, enter the source and target file directories; for the SQL transformation, enter the Oracle database relational connection. Save the workflow and run it. Open the target file and you will find output like the below:
 "PASSED" for sql_script.txt, where it creates the table.
 "PASSED" for sql_script2.txt, where it inserts the rows into the table.
Fire a select query on the database to check whether the table was created or not.

Efficient SQL Statements : SQL Tuning Tips

This is an extremely brief look at some of the factors that may affect the efficiency of your SQL and PL/SQL code. It is not intended as a thorough discussion of the area and should not be used as such. The topics covered are:

 Check Your Stats
 Why Indexes Aren't Used
 Caching Tables
 EXISTS vs. IN
 Presence Checking
 Inequalities
 When Things Look Bad!
 Driving Tables (RBO Only)
 Improving Parse Speed
 Packages Procedures and Functions

Check Your Stats
The Cost Based Optimizer (CBO) uses statistics to decide which execution plan to use. If these statistics are incorrect, the decision made by the CBO may be incorrect. For this reason it is important to make sure that these statistics are refreshed regularly.

Why Indexes Aren't Used
The presence of an index on a column does not guarantee it will be used. The following is a small list of factors that will prevent an index from being used (a function-based index example follows this list):
 The optimizer decides it would be more efficient not to use the index. If your query is returning the majority of the data in a table, then a full table scan is probably going to be the most efficient way to access the table.
 You perform a function on the indexed column, i.e. WHERE UPPER(name) = 'JONES'. The solution to this is to use a Function-Based Index.
 You perform mathematical operations on the indexed column, i.e. WHERE salary + 1 = 10001.
 You concatenate a column, i.e. WHERE firstname || ' ' || lastname = 'JOHN JONES'.
 You do not include the first column of a concatenated index in the WHERE clause of your statement. For the index to be used in a partial match, the first column (leading edge) must be used. Index Skip Scanning in Oracle 9i and above allows indexes to be used even when the leading edge is not referenced.
 The use of 'OR' statements confuses the Cost Based Optimizer (CBO). It will rarely choose to use an index on a column referenced using an OR statement. It will even ignore optimizer hints in this situation. The only way of guaranteeing the use of indexes in these situations is to use an INDEX hint.
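For instance, a function-based index on UPPER(ename) would let the UPPER predicate above use an index (the emp table and the index name are assumptions for illustration):

CREATE INDEX emp_upper_name_idx ON emp (UPPER(ename));

-- With the index in place, this predicate can now use emp_upper_name_idx:
SELECT * FROM emp WHERE UPPER(ename) = 'JONES';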

EXISTS vs. IN
The EXISTS function searches for the presence of a single row meeting the stated criteria, as opposed to the IN statement, which looks for all occurrences.

TABLE1 - 1000 rows
TABLE2 - 1000 rows

(A) SELECT t1.id FROM table1 t1 WHERE t1.code IN (SELECT t2.code FROM table2 t2);
(B) SELECT t1.id FROM table1 t1 WHERE EXISTS (SELECT '1' FROM table2 t2 WHERE t2.code = t1.code);

For query A, all rows in TABLE2 will be read for every row in TABLE1; the effect will be 1,000,000 row reads. In the case of query B, a maximum of 1 row from TABLE2 will be read for each row of TABLE1, thus reducing the processing overhead of the statement.

Rule of thumb:
 If the majority of the filtering criteria are in the subquery, the IN variation may be more performant.
 If the majority of the filtering criteria are in the top query, the EXISTS variation may be more performant.
Note that in later versions of Oracle there is little difference between EXISTS and IN operations, so I would suggest you try both variants and see which works best.

Presence Checking
The first question you should ask yourself is, "Do I need to check for the presence of a record?" Alternatives to presence checking include:
 Use the MERGE statement if you are not sure if data is already present (a sketch follows this list).
 Perform an insert and trap failure because a row is already present, using the DUP_VAL_ON_INDEX exception handler.
 Perform an update and test for no rows updated using SQL%ROWCOUNT.
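As a sketch of the MERGE alternative, assuming a hypothetical items table keyed on item_id:

MERGE INTO items dst
USING (SELECT 101 AS item_id, 'SMALL' AS item_size FROM dual) src
ON (dst.item_id = src.item_id)
WHEN MATCHED THEN
  UPDATE SET dst.item_size = src.item_size
WHEN NOT MATCHED THEN
  INSERT (item_id, item_size) VALUES (src.item_id, src.item_size);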

If none of these options are right for you and processing is conditional on the presence of certain records in a table, you may decide to code something like the following:

SELECT COUNT(*)
INTO v_count
FROM items
WHERE item_size = 'SMALL';

IF v_count = 0 THEN
  -- Do processing related to no small items present
END IF;

If there are many small items, time and processing will be lost retrieving multiple records which are not needed. This would be better written like one of the following:

SELECT COUNT(*)
INTO v_count
FROM items
WHERE item_size = 'SMALL'
AND rownum = 1;

IF v_count = 0 THEN
  -- Do processing related to no small items present
END IF;

OR

SELECT COUNT(*)
INTO v_count
FROM dual
WHERE EXISTS (SELECT 1 FROM items WHERE item_size = 'SMALL');

IF v_count = 0 THEN
  -- Do processing related to no small items present
END IF;

In these examples only a single record is retrieved in the presence/absence check.

Inequalities
If a query uses inequalities (item_no > 100), the optimizer must estimate the number of rows returned before it can decide the best way to retrieve the data. This estimation is prone to errors. If you are aware of the data and its distribution, you can use optimizer hints to encourage or discourage full table scans to improve performance.

If an index is being used for a range scan on the column in question, the performance can be improved by substituting >= for >. In this case, item_no > 100 becomes item_no >= 101. In the first case, a full scan of the index will occur. In the second case, Oracle jumps straight to the first index entry with an item_no of 101 and range scans from this point. For large indexes this may significantly reduce the number of blocks read.

When Things Look Bad!
If you have a process/script that shows poor performance you should do the following:
 Write sensible queries in the first place!
 Identify the specific statement(s) that are causing a problem. The simplest way to do this is to use SQL Trace, but you can try running the individual statements using SQL*Plus and timing them (SET TIMING ON).
 Use EXPLAIN to look at the execution plan of the statement. Look for any full table accesses that look dubious. Remember, a full table scan of a small table is often more efficient than access by index.
 Check to see if there are any indexes that may help performance.
 Try adding new indexes to the system to reduce excessive full table scans. Typically, foreign key columns should be indexed, as these are regularly used in join conditions. On occasion it may be necessary to add composite (concatenated) indexes that will only aid individual queries. Remember, excessive indexing can reduce INSERT, UPDATE and DELETE performance.

Driving Tables (RBO Only)
The structure of the FROM and WHERE clauses of DML statements can be tailored to improve the performance of the statement. The rules vary depending on whether the database engine is using the Rule or Cost based optimizer. The situation is further complicated by the fact that the engine may perform a Merge Join or a Nested Loop join to retrieve the data. Despite this, there are a few rules you can use to improve the performance of your SQL.

Oracle processes result sets a table at a time. It starts by retrieving all the data for the first (driving) table. Once this data is retrieved, it is used to limit the number of rows processed for subsequent (driven) tables. In the case of multiple table joins, the driving table limits the rows processed for the first driven table. Once processed, this combined set of data is the driving set for the second driven table, etc. Roughly translated into English, this means that it is best to process tables that will retrieve a small number of rows first. The optimizer will do this to the best of its ability regardless of the structure of the DML, but the following factors may help.

Both the Rule and Cost based optimizers select a driving table for each query. If a decision cannot be made, the order of processing is from the end of the FROM clause to the start. Therefore, you should always place your driving table at the end of the FROM clause. Subsequent driven tables should be placed in order so that those retrieving the most rows are nearer to the start of the FROM clause. Confusingly, the WHERE clause should be written in the opposite order, with the driving table's conditions first and the final driven table's conditions last, i.e.:

FROM d, c, b, a
WHERE a.join_column = 12345
AND a.join_column = b.join_column
AND b.join_column = c.join_column
AND c.join_column = d.join_column;

If we now want to limit the rows brought back from the "D" table we may write the following:

FROM d, c, b, a
WHERE a.join_column = 12345
AND a.join_column = b.join_column
AND b.join_column = c.join_column
AND c.join_column = d.join_column
AND d.name = 'JONES';

Depending on the number of rows and the presence of indexes, Oracle may now pick "D" as the driving table. Since "D" now has two limiting factors (join_column and name), it may be a better candidate as a driving table, so the statement may be better written as follows:

FROM c, b, a, d
WHERE d.name = 'JONES'
AND d.join_column = 12345
AND d.join_column = a.join_column
AND a.join_column = b.join_column
AND b.join_column = c.join_column;

This grouping of limiting factors will guide the optimizer more efficiently, making table "D" return relatively few rows, and so making it a more efficient driving table. Remember, the order of the items in both the FROM and WHERE clauses will not force the optimizer to pick a specific table as a driving table, but it may influence its decision. The grouping of limiting conditions onto a single table will reduce the number of rows returned from that table, and will therefore make it a stronger candidate for becoming the driving table.

Caching Tables
Queries will execute much faster if the data they reference is already cached. For small, frequently used tables, performance may be improved by caching tables. Normally, when full table scans occur, the cached data is placed on the Least Recently Used (LRU) end of the buffer cache. This means that it is the first data to be paged out when more buffer space is required. If the table is cached (ALTER TABLE employees CACHE;), the data is placed on the Most Recently Used (MRU) end of the buffer, and so is less likely to be paged out before it is re-queried. Caching tables may alter the CBO's path through the data and should not be used without careful consideration.

Improving Parse Speed
Execution plans for SELECT statements are cached by the server, but unless the exact same statement is repeated, the stored execution plan details will not be reused. Even differing spaces in the statement will cause this lookup to fail. Use of bind variables allows you to repeatedly use the same statements whilst changing the WHERE clause criteria, as sketched below.
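A small sketch of the bind-variable idea in SQL*Plus (the values are illustrative):

-- Literal version: each new value is a brand new statement to parse.
SELECT ename FROM emp WHERE empno = 7369;
SELECT ename FROM emp WHERE empno = 7499;

-- Bind-variable version: one shared statement, re-executed with new values.
VARIABLE v_empno NUMBER
EXEC :v_empno := 7369
SELECT ename FROM emp WHERE empno = :v_empno;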

Assuming the statement does not have a cached execution plan, it must be parsed before execution. The parse phase for statements can be decreased by efficient use of aliasing. If an alias is not present, the engine must resolve which tables own the specified columns. The following is an example.

Bad Statement:
SELECT first_name, last_name, country
FROM employee, countries
WHERE country_id = id
AND last_name = 'HALL';

Good Statement:
SELECT e.first_name, e.last_name, c.country
FROM employee e, countries c
WHERE e.country_id = c.id
AND e.last_name = 'HALL';

Packages Procedures and Functions
When an SQL statement, or anonymous block, is passed to the server it is processed in three phases:

Phase      Actions
---------  -----------------------------------------------------
Parse      Syntax check and object resolution
Execution  Necessary reads and writes performed
Fetch      Resultant rows are retrieved, assembled, sorted and returned

The Parse phase is the most time and resource intensive. This phase can be avoided if all anonymous blocks are stored as Database Procedures, Functions, Packages or Views. Being database objects, their SQL text and compiled code is stored in the Data Dictionary and the executable copies reside in the Shared Pool.

Function : NVL2 and COALESCE

NVL2
The NVL2 function accepts three parameters. If the first parameter value is not null, it returns the value in the second parameter. If the first parameter value is null, it returns the third parameter. The following queries show NVL2 in action:

SQL> SELECT * FROM null_test_tab ORDER BY id;

        ID COL1       COL2       COL3       COL4
---------- ---------- ---------- ---------- ----------
         1 ONE        TWO        THREE      FOUR
         2            TWO        THREE      FOUR
         3                       THREE      FOUR
         4                       THREE

4 rows selected.

SQL> SELECT id, NVL2(col1, col2, col3) AS output FROM null_test_tab ORDER BY id;

        ID OUTPUT
---------- ----------
         1 TWO
         2 THREE
         3 THREE
         4 THREE

4 rows selected.

COALESCE
The COALESCE function was introduced in Oracle 9i. It accepts two or more parameters and returns the first non-null value in the list. If all parameters contain null values, it returns null.

SQL> SELECT id, COALESCE(col1, col2, col3) AS output FROM null_test_tab ORDER BY id;

        ID OUTPUT
---------- ----------
         1 ONE
         2 TWO
         3 THREE
         4 THREE

4 rows selected.
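The NVL2/COALESCE examples above assume a small test table; DDL along these lines would reproduce the data shown (the column sizes are an assumption):

CREATE TABLE null_test_tab (
  id   NUMBER,
  col1 VARCHAR2(10),
  col2 VARCHAR2(10),
  col3 VARCHAR2(10),
  col4 VARCHAR2(10)
);

INSERT INTO null_test_tab VALUES (1, 'ONE', 'TWO', 'THREE', 'FOUR');
INSERT INTO null_test_tab VALUES (2, NULL,  'TWO', 'THREE', 'FOUR');
INSERT INTO null_test_tab VALUES (3, NULL,  NULL,  'THREE', 'FOUR');
INSERT INTO null_test_tab VALUES (4, NULL,  NULL,  'THREE', NULL);
COMMIT;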

Load the session statistics such as Session Start & End Time, Success Rows, Failed Rows and Rejected Rows etc. into a database table for audit/log purposes

Scenario: Load the session statistics such as Session Start & End Time, Success Rows, Failed Rows and Rejected Rows etc. into a database table for audit/log purposes.

Solution: After performing the below solution steps your end workflow will look as follows:

START => SESSION1 => ASSIGNMENT TASK => SESSION2

SOLUTION STEPS

SESSION1
This session is used to achieve your actual business logic, meaning this session will perform your actual data load. The target can be anything, file or table.

WORKFLOW VARIABLES
Create the following workflow variables:
$$Workflowname
$$SessionStartTime
$$SessionEndTime
$$TargetSuccessrows
$$TargetFailedRows

ASSIGNMENT TASK
Create an Assignment task and, in its Expression tab, assign the workflow variables as follows:
$$Workflowname = $PMWorkflowName
$$SessionStartTime = SESSION1.StartTime
$$SessionEndTime = SESSION1.Endtime
$$TargetSuccessrows = SESSION1.TgtSuccessRows
$$TargetFailedRows = SESSION1.TgtFailedRows

SESSION2
This session is used to load the session statistics into a database table.
=> It should call a mapping, say 'm_sessionLog'.
=> This mapping m_sessionLog should have mapping variables for the above defined workflow variables, such as $$wfname, $$Stime, $$Etime, $$TSRows and $$TFRows.
=> This mapping m_sessionLog should use a dummy source, and it must have an expression transformation and a target (the database audit table).
=> Inside the expression you must assign the mapping variables to the output ports:

workflowname = $$wfname
starttime = $$Stime
endtime = $$Etime
SucessRows = $$TSRows
FailedRows = $$TFRows

=> Create a target database table with the following columns: workflow name, start time, end time, success rows and failed rows (a DDL sketch follows below).
=> Connect all the required output ports to the target, which is nothing but your audit table.

PRE-Session Variable
=> Session 2: In the Pre-session variable assignment tab, assign mapping variable = workflow variable. In our case:
$$wfname = $$Workflowname
$$Stime = $$SessionStartTime
$$Etime = $$SessionEndTime
$$TSRows = $$TargetSuccessrows
$$TFRows = $$TargetFailedrows

Then execute the workflow.
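A sketch of the audit table DDL (the table name and datatypes are assumptions matching the ports above):

CREATE TABLE wf_audit_log (
  workflowname VARCHAR2(240),
  starttime    DATE,
  endtime      DATE,
  successrows  NUMBER,
  failedrows   NUMBER
);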

Use Target File Path in Parameter File

Scenario: I want to use a mapping parameter to store the target file path. My question is: can I define the file path in a parameter file? If possible, can anyone explain how to assign the target file path as a parameter?

Solution: You can define the file path in the parameter file. Give the below parameter in your parameter file:
$OutputFileName=your file path

Insert and reject records using update strategy

Scenario: There is an emp table, and from that table insert the data into a target where sal < 3000 and reject the other rows.

Solution:
1. Connect the outputs from the SQF to an Update Strategy transformation.
2. In the properties of the Update Strategy, write the condition like this: IIF(SAL<3000, DD_INSERT, DD_REJECT)
3. Connect the Update Strategy to the target.

Convert Numeric Value to Date Format

Scenario: Suppose you are importing a flat file emp.csv and the hire_date column is in numeric format, like 20101111, with a format 'YYYYMMDD'. Our objective is to convert it to a date.

Source
EMPNO  HIRE_DATE (numeric)
-----  -------------------
1      20101111
2      20090909

Target
EMPNO  HIRE_DATE (date)
-----  ----------------
1      11/11/2010
2      09/09/2009

Solution:
1. Connect the SQF to an expression.
2. In the expression, make hire_date input-only and make another port hire_date1 an output port with the date data type.
3. In the output port hire_date1, write the condition like below:
TO_DATE(TO_CHAR(hire_date), 'YYYYMMDD')

How to change a string to decimal with 2 decimal places in informatica?

Scenario: How to change a string input to a decimal with 2 decimal places in informatica? Eg: the input is 12345678 and I want the output as 123456.78.

Solution:
output = to_decimal(to_integer(input)/100, 2)
OR
SUBSTR(INPUT_FIELD, 1, LENGTH(INPUT_FIELD) - 2) || '.' || SUBSTR(INPUT_FIELD, -2)

Append the data in a flat file for a daily run

Scenario: I have a flat file in our server location. I want to append the data in the flat file for a daily run.

Solution: We have an option in Informatica, "Append if exists", in the target session properties.

Convert Day No. to corresponding month and date of year

Scenario: Suppose you have a source like this:

Source
E_NO  YEAR       DAYNO
----  ---------  -----
1     01-JAN-07  301
2     01-JAN-08  200

The YEAR column is a date and DAYNO is numeric, representing a day of the year (as in 365 for 31-Dec). Convert the day number to the corresponding year's month and date and then send it to the target.

Target
E_NO  YEAR_MONTH_DAY
----  --------------
1     29-OCT-07
2     19-JUL-08

Solution: Use the below expression in an expression transformation:
ADD_TO_DATE(YEAR, 'DD', DAYNO)

How to get nth max salaries?

Scenario: How to get nth max salaries?

Solution:
select distinct sal from emp a where &n = (select count(distinct sal) from emp b where a.sal <= b.sal);

How to delete duplicate rows in a table?

Scenario: How to delete duplicate rows in a table?

Solution:
delete from emp a where rowid != (select max(rowid) from emp b where a.empno = b.empno);
OR
delete from emp a where rowid != (select min(rowid) from emp b where a.empno = b.empno);

How to get 3 Max & Min salaries?

Scenario: How to get 3 Max & Min salaries?

Solution:
Max - select distinct sal from emp a where 3 >= (select count(distinct sal) from emp b where a.sal <= b.sal) order by a.sal desc;
Min - select distinct sal from emp a where 3 >= (select count(distinct sal) from emp b where a.sal >= b.sal);

Find FIRST & LAST n records from a table

Scenario: Find FIRST & LAST n records from a table.

Solution:
First - select * from emp where rownum <= &n;
Last - select * from emp minus select * from emp where rownum <= (select count(*) - &n from emp);

Find the 3rd MAX & MIN salary in the emp table

Scenario: Find the 3rd MAX & MIN salary in the emp table.

Solution:
Max - select distinct sal from emp e1 where 3 = (select count(distinct sal) from emp e2 where e1.sal <= e2.sal);
Min - select distinct sal from emp e1 where 3 = (select count(distinct sal) from emp e2 where e1.sal >= e2.sal);

Sql query to find EVEN & ODD NUMBERED records from a table

Scenario: Sql query to find EVEN & ODD NUMBERED records from a table.

Solution:
Even - select * from emp where rowid in (select decode(mod(rownum,2),0,rowid,null) from emp);
Odd - select * from emp where rowid in (select decode(mod(rownum,2),0,null,rowid) from emp);

SQL questions which are the most frequently asked in interviews (Complex Queries in SQL - Oracle)

 To fetch ALTERNATE records from a table (EVEN NUMBERED):
select * from emp where rowid in (select decode(mod(rownum,2),0,rowid,null) from emp);
 To select ALTERNATE records from a table (ODD NUMBERED):
select * from emp where rowid in (select decode(mod(rownum,2),0,null,rowid) from emp);
 Find the 3rd MAX salary in the emp table:
select distinct sal from emp e1 where 3 = (select count(distinct sal) from emp e2 where e1.sal <= e2.sal);
 Find the 3rd MIN salary in the emp table:
select distinct sal from emp e1 where 3 = (select count(distinct sal) from emp e2 where e1.sal >= e2.sal);
 Select FIRST n records from a table:
select * from emp where rownum <= &n;
 Select LAST n records from a table:
select * from emp minus select * from emp where rownum <= (select count(*) - &n from emp);
 How to get nth max salaries:
select distinct sal from emp a where &n = (select count(distinct sal) from emp b where a.sal <= b.sal);
 How to get 3 Max salaries:
select distinct sal from emp a where 3 >= (select count(distinct sal) from emp b where a.sal <= b.sal) order by a.sal desc;
 How to get 3 Min salaries:
select distinct sal from emp a where 3 >= (select count(distinct sal) from emp b where a.sal >= b.sal);
 How to delete duplicate rows in a table:
delete from emp a where rowid != (select max(rowid) from emp b where a.empno = b.empno);
 Count of number of employees department wise:
select count(EMPNO), b.deptno, b.dname from emp a, dept b where a.deptno(+) = b.deptno group by b.deptno, b.dname;
 Select DISTINCT RECORDS from emp table:
select * from emp a where rowid = (select max(rowid) from emp b where a.empno = b.empno);
 List dept no and dept name for all the departments in which there are no employees:
select * from dept where deptno not in (select deptno from emp);
alternate solution: select * from dept a where not exists (select * from emp b where a.deptno = b.deptno);
alternate solution: select empno, ename, b.deptno, dname from emp a, dept b where a.deptno(+) = b.deptno and empno is null;
 Suppose there is annual salary information provided by the emp table. How to fetch the monthly salary of each and every employee:
select ename, sal/12 as monthlysal from emp;
 Select all records where ename starts with 'S' and its length is 6 characters:
select * from emp where ename like 'S_____';
 Select all records where ename may be any number of characters but it should end with 'R':
select * from emp where ename like '%R';
 Count MGR and their salary in emp table:
select count(MGR), count(sal) from emp;
 In emp table add comm+sal as total sal:
select ename, (sal + nvl(comm, 0)) as totalsal from emp;
 Select any salary < 3000 from emp table:
select * from emp where sal > any (select sal from emp where sal < 3000);
 Select all salary < 3000 from emp table:
select * from emp where sal > all (select sal from emp where sal < 3000);
 Select all the employees grouped by deptno and sal in descending order:
select ename, deptno, sal from emp order by deptno, sal desc;
 How can I create an empty table emp1 with the same structure as emp:
create table emp1 as select * from emp where 1 = 2;
 How to retrieve records where sal is between 1000 and 2000:
select * from emp where sal >= 1000 and sal < 2000;
 Select all records where the dept no of both emp and dept tables match:
select * from emp where exists (select * from dept where emp.deptno = dept.deptno);
 If there are two tables emp1 and emp2, and both have common records, how can I fetch all the records but common records only once:
(select * from emp) union (select * from emp1);
 How to fetch only common records from two tables emp and emp1:
(select * from emp) intersect (select * from emp1);
 How can I retrieve all records of emp1 that are not present in emp2:
(select * from emp) minus (select * from emp1);
 Count the total sal deptno wise where more than 2 employees exist:
SELECT deptno, sum(sal) AS totalsal FROM emp GROUP BY deptno HAVING COUNT(empno) > 2;
 Select all records from emp table where deptno = 10 or 40:
select * from emp where deptno = 10 or deptno = 40;
 Select all records from emp table where deptno = 30 and sal > 1500:
select * from emp where deptno = 30 and sal > 1500;
 Select all records from emp where job is not SALESMAN or CLERK:
select * from emp where job not in ('SALESMAN', 'CLERK');
 Select all records from emp where ename is in 'BLAKE', 'SCOTT', 'KING' and 'FORD':
select * from emp where ename in ('BLAKE', 'SCOTT', 'KING', 'FORD');

Informatica Quiz: Set 2

Q: A lookup transformation is used to look up data in:
 flat file
 relational table
 view
 synonyms
 All of the above (correct)

Q: Which value returned by the NewLookupRow port says that the Integration Service does not update or insert the row in the cache?
 3
 2
 1
 0 (correct)

Q: Which one needs a common key to join?
 source qualifier
 joiner (correct)
 lookup

Q: Which one supports a heterogeneous join?
 source qualifier
 joiner (correct)
 lookup

Q: What is the use of target load order?
 The data is first loaded in the dimension table and then in the fact table. (correct)
 The data is first loaded in the fact table and then in the dimension table.
 Load the data from different targets at the same time.

Q: Which one is not a tracing level?
 terse
 verbose
 initialization
 verbose initialization
 terse initialization (correct)

Q: Which output file is not created during a session run?
 Session log
 workflow log
 Error log
 Bad files
 cache files (correct)

Q: Is the Fact table normalised?
 yes
 no (correct)

Q: Which value returned by the NewLookupRow port says that the Integration Service inserts the row into the cache?
 0
 1 (correct)
 2
 3

Q: Which transformation only works on a relational source?
 lookup
 Union
 joiner
 Sql (correct)

Q: Which are both connected and unconnected?
 External Stored Procedure
 Stored Procedure (correct)
 Lookup (correct)
 Advanced External Procedure Transformation

Q: Can we generate an alpha-numeric value in the sequence generator?
 yes
 no (correct)

Q: Which transformation is used by a COBOL source?
 Advanced External Procedure Transformation
 Cobol Transformation
 Unstructured Data Transformation
 Normalizer (correct)

Q: What is the VSAM normalizer transformation?
 The source qualifier transformation for a COBOL source definition. (correct)
 The source qualifier transformation for a flat file source definition.
 The source qualifier transformation for an XML source definition.
 None of these

Informatica Quiz: Set 1

Q: Which one is not correct about the filter transformation?
 It generally parses a single condition; for multiple conditions we can use a router
 It acts as a 'where' condition
 It can't pass multiple conditions
 It acts like 'Case' in PL/SQL (correct)
 If one record does not match the condition, the record is blocked

Q: Can we calculate in the aggregator?
 No
 Yes (correct)

Q: Which one is not a type of fact?
 Semi-additive
 Additive
 Confirm fact
 Not additive

Q: Which one is not a type of dimension?
 Conformed dimension
 Rapidly changing dimension (correct)
 Junk dimension
 Degenerated dimension

Q: Which of these is not correct about a Code Page?
 A code page contains encoding to specify characters in a set of one or more languages
 A code page contains decoding to specify characters in a set of one or more languages (correct)
 In this way an application stores, receives, and sends character data

Q: What is a mapplet?
 A combination of reusable transformations
 A combination of reusable mappings
 A set of transformations that allows us to reuse logic (correct)
 None of these

Q: What does reusable transformation mean?
 It can be re-used across repositories
 It can only be used in a mapplet
 It can be used in multiple mappings only once
 It can be used in multiple mappings multiple times (correct)

Q: Which one is not an option in the update strategy?
 dd_reject
 4 (correct)
 2
 dd_delete

Q: Can we update records without using an update strategy?
 Yes (correct)
 No

Q: How do you select distinct records from a Source Qualifier?
 Choose the 'non duplicate' option
 Choose the 'select distinct' option (correct)
 Choose the 'Select non duplicate' option

Q: What type of repository is not available in Informatica Repository Manager?
 Standalone Repository
 Local Repository
 User Defined
 Versioned Repository
 Manual Repository

Q: Joiner does not support flat files.
 False (correct)
 True

Q: How do you execute a PL/SQL script from an Informatica mapping?
 Lookup
 Stored Procedure (correct)
 Expression
 None of these

Q: NetSal = basic + hra. In which transformation can we achieve this?
 Aggregator
 Lookup
 Filter
 Expression (correct)

Q: Which one is not an active transformation?
 Sequence generator
 Normalizer
 Sql
 Stored Procedure

How large is the database, used and free space?

Scenario: How large is the database, and how much of it is used and free space?

Solution:

select round(sum(used.bytes) / 1024 / 1024 / 1024) || ' GB' "Database Size",
       round(sum(used.bytes) / 1024 / 1024 / 1024) - round(free.p / 1024 / 1024 / 1024) || ' GB' "Used space",
       round(free.p / 1024 / 1024 / 1024) || ' GB' "Free space"
from   (select bytes from v$datafile
        union all
        select bytes from v$tempfile
        union all
        select bytes from v$log) used,
       (select sum(bytes) as p from dba_free_space) free
group by free.p;

Batch File to Append Date to file name

Scenario: Batch file to append the date to a file name.

Solution:

@echo off
REM Create a log file with the current date and time in the filename
REM the ~4 in the Date skips the first four characters of the echoed date stamp and writes the remainder, and so on
set LOG_FILE_NAME=Example_File_Name.%date:~4,2%%date:~7,2%%date:~10,4%.%time:~0,2%%time:~3,2%%time:~6,2%.txt
Echo This is much easier in UNIX > c:\temp\%LOG_FILE_NAME%
:exit

OR

@echo off
for /F "tokens=2,3,4 delims=/ " %%i in ('date/t') do set y=%%k
for /F "tokens=2,3,4 delims=/ " %%i in ('date/t') do set d=%%k%%i%%j
for /F "tokens=5-8 delims=:. " %%i in ('echo.^| time ^| find "current" ') do set t=%%i%%j
set t=%t%_
if "%t:~3,1%"=="_" set t=0%t%
set t=%t:~0,4%
set "theFilename=%d%%t%"
echo %theFilename%

PL/SQL Interview Questions

1. What is PL/SQL?
PL/SQL is Oracle's Procedural Language extension to SQL. PL/SQL's language syntax, structure and datatypes are similar to those of ADA. The language includes object oriented programming techniques such as encapsulation, function overloading and information hiding (all but inheritance), and so brings state-of-the-art programming to the Oracle database server and a variety of Oracle tools. PL/SQL is a block structured programming language that combines data manipulation and data processing power. It supports all SQL data types and also has its own data types, i.e. BOOLEAN and BINARY_INTEGER.

2. What is the basic structure of PL/SQL?
A PL/SQL block has three parts: a declarative part, an executable part, and an exception-handling part. First comes the declarative part, in which items can be declared. Once declared, items can be manipulated in the executable part. Exceptions raised during execution can be dealt with in the exception-handling part.

3. What are the components of a PL/SQL block?
Declare: optional - variable declarations
Begin: mandatory - procedural statements
Exception: optional - any errors to be trapped
End: mandatory
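A minimal anonymous block showing the three parts (the emp lookup is illustrative):

DECLARE
  v_sal NUMBER;                                          -- declarative part
BEGIN
  SELECT sal INTO v_sal FROM emp WHERE empno = 7369;     -- executable part
  DBMS_OUTPUT.PUT_LINE('Salary: ' || v_sal);
EXCEPTION
  WHEN NO_DATA_FOUND THEN                                -- exception-handling part
    DBMS_OUTPUT.PUT_LINE('No such employee.');
END;
/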

Exception : any End : 5. What are Following are Scalar BINARY_INTEGER DEC DECIMAL DOUBLE FLOAT INT INTEGER NATURAL NATURALN NUMBER NUMERIC PLS_INTEGER POSITIVE POSITIVEN REAL SIGNTYPE SMALLINT CHAR CHARACTER LONG LONG NCHAR NVARCHAR2 RAW ROWID STRING UROWID VARCHAR VARCHAR2 DATE INTERVAL INTERVAL TIMESTAMP TIMESTAMP TIMESTAMP BOOLEAN Composite RECORD TABLE VARRAY LOB BFILE BLOB CLOB errors to be Optional trapped Mandatory in PL/SQL ? the datatypes a available the datatype supported in oracle PLSQL Types PRECISION RAW DAY YEAR WITH WITH LOCAL TO TO TIME TIME SECOND MONTH ZONE ZONE Types Types .

So row 15 could have a value of `Fox' and row 15446 a value of `Red'. It is. % ROWTYPE provides the record type that represents a entire row of a table or view or columns selected in the cursor. The PL/SQL table grows dynamically as you add more rows to the table. no rows for PL/SQL tables are allocated for this structure when it is defined.g. Each row simply contains the same set of columns. It is. if one change the type or size of the column in the table. similar to a one-dimensional array. The range of a BINARY_INTEGER is from -231-1 to 231-1.deptno from emp. With PL/SQL Release 2. Sparse In a PL/SQL table. TYPE rec RECORD is to be used whenever query returns columns of different table or views and variables. The advantages are : I.NCLOB Reference REF REF object_type 6. This number acts as the "primary key" of the PL/SQL table. 7. in this way. What Types CURSOR are % TYPE and % ROWTYPE ? What are the advantages of using these over datatypes?% TYPE provides the data type of a variable or a database column to that variable.implicit cursor returns only one row. Related to this definition. in this way. the data type of a variable changes accordingly. indexed by integers One-dimensional A PL/SQL table can have only one column. very different from an array. e_rec emp% ROWTYPE cursor c1 is select empno. TYPE r_emp is RECORD (eno emp. homogeneous. 8. however. Indexed by integers PL/SQL tables currently support a single indexing mode: by BINARY_INTEGER. Homogeneous elements Because a PL/SQL table can have only a single column. Need not know about variable's data type ii. Unbounded or Unconstrained There is no predefined limit to the number of rows in a PL/SQL table. homogeneous. sparse collection of homogenous elements. a row exists in the table only when a value is assigned to that row.empno% type. Advantage is. The PL/SQL table is. e_rec c1 %ROWTYPE. with no other rows defined in between. What is a cursor ? Why Cursor is required ? Cursor is a named private SQL area from where information can be Cursors are required to process rows individually for queries returning multiple rows. What is PL/SQL table ? A PL/SQL table is a one-dimensional. Instead you can assign a value to any row in the table. E. What is difference between % ROWTYPE and TYPE RECORD ? % ROWTYPE is to be used whenever query returns a entire row of a table or view. implicit cursor: implicit cursor is a type of cursor which is automatically maintained by the Oracle server itself. 10. . therefore.3. If the database definition of a column in a table changes. Rows do not have to be defined sequentially.ename emp ename %type). so you have an awful lot of rows with which to work 9. you can have PL/SQL tables of records. all rows in a PL/SQL table contain values of the same datatype. unbounded. Explain the two type of Cursors ? accessed. The resulting table is still. it will be reflected in our program unit without making any change. %type is used to refer the column's datatype where as %rowtype is used to refer the whole record in a table.

Explicit cursor: an explicit cursor is defined by the programmer, and it returns more than one row. It has four phases: declare, open, fetch and close.

10. What are the PL/SQL statements used in cursor processing?
DECLARE CURSOR cursor_name IS ...;
OPEN cursor_name;
FETCH cursor_name INTO <variables or record types>;
CLOSE cursor_name;

11. What are the cursor attributes used in PL/SQL?
%ISOPEN - to check whether the cursor is open or not.
%ROWCOUNT - the number of rows fetched/updated/deleted.
%FOUND - to check whether the cursor has fetched any row; true if rows are fetched.
%NOTFOUND - to check whether the cursor has fetched any row; true if no rows are fetched.
These attributes are preceded with SQL for implicit cursors and with the cursor name for explicit cursors.

12. What is a cursor FOR loop?
A cursor FOR loop implicitly declares %ROWTYPE as the loop index, automatically opens the cursor, fetches rows of values from the active set into fields in the record, and closes the cursor when all the records have been processed. E.g.:

FOR emp_rec IN c1 LOOP
  salary_total := salary_total + emp_rec.sal;
END LOOP;

13. Explain the usage of the WHERE CURRENT OF clause in cursors.
PL/SQL provides the WHERE CURRENT OF clause for both UPDATE and DELETE statements inside a cursor, in order to allow you to easily make changes to the most recently fetched row of data. The general format is as follows:

UPDATE table_name SET set_clause WHERE CURRENT OF cursor_name;
DELETE FROM table_name WHERE CURRENT OF cursor_name;

Notice that the WHERE CURRENT OF clause references the cursor and not the record into which the next fetched row is deposited. The most important advantage of using WHERE CURRENT OF where you need to change the row fetched last is that you do not have to code in two (or more) places the criteria used to uniquely identify a row in a table. Without WHERE CURRENT OF, you would need to repeat the WHERE clause of your cursor in the WHERE clause of the associated UPDATEs and DELETEs. As a result, if the table structure changes in a way that affects the construction of the primary key, you have to make sure that each SQL statement is upgraded to support this change. If you use WHERE CURRENT OF, you only have to modify the WHERE clause of the SELECT statement.

It would be so much more convenient and natural to be able to code the equivalent of the following statements: "Delete the record I just fetched", or "Update these columns in that row I just fetched". This might seem like a relatively minor issue, but it is one of many areas in your code where you can leverage subtle features in PL/SQL to minimize code redundancies. Utilization of WHERE CURRENT OF, cursor FOR loops, local modularization, %TYPE and %ROWTYPE declaration attributes, and other PL/SQL language constructs can have a big impact on reducing the pain you may experience when you maintain your Oracle-based applications.

In the jobs cursor FOR loop above, I want to UPDATE the record that was currently FETCHed by the cursor. I do this in the UPDATE statement by repeating the same WHERE clause used in the cursor, because (task, year) makes up the primary key of this table:

WHERE task = job_rec.task AND year = TO_CHAR (SYSDATE, 'YYYY');

This is a less than ideal situation, as explained above: I have coded the same logic in two places, and this code must be kept synchronized. Let's see how the WHERE CURRENT OF clause would improve the previous example.

A perfect fit for WHERE CURRENT OF! The next version of my winterization program below uses this clause. I have also switched from a FOR loop to a simple loop because I want to exit conditionally from the loop:

DECLARE
  CURSOR fall_jobs_cur IS SELECT ... same as before ...;
  job_rec fall_jobs_cur%ROWTYPE;
BEGIN
  OPEN fall_jobs_cur;
  LOOP
    FETCH fall_jobs_cur INTO job_rec;
    IF fall_jobs_cur%NOTFOUND THEN
      EXIT;
    ELSIF job_rec.do_it_yourself_flag = 'YOUCANDOIT' THEN
      UPDATE winterize
      SET responsible = 'STEVEN'
      WHERE CURRENT OF fall_jobs_cur;
      COMMIT;
      EXIT;
    END IF;
  END LOOP;
  CLOSE fall_jobs_cur;
END;

14. What is a database trigger? Name some usages of database triggers.
A database trigger is a stored procedure that is invoked automatically when a predefined event occurs. Database triggers enable DBAs (Data Base Administrators) to create additional relationships between separate databases. For example, the modification of a record in one database could trigger the modification of a record in a second database.

15. How many types of database triggers can be specified on a table? What are they?
For each of INSERT, UPDATE and DELETE you can have Before Row, After Row, Before Statement and After Statement triggers, giving 12 combinations. If the FOR EACH ROW clause is specified, the trigger fires for each row affected by the statement. If a WHEN clause is specified, the trigger fires according to the returned Boolean value. Summarising, the different types of triggers are:
* Row Triggers and Statement Triggers
* BEFORE and AFTER Triggers
* INSTEAD OF Triggers
* Triggers on System Events and User Events

16. What are the two virtual tables available during database trigger execution?
The two virtual tables available are OLD and NEW; the table columns are referred to as OLD.column_name and NEW.column_name.
For triggers related to INSERT, only NEW.column_name values are available.
For triggers related to UPDATE, both OLD.column_name and NEW.column_name values are available.
For triggers related to DELETE, only OLD.column_name values are available.

17. What happens if a procedure that updates a column of table X is called in a database trigger of the same table?
A mutating table error occurs. To avoid the mutating table error, the procedure should be declared as an AUTONOMOUS TRANSACTION. By this the procedure will be treated as a separate identity.

18. Write the order of precedence for validation of a column in a table.
i. Validation done using database triggers.
ii. Validation done using integrity constraints.
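A minimal row-level trigger sketch using the OLD and NEW values (the audit table and trigger names are hypothetical):

CREATE OR REPLACE TRIGGER emp_sal_audit_trg
BEFORE UPDATE OF sal ON emp
FOR EACH ROW
WHEN (NEW.sal <> OLD.sal)
BEGIN
  -- Record the old and new salary for every changed row.
  INSERT INTO emp_sal_audit (empno, old_sal, new_sal, changed_on)
  VALUES (:OLD.empno, :OLD.sal, :NEW.sal, SYSDATE);
END;
/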

19. What is an exception? What are the types of exceptions?
An exception is an error condition raised during program execution. There are three types:

Predefined: Do not declare them; allow the Oracle server to raise them implicitly, e.g. NO_DATA_FOUND, TOO_MANY_ROWS, INVALID_CURSOR, ZERO_DIVIDE. Handle them with WHEN exception_name THEN ...

Non-predefined: Declare them within the declarative section and allow the Oracle server to raise them implicitly, associating the exception with an error number via PRAGMA EXCEPTION_INIT (exception, error_number).

User-defined: Declare them within the declarative section and raise them explicitly with RAISE, or use RAISE_APPLICATION_ERROR when a condition fails.

20. What are the return values of the functions SQLCODE and SQLERRM?
PL/SQL provides error information via two built-in functions, SQLCODE and SQLERRM. SQLCODE returns the current error code; for a user-defined exception it returns 1. SQLERRM returns the current error message text; for a user-defined exception it returns "User-Defined Exception".

21. What is PRAGMA EXCEPTION_INIT? Explain its usage.
PRAGMA EXCEPTION_INIT allows you to handle an Oracle predefined error with your own named exception and handle it in your own way, improving the readability of your program. It instructs the compiler to associate your exception name with an Oracle error number at compile time. It should be declared in the DECLARE section. Example:

declare
  salary number;
  FOUND_NOTHING exception;
  Pragma exception_init(FOUND_NOTHING, 100);
begin
  select sal into salary from emp where ename = 'ANURAG';
  dbms_output.put_line(salary);
exception
  WHEN FOUND_NOTHING THEN
    dbms_output.put_line(SQLERRM);
end;

22. What is RAISE_APPLICATION_ERROR?
RAISE_APPLICATION_ERROR is used to create your own error messages, which can be more descriptive than named exceptions. The syntax is:
raise_application_error(error_number, error_message);
where error_number is between -20000 and -20999.

23. Where are the predefined exceptions stored?
PL/SQL declares the predefined exceptions in the STANDARD package.

24. What are the modes of parameters that can be passed to a procedure?
1. IN: the IN parameter mode is used to pass values to the subprogram when invoked.
2. OUT: OUT is used to return values to the callers of subprograms.
3. IN OUT: it is used for both IN and OUT behaviour.

25. What is a stored procedure?
A stored procedure is a PL/SQL subprogram stored in the database: a program running in the database that can take complex actions based on the inputs you send it. Using a stored procedure is faster than doing the same work on a client, because the program runs right inside the database server. Stored procedures are normally written in PL/SQL or Java. The advantages of stored procedures are extensibility, modularity, reusability, maintainability and one-time compilation.

26. What are the two parts of a procedure?
PROCEDURE name (parameter list ...)
is
  local variable declarations
BEGIN
  executable statements
Exception
  exception handlers
End;

27. Give the structure of a function.
FUNCTION name (argument list ...) RETURN datatype is
  local variable declarations
Begin
  executable statements
Exception
  exception handlers
End;

28. Explain how procedures and functions are called in a PL/SQL block.
A procedure can be called in the following ways:
a) directly: <procedure name>(parameters);
b) EXECUTE <procedure name> from the calling environment;
c) from other procedures, functions or packages.
A function can be called in the following ways:
a) EXECUTE <function name> from the calling environment - always use a variable to get the return value;
b) as part of an SQL/PL SQL expression.

29. What are the two parts of a package?
The two parts of a package are the PACKAGE SPECIFICATION and the PACKAGE BODY. The package specification contains declarations that are global to the package and local to the schema. The package body contains the actual procedures, the local declarations of the procedures, and cursor declarations. The main advantages of packages are:
1. Modularity: we can put all related procedures/functions in one package.
2. Since a package keeps its specification and body separate, whenever any DDL is run and any proc/func inside the package depends on it, only the body gets invalidated and not the spec. So any other proc/func dependent on the package does not get invalidated.
3. Whenever any func/proc from the package is called, the whole package is loaded into memory, so all objects of the package are available in memory, which means faster execution if any other one is called. Since we put all related procs/funcs in one package, this feature is useful as we may need to run most of the objects.
4. We can declare global variables in the package.

30. What is the difference between a cursor declared in a procedure and a cursor declared in a package specification?
A cursor declared in a package specification is global and can be accessed by other procedures, or by procedures in a package. A cursor declared in a procedure is local to the procedure and cannot be accessed by other procedures. Example:

create or replace package curpack is
  cursor c1 is select * from emp;
end curpack;

This will create a package. Now you can use this cursor anywhere. Like:

set serveroutput on
begin
  for r1 in curpack.c1 loop
    dbms_output.put_line(r1.empno || ' ' || r1.ename);
  end loop;
end;

This will display all empnos and enames. It will be better to use a ref cursor in packages.

31. How are packaged procedures and functions called from the following?
a) Stored procedure or anonymous block:
PACKAGE NAME.PROCEDURE NAME (parameters);
variable := PACKAGE NAME.FUNCTION NAME (arguments);
b) An application program such as PRO*C or PRO*COBOL:
EXEC SQL EXECUTE
BEGIN
PACKAGE NAME.PROCEDURE NAME (parameters);
variable := PACKAGE NAME.FUNCTION NAME (arguments);
END;
END EXEC;
c) SQL*PLUS:
EXECUTE PACKAGE NAME.PROCEDURE NAME (parameters); - if the procedure does not have any out/in-out parameters. A function cannot be called this way.

32. Name the tables where the characteristics of packages, procedures and functions are stored.
The data dictionary tables/views where the characteristics of subprograms and packages are stored are:
a) USER_OBJECTS, ALL_OBJECTS, DBA_OBJECTS
b) USER_SOURCE, ALL_SOURCE, DBA_SOURCE
c) USER_ERRORS, ALL_ERRORS, DBA_ERRORS
d) USER_DEPENDENCIES

33. What is overloading of procedures?
Overloaded procedures are two or more procedures with the same name but different arguments. The arguments need to differ by type class itself; CHAR and VARCHAR2 are from the same class, so they do not distinguish overloads.

38. What is the difference between a PROCEDURE and a FUNCTION?
A function always returns a value, while a procedure can return one or more values through its parameters. A function can be called directly in a SQL statement, e.g. select func_name from dual, while a procedure cannot.

The main advantages of packages are:
1. Since a package has its specification and body separate, whenever any DDL is run on which a proc/func inside the package depends, only the body gets invalidated and not the spec. So any other proc/func dependent on the package does not get invalidated.
2. Whenever any func/proc from the package is called, the whole package is loaded into memory, so all objects of the package are available in memory, which means faster execution if any of them is called. And since we put all related procs/funcs in one package, this feature is useful, as we may need to run most of the objects.
3. We can declare global variables in the package.

39. Is it possible to use transaction control statements such as ROLLBACK or COMMIT in a database trigger? Why?
A commit/rollback in a nested transaction will commit/rollback all other DML done in the transaction before that point. Here is a simple example to understand this:

ora816 SamSQL:> declare
  2     procedure InsertInTest_Table_B is
  3     BEGIN
  4        INSERT INTO Test_Table_B(x) values (1);
  5        COMMIT;
  6     END;
  7  BEGIN
  8     INSERT INTO Test_Table_A(x) values (123);
  9     InsertInTest_Table_B;
 10     ROLLBACK;
 11  END;
 12  /
PL/SQL procedure successfully completed.

ora816 SamSQL:> Select * from Test_Table_A;
         X
----------
       123

ora816 SamSQL:> Select * from Test_Table_B;
         X
----------
         1

Notice in the above PL/SQL that the COMMIT at line 5, executed inside the procedure call, commits both the insert at line 4 and the earlier insert at line 8. The ROLLBACK at line 10 actually did nothing. An autonomous transaction, a feature of Oracle 8i, maintains the state of its own transactions and saves them independently of the commit or rollback of the surrounding transaction; PRAGMA AUTONOMOUS_TRANSACTION overrides the default behavior. Let us see the same example with PRAGMA AUTONOMOUS_TRANSACTION (see the sketch below).

40. What are data concurrency and consistency?
Concurrency: how well multiple sessions can access the same data simultaneously.
Consistency: how consistent the view of the data is between and within multiple sessions, transactions or statements.
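A minimal sketch of the autonomous variant, using the same Test_Table_A/Test_Table_B as above. With the pragma in place, the inner COMMIT no longer touches the caller's transaction, so the final ROLLBACK really does undo the insert into Test_Table_A:

DECLARE
   PROCEDURE InsertInTest_Table_B IS
      PRAGMA AUTONOMOUS_TRANSACTION;
   BEGIN
      INSERT INTO Test_Table_B(x) VALUES (1);
      COMMIT;    -- commits only the autonomous insert into Test_Table_B
   END;
BEGIN
   INSERT INTO Test_Table_A(x) VALUES (123);
   InsertInTest_Table_B;
   ROLLBACK;     -- now genuinely rolls back the insert into Test_Table_A
END;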

41. Talk about exception handling in PL/SQL.
Exceptions are written to handle the exceptions thrown by programs. We have user-defined and system exceptions. System exceptions are raised due to invalid data; you do not have to declare these. A few examples are NO_DATA_FOUND, WHEN OTHERS, and so on. User-defined exceptions are exception names given by the user (explicitly declared and used), and they are raised with RAISE to handle the specific behaviour of a program.

44. What is bulk binding? Please explain in brief.
BULK COLLECT and FORALL are new features in Oracle 8i, 9i and 10g PL/SQL that can really make a difference to your PL/SQL performance. Bulk binds (BULK COLLECT, FORALL) are a PL/SQL technique where, instead of multiple individual SELECT, INSERT, UPDATE or DELETE statements being executed to retrieve data from, or store data in, a table, all of the operations are carried out at once, in bulk. This avoids the context switching you get when the PL/SQL engine has to pass over to the SQL engine, then back to the PL/SQL engine, when you access rows individually, one at a time. If we use a simple FOR loop in a PL/SQL block, it does a context switch between the SQL and PL/SQL engines for each row processed, which degrades the performance of the PL/SQL block. So, to avoid the context switching between the two engines, we use the FORALL keyword together with a collection (PL/SQL table) for DML: to do bulk binds with INSERT, UPDATE and DELETE statements, you enclose the SQL statement within a PL/SQL FORALL statement. To do bulk binds with SELECT statements, you include the BULK COLLECT INTO <collection> clause in the SELECT statement instead of using a simple INTO. It gives good results and a performance increase.

46. Can we use commit or rollback commands in the exception part of a PL/SQL block?
Yes, we can use the TCL commands (commit/rollback) in the exception block of a stored procedure/function. The code in this part of the program gets executed like the code in the body, without any restriction. You can include any business functionality whenever a condition in the main block (the body of a proc/func) fails and a follow-through process is required to terminate the execution gracefully. For example:
DECLARE
   ...
BEGIN
   ...
EXCEPTION
   WHEN NO_DATA_FOUND THEN
      INSERT INTO err_log (err_code, code_desc)
      VALUES ('1403', 'No data found');
      COMMIT;
END;
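Returning to question 44, a minimal bulk-binding sketch against the stock emp demo table (the 10% raise is arbitrary, just to have some DML to batch):

DECLARE
   TYPE emp_ids_t IS TABLE OF emp.empno%TYPE;
   l_ids emp_ids_t;
BEGIN
   -- One context switch to fetch all the rows:
   SELECT empno BULK COLLECT INTO l_ids FROM emp WHERE deptno = 10;

   -- One context switch to run all the updates:
   FORALL i IN 1 .. l_ids.COUNT
      UPDATE emp SET sal = sal * 1.1 WHERE empno = l_ids(i);
END;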

47. Can I write a PL/SQL block inside the exception section?
Yes, you can write a PL/SQL block inside the exception section. Suppose you want to insert the exception detail into your error log table: you can write the insert statement in the exception part. A nested block there also lets you handle any exception which may be raised in your exception part itself.

49. What are the restrictions on functions?
A function cannot have DML statements, though we can use a SELECT statement in a function. If you create a function with DML statements, you get the message that the function is created, but if we then use it in a SELECT statement, we get an error.

50. Can we truncate some of the rows from a table instead of truncating the full table?
You can truncate a few rows from a table if the table is partitioned: you can truncate a single partition and keep the remaining ones. For example:

CREATE TABLE parttab (
   state VARCHAR2(2),
   sales NUMBER(10,2))
PARTITION BY LIST (state) (
   PARTITION northwest VALUES ('OR', 'WA') TABLESPACE uwdata,
   PARTITION southwest VALUES ('AZ', 'CA') TABLESPACE uwdata);

INSERT INTO parttab VALUES ('OR', 100000);
INSERT INTO parttab VALUES ('WA', 200000);
INSERT INTO parttab VALUES ('AZ', 300000);
INSERT INTO parttab VALUES ('CA', 400000);
COMMIT;

SELECT * FROM parttab;
ALTER TABLE parttab TRUNCATE PARTITION southwest;
SELECT * FROM parttab;

52. What is a PL/SQL table?
A PL/SQL table is a type of datatype in the procedural language extension; it is nothing but a one-dimensional array. It is used to hold similar types of data for temporary storage and is indexed by binary integer. It has two columns, one for the index (say, a binary index) and another for the data, and it might further extend to any number of rows (not columns) in future.

54. What is the difference between a reference cursor and a normal cursor?
REF cursors are different from your typical, standard cursors. With standard (static) cursors, you know the cursor's query ahead of time. With REF cursors, you do not have to know the query ahead of time: you can build the cursor on the fly. A reference cursor is thus used to create a dynamic cursor, whereas a normal cursor is a static cursor. There are two types of ref cursors: 1. weak cursors and 2. strong cursors:
   TYPE ref_name IS REF CURSOR [RETURN return_type];
where the return type is a %ROWTYPE. If the return type is mentioned, it is a strong cursor; otherwise it is a weak cursor.

56. What happens when a package is initialized?
When a package is initialized, that is, called for the first time, the entire package is loaded into the SGA and any variable declared in the package is initialized.
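Returning to question 54, a small sketch showing both a strong and a weak ref cursor in action (assuming the stock emp/dept demo schema):

DECLARE
   TYPE EmpCurTyp     IS REF CURSOR RETURN emp%ROWTYPE;  -- strong: shape fixed
   TYPE GenericCurTyp IS REF CURSOR;                     -- weak: any shape
   emp_cv  EmpCurTyp;
   any_cv  GenericCurTyp;
   emp_rec emp%ROWTYPE;
   dname   dept.dname%TYPE;
BEGIN
   OPEN emp_cv FOR SELECT * FROM emp;       -- must match emp%ROWTYPE
   FETCH emp_cv INTO emp_rec;
   CLOSE emp_cv;

   OPEN any_cv FOR SELECT dname FROM dept;  -- any query shape is allowed
   FETCH any_cv INTO dname;
   CLOSE any_cv;
END;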

57. What is a ref cursor used for (continued)?
A ref cursor is a type which is going to hold a set of records that can be sent out through procedure or function OUT variables; we can also use a ref cursor as an IN OUT parameter. A normal cursor is used to process more than one record in PL/SQL. The reference cursor does not support the FOR UPDATE clause.

58. What is the difference between ALL_ and USER_ tables?
An ALL_ view displays all the information accessible to the current user, including information from the current user's schema as well as information from objects in other schemas, if the current user has access to those objects by way of grants of privileges or roles. A USER_ view displays all the information from the schema of the current user; no special privileges are required to query these views. For example, the USER_TABLES data dictionary view contains all the tables created by the user under that schema, whereas ALL_TABLES also shows the tables created in different schemas: if a user id has the grants to access a table of a different schema, he can see that table through this dictionary view.

59. What are p-code and source code?
P-code is pre-compiled code stored in the public cache memory of the System Global Area after the Oracle instance is started. Every session of Oracle accesses the p-code of the objects on which it has execute permission. Source code is the simple code of a stored procedure, function, package or trigger, say the PL/SQL block that the user types; it is stored in the Oracle system data dictionary (for example, USER_SOURCE) for user-defined stored procedures, functions, packages and triggers. P-code is the source code after syntax check, semantic check and parse tree generation: the final p-code is ready for data fetch or manipulation and for further execution of the parse tree.

61. Is there any limitation on the number of triggers that can be created on a table?
There is no limit on the number of triggers on one table: if a table has got n columns, you can write as many triggers as you want for insert, update or delete, by different columns.

63. What happens when a DML statement fails? (A. User-level rollback B. Statement-level rollback C. System-level rollback)
When a DML statement fails, Oracle performs a statement-level rollback: the work done by that failed statement alone is undone, while the earlier statements of the transaction remain pending.

64. Based on what conditions can we decide whether to use a table, a view or a materialized view?
A table is the basic entity in any RDBMS, so for storing data you need a table.
Create a view if you have a complex query from which you want to extract data again and again, and moreover it is standard data which is required by many other users as well, for report generation. Avoid insert/update/delete through a view unless it is essential; keep the view read-only (for showing reports). A view used this way is common in data warehousing, and remember that in data warehousing we deal in GB or TB data sizes.
Create a materialized view when: [1] the data is in bulk and you need the same data in more than one database; then create a summary table in one database and replicas (materialized views) in the other databases; [2] you have summary columns in the projection list of the query. So if you have two databases and you want a view in both, create a summary table in one database and make the replica (materialized view) in the other.
The main advantages of a materialized view over a simple view are: [1] it saves the data in the database, whereas only a simple view's definition is saved; [2] you can create partitions or indexes on a materialized view to enhance the performance of the view, but you cannot on a simple view.
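A minimal sketch of a summary-style materialized view over the stock emp table (the view name and refresh policy here are illustrative choices, not the only options):

CREATE MATERIALIZED VIEW mv_dept_sales
BUILD IMMEDIATE                -- populate it at creation time
REFRESH COMPLETE ON DEMAND     -- re-run the full query when asked
AS
SELECT deptno, SUM(sal) AS total_sal
FROM   emp
GROUP  BY deptno;

Unlike a simple view, the aggregated rows are physically stored, so reports can read mv_dept_sales without re-running the aggregation each time.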

65. What steps should a programmer follow for better tuning of PL/SQL blocks?
SQL Queries – Best Practices:
1. Always use a WHERE clause in your SELECT statement to narrow the number of rows returned. If we don't use a WHERE clause, Oracle performs a full table scan on our table and returns all of the rows.
2. Use function-based indexes where possible.
3. Do not use SQL functions (e.g. substr, decode, rtrim, ltrim, concatenation, etc.) in predicate clauses or WHERE clauses on indexed columns, as this prevents the use of the index.
4. Avoid a full-table scan if it is more efficient to get the required rows through an index. Oracle decides on a full table scan if it has to read more than about 5% of the table data (for large tables).
5. Avoid using an index that fetches 10,000 rows from the driving table if you could instead use another index that fetches 100 rows; choose selective indexes.
6. For maximum performance when joining two or more tables, the indexes on the columns to be joined should have the same data type. Indexes can't be used when Oracle is forced to perform implicit datatype conversion.
7. Use equijoins. It is better if you use indexed-column joins.
8. Choose the join order so you will join fewer rows to tables later in the join order: use the smaller table as the driving table, and have the first join discard most rows. Set up the driving table to be the one containing the filter condition that eliminates the highest percentage of the table.
9. In a WHERE clause (or HAVING clause), constants or bind variables should always be on the right-hand side of the operator.
10. Avoid WHERE clauses that are non-sargable. Non-sargable search arguments in the WHERE clause, such as "IS NULL", "<>", "!=", "!>", "!<", "NOT", "NOT EXISTS", "NOT IN", "NOT LIKE", "OR" and "LIKE '%500'", expressions that include a function on a column, expressions that have the same column on both sides of the operator, and comparisons against expressions can all prevent the query optimizer from using an index to perform a search.
11. Convert multiple OR clauses to UNION ALL.
12. Use an EXISTS clause instead of an IN clause, as it is more efficient than IN and performs faster. Ex: Replace
   SELECT * FROM DEPT WHERE DEPTNO IN (SELECT DEPTNO FROM EMP E)
with
   SELECT * FROM DEPT D WHERE EXISTS (SELECT 1 FROM EMP E WHERE D.DEPTNO = E.DEPTNO)
Note: IN checks all rows. Only use IN if the table in the sub-query is extremely small. In addition, when you have a choice of using the IN or the BETWEEN clause in your SQL, use BETWEEN, as it is much more efficient than IN. Depending on the range of numbers in a BETWEEN, the optimizer will choose either to do a full table scan or to use the index.

13. If you want the index used, don't perform an operation on the field. Replace
   SELECT * FROM EMPLOYEE WHERE SALARY + 1000 = :NEWSALARY
with
   SELECT * FROM EMPLOYEE WHERE SALARY = :NEWSALARY - 1000
Similarly, instead of
   SELECT * FROM EMP WHERE SUBSTR(ENAME, 1, 3) = 'KES'
use the LIKE function instead of SUBSTR().
14. Minimize the use of DISTINCT because it forces a sort operation.
15. Use bind variables in queries passed from the application (PL/SQL) so that the same query can be reused. This avoids parsing.
16. Applications should use the same SQL statements wherever possible to take advantage of Oracle's Shared SQL Area. The SQL must match exactly to take advantage of this.
17. Try joins rather than sub-queries, which result in implicit joins. Replace
   SELECT * FROM A WHERE A.CITY IN (SELECT B.CITY FROM B)
with
   SELECT A.* FROM A, B WHERE A.CITY = B.CITY
18. Replace an outer join with a union if both join columns have a unique index. Replace
   SELECT A.CITY, B.CITY FROM A, B WHERE A.STATE = B.STATE (+)
with
   SELECT A.CITY, B.CITY FROM A, B WHERE A.STATE = B.STATE
   UNION
   SELECT NULL, B.CITY FROM B
   WHERE NOT EXISTS (SELECT 'X' FROM A WHERE A.STATE = B.STATE)
19. Match SQL where possible. All reserved words will be capitalized and all user-supplied objects will be in lower case, so all SQL statements will be in mixed upper and lower case. (Standard)
20. The following operations always require a sort:
   SELECT DISTINCT
   SELECT UNIQUE
   SELECT ... ORDER BY ...
   SELECT ... GROUP BY ...
   CREATE INDEX
   CREATE TABLE ... AS SELECT with primary key specification
   use of the INTERSECT, MINUS and UNION set operators
   unindexed table joins
   some correlated sub-queries
21. Use Parallel Query and Parallel DML if your system has more than 1 CPU.
22. No matter how many indexes are created, how much optimization is done to queries, or how many caches and buffers are tweaked and tuned, if the design of a database is faulty, the performance of the overall system suffers. A good application starts with a good design.
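A minimal sketch of point 15, passing a bind variable in dynamic SQL from PL/SQL so the statement text stays identical and reusable in the Shared SQL Area (the query itself is just an example):

DECLARE
   l_count NUMBER;
BEGIN
   EXECUTE IMMEDIATE
      'SELECT COUNT(*) FROM emp WHERE deptno = :dno'
      INTO l_count
      USING 20;                     -- the bind value; the SQL text never changes
   dbms_output.put_line(l_count);
END;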

Also, the order in which the conditions are given in the WHERE clause is very important when performing a SELECT query. The lookup for matches in the table is performed by taking the conditions in the WHERE clause in reverse order, so the conditions must be ordered in such a way that the last condition gives the minimum collection of potential match rows, the next condition passes on even fewer, and so on. For example, in
   SELECT Emp_id FROM Emp_table
   WHERE Last_Name = 'Smith' AND Middle_Initial = 'K' AND Gender = 'Female';
first all the rows that match the criterion Gender = 'Female' are returned, and in these returned rows the condition Last_Name = 'Smith' is looked up. If we fine-tune the above query, it should look like
   SELECT Emp_id FROM Emp_table
   WHERE Gender = 'Female' AND Middle_Initial = 'K' AND Last_Name = 'Smith';
as Last_Name = 'Smith' would return far fewer rows than Gender = 'Female'. The performance difference goes unnoticed unless the query is run on a massive database.

69. What is the difference between user-level, statement-level and system-level rollback? Can you give an example of each?
1. User level: you can (indeed, have to) issue SAVEPOINT commands and roll back to a savepoint on errors yourself.
2. Statement level: rolls back only the current (ODBC) statement on errors (in the case of 8.0 or later version servers). The driver calls a SAVEPOINT command just before starting each statement and automatically ROLLBACKs to the savepoint on errors, or RELEASEs it on success. If you expect Oracle-like automatic per-statement rollback, please use this level.
3. System (transaction) level: rolls back the current transaction entirely on errors. This is the behavior of old drivers, because PG has transaction-level errors; it was the only functionality available before savepoints. Please note that you have to roll back the current transaction, or ROLLBACK to a savepoint, on errors yourself in order to continue the application.

71. What are PL/SQL tables? Is a cursor variable stored in a PL/SQL table?
A PL/SQL table is a temporary table which is used to store records temporarily in a PL/SQL block; whenever the block completes execution, the table is also finished.

72. What is the datatype of the primary key (index) of a PL/SQL table?
BINARY_INTEGER.

74. What is the difference between a varray and a nested table? Also give a small and sweet example of both.
Varrays and nested tables both belong to collections. The main difference is that a varray has an upper bound, whereas a nested table doesn't: its size is unconstrained, like any other database table. A nested table can be stored in the database and can be indexed, whereas a varray can't be indexed.
Syntax of a varray:
   TYPE List_ints_t IS VARRAY(8) OF NUMBER(2);
   aList List_ints_t := List_ints_t(2, 5, 3, 4);
Syntax of a nested table:
   TYPE nes_tabtype IS TABLE OF emp.empno%TYPE;
   nes_tab nes_tabtype;
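A small runnable sketch of the varray/nested-table difference declared above (the extra values are arbitrary):

DECLARE
   TYPE List_ints_t IS VARRAY(8) OF NUMBER(2);
   TYPE nes_tabtype IS TABLE OF emp.empno%TYPE;
   aList   List_ints_t := List_ints_t(2, 5, 3, 4);
   nes_tab nes_tabtype := nes_tabtype();
BEGIN
   aList.EXTEND;                         -- fine: a varray can grow up to its bound (8)
   aList(aList.LAST) := 7;
   nes_tab.EXTEND(100);                  -- a nested table has no declared upper bound
   dbms_output.put_line(aList.LIMIT);    -- prints 8
   dbms_output.put_line(nes_tab.LIMIT);  -- prints nothing: LIMIT is NULL, i.e. unbounded
END;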

75. Details about FORCE VIEW: why and how can we use it?
Generally we are not supposed to create a view without its base table. A view created without a base table is called a force view, or an invalid view.
Syntax: CREATE FORCE VIEW <viewname> AS <select statement>;
Usually views are created from a base table only if the base table exists. The purpose of the FORCE keyword is to create the view even if the underlying base table does not exist. For example:
   create or replace FORCE view <viewname> as <query>
While using the above syntax to create a view, the table used in the query statement does not necessarily have to exist in the database. The view will be created with the message "View created with compilation errors". Once you create the table, the invalid view will become a valid one.

78. 1) Why is it recommended to use IN OUT instead of an OUT parameter type in a procedure? 2) What happens if we do not assign anything to an OUT parameter in a procedure?
1) The choice between IN OUT and OUT depends on the program's needs. An OUT parameter is useful for returning a value from the subprogram; it acts as an explicitly declared variable, so it can be assigned a value and its value can be assigned to another variable. An IN OUT parameter is used to pass a value to the subprogram and can also return a value to the caller, so IN OUT can be more useful than OUT. If you want to retain the value that is being passed, then use separate IN and OUT parameters; otherwise you can go for IN OUT. There is no harm in using an OUT parameter.
2) If nothing is assigned to an OUT parameter in a procedure, then NULL will be returned for that parameter.

79. What is an autonomous transaction? Where is it used?
An autonomous transaction is a transaction which acts independently from the calling part and can commit the work it has done. An example is using PRAGMA AUTONOMOUS_TRANSACTION in case a mutation problem happens in a trigger.

80. What is mutation of a trigger? Why and when does it occur?
A table is said to be a mutating table under the following three circumstances:
1) When you try to do a delete, update or insert into a table through a trigger, and at the same time you are trying to select from the same table.
2) The same applies to a view.
3) Apart from that, when you are deleting (delete cascade) or inserting on the parent table and doing a select on the child table.
All of these happen only in a row-level trigger.

83. How can I speed up the execution of a query when the number of rows in the tables has increased?
Standard practice is:
1. Index the columns (primary key).
2. Use the indexed / primary key columns in the WHERE clause.
3. Check the explain plan for the query and avoid nested loops / full table scans (depending on the size of the data retrieved and/or a master table with few rows).

90. How do you handle exceptions in a bulk operation?
During a bulk operation you can save the exceptions with SAVE EXCEPTIONS and then process them afterwards. Look at the example given below:

DECLARE
   TYPE NumList IS TABLE OF NUMBER;
   num_tab NumList := NumList(10, 0, 11, 12, 30, 0, 20, 199, 2, 0, 9, 1);
   errors  NUMBER;
BEGIN
   FORALL i IN num_tab.FIRST .. num_tab.LAST SAVE EXCEPTIONS
      DELETE FROM emp WHERE sal > 500000 / num_tab(i);
EXCEPTION
   WHEN OTHERS THEN
      -- this is not in the doco; thanks to JL for pointing this out
      errors := SQL%BULK_EXCEPTIONS.COUNT;
      dbms_output.put_line('Number of errors is ' || errors);

      FOR i IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
         dbms_output.put_line('Error ' || i || ' occurred during iteration ' ||
                              SQL%BULK_EXCEPTIONS(i).ERROR_INDEX ||
                              '; error code is ' ||
                              SQL%BULK_EXCEPTIONS(i).ERROR_CODE);
      END LOOP;
END;

91. 1. What is bulk collect? 2. What is an instead trigger? 3. What is the difference between an Oracle table and a PL/SQL table? 4. What are the built-in packages in Oracle? 5. What is the difference between row migration and row chaining?
1. Bulk collect is part of PL/SQL collections, where data is stored/popped up into a variable. Example:
declare
   type sal_rec is table of number;
   v_sal sal_rec;
begin
   select sal bulk collect into v_sal from emp;
   for r in 1..v_sal.count loop
      dbms_output.put_line(v_sal(r));
   end loop;
end;
2. Instead-of triggers are used for views.
3. An Oracle table is a logical entity which holds data in a data file permanently, whereas the scope of a PL/SQL table is limited to the particular block/procedure. Refer to the example above: the sal_rec table will hold data only until the programme reaches its end.

#1 What are the advantages and disadvantages of using PL/SQL or JAVA as the primary programming tool for database automation? #2 Will JAVA replace PL/SQL?
Internally the Oracle database supports two procedural languages, namely PL/SQL and Java. This leads to questions like "Which of the two is the best?" and "Will Oracle ever desupport PL/SQL in favour of Java?". Many Oracle applications are based on PL/SQL, and it would be difficult for Oracle to ever desupport PL/SQL. In fact, all indications are that PL/SQL still has a bright future ahead of it: many enhancements are still being made to it, and Oracle 9iDB supports native compilation of PL/SQL code to binaries. PL/SQL and Java appeal to different people in different job roles. The following briefly describes the differences between these two language environments:
PL/SQL:
- Data centric and tightly integrated into the database
- Proprietary to Oracle and difficult to port to other database systems
- Data manipulation is slightly faster in PL/SQL than in Java
- Easier to use than Java (depending on your background)
Java:
- An open standard, not proprietary to Oracle
- Incurs some data conversion overhead between the Database and Java type systems
- More difficult to use (depending on your background)

110. What are the built-in packages in Oracle?

There are more than 1000 Oracle built-in packages, like dbms_output, dbms_utility, dbms_pipe, and so on.

5. What is the difference between row migration and row chaining?
Row chaining: while inserting data, if the data of one row takes more than one block, then this row is stored in two blocks and the rows are chained.
Row migration: the data is stored in blocks which use PCTFREE 40% and PCTUSED 60% (normally); the 40% space is reserved for update and delete statements. A condition may arise where an update/delete statement takes more than PCTFREE; the row then takes space from another block, and this is called migration.

111. What is a Flashback query in Oracle9i?
Flashback is used to take your database back to an old state, like a system restore in Windows. No DDL and DML is allowed while the database is in the flashback condition, and the user should have execute permission on the dbms_flashback package.
For example, at 10.30 am, from the scott user:
   delete from emp;
   commit;
At 10.40 am I want all my data back in the emp table. Then:
declare
   cursor c1 is select * from emp;
   emp_cur emp%rowtype;
begin
   dbms_flashback.enable_at_time(sysdate - 15/1440);
   open c1;
   dbms_flashback.disable;
   loop
      fetch c1 into emp_cur;
      exit when c1%notfound;
      insert into emp values (emp_cur.empno, emp_cur.ename, emp_cur.job,
                              emp_cur.mgr, emp_cur.hiredate, emp_cur.sal,
                              emp_cur.comm, emp_cur.deptno);
   end loop;
   commit;
end;
/
select * from emp;
14 rows selected.

131. Can anyone tell me the difference between an instead-of trigger, a database trigger, and a schema trigger?
An INSTEAD OF trigger controls operations on a view, not a table. Instead-of triggers provide a transparent way of modifying views that can't be modified directly through SQL DML. They can be used to make non-updateable views updateable and to override the behaviour of views that are updateable. If we have created a view that is based on a join condition, then it is not possible to apply DML operations like insert, update and delete on that view; so what we can do is create an instead-of trigger and perform the DML operations on the view.
Database triggers fire whenever the database starts up or is shut down, whenever a user logs on or logs off, and whenever an Oracle error occurs. These triggers provide a means of tracking activity in the database.

132. Mention the differences between aggregate functions and analytical functions clearly, with examples.
Aggregate functions are sum(), count(), avg(), max(), min(), like:
   select sum(sal), count(*), avg(sal), max(sal), min(sal) from emp;
Analytic functions differ from aggregate functions. Some examples:

SELECT ename "Ename", deptno "Deptno", sal "Sal",
       SUM(sal) OVER (ORDER BY deptno, ename) "Running Total",
       SUM(sal) OVER (PARTITION BY deptno ORDER BY ename) "Dept Total",
       ROW_NUMBER() OVER (PARTITION BY deptno ORDER BY ename) "Seq"
FROM   emp
ORDER  BY deptno, ename;

SELECT *
FROM   (SELECT deptno, ename, sal,
               ROW_NUMBER() OVER (PARTITION BY deptno ORDER BY sal DESC) Top3
        FROM   emp)
WHERE  Top3 <= 3;

134. What are the advantages and disadvantages of packages?
Advantages: modularity, easier application design, information hiding, added functionality and better performance.
Disadvantages of packages:
1. More memory may be required on the Oracle database server when using Oracle PL/SQL packages, as the whole package is loaded into memory as soon as any object in the package is accessed.
2. Updating one of the functions/procedures will invalidate other objects which use different functions/procedures, since the whole package needs to be compiled.
3. We can't pass parameters to packages.

136. What is the difference between a database server and the data dictionary?
A database server is a server on which the instance of Oracle as a server runs; it is the collection of all the objects of Oracle. The data dictionary is the collection of information about all those objects, like tables, indexes, views and triggers in a database: when they were created, who created them, and so on.

141. What is a NOCOPY parameter? Where is it used?
Prior to Oracle 8i there were three types of parameter-passing options to procedures and functions:
- IN: parameters are passed by reference
- OUT: parameters are implemented as copy-out
- IN OUT: parameters are implemented as copy-in/copy-out
The technique of OUT and IN OUT parameters was designed to protect the original values of the parameters in case exceptions were raised, so that changes could be rolled back: Oracle copies the data from the parameter variable into the block and then copies it back to the variable after processing. However, because a copy of the parameter set was made, this method imposed significant CPU and memory overhead when the parameters were large data collections, for example PL/SQL table or VARRAY types. (That would otherwise put an extra burden on the server if the parameters are large collections.)
With the new NOCOPY option, OUT and IN OUT parameters are passed by reference, which avoids the copy overhead: a copy of the parameter set is not created, and the processing is done accessing data from the original variable. The trade-off is that, in case of an exception, a rollback cannot be performed and the original values of the parameters cannot be restored. NOCOPY is a hint given to the compiler, indicating that the parameter is passed as a reference and hence the actual value should not be copied into the block and back. Here is an example of using the NOCOPY parameter option:

TYPE Note IS RECORD (
   Title        VARCHAR2(15),
   Created_By   VARCHAR2(20),
   Created_When DATE,
   Memo         VARCHAR2(2000));
TYPE Notebook IS VARRAY(2000) OF Note;
CREATE OR REPLACE PROCEDURE Update_Notes (Customer_Notes IN OUT NOCOPY Notebook) IS
BEGIN
   ...
END;

For a better understanding of the NOCOPY parameter, I suggest you run the following code and see the result:

DECLARE
   n NUMBER := 10;
   PROCEDURE do_something (n1 IN            NUMBER,
                           n2 IN OUT        NUMBER,
                           n3 IN OUT NOCOPY NUMBER) IS
   BEGIN
      n2 := 20;
      DBMS_OUTPUT.PUT_LINE(n1);  -- prints 10
      n3 := 30;
      DBMS_OUTPUT.PUT_LINE(n1);  -- prints 30
   END;
BEGIN
   do_something(n, n, n);
   DBMS_OUTPUT.PUT_LINE(n);      -- prints 20
END;

138. What is a materialized view?
A materialized view is a database object that contains the results of a query. Materialized views can query tables, views and other materialized views. They are local copies of data located remotely, or are used to create summary tables based on aggregations of a table's data. Materialized views which store data based on remote tables are also known as snapshots. Collectively, the queried tables are called master tables (a replication term) or detail tables (a data warehouse term).

139. What is an atomic transaction?
An atomic transaction is a database transaction, or a hardware transaction, which either completely occurs or completely fails to occur. A prosaic example is pregnancy: you can't be "halfway pregnant", you either are or you aren't.

140. How do you get the 25th row of a table?
   select * from Emp where rownum < 26
   minus
   select * from Emp where rownum < 25;
or
   SELECT * FROM EMP A
   WHERE 25 = (SELECT COUNT(*) FROM EMP B WHERE A.EMPNO >= B.EMPNO);

How do you change the owner of a table?
The owner of a table is the schema name which holds the table. To change the owner, just recreate the table in the new schema and drop the previous table.

142. What is the difference between VARCHAR2(80) and VARCHAR2(80 BYTE)?
Historically, database columns which hold alphanumeric data have been defined using the number of bytes they store. This approach was fine while the number of bytes equated to the number of characters, i.e. when using single-byte character sets. With the increasing use of multibyte character sets to support globalized databases comes the problem of bytes no longer equating to characters. Suppose we had a requirement for a table with an id and a description column, where the description must hold up to a maximum of 20 characters. We define it as VARCHAR2(20), and everything works fine until we decide to make a multilingual version of our application, use the same table definition in a new instance with a multibyte character set, and try to fill the column with 20 two-byte characters. All of a sudden the column is being asked to store twice the data it was before, and we have a problem.
Oracle9i has solved this problem with the introduction of character and byte length semantics. When defining an alphanumeric column, it is now possible to specify the length in 3 different ways:
1. VARCHAR2(20)
2. VARCHAR2(20 BYTE)
3. VARCHAR2(20 CHAR)
Option 1 uses the default length semantics defined by the NLS_LENGTH_SEMANTICS parameter, which defaults to BYTE. Option 2 allows only the specified number of bytes to be stored in the column, regardless of how many characters this represents. Option 3 allows the specified number of characters to be stored in the column, regardless of the number of bytes this equates to.

144. How can I see the time of execution of a SQL statement?
In SQL*Plus:
   SET TIMING ON
displays the elapsed execution time of each statement. (SET TIME ON merely adds a clock to the prompt.)

145. Is a cursor a pointer or a reference?
A cursor is basically a pointer, as it is like an address of the virtual memory being used for storage related to the SQL query; this memory is made free after the values from it have been used.

146. Can we have two triggers with different names for a table, with the same triggering event? e.g.:
   create trigger trig1 after insert on tab1;
   create trigger trig2 after insert on tab1;
If yes, which trigger executes first?
Yes, we can. The triggers are fired on the basis of the timestamp of their creation in the data dictionary; the trigger with the latest timestamp is fired last.

147. What happens when a commit is given in the executable section and an error occurs afterwards?
The work committed before the error remains committed. When the exception is raised and goes unhandled, only the uncommitted work done after the COMMIT is rolled back.

148. How do you insert a music file into the database?
Use the LOB datatypes: they can be used to store blocks of unstructured data like graphic images, video, audio, etc.

151. What is the difference between strong and weak ref cursors?
A strong REF CURSOR type definition specifies a return type; a weak definition does not.
DECLARE
   TYPE EmpCurTyp IS REF CURSOR RETURN emp%ROWTYPE;  -- strong
   TYPE GenericCurTyp IS REF CURSOR;                 -- weak
In a strong cursor the structure is predetermined, so we cannot open it for a query having a structure other than emp%ROWTYPE. In a weak cursor the structure is not predetermined, so we can open it for a query with any structure. The strong ref cursor type is less error prone, because Oracle already knows what type you are going to return, as compared to the weak ref type.
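Returning to question 142 for a moment, a minimal sketch of the three length declarations side by side (the table name is invented for illustration):

CREATE TABLE semantics_demo (
   id           NUMBER,
   desc_default VARCHAR2(20),        -- follows NLS_LENGTH_SEMANTICS (BYTE by default)
   desc_byte    VARCHAR2(20 BYTE),   -- room for 20 bytes, however many characters that is
   desc_char    VARCHAR2(20 CHAR));  -- room for 20 characters, however many bytes that is

In a multibyte character set, desc_byte may hold as few as 10 two-byte characters, while desc_char always accommodates 20 characters.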

154. Is it possible to have the same name for a package and a procedure in that package?
Yes, it is possible to have the same name for a package and for a procedure in that package.

159. How can we select distinct rows (avoid duplicate rows) without using the DISTINCT command?
Using a self join, the duplicates can be identified:
   select a.column from tab a, tab b
   where a.column = b.column and a.rowid <> b.rowid;
The following query will return the first row for each unique id in the table; it could also be used as part of a delete statement to remove duplicates if needed:
   SELECT ID FROM func t1
   WHERE ROWID = (SELECT MIN(ROWID) FROM func WHERE ID = t1.ID);
Also, you can use a GROUP BY without a summary function:
   SELECT ID FROM func t1 GROUP BY id;

161. How do you set a table for read-only access?
Use SELECT ... FOR UPDATE. If you update or delete the records in the table this way, nobody else can update or delete the same records which you updated or deleted, because Oracle locks the data which you updated or deleted.

164. What is the difference between PL/SQL tables and arrays?
An array is a set of values of the same datatype, whereas PL/SQL tables can store values of different datatypes. Also, PL/SQL tables have no upper limit, whereas arrays have one.

168. What are the disadvantages of packages and triggers?
Disadvantages of packages:
1. You cannot reference remote packaged variables, directly or indirectly.
2. Inside a package you cannot reference host variables.
3. We are not able to grant access to an individual procedure inside a package.
Disadvantages of triggers:
1. Writing more code.

169. If you open a cursor that is already open, what will happen? If it is an error, what is the error?
If you reopen a cursor without closing it first, PL/SQL raises the predefined exception CURSOR_ALREADY_OPEN.

172. How do you disable a trigger for a particular table?
   alter trigger <trigger_name> disable;

173. What is PRAGMA RESTRICT_REFERENCES?
By using PRAGMA RESTRICT_REFERENCES we can assert different purity statuses for functions, such as WNDS (writes no database state), RNDS (reads no database state), WNPS (writes no package state) and RNPS (reads no package state).
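A minimal sketch of the pragma in a package specification (the package and function names are invented); here WNDS and WNPS assert that the function writes no database state and no package state, so it is safe to call from SQL:

CREATE OR REPLACE PACKAGE pure_pkg IS
   FUNCTION get_bonus (p_sal NUMBER) RETURN NUMBER;
   PRAGMA RESTRICT_REFERENCES (get_bonus, WNDS, WNPS);  -- purity assertion
END pure_pkg;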

170. Why do we use an INSTEAD OF trigger, and what is the basic structure of an INSTEAD OF trigger? Explain.
Conceptually, INSTEAD OF triggers are very simple. You write code that the Oracle server will execute when a program performs a DML operation on the view. Unlike a conventional BEFORE or AFTER trigger, an INSTEAD OF trigger takes the place of, rather than supplements, Oracle's usual DML behavior. (And in case you're wondering, you cannot use BEFORE/AFTER triggers on any type of view, even if you have defined an INSTEAD OF trigger on the view.)

CREATE OR REPLACE TRIGGER images_v_insert
INSTEAD OF INSERT ON images_v
FOR EACH ROW
BEGIN
   /* This will fail with DUP_VAL_ON_INDEX if the images table
   || already contains a record with the new image_id. */
   INSERT INTO images
   VALUES (:NEW.image_id, :NEW.file_name, :NEW.file_type, :NEW.bytes);

   IF :NEW.keywords IS NOT NULL THEN
      DECLARE
         /* Note: An apparent bug prevents the direct use of
         || :NEW.keywords.LAST; the workaround is to store :NEW.keywords
         || in a local variable (in this case keywords_holder). */
         keywords_holder Keyword_tab_t := :NEW.keywords;
      BEGIN
         FOR the_keyword IN 1 .. keywords_holder.LAST LOOP
            INSERT INTO keywords
            VALUES (:NEW.image_id, keywords_holder(the_keyword));
         END LOOP;
      END;
   END IF;
END;

Once we've created this INSTEAD OF trigger, we can insert a record into this object view (and hence into both underlying tables) quite easily using:

INSERT INTO images_v
VALUES (Image_t(41265, 'pigpic.jpg', 'JPG', 824,
        Keyword_tab_t('PIG', 'BOVINE', 'FARM ANIMAL')));

This statement causes the INSTEAD OF trigger to fire, and as long as the primary key value (image_id = 41265) does not already exist, the trigger will insert the data into the appropriate tables. Similarly, we can write additional triggers that handle updates and deletes; these triggers use the predictable clauses INSTEAD OF UPDATE and INSTEAD OF DELETE.

180. What is the difference between a database trigger and an application trigger?
Database triggers are backend triggers and fire when an event occurs at the database level (e.g. insert, update, delete), whereas application triggers are frontend triggers and fire when an event is taken at the application level (e.g. button pressed, new form instance).

185. Which type of binding does PL/SQL use?
It uses late (dynamic) binding, which is why we cannot use DDL statements directly in PL/SQL.

189. Compare EXISTS and IN usage, with advantages and disadvantages.
EXISTS only checks for the existence of records (true/false), stopping at the first match, whereas with IN each and every record is checked. Performance-wise, EXISTS is therefore usually better and faster than IN; use EXISTS whenever possible.

191. Why DUAL?
Because it is a dummy table.

DEC
19

How do you identify existing rows of data in the target table using lookup transformation?
Scenario: How do you identify existing rows of data in the target table using lookup transformation?
Solution: There are two ways to look up the target table to verify whether a row exists or not:
1. Use a connected dynamic cache lookup, and then check the value of the NewLookupRow output port to decide whether the incoming record already exists in the table / cache or not.
2. Use an unconnected lookup, call it from an Expression transformation, and check the lookup condition port value (Null / Not Null) to decide whether the incoming record already exists in the table or not.
The same existence check is sketched in plain SQL below.
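Outside Informatica, the insert-or-update decision this lookup drives can be expressed directly in SQL. A hypothetical sketch (the table and column names target_emp, source_emp, emp_id and ename are invented for illustration):

MERGE INTO target_emp t
USING source_emp s
ON (t.emp_id = s.emp_id)              -- the "lookup condition"
WHEN MATCHED THEN
   UPDATE SET t.ename = s.ename       -- row already exists: update
WHEN NOT MATCHED THEN
   INSERT (emp_id, ename)
   VALUES (s.emp_id, s.ename);        -- new row: insert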

Posted 19th December 2011 by Prafull Dangore
0

Add a comment

DEC
19

What is Load Manager?
Scenario: What is Load Manager?
Solution:
While running a workflow, the PowerCenter Server uses the Load Manager process and the Data Transformation Manager (DTM) process to run the workflow and carry out workflow tasks.

When the PowerCenter Server runs a workflow, the Load Manager performs the following tasks:
1. Locks the workflow and reads workflow properties.
2. Reads the parameter file and expands workflow variables.
3. Creates the workflow log file.
4. Runs workflow tasks.
5. Distributes sessions to worker servers.
6. Starts the DTM to run sessions.
7. Runs sessions from master servers.
8. Sends post-session email if the DTM terminates abnormally.

When the PowerCenter Server runs a session, the DTM performs the following tasks:
1. Fetches session and mapping metadata from the repository.
2. Creates and expands session variables.
3. Creates the session log file.
4. Validates session code pages if data code page validation is enabled; checks query conversions if data code page validation is disabled.
5. Verifies connection object permissions.
6. Runs pre-session shell commands.
7. Runs pre-session stored procedures and SQL.
8. Creates and runs mapping, reader, writer, and transformation threads to extract, transform, and load data.
9. Runs post-session stored procedures and SQL.
10. Runs post-session shell commands.
11. Sends post-session email.

Posted 19th December 2011 by Prafull Dangore
0

Add a comment

DEC
19

In which conditions can we not use a Joiner transformation (limitations of the Joiner transformation)?
Scenario: In which conditions can we not use a Joiner transformation (limitations of the Joiner transformation)?

Solution:
1. When our data comes through an Update Strategy transformation, or in other words after an Update Strategy, we cannot add a Joiner transformation.
2. We cannot connect a Sequence Generator transformation directly before the Joiner transformation.
Note also that the Joiner transformation does not match null values. For example, if both EMP_ID1 and EMP_ID2 from the example above contain a row with a null value, the PowerCenter Server does not consider them a match and does not join the two rows. To join rows with null values, you can replace the null input with default values and then join on the default values.

Posted 19th December 2011 by Prafull Dangore
0

Add a comment

DEC
19

How can U improve session performance in aggregator transformation?
Scenario: How can U improve session performance in aggregator transformation?
Solution:
You can use the following guidelines to optimize the performance of an Aggregator transformation:
1. Use sorted input to decrease the use of aggregate caches. Sorted input reduces the amount of data cached during the session and improves session performance. Use this option with the Sorter transformation to pass sorted data to the Aggregator transformation.
2. Limit connected input/output or output ports. Limit the number of connected input/output or output ports to reduce the amount of data the Aggregator transformation stores in the data cache.
3. Filter before aggregating. If you use a Filter transformation in the mapping, place the transformation before the Aggregator transformation to reduce unnecessary aggregation.

Posted 19th December 2011 by Prafull Dangore
0

Add a comment

DEC
19

How can you recognize whether or not the newly added rows in the source get inserted in the target?
Scenario: How can you recognize whether or not the newly added rows in the source get inserted in the target?
Solution:
In a Type 2 mapping we have three options to recognize the newly added rows:
- Version number
- Flag value
- Effective date range
If it is a Type 2 dimension, the above answer is fine; but if you want to get the info on all the insert statements and updates, you need to use the session log file, configured to verbose. You will get the complete set of data showing which record was inserted and which was not.

Posted 19th December 2011 by Prafull Dangore
0

Add a comment

DEC
19

What is Slowly Changing Dimensions (SCD)?

Slowly Changing Dimensions
Dimensions that change over time are called slowly changing dimensions. For instance, a product price changes over time; people change their names for some reason; country and state names may change over time. These are a few examples of slowly changing dimensions, since some changes happen to them over a period of time. Slowly changing dimensions are often categorized into three types, namely Type 1, Type 2 and Type 3:
1. SCD Type 1 (Slowly Changing Dimension): contains current data only.
2. SCD Type 2 (Slowly Changing Dimension): contains current data + complete historical data.
3. SCD Type 3 (Slowly Changing Dimension): contains current data + one kind of historical data.
The following section deals with how to capture and handle these changes over time.

The "Product" table mentioned below contains a product named Product1, with Product ID being the primary key. In the year 2004, the price of Product1 was $150, and over time Product1's price changes from $150 to $350.

Product price in 2004:
Product ID(PK)  Year  Product Name  Product Price
1               2004  Product1      $150

Type 1: Overwriting the old values.
In the year 2005, if the price of the product changes to $250, then the old values of the columns "Year" and "Product Price" have to be updated and replaced with the new values. In this Type 1, there is no way to find out the old value of the product "Product1" in the year 2004, since the table now contains only the new price and year information.

Product
Product ID(PK)  Year  Product Name  Product Price
1               2005  Product1      $250

Type 2: Creating an additional record.
In this Type 2, the old values are not replaced; instead, a new row containing the new values is added to the product table. So at any point of time, the difference between the old values and the new values can be retrieved and easily compared. This is very useful for reporting purposes.

Product
Product ID(PK)  Year  Product Name  Product Price
1               2004  Product1      $150
1               2005  Product1      $250

The problem with the above data structure is that "Product ID" cannot store duplicate values of Product1, since "Product ID" is the primary key. Also, the current data structure doesn't clearly specify the effective date and expiry date of Product1, i.e. when the change to its price happened. So it would be better to change the current data structure to overcome the above primary key violation:

Product
Product ID(PK)  Effective DateTime(PK)  Year  Product Name  Product Price  Expiry DateTime
1               01-01-2004 12.00AM      2004  Product1      $150           12-31-2004 11.59PM
1               01-01-2005 12.00AM      2005  Product1      $250

In the changed product table's data structure, "Product ID" and "Effective DateTime" are composite primary keys, so there is no violation of the primary key constraint. "Effective DateTime" and "Expiry DateTime" provide the information about the product's effective date and expiry date, which adds more clarity and enhances the scope of this table. The Type 2 approach may need additional space in the database, since an additional row has to be stored for every changed record; but since dimensions are not that big in the real world, the additional space is negligible.

Type 3: Creating new fields.
In this Type 3, the latest update to the changed values can be seen. The example mentioned below illustrates how to add new columns and keep track of the changes. From it, we are able to see the current price and the previous price of the product, Product1.

Product
Product ID(PK)  Current Year  Product Name  Current Product Price  Old Product Price  Old Year
1               2005          Product1      $250                   $150               2004

The problem with the Type 3 approach is that, over the years, if the product price changes continuously, the complete history may not be stored; only the latest change will be stored. For example, in the year 2006, if Product1's price changes to $350, then we would not be able to see the complete history of the 2004 prices, since the old values would have been overwritten with the 2005 product information.

Product
Product ID(PK)  Year  Product Name  Product Price  Old Product Price  Old Year
1               2006  Product1      $350           $250               2005
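A minimal SQL sketch of the Type 2 bookkeeping described above. The column names follow the example tables; treating a NULL expiry as "current row" is an assumption for illustration:

-- Expire the current version of the product...
UPDATE product
SET    expiry_datetime = TO_DATE('12-31-2004 11:59 PM', 'MM-DD-YYYY HH:MI AM')
WHERE  product_id = 1
AND    expiry_datetime IS NULL;   -- the open, current row (assumed convention)

-- ...and insert the new version as an additional row.
INSERT INTO product
   (product_id, effective_datetime, year, product_name, product_price)
VALUES
   (1, TO_DATE('01-01-2005 12:00 AM', 'MM-DD-YYYY HH:MI AM'), 2005, 'Product1', 250);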

Example: In order to store data over the years, many application designers in each branch have made their individual decisions as to how an application and database should be built. So source systems differ in naming conventions, variable measurements, encoding structures and the physical attributes of data. Consider a bank that has several branches in several countries, has millions of customers, and whose lines of business are savings and loans. The following example explains how the data is integrated from source systems to target systems.

Example of Source Data
System           Attribute Name             Column Name                Datatype      Values
Source System 1  Customer Application Date  CUSTOMER_APPLICATION_DATE  NUMERIC(8,0)  11012005
Source System 2  Customer Application Date  CUST_APPLICATION_DATE      DATE          11012005
Source System 3  Application Date           APPLICATION_DATE           DATE          01NOV2005

In the aforementioned example, the attribute names, column names, datatypes and values are entirely different from one source system to another. This inconsistency in data can be avoided by integrating the data into a data warehouse with good standards.

Example of Target Data (Data Warehouse)
Record     Attribute Name             Column Name                Datatype  Values
Record #1  Customer Application Date  CUSTOMER_APPLICATION_DATE  DATE      01112005
Record #2  Customer Application Date  CUSTOMER_APPLICATION_DATE  DATE      01112005
Record #3  Customer Application Date  CUSTOMER_APPLICATION_DATE  DATE      01112005

In the above example of target data, the attribute names, column names and datatypes are consistent throughout the target system. This is how data from various source systems is integrated and accurately stored into the data warehouse.

Posted 19th December 2011 by Prafull Dangore
0

Add a comment

DEC
19

What is Dimension Table?

Dimension Table
A dimension table is one that describes the business entities of an enterprise, represented as hierarchical, categorical information such as time, departments, locations and products. Dimension tables are sometimes called lookup or reference tables.

Location Dimension
In a relational data model, for normalization purposes, the country lookup, state lookup, county lookup and city lookups are not merged as a single table. In a dimensional data model (star schema), these tables would be merged as a single table called LOCATION DIMENSION for performance and data-slicing requirements. This location dimension helps to compare the sales in one region with another region. We may see a good sales profit in one region and a loss in another region. If it is a loss, the reasons for that may be a new competitor in that area, or a failure of our marketing strategy, etc.

Example of Location Dimension:

Country Lookup
Country Code  Country Name              DateTimeStamp
USA           United States Of America  1/1/2005 11:23:31 AM

State Lookup
State Code  State Name  DateTimeStamp
NY          New York    1/1/2005 11:23:31 AM
FL          Florida     1/1/2005 11:23:31 AM
CA          California  1/1/2005 11:23:31 AM
NJ          New Jersey  1/1/2005 11:23:31 AM

County Lookup
County Code  County Name  DateTimeStamp
NYSH         Shelby       1/1/2005 11:23:31 AM
FLJE         Jefferson    1/1/2005 11:23:31 AM

CAMO         Montgomery   1/1/2005 11:23:31 AM
NJHU         Hudson       1/1/2005 11:23:31 AM

City Lookup
City Code  City Name    DateTimeStamp
NYSHMA     Manhattan    1/1/2005 11:23:31 AM
FLJEPC     Panama City  1/1/2005 11:23:31 AM
CAMOSH     San Hose     1/1/2005 11:23:31 AM
NJHUJC     Jersey City  1/1/2005 11:23:31 AM

Location Dimension
Location Dimension Id  Country Name  State Name  County Name  City Name    DateTimeStamp
1                      USA           New York    Shelby       Manhattan    1/1/2005 11:23:31 AM
2                      USA           Florida     Jefferson    Panama City  1/1/2005 11:23:31 AM
3                      USA           California  Montgomery   San Hose     1/1/2005 11:23:31 AM
4                      USA           New Jersey  Hudson       Jersey City  1/1/2005 11:23:31 AM

Product Dimension
In a relational data model, for normalization purposes, the product category lookup, product sub-category lookup, product lookup and product feature lookups are not merged as a single table. In a dimensional data model (star schema), these tables would be merged as a single table called PRODUCT DIMENSION for performance and data-slicing requirements.

Example of Product Dimension: Figure 1.9

Product Category Lookup
Product Category Code  Product Category Name  DateTimeStamp
1                      Apparel                1/1/2005 11:23:31 AM
2                      Shoe                   1/1/2005 11:23:31 AM

Product Sub-Category Lookup
Product Sub-Category Code  Product Sub-Category Name  DateTimeStamp
11                         Shirt                      1/1/2005 11:23:31 AM
12                         Trouser                    1/1/2005 11:23:31 AM
13                         Casual                     1/1/2005 11:23:31 AM
14                         Formal                     1/1/2005 11:23:31 AM

Product Lookup
Product Code  Product Name  DateTimeStamp
1001          Van Heusen    1/1/2005 11:23:31 AM
1002          Arrow         1/1/2005 11:23:31 AM
1003          Nike          1/1/2005 11:23:31 AM
1004          Adidas        1/1/2005 11:23:31 AM

Product Feature Lookup
Product Feature Code  Product Feature Description  DateTimeStamp
10001                 Van-M                        1/1/2005 11:23:31 AM

10002                 Van-L                        1/1/2005 11:23:31 AM
10003                 Arr-XL                       1/1/2005 11:23:31 AM
10004                 Arr-XXL                      1/1/2005 11:23:31 AM
10005                 Nike-8                       1/1/2005 11:23:31 AM
10006                 Nike-9                       1/1/2005 11:23:31 AM
10007                 Adidas-10                    1/1/2005 11:23:31 AM
10008                 Adidas-11                    1/1/2005 11:23:31 AM

Product Dimension
Product Dimension Id  Product Category Name  Product Sub-Category Name  Product Name  Product Feature Desc  DateTimeStamp
100001                Apparel                Shirt                      Van Heusen    Van-M                 1/1/2005 11:23:31 AM
100002                Apparel                Shirt                      Van Heusen    Van-L                 1/1/2005 11:23:31 AM
100003                Apparel                Shirt                      Arrow         Arr-XL                1/1/2005 11:23:31 AM
100004                Apparel                Shirt                      Arrow         Arr-XXL               1/1/2005 11:23:31 AM
100005                Shoe                   Casual                     Nike          Nike-8                1/1/2005 11:23:31 AM
100006                Shoe                   Casual                     Nike          Nike-9                1/1/2005 11:23:31 AM
100007                Shoe                   Casual                     Adidas        Adidas-10             1/1/2005 11:23:31 AM
100008                Shoe                   Casual                     Adidas        Adidas-11             1/1/2005 11:23:31 AM

Organization Dimension
This dimension helps us to find the products sold or serviced within the organization by the employees. In any industry, we can calculate the sales on a region basis, branch basis and employee basis. Based on the performance, an organization can provide incentives to employees and subsidies to the branches to increase further sales.
In a relational data model, for normalization purposes, the corporate office lookup, region lookup, branch lookup and employee lookups are not merged as a single table. In a dimensional data model (star schema), these tables would be merged as a single table called ORGANIZATION DIMENSION for performance and slicing of data.

Example of Organization Dimension: Figure 1.10

Corporate Lookup
Corporate Code  Corporate Name  DateTimeStamp
CO              American Bank   1/1/2005 11:23:31 AM

Region Lookup
Region Code  Region Name  DateTimeStamp
SE           South East   1/1/2005 11:23:31 AM
MW           Mid West     1/1/2005 11:23:31 AM

Branch Lookup
Branch Code  Branch Name       DateTimeStamp
FLTM         Florida-Tampa     1/1/2005 11:23:31 AM
ILCH         Illinois-Chicago  1/1/2005 11:23:31 AM

Employee Lookup
Employee Code  Employee Name  DateTimeStamp
E1             Paul Young     1/1/2005 11:23:31 AM
E2             Chris Davis    1/1/2005 11:23:31 AM

Organization Dimension
Organization Dimension Id  Corporate Name  Region Name  Branch Name       Employee Name  DateTimeStamp
1                          American Bank   South East   Florida-Tampa     Paul Young     1/1/2005 11:23:31 AM
2                          American Bank   Mid West     Illinois-Chicago  Chris Davis    1/1/2005 11:23:31 AM

Time Dimension
In a relational data model, for normalization purposes, the year lookup, quarter lookup, month lookup and week lookups are not merged as a single table. In a dimensional data model (star schema), these tables would be merged as a single table called TIME DIMENSION for performance and slicing of data. This dimension helps to find the sales done on a daily, weekly, monthly and yearly basis. We can have a trend analysis by comparing this year's sales with the previous year's, or this week's sales with the previous week's.

Example of Time Dimension: Figure 1.11

Year Lookup
Year Id  Year Number  DateTimeStamp
1        2004         1/1/2005 11:23:31 AM
2        2005         1/1/2005 11:23:31 AM

Quarter Lookup
Quarter Number  Quarter Name  DateTimeStamp
1               Q1            1/1/2005 11:23:31 AM
2               Q2            1/1/2005 11:23:31 AM
3               Q3            1/1/2005 11:23:31 AM
4               Q4            1/1/2005 11:23:31 AM

Month Lookup
Month Number  Month Name  DateTimeStamp
1             January     1/1/2005 11:23:31 AM
2             February    1/1/2005 11:23:31 AM
3             March       1/1/2005 11:23:31 AM
4             April       1/1/2005 11:23:31 AM
5             May         1/1/2005 11:23:31 AM
6             June        1/1/2005 11:23:31 AM
7             July        1/1/2005 11:23:31 AM
8             August      1/1/2005 11:23:31 AM
9             September   1/1/2005 11:23:31 AM
10            October     1/1/2005 11:23:31 AM
11            November    1/1/2005 11:23:31 AM
12            December    1/1/2005 11:23:31 AM

Week Lookup
Week Number  Day of Week  DateTimeStamp
1            Sunday       1/1/2005 11:23:31 AM
1            Monday       1/1/2005 11:23:31 AM
1            Tuesday      1/1/2005 11:23:31 AM
1            Wednesday    1/1/2005 11:23:31 AM
1            Thursday     1/1/2005 11:23:31 AM
1            Friday       1/1/2005 11:23:31 AM
1            Saturday     1/1/2005 11:23:31 AM
2            Sunday       1/1/2005 11:23:31 AM
2            Monday       1/1/2005 11:23:31 AM
2            Tuesday      1/1/2005 11:23:31 AM

Posted 19th December 2011 by Prafull Dangore

DEC 19

What is Fact Table?

Fact Table
The centralized table in a star schema is called the FACT table. A fact table typically has two types of columns: those that contain facts (measures) and those that are foreign keys to dimension tables. The primary key of a fact table is usually a composite key that is made up of all of its foreign keys. A fact table might contain either detail level facts or facts that have been aggregated (fact tables that contain aggregated facts are often instead called summary tables). In the example fig 1.6, "Sales Dollar" is a fact (measure) and it can be added across several dimensions. Fact tables store different types of measures: additive, non additive and semi additive measures.

Measure Types
• Additive - Measures that can be added across all dimensions.
• Semi Additive - Measures that can be added across a few dimensions and not with others.
• Non Additive - Measures that cannot be added across dimensions.

In the real world, it is possible to have a fact table that contains no measures or facts. These tables are called Factless Fact tables.

Steps in designing a Fact Table
• Identify a business process for analysis (like sales).
• Identify measures or facts (sales dollar).
• Identify dimensions for facts (product dimension, location dimension, time dimension, organization dimension).
• List the columns that describe each dimension (region name, branch name, region name).
• Determine the lowest level of summary in a fact table (sales dollar for a product in a year within a location sold or serviced by an employee), as sketched below.
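A minimal DDL sketch of such a fact table, with hypothetical table and column names; the point to notice is the composite primary key made up of all the foreign keys:

    CREATE TABLE sales_fact (
        product_dim_id      NUMBER NOT NULL REFERENCES product_dimension,
        organization_dim_id NUMBER NOT NULL REFERENCES organization_dimension,
        time_dim_id         NUMBER NOT NULL REFERENCES time_dimension,
        sales_dollar        NUMBER,   -- additive measure
        account_balance     NUMBER,   -- semi additive: summable across accounts, not across time
        unit_price          NUMBER,   -- non additive
        -- the primary key is the composite of all foreign keys
        CONSTRAINT pk_sales_fact
            PRIMARY KEY (product_dim_id, organization_dim_id, time_dim_id)
    );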
Posted 19th December 2011 by Prafull Dangore

DEC 19

Steps in designing Star Schema

Steps in designing Star Schema
• Identify a business process for analysis (like sales).
• Identify measures or facts (sales dollar).
• Identify dimensions for facts (product dimension, location dimension, time dimension, organization dimension).
• List the columns that describe each dimension (region name, branch name, region name).
• Determine the lowest level of summary in a fact table (sales dollar).

Important aspects of Star Schema & Snow Flake Schema
• In a star schema every dimension will have a primary key.
• In a star schema, a dimension table will not have any parent table.
• Whereas in a snow flake schema, a dimension table will have one or more parent tables.
• Hierarchies for the dimensions are stored in the dimensional table itself in a star schema.
• Whereas hierarchies are broken into separate tables in a snow flake schema. These hierarchies help to drill down the data from the topmost hierarchy to the lowermost hierarchy.

Posted 19th December 2011 by Prafull Dangore

DEC 19

What is update strategy and what are the options for update strategy?

Scenario: What is update strategy and what are the options for update strategy?
Solution:
Informatica processes the source data row-by-row. By default every row is marked to be inserted into the target table. If a row has to be updated or inserted based on some logic, the Update Strategy transformation is used. The condition can be specified in the Update Strategy to mark the processed row for update or insert.

The following options are available for update strategy:
DD_INSERT: If this is used, the Update Strategy flags the row for insertion. The equivalent numeric value of DD_INSERT is 0.
DD_UPDATE: If this is used, the Update Strategy flags the row for update. The equivalent numeric value of DD_UPDATE is 1.
DD_DELETE: If this is used, the Update Strategy flags the row for deletion. The equivalent numeric value of DD_DELETE is 2.
DD_REJECT: If this is used, the Update Strategy flags the row for rejection. The equivalent numeric value of DD_REJECT is 3.
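As a minimal sketch of such a condition, assuming a hypothetical lookup output port LKP_EMPLOYEE_ID that is NULL when the row does not yet exist in the target, the Update Strategy expression could be:

    IIF( ISNULL(LKP_EMPLOYEE_ID), DD_INSERT, DD_UPDATE )

Rows the lookup cannot find are flagged DD_INSERT (0); the rest are flagged DD_UPDATE (1).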
Posted 19th December 2011 by Prafull Dangore

DEC 19

What is aggregator transformation? & what is Incremental Aggregation?

Scenario: What is aggregator transformation? & what is Incremental Aggregation?
Solution:
The Aggregator transformation allows performing aggregate calculations, such as averages and sums. Unlike the Expression transformation, the Aggregator transformation can only be used to perform calculations on groups. The Expression transformation permits calculations on a row-by-row basis only.

The Aggregator transformation contains group by ports that indicate how to group the data. While grouping the data, the Aggregator transformation outputs the last row of each group unless otherwise specified in the transformation properties. The various group by functions available in Informatica are: AVG, COUNT, FIRST, LAST, MAX, MEDIAN, MIN, PERCENTILE, STDDEV, SUM, VARIANCE.

Whenever a session is created for a mapping with an Aggregator transformation, the session option for Incremental Aggregation can be enabled. When PowerCenter performs incremental aggregation, it passes new source data through the mapping and uses historical cache data to perform new aggregation calculations incrementally.
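As a rough illustration (not from the original post), an Aggregator with a group by port on DEPT_ID and aggregate output ports behaves like the following SQL, where the table and column names are hypothetical:

    SELECT dept_id,
           SUM(salary) AS total_salary,   -- aggregate output port
           COUNT(*)    AS emp_count       -- another aggregate output port
    FROM   employees_src
    GROUP  BY dept_id;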
Posted 19th December 2011 by Prafull Dangore

DEC 19

What are the different types of locks?

Scenario: What are the different types of locks?
Solution: There are five kinds of locks on repository objects:
1. Read lock => Created when you open a repository object in a folder for which you do not have write permission. Also created when you open an object with an existing write lock.
2. Write lock => Created when you create or edit a repository object in a folder for which you have write permission.
3. Execute lock => Created when you start a session or batch, or when the Informatica Server starts a scheduled session or batch.
4. Fetch lock => Created when the repository reads information about repository objects from the database.
5. Save lock => Created when you save information to the repository.

Posted 19th December 2011 by Prafull Dangore

DEC 19

What is Shortcuts, Sessions, Batches, mappings, mapplets, Worklet & workflow?

Scenario: What is Shortcuts, Sessions, Batches, mappings, mapplets, Worklet & workflow?
Solution:
Shortcuts
We can create shortcuts to objects in shared folders. We use a shortcut as if it were the actual object, and when we make a change to the original object, all shortcuts inherit the change. Shortcuts to folders in the same repository are known as local shortcuts. Shortcuts to the global repository are called global shortcuts. We use the Designer to create shortcuts. Shortcuts provide the easiest way to reuse objects.

Mappings
A mapping specifies how to move and transform data from sources to targets. Mappings include source and target definitions and transformations. Transformations describe how the Informatica Server transforms data. Mappings can also include shortcuts, reusable transformations, and mapplets. Use the Mapping Designer tool in the Designer to create mappings.

Mapplets
You can design a mapplet to contain sets of transformation logic to be reused in multiple mappings within a folder, a repository, or a domain. Rather than recreate the same set of transformations each time, you can create a mapplet containing the transformations, then add instances of the mapplet to individual mappings. Use the Mapplet Designer tool in the Designer to create mapplets.

Sessions and Batches
Sessions and batches store information about how and when the Informatica Server moves data through mappings. A session is a set of instructions to move data from sources to targets. You create a session for each mapping you want to run. You can group several sessions together in a batch. Use the Server Manager to create sessions and batches.

Worklet
A worklet is an object that represents a set of tasks.

Workflow
A workflow is a set of instructions that tells the Informatica server how to execute the tasks.

Posted 19th December 2011 by Prafull Dangore
. The domain supports the administration of the distributed services.x Architecture Informetica PowerCenter 8.19 Informetica PowerCenter 8. A domain is a collection of nodes and services that you can group in folders based on administration ownership.x Architecture The PowerCenter domain is the fundamental administrative unit in PowerCenter.

Services for the domain include the Service Manager and a set of application services: Service Manager. DEC 16 What is a Star Schema? Which Schema is preferable in performance oriented way? Why? Scenario: What is a Star Schema? Which Schema is preferable in performance oriented way? Why? . The application services that runs on a node depend on the way you configure the services. Services that represent PowerCenter server-based functionality. authorization. For more information about the Service Manager. It runs the application services and performs domain functions on each node in the domain. Services and processes run on nodes in a domain. Application services. One node in the domain acts as a gateway to receive service requests from clients and route them to the appropriate service and node. Posted 19th December 2011 by Prafull Dangore 0 Add a comment 42. A service that manages all domain operations. Some domain functions include authentication. The availability of a service or process on a node depends on how you configure the service and the node. such as the Repository Service and the Integration Service. and logging. see Service Manager.A node is the logical representation of a machine in a domain.

Solution:
A Star Schema is composed of two kinds of tables: one Fact Table and multiple Dimension Tables. The Fact Table contains the actual transactions or values that are being analyzed; Dimension Tables contain descriptive information about those transactions or values. The center of the star schema consists of a large fact table, and it points towards the dimension tables. It is called a star schema because the entity-relationship diagram between dimensions and fact tables resembles a star, where one fact table is connected to multiple dimensions. In star schemas, Dimension Tables are denormalized tables and Fact Tables are highly normalized. The advantages of the star schema are slicing down, performance increase and easy understanding of data.

The Star Schema is preferable in a performance-oriented design: a smaller number of joins results in better performance, and because Dimension Tables are denormalized, there will be no need to go for joins all the time. A typical star-join query is sketched below.

Steps in designing Star Schema
• Identify a business process for analysis (like sales).
• Identify measures or facts (sales dollar).
• Identify dimensions for facts (product dimension, location dimension, time dimension, organization dimension).
• List the columns that describe each dimension (region name, branch name, region name).
• Determine the lowest level of summary in a fact table (sales dollar).

Important aspects of Star Schema & Snow Flake Schema
• In a star schema every dimension will have a primary key.
• In a star schema, a dimension table will not have any parent table.
• Whereas in a snow flake schema, a dimension table will have one or more parent tables.
• Hierarchies for the dimensions are stored in the dimensional table itself in a star schema.
• Whereas hierarchies are broken into separate tables in a snow flake schema. These hierarchies help to drill down the data from the topmost hierarchy to the lowermost hierarchy.
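A minimal star-join query sketch, reusing the hypothetical table names from the earlier examples: the fact table is joined to each dimension on its surrogate key, and the additive measure is summed by region and month.

    SELECT o.region_name,
           t.month_name,
           SUM(f.sales_dollar) AS total_sales
    FROM   sales_fact             f,
           organization_dimension o,
           time_dimension         t
    WHERE  f.organization_dim_id = o.organization_dim_id
    AND    f.time_dim_id         = t.time_dim_id
    GROUP  BY o.region_name, t.month_name;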
Posted 16th December 2011 by Prafull Dangore

DEC 16

Informatica Case study

Scenario: Data has to be moved from a legacy application to Siebel staging tables (EIM). The client will provide the data in a delimited flat file. This file contains Contact records which need to be loaded into the EIM_CONTACT table.

Some facts
A contact can be uniquely identified by concatenating the First name with the Last name and Zip code.

Requirements
The load should have a batch number of 100. If the record count exceeds 500, increment the batch number by 5. Since the flat file may have duplicates due to alphabet case issues, it has been decided that all user keys on the table should be stored in uppercase. For uniformity's sake, the first name and last name should also be loaded in uppercase.

Known issues
A potential problem with the load could be the telephone number, which is currently stored as a string (XXX-YYY-ZZZZ format). We need to convert this into (XXX) YYYYYYY format, where XXX is the area code in brackets followed by a space and the 7 digit telephone number. Any extensions should be dropped. (See the expression sketch after the list below.)

Error logging
As per the client's IT standards, it is expected that any data migration run would provide an automated high level report (a flat file report is acceptable) which will give information on how many records were read, loaded successfully and failed due to errors.

Output expected from case study:
1. Informatica mapping from flat file to EIM_CONTACT table
2. Log file created for error logging
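A minimal expression-transformation sketch for the uppercase key and the phone conversion, assuming hypothetical input ports FIRST_NAME, LAST_NAME, ZIP_CODE and PHONE_NUMBER:

    CONTACT_KEY:
        UPPER(FIRST_NAME) || UPPER(LAST_NAME) || ZIP_CODE

    PHONE_FMT:
        '(' || SUBSTR(PHONE_NUMBER, 1, 3) || ') '
            || SUBSTR(PHONE_NUMBER, 5, 3)
            || SUBSTR(PHONE_NUMBER, 9, 4)

Taking only characters 1-3, 5-7 and 9-12 turns XXX-YYY-ZZZZ into (XXX) YYYYYYY and silently drops any extension after the twelfth character.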
Posted 16th December 2011 by Prafull Dangore

DEC 16

Informatica Training Effectiveness Assessment

Trainee:
Trainer:
Date of training:

1. Informatica is an ETL tool where ETL stands for
a. Extract – Transform – Load
b. Evaluate – Transform – Load
c. Extract – Test – Load
d. Evaluate – Test – Load

2. Informatica allows for the following:
a. One source – multiple targets to be loaded within the same mapping
b. Multiple sources – multiple targets to be loaded within the same mapping
c. Multiple sources – single target to be loaded within the same mapping
d. Multiple sources – multiple targets to be loaded provided mapplets are used within the mapping

3. The ____ manages the connections to the repository from the Informatica client application
a. Repository Server
b. Informatica Server
c. Informatica Repository Manager
d. Both a & b

4. During the development phase, it's best to use what type of tracing level to debug errors
a. Terse tracing
b. Verbose tracing
c. Verbose data tracing
d. Normal tracing

5. During Informatica installation, what is the installation sequence?
a. Repository Server, Informatica Server, Informatica Client
b. Informatica Client, Informatica Server, Repository Server
c. Informatica Server, Repository Server, Informatica Client
d. Either of the above is fine; however, to create the repository we need the Informatica client installed and the repository server process should be running

6. There is a requirement to concatenate the first name and last name from a flat file and use this concatenated value at 2 locations in the target table. The best way to achieve this functionality is by using the
a. Expression transformation
b. Filter transformation
c. Aggregator transformation
d. Character transformation

7. The workflow monitor does not allow the user to edit workflows.
a. True
b. False

8. There is a requirement to increment a batch number by one for every 5000 records that are loaded. The best way to achieve this is:
a. Use a Mapping parameter in the session
b. Use a Mapping variable in the session
c. Store the batch information in the workflow manager
d. Write code in a transformation to update values as required

9. There is a requirement to reuse some complex logic across 3 mappings. The best way to achieve this is:
a. Create a mapplet to encapsulate the reusable functionality and call this in the 3 mappings
b. Create a worklet and reuse this at the session level during execution of the mapping
c. Cut and paste the code across the 3 mappings
d. Keep this functionality as a separate mapping and call this mapping in 3 different mappings – this would make the code modular and reusable

10. You imported a delimited flat file "ABC.TXT" from your workstation into the Source Qualifier in the Informatica client. You then proceeded with developing a mapping and validated it for correctness using the "Validate" function. You then set it up for execution in the workflow manager. When you execute the mapping, you get an error stating that the file was not found. The most probable cause of this error is:
a. Your mapping is not correct and the file is not being parsed correctly by the source qualifier
b. The file cannot be loaded from your workstation; it has to be on the server
c. Informatica did not have access to the NT directory on your workstation where the file is stored
d. You forgot to mention the location of the file in the workflow properties and hence the error

11. Various administrative functions such as folder creation and user access control are done using:
a. Informatica Administration console
b. Repository Manager
c. Informatica Server
d. Repository Server
12. You created a mapping a few months back which is now invalid because the database schema underwent updates in the form of new column extensions. In order to fix the problem, you would:
a. Re-import the table definitions from the database
b. Make the updates to the table structure manually in the mapping
c. Informatica detects updates to table structures automatically. All you have to do is click on the "Validate" option for the mapping
d. None of the above. The mapping has to be scrapped and a new one needs to be created

13. The parameter file is used to store the following information
a. Workflow parameters, session parameters, mapping parameters and variables
b. Workflow variables, session variables, mapping variables
c. Mapping parameters, session constants, workflow constants

14. The Gantt chart view in Informatica is useful for:
a. Tracking dependencies for sessions and mappings
b. Scheduling workflows
c. Viewing progress of workflows and viewing the overall schedule
d. Planning start and end dates / times for each workflow run

15. When using the debugger function, you can stop execution at the following:
a. Errors or breakpoints
b. Errors only
c. Breakpoints only
d. First breakpoint after the error occurs
16. There is a requirement to selectively update or insert values in the target table based on the value of a field in the source table. This can be achieved using:
a. Update Strategy transformation
b. Aggregator transformation
c. Router transformation
d. Use the Expression transformation to write code for this logic

17. A mapping can contain more than one source qualifier – one for each source that is imported.
a. True
b. False

18. Which of the following sentences are accurate?
a. Power Channels are used to improve data migration across WAN / LAN networks
b. Power Channels are adapters that Informatica provides for various ERP / CRM packages
c. Power Connect is used to improve data migration across WAN / LAN networks
d. None of the above

19. To create a valid mapping in Informatica, at a minimum, the following entities are required:
a. Source, Source Qualifier, Transformation, Target
b. Source Qualifier, Transformation, Target
c. Source and Target
d. Source, Transformation, Target
20. When one imports a relational database table using the Source Analyzer, it always creates the following in the mapping:
a. An instance of the table with a source qualifier with a one-to-one mapping for each field
b. Source sorter with one-to-one mapping for each field
c. None of the above

Name:
Score:
Pass / Fail:

Ans: 1. a  2. b  3. a  4. c  5. a  6. a  7. a  8. b  9. a  10. b  11. b  12. a,b  13. a  14. c  15. a  16. a  17. b  18. a  19. a  20. a
Posted 16th December 2011 by Prafull Dangore
DEC

16

What is the difference between a mapping Parameter, a SESSION Parameter, and database connection session parameters? Is it possible to create all 3 parameters at a time? If possible, which one will fire FIRST?
Scenario: What is the difference between a mapping Parameter, a SESSION Parameter, and database connection session parameters? Is it possible to create all 3 parameters at a time? If possible, which one will fire FIRST?
Solution: We can pass all three types of parameters by using a parameter file; we can declare all of them in one parameter file. A mapping parameter is set at the mapping level for values that do not change from session to session, for example tax rates. A session parameter is set at the session level for values that can change from session to session, such as database connections for DEV, QA and PRD environments.

The database connection session parameters can be created for all input fields of connection objects, for example username, password, etc. It is possible to create all three types of parameters at a time, and the order of precedence is workflow, then session, then mapping (wf/s/m).
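A minimal parameter file sketch with all three kinds of entries in one file; the folder, workflow, session, connection and parameter names are hypothetical:

    [MyFolder.WF:wf_load_contacts.ST:s_m_load_contacts]
    $DBConnection_Source=DEV_ORA_SRC
    $DBConnection_Target=DEV_ORA_TGT
    $InputFile1=/infa_shared/SrcFiles/contacts.dat
    $$TAX_RATE=0.125

Here $DBConnection_* and $InputFile1 are session parameters, $$TAX_RATE is a mapping parameter, and the bracketed header scopes all of the values to one session of one workflow.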

Posted 16th December 2011 by Prafull Dangore
DEC

16

What is parameter file? & what is the difference between mapping level and session level variables?
Scenario: What is a parameter file? & what is the difference between mapping level and session level variables?
Solution: A parameter file supplies the values to session level variables and mapping level variables.

Variables are of two types:
• Session level variables
• Mapping level variables

Session level variables are of four types:
• $DBConnection_Source
• $DBConnection_Target
• $InputFile
• $OutputFile

Mapping level variables are of two types:
• Variable
• Parameter

What is the difference between mapping level and session level variables? A mapping level variable always starts with $$, whereas a session level variable starts with a single $.
Posted 16th December 2011 by Prafull Dangore
DEC 16

Differences between dynamic lookup cache and static lookup cache?

Scenario: Differences between dynamic lookup cache and static lookup cache?
Solution:

Dynamic Lookup Cache
• In a dynamic lookup, the cache memory gets refreshed as soon as a record is inserted or updated/deleted in the lookup table.
• When we configure a lookup transformation to use a dynamic lookup cache, you can only use the equality operator in the lookup condition, and the NewLookupRow port is enabled automatically.
• The best example of where we need a dynamic cache: suppose the first record and the last record of the source are the same contact, but there is a change in the address. What the Informatica mapping has to do here is insert the first record and update the target with the last record.

Static Lookup Cache
• In a static lookup, the cache memory does not get refreshed even though records are inserted or updated/deleted in the lookup table; it will refresh only in the next session run. It is the default cache.
• If we use a static lookup in the scenario above, the first record will go to the lookup and check the lookup cache; based on the condition it will not find a match, so it will return a null value, and the router will send that record to the insert flow. But this record is still not available in the cache memory, so when the last record comes to the lookup it will again not find a match and return a null value, and it will again go to the insert flow through the router; but it is supposed to go to the update flow, because the cache didn't get refreshed when the first record was inserted into the target table.

Posted 16th December 2011 by Prafull Dangore

DEC 16

What is Worklet?

Scenario: What is Worklet?
Solution: A worklet is a set of reusable sessions. We cannot run a worklet without a workflow.

If we want to run 2 workflows one after another:
• If both workflows exist in the same folder, we can create 2 worklets rather than creating 2 workflows. Finally we can call these 2 worklets in one workflow; there we can set the dependency.
• If the workflows exist in different folders or repositories, then we cannot create a worklet. We can set the dependency between these two workflows using a shell script as one approach; the other approach is event wait and event raise.

Posted 16th December 2011 by Prafull Dangore

DEC 16

What is the difference between joiner and lookup?

Scenario: What is the difference between joiner and lookup?
Solution:

Joiner
• In a joiner, on multiple matches it will return all matching records.
• In a joiner we cannot configure it to use a persistent cache, shared cache, uncached or dynamic cache.
• We can perform an outer join in a joiner transformation.
• We cannot use relational operators in a joiner transformation (i.e. <, >, <= and so on).
• We cannot override the query in a joiner.

Lookup
• In a lookup, on multiple matches it will return either the first record or the last record, or any value, or an error value.
• In a lookup we can configure it to use a persistent cache, shared cache, uncached and dynamic cache.
• We cannot perform an outer join in a lookup transformation.
• We can use relational operators in a lookup.
• We can override the query in a lookup to fetch the data from multiple tables, as sketched below.
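For that last point, a minimal Lookup SQL Override sketch that fetches data from two tables, which a Joiner's generated query cannot be overridden to do; the tables, columns and port aliases are hypothetical:

    SELECT E.EMPLOYEE_ID AS EMPLOYEE_ID,
           D.DEPT_NAME   AS DEPT_NAME
    FROM   EMPLOYEES   E,
           DEPARTMENTS D
    WHERE  E.DEPT_ID = D.DEPT_ID

Each selected column is aliased to the corresponding lookup port name so PowerCenter can map the result set back to the transformation's ports.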
Posted 16th December 2011 by Prafull Dangore

DEC 15

Syllabus for Informatica Professional Certification

Skill Set Inventory
Informatica Professional Certification Examination S: PowerCenter 8 Mapping Design (includes Informatica PowerCenter 8.1.1)

The PowerCenter 8 Mapping Design examination is composed of the fourteen sections listed below. In order to ensure that you are prepared for the test, review the subtopics associated with each section. The examination is designed to test for "expert level" knowledge. Informatica strongly urges you to attain a complete understanding of these topics before you attempt to take the examination. Hands-on experience with the software is the best way to gain this understanding. The Informatica documentation is an excellent source of information on the material that will be covered in the examination. If you are thoroughly knowledgeable in the areas mentioned in this Skill Set Inventory, you will do well on the examination.

1. Designer configuration
A. Understand the meaning of each of the Designer configuration options.
B. Be familiar with the rules for using shared and non-shared folders.
C. Know what Designer options can be configured separately for each client machine.
D. Be familiar with the Designer toolbar functions, such as Find.
2. Source and target definitions
A. Know the types of source and target definitions PowerCenter supports.
B. Understand how the repository stores referential integrity.
C. Know how to edit flat file definitions at any time.
D. Understand how editing source and target definitions affects associated objects such as mappings and mapplets.
E. Understand the rules and guidelines of overriding target types.
F. Know how to determine if a session is considered to have heterogeneous targets.

3. Transformation ports
A. Know the rules for linking transformation ports.
B. Be familiar with the types of data operations that can be performed at the port level.
C. Know what types of transformation ports are supported and the uses for each.
D. Know the rules for using and converting the PowerCenter datatypes.
E. Know the rules regarding connecting transformations to other transformations.

4. Validation
A. Know the rules for mapping and mapplet validation.
B. Know all the possible reasons why an expression may be invalid.

5. Transformation language
A. Be familiar with all transformation language functions and key words.
B. Understand how to use strings correctly in PowerCenter expressions.
C. Know how the Integration Service evaluates expressions.
D. Be able to predict the output or result of a given expression.
6. Lookup transformation
A. Know the rules and guidelines for using connected and unconnected Lookup transformations.
B. Be familiar with the meaning of all Lookup transformation properties.
C. Know what types of Lookup transformations are supported under various configurations.
D. Know how the Integration Service processes a dynamic lookup cache.
E. Know the ways a Lookup transformation may cause a session to fail.

7. Aggregator transformation
A. Know how to use PowerCenter aggregate functions.
B. Understand how to use a variable port in an Aggregator transformation.
C. Know the rules associated with defining and using aggregate caches.
D. Be able to predict the output of a given Aggregator transformation.

8. Joiner transformation
A. Know how to create and use Joiner transformations.
B. Understand the supported join types and options available for controlling the join.
C. Know how to configure a Joiner transformation for sorted input.

9. Sorter and Sequence Generator transformations
A. Know how the Integration Service processes data at a Sorter transformation.
B. Know the rules and guidelines for using the Sorter transformation.
C. Understand how the Sorter transformation uses hardware resources.
D. Understand the meaning and use of the Distinct Output Rows Sorter transformation property.
E. Understand the difference in the ports used in the Sequence Generator transformation and how each can be used.

10. Update Strategy transformation
A. Be familiar with the Update Strategy transformation properties and options.
B. Know what can happen to a given row for each different type of row operation.
C. Know how to use an Update Strategy transformation in conjunction with the session properties.
D. Understand how an Update Strategy transformation affects downstream transformations.

11. Source Qualifier transformation
A. Know how the default query is generated and the rules for modifying it.
B. Understand how to use the Source Qualifier transformation to perform various types of joins.
C. Understand how the Source Qualifier transformation handles datatypes.
12. Mapplets and reusable logic
A. Know the rules regarding active and passive mapplets.
B. Be familiar with the rules and guidelines regarding mapplet input and output.
C. Know how to use mapplet Output transformations and output groups.
D. Know the rules and guidelines for copying parts of a mapping.

13. Filter and Router transformations
A. Understand how to create and use Router and Filter Transformations.

14. Data preview
A. Know the rules and guidelines for previewing data using the PowerCenter Client.
B. Know the connectivity requirements and options for previewing data using the PowerCenter Client.

Skill Set Inventory
Informatica Professional Certification Examination R: PowerCenter 8 Architecture and Administration (includes Informatica PowerCenter 8.1.1)

The PowerCenter 8 Architecture and Administration examination is composed of the twelve sections listed below. In order to ensure that you are prepared for the test, review the subtopics associated with each section. The examination is designed to test for "expert level" knowledge. Informatica strongly urges you to attain a complete understanding of these topics before you attempt to take the examination. Hands-on experience with the software is the best way to gain this understanding. The Informatica documentation is an excellent source of information on the material that will be covered in the examination. If you are thoroughly knowledgeable in the areas mentioned in this Skill Set Inventory, you will do well on the examination.

1. Platform components and Service Architecture
A. Be able to define all object types and properties used by the client and service tools.
B. Know what operations can be performed with each client tool (Administration Console, Designer, Repository Manager, Workflow Manager, Workflow Monitor).
C. Know the purpose and uses for each of the windows in the client tools (Output window, Navigator window, Details window, Task View, Gantt Chart View, etc).
D. Know the purpose and uses for each of the tabs and folders in the PowerCenter Administration Console.
E. Be able to specify which components are necessary to perform various development and maintenance operations.

2. Nomenclature
A. Know the meaning of the terms used to describe development and maintenance operations.
B. Understand the relationships between all PowerCenter object types.

3. Installation
A. Understand the basic procedure for installing the client and service software.
B. Know what non-Informatica hardware and software is required for installation.
C. Know which passwords and other key information are needed to install and connect new client software to a service environment.
D. Know which components are needed to perform a repository upgrade.
E. Be familiar with network related requirements and limitations.

4. Connectivity
A. Know how each client and service component communicates with relational databases.
B. Be familiar with the connectivity options that are available for the different tools.
C. Know the requirements for using various types of ODBC drivers with the client tools.
D. Understand how the client and service tools access flat files, COBOL files, and XML Files.
E. Know the differences between client and service connectivity.

5. Repository Service
A. Be familiar with the sequence of events involving starting the Repository Service.
B. Be familiar with the properties of the Repository Service and the Integration Service.
C. Know which repository operations can be performed from the command line.
D. Know how local and global repositories interact.
E. Be familiar with the data movement mode options.

6. Security
A. Know the basic steps for creating and configuring application users.
B. Be familiar with the security permissions for application users.
C. Be familiar with the meaning of the various user types for an Informatica system.
D. Understand how user security affects folder operations.

7. Object sharing
A. Understand the differences between copies and shortcuts.
B. Know which object properties are inherited in shortcuts.
C. Know which tools are used to create and modify all objects.
D. Know the rules associated with transferring and sharing objects between folders.
E. Know the rules associated with transferring and sharing objects between repositories.

8. Repository organization and migration
A. Understand the various options for organizing a repository.
B. Know the capabilities and limitations of folders and other repository objects.
C. Be familiar with how a repository stores information about its own properties.
D. Know what type of information is stored in the repository.
E. Be familiar with metadata extensions.
F. Understand the purpose and relationships between the different types of code pages.

9. Database connections
A. Understand how and why the client tools use database connectivity.
B. Know the meaning of all database connection properties.
C. Know the differences between using native and ODBC database connections in the Integration Service.
D. Understand how and why target rows may be rejected for loading.
E. Understand how to work with reject files.

10. Workflow Manager configuration
A. Be able to identify which interface features in the Workflow Manager are user configurable.
B. Know what privileges and permissions are needed to perform various operations with the Workflow Manager.
C. Be familiar with database, external loader, and FTP configuration using the Workflow Manager.

11. Workflow properties
A. Be familiar with all user-configurable workflow properties.
B. Know what permissions are required to make all possible changes to workflow properties.
C. Know the rules for linking tasks within a workflow.
D. Be familiar with the rules associated with workflow links.
E. Be familiar with the properties and rules of all types of workflow tasks.
F. Understand how tasks behave when run outside of a workflow.
G. Know how to work with reusable workflow schedules.
H. Know how to use a workflow to read a parameter file.
I. Know which mapping properties can be overridden at the session level.

12. Running and monitoring workflows
A. Know what types of privileges and permissions are needed to run and schedule workflows.
B. Understand how to stop or abort a workflow or session task.
C. Know how to use the Workflow Monitor to quickly determine the status of any workflow or task.

Workflow and task errors
A. Know the reasons why a workflow may fail and how these reasons relate to the workflow properties.

Skill Set Inventory
Informatica Professional Certification Examination U: PowerCenter 8 Advanced Mapping Design (includes Informatica PowerCenter 8.1.1)

The PowerCenter 8 Advanced Mapping Design examination is composed of the sections listed below. The examination is designed to test for "expert level" knowledge. Informatica strongly urges you to attain a complete understanding of these topics before you attempt to take the examination. Hands-on experience with the software is the best way to gain this understanding. The exams are intended for candidates who have acquired their knowledge through hands-on experience. All candidates should be aware that Informatica doesn't recommend preparing for certification exams only by studying the software documentation. This Skill Set Inventory is intended to help you ensure that there are no gaps in your knowledge; if you are thoroughly knowledgeable in the areas mentioned in this Skill Set Inventory, there will not be any surprises when you take the examination.

1. Datatype formats and conversions
A. Understand the date/time formats available in the transformation language.
B. Be familiar with how transformation functions handle null values.
C. Know the valid input datatypes for the various conversion functions.
D. Know how transformation functions behave when given incorrectly formatted arguments.
E. Know how to extract a desired subset of data from a given input (the hour from a date/time value, for example).

2. The Debugger
A. Be familiar with the procedure to run a debug session.
B. Be familiar with the options available while using the Debugger.
C. Know the rules for working with breakpoints.
D. Know how Debugger session properties and breakpoints can be saved.
E. Know how to test expressions in a debug session.

3. XML sources and targets
A. Be familiar with the procedures and methods involved in defining an XML source definition.
B. Understand how the Designer validates XML sources.
C. Know how to edit existing XML definitions.
D. Know how to define and use an XML target definition.
E. Know the limitations associated with using XML targets.
F. Understand how XML definitions are related to code pages.

4. Lookup transformation caching and performance
A. Know the difference between static and dynamic lookup caches.
B. Know the advantages and disadvantages of dynamic lookup caches.
C. Know how to use a dynamic Lookup transformation in a mapping.
D. Be familiar with the rules regarding Lookup SQL overrides.
E. Know how to improve Lookup transformation performance.

5. Normalizer transformation
A. Be familiar with the possible uses of the Normalizer transformation.
B. Understand how to read a COBOL data source in a mapping.
C. Know how the OCCURS and REDEFINES COBOL keywords affect the Normalizer transformation.
D. Be familiar with the rules regarding reusable Normalizer transformations.

6. Transaction control
A. Understand how the Transaction Control transformation works and the purpose of each related variable.
B. Know how to create and use a Transaction Control transformation in a mapping.
C. Know the difference between effective and ineffective Transaction Control transformations and what makes them effective or ineffective.
D. Understand the meaning and purpose of a transaction control unit.
E. Know the rules and guidelines for using Transaction Control transformations in a mapping.

7. User-defined functions
A. Know how to create user-defined functions.
B. Know how to create expressions with user-defined functions.
C. Understand the different properties for user-defined functions.
D. Know the scope of user-defined functions.
E. Know how to use and manage user-defined functions.

8. Custom and Union transformations
A. Understand how the Custom transformation works (modes, input and output groups, etc).
B. Understand the rules and guidelines of the Custom and Union transformations.
C. Know the purpose of the Union transformation.

9. Mapping parameters and variables
A. Understand the differences between mapping parameters and variables.
B. Know how to create and use mapping parameters and variables.
C. Understand what affects the value of a mapping variable.
D. Understand how to use the property mapping variable aggregation type.
E. Be familiar with the rules affecting parameters used with mapplets and reusable transformations.
F. Know the parameter order of precedence and scope.
10. Advanced expressions
A. Know how to use expressions to set variables.
B. Understand how to use the system variables.
C. Know the details behind the meaning and use of expression time stamps.
D. Know the allowable input datatypes for the Informatica transformation language functions.
E. Be familiar with all special functions, such as ABORT and ERROR.
F. Know how to use local variables to improve transformation performance.

11. Mapping optimization
A. Know when is the best time in the development cycle to optimize mappings.
B. Know how to collect and view performance details for transformations in a mapping.
C. Be familiar with specific mapping optimization techniques described in the PowerCenter documentation.
D. Know how to work with the session Sort Order property.

12. Incremental Aggregation
A. Understand how incremental aggregation works.
B. Know how to use incremental aggregation.
C. Know which files are created when a session using incremental aggregation runs.

13. Version control
A. Understand how to view object history and how/when objects get versioned.
B. Understand parent/child relationships in a versioned repository.
C. Know the difference between a deleted object and a purged object.

14. Global SDK

15. Installation and Support

Posted 15th December 2011 by Prafull Dangore

DEC 15
Informatica Performance Tuning

Scenario: Informatica Performance Tuning
Solution:

Identifying Target Bottlenecks
------------------------------
The most common performance bottleneck occurs when the Informatica Server writes to a target database. You can identify target bottlenecks by configuring the session to write to a flat file target: session performance increases significantly when you write to a flat file. You can also add an Always False Filter transformation in the mapping before each target definition so that no data is loaded into the target tables. If the time it takes to run the new session is the same as the original session, you have a target bottleneck.

Consider performing the following tasks to increase target performance:
* Drop indexes and key constraints.
* Increase checkpoint intervals.
* Use bulk loading.
* Use external loading.
* Increase database network packet size.
* Optimize target databases.

Identifying Source Bottlenecks
------------------------------
If the session reads from a relational source, you can use a Filter transformation, a read test mapping, or a database query to identify source bottlenecks:
* Filter Transformation - add an always false Filter transformation in the mapping after each source qualifier so that no data is processed past the filter transformation. You have a source bottleneck if the new session runs in about the same time.
* Read Test Session - compare the time taken to process a given set of data using the session with that for a session based on a copy of the mapping with all transformations after the source qualifier removed, and the source qualifiers connected to file targets. You have a source bottleneck if the new session runs in about the same time.
* Extract the query from the session log and run it in a query tool. Measure the time taken to return the first row and the time to return all rows. If there is a significant difference in time, you can use an optimizer hint to eliminate the source bottleneck.

Consider performing the following tasks to increase source performance:
* Optimize the query.
* Use conditional filters.
* Increase database network packet size.
* Connect to Oracle databases using IPC protocol.

Identifying Mapping Bottlenecks
-------------------------------
If you determine that you do not have a source bottleneck, add an Always False Filter transformation in the mapping before each target definition so that no data is loaded into the target tables. If the time it takes to run the new session is the same as the original session, you have a mapping bottleneck. You can also identify mapping bottlenecks by examining performance counters:

Readfromdisk and Writetodisk Counters: If a session contains Aggregator, Rank, or Joiner transformations, examine each Transformation_readfromdisk and Transformation_writetodisk counter. If these counters display any number other than zero, you can improve session performance by increasing the index and data cache sizes. Note that if the session uses Incremental Aggregation, the counters must be examined during the run, because the Informatica Server writes to disk when saving historical data at the end of the run.

BufferInput_efficiency and BufferOutput_efficiency counters: Any dramatic difference in a given set of BufferInput_efficiency and BufferOutput_efficiency counters indicates inefficiencies that may benefit from tuning.

Rowsinlookupcache Counter: A high value indicates a larger lookup, which is more likely to be a bottleneck.

Errorrows Counters: If a session has large numbers in any of the Transformation_errorrows counters, you might improve performance by eliminating the errors. In large numbers they restrict performance because for each one, the Informatica Server pauses to determine its cause, removes the row from the data flow and writes it to the session log or bad file. Eliminate transformation errors (conversion errors, conflicting mapping logic, and any condition set up as an error, such as null input). As a short term fix, reduce the tracing level on sessions that must generate large numbers of errors.

To enable collection of performance data:
1. Set the session property Collect Performance Data (on the Performance tab).
2. Increase the size of the Load Manager Shared Memory by 200kb for each session in shared memory that you configure to create performance details. If you create performance details for all sessions, multiply the MaxSessions parameter by 200kb to calculate the additional shared memory requirements.

To view performance details in the Workflow Monitor:
1. While the session is running, right-click the session in the Workflow Monitor and choose Properties.
2. Click the Performance tab in the Properties dialog box.

To view the performance details file:
1. Locate the performance details file. The Informatica Server names the file session_name.perf, and stores it in the same directory as the session log.
2. Open the file in any text editor.

General Optimizations
---------------------
Single-pass reading - instead of reading the same data several times, combine mappings that use the same set of source data and use a single source qualifier.
Avoid unnecessary data conversions: for example, if your mapping moves data from an Integer column to a Decimal column, then back to an Integer column, the unnecessary data type conversion slows performance.
Factor out common expressions/transformations and perform them before data pipelines split.
Optimize Char-Char and Char-Varchar comparisons by using the Treat CHAR as CHAR On Read option in the Informatica Server setup so that the Informatica Server does not trim trailing spaces from the end of Char source fields.

Optimize lookups
----------------
Cache lookups if:
o the number of rows in the lookup table is significantly less than the typical number of source rows
o un-cached lookups perform poorly (e.g.
they are based on a complex view or an unindexed table)

Optimize cached lookups:
o Use a persistent cache if the lookup data is static.
o Share caches if several lookups are based on the same data set.
o Reduce the number of cached rows using a SQL override with a restriction.
o Index the columns in the lookup ORDER BY.
o Reduce the number of co

Posted 15th December 2011 by Prafull Dangore

DEC 15

Informatica OPB table which gives the source table and the mappings and folders using an sql query

Scenario: Informatica OPB table which gives the source table and the mappings and folders using an sql query
Solution:
SQL query:

    select opb_subject.subj_name,
           opb_mapping.mapping_name,
           opb_src.source_name
    from   opb_mapping,
           opb_subject,
           opb_src,
           opb_widget_inst
    where  opb_subject.subj_id = opb_mapping.subject_id
    and    opb_mapping.mapping_id = opb_widget_inst.mapping_id
    and    opb_widget_inst.widget_id = opb_src.src_id
    and    opb_widget_inst.widget_type = 1;

Posted 15th December 2011 by Prafull Dangore
DEC 15

What is Pushdown Optimization and things to consider

Scenario: What is Pushdown Optimization and things to consider
Solution:
The process of pushing transformation logic to the source or target database by the Informatica Integration Service is known as Pushdown Optimization. When a session is configured to run for Pushdown Optimization, the Integration Service translates the transformation logic into SQL queries and sends the SQL queries to the database. The Source or Target Database executes the SQL queries to process the transformations.

How does Pushdown Optimization (PO) work?
The Integration Service generates SQL statements when a native database driver is used. The Integration Service can usually push more transformation logic to a database if a native driver is used, instead of an ODBC driver. In case of ODBC drivers, the Integration Service cannot detect the database type and generates ANSI SQL. The Integration Service creates a view (PM_*) in the database while executing the session task and drops the view after the task gets complete. Similarly it also creates sequences (PM_*) in the database. The database schema (SQ connection, LKP connection) should have the Create View / Create Sequence privilege, else the session will fail.

When inserting into targets, the Integration Service does row-by-row processing using a bind variable (only soft parse - only processing time, no parsing time). But in case of Pushdown Optimization, the statement will be executed once, as the transformation logic is pushed to the database.

Without using Pushdown Optimization:

    INSERT INTO EMPLOYEES (ID_EMPLOYEE, EMPLOYEE_ID, FIRST_NAME, LAST_NAME,
        EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT,
        MANAGER_ID, MANAGER_NAME, DEPARTMENT_ID)
    VALUES (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13)
    -- executes 7012352 times

With using Pushdown Optimization:

    INSERT INTO EMPLOYEES (ID_EMPLOYEE, EMPLOYEE_ID, FIRST_NAME, LAST_NAME,
        EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT,
        MANAGER_ID, MANAGER_NAME, DEPARTMENT_ID)
    SELECT CAST(PM_SJEAIJTJRNWT45X3OO5ZZLJYJRY.NEXTVAL AS NUMBER(15, 2)),
           EMPLOYEES_SRC.EMPLOYEE_ID,
           EMPLOYEES_SRC.FIRST_NAME,
           EMPLOYEES_SRC.LAST_NAME,
           CAST((EMPLOYEES_SRC.EMAIL || '@gmail.com') AS VARCHAR2(25)),
           EMPLOYEES_SRC.PHONE_NUMBER,
           CAST(EMPLOYEES_SRC.HIRE_DATE AS date),
           EMPLOYEES_SRC.JOB_ID,
           EMPLOYEES_SRC.SALARY,
           EMPLOYEES_SRC.COMMISSION_PCT,
           EMPLOYEES_SRC.MANAGER_ID,
           PM_Alkp_emp_mgr_1.MANAGER_NAME,
           EMPLOYEES_SRC.DEPARTMENT_ID
    FROM (EMPLOYEES_SRC LEFT OUTER JOIN EMPLOYEES PM_Alkp_emp_mgr_1
          ON (PM_Alkp_emp_mgr_1.EMPLOYEE_ID = EMPLOYEES_SRC.MANAGER_ID))
    WHERE ((EMPLOYEES_SRC.MANAGER_ID =
              (SELECT PM_Alkp_emp_mgr_1.EMPLOYEE_ID
               FROM EMPLOYEES PM_Alkp_emp_mgr_1
               WHERE (PM_Alkp_emp_mgr_1.EMPLOYEE_ID = EMPLOYEES_SRC.MANAGER_ID)))
           OR (0=0))
    -- executes 1 time

A few benefits of using PO:
• There is no memory or disk space required to manage the cache in the Informatica server for Aggregator, Lookup, Sorter and Joiner transformations, as the transformation logic is pushed to the database.
• The SQL generated by the Informatica Integration Service can be viewed before running the session through the Pushdown Optimization Viewer, making it easier to debug.
Things to note when using PO
There are cases where the Integration Service and Pushdown Optimization can produce different result sets for the same transformation logic. This can happen during data type conversion, handling null values, case sensitivity, sequence generation, and sorting of data. The database and Integration Service produce different output when the following settings and conversions are different:
• Nulls treated as the highest or lowest value: While sorting the data, the Integration Service can treat null values as lowest, but the database treats null values as the highest value in the sort order.
• SYSDATE built-in variable: The built-in variable SYSDATE in the Integration Service returns the current date and time for the node running the service process. However, in the database, SYSDATE returns the current date and time for the machine hosting the database. If the time zone of the machine hosting the database is not the same as the time zone of the machine running the Integration Service process, the results can vary.
• Date Conversion: The Integration Service converts all dates before pushing transformations to the database, and if the format is not supported by the database, the session fails.
• Logging: When the Integration Service pushes transformation logic to the database, it cannot trace all the events that occur inside the database server. The statistics the Integration Service can trace depend on the type of pushdown optimization. When the Integration Service runs a session configured for full pushdown optimization and an error occurs, the database handles the errors. When the database handles errors, the Integration Service does not write reject rows to the reject file.

Posted 15th December 2011 by Prafull Dangore

DEC 14

Informatica Interview Questionnaire

1. What are the components of Informatica? And what is the purpose of each?
Ans: Informatica Designer, Server Manager & Repository Manager.
Designer is for creating Source & Target definitions, creating mapplets and mappings, etc.
Server Manager is for creating sessions & batches, scheduling the sessions & batches, monitoring the triggered sessions and batches, giving post- and pre-session commands, creating database connections to various instances, etc.
Repository Manager is for creating and adding repositories, creating & editing folders within a repository, establishing users, groups, privileges & folder permissions, copying, deleting and backing up a repository, viewing the history of sessions, viewing the locks on various objects and removing those locks, etc.

2. What is a repository? And how to add it in an informatica client?
Ans: It's a location where all the mappings and sessions related information is stored. Basically it's a database where the metadata resides. We can add a repository through the Repository Manager.
3. How are the sources and targets definitions imported in informatica designer? How to create a Target definition for flat files?
Ans: When you are in the Source Analyzer there is an option in the main menu to import the source from Database, Flat File, Cobol File & XML file; by selecting any one of them you can import a source definition. When you are in the Warehouse Designer there is an option in the main menu to import the target from Database, XML from File and XML from sources; you can select any one of these. There is no way to import a target definition as a file in Informatica Designer. So while creating the target definition for a file in the Warehouse Designer it is created considering it as a table, and then in the session properties of that mapping it is specified as a file.

4. How to create the source and target database connections in server manager?
Ans: In the main menu of Server Manager there is the menu "Server Configuration", and in that there is the menu "Database connections". From here you can create the Source and Target database connections.

5. Name at least 5 different types of transformations used in mapping design and state the use of each.
Ans: Source Qualifier - Source Qualifier represents all data queried from the source. Expression - Expression performs simple calculations. Filter - Filter serves as a conditional filter. Lookup - Lookup looks up values and passes them to other objects. Aggregator - Aggregator performs aggregate calculations, or has the database perform aggregate calculations.

6. What is a session and how to create it?
Ans: A session is a set of instructions that tells the Informatica Server how and when to move data from sources to targets. You create and maintain sessions in the Server Manager.

7. Where are the source flat files kept before running the session?
Ans: The source flat files can be kept in some folder on the Informatica server or any other machine.

8. How to use an oracle sequence generator in a mapping?
Ans: We have to write a stored procedure, which can take the sequence name as input and dynamically generate a nextval from that sequence. Then in the mapping we can use that stored procedure through a procedure transformation. (A sketch of such a function follows below.)

12. Where are the source flat files kept before running the session?
Ans: The source flat files can be kept in some folder on the Informatica server or any other machine which is in its domain.

13. How can we join the records from two heterogeneous sources in a mapping?
Ans: By using a joiner.

14. Difference between Connected & Unconnected look-up.
Ans: An unconnected Lookup transformation exists separate from the pipeline in the mapping. You write an expression using the :LKP reference qualifier to call the lookup within another transformation, while the connected lookup forms a part of the whole flow of the mapping.

15. Compare Router Vs Filter & Source Qualifier Vs Joiner.
Ans: A Router transformation has input ports and output ports; input ports reside in the input group, and output ports reside in the output groups. Here you can test data based on one or more group filter conditions, but in a Filter you can filter data based on one or more conditions before writing it to targets. A Source Qualifier can join data coming from the same source database; it can even join data from two tables from the same database, and it can join more than two sources. A Joiner is used to combine data from heterogeneous sources, but a Joiner can join only two sources.

16. In a mapping there are 2 targets, to load header and detail; how to ensure that the header loads first, then the detail table?
Ans: Constraint Based Loading (if no relationship at oracle level) OR Target Load Plan (if only 1 source qualifier for both tables) OR select first the header target table and then the detail table while dragging them into the mapping.

17. A mapping takes just 10 seconds to run; it takes a source file and inserts into the target. But before that there is a Stored Procedure transformation which takes around 5 minutes to run and gives output 'Y' or 'N'. If Y then continue the feed, or else stop the feed. (Hint: since the SP transformation takes more time compared to the mapping, it shouldn't run row-wise.)
Ans: There is an option to run the stored procedure before starting to load the rows.

18. How to update or delete rows in a target which does not have key fields?
Ans: To update a table that does not have any keys we can do a SQL Override of the Target Transformation by specifying the WHERE conditions explicitly. Delete cannot be done this way. In this case you have to specifically mention the key for the target table definition on the target transformation in the Warehouse Designer and delete the row using the Update Strategy transformation.

19. Difference between Lookup Transformation & Unconnected Stored Procedure Transformation – which one is faster?

20. What are the oracle DML commands possible through an update strategy?
Ans: dd_insert, dd_update, dd_delete & dd_reject.

21. Informatica settings are available in which file?
Ans: Informatica settings are available in the file pmdesign.ini in the Windows folder.

22. How to join 2 tables connected to a Source Qualifier without having any relationship defined?
Ans: By writing an sql override.

23. What is the option by which we can run all the sessions in a batch simultaneously?
Ans: In the batch edit box there is an option called concurrent. By checking that, all the sessions in that batch will run concurrently.

Data warehousing concepts

1. What is star schema? And what is snowflake schema?
The center of the star consists of a large fact table, and the points of the star are the dimension tables. A star schema contains denormalized dimension tables and a fact table; each primary key value in a dimension table is associated with a foreign key of the fact table. Here the fact table contains all business measures (normally numeric data) and foreign key values, and the dimension tables have details about the subject area. A snowflake schema is basically normalized dimension tables to reduce redundancy in the dimension tables; the dimension data has been grouped into multiple tables instead of one large table. Snowflake schemas normalize dimension tables to eliminate redundancy.

2. Difference between OLTP and DWH?
An OLTP system is basically application-oriented (e.g. a purchase order is functionality of an application), whereas in DWH the concern is subject-oriented (subject in the sense of customer, item, product, time).
OLTP:
· Application Oriented
· Used to run business
· Detailed data
· Current, up to date
· Isolated Data
· Repetitive access
· Clerical User
· Performance Sensitive
· Few records accessed at a time (tens)
· Read/Update Access
· No data redundancy
· Database Size 100MB-100 GB
DWH:
· Subject Oriented
· Used to analyze business
· Summarized and refined
· Snapshot data
· Integrated Data
· Ad-hoc access
· Knowledge User
· Performance relaxed
· Large volumes accessed at a time (millions)
· Mostly Read (Batch Update)
· Redundancy present
· Database Size 100 GB - few terabytes

3. What is bitmap index and why is it used for DWH?
A bitmap for each key value replaces a list of rowids. A bitmap index is more efficient for data warehousing because of low cardinality and low updates, and it is very efficient for where clauses.

4. What is difference between view and materialized view?
A view contains a query; whenever you execute the view it reads from the base table. With an M view, the loading or replication takes place only once, which gives you better query performance. Refresh of M views: 1. on commit, and 2. on demand (Complete, fast, never, force).
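As a small illustration of the bitmap index point in Q3, a hedged Oracle sketch against an invented low-cardinality dimension column:

-- Bitmap indexes suit low-cardinality, rarely-updated DWH columns
CREATE BITMAP INDEX customer_gender_bix
ON customer_dim (gender);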

5. Why do we need a staging area database for DWH?
The staging area is needed to clean operational data before loading it into the data warehouse. Cleaning in the sense of merging data which comes from different sources.

6. What are the steps to create a database manually?
Create the OS service, create the init file, start the database in nomount stage, then give the create database command.

7. What is difference between data mart and data warehouse?
A data mart is designed for a particular line of business, such as sales, marketing, or finance, whereas a data warehouse is enterprise-wide/organizational. The data flow of the data warehouse depends on the approach.

8. What is the significance of a surrogate key?
A surrogate key is used in slowly changing dimension tables to track old and new values, and it is derived from the primary key.

9. Why do we need a data warehouse?
A single, complete and consistent store of data obtained from a variety of different sources, made available to end users in a way they can understand and use in a business context. A process of transforming data into information and making it available to users in a timely enough manner to make a difference. A technique for assembling and managing data from various sources for the purpose of answering business questions, thus making decisions that were not previously possible.
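For Q8, the usual Oracle approach is a sequence that is independent of the source system's primary key. A hedged sketch with invented names:

CREATE SEQUENCE customer_key_seq START WITH 1 INCREMENT BY 1;

INSERT INTO customer_dim (customer_key, customer_id, name, address)
VALUES (customer_key_seq.NEXTVAL, 101, 'Ashok', 'Pune');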

10. What is a slowly changing dimension? What kind of SCD did you use in your project?
Dimension attribute values may change constantly over time. (Say for example the customer dimension has customer_id, name, and address; a customer's address may change over time.) How will you handle this situation? There are 3 types: one is we can overwrite the existing record; the second one is to create an additional new record at the time of the change, with the new attribute values; the third one is to create a new field to keep the new values in the original dimension table.

11. What is difference between primary key and unique key constraints?
A primary key maintains uniqueness and not-null values, whereas a unique constraint maintains unique values but allows null values.

12. What are the types of index? And which type of index did you use in your project?
Bitmap index, B-tree index, function-based index, reverse key and composite index. We used a bitmap index in our project for better performance.

13. How is your DWH data modeling (details about star schema)?

14. A table has 3 partitions, but I want to update the 3rd partition; how will you do it?
Specify the partition name in the update statement. Say for example:
Update employee partition(name) a set a.empno = 10 where ename = 'Ashok'

15. When you give an update statement, how does the memory flow happen and how does Oracle allocate memory for it?
Oracle first checks in the shared SQL area whether the same SQL statement is available; if it is there, it uses it (once completed, it is stored in the shared SQL area, where memory was previously allocated). Otherwise it allocates memory in the shared SQL area and then creates runtime memory in the private SQL area to create the parse tree and execution plan.

16. When you give an update statement, how will the undo/rollback segment work? What are the steps?
Oracle keeps old values in the undo segment and new values in redo entries. When you say commit, it erases the undo segment values and keeps the new values permanently. When you say rollback, it restores the old values from the undo segment.

17. Write a query to find out the 5th max salary (in Oracle, DB2, SQL Server)?
Select min(salary) from (select salary from employee order by salary desc) where rownum <= 5
(the inner query orders salaries descending; keeping the first 5 rows and taking the minimum gives the 5th max)

Informatica Administration

18. What is DTM? How will you configure it?
DTM transforms data received from the reader buffer and moves it transformation to transformation on a row-by-row basis, and it uses transformation caches when necessary.

19. You transfer 100000 rows to the target but some rows get discarded; how will you trace them, and where do they get loaded?
Rejected records are loaded into bad files. A bad file has a record indicator and a column indicator. The record indicator is identified by (0-insert, 1-update, 2-delete, 3-reject) and the column indicator by (D-valid, O-overflow, N-null, T-truncated). Normally data may get rejected for different reasons, e.g. due to transformation logic.
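Going back to Q17, an analytic-function alternative that also copes with duplicate salaries; a hedged sketch assuming the same employee table:

SELECT salary
FROM (SELECT salary,
             DENSE_RANK() OVER (ORDER BY salary DESC) rnk
      FROM employee)
WHERE rnk = 5;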

20. What are the different uses of a repository manager?
The repository manager is used to create a repository, which contains the metadata Informatica uses to transform data from source to target. It is also used to create Informatica users and folders, and to copy, backup and restore the repository.

21. How do you take care of security using a repository manager?
Using repository privileges, folder permissions and locking.
Repository privileges (Session operator, Use designer, Browse repository, Create sessions and batches, Administer repository, Administer server, Super user)
Folder permission (owner, groups, users)
Locking (Read, Write, Execute, Fetch, Save)

22. Explain Informatica architecture?
Informatica consists of client and server. Client tools are such as Repository Manager, Designer, Server Manager. The repository database contains metadata; it is read by the Informatica server and used to read data from the source, transforming and loading it into the target.

23. What is a folder?
A folder contains repository objects such as sources, targets, mappings and transformations, which help logically organize our data warehouse.

24. Can you create a folder within designer?
Not possible.

25. What are shortcuts? Where can they be used? What are the advantages?
There are 2 shortcuts (local and global). Local is used in a local repository and global is used in a global repository. The advantage is reusing an object without creating multiple objects. Say for example you want to use a source definition in 10 mappings in 10 different folders; without creating 10 multiple sources, you create 10 shortcuts.

26. How do you increase the performance of mappings?
Use single pass read (use one source qualifier instead of multiple SQs for the same table).
Minimize data type conversion (Integer to Decimal and again back to Integer).
Optimize transformations (when you use Lookup, Filter, Aggregator, Rank and Joiner).
Use caches for lookup.
For the Aggregator, use presorted ports, increase the cache size, and minimize input/output ports as much as possible.
Use Filter wherever possible to avoid unnecessary data flow.

27. How will you do session partitions?

It's not available in PowerMart 4.7.

Transformations

28. What are the constants used in update strategy?
DD_INSERT, DD_UPDATE, DD_DELETE, DD_REJECT

29. What is difference between connected and unconnected lookup transformations?
A connected lookup returns multiple values to other transformations, whereas an unconnected lookup returns one value. If the lookup condition matches, a connected lookup returns user-defined default values, whereas an unconnected lookup returns null values. Connected supports dynamic caches, whereas unconnected supports static only.

30. What will you do at session level for an update strategy transformation?
In the session property sheet, set Treat rows as "Data Driven".

31. What are the ports available for update strategy, sequence generator, lookup and stored procedure transformations?
Update Strategy – Input, Output
Sequence Generator – Output only
Lookup – Input, Output, Lookup, Return
Stored Procedure – Input, Output

32. Why did you use a connected stored procedure; why don't you use an unconnected stored procedure?

33. What are active and passive transformations?
An active transformation changes the number of records when passing them to the target (example: Filter), whereas a passive transformation does not change the number of records (example: Expression).

34. How will you make records into groups?
Using the group by port in the aggregator.

35. What are the tracing levels?
Normal – contains only session initialization details and transformation details: number of records rejected, applied.
Terse – only initialization details will be there.
Verbose Initialization – Normal setting information plus detailed information about the transformation, settings and all information about the session.
Verbose Data – Verbose init plus the data moving through the session.

36. You need to store a value like 145 into the target when you use an aggregator; how will you do that?
Use the Round() function.

37. How will you move mappings from the development to the production database?

Copy all the mappings from the development repository and paste them into the production repository; while pasting, it will prompt whether you want to replace/rename. If you say replace, Informatica replaces all the source tables with the repository database.

38. What is difference between aggregator and expression?
Aggregator is an active transformation and expression is a passive transformation. The aggregator transformation is used to perform aggregate calculations on a group of records, whereas expression is used to perform calculations on a single record.

39. When do you use a normalizer?
A normalizer can be used in Relational to denormalize data.

40. Can you use a mapping without a source qualifier?
Not possible. If the source is RDBMS/DBMS/flat file use SQ, or use a normalizer if the source is a COBOL feed.

41. What are stored procedure transformations? What is the purpose of the SP transformation, and how did you go about using it in your project?
Connected and unconnected stored procedures. An unconnected stored procedure is used for database-level activities such as pre- and post-load; a connected stored procedure is used at the Informatica level, for example passing one parameter as input and capturing the return value from the stored procedure.
Normal – row-wise check
Pre-Load Source – (capture source incremental data for incremental aggregation)
Post-Load Source – (delete temporary tables)
Pre-Load Target – (check disk space available)
Post-Load Target – (drop and recreate index)

42. What is lookup and what is the difference between the types of lookup? What exactly happens when a lookup is cached? How does a dynamic lookup cache work?
The lookup transformation is used to check values in the source and target tables (primary key values). There are 2 types: connected and unconnected. A connected lookup returns multiple values if the condition is true, whereas an unconnected lookup returns a single value through the return port. A connected lookup returns a default user value if the condition does not match, whereas an unconnected lookup returns null values. What the lookup cache does: it reads the source/target table and stores it in the lookup cache. With a dynamic lookup cache, the Informatica server dynamically inserts new rows or updates existing rows in the cache and the target.

43. What is a joiner transformation?
Used for heterogeneous sources (a relational source and a flat file). Types of joins, assuming 2 tables with values (Master: 1, 2, 3 and Detail: 1, 3, 4):
Normal (if the condition matches both master and detail tables, the records will be displayed. Result set: 1, 3)
Master Outer (takes all the rows from the detail table and matching rows from the master table. Result set: 1, 3, 4)
Detail Outer (takes all the values from the master source and matching values from the detail table. Result set: 1, 2, 3)
Full Outer (takes all values from both tables)
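The four joiner types in Q43 map directly onto SQL joins; using the same master (1, 2, 3) and detail (1, 3, 4) values, a hedged sketch with invented table names:

-- Normal       = INNER JOIN                     -> 1, 3
-- Master Outer = all detail + matching master   -> 1, 3, 4
-- Detail Outer = all master + matching detail   -> 1, 2, 3
-- Full Outer   = FULL OUTER JOIN                -> 1, 2, 3, 4
SELECT m.id AS master_id, d.id AS detail_id
FROM master_tbl m
FULL OUTER JOIN detail_tbl d ON m.id = d.id;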

44. What is the aggregator transformation and how did you use it in your project?
Used to perform aggregate calculations on a group of records, and we can use a conditional clause to filter data.

45. Can you use one mapping to populate two tables in different schemas?
Yes, we can.

46. Explain the lookup cache and the various caches.
The lookup transformation is used to check values in the source and target tables (primary key values). Various caches:
Persistent cache (we can save the lookup cache files and reuse them the next time the lookup transformation is processed)
Re-cache from database (if the persistent cache is not synchronized with the lookup table, you can configure the lookup transformation to rebuild the lookup cache)
Static cache (when the lookup condition is true, the Informatica server returns a value from the lookup cache and does not update the cache while it processes the lookup transformation)
Dynamic cache (the Informatica server dynamically inserts new rows or updates existing rows in the cache and the target; suppose we want to look up a target table, we can use a dynamic cache)
Shared cache (we can share a lookup transformation between multiple transformations in a mapping; 2 lookups in a mapping can share a single lookup cache)

47. What are the contents of index and data cache files?
Index cache files hold unique group values as determined by the group by port in the transformation. Data cache files hold row data until the necessary calculation is performed.

48. In which path will the cache be created?
A user-specified directory. If we say c:\, all the cache files are created in that directory.

49. How do you remove the cache files after the transformation?
After the session completes, DTM releases cache memory and deletes the cache files. In case of using persistent cache and incremental aggregation, the cache files will be saved.

50. How do you call a stored procedure within a transformation?
In the expression transformation, create a new output port and in the expression write :sp.stored_procedure_name(arguments)

51. What is the use of the aggregator transformation?
To perform aggregate calculations. Use a conditional clause to filter data in the expression, e.g.
Sum(commission, commission > 2000)
Use non-aggregate functions, e.g. iif(max(quantity) > 0, max(quantity), 0)

52. Where do you specify all the parameters for lookup caches?
Lookup property sheet/tab.
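The conditional aggregate in Q51, Sum(commission, commission > 2000), corresponds to a filtered aggregate in SQL; a hedged sketch against an invented emp table:

SELECT SUM(CASE WHEN commission > 2000 THEN commission END) AS total_commission
FROM emp;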

53. How does Informatica read data if the source has one relational and one flat file?
Use a joiner transformation after the source qualifier, before other transformations.

54. Can you use a flat file for the repository?
No, we can't.

55. Can you use a flat file for a lookup table?
No, we can't.

56. What is a dynamic lookup?
When we use a lookup on the target table, the Informatica server dynamically inserts new values, or updates them if the values exist, and passes them to the target table.

57. Without a Source Qualifier and joiner, how will you join tables?
At session level we have the option "user defined join", where we can write the join condition.

58. The update strategy is set to DD_UPDATE, but at session level we have insert. What will happen?
Insert takes place, because the session-level option overrides the mapping-level option.

59. Is there any performance difference between connected & unconnected lookup? If yes, how?
Yes. An unconnected lookup is much faster than a connected lookup because the unconnected lookup is not connected to any other transformation; we call it from another transformation, which minimizes the lookup cache values, whereas a connected lookup is connected to other transformations and keeps all values in the lookup cache.

60. How will you load unique records into a target flat file when the source flat file has duplicate data?
There are 2 ways we can do this: either we can use a Rank transformation or an Oracle external table. In the Rank transformation, use the group by port (group the records) and then set the number of ranks to 1; the rank transformation returns one value from the group, and that value will be a unique one.

61. What are the commit intervals?
Source based commit (based on the number of active source records read by the source qualifier; say the commit interval is set to 10000 rows and the source qualifier reads 10000, but due to transformation logic 3000 rows get rejected; when 7000 reach the target, the commit will fire, so the writer buffer does not hold rows back).
Target based commit (based on the rows in the buffer and the commit interval; say target based commit is set to 10000 but the writer buffer fills every 7500: the commit fires when the buffer fills at 15000, then 22500, and so on).

62. When do we use the router transformation?

When we want to perform multiple conditions to filter out data, then we go for a router. (Say for example out of 50 source records the filter condition matches 10 records and the remaining 40 records get filtered out, but we still want to perform a few more filter conditions on those remaining 40 records.)

63. How did you schedule sessions in your project?
Run once (set 2 parameters: the date and time when the session should start)
Run every (the Informatica server runs the session at regular intervals as we configured; parameters: days, hour, minutes, end on, end after, forever)
Customized repeat (repeat every 2 days; daily frequency in hr, min; every week; every month)
Run only on demand (manually run); this is not session scheduling.

64. How do you use the pre-session and post-session options in the session wizard, and what are they used for?
Post-session is used for the email option: when the session succeeds/fails, send an email. For that we should configure:
Step 1: Have an Informatica startup account and create an Outlook profile for that user.
Step 2: Configure the Microsoft Exchange server in the Mail applet (Control Panel).
Step 3: Configure the Informatica server: the Miscellaneous tab has one option called MS Exchange profile, where we specify the Outlook profile name.
Pre-session is used for event scheduling. (Say for example we don't know whether the source file is available or not in a particular directory; for that we write one DOS command to move the file from directory to destination and set the event-based scheduling option in the session property sheet: "Indicator file to wait for".)

65. What are the different types of batches? What are the advantages and disadvantages of a concurrent batch?
Sequential (runs the sessions one by one)
Concurrent (runs the sessions simultaneously)
Advantage of a concurrent batch: it uses the Informatica server resources and reduces the time it takes compared to running the sessions separately. Use this feature when we have multiple sources that process a large amount of data in one session: split the sessions and put them into one concurrent batch to complete quickly.
Disadvantage: it requires more shared memory, otherwise the session may fail.

66. How do you handle a session if some of the records fail? How do you stop the session in case of errors? Can it be achieved at mapping level or session level?
It can be achieved at session level only. In the session property sheet, log files tab, one option is the error handling "Stop on ___ errors". Based on the error count we set, the Informatica server stops the session.

67. How do you improve the performance of a session?
If we use the aggregator transformation, use sorted ports, increase the aggregate cache size, and use a filter before aggregation so that it minimizes unnecessary aggregation. For the lookup transformation, use lookup caches. Increase the DTM shared memory allocation.

Eliminate transformation errors using a lower tracing level. (Say for example a mapping has 50 transformations; when a transformation error occurs, the Informatica server has to write to the session log file, which affects session performance.)

68. Explain incremental aggregation. Will that increase the performance? How?
Incremental aggregation captures whatever changes were made in the source and uses them for the aggregate calculation in a session, rather than processing the entire source and recalculating the same calculation each time the session runs. Therefore it improves session performance.
Only use incremental aggregation in the following situation: the mapping has an aggregate calculation, the source table changes incrementally, and the source incremental data is filtered by time stamp.
Before aggregation you have to do the following steps: use a filter transformation to remove pre-existing records, and reinitialize the aggregate cache when the source table completely changes (for example incremental changes happen daily and complete changes happen once monthly; so when the source table completely changes we have to reinitialize the aggregate cache, truncate the target table and use the new source table: choose "Reinitialize cache" in the aggregation behavior in the transformation tab).

69. A concurrent batch has 3 sessions, and each session is set to run if the previous completes; but the 2nd fails. What will happen to the batch?
The batch will fail.

General Project

70. How many mappings, dimension tables, fact tables and complex mappings did you do? And what is your database size and how frequently do you load to the DWH?
I did 22 mappings, 4 dimension tables and one fact table. One complex mapping I did for a slowly changing dimension table. The database size is 9GB. Loading data every day.

71. What are the different transformations used in your project?
Aggregator, Expression, Filter, Joiner, Lookup, Rank, Sequence Generator, Source Qualifier, Stored Procedure, Update Strategy.

72. How did you populate the dimension tables?

73. What are the sources you worked on?
Oracle

74. How many mappings have you developed on your whole DWH project?
45 mappings

75. What is the OS used in your project?
Windows NT

76. Explain your project (fact table, dimensions, and database size).
A fact table contains all business measures (numeric values) and foreign key values; a dimension table contains details about the subject area, like customer, product.

77. What is the difference between Informatica PowerMart and PowerCenter?
Using PowerCenter we can create a global repository; PowerMart is used to create a local repository. A global repository can configure multiple servers to balance session load; a local repository can configure only a single server.

78. Have you done any complex mapping?
Developed one mapping to handle a slowly changing dimension table.

79. Explain details about DTM?
Once the session starts, the load manager starts DTM; it allocates session shared memory and contains reader and writer. The reader reads source data from the source qualifier using a SQL statement and moves the data to DTM; then DTM transforms the data, transformation to transformation on a row-by-row basis, and finally moves the data to the writer; then the writer writes the data into the target using a SQL statement.

I-Flex Interview (14th May 2003)

80. What are the keys you used other than primary key and foreign key?
Used a surrogate key to maintain uniqueness, to overcome duplicate values in the primary key.

81. Data flow of your data warehouse (architecture)?
DWH is a basic architecture (OLTP to data warehouse; from the DWH, OLAP analysis and report building).

82. Difference between PowerMart and PowerCenter?
Using PowerCenter we can create a global repository; PowerMart is used to create a local repository. A global repository can configure multiple servers to balance session load; a local repository can configure only a single server.

83. What are the batches and their details?
Sequential (run the sessions one by one); Concurrent (run the sessions simultaneously).
Advantage of a concurrent batch: it uses the Informatica server resources and reduces the time it takes compared to running the sessions separately. Use this feature when we have multiple sources that process a large amount of data in one session: split the sessions and put them into one concurrent batch to complete quickly.
Disadvantage: it requires more shared memory, otherwise the session may fail.

84. What are the indexes you used? Bitmap join index?
A bitmap index is used in a data warehouse environment to increase query response time, since the DWH has low cardinality and low updates, and it is very efficient for where clauses. A bitmap join index is used to join dimension and fact tables instead of reading 2 different indexes.

85. What are the partitions in 8i/9i? Where will you use hash partitioning?
In Oracle 8i there are 3 partition types (Range, Hash, Composite); in Oracle 9i, List partitioning is an additional one.
Range (used for date values; for example in a DWH the date values are Quarter 1, Quarter 2, Quarter 3, Quarter 4)
Hash (used for unpredictable values; say for example we can't predict which value to allocate to which partition, then we go for hash partitioning; if we set 5 partitions for a column, Oracle allocates the values into the 5 partitions accordingly)
List (used for literal values; say for example a country has 24 states, create 24 partitions, one for each state)
Composite (combination of range and hash)

86. What is an external table in Oracle? How does Oracle read the flat file?
Used to read flat files. Oracle internally writes a SQL*Loader script with a control file.

87. What is the main difference between mapplets and mappings?
A mapplet lets you reuse the transformations in several mappings, whereas a mapping is not like that. If any changes are made in a mapplet, they are automatically inherited in all other instances of the mapplet.

88. What is the difference between the source qualifier filter and the filter transformation?
The source qualifier filter is only used for relational sources, whereas the Filter is used for any kind of source. The source qualifier filters data while reading, whereas the filter works before loading into the target.

89. What is the maximum number of return values when we use an unconnected transformation?
Only one.

90. What are the environments in which the Informatica server can run?
The Informatica client runs on Windows 95 / 98 / NT, Unix Solaris, Unix AIX (IBM); the Informatica server runs on Windows NT / Unix.
Minimum hardware requirements:
Informatica Client: hard disk 40MB, RAM 64MB
Informatica Server: hard disk 60MB, RAM 64MB
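To make the partition types in Q85 concrete, a hedged Oracle sketch of range partitioning by quarter (table, column and partition names are invented):

CREATE TABLE sales_fact (
   sale_date DATE,
   amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
   PARTITION q1 VALUES LESS THAN (TO_DATE('01-APR-2003','DD-MON-YYYY')),
   PARTITION q2 VALUES LESS THAN (TO_DATE('01-JUL-2003','DD-MON-YYYY')),
   PARTITION q3 VALUES LESS THAN (TO_DATE('01-OCT-2003','DD-MON-YYYY')),
   PARTITION q4 VALUES LESS THAN (MAXVALUE)
);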

95. Can an unconnected lookup do everything a connected lookup transformation can do?
No. We can't call a connected lookup from another transformation; the rest of the things are possible.

96. In 5.x can we copy part of a mapping and paste it in another mapping?
I think it's possible.

97. What option do you select for sessions in a batch, so that the sessions run one after the other?
We have to select an option called "Run if previous completed".

98. How do you really know that paging to disk is happening while you are using a lookup transformation? Assume you have access to the server.
We have to collect performance data first, then see the counter parameter lookup_readtodisk; if it's greater than 0, then it's reading from disk.
Step 1: Choose the option "Collect Performance Data" in the General tab of the session property sheet.
Step 2: Monitor the server, then click Server Request > Session Performance Details.
Step 3: Locate the performance details file named session_name.perf in the session log file directory.
Step 4: Find the counter parameter lookup_readtodisk; if it's greater than 0, Informatica reads lookup table values from the disk.
Step 5: Find out how many rows are in the cache: see Lookup_rowsincache.

99. List three options available in Informatica to tune the aggregator transformation.
Use Sorted Input to sort data before aggregation; use a Filter transformation before the aggregator; increase the aggregator cache size.

100. Assume there is a text file as source having a binary field. What native data type will Informatica convert this binary field to in the source qualifier?
Binary data type for a relational source; for a flat file?

101. Variable v1 has values set as 5 in the designer (default), 10 in the parameter file, 15 in the repository. While running the session, which value will Informatica read?
Informatica reads the value 15 from the repository.

102. A joiner transformation is joining two tables S1 and S2. S1 has 10,000 rows and S2 has 1,000 rows. Which table will you set as master for better performance of the joiner transformation? Why?
Set table S2 as the master table, because the Informatica server has to keep the master table in the cache; with 1,000 rows in the cache we get better performance instead of having 10,000 rows in the cache.

103. The source table has 5 rows. Rank in the rank transformation is set to 10. How many rows will the rank transformation output?
5 ranks.

104. How do you capture performance statistics of an individual transformation in the mapping, and explain some important statistics that can be captured?
Use tracing level Verbose Data.

105. Give a way in which you can implement a real-time scenario where data in a table is changing and you need to look up data from it. How will you configure the lookup transformation for this purpose?
In a slowly changing dimension table, use type 2 and model 1.

106. What is the DTM process? How many threads does it create to process data? Explain each thread in brief.
DTM receives data from the reader and moves data transformation to transformation on a row-by-row basis. It creates 2 threads: one is the reader and another one is the writer.

107. Suppose a session is configured with a commit interval of 10,000 rows and the source has 50,000 rows. Explain the commit points for source-based commit and target-based commit. Assume appropriate values wherever required.
Target based commit (first time the buffer fills at 7500, next time at 15000): commit every 15000, 22500, 30000, 40000, 50000.
Source based commit (does not depend on rows held in the buffer): commit every 10000, 20000, 30000, 40000, 50000.

108. What does the first column of the bad file (rejected rows) indicate?
First column – row indicator (0, 1, 2, 3)
Second column – column indicator (D, O, N, T)

109. What is the formula for calculating rank data caches? And also aggregator and index caches?
Index cache size = total no. of rows * size of the column in the lookup condition (e.g. 50 * 4)
Aggregator/Rank transformation data cache size = (total no. of rows * size of the connected output ports)

110. Can an unconnected lookup return more than 1 value?
No.

INFORMATICA TRANSFORMATIONS
Aggregator
Expression
External Procedure
Advanced External Procedure
Filter
Joiner
Lookup
Normalizer
Rank
Router

Sequence Generator
Stored Procedure
Source Qualifier
Update Strategy
XML Source Qualifier

Expression Transformation
You can use an ET to calculate values in a single row before you write to the target. You can use an ET to perform any non-aggregate calculation.
Calculation: To use the Expression Transformation to calculate values for a single row, you must include the following ports:
Input port for each value used in the calculation
Output port for the expression
NOTE: You can enter multiple expressions in a single ET. As long as you enter only one expression for each port, you can create any number of output ports in the Expression Transformation. In this way, you can use one expression transformation rather than creating separate transformations for each calculation that requires the same set of data.

Difference between Aggregator and Expression Transformation
We can use the Aggregator to perform calculations on groups, such as sums or averages, whereas the Expression transformation permits you to do calculations on a row-by-row basis only. To perform calculations involving multiple rows, you must use the Aggregator. Unlike the ET, the Aggregator Transformation allows you to group and sort data.

Sequence Generator Transformation
Create keys; replace missing values. This contains two output ports that you can connect to one or more transformations: NEXTVAL and CURRVAL. The server generates a value each time a row enters a connected transformation, even if that value is not used. The SGT can be reusable. You cannot edit any default ports (NEXTVAL, CURRVAL); this protects the integrity of the sequence values generated.
SGT properties: Start value, Increment By, End value, Current value, Cycle (if selected, the server cycles through the sequence range; otherwise it stops with the configured end value), Reset, No. of cached values.
NOTE: Reset is disabled for a reusable SGT. Unlike other transformations, you cannot override SGT properties at session level.

Aggregator Transformation
The server performs aggregate calculations as it reads, and stores the necessary group and row data in an aggregator cache.
Components: aggregate expression, group by port, aggregate cache.
When a session is run using an aggregator transformation, the server creates index and data caches in memory to process the transformation. If the server requires more space, it stores overflow values in cache files; the server creates the files in the local directory.
NOTE: The performance of the aggregator transformation can be improved by using the "Sorted Input" option. When this is selected, the server assumes all data is sorted by group.

Incremental Aggregation
Using this, you apply captured changes in the source to the aggregate calculation in a session. If the source changes only incrementally and you can capture the changes, you can configure the session to process only those changes. This allows the server to update the target incrementally, rather than forcing it to process the entire source and recalculate the same calculations each time you run the session.
Steps: The first time you run a session with incremental aggregation enabled, the server processes the entire source. At the end of the session, the server stores the aggregate data from that session run in two files, the index file and the data file. The second time you run the session, use only changes in the source as source data for the session; the server passes new source data through the mapping, uses the historical cache data to perform the new calculations incrementally, and saves the incremental changes.
The server then performs the following actions: for each input record, the session checks the historical information in the index file for a corresponding group. If it finds a corresponding group, the server performs the aggregate operation incrementally, using the aggregate data for that group; else the server creates a new group and saves the record data.

Posted 14th December 2011 by Prafull Dangore
0

Add a comment 17.

DEC 13

Router T/R is active or passive, what is the reason behind that?

Scenario: Router T/R is active, but some people are saying it is sometimes passive; what is the reason behind that?
Solution:
First of all, every active transformation is a passive transformation, but every passive one is not active. In the Router transformation there is a special feature, the Default group; because of the Default group it is passive. We can avoid this Default group by some transformation settings; now it is active.

Posted 13th December 2011 by Prafull Dangore
0

Add a comment 18.

DEC 13

If the source has duplicate records as id and name columns, the target should be loaded as 1 a+b+c or 1 a||b||c; what transformations should be used for this?
Scenario: The source has duplicate records in the id and name columns, values: 1 a, 1 b, 1 c, 2 a, 2 b. The target should be loaded as 1 a+b+c or 1 a||b||c. What transformations should be used for this?
Solution: Follow the below steps (the expression logic is similar to the next post):
1. Use a sorter transformation and sort the data as per id.
2. Use an expression transformation and create ports that concatenate the current name onto a running value while the id stays the same, plus a running row counter (see the port layout in the next post).
3. Send id and the counter to an aggregator, where you take the max counter for each id.
4. Join the aggregator output back with the expression output on the counter; you will get 1 a b c and 2 a b in the target.

Posted 13th December 2011 by Prafull Dangore
0

Add a comment 2.

DEC 13

I have a flat file, in which I have two fields: emp_id, emp_name. The data is like this:
emp_id emp_name
101    soha
101    ali
101    kahn
102    Siva
102    shanker
102    Reddy
How to merge the names so that my output is like this:
Emp_id Emp_name
101    Soha ali Kahn
102    Siva shanker Reddy

Scenario: I have a flat file with two fields, emp_id and emp_name, with the data as above. How to merge the names so that there is one row per emp_id?
Solution: Follow the below steps:
1. Use a sorter transformation and sort the data as per emp_id.
2. Use an expression transformation and create the below ports (the port order matters: V_previous_emp_id must be evaluated after V_emp_full_name, so it still holds the previous row's id when V_emp_full_name is computed):
V_emp_id = emp_id
V_emp_name = emp_name
V_emp_full_name = IIF(V_emp_id = V_previous_emp_id, V_emp_full_name || ' ' || V_emp_name, V_emp_name)
V_previous_emp_id = V_emp_id
O_emp_full_name = V_emp_full_name
V_counter = V_counter + 1
O_counter = V_counter
3. The output will look like:
emp_id emp_name           Counter
101    soha               1
101    soha ali           2
101    soha ali kahn      3
102    Siva               4
102    Siva shanker       5
102    Siva shanker Reddy 6
4. Send emp_id and Counter to an aggregator, where you take the max counter for each id, so the o/p will be:
Emp_id Counter
101    3
102    6
5. Join the output of step 3 and step 4 on Counter; you will get the desired output:
Emp_id Emp_name
101    Soha ali Kahn
102    Siva shanker Reddy

Posted 13th December 2011 by Prafull Dangore
0

Add a comment 3.

DEC 12

Difference between data mart and data warehouse
Scenario: Difference between data mart and data warehouse
Solution:

Data Mart: A data mart is usually sponsored at the department level and developed with a specific issue or subject in mind; a data mart is a data warehouse with a focused objective.
Data Warehouse: A data warehouse is a "Subject-Oriented, Integrated, Time-Variant, Nonvolatile collection of data in support of decision making".

A data mart is used on a business division/department level, while a data warehouse is used on an enterprise level.
A Data Mart is a subset of data from a Data Warehouse; Data Marts are built for specific user groups. By providing decision makers with only a subset of data from the Data Warehouse, privacy, performance and clarity objectives can be attained.
A Data Warehouse is simply an integrated consolidation of data from a variety of sources that is specially designed to support strategic and tactical decision making. The main objective of a Data Warehouse is to provide an integrated environment and a coherent picture of the business at a point in time.

Posted 12th December 2011 by Prafull Dangore
0

Add a comment 5.

DEC 12

What is the difference between snow flake and star schema
Scenario: What is the difference between snow flake and star schema
Solution:

Star Schema: The star schema is the simplest data warehouse schema. It contains a fact table surrounded by dimension tables. If the dimensions are denormalized, we say it is a star schema design: each of the dimensions is represented in a single table, and it should not have any hierarchies between dims. In a star schema only one join establishes the relationship between the fact table and any one of the dimension tables. A star schema optimizes performance by keeping queries simple and providing fast response time.

Snow Flake Schema: A snowflake schema is a more complex data warehouse model than a star schema. It also contains a fact table surrounded by dimension tables, but if a dimension is normalized, we say it is a snowflaked design. In a snowflake schema at least one hierarchy should exist between dimension tables. Since there are relationships between the dimension tables, it has to do many joins to fetch the data. Snowflake schemas normalize dimensions to eliminate redundancy; the result is more complex queries and reduced query performance.

Star: It is a de-normalized structure. All the information about each level is stored in one row. It is called a star schema because the diagram resembles a star. Here it does not require many joins to fetch the data.
Snowflake: It is a normalized structure. Since it is a normalized structure, it requires multiple joins to fetch the data. It is called a snowflake schema because the diagram resembles a snowflake.

Posted 12th December 2011 by Prafull Dangore
0

Add a comment 6.

DEC 12

Difference between OLTP and DWH/DS/OLAP
Scenario: Difference between OLTP and DWH/DS/OLAP
Solution:

OLTP maintains only current information; OLAP contains the full history.
OLTP is a volatile system; OLAP is a non-volatile system.
OLTP is a pure relational model; OLAP is a dimensional model.
OLTP is not time variant; OLAP is time variant.
OLTP cannot be used for reporting purposes; OLAP is a pure reporting system.

Posted 12th December 2011 by Prafull Dangore
0

Add a comment 7.

DEC 12

Differences between rowid and rownum

Scenario: Differences between rowid and rownum
Solution:

Rowid: Rowid is an Oracle internal id that is allocated every time a new record is inserted in a table. This id is unique and cannot be changed by the user. It is created at the time the row is inserted into the table, and destroyed when it is removed from the table. Rowid is permanent; it is a globally unique identifier for a row in a database.

Rownum: Rownum is a row number returned by a select statement. The rownum pseudocolumn returns a number indicating the order in which Oracle selects the row from a table or set of joined rows. Rownum is temporary.
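A quick way to see both pseudocolumns side by side; a hedged sketch against an invented emp table:

SELECT ROWID, ROWNUM, ename
FROM emp
WHERE ROWNUM <= 3;
-- ROWID stays fixed for each row; ROWNUM depends on the order rows are returned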

Posted 12th December 2011 by Prafull Dangore
0

Add a comment 9.

DEC 12

Differences between stored procedure and functions
Scenario: Differences between stored procedure and functions
Solution:

A stored procedure may or may not return values; a function should return at least one output parameter, and can return more than one parameter using OUT arguments.
A stored procedure can be used to solve business logic; a function can be used for calculations.
A stored procedure is a pre-compiled statement, but a function is not a pre-compiled statement.
A stored procedure accepts more than one argument, whereas a function does not accept arguments.
Stored procedures are mainly used to process tasks; functions are mainly used to compute values.
A stored procedure cannot be invoked from SQL statements, e.g. SELECT; a function can be invoked from SQL statements, e.g. SELECT.
A stored procedure can affect the state of the database using commit; a function cannot affect the state of the database.
A stored procedure is stored as pseudo-code in the database, i.e. in compiled form; a function is parsed and compiled at runtime.

Posted 12th December 2011 by Prafull Dangore
0

Add a comment 10.

DEC 12

Differences between where clause and having clause
Scenario: Differences between where clause and having clause
Solution:

Both the where and having clauses can be used to filter the data.
The where clause applies to individual rows, whereas the having clause is used to test some condition on the group rather than on individual rows.
Where is used to restrict rows, but having is used to restrict groups.
Restrict a normal query by where; restrict a group by function by having.
In the where clause every record is filtered based on the where condition; in the having clause it is done with aggregate records (group by functions).
With the where clause a group by is not mandatory, but the having clause must be used with a group by.
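A small sketch showing both filters in one query (invented table and columns):

SELECT deptno, SUM(sal) AS total_sal
FROM emp
WHERE job <> 'CLERK'          -- filters individual rows before grouping
GROUP BY deptno
HAVING SUM(sal) > 10000;      -- filters whole groups after aggregation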

Posted 12th December 2011 by Prafull Dangore
0

Add a comment 11.

DEC 12

What is the difference between view and materialized view?
Scenario: What is the difference between view and materialized view?
Solution:

View: A view has a logical existence; it does not contain data. It is not a database object. We cannot perform DML operations on a view. When we do select * from the view, it fetches the data from the base table. In a view we cannot schedule a refresh.

Materialized view: A materialized view has a physical existence. It is a database object. We can perform DML operations on a materialized view. When we do select * from the materialized view, it fetches the data from the materialized view. In a materialized view we can schedule refreshes; it is always necessary to refresh the materialized view. A materialized view can be created based on multiple tables.

Materialized views are very essential for reporting. If we don't have the materialized view, the report will directly fetch the data from the dimensions and facts; this process is very slow since it involves multiple joins. So if we put the same report logic in the materialized view, the report can simply perform a select statement on the materialized view. We can fetch the data directly from the materialized view for reporting purposes, and we can keep aggregated data in the materialized view so that we can avoid multiple joins at report run time.
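A minimal sketch of a reporting materialized view that pre-joins and pre-aggregates (names invented; refresh options as discussed above):

CREATE MATERIALIZED VIEW sales_mv
REFRESH COMPLETE ON DEMAND
AS
SELECT d.region, SUM(f.amount) AS total_amount
FROM sales_fact f, region_dim d
WHERE f.region_key = d.region_key
GROUP BY d.region;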

Posted 12th December 2011 by Prafull Dangore
0

Add a comment 13.

DEC 12

SQL command to kill a session/sid
Scenario: SQL command to kill a session/sid
Solution:

ALTER SYSTEM KILL SESSION 'sid,serial#';
-- e.g., with the values found below: ALTER SYSTEM KILL SESSION '95,2020';

Query to find the SID:
select a.sid, a.machine, b.SQL_TEXT, b.piece
from v$session a, v$sqltext b
where status = 'ACTIVE'
and a.SQL_ADDRESS = b.ADDRESS
--and a.USERNAME = 'NAME' and sid = 95
order by a.sid, b.piece;

Query to find the serial#:
select * from v$session where type = 'USER' and status = 'ACTIVE'; -- to get the serial no

Posted 12th December 2011 by Prafull Dangore
0

Add a comment 14.

DEC 12

SQL command to find the execution timing of a query, like total execution time and time spent so far
Scenario: SQL command to find the execution timing of a query, like total execution time and time spent so far
Solution:

select target, sofar, totalwork, round((sofar/totalwork)*100, 2) pct_done
from v$session_longops
where SID = 95 and serial# = 2020;

Query to find the SID:
select a.sid, a.machine, b.SQL_TEXT, b.piece
from v$session a, v$sqltext b
where status = 'ACTIVE'
and a.SQL_ADDRESS = b.ADDRESS
--and a.USERNAME = 'NAME' and sid = 95
order by a.sid, b.piece;

Query to find the serial#:
select * from v$session where type = 'USER' and status = 'ACTIVE'; -- to get the serial no

Posted 12th December 2011 by Prafull Dangore
0

Add a comment 15.

DEC 12

Query to find the SQL text of a running procedure
Scenario: How to find which query/part of a procedure is running?
Solution:

select a.sid, a.machine, b.SQL_TEXT, b.piece
from v$session a, v$sqltext b
where status = 'ACTIVE'
and a.SQL_ADDRESS = b.ADDRESS
--and a.USERNAME = 'NAME' and sid = 95
order by a.sid, b.piece;

Posted 12th December 2011 by Prafull Dangore
0

Add a comment 16.

17.

DEC 7

Design a mapping to load the first record from a flat file into one table A, the last record from a flat file into table B and the remaining records into table C
Scenario: Design a mapping to load the first record from a flat file into one table A, the last record from a flat file into table B and the remaining records into table C.
Solution: Please follow the below steps:
1. From the source qualifier, pass data to the exp1 transformation and add a variable port V_row_number. We can assign a value to V_row_number in two ways: by using a sequence generator, or by using the below logic in the expression transformation:
V_row_number = V_row_number + 1
O_row_number = V_row_number
2. In one pipeline, send data from the exp1 transformation to a filter where you filter the first row as O_row_number = 1, to table A.
3. Pass all rows from the exp1 transformation to an agg transformation and don't select any column in the group by port. There are two ways to identify the last record; by using max in agg you get O_last_row_number (here, 5).
4. Now send the output of step 3 to an exp2 transformation and add a dummy port with value 1 into the same exp.
5. Now join this exp2 with the very first exp1, so that you will get output like below (Input, O_row_number, O_last_row_number):
a, 1, 5
b, 2, 5
c, 3, 5
d, 4, 5
e, 5, 5
A filter with O_row_number = O_last_row_number will send the last record to table B.
6. Now pass the data to a filter and add the condition O_row_number <> 1 and O_row_number <> O_last_row_number, which loads the remaining records into table C.
So Table A gets a, Table B gets e, and Table C gets b, c, d.

Posted 7th December 2011 by Prafull Dangore
0

Add a comment 18.

DEC 7

Separating duplicate and non-duplicate rows to separate tables
Scenario: Separating duplicate and non-duplicate rows to separate tables
Solution: Please follow the below steps:
1. After the source qualifier, send data to an aggregator transformation and use the count aggregate function.
2. After the aggregator transformation, pass data to a Router in which you create two routes: duplicate, where count > 1, and non-duplicate, where count = 1.
3. Then send the data to the respective target tables.
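The same split can be checked in SQL before building the mapping; a hedged sketch assuming duplicates are judged on an id column of an invented src table:

-- duplicate ids (route 1)
SELECT id, COUNT(*) FROM src GROUP BY id HAVING COUNT(*) > 1;

-- non-duplicate ids (route 2)
SELECT id, COUNT(*) FROM src GROUP BY id HAVING COUNT(*) = 1;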

Posted 7th December 2011 by Prafull Dangore
0

Add a comment 19.

DEC 7

Conversion of columns to rows without using the Normaliser transformation
Scenario: We have a source table containing 3 columns: Col1, Col2 and Col3. There is only 1 row in the table, as follows:
Col1 Col2 Col3
a    b    c
There is a target table containing only 1 column, Col. Design a mapping so that the target table contains 3 rows, as follows:
Col
a
b
c
Without using the Normaliser transformation.
Solution: Please follow the below steps:
1. After the source qualifier, send data to three different Exp transformations: pass Col1 to Exp1, Col2 to Exp2 and Col3 to Exp3.
2. Then pass data from Exp1, Exp2 & Exp3 to 3 instances of the same target table.

Posted 7th December 2011 by Prafull Dangore
0

Add a comment 20.

DEC 7

What are the limitations of the Informatica scheduler?
Scenario: What are the limitations of the Informatica scheduler?
Solution:
1. It will unschedule the workflow if there is a failure in the previous run (the IS removes the workflow from the schedule).
2. If you want to run the workflow based on the success or failure of another application's job, like a mainframe job, you need the help of a third-party tool (like Control-M, Maestro, Tivoli).
Note: You can depend on operating system native schedulers, like Windows Scheduler on Windows or crontab on Unix, or else any third-party scheduling tool, which gives more flexibility in setting times and more control over running the job.

Posted 7th December 2011 by Prafull Dangore
0

Add a comment 22.

DEC 6

Combining Sessions Which Read Data from Different Schemas
Scenario: I have to combine 32 sessions which read data from different (32) schemas and load the target to the same table. Can you please tell me how to read a parameter file which contains the schema connections?
Solution: If you want to parameterize connections, set up the session to use connection variables instead of specific connections, e.g. $DBConnection<Name>. Then specify the value for the connection variable in the parameter file.

Posted 6th December 2011 by Prafull Dangore
0

Add a comment 23.

DEC 6

Command to Run Workflow with Conditions
Scenario: I have to see whether a file has been dropped in a particular location. If the file is dropped, then the workflow should run.
Solution: To run from a Windows server (the file-existence check wraps the pmcmd call):

IF exist E:\softs\Informatica\server\infa_shared\SrcFiles\FILE_NAME*.csv pmcmd startworkflow -sv service -d Dom -u userid -p password wf_workflow_name

Posted 6th December 2011 by Prafull Dangore
0

Add a comment 24.

25.

DEC 6

Informatica Logic Building - select all the distinct regions and apply it to 'ALL'
Scenario: I have a task for which I am not able to find a logic. I have a column 'region' in table 'user'. 1 user can belong to more than 1 region; in total I have 10 regions. The exception is that 1 user has 'ALL' in the region column; it is exception handling. I have to select all the distinct regions and apply them to 'ALL'; the output should have 10 records of that user, corresponding to each region. How can I equate 'ALL' to the 10 regions and get 10 records into the target?
Solution: Please follow the below steps:
1. Use two flows in your mapping; in the first flow, pass all data with Region != 'ALL'.
2. In the second flow, pass the data with Region = 'ALL' to an expression where you create 10 output ports with the values of the 10 regions.
3. Then pass all columns to a Normalizer, and in the Normalizer create an o/p port in which, for the Region port, you set the occurrence to 10.

Posted 6th December 2011 by Prafull Dangore
0

Add a comment 26.

DEC 2

Concatenate the Data of Just the First Column of a Table in One Single Row
Scenario: Concatenate the data of just the first column of a table in one single row.
Solution:
Step 1: Pass Emp_Number to an expression transformation.
Step 2: In the expression transformation, use variable ports (the port order matters: var3 is evaluated after var2, so it carries the previous row's value into the next row):
var1 (variable port) : Emp_Number
var2 (variable port) : IIF(ISNULL(var3), var1, var3 || ' ' || var1)
var3 (variable port) : var2
Step 3: Create an output port out_Emp_Number : var2
Step 4: Pass this port through an aggregator transformation; don't do any group by or aggregation, so the aggregator returns the last row, which holds the full concatenation. Pass the data to the target table.
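If the source is an Oracle table rather than a file, the same single-row concatenation can be pushed to the database; a hedged sketch assuming Oracle 11gR2 or later and an invented emp table:

SELECT LISTAGG(Emp_Number, ' ') WITHIN GROUP (ORDER BY Emp_Number) AS out_Emp_Number
FROM emp;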

Posted 2nd December 2011 by Prafull Dangore
1

View comments 27.

NOV 29

How to pass parameters to Procedures and Functions in PL/SQL?

Parameters in Procedures and Functions: In PL/SQL, we can pass parameters to procedures and functions in three ways.
1) IN type parameter: These types of parameters are used to send values to stored procedures.
2) OUT type parameter: These types of parameters are used to get values from stored procedures. This is similar to a return type in functions.
3) IN OUT parameter: These types of parameters are used to send values to and get values from stored procedures.
NOTE: If a parameter is not explicitly defined as a parameter type, then by default it is an IN type parameter.

1) IN parameter: This is similar to passing parameters in programming languages. We can pass values to the stored procedure through these parameters or variables. This type of parameter is a read-only parameter. We can assign the value of an IN type parameter to a variable or use it in a query, but we cannot change its value inside the procedure. The general syntax to pass an IN parameter is:

CREATE [OR REPLACE] PROCEDURE procedure_name (
   param_name1 IN datatype,
   param_name2 IN datatype ...
)

param_name1, param_name2... are unique parameter names.
datatype defines the datatype of the variable.
IN is optional; by default it is an IN type parameter.

2) OUT Parameter: The OUT parameters are used to send the OUTPUT from a procedure or a function. This is a write-only parameter, i.e. we cannot pass values to OUT parameters while executing the stored procedure, but we can assign values to an OUT parameter inside the stored procedure, and the calling program can receive this output value. The general syntax to create an OUT parameter is:

CREATE [OR REPLACE] PROCEDURE proc2 (param_name OUT datatype)

The parameter should be explicitly declared as an OUT parameter.

By using an IN OUT parameter we can pass values into a parameter and return a value to the calling program using the same parameter. This is possible only if the value passed to the procedure and the output value have the same datatype. This parameter is used when the value of the parameter is to be changed inside the procedure and the changed value is needed back in the calling program. The general syntax to create an IN OUT parameter is:
CREATE [OR REPLACE] PROCEDURE proc3 (param_name IN OUT datatype)

The below examples show how to create stored procedures using the above three types of parameters. Example 1: Using IN and OUT parameters. Let's create a procedure which gets the name of the employee when the employee id is passed.
1> CREATE OR REPLACE PROCEDURE emp_name (id IN NUMBER, emp_name OUT VARCHAR2)
2> IS
3> BEGIN
4>    SELECT first_name INTO emp_name
5>    FROM emp_tbl WHERE empID = id;
6> END;
7> /

We can call the procedure ‘emp_name’ in this way from a PL/SQL Block.
1> DECLARE
2>    empName varchar(20);
3>    CURSOR id_cur IS SELECT id FROM emp_ids;
4> BEGIN
5>    FOR emp_rec IN id_cur
6>    LOOP
7>       emp_name(emp_rec.id, empName);
8>       dbms_output.put_line('The employee ' || empName || ' has id ' || emp_rec.id);
9>    END LOOP;
10> END;
11> /

In the above PL/SQL block: in line no 3, we are creating a cursor 'id_cur' which contains the employee ids. In line no 7, we are calling the procedure 'emp_name', passing 'id' as the IN parameter and 'empName' as the OUT parameter. In line no 8, we are displaying the id and the employee name which we got from the procedure 'emp_name'. Example 2: Using IN OUT parameter in procedures:
1> CREATE OR REPLACE PROCEDURE emp_salary_increase
2> (emp_id IN emp_tbl.empID%type, salary_inout IN OUT emp_tbl.salary%type)
3> IS
4>    tmp_sal number;
5> BEGIN
6>    SELECT salary
7>    INTO tmp_sal
8>    FROM emp_tbl
9>    WHERE empID = emp_id;
10>   IF tmp_sal BETWEEN 10000 AND 20000 THEN
11>      salary_inout := tmp_sal * 1.2;
12>   ELSIF tmp_sal BETWEEN 20000 AND 30000 THEN
13>      salary_inout := tmp_sal * 1.3;

14>   ELSIF tmp_sal > 30000 THEN
15>      salary_inout := tmp_sal * 1.4;
16>   END IF;
17> END;
18> /

The below PL/SQL block shows how to execute the above 'emp_salary_increase' procedure.
1> DECLARE
2>    CURSOR updated_sal IS
3>       SELECT empID, salary
4>       FROM emp_tbl;
5>    pre_sal number;
6> BEGIN
7>    FOR emp_rec IN updated_sal LOOP
8>       pre_sal := emp_rec.salary;
9>       emp_salary_increase(emp_rec.empID, emp_rec.salary);
10>      dbms_output.put_line('The salary of ' || emp_rec.empID ||
11>         ' increased from ' || pre_sal || ' to ' || emp_rec.salary);
12>   END LOOP;
13> END;
14> /

Posted 29th November 2011 by Prafull Dangore
0

Add a comment 28.

29.
NOV

29

Explicit Cursors

Explicit Cursors
An explicit cursor is defined in the declaration section of the PL/SQL Block. It is created on a SELECT Statement which returns more than one row. We can provide a suitable name for the cursor. The General Syntax for creating a cursor is as given below:
CURSOR cursor_name IS select_statement;  

cursor_name – A suitable name for the cursor. select_statement – A select query which returns multiple rows.

How to use Explicit Cursor?
There are four steps in using an Explicit Cursor.

   

· DECLARE the cursor in the declaration section.
· OPEN the cursor in the execution section.
· FETCH the data from the cursor into PL/SQL variables or records in the execution section.
· CLOSE the cursor in the execution section before you end the PL/SQL block.

1) Declaring a Cursor in the Declaration Section:
DECLARE CURSOR emp_cur IS SELECT * FROM emp_tbl WHERE salary > 5000;

In the above example we are creating a cursor 'emp_cur' on a query which returns the records of all the employees with salary greater than 5000. Here 'emp_tbl' is the table which contains the records of all the employees.

2) Accessing the records in the cursor:
Once the cursor is created in the declaration section, we can access it in the execution section of the PL/SQL program.

How to access an Explicit Cursor?
These are the three steps in accessing the cursor:
1) Open the cursor.
2) Fetch the records in the cursor one at a time.
3) Close the cursor.

General syntax to open a cursor is:
OPEN cursor_name;

General Syntax to fetch records from a cursor is:
FETCH cursor_name INTO record_name;

OR
FETCH cursor_name INTO variable_list;

General Syntax to close a cursor is:
CLOSE cursor_name;

When a cursor is opened, the first row becomes the current row. When the data is fetched it is copied to the record or variables, and the logical pointer moves to the next row, which becomes the current row. On every fetch statement the pointer moves to the next row. If you try to fetch after the last row, the program will throw an error. When there is more than one row in a cursor we can use loops along with explicit cursor attributes to fetch all the records.
Points to remember while fetching a row:
· We can fetch the rows in a cursor to a PL/SQL record or a list of variables created in the PL/SQL block.
· If you are fetching a cursor to a PL/SQL record, the record should have the same structure as the cursor.
· If you are fetching a cursor to a list of variables, the variables should be listed in the same order in the fetch statement as the columns are present in the cursor.
General form of using an explicit cursor is:
DECLARE
   variables;
   records;
   create a cursor;
BEGIN
   OPEN cursor;
   FETCH cursor;
   process the records;
   CLOSE cursor;
END;

Let's look at Example 1 below:

1> DECLARE
2>    emp_rec emp_tbl%rowtype;
3>    CURSOR emp_cur IS
4>       SELECT *
5>       FROM emp_tbl
6>       WHERE salary > 5000;
7> BEGIN
8>    OPEN emp_cur;
9>    FETCH emp_cur INTO emp_rec;
10>   dbms_output.put_line(emp_rec.first_name || ' ' || emp_rec.last_name);
11>   CLOSE emp_cur;
12> END;

In the above example: First, we are creating a record 'emp_rec' of the same structure as the table 'emp_tbl' in line no 2 (we can also create a record with a cursor by replacing the table name with the cursor name). Second, we are declaring a cursor 'emp_cur' from a select query in line nos 3 to 6. Third, we are opening the cursor in the execution section in line no 8. Fourth, we are fetching the cursor into the record in line no 9. Fifth, we are displaying the first_name and last_name of the employee in the record emp_rec in line no 10. Sixth, we are closing the cursor in line no 11.

What are Explicit Cursor Attributes?
Oracle provides some attributes known as explicit cursor attributes to control the data processing while using cursors. We use these attributes to avoid errors while accessing cursors through the OPEN, FETCH and CLOSE statements.

When does an error occur while accessing an explicit cursor?
a) When we try to open a cursor which was not closed in the previous operation.
b) When we try to fetch from a cursor after the last row has been fetched.

These are the attributes available to check the status of an explicit cursor:

Attribute   Return values                                              Example
%FOUND      TRUE if the fetch statement returns at least one row;      cursor_name%FOUND
            FALSE if the fetch statement doesn't return a row.
%NOTFOUND   TRUE if the fetch statement doesn't return a row;          cursor_name%NOTFOUND
            FALSE if the fetch statement returns at least one row.
%ROWCOUNT   The number of rows fetched by the fetch statement so far.  cursor_name%ROWCOUNT
%ISOPEN     TRUE if the cursor is already open in the program;         cursor_name%ISOPEN
            FALSE if the cursor is not opened in the program.

Using Loops with Explicit Cursors:
Oracle provides three types of loops, namely SIMPLE LOOP, WHILE LOOP and FOR LOOP. These loops can be used to process multiple rows in the cursor. Here I will modify the same example for each loop to explain how to use loops with cursors.

Cursor with a Simple Loop:

1> DECLARE
2>    CURSOR emp_cur IS
3>       SELECT first_name, last_name, salary FROM emp_tbl;
4>    emp_rec emp_cur%rowtype;
5> BEGIN
6>    IF NOT emp_cur%ISOPEN THEN
7>       OPEN emp_cur;
8>    END IF;
9>    LOOP
10>      FETCH emp_cur INTO emp_rec;
11>      EXIT WHEN emp_cur%NOTFOUND;
12>      dbms_output.put_line(emp_rec.first_name || ' ' || emp_rec.last_name
13>         || ' ' || emp_rec.salary);
14>   END LOOP;
15> END;
16> /

In the above example we are using two cursor attributes, %ISOPEN and %NOTFOUND. In line no 6, we are using the cursor attribute %ISOPEN to check whether the cursor is open; if the condition is true the program does not open the cursor again, it directly moves to line no 9. In line no 11, we are using the cursor attribute %NOTFOUND to check whether the fetch returned any row; a fetch returns no row when you fetch the cursor after the last row, which is when the program exits the loop, otherwise it continues. We can use %FOUND in place of %NOTFOUND and vice versa; if we do so, we need to reverse the logic of the program. So use these attributes in appropriate instances.

Cursor with a While Loop:
Let's modify the above program to use a while loop.

1> DECLARE
2>    CURSOR emp_cur IS
3>       SELECT first_name, last_name, salary FROM emp_tbl;
4>    emp_rec emp_cur%rowtype;
5> BEGIN
6>    IF NOT emp_cur%ISOPEN THEN
7>       OPEN emp_cur;
8>    END IF;
9>    FETCH emp_cur INTO emp_rec;
10>   WHILE emp_cur%FOUND
11>   LOOP
12>      dbms_output.put_line(emp_rec.first_name || ' ' || emp_rec.last_name
13>         || ' ' || emp_rec.salary);
14>      FETCH emp_cur INTO emp_rec;
15>   END LOOP;
16> END;
17> /

In the above example, in line no 10 we are using %FOUND to evaluate whether the first fetch statement in line no 9 returned a row; if true, the program moves into the while loop. In the loop we use the fetch statement again (line no 14) to process the next row. If the fetch statement is not executed once before the while loop, the while condition returns false in the first instance and the while loop is skipped. In the loop, always process the record retrieved by the first fetch statement before fetching the record again, else you will skip the first row.

Cursor with a FOR Loop:
When using a FOR LOOP you need not declare a record or variables to store the cursor values, and you need not open, fetch and close the cursor. These functions are accomplished by the FOR LOOP automatically.

General syntax for using a FOR LOOP:
FOR record_name IN cursor_name
LOOP
   process the row...
END LOOP;

Let's use the above example to learn how to use FOR loops in cursors.

1> DECLARE
2>    CURSOR emp_cur IS
3>       SELECT first_name, last_name, salary FROM emp_tbl;
4> BEGIN
5>    FOR emp_rec IN emp_cur
6>    LOOP
7>       dbms_output.put_line(emp_rec.first_name || ' ' || emp_rec.last_name
8>          || ' ' || emp_rec.salary);
9>    END LOOP;
10> END;
11> /

In the above example, when the FOR loop is processed a record 'emp_rec' of the structure of 'emp_cur' gets created, the cursor is opened, the rows are fetched to the record 'emp_rec', and the cursor is closed after the last row is processed. By using a FOR loop in your program, you can reduce the number of lines in the program.

NOTE: In the examples given above, we are using the backward slash '/' at the end of the program. This indicates to the Oracle engine that the PL/SQL program has ended and it can begin processing the statements.

Posted 29th November 2011 by Prafull Dangore
0
Add a comment
30.

NOV

24

how to check table size in oracle 9i ?
Scenario: How to check table size in Oracle 9i?
Solution:
select segment_name table_name,
       sum(bytes)/(1024*1024) table_size_meg
from   user_extents
where  segment_type = 'TABLE'
and    segment_name = 'TABLE_NAME'
group by segment_name;
Posted 24th November 2011 by Prafull Dangore
0
Add a comment
31.
32.

NOV

1

how to do the incremental loading using mapping variable
Scenario: How to do incremental loading using a mapping variable?
Solution:
Step 1: Create a mapping variable, $$MappingDateVariable, and hard-code as its initial value the date from which you need to extract.
Step 2: In the source qualifier, create a filter to read only rows whose transaction date equals $$MappingDateVariable, such as:
transaction_date = $$MappingDateVariable
(I am assuming you have a column in your source table called transaction date, or some other date column, to help do the incremental load.)
Step 3: In the mapping, use the variable function to set the variable value to increment one day each time the session runs.
Let's say you set the initial value of $$MappingDateVariable as 11/16/2010. The first time the integration service runs the session, it reads only rows dated 11/16/2010; it sets $$MappingDateVariable to 11/17/2010 and saves 11/17/2010 to the repository at the end of the session. The next time it runs the session, it reads only rows from 11/17/2010.
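For reference, the variable function mentioned in Step 3 is typically applied in an expression transformation port; a minimal sketch, with a hypothetical port name and assuming the variable was created with the Max aggregation type so the advanced value is the one saved to the repository:

v_next_run_date = SETVARIABLE($$MappingDateVariable, ADD_TO_DATE($$MappingDateVariable, 'DD', 1))

SETVARIABLE and ADD_TO_DATE are standard PowerCenter expression functions; the post-session value of the variable is what the integration service persists for the next run.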

Posted 1st November 2011 by Prafull Dangore
0
Add a comment
33.

NOV

1

Data type conversion issue
Scenario: We are creating hundreds of passthrough mappings that need to store numeric source data columns as varchar2 string columns in the target Oracle staging tables. (This is to bypass a current iDQ bug where large numbers are presented in profiling results in scientific notation; see KB 117753.) Unfortunately, PowerCenter pads all available significant places with zeros. E.g. a source column (15,2) passing the value 12.3 into a target column of varchar2(20) will populate it with "12.3000000000000". Can this be avoided without manipulating each mapping with an additional expression with a string function and a new output port for each column?
Solution: Enabling high precision ensures the source data type (both scale and precision) stays intact. So if you want to avoid trailing 0's, you can use the TRIM function in the SQ override query.
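One hedged illustration of the SQ override idea, with hypothetical table and column names: if the zero padding comes from PowerCenter's implicit numeric-to-string conversion, doing the conversion in the database instead avoids it, since Oracle's TO_CHAR does not pad decimals:

SELECT emp_id,
       TO_CHAR(salary) AS salary_str  -- Oracle renders 12.3 as '12.3', no trailing zeros
FROM   src_emp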

OCT 21 Need to get the lastest ID Scenario: We have data Source is Oracle database: OLD_ID ---------101 102 103 105 106 Need OLD_ID ---------101 102 103 105 106 108 from source is coming as below the output as NEW_ID ---------102 103 104 106 108 below.You need to provide the targetfile path and the name in the input filename and output filename you can provide the file location and the name you want to have in the target file (final file). .txt > d:\Informaticaroot\TGTfiles\ Posted 1st November 2011 by Prafull Dangore 0 Add a comment 35. Ex: Oracle_emp(source)--> SQ-->Logic-->TGT(emp.txt)(Flatfile) In post session sucess command sed 's/^#//g' d:\Informaticaroot\TGTfiles\ emp.It is not a problem. NEW_ID -----------104 104 104 108 Can anyone help me todo this in informatica.

Solution:
Mapping def: SQ --> Exp1 --> (Exp2 and Agg in parallel) --> Jnr --> TGT
Explanation: In Exp1 you have to maintain the old_id of the previous row in an expression variable and subtract it from the current row's old_id. Add two variable ports, Diff_of_rows and New_id: New_id starts with one, and whenever the current row's Diff_of_rows shows a break in the chain (a difference greater than 1), increment the value of New_id by 1:
OLD_ID    NEW_ID    Diff_of_rows    New_id
101       102       (1)             1
102       103       (102-101)       1
103       104       (103-102)       1
105       106       (105-103)       2
106       108       (106-105)       2
Then in the Agg, group the rows by New_id and return the last NEW_ID of each group:
New_id    NEW_ID
1         104
2         108
Then join the Exp2 o/p (the pass-through rows shown above) with the Agg o/p based on the New_id column, so you will get the required o/p:
OLD_ID    NEW_ID
101       104
102       104
103       104
105       108
106       108
Posted 21st October 2011 by Prafull Dangore
0
Add a comment
36.
37.

OCT

21

Aborting a Session in Informatica 8.6.1
Scenario: I am trying to abort a session in the workflow monitor by using the 'Abort' option, but the status of the session is still being shown as 'Aborting' and has remained the same for the past 4 days. Could anybody let me know the reason behind this, as I couldn't find any info in the log file as well? Last time I coordinated with the Oracle team and updated the OPB table info related to the workflow status, and finally I had to request the UNIX team to kill the process.
Solution:
- When you issue the stop command, the server stops reading data. It continues processing and writing data and committing data to targets. If the server cannot finish processing and committing data, you can issue the ABORT command. It is similar to the stop command, except it has a 60-second timeout. If the server cannot finish processing and committing data within 60 seconds, you need to kill the DTM process to terminate the session. As you said, to kill the process we need to contact the UNIX admin.
- If the session you want to stop is a part of a batch, you must stop the batch. If the batch is part of a nested batch, stop the outermost batch.
Posted 21st October 2011 by Prafull Dangore
0
Add a comment
38.

OCT

20

Condition to Check for NULLS and SPACES in Informatica
Scenario: I have string data and I want to filter out NULLS and SPACES from that set of data. What can be the condition given in Informatica to check for NULLS and SPACES in one expression or filter transformation?
Solution: Use
IIF(ISNULL(column_name) OR LTRIM(RTRIM(column_name)) = '', 0, 1) -- do this in an exp t/n and use this flag in a filter
OR
LENGTH(LTRIM(RTRIM(column_name))) <> 0 in a filter transformation.
Posted 20th October 2011 by Prafull Dangore
0
Add a comment
39.

OCT

20

Combining multiple rows as one based on matching column value of multiple rows
Scenario: I have my source data like below:
ID    Line-no    Text
529   3          DI-9001
529   4          DI-9003
840   2          PR-031
840   2          DI-9001
616   1          PR-029
874   2          DI-9003
874   1          PR-031
959   1          PR-019
Now I want my target to be:
ID    Line-no    Text
529   3          DI-9001
529   4          DI-9003
840   2          PR-031&DI-9001
616   1          PR-029
874   2          DI-9003
874   1          PR-031
959   1          PR-019
It means that if both the ID and the LINE_NO are the same, then the TEXT should concatenate; else no change.
Solution: The mapping flow is like this:
source --> sq --> srttrans --> exptrans --> aggtrans --> target
srttrans --> sort by the ID and Line_no ports in ASC order
exptrans --> use the ports as:
ID (i/o)
Line_no (i/o)
Text (i)
text_v (variable) : IIF(ID = pre_id AND Line_no = pre_line_no, text_v || '&' || Text, Text)
pre_id (v) : ID
pre_line_no (v) : Line_no
Text_op (o) : text_v
aggtrans --> group by ID and Line_no; it will return the last row of each group, whose Text holds the full concatenation. Then pass it to the target.
Posted 20th October 2011 by Prafull Dangore
0
Add a comment
40.
41.

OCT

20

How to Read the Data between 2 Single Quotes
Scenario:

I have a record like this:
field1 = "id='102',name='yty.wskjd',city='eytw'"
If I split the record based on the comma (,), the result won't come as expected, as there is a comma (,) in the value of name. I need to store the value of field1 into different fields:
value1 = id = '102'
value2 = name = 'yty.wskjd'
value3 = city = 'eytw'
i.e., if a comma comes in between two single quotes then we have to suppress the comma (,). Note: sometimes data might come as [id, name, city] or sometimes it might come as [code, id]; it varies. I gave a try with different inbuilt functions but couldn't make it. Is there a way where we can achieve the solution in an easier way, i.e. is there a way to read the data in between 2 single quotes?
Solution: Please try the below solution; it may help you to some extent.
field1 = "id='102',name='yty.wskjd',city='eytw'"
Steps:
1. v_1 = Replace(field1, '"', '') --i.e. no space --O/P: id='102',name='yty.wskjd',city='eytw'
2. v_2 = substring-after(v_1, id=) --O/P: '102',name='yty.wskjd',city='eytw'
3. v_3 = substring-after(v_1, name=) --O/P: 'yty.wskjd',city='eytw'
4. v_4 = substring-after(v_1, city=) --O/P: 'eytw'
5. v_5 = substring-before(v_2, ',') --O/P: '102'
6. v_6 = substring-before(v_3, ',') --O/P: 'yty.wskjd'
7. value1 = replace(v_5, "'", '') --O/P: 102
8. value2 = replace(v_6, "'", '') --O/P: yty.wskjd
9. value3 = v_4 --O/P: 'eytw'
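If the parsing can be pushed to the database instead, here is a sketch using Oracle regular expressions, assuming Oracle 11g or later (where REGEXP_SUBSTR accepts a subexpression argument): each quoted value is simply the Nth occurrence of text between single quotes, which also sidesteps the comma-inside-quotes problem; src_table and field1 are hypothetical names:

SELECT REGEXP_SUBSTR(field1, '''([^'']*)''', 1, 1, NULL, 1) AS value1,
       REGEXP_SUBSTR(field1, '''([^'']*)''', 1, 2, NULL, 1) AS value2,
       REGEXP_SUBSTR(field1, '''([^'']*)''', 1, 3, NULL, 1) AS value3
FROM   src_table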

Posted 20th October 2011 by Prafull Dangore
0
Add a comment
42.

OCT

20

How to use Lookup Transformation for Incremental data loading in target table?
Scenario: How to load new data from one table to another by comparing both the source and the target? The first time, it has to load all the data from source to target; if I run the mapping the second day and there is any new record in the source table, only that record must be loaded to the target. For e.g.: I have done a mapping from a source table (which contains bank details) to a target table. For the first time I will load all the data from source to target; for the second or third time, I need to get only the data which is newly entered in the source table. How to use the lookup transformation for this issue?
Solution:
1) In the mapping, create a lookup on the target table and select dynamic lookup cache in the properties tab. Once you check it you can see the NewLookupRow column in the lookup ports, through which you can identify whether incoming rows are new or existing (the NewLookupRow values are summarized in the note after this solution). Also in the lookup ports, you can use the associate port to compare specific/all columns of the target table lookup with the source columns. So after the lookup you can use a router to insert or update rows in the target table.
2) If there is any primary key column in the target table, then we can create a lookup on the target table and match the TGT primary key with the source primary key. It's a connected lookup where you send source rows to the lookup as input and/or output ports, with the lookup ports as output and lookup. In the lookup, match the ID column from src with the ID column in the target: the lookup will return the IDs if they are available in the target, else it will return null values. If the lookup finds a match then ignore those records; if there is no match then insert those records into the target. The logic should be as below:
SQ --> LKP --> FILTER --> TGT
In the filter, allow only the null ID values which are returned from the lookup.
3) If you have any datestamp in the source table, then you can pull only the newly inserted records from the source table based on the timestamp (this approach is applicable only if the source table has a lastmodifieddate column).
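A small note on option 1 above, with the values as documented for PowerCenter dynamic lookups: the NewLookupRow port returns 0 when the cache is unchanged, 1 when the row was inserted into the cache, and 2 when the row was updated in it, so the router groups are typically defined as:

Insert group: NewLookupRow = 1
Update group: NewLookupRow = 2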

Posted 20th October 2011 by Prafull Dangore
0
Add a comment
43.

OCT

20

Is it possible to use a parameter to specify the 'Table Name Prefix'?
Scenario: Is it possible to use a parameter to specify the 'Table Name Prefix'?
Solution: Yes, you can use a parameter for specifying the table name prefix. Say you have table x with different table name prefixes like p1.x and p2.x; you can load into the tables separately by specifying the tablenameprefix value in the parameter file. All you need to do is create a workflow variable, assign a value to the variable in the param file, and use that variable in the table prefix property.
Posted 20th October 2011 by Prafull Dangore
0
Add a comment
44.
45.

OCT

19

Informatica PowerCenter performance - Lookups

Informatica PowerCenter performance - Lookups

What is a lookup transformation? It is just not another transformation which fetches you data to look up against the source data. Lookup is an important and a useful transformation when used effectively; it is also a transformation that, when used improperly, makes your flow run for ages. I will now try to explain different scenarios where you can face problems with lookups and also how to tackle them.

Unwanted columns: By default, when you create a lookup on a table, PowerCenter gives you all the columns in the table. You only need the columns that are to be used in the lookup condition and the ones that have to get returned from the lookup, so be sure to delete the unwanted columns from the lookup, as they affect the lookup cache very much.

Size of the source versus size of lookup: Let us say you have 10 rows in the source and one of the columns has to be checked against a big table (1 million rows). Then PowerCenter builds the cache for the lookup table and checks the 10 source rows against the cache. It takes more time to build the cache of 1 million rows than to go to the database 10 times and look up against the table directly. Use an uncached lookup instead of building the static cache, as the number of source rows is quite less than that of the lookup.

Conditional call of lookup: Instead of going for connected lookups with filters for a conditional lookup call, go for an unconnected lookup. Is the single column return bothering you for this? Go ahead and change the SQL override to concatenate the required columns into one big column, and break them at the calling side into individual columns again.

JOIN instead of Lookup: In the same context as above, if the lookup transformation is after the source qualifier and there is no active transformation in-between, and both the tables are in the same database and schema, you can as well go for the SQL override of the source qualifier and join traditionally to the lookup table using database joins.

SQL query: We will start from the database. Find the execution plan of the SQL override and see if you can add some indexes or hints to the query to make it fetch data faster. You may have to take the help of a database developer to accomplish this if you yourself are not an SQLer.

Increase cache: If none of the above seems to be working, then the problem is certainly with the cache. The cache that you assigned for the lookup is not sufficient to hold the data or index of the lookup. Whatever data doesn't fit into the cache is spilt into the cache files designated in $PMCacheDir. When PowerCenter doesn't find the data you are looking up in the cache, it swaps the data from the file to the cache and keeps doing this until it finds the data. This is quite expensive for obvious reasons, being an I/O operation. Increase the cache so that the whole data resides in memory.

What if your data is huge and your whole system cache is less than that? Don't promise PowerCenter an amount of cache that can't be allotted during the runtime. If you promise 10 MB and during runtime your system, on which the flow is running, runs out of cache and can only assign 5 MB, then PowerCenter fails the session with an error.

Cachefile file-system: In many cases, if you have the cache directory in a different file-system than that of the hosting server, the cache file piling up may take time and result in latency. So with the help of your system administrator try to look into this aspect as well.

Useful cache utilities: If the same lookup SQL is being used in some other lookup, then you have to go for a shared cache or reuse the lookup. Also, if you have a table that doesn't get data updated or inserted quite often, then use the persistent cache, because the consecutive runs of the flow don't have to build the cache and waste time.

Posted 19th October 2011 by Prafull Dangore
0
Add a comment
46.

OCT

19

PowerCenter objects – Introduction

PowerCenter objects – Introduction
• A repository is the highest physical entity of a project in PowerCenter.
• A folder is a logical entity in a PowerCenter project. For example, Customer_Data is a folder.
• A workflow is synonymous to a set of programs in any other programming language.
• A mapping is a single program unit that holds the logical mapping between source and target with the required transformations. A mapping will just say a source table by name EMP exists with some structure, and a target flat file by name EMP_FF exists with some structure. The mapping doesn't say in which schema this EMP table exists and in which physical location this EMP_FF file is going to be stored.
• A session is the physical representation of the mapping. The session defines what the mapping didn't: the session stores the information about where this EMP table comes from, i.e. which schema, and with what username and password we can access this table in that schema. It also tells about the target flat file: in which physical location the file is going to get created.
• A transformation is a sub-program that performs a specific task with the input it gets and returns some output. It can be thought of as a stored procedure in any database. Typical examples of transformations are Filter, Lookup, Aggregator, Sorter, etc.
• A set of transformations that are reusable can be built into something called a mapplet. A mapplet is a set of transformations aligned in a specific order of execution.

Posted 19th October 2011 by Prafull Dangore
0
Add a comment
47.

OCT

19

Dynamically generate parameter files
Scenario: Dynamically generate parameter files.
Solution:
As with any other tool or programming language, PowerCenter allows parameters to be passed to have flexibility built into the flow. Parameters are always passed as data in flat files to PowerCenter, and that file is called the parameter file.
Parameter file format for PowerCenter:
For a workflow parameter, which can be used by any session in the workflow, below is the format in which the parameter file has to be created:
[Folder_name.WF:Workflow_Name]
$$parameter_name1=value
$$parameter_name2=value
For a session parameter, which can be used by that particular session, below is the format in which the parameter file has to be created:
[Folder_name.WF:Workflow_Name.ST:Session_Name]
$$parameter_name1=value
$$parameter_name2=value
Parameter handling in a data model:
To have flexibility in maintaining the parameter files, to reduce the overhead for the support team of changing the parameter file every time the value of a parameter changes, and to ease deployment, all the parameters have to be maintained in Oracle (or any database) tables, and a PowerCenter session is created to generate the parameter file in the required format automatically.
For this, 4 tables are to be created in the database:
1. FOLDER table will have entries for each folder.
2. WORKFLOWS table will have the list of each workflow, but with a reference to the FOLDER table to say which folder this workflow is created in.
3. PARAMETERS table will hold all the parameter names irrespective of folder/workflow.
4. PARAMETER_VALUES table will hold the parameter of each session, with references to the PARAMETERS table for the parameter name and the WORKFLOWS table for the workflow name. When the session name is NULL, that means the parameter is a workflow variable which can be used across all the sessions in the workflow.
To get the actual names, because PARAMETER_VALUES holds only the ID columns of workflow and parameter, we create a view that gets all the names for us in the required format of the parameter file. Below is the DDL for the view.

a. Parameter file view:
CREATE OR REPLACE VIEW PARAMETER_FILE
(
   HEADER,
   DETAIL
)
AS
select '[' || fol.folder_name || '.WF:' || wfw.workflow_name || ']' header,
       '$$' || pmr.parameter_name ||
       nvl2(dtl.logical_name, '_' || dtl.logical_name, NULL) || '=' || dtl.value detail
from   folder fol,
       workflows wfw,
       parameters pmr,
       parameter_values dtl
where  fol.id = wfw.folder_id
and    dtl.wfw_id = wfw.id
and    dtl.pmr_id = pmr.id
and    dtl.session_name is null
UNION
select '[' || fol.folder_name || '.WF:' || wfw.workflow_name || '.ST:' || dtl.session_name || ']' header,
       '$$' || decode(dtl.mapplet_name, NULL, NULL, dtl.mapplet_name || '.') || pmr.parameter_name ||
       nvl2(dtl.logical_name, '_' || dtl.logical_name, NULL) || '=' || dtl.value detail
from   folder fol,
       workflows wfw,
       parameters pmr,
       parameter_values dtl
where  fol.id = wfw.folder_id
and    dtl.wfw_id = wfw.id
and    dtl.pmr_id = pmr.id
and    dtl.session_name is not null;

b. FOLDER table
   ID (NUMBER)
   FOLDER_NAME (varchar50)
   DESCRIPTION (varchar50)
c. WORKFLOWS table
   ID (NUMBER)
   WORKFLOW_NAME (varchar50)
   FOLDER_ID (NUMBER) Foreign Key to FOLDER.ID
   DESCRIPTION (varchar50)
d. PARAMETERS table
   ID (NUMBER)
   PARAMETER_NAME (varchar50)
   DESCRIPTION (varchar50)
e. PARAMETER_VALUES table
   ID (NUMBER)
   WF_ID (NUMBER)
   PMR_ID (NUMBER)
   LOGICAL_NAME (varchar50)
   VALUE (varchar50)
   SESSION_NAME (varchar50)
• LOGICAL_NAME is a normalization initiative in the above parameter logic. For example, if in a mapping we need to use $$SOURCE_FX as one parameter and $$SOURCE_TRANS as another mapping parameter, then instead of creating 2 different parameters in the PARAMETERS table, we create one parameter $$SOURCE; FX and TRANS will then be two LOGICAL_NAME records of the PARAMETER_VALUES table.
• m_PARAMETER_FILE is the mapping that creates the parameter file in the desired format, and the corresponding session name is s_m_PARAMETER_FILE.
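To make the view's output concrete, a purely hypothetical content example: if FOLDER holds 'FIN', WORKFLOWS holds 'wf_load_fin', PARAMETERS holds 'SOURCE', and PARAMETER_VALUES holds logical_name 'FX' with value 'Conn_FX' and a NULL session_name, the first branch of the view would emit:

HEADER: [FIN.WF:wf_load_fin]
DETAIL: $$SOURCE_FX=Conn_FX

which is exactly one header line and one detail line of the generated parameter file.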

Posted 19th October 2011 by Prafull Dangore
0
Add a comment
48.
49.

OCT

19

How to generate target file names (like YYYYMMDDHH24:MISS.csv) dynamically from the mapping?
Scenario: How to generate target file names (like YYYYMMDDHH24:MISS.csv) dynamically from the mapping?
Solution: In order to generate the target file names from the mapping, we should make use of the special "FileName" port in the target file. You can't create this special port from the usual New port button; there is a special button with the label "F" on it at the right-most corner of the target flat file when viewed in the "Target Designer". Below, two screen-shots tell you how to create the special port in your target file.

We'll come to the mapping again. When you want to create the file name with a timestamp attached to it, just use a port from an Expression transformation before the target to pass a value into the FileName port, with an output port expression of:
$$FILE_NAME || to_char(sessstarttime, 'YYYYMMDDHH24:MISS') || '.csv'
Please note that $$FILE_NAME is a parameter to the mapping, and I've used sessstarttime because it is constant throughout the session run. If you use sysdate, it will change if you have 100s of millions of records and the session runs for an hour: a new file gets created with the current value of the port whenever the port value which maps to the FileName port changes, so each second a new file would get created. Once this is done, the job is done.
This mapping generates two files. One is a dummy file with zero bytes size, whose file name is what is given in the session properties under the 'Mappings' tab for the target file name. The other file is the actual file created with the desired file name and data.
Posted 19th October 2011 by Prafull Dangore
0
Add a comment
50.

OCT

18

To find the inserted, deleted and updated rows count
Scenario: I want to find the no. of rows inserted, updated and deleted on the successful execution of a session. These details are present only in the session log file, so how to grep these details from the log file? Or is there any other method? I actually have to insert these details into a table. The other details which I have to include in the table are session name, session start time, session end time, and target table name. Thus:
Session_name  Tgt_Table_name  Start_time  End_time  Inserted_count  Deleted_count  Updated_count
Solution: Hey, you will get this info through the INFA metadata tables; my table structure is
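(The original post showed the table structure as an image, which is not reproduced here.) One way to query such counts, as a sketch only: if the repository reporting (MX) views are installed, the REP_SESS_LOG view exposes per-run session statistics. Exact column names can vary across PowerCenter versions, so treat these as assumptions to verify against your repository:

SELECT session_name,
       actual_start,
       session_timestamp,
       successful_rows,
       failed_rows
FROM   rep_sess_log
WHERE  session_name = 's_m_my_session'  -- hypothetical session name
ORDER BY actual_start DESC;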

Posted 18th October 2011 by Prafull Dangore
0
Add a comment
51.

OCT

18

How to load first, last, remaining records into different targets when source is a flat file?
Scenario: How to load the first, last, and remaining records into different targets when the source is a flat file?
Solution: Below is the approach. If you are using a sequence generator and an aggregator, then the mapping flow should be like below:
               --> AGG (count) -->
SRC --> SQ -->                     JNR --> RTR --> TGT1 / TGT2 / TGT3
               --> EXP + SEQ   -->
In the router: if the seq value = 1, that record goes to target 1; if the seq value equals the aggregator count output, that is the last record, so it goes to target 3; all the remaining records pass to target 2.
For a SQL query to get the first, last and remaining records, try the below.
First record:
select * from emp where rownum = 1;
Last record:
select * from (select * from (select empno, ename, job, mgr, sal, rownum rn from emp) order by rn DESC) where rownum = 1;
For the remaining records you can use a MINUS with the above outputs.
Posted 18th October 2011 by Prafull Dangore
0
Add a comment
52.

OCT

18

Email date in subject line
Scenario: Is there a way to add the sysdate to the email subject sent from Informatica? I am running a mapping which creates an error file, and I am sending this error at the end of the process via an email. But the requirement is to send it with some text as an error report and the sysdate in the subject line.

Solution: Create a workflow variable $$Datestamp as a datetime datatype. In an assignment task assign the sysdate to that variable, and in the email subject use the $$Datestamp variable; it will send the timestamp in the subject.
Posted 18th October 2011 by Prafull Dangore
0
Add a comment
53.

OCT

18

Route records as UNIQUE AND DUPLICATE
Scenario: I have a SRC table as:
A
B
C
C
B
D
B
I have 2 TGT tables, UNIQUE and DUPLICATE. The first table should contain the following output:
A
D
The second target should contain the following output:
B
B
B
C
C
How do I do this?
Solution: Try the following approach:
SRC --> SQ --> AGG --> JNR --> RTR --> TGT1
                                       TGT2
From the source pass all the data to the aggregator, group by the source column, and create one output port count(column).

So from the agg you have two output ports; the output will be like below:
COLUMN, COUNT
A, 1
B, 3
C, 2
D, 1
Now join this data back with the source based on the column; the output of the joiner will be like below:
COLUMN, COUNT
A, 1
B, 3
B, 3
B, 3
C, 2
C, 2
D, 1
In the router create two groups, one for unique and another one for duplicate:
Unique = (count = 1)
Duplicate = (count > 1)
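For comparison, the same unique-versus-duplicate split can be expressed in SQL; a sketch, assuming a single-column table src(col):

-- unique values
SELECT col FROM src GROUP BY col HAVING COUNT(*) = 1;

-- duplicate values, repeated as in the source
SELECT s.col
FROM   src s
JOIN   (SELECT col FROM src GROUP BY col HAVING COUNT(*) > 1) d
ON     s.col = d.col;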

Posted 18th October 2011 by Prafull Dangore
0
Add a comment
54.
55.

OCT

18

Informatica Source Qualifier (Inner Joins)
Scenario: I have 3 tables:
ENO   ENAM   HIREDATE
001   XXX    MAY/25/2009
002   JJJJ   OCT/12/2010
008   KKK    JAN/02/2011
006   HJJH   AUG/12/2012

ENO   S-ID
001   OO
002   OO
007   OO

ENO   V-ID
006   DD
008   DD
001   DD
Using the Informatica source qualifier or other transformations, I should be able to club the above tables in such a way that if HIREDATE > JAN/01/2011 then ENO should select V-ID, and if HIREDATE < JAN/01/2011 then ENO should select S-ID, and make a target table leaving the other ID column blank based on the condition (it should have either S-ID or V-ID but not both):
ENO   ENAM   HIREDATE   S-ID   V-ID
Please give me the best advice for the following situation.
Solution: Better you do it in the source qualifier SQL query with a case statement, along these lines:
select a.eno, a.enam, a.hiredate,
       CASE WHEN a.hiredate < TO_DATE('01/01/2011','MM/DD/YYYY') THEN b.s_id ELSE NULL END s_id,
       CASE WHEN a.hiredate > TO_DATE('01/01/2011','MM/DD/YYYY') THEN c.v_id ELSE NULL END v_id
from   table1 a, table2 b, table3 c
where  a.eno = b.eno
and    a.eno = c.eno;
OR
The second table and the third table can be used as lookups. In an expression:
s_id = IIF(HIREDATE < JAN/01/2011, lkp_2nd_tbl, NULL)
v_id = IIF(HIREDATE > JAN/01/2011, lkp_3rd_table, NULL)
Posted 18th October 2011 by Prafull Dangore
0
Add a comment
56.
57.

OCT

18

CMN_1650 A duplicate row was attempted to be inserted into a dynamic lookup cache
Scenario: Dynamic lookup error. I have 2 ports going through a dynamic lookup, and then to a router. In the router it is a simple case of inserting new target rows (NewLookupRow=1) or rejecting existing rows (NewLookupRow=0). However, when I run the session I'm getting the error:
"CMN_1650 A duplicate row was attempted to be inserted into a dynamic lookup cache"

The session fails with this error whilst investigating the initial error that is logged for a specific pair of values from the source: the pair exists on the target, so surely it should just return from the dynamic lookup with NewLookupRow=0. I thought that I was bringing through duplicate values, so I put a distinct on the SQ; however, even though there is only 1 set of them (no duplicates), the session still fails. There is also a not-null filter on both ports. Is this some kind of persistent data in the cache that is causing it to think that this is duplicate data? I haven't got the persistent cache or recache-from-database flags checked.
Solution: This occurs when the table on which the lookup is built has duplicate rows. The dynamic lookup cache only supports unique condition keys, and a dynamic cached lookup cannot be created with duplicate rows, so make sure there are no duplicate rows in the table before starting the session.
OR
Do a SELECT DISTINCT in the lookup cache SQL.
OR
Make sure the data types of the source and lookup fields match and extra spaces are trimmed; it looks like the match is failing between src and lkp, so the lookup is trying to insert the row into the cache even though it is present already.
Posted 18th October 2011 by Prafull Dangore
0
Add a comment
58.

OCT

18

Update Strategy for Deleting Records in Informatica
Scenario: I am using an update strategy transformation for deleting records from my target table. In my Warehouse Designer, I have defined one column (say col1) as Primary Key and another column (say col2) as Primary/Foreign Key. My target has rows like this:

Col1   Col2   Col3
1      A      value1
2      A      value2
3      B      value3
I want to delete the record from the target which has the combination (Col1="2" and Col2="A"). Will linking the fields Col1 and Col2 from the Update Strategy transformation to the target serve the purpose?
Solution: Define both the columns as primary keys in the target definition and link only col1 and col2 in the mapping. This will serve your purpose. BTW, if you only do deletes then an update strategy is not required at all.
Posted 18th October 2011 by Prafull Dangore
0
Add a comment
59.

OCT

18

Target Rows as Update or Insert Option in the Target
Scenario: When you have the option to treat target rows as update or insert in the target, why do you need a lookup transformation? I mean, why do you need a lookup transformation with an update strategy in a mapping to mark the records for update or insert when you have the update-else-insert option in the target? Is there any difference between both? Can someone please let me know what the difference is and when to use which option?
Solution: In slowly growing targets (delta loads), the target is loaded incrementally. A lookup is used to cache the target records and compare the incoming records with the records in the target. You need to know whether a particular record already exists in the target: if the incoming record is new, it will be inserted into the target; otherwise not.
In Slowly Changing Dimensions (SCD), the history of the dimension needs to be maintained. An expression is used to flag a record as either new or existing: if it is a new record, it is flagged 'I' in the sense of insert; if the record already exists in the target and needs an update, it is flagged 'U' in the sense of update.
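A minimal sketch of the flagging expression described above, with hypothetical port names: with a lookup returning the target key, the expression port could be

flag = IIF(ISNULL(lkp_target_key), 'I', 'U')

followed by an update strategy that maps 'I' to DD_INSERT and 'U' to DD_UPDATE.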

Posted 18th October 2011 by Prafull Dangore
0
Add a comment
60.
61.

OCT

17

How to Fill Missing Sequence Numbers in Surrogate Key Column in Informatica
Scenario: Hello all. I am new to working with surrogate key columns in a database. Recently I developed a workflow/mapping that populates an SCD table with a surrogate key column. For each record that is inserted, I created logic in an expression t/r such that it generates a new sequence number. This seems fine and works OK. Now, after testing the ETL process for over 15 days, I find a lot of gaps in the surrogate key column. We have a purge logic that runs every day in post-SQL that deletes records that have not been updated for the last 10 days; the gaps are due to this reason. Is there a way/logic in Informatica with which I can fill these gaps while loading the target and create a new sequence number only if there are no gaps? Or can this be done at database level? I searched over the Internet but did not find any solution whatsoever. Please advise.
Solution: Hello, if you can make a few changes to your mapping you can achieve it:
1. First delete the records which have not been used for the last 10 days in pre-SQL, instead of deleting them at the end.
2. Load all the data into a temp table, including old and new.
3. Now load all the data into the target table with a sequence generator; in the SG change the setting so that its value resets to 0 for every new run.
Posted 17th October 2011 by Prafull Dangore
0
Add a comment
62.

OCT

17

Surrogate Key
Scenario: What is a surrogate key and where do you use it?
Solution: A surrogate key is a substitution for the natural primary key (also known as an artificial or identity key). It is just a unique identifier or number for each row that can be used as the primary key of the table. It is useful because the natural primary key (i.e. Customer Number in a Customer table) can change, and this makes updates more difficult.
Some tables have columns such as AIRPORT_NAME or CITY_NAME which are stated as the primary keys (according to the business users), but not only can these change, indexing on a numerical value is probably better, and you could consider creating a surrogate key called, say, AIRPORT_ID. This would be internal to the system, and as far as the client is concerned you may display only the AIRPORT_NAME.
Data warehouses typically use surrogate keys for the dimension tables' primary keys. They can use an Infa sequence generator, an Oracle sequence, or SQL Server identity values for the surrogate key. The only requirement for a surrogate primary key is that it is unique for each row in the table.
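As a small illustration of the Oracle-sequence option for surrogate keys (object names hypothetical):

CREATE SEQUENCE airport_sid_seq START WITH 1 INCREMENT BY 1 CACHE 100;

INSERT INTO airport_dim (airport_id, airport_name)
VALUES (airport_sid_seq.NEXTVAL, 'Heathrow');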

Posted 17th October 2011 by Prafull Dangore
0
Add a comment
63.

OCT

14

Unique Constraint Violated
Scenario: Database errors occurred:
ORA-00001: unique constraint (INF_PRACTICE1.SYS_C00163872) violated
Database driver error...
Function Name : Execute
SQL Stmt : INSERT INTO D_CLAIM_INJURY_SAMPLEE(CK_SUM, CLAIM_INJRY_SID, INCDT_ID, DM_ROW_PRCS_DT, DM_CRRNT_ROW_IND, DM_ROW_PRCS_UPDT_DT, JOB, ENAME, FIRSTNAME, LASTNAME) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
Database driver error...
Function Name : Execute Multiple
SQL Stmt : INSERT INTO D_CLAIM_INJURY_SAMPLEE(CK_SUM, CLAIM_INJRY_SID, INCDT_ID, DM_ROW_PRCS_DT, DM_CRRNT_ROW_IND, DM_ROW_PRCS_UPDT_DT, JOB, ENAME, FIRSTNAME, LASTNAME) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
Solution: Check the definition of the unique index columns, and then run the below query on the source to find out the duplicate rows. If the index definition is like
create index idx on targettable(col1, col2, col3)
then:
select col1, col2, col3, count(1)
from   sourcetable
group by col1, col2, col3
having count(1) > 1;
Either you have to delete those records from the source, or use an aggregator in the Informatica mapping.
Posted 14th October 2011 by Prafull Dangore
0
Add a comment
64.
65.

OCT

13

Validating Multiple Sessions in Informatica
Scenario: Is there any way of validating multiple workflows and their respective sessions at the same time in Informatica? Validating them separately is tedious.
Solution: The best approach is to create worklets instead of workflows, put a set of sessions in each worklet, and then call all those worklets in a single workflow. After doing this, you can validate the workflow which contains the multiple worklets to validate multiple sessions.
Posted 13th October 2011 by Prafull Dangore

0
Add a comment
66.

OCT

13

Informatica Workflow Execution based on Conditions
Scenario: I have a table which contains a single row which has a column ABC. The value of ABC defines different scenarios. For example, if the value of ABC is say 1, the 1st workflow should be executed; if 2, the 2nd workflow should be executed, and so on.
Solution: If there are few values (1, 2, 3) for ABC, then we can have a filter in the mapping having the source table with column ABC. Filter the records with the conditions ABC=1, ABC=2, ABC=3 and load the target tables in three different mappings. Create three different sessions and then use a decision task at the workflow level, as:
If tgtsuccessrows=1 for session1 then run worklet1
If tgtsuccessrows=1 for session2 then run worklet2
If tgtsuccessrows=1 for session3 then run worklet3
Posted 13th October 2011 by Prafull Dangore
0
Add a comment
67.

OCT

13

Finding Objects in Checkout Status
Scenario: So, does anyone know of a way to find what objects are in checkout status and who has them checked out?
Solution: Under the Repository database, there must be folders that you have created. Open a folder, then right-click and go to Versioning -> Find Checkouts -> All Users. It will show you details such as last check-out, saved-by, last-saved time, etc. This will show the history of changes made and saved on that particular code.
Posted 13th October 2011 by Prafull Dangore
0
Add a comment
68.

OCT

13

Convert date as per specific region
Scenario: Convert date as per specific region.
Solution: Specifying an NLS parameter for an SQL function means that any user session NLS parameters (or the lack of them) will not affect evaluation of the function. This feature may be important for SQL statements that contain numbers and dates as string literals. For example, the following query is evaluated correctly only if the language specified for dates is American:
SELECT ENAME FROM EMP WHERE HIREDATE > '1-JAN-01'
This can be made independent of the current date language by specifying NLS_DATE_LANGUAGE:
SELECT ENAME FROM EMP WHERE HIREDATE > TO_DATE('1-JAN-01', 'DD-MON-YY', 'NLS_DATE_LANGUAGE = AMERICAN')
Using all numerics is also language-independent:
SELECT ENAME FROM EMP WHERE HIREDATE > TO_DATE('1-01-01', 'DD-MM-YY')
NLS settings include character set, language and territory. Common character sets:
WE8ISO8859P15   European English, includes the euro character
US7ASCII        American English
The DATE datatype always stores a four-digit year internally. If you use the standard date format DD-MON-YY, YY will assume a year in the range 1900-1999; it is strongly recommended you apply a specific format mask.
Posted 13th October 2011 by Prafull Dangore

0
Add a comment
69.

OCT

13

Compare the Total Number of Rows in a Flat File with the Footer of the Flat File
Scenario: I have a requirement where I need to find the number of rows in the flat file and then compare the row count with the row count mentioned in the footer of the flat file.
Solution:
Using Informatica: I believe you can identify the data records from the trailer record. You can use the following method to identify the count of the records:
1. Use a router to create two data streams, one for data records and the other for the trailer record.
2. Use an aggregator (without defining any group key) and use the count() aggregate function; now both data streams will have a single record.
3. Use a joiner to get one record from these two data streams; it will give you two different count ports in a single record.
4. Use an expression to compare the counts and proceed as per your rules.
Using UNIX: If you are on Unix, then go for a couple of lines of script or commands. Count the number of lines in the file by wc -l, and assign the count to variable x = (wc -l) - 1, i.e. neglecting the footer record. Grep the number of records from the footer using grep/sed and assign it to variable y. Now equate both these variables and take the decision.
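A minimal shell sketch of the UNIX approach, assuming the footer is the file's last line and carries the record count in its last pipe-delimited field (adjust the awk extraction to the real footer layout):

#!/bin/ksh
x=$(( $(wc -l < datafile.txt) - 1 ))                 # data rows, footer excluded
y=$(tail -1 datafile.txt | awk -F'|' '{print $NF}')  # count stated in the footer
if [ "$x" -eq "$y" ]; then
   echo "counts match: $x"
else
   echo "count mismatch: file has $x rows, footer says $y"
fi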

Posted 13th October 2011 by Prafull Dangore
0
Add a comment
70.
71.

OCT

11

Loading Multiple Flat Files using one mapping
Scenario: Can anyone explain how we can load multiple flat files using one mapping?
Solution: Use the Indirect option in the session properties and give a file list name. In the file list you can have the actual file names with their complete paths.
Ex: In Session Properties: SourceFileType -- Indirect, and File Name: ABC.txt
ABC.txt will contain all the input file names with complete paths:
/home/.../filename1.dat
/home/.../filename.dat
Posted 11th October 2011 by Prafull Dangore
0
Add a comment
72.

OCT

11

Capture filename while using indirect file
Scenario: I have 5 source files which I am planning to load using an indirect file, as they are all of the same format and go to the same target table. One requirement is to capture the source file name in the target. The filename column is there only for file targets, not for file sources. Is there any simple way to achieve this?
Solution:
Sol 1. Effective with PowerCenter 8.5 there is an option called Add Currently Processed Flat File Name Port. If this flat file source option is selected, the file name port will be added to the ports of the source. For previous versions, a shell script or batch file can be used in a pre-session command task.
To add the CurrentlyProcessedFileName port:
1. Open the flat file source definition in the Source Analyzer.
2. Click the Properties tab.
3. Select Add Currently Processed Flat File Name Port. The Designer adds the CurrentlyProcessedFileName port as the last column on the Columns tab. The CurrentlyProcessedFileName port is a string port with a default precision of 256 characters; you may change the precision of the CurrentlyProcessedFileName port if you wish.
4. Click the Columns tab to see your changes.
5. To remove the CurrentlyProcessedFileName port, click the Properties tab and clear the Add Currently Processed Flat File Name Port check box.

Sol 2. Short desc: Double-click the flat file source, go to the Properties tab and check the "Add Currently Processed Flat File Name Port" check box. This will add a column "CurrentlyProcessedFileName" to the flat file columns list. So simple, isn't it?
Or you can append the filename to the file using a shell script:
#!/bin/ksh
for a in $(ls *.CSV)
do
   # append the current file's name to every record, TAB-delimited
   sed -i -e "s/$/\t$a/g" "$a"
done
exit 0
This adds the file name to all .CSV files. Here the delimiter is TAB; you can change the script according to your spec.
Posted 11th October 2011 by Prafull Dangore
0
Add a comment
73.
74.

OCT

11

CurrentlyProcessedFileName port in source coming as NULL
Issue: The "CurrentlyProcessedFileName" port is coming properly in the test environment. I imported the same objects into stage, but in stage the "CurrentlyProcessedFileName" port is always coming as NULL.
Solution:
1. Edit the source definition by removing the CurrentlyProcessedFileName port and add it again; that way, this should solve your problem.
Posted 11th October 2011 by Prafull Dangore
0
Add a comment
75.

APR

19

How PL/SQL Exceptions Are Raised ?

How PL/SQL Exceptions Are Raised
Internal exceptions are raised implicitly by the run-time system, as are user-defined exceptions that you have associated with an Oracle error number using EXCEPTION_INIT. However, other user-defined exceptions must be raised explicitly by RAISE statements.
Raising Exceptions with the RAISE Statement
PL/SQL blocks and subprograms should raise an exception only when an error makes it undesirable or impossible to finish processing. You can place RAISE statements for a given exception anywhere within the scope of that exception. In Example 10-6, you alert your PL/SQL block to a user-defined exception named out_of_stock.
Example 10-6 Using RAISE to Force a User-Defined Exception
DECLARE
   out_of_stock EXCEPTION;
   number_on_hand NUMBER := 0;
BEGIN
   IF number_on_hand < 1 THEN
      RAISE out_of_stock;  -- raise an exception that we defined
   END IF;
EXCEPTION
   WHEN out_of_stock THEN
      -- handle the error
      DBMS_OUTPUT.PUT_LINE('Encountered out-of-stock error.');
END;
/
You can also raise a predefined exception explicitly, as Example 10-7 shows:
Example 10-7 Using RAISE to Force a Pre-Defined Exception
DECLARE
   acct_type INTEGER := 7;
BEGIN
   IF acct_type NOT IN (1, 2, 3) THEN
      RAISE INVALID_NUMBER;  -- raise predefined exception

   END IF;
EXCEPTION
   WHEN INVALID_NUMBER THEN
      DBMS_OUTPUT.PUT_LINE('HANDLING INVALID INPUT BY ROLLING BACK.');
      ROLLBACK;
END;
/

How PL/SQL Exceptions Propagate
When an exception is raised, if PL/SQL cannot find a handler for it in the current block or subprogram, the exception propagates. That is, the exception reproduces itself in successive enclosing blocks until a handler is found or there are no more blocks to search. If no handler is found, PL/SQL returns an unhandled exception error to the host environment.
Exceptions cannot propagate across remote procedure calls done through database links. A PL/SQL block cannot catch an exception raised by a remote subprogram. For a workaround, see "Defining Your Own Error Messages: Procedure RAISE_APPLICATION_ERROR".
Figure 10-1, Figure 10-2, and Figure 10-3 illustrate the basic propagation rules.
Figure 10-1 Propagation Rules: Example 1
Figure 10-2 Propagation Rules: Example 2
Figure 10-3 Propagation Rules: Example 3
An exception can propagate beyond its scope, that is, beyond the block in which it was declared, as shown in Example 10-8.
Example 10-8 Scope of an Exception
BEGIN
   DECLARE  ---------- sub-block begins
      past_due EXCEPTION;
      due_date DATE := trunc(SYSDATE) - 1;
      todays_date DATE := trunc(SYSDATE);
   BEGIN

      IF due_date < todays_date THEN
         RAISE past_due;
      END IF;
   END;  ---------- sub-block ends
EXCEPTION
   WHEN OTHERS THEN
      ROLLBACK;
END;
/
Because the block that declares the exception past_due has no handler for it, the exception propagates to the enclosing block. But the enclosing block cannot reference the name PAST_DUE, because the scope where it was declared no longer exists. Once the exception name is lost, only an OTHERS handler can catch the exception. If there is no handler for a user-defined exception, the calling application gets this error:
ORA-06510: PL/SQL: unhandled user-defined exception
Reraising a PL/SQL Exception
Sometimes, you want to reraise an exception, that is, handle it locally, then pass it to an enclosing block. For example, you might want to roll back a transaction in the current block, then log the error in an enclosing block. To reraise an exception, use a RAISE statement without an exception name, which is allowed only in an exception handler:
Example 10-9 Reraising a PL/SQL Exception
DECLARE
   salary_too_high EXCEPTION;
   current_salary NUMBER := 20000;
   max_salary NUMBER := 10000;
   erroneous_salary NUMBER;
BEGIN
   BEGIN  ---------- sub-block begins
      IF current_salary > max_salary THEN
         RAISE salary_too_high;  -- raise the exception
      END IF;
   EXCEPTION
      WHEN salary_too_high THEN
         -- first step in handling the error
         DBMS_OUTPUT.PUT_LINE('Salary ' || erroneous_salary || ' is out of range.');
         DBMS_OUTPUT.PUT_LINE('Maximum salary is ' || max_salary || '.');
         RAISE;  -- reraise the current exception
   END;  ---------- sub-block ends
EXCEPTION
   WHEN salary_too_high THEN
      -- handle the error more thoroughly
      erroneous_salary := current_salary;
      current_salary := max_salary;
      DBMS_OUTPUT.PUT_LINE('Revising salary from ' || erroneous_salary || ' to ' || current_salary || '.');
END;
/
Handling Raised PL/SQL Exceptions
When an exception is raised, normal execution of your PL/SQL block or subprogram stops and control transfers to its exception-handling part, which is formatted as follows:

Exceptions Raised in Declarations
Exceptions can be raised in declarations by faulty initialization expressions. For example, the following declaration raises an exception because the constant credit_limit cannot store numbers larger than 999:

Example 10-10 Raising an Exception in a Declaration

DECLARE
   credit_limit CONSTANT NUMBER(3) := 5000; -- raises an error
BEGIN
   NULL;
EXCEPTION
   WHEN OTHERS THEN
      -- Cannot catch the exception. This handler is never called.
      DBMS_OUTPUT.PUT_LINE('Can''t handle an exception in a declaration.');
END;
/

Handlers in the current block cannot catch the raised exception because an exception raised in a declaration propagates immediately to the enclosing block.

To catch raised exceptions, you write exception handlers. Each handler consists of a WHEN clause, which specifies an exception, followed by a sequence of statements to be executed when that exception is raised. These statements complete execution of the block or subprogram; control does not return to where the exception was raised. In other words, you cannot resume processing where you left off.

The usual scoping rules for PL/SQL variables apply, so you can reference local and global variables in an exception handler. However, when an exception is raised inside a cursor FOR loop, the cursor is closed implicitly before the handler is invoked. Therefore, the values of explicit cursor attributes are not available in the handler.

You can have any number of exception handlers, and each handler can associate a list of exceptions with a sequence of statements:

EXCEPTION
   WHEN exception1 THEN  -- handler for exception1
      sequence_of_statements1
   WHEN exception2 THEN  -- another handler, for exception2
      sequence_of_statements2
   ...
   WHEN OTHERS THEN      -- optional handler for all other errors
      sequence_of_statements3
END;

However, an exception name can appear only once in the exception-handling part of a PL/SQL block or subprogram. If you want two or more exceptions to execute the same sequence of statements, list the exception names in the WHEN clause, separating them by the keyword OR, as follows:

EXCEPTION
   WHEN over_limit OR under_limit OR VALUE_ERROR THEN
      -- handle the error

If any of the exceptions in the list is raised, the associated sequence of statements is executed. The keyword OTHERS cannot appear in the list of exception names; it must appear by itself. The optional OTHERS exception handler, which is always the last handler in a block or subprogram, acts as the handler for all exceptions not named specifically. Thus, a block or subprogram can have only one OTHERS handler. Use of the OTHERS handler guarantees that no exception will go unhandled.

Handling Exceptions Raised in Handlers

When an exception occurs within an exception handler, that same handler cannot catch the exception. An exception raised inside a handler propagates immediately to the enclosing block, which is searched to find a handler for this new exception. From there on, the exception propagates normally. For example:

EXCEPTION
   WHEN INVALID_NUMBER THEN
      INSERT INTO ... -- might raise DUP_VAL_ON_INDEX
   WHEN DUP_VAL_ON_INDEX THEN
      ... -- cannot catch the exception
END;

Branching to or from an Exception Handler
A GOTO statement can branch from an exception handler into an enclosing block, or from an exception handler into the current block. A GOTO statement cannot branch into an exception handler.

Retrieving the Error Code and Error Message: SQLCODE and SQLERRM
In an exception handler, you can use the built-in functions SQLCODE and SQLERRM to find out which error occurred and to get the associated error message.

For internal exceptions, SQLCODE returns the number of the Oracle error. The number that SQLCODE returns is negative unless the Oracle error is no data found, in which case SQLCODE returns +100. SQLERRM returns the corresponding error message. The message begins with the Oracle error code. The maximum length of an Oracle error message is 512 characters including the error code, nested messages, and message inserts such as table and column names.

For user-defined exceptions, SQLCODE returns +1 and SQLERRM returns the message User-Defined Exception, unless you used the pragma EXCEPTION_INIT to associate the exception name with an Oracle error number, in which case SQLCODE returns that error number and SQLERRM returns the corresponding error message.

If no exception has been raised, SQLCODE returns zero and SQLERRM returns the message: ORA-0000: normal, successful completion.

You can pass an error number to SQLERRM, in which case SQLERRM returns the message associated with that error number. Make sure you pass negative error numbers to SQLERRM. Passing a positive number to SQLERRM always returns the message user-defined exception unless you pass +100, in which case SQLERRM returns the message no data found. Passing a zero to SQLERRM always returns the message normal, successful completion.

You cannot use SQLCODE or SQLERRM directly in a SQL statement. Instead, you must assign their values to local variables, then use the variables in the SQL statement, as shown in Example 10-11.

Example 10-11 Displaying SQLCODE and SQLERRM

CREATE TABLE errors (code NUMBER, message VARCHAR2(64), happened TIMESTAMP);

DECLARE
   name   employees.last_name%TYPE;
   v_code NUMBER;
   v_errm VARCHAR2(64);
BEGIN
   SELECT last_name INTO name FROM employees WHERE employee_id = -1;
EXCEPTION
   WHEN OTHERS THEN
      v_code := SQLCODE;
      v_errm := SUBSTR(SQLERRM, 1, 64);
      DBMS_OUTPUT.PUT_LINE('Error code ' || v_code || ': ' || v_errm);
      -- Normally we would call another procedure, declared with PRAGMA
      -- AUTONOMOUS_TRANSACTION, to insert information about errors.
      INSERT INTO errors VALUES (v_code, v_errm, SYSTIMESTAMP);
END;
/

The string function SUBSTR ensures that a VALUE_ERROR exception (for truncation) is not raised when you assign the value of SQLERRM to v_errm. The functions SQLCODE and SQLERRM are especially useful in the OTHERS exception handler because they tell you which internal exception was raised.

When using pragma RESTRICT_REFERENCES to assert the purity of a stored function, you cannot specify the constraints WNPS and RNPS if the function calls SQLCODE or SQLERRM.

Catching Unhandled Exceptions
Remember, if it cannot find a handler for a raised exception, PL/SQL returns an unhandled exception error to the host environment, which determines the outcome. For example, in the Oracle Precompilers environment, any database changes made by a failed SQL statement or PL/SQL block are rolled back.

Unhandled exceptions can also affect subprograms. If you exit a subprogram successfully, PL/SQL assigns values to OUT parameters. However, if you exit with an unhandled exception, PL/SQL does not assign values to OUT parameters (unless they are NOCOPY parameters). Also, if a stored subprogram fails with an unhandled exception, PL/SQL does not roll back database work done by the subprogram.

You can avoid unhandled exceptions by coding an OTHERS handler at the topmost level of every PL/SQL program.

Tips for Handling PL/SQL Errors
In this section, you learn techniques that increase flexibility.

Continuing after an Exception Is Raised
An exception handler lets you recover from an otherwise fatal error before exiting a block. But when the handler completes, the block is terminated. You cannot return to the current block from an exception handler, so you cannot resume processing where you left off. In the following example, if the SELECT INTO statement raises ZERO_DIVIDE, you cannot resume with the INSERT statement:

CREATE TABLE employees_temp AS
   SELECT employee_id, salary, commission_pct FROM employees;

DECLARE
   sal_calc NUMBER(8,2);
BEGIN
   INSERT INTO employees_temp VALUES (301, 2500, 0);
   SELECT salary / commission_pct INTO sal_calc
      FROM employees_temp WHERE employee_id = 301;
   INSERT INTO employees_temp VALUES (302, sal_calc/100, .1);
EXCEPTION
   WHEN ZERO_DIVIDE THEN NULL;
END;
/

You can still handle an exception for a statement, then continue with the next statement. Place the statement in its own sub-block with its own exception handlers. If an error occurs in the sub-block, a local handler can catch the exception. When the sub-block ends, the enclosing block continues to execute at the point where the sub-block ends, as shown in Example 10-12.

Example 10-12 Continuing After an Exception

DECLARE
   sal_calc NUMBER(8,2);
BEGIN
   INSERT INTO employees_temp VALUES (303, 2500, 0);
   BEGIN ---------- sub-block begins
      SELECT salary / commission_pct INTO sal_calc
         FROM employees_temp WHERE employee_id = 301;
   EXCEPTION
      WHEN ZERO_DIVIDE THEN
         sal_calc := 2500;
   END; ------------ sub-block ends
   INSERT INTO employees_temp VALUES (304, sal_calc/100, .1);
EXCEPTION
   WHEN ZERO_DIVIDE THEN NULL;
END;
/

In this example, if the SELECT INTO statement raises a ZERO_DIVIDE exception, the local handler catches it and sets sal_calc to 2500. Execution of the handler is complete, so the sub-block terminates, and execution continues with the INSERT statement.

You can also perform a sequence of DML operations where some might fail, and process the exceptions only after the entire operation is complete, as described in "Handling FORALL Exceptions with the %BULK_EXCEPTIONS Attribute". See also Example 5-38, "Collection Exceptions".

Retrying a Transaction
After an exception is raised, rather than abandon your transaction, you might want to retry it. The technique is:

1. Encase the transaction in a sub-block.
2. Place the sub-block inside a loop that repeats the transaction.
3. Before starting the transaction, mark a savepoint. If the transaction succeeds, commit, then exit from the loop. If the transaction fails, control transfers to the exception handler, where you roll back to the savepoint undoing any changes, then try to fix the problem.

In Example 10-13, the INSERT statement might raise an exception because of a duplicate value in a unique column. In that case, we change the value that needs to be unique and continue with the next loop iteration. If the INSERT succeeds, we exit from the loop immediately. With this technique, you should use a FOR or WHILE loop to limit the number of attempts.

Example 10-13 Retrying a Transaction After an Exception

CREATE TABLE results ( res_name VARCHAR(20), res_answer VARCHAR2(3) );
CREATE UNIQUE INDEX res_name_ix ON results (res_name);
INSERT INTO results VALUES ('SMYTHE', 'YES');
INSERT INTO results VALUES ('JONES', 'NO');

DECLARE
   name   VARCHAR2(20) := 'SMYTHE';
   answer VARCHAR2(3) := 'NO';
   suffix NUMBER := 1;
BEGIN
   FOR i IN 1..5 LOOP -- try 5 times
      BEGIN -- sub-block begins
         SAVEPOINT start_transaction; -- mark a savepoint
         /* Remove rows from a table of survey results. */
         DELETE FROM results WHERE res_answer = 'NO';
         /* Add a survey respondent's name and answers. */
         INSERT INTO results VALUES (name, answer);
         -- raises DUP_VAL_ON_INDEX if two respondents have the same name
         COMMIT;
         EXIT;
      EXCEPTION
         WHEN DUP_VAL_ON_INDEX THEN
            ROLLBACK TO start_transaction; -- undo changes
            suffix := suffix + 1;          -- try to fix problem
            name := name || TO_CHAR(suffix);
      END; -- sub-block ends
   END LOOP;
END;
/

Using Locator Variables to Identify Exception Locations
Using one exception handler for a sequence of statements, such as INSERT, DELETE, or UPDATE statements, can mask the statement that caused an error. If you need to know which statement failed, you can use a locator variable:

Example 10-14 Using a Locator Variable to Identify the Location of an Exception

CREATE OR REPLACE PROCEDURE loc_var AS
   stmt_no NUMBER;
   name    VARCHAR2(100);
BEGIN
   stmt_no := 1; -- designates 1st SELECT statement
   SELECT table_name INTO name FROM user_tables
      WHERE table_name LIKE 'ABC%';
   stmt_no := 2; -- designates 2nd SELECT statement
   SELECT table_name INTO name FROM user_tables
      WHERE table_name LIKE 'XYZ%';
EXCEPTION
   WHEN NO_DATA_FOUND THEN
      DBMS_OUTPUT.PUT_LINE('Table name not found in query ' || stmt_no);
END;
/
CALL loc_var();

Overview of PL/SQL Compile-Time Warnings
To make your programs more robust and avoid problems at run time, you can turn on checking for certain warning conditions. These conditions are not serious enough to produce an error and keep you from compiling a subprogram. They might point out something in the subprogram that produces an undefined result or might create a performance problem.

To work with PL/SQL warning messages, you use the PLSQL_WARNINGS initialization parameter, the DBMS_WARNING package, and the USER/DBA/ALL_PLSQL_OBJECT_SETTINGS views.

PL/SQL Warning Categories

PL/SQL warning messages are divided into categories, so that you can suppress or display groups of similar warnings during compilation. The categories are:

- SEVERE: Messages for conditions that might cause unexpected behavior or wrong results, such as aliasing problems with parameters.
- PERFORMANCE: Messages for conditions that might cause performance problems, such as passing a VARCHAR2 value to a NUMBER column in an INSERT statement.
- INFORMATIONAL: Messages for conditions that do not have an effect on performance or correctness, but that you might want to change to make the code more maintainable, such as unnecessary code or unreachable code that can never be executed.

The keyword ALL is a shorthand way to refer to all warning messages. You can also treat particular messages as errors instead of warnings. For example, if you know that the warning message PLW-05003 represents a serious problem in your code, including 'ERROR:05003' in the PLSQL_WARNINGS setting makes that condition trigger an error message (PLS_05003) instead of a warning message. An error message causes the compilation to fail.

Controlling PL/SQL Warning Messages
To let the database issue warning messages during PL/SQL compilation, you set the initialization parameter PLSQL_WARNINGS. You can enable and disable entire categories of warnings (ALL, SEVERE, INFORMATIONAL, PERFORMANCE), enable and disable specific message numbers, and make the database treat certain warnings as compilation errors so that those conditions must be corrected.

This parameter can be set at the system level or the session level. You can also set it for a single compilation by including it as part of the ALTER PROCEDURE ... COMPILE statement. You might turn on all warnings during development, turn off all warnings when deploying for production, or turn on some warnings when working on a particular subprogram where you are concerned with some aspect.

Example 10-15 Controlling the Display of PL/SQL Warnings

-- For debugging during development
ALTER SESSION SET PLSQL_WARNINGS='ENABLE:ALL';
-- To focus on one aspect
ALTER SESSION SET PLSQL_WARNINGS='ENABLE:PERFORMANCE';
-- Recompile with extra checking
ALTER PROCEDURE loc_var COMPILE PLSQL_WARNINGS='ENABLE:PERFORMANCE' REUSE SETTINGS;
-- To turn off all warnings
ALTER SESSION SET PLSQL_WARNINGS='DISABLE:ALL';
-- Display 'severe' warnings, don't want 'performance' warnings, and
-- want PLW-06002 warnings to produce errors that halt compilation
ALTER SESSION SET PLSQL_WARNINGS='ENABLE:SEVERE', 'DISABLE:PERFORMANCE', 'ERROR:06002';

Warning messages can be issued during compilation of PL/SQL subprograms; anonymous blocks do not produce any warnings.

The settings for the PLSQL_WARNINGS parameter are stored along with each compiled subprogram. If you recompile the subprogram with a CREATE OR REPLACE statement, the current settings for that session are used. If you recompile the subprogram with an ALTER ... COMPILE statement, the current session setting might be used, or the original setting that was stored with the subprogram, depending on whether you include the REUSE SETTINGS clause in the statement. For more information, see ALTER FUNCTION, ALTER PACKAGE, and ALTER PROCEDURE in Oracle Database SQL Reference.

To see any warnings generated during compilation, you use the SQL*Plus SHOW ERRORS command or query the USER_ERRORS data dictionary view. PL/SQL warning messages all use the prefix PLW.

Using the DBMS_WARNING Package
If you are writing a development environment that compiles PL/SQL subprograms, you can control PL/SQL warning messages by calling subprograms in the DBMS_WARNING package. You might also use this package when compiling a complex application, made up of several nested SQL*Plus scripts, where different warning settings apply to different subprograms. You can save the current state of the PLSQL_WARNINGS parameter with one call to the package, change the parameter to compile a particular set of subprograms, then restore the original parameter value.

Example 10-16 is a procedure with unnecessary code that could be removed. It could represent a mistake, or it could be intentionally hidden by a debug flag, so you might or might not want a warning message for it.

Example 10-16 Using the DBMS_WARNING Package to Display Warnings

-- When warnings disabled, the following procedure compiles with no warnings
CREATE OR REPLACE PROCEDURE unreachable_code AS
   x CONSTANT BOOLEAN := TRUE;
BEGIN
   IF x THEN
      DBMS_OUTPUT.PUT_LINE('TRUE');
   ELSE
      DBMS_OUTPUT.PUT_LINE('FALSE');
   END IF;
END unreachable_code;
/

-- enable all warning messages for this session
CALL DBMS_WARNING.set_warning_setting_string('ENABLE:ALL', 'SESSION');

-- Check the current warning setting
SELECT DBMS_WARNING.get_warning_setting_string() FROM DUAL;

-- Recompile the procedure and a warning about unreachable code displays
ALTER PROCEDURE unreachable_code COMPILE;
SHOW ERRORS;

In Example 10-16, you could have used the following ALTER PROCEDURE without the call to DBMS_WARNING.set_warning_setting_string:

ALTER PROCEDURE unreachable_code COMPILE PLSQL_WARNINGS = 'ENABLE:ALL' REUSE SETTINGS;

Posted 19th April 2011 by Prafull Dangore
0

Add a comment

76.
77.
APR

19

What are the different types of pragma and where can we use them?

===========================================================================

What are the different types of pragma and where can we use them?

Pragma is a keyword in Oracle PL/SQL that is used to provide an instruction to the compiler. A pragma is a compiler directive that is processed at compile time, not at run time. Pragmas are defined in the declarative section in PL/SQL. The syntax for pragmas is as follows:

PRAGMA <instruction>;

The instruction is a statement that provides some instructions to the compiler. The following pragmas are available:

AUTONOMOUS_TRANSACTION: Prior to Oracle 8.1, each Oracle session in PL/SQL could have at most one active transaction at a given time; changes were all or nothing. Oracle8i PL/SQL addresses that shortcoming with the AUTONOMOUS_TRANSACTION pragma. This pragma can perform an autonomous transaction within a PL/SQL block between a BEGIN and END statement without affecting the entire transaction. In other words, if a rollback or commit needs to take place within the block without affecting the transaction outside the block, this type of pragma can be used (see the sketch after this list).

EXCEPTION_INIT: The most commonly used pragma; this is used to bind a user-defined exception to a particular error number. For example:

DECLARE
   I_GIVE_UP EXCEPTION;
   PRAGMA EXCEPTION_INIT(I_give_up, -20000);
BEGIN
   ...
EXCEPTION
   WHEN I_GIVE_UP THEN
      -- do something
END;

RESTRICT_REFERENCES: Defines the purity level of a packaged program. Prior to Oracle8i, if you were to invoke a function within a package specification from a SQL statement, you would have to provide a RESTRICT_REFERENCES directive to the PL/SQL engine for that function. This is not required starting with Oracle8i.
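A minimal sketch of the AUTONOMOUS_TRANSACTION pragma from the list above, assuming a made-up error_log table; the COMMIT inside the procedure ends only the autonomous transaction and leaves the caller's transaction untouched:

CREATE OR REPLACE PROCEDURE log_error (p_msg IN VARCHAR2) AS
   PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
   INSERT INTO error_log (logged_at, message)
   VALUES (SYSTIMESTAMP, p_msg);
   COMMIT; -- commits only this autonomous transaction
END log_error;
/

A caller can invoke log_error from inside a failing transaction, roll back its own work, and the logged row still persists.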

Associating a PL/SQL Exception with a Number: Pragma EXCEPTION_INIT
To handle error conditions (typically ORA- messages) that have no predefined name, you must use the OTHERS handler or the pragma EXCEPTION_INIT. That lets you refer to any internal exception by name and to write a specific handler for it. The pragma EXCEPTION_INIT tells the compiler to associate an exception name with an Oracle error number. You code the pragma EXCEPTION_INIT in the declarative part of a PL/SQL block, subprogram, or package using the syntax:

PRAGMA EXCEPTION_INIT(exception_name, -Oracle_error_number);

where exception_name is the name of a previously declared exception and the number is a negative value corresponding to an ORA- error number. The pragma must appear somewhere after the exception declaration in the same declarative section, as shown in Example 10-4.

Example 10-4 Using PRAGMA EXCEPTION_INIT

DECLARE
   deadlock_detected EXCEPTION;
   PRAGMA EXCEPTION_INIT(deadlock_detected, -60);
BEGIN
   NULL; -- Some operation that causes an ORA-00060 error
EXCEPTION
   WHEN deadlock_detected THEN
      NULL; -- handle the error
END;
/

Defining Your Own Error Messages: Procedure RAISE_APPLICATION_ERROR
The procedure RAISE_APPLICATION_ERROR lets you issue user-defined ORA- error messages from stored subprograms. That way, you can report errors to your application and avoid returning unhandled exceptions. To call RAISE_APPLICATION_ERROR, use the syntax:

raise_application_error( error_number, message[, {TRUE | FALSE}] );

where error_number is a negative integer in the range -20000 .. -20999 and message is a character string up to 2048 bytes long. If the optional third parameter is TRUE, the error is placed on the stack of previous errors. If the parameter is FALSE (the default), the error replaces all previous errors. When you see an error stack, the one on top is the one that you can trap and handle. RAISE_APPLICATION_ERROR is part of package DBMS_STANDARD, and as with package STANDARD, you do not need to qualify references to it.

An application can call raise_application_error only from an executing stored subprogram (or method). When called, raise_application_error ends the subprogram and returns a user-defined error number and message to the application. The error number and message can be trapped like any Oracle error. In Example 10-5, you call raise_application_error if an error condition of your choosing happens (in this case, if the current schema owns less than 1000 tables):

Example 10-5 Raising an Application Error With raise_application_error

DECLARE
   num_tables NUMBER;
BEGIN
   SELECT COUNT(*) INTO num_tables FROM USER_TABLES;
   IF num_tables < 1000 THEN
      /* Issue your own error code (ORA-20101) with your own error message.
         Note that you do not need to qualify raise_application_error with
         DBMS_STANDARD */
      raise_application_error(-20101, 'Expecting at least 1000 tables');
   ELSE
      NULL; -- Do the rest of the processing (for the non-error case).
   END IF;
END;
/

The calling application gets a PL/SQL exception, which it can process using the error-reporting functions SQLCODE and SQLERRM in an OTHERS handler. Also, it can use the pragma EXCEPTION_INIT to map specific error numbers returned by raise_application_error to exceptions of its own, as the following Pro*C example shows:

EXEC SQL EXECUTE
   /* Execute embedded PL/SQL block using host
      variables v_emp_id and v_amount, which were
      assigned values in the host environment. */
   DECLARE
      null_salary EXCEPTION;
      /* Map error number returned by raise_application_error
         to user-defined exception. */
      PRAGMA EXCEPTION_INIT(null_salary, -20101);
   BEGIN
      raise_salary(:v_emp_id, :v_amount);
   EXCEPTION
      WHEN null_salary THEN
         INSERT INTO emp_audit VALUES (:v_emp_id, ...);
   END;
END-EXEC;

This technique allows the calling application to handle error conditions in specific exception handlers.

Redeclaring Predefined Exceptions
Remember, PL/SQL declares predefined exceptions globally in package STANDARD, so you need not declare them yourself. Redeclaring predefined exceptions is error prone because your local declaration overrides the global declaration. For example, if you declare an exception named invalid_number and then PL/SQL raises the predefined exception INVALID_NUMBER internally, a handler written for INVALID_NUMBER will not catch the internal exception. In such cases, you must use dot notation to specify the predefined exception, as follows:

EXCEPTION
   WHEN invalid_number OR STANDARD.INVALID_NUMBER THEN
      -- handle the error
END;

===========================================================================

Posted 19th April 2011 by Prafull Dangore
0

Add a comment

78.
APR

13

A Comparison of Oracle's DATE and TIMESTAMP Datatypes

==============================================================

A Comparison of Oracle's DATE and TIMESTAMP Datatypes

Oracle date and time data types, calculations around these data types, and just plain how to use them often plague Oracle users more than they should. Here is an article I wrote a while back, but it still holds some good insight (I think) into using these data types. If you want to store date and time information in Oracle, you really only have two different options for the column's datatype. Let's take a quick look at these two datatypes and what they offer.

DATE datatype
This is the datatype that we are all too familiar with when we think about representing date and time values. It has the ability to store the month, day, year, century, hours, minutes, and seconds. It is typically good for representing data for when something has happened or should happen in the future. The problem with the DATE datatype is its granularity when trying to determine a time interval between two events when the events happen within a second of each other. This issue is solved later in this article when we discuss the TIMESTAMP datatype. In order to represent the date stored in a more readable format, the TO_CHAR function has traditionally been wrapped around the date, as in Listing A.

LISTING A: Formatting a date

SQL> SELECT TO_CHAR(date1,'MM/DD/YYYY HH24:MI:SS') "Date" FROM date_table;
Date
---------------------------
06/20/2003 16:55:14
06/26/2003 11:16:36

About the only trouble I have seen people get into when using the DATE datatype is doing arithmetic on the column in order to figure out the number of years, weeks, days, hours, and seconds between two dates. What needs to be realized when doing the calculation is that when you do subtraction between dates, you get a number that represents the number of days. You should then multiply that number by the number of seconds in a day (86400) before you continue with calculations to determine the interval with which you are concerned. Check out Listing B for my solution on how to extract the individual time intervals for a subtraction of two dates. I am aware that the fractions could be reduced, but I wanted to show all the numbers to emphasize the calculation.

LISTING B: Determine the interval breakdown between two dates for a DATE datatype

 1  SELECT TO_CHAR(date1,'MMDDYYYY:HH24:MI:SS') date1,
 2         TO_CHAR(date2,'MMDDYYYY:HH24:MI:SS') date2,
 3         trunc(86400*(date2-date1))-
 4           60*(trunc((86400*(date2-date1))/60)) seconds,
 5         trunc((86400*(date2-date1))/60)-
 6           60*(trunc(((86400*(date2-date1))/60)/60)) minutes,
 7         trunc(((86400*(date2-date1))/60)/60)-
 8           24*(trunc((((86400*(date2-date1))/60)/60)/24)) hours,
 9         trunc((((86400*(date2-date1))/60)/60)/24) days,
10         trunc(((((86400*(date2-date1))/60)/60)/24)/7) weeks
11* FROM date_table

DATE1             DATE2             SECONDS MINUTES HOURS DAYS WEEKS
----------------- ----------------- ------- ------- ----- ---- -----
06202003:16:55:14 07082003:11:22:57      43      27    18   17     2
06262003:11:16:36 07082003:11:22:57      21       6     0   12     1

TIMESTAMP datatype
One of the main problems with the DATE datatype was its inability to be granular enough to determine which event might have happened first in relation to another event. Oracle has expanded on the DATE datatype and has given us the TIMESTAMP datatype, which stores all the information that the DATE datatype stores but also includes fractional seconds.

If you want to convert a DATE datatype to a TIMESTAMP datatype format, just use the CAST function as I do in Listing C. As you can see, there is a fractional seconds part of '.000000' on the end of this conversion. This is only because, when converting from the DATE datatype that does not have the fractional seconds, it defaults to zeros and the display is defaulted to the default timestamp format (NLS_TIMESTAMP_FORMAT). If you are moving a DATE datatype column from one table to a TIMESTAMP datatype column of another table, all you need to do is a straight INSERT-SELECT FROM and Oracle will do the conversion for you.

Look at Listing D for a formatting of the new TIMESTAMP datatype, where everything is the same as formatting the DATE datatype as we did in Listing A. Beware: while the TO_CHAR function works with both datatypes, the TRUNC function will not work with a datatype of TIMESTAMP. This is a clear indication that the TIMESTAMP datatype should explicitly be used for dates and times where a difference in time is of utmost importance, such that Oracle won't even let you compare like values. If you wanted to show the fractional seconds within a TIMESTAMP datatype, look at Listing E. In Listing E, we are only showing 3 place holders for the fractional seconds.

LISTING C: Convert DATE datatype to TIMESTAMP datatype

SQL> SELECT CAST(date1 AS TIMESTAMP) "Date" FROM t;
Date
-----------------------------------------------------
20-JUN-03 04.55.14.000000 PM
26-JUN-03 11.16.36.000000 AM

LISTING D: Formatting of the TIMESTAMP datatype

1  SELECT TO_CHAR(time1,'MM/DD/YYYY HH24:MI:SS') "Date" FROM date_table

Date
-------------------
06/20/2003 16:55:14
06/26/2003 11:16:36

LISTING E: Formatting of the TIMESTAMP datatype with fractional seconds

1  SELECT TO_CHAR(time1,'MM/DD/YYYY HH24:MI:SS:FF3') "Date" FROM date_table

Date
-----------------------
06/20/2003 16:55:14:000
06/26/2003 11:16:36:000

Calculating the time difference between two TIMESTAMP datatypes is much easier than with the DATE datatype. Look at what happens when you just do straight subtraction of the columns in Listing F. As you can see, the results are much easier to recognize: 17 days, 18 hours, 27 minutes, and 43 seconds for the first row of output. This means no more worries about how many seconds are in a day and all those cumbersome calculations. And therefore the calculations for getting the weeks, days, hours, minutes, and seconds become a matter of picking out the number by using the SUBSTR function, as can be seen in Listing G.

LISTING F: Straight subtraction of two TIMESTAMP datatypes

1  SELECT time1, time2, (time2-time1)
2* FROM date_table

TIME1                      TIME2                      (TIME2-TIME1)
-------------------------- -------------------------- ---------------------------
06/20/2003:16:55:14:000000 07/08/2003:11:22:57:000000 +000000017 18:27:43.000000
06/26/2003:11:16:36:000000 07/08/2003:11:22:57:000000 +000000012 00:06:21.000000

LISTING G: Determine the interval breakdown between two dates for a TIMESTAMP datatype

1  SELECT time1,
2         time2,
3         substr((time2-time1),instr((time2-time1),' ')+7,2) seconds,
4         substr((time2-time1),instr((time2-time1),' ')+4,2) minutes,
5         substr((time2-time1),instr((time2-time1),' ')+1,2) hours,
6         trunc(to_number(substr((time2-time1),1,instr(time2-time1,' ')))) days,
7         trunc(to_number(substr((time2-time1),1,instr(time2-time1,' ')))/7) weeks
8* FROM date_table

TIME1                      TIME2                      SECONDS MINUTES HOURS DAYS WEEKS
-------------------------- -------------------------- ------- ------- ----- ---- -----
06/20/2003:16:55:14:000000 07/08/2003:11:22:57:000000      43      27    18   17     2
06/26/2003:11:16:36:000000 07/08/2003:11:22:57:000000      21      06    00   12     1

System Date and Time
In order to get the system date and time returned in a DATE datatype, you can use the SYSDATE function:

SQL> SELECT SYSDATE FROM DUAL;

In order to get the system date and time returned in a TIMESTAMP datatype, you can use the SYSTIMESTAMP function:

SQL> SELECT SYSTIMESTAMP FROM DUAL;

You can set the initialization parameter FIXED_DATE to return a constant value for what is returned from the SYSDATE function. This is a great tool for testing date and time sensitive code. Just beware that this parameter has no effect on the SYSTIMESTAMP function. This can be seen in Listing H.

LISTING H: Setting FIXED_DATE and effects on SYSDATE and SYSTIMESTAMP

SQL> ALTER SYSTEM SET fixed_date = '2003-01-01-10:00:00';
System altered.

SQL> select sysdate from dual;
SYSDATE
---------
01-JAN-03

SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------
09-JUL-03 11.05.02.519000 AM -06:00

When working with date and time, the options are clear. You have at your disposal the DATE and TIMESTAMP datatypes. Just be aware: while there are similarities, there are also differences that could create havoc if you try to convert to the more powerful TIMESTAMP datatype. Each of the two has strengths in simplicity and granularity. Choose wisely.

===========================================================================
Posted 13th April 2011 by Prafull Dangore
0

Add a comment

79.
APR

13

Informatica Power Center performance – Concurrent Workflow Execution
===========================================================================

Informatica Power Center performance – Concurrent Workflow Execution

What is a concurrent workflow?
A concurrent workflow is a workflow that can run as multiple instances concurrently.

What is a workflow instance?
A workflow instance is a representation of a workflow.

How to configure a concurrent workflow?
1) Allow concurrent workflows with the same instance name: Configure one workflow instance to run multiple times concurrently. Each instance has the same source, target, and variable parameters. E.g., create a workflow that reads data from a message queue that determines the source data and targets. You can run the instance multiple times concurrently and pass different connection parameters to the workflow instances from the message queue.
2) Configure unique workflow instances to run concurrently: Define each workflow instance name and configure a workflow parameter file for the instance. You can define different sources, targets, and variables in the parameter file. E.g., configure workflow instances to run a workflow with different sources and targets. For example, your organization receives sales data from three divisions. You create a workflow that reads the sales data and writes it to the database. You configure three instances of the workflow. Each instance has a different workflow parameter file that defines which sales file to process. You can run all instances of the workflow concurrently (a pmcmd sketch appears at the end of this post).

How does a concurrent workflow work?
A concurrent workflow groups logical sessions and tasks together, like a sequential workflow, but runs all the tasks at one time.

Advantages of concurrent workflows?
This can reduce the load times into the warehouse, taking advantage of the hardware platform's Symmetric Multi-Processing (SMP) architecture.

LOAD SCENARIO:
Source table records count: 150,622,276
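As a hedged sketch of starting two of the divisional instances described above from the command line (service, domain, folder, workflow, user, and file names are placeholders, not from the source; verify the startworkflow options against your PowerCenter version):

pmcmd startworkflow -sv INT_SVC -d Domain_Dev -u Administrator -p secret \
   -f SALES -rin wf_sales_div1 -paramfile /infa/params/div1.prm wf_load_sales

pmcmd startworkflow -sv INT_SVC -d Domain_Dev -u Administrator -p secret \
   -f SALES -rin wf_sales_div2 -paramfile /infa/params/div2.prm wf_load_sales

Because each start names a different run instance (-rin) and parameter file, both runs of wf_load_sales can execute at the same time.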

===========================================================================

Posted 13th April 2011 by Prafull Dangore

0

Add a comment

81.
APR

13

Informatica Performance Improvement Tips

===========================================================================

Informatica Performance Improvement Tips

We often come across situations where the Data Transformation Manager (DTM) takes more time to read from a source or when writing into a target. The following standards/guidelines can improve the overall performance:

- Use a Source Qualifier if the source tables reside in the same schema.
- Make use of the Source Qualifier "Filter" property if the source type is relational.
- Use a Router transformation in place of multiple Filter transformations.
- Suppress ORDER BY using the '--' at the end of the query in Lookup transformations (see the sketch after this list).
- Use tables with a lesser number of records as the master table for joins.
- While reading from flat files, define the appropriate data type instead of reading as String and converting.
- Have all ports that are required connected to subsequent transformations; else, check whether these ports can be removed.
- Minimize the number of Update Strategies.
- Group by simple columns in transformations like the Aggregator.
- Use flags as integers, as integer comparison is faster than string comparison.
- If the subsequent sessions are doing a lookup on the same table, use a persistent cache in the first session. Data remains in the cache and is available for the subsequent sessions' usage.
- Turn off Verbose logging while moving the mappings to the UAT/Production environment.
- For large volumes of data, drop indexes before loading and recreate the indexes after the load.
- For large volumes of records, use Bulk load.
- Increase the commit interval to a higher value for large volumes of data.
- Set 'Commit on Target' in the sessions.
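A sketch of the ORDER BY suppression mentioned in the list above (table and port names are made up). The Integration Service appends its own ORDER BY to the lookup query, so the override keeps a shorter ORDER BY of its own and ends with '--' to comment the generated one out:

-- Lookup SQL override:
SELECT EMP_NAME, EMP_ID
FROM EMP_DIM
ORDER BY EMP_ID --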

===========================================================================

Posted 13th April 2011 by Prafull Dangore
0

Add a comment

82.
APR

13

What is Pushdown Optimization and things to consider

=================================================================================

What is Pushdown Optimization and things to consider

The process of pushing transformation logic to the source or target database by the Informatica Integration Service is known as Pushdown Optimization. When a session is configured to run for Pushdown Optimization, the Integration Service translates the transformation logic into SQL queries and sends the SQL queries to the database. The source or target database executes the SQL queries to process the transformations.

How does Pushdown Optimization (PO) work?
The Integration Service generates SQL statements when a native database driver is used. In case of ODBC drivers, the Integration Service cannot detect the database type and generates ANSI SQL. The Integration Service can usually push more transformation logic to a database if a native driver is used, instead of an ODBC driver.

The Integration Service creates a view (PM_*) in the database while executing the session task and drops the view after the task gets complete. Similarly, it also creates sequences (PM_*) in the database. The database schema (SQ connection, LKP connection) should have the Create View / Create Sequence privilege, else the session will fail.

The SQL generated by the Informatica Integration Service can be viewed before running the session through the Optimizer Viewer.

Without pushdown optimization, the Integration Service does row-by-row processing using bind variables (only soft parse – only processing time, no parsing time):

INSERT INTO EMPLOYEES(ID_EMPLOYEE, EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL,
   PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID,
   MANAGER_NAME, DEPARTMENT_ID)
VALUES (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13)
-- executes 7012352 times

With pushdown optimization, the statement is executed once:

INSERT INTO EMPLOYEES(ID_EMPLOYEE, EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL,
   PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID,
   MANAGER_NAME, DEPARTMENT_ID)
SELECT CAST(PM_SJEAIJTJRNWT45X3OO5ZZLJYJRY.NEXTVAL AS NUMBER(15, 2)),
   EMPLOYEES_SRC.EMPLOYEE_ID, EMPLOYEES_SRC.FIRST_NAME, EMPLOYEES_SRC.LAST_NAME,
   CAST((EMPLOYEES_SRC.EMAIL || '@gmail.com') AS VARCHAR2(25)),
   EMPLOYEES_SRC.PHONE_NUMBER, CAST(EMPLOYEES_SRC.HIRE_DATE AS date),
   EMPLOYEES_SRC.JOB_ID, EMPLOYEES_SRC.SALARY, EMPLOYEES_SRC.COMMISSION_PCT,
   EMPLOYEES_SRC.MANAGER_ID, NULL, EMPLOYEES_SRC.DEPARTMENT_ID
FROM (EMPLOYEES_SRC LEFT OUTER JOIN EMPLOYEES PM_Alkp_emp_mgr_1
   ON (PM_Alkp_emp_mgr_1.EMPLOYEE_ID = EMPLOYEES_SRC.MANAGER_ID))
WHERE ((EMPLOYEES_SRC.MANAGER_ID = (SELECT PM_Alkp_emp_mgr_1.EMPLOYEE_ID
   FROM EMPLOYEES PM_Alkp_emp_mgr_1
   WHERE (PM_Alkp_emp_mgr_1.EMPLOYEE_ID = EMPLOYEES_SRC.MANAGER_ID))) OR (0=0))
-- executes 1 time

Few benefits in using PO:
- There is no memory or disk space required to manage the cache in the Informatica server for Aggregator, Lookup, Sorter and Joiner transformations, as the transformation logic is pushed to the database.
- The SQL generated by the Integration Service can be viewed before running the session through the Optimizer Viewer, making it easier to debug. For any SQL override, the same check applies: review the generated SQL before running the session.
- When inserting into targets, the Integration Service does row-by-row processing using bind variables (only soft parse – only processing time, no parsing time). But in case of Pushdown Optimization, the statement will be executed once.

Things to note when using PO:
There are cases where the Integration Service and Pushdown Optimization can produce different result sets for the same transformation logic. This can happen during data type conversion, handling null values, case sensitivity, sequence generation, and sorting of data. The database and Integration Service produce different output when the following settings and conversions are different:
- Nulls treated as the highest or lowest value: While sorting the data, the Integration Service can treat null values as lowest, but the database treats null values as the highest value in the sort order (see the sketch after this list).
- SYSDATE built-in variable: The built-in variable SYSDATE in the Integration Service returns the current date and time for the node running the service process. However, in the database, SYSDATE returns the current date and time for the machine hosting the database. If the time zone of the machine hosting the database is not the same as the time zone of the machine running the Integration Service process, the results can vary.
- Date Conversion: The Integration Service converts all dates before pushing transformations to the database, and if the format is not supported by the database, the session fails.
- Logging: When the Integration Service pushes transformation logic to the database, it cannot trace all the events that occur inside the database server. The statistics the Integration Service can trace depend on the type of pushdown optimization. When the Integration Service runs a session configured for full pushdown optimization and an error occurs, the database handles the errors. When the database handles errors, the Integration Service does not write reject rows to the reject file.
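As a hedged illustration of the null-ordering point above (the table name reuses EMPLOYEES_SRC from the generated SQL; the technique is standard Oracle, not something the post prescribes), Oracle sorts NULLs high by default in ascending order, and NULLS FIRST makes the database match an engine that sorts NULLs lowest:

SELECT employee_id, commission_pct
FROM EMPLOYEES_SRC
ORDER BY commission_pct NULLS FIRST; -- match an engine that sorts NULLs lowest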
=================================================================================

Posted 13th April 2011 by Prafull Dangore
0

Add a comment

83.
APR

13

Informatica OPB table which gives source table and the mappings and folders using an sql query

SQL query:

select opb_subject.SUBJ_NAME, opb_mapping.MAPPING_NAME, opb_src.source_name
from opb_mapping, opb_subject, opb_src, opb_widget_inst
where opb_subject.SUBJ_ID = opb_mapping.SUBJECT_ID
  and opb_mapping.MAPPING_ID = opb_widget_inst.MAPPING_ID
  and opb_widget_inst.WIDGET_ID = opb_src.SRC_ID
  and opb_widget_inst.widget_type = 1;

Posted 13th April 2011 by Prafull Dangore
0

Add a comment

84.
85.
MAR

28

How to remove/trim special characters in flatfile source field? Consolidated Info

Que: How to remove special characters like ## in the below Prod_Code field?

Prod_Code
---------
#PC97##
#PC98##
#PC99##
#PC125#
#PC156#
...
#PC767#
#PC766#
#PC921#
#PC1020
#PC1071
#PC1092
#PC1221

I want to remove those special characters and load just the following into the target:

Prod_Code
---------
PC97
PC98
PC99
PC125
PC156
...

Can anyone suggest?

Ans: In an Expression transformation, use the REPLACECHR function and replace '#' with a null character, as in the sketch below.
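A sketch of the port-level fix (the output port name O_PROD_CODE is made up; the REPLACECHR reference follows below):

-- Expression transformation, output port O_PROD_CODE:
REPLACECHR(0, Prod_Code, '#', NULL)
-- CaseFlag 0: case-sensitivity is irrelevant for '#'
-- NewChar NULL: every occurrence of '#' is removed rather than replaced

For '#PC97##' this returns 'PC97', matching the desired output above.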

REPLACECHR

Availability: Designer, Workflow Manager

Replaces characters in a string with a single character or no character. REPLACECHR searches the input string for the characters you specify and replaces all occurrences of all characters with the new character you specify.

Syntax
REPLACECHR( CaseFlag, InputString, OldCharSet, NewChar )

Argument     Required/Optional   Description
CaseFlag     Required            Must be an integer. Determines whether the arguments in this function are case sensitive. You can enter any valid transformation expression. When CaseFlag is a number other than 0, the function is case sensitive. When CaseFlag is a null value or 0, the function is not case sensitive.
InputString  Required            Must be a character string. Passes the string you want to search. You can enter any valid transformation expression. If you pass a numeric value, the function converts it to a character string. If InputString is NULL, REPLACECHR returns NULL.
OldCharSet   Required            Must be a character string. The characters you want to replace. You can enter one or more characters. You can enter any valid transformation expression. You can also enter a text literal enclosed within single quotation marks, for example, 'abc'. If you pass a numeric value, the function converts it to a character string. If OldCharSet is NULL or empty, REPLACECHR returns InputString.
NewChar      Required            Must be a character string. You can enter one character, an empty string, or NULL. You can enter any valid transformation expression. If NewChar is NULL or empty, REPLACECHR removes all occurrences of all characters in OldCharSet in InputString. If NewChar contains more than one character, REPLACECHR uses the first character to replace OldCharSet.

Return Value
String. Empty string if REPLACECHR removes all characters in InputString. NULL if InputString is NULL. InputString if OldCharSet is NULL or empty.

Examples
The following expression removes the double quotes from web log data for each row in the WEBLOG port:

REPLACECHR( 0, WEBLOG, '"', NULL )

WEBLOG                                   RETURN VALUE
"GET /news/index.html HTTP/1.1"          GET /news/index.html HTTP/1.1
"GET /companyinfo/index.html HTTP/1.1"   GET /companyinfo/index.html HTTP/1.1
GET /companyinfo/index.html HTTP/1.1     GET /companyinfo/index.html HTTP/1.1
NULL                                     NULL

The following expression removes multiple characters for each row in the WEBLOG port:

REPLACECHR ( 1, WEBLOG, ']["', NULL )

WEBLOG                                                         RETURN VALUE
[29/Oct/2001:14:13:50 -0700]                                   29/Oct/2001:14:13:50 -0700
[31/Oct/2000:19:45:46 -0700] "GET /news/index.html HTTP/1.1"   31/Oct/2000:19:45:46 -0700 GET /news/index.html HTTP/1.1
[01/Nov/2000:10:51:31 -0700] "GET /news/index.html HTTP/1.1"   01/Nov/2000:10:51:31 -0700 GET /news/index.html HTTP/1.1
NULL                                                           NULL

The following expression changes part of the value of the customer code for each row in the CUSTOMER_CODE port:

REPLACECHR ( 1, CUSTOMER_CODE, 'A', 'M' )

CUSTOMER_CODE   RETURN VALUE
ABA             MBM
abA             abM
BBC             BBC
ACC             MCC
NULL            NULL

The following expression changes part of the value of the customer code for each row in the CUSTOMER_CODE port:

REPLACECHR ( 0, CUSTOMER_CODE, 'A', 'M' )

CUSTOMER_CODE   RETURN VALUE
ABA             MBM
abA             MbM
BBC             BBC
ACC             MCC

The following expression changes part of the value of the customer code for each row in the CUSTOMER_CODE port:

REPLACECHR ( 1, CUSTOMER_CODE, 'A', NULL )

CUSTOMER_CODE   RETURN VALUE
ABA             B
BBC             BBC
ACC             CC
AAA             [empty string]
aaa             aaa
NULL            NULL

The following expression removes multiple numbers for each row in the INPUT port:

REPLACECHR ( 1, INPUT, '14', NULL )

INPUT    RETURN VALUE
12345    235
4141     NULL
111115   5
NULL     NULL

When you want to use a single quote (') in either OldCharSet or NewChar, you must use the CHR function. The single quote is the only character that cannot be used inside a string literal. The following expression removes multiple characters, including the single quote, for each row in the INPUT port:

REPLACECHR (1, INPUT, CHR(39), NULL )

INPUT           RETURN VALUE
'Tom Smith'     Tom Smith
'Laura Jones'   Laura Jones
Tom's           Toms
NULL            NULL

Posted 28th March 2011 by Prafull Dangore
0

Add a comment

86.

MAR

25

What is Delta data load? -> Consolidated Info

A delta load, by definition, is loading incremental changes to the data, appending the change data to the existing table. When doing a delta load to a fact table, for example, you perform inserts only. Different logics can accomplish this.

Delta checks can be done in a number of ways. One way is to check whether the record exists or not by doing a lookup on the keys. If the keys don't exist, the record should be inserted as a new record; if the record exists, compare the hash value of the non-key attributes of the table which are candidates for change. If the hash values are different, they are updated records. (For hash values you can use the MD5 function in Informatica.) A sketch of this logic follows.

If you are keeping history (full history) for the table, then it adds a little more complexity, in the sense that you have to update the old record and insert a new record for changed data. This can also be done with two separate tables, with one as the current version and another as the history version.
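The post stops short of code; below is a minimal sketch, assuming made-up staging and dimension tables (stg_customer, dim_customer) with a business key cust_bk and a pre-computed row_md5 hash of the non-key attributes, of the insert-new / update-changed logic described above (the no-history variant):

MERGE INTO dim_customer t
USING stg_customer s
ON (t.cust_bk = s.cust_bk)
WHEN NOT MATCHED THEN
   INSERT (cust_bk, cust_name, cust_city, row_md5)
   VALUES (s.cust_bk, s.cust_name, s.cust_city, s.row_md5)
WHEN MATCHED THEN
   UPDATE SET t.cust_name = s.cust_name,
              t.cust_city = s.cust_city,
              t.row_md5   = s.row_md5
   WHERE t.row_md5 <> s.row_md5; -- touch only rows whose hash changed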

Posted 25th March 2011 by Prafull Dangore
0

Add a comment

87.
MAR

25

Define: Surrogate Key -> Consolidated Info

Definition: Surrogate key is a substitution for the natural primary key in Data Warehousing. It is just a unique identifier or number for each row that can be used for the primary key to the table. It is useful because the natural primary key can change, and this makes updates more difficult. Surrogate keys are always integer or numeric. The only requirement for a surrogate primary key is that it is unique for each row in the table.

In order to isolate the data warehouse from source systems, we will introduce a technical surrogate key instead of re-using the source system's natural (business) key. Also, if a surrogate key generation process is implemented correctly, adding a new source system to the data warehouse processing will not require major efforts.

Scenario overview and details
To illustrate this example, we will use two made-up sources of information to provide data about the customers dimension. Each extract contains customer records with a business key (natural key) assigned to it. The surrogate key generation mechanism may vary depending on the requirements; however, the inputs and outputs usually fit into the design shown below:

Inputs:
- an input represented by an extract from the source system
- datawarehouse table reference for identifying the existing records
- maximum key lookup

Outputs:
- output table or file with newly assigned surrogate keys
- new maximum key
- updated reference table with new records
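As a minimal sketch of the design above, under assumed names (customer_ref reference table, customer_src extract, customer_sk_seq sequence — none of these are from the source), new business keys get a fresh surrogate key while existing ones keep theirs:

-- Assign surrogate keys to business keys not yet in the reference table.
INSERT INTO customer_ref (customer_sk, customer_bk)
SELECT customer_sk_seq.NEXTVAL, src.customer_bk
FROM (SELECT DISTINCT customer_bk FROM customer_src) src
WHERE NOT EXISTS (SELECT 1 FROM customer_ref r
                  WHERE r.customer_bk = src.customer_bk);

The load then joins the extract back to customer_ref to pick up each record's surrogate key; with a sequence, the "maximum key" input is tracked by the database itself.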

Posted 25th March 2011 by Prafull Dangore
0

Add a comment

88.