Complete Informatica
Source name
Database location
Column names
Data types
Key constraints
3. Where should you place the flat file to import the flat file
definition into the Designer?
Connected lookup
Unconnected lookup
Persistent cache: You can save the lookup cache files and reuse them
the next time the Informatica Server processes a lookup
transformation configured to use the cache.
Static cache: You can configure a static, or read-only, cache for any
lookup table. By default, the Informatica Server creates a static cache. It
caches the lookup table and lookup values in the cache for each row
that comes into the transformation. When the lookup condition is
true, the Informatica Server does not update the cache while it
processes the lookup transformation.
Dynamic cache: If you want to cache the target table and insert new
rows into both the cache and the target, you can configure a Lookup
transformation to use a dynamic cache. The Informatica Server
dynamically inserts data into the target table.
35. How does the Informatica Server sort string values in a Rank
transformation?
When the Informatica Server runs in ASCII data movement mode,
it sorts session data using a binary sort order. If you configure the session
to use a binary sort order, the Informatica Server calculates the binary
value of each string and returns the specified number of rows with
the highest binary values for the string.
42. What are the types of data that pass between the Informatica Server and a stored
procedure?
Three types of data:
Input/Output parameters
Return values
Status codes.
48. What are the basic requirements to join two sources in a Source Qualifier?
The two sources should have a primary key-foreign key relationship.
The two sources should have matching data types.
This transformation is used to maintain history data, or just the most recent
changes, in the target table.
Data driven.
Type 2: The Type 2 Dimension Data mapping inserts both new and
changed dimensions into the target. Changes are tracked in the target
table by versioning the primary key and creating a version number
for each dimension in the table.
Use the Type 2 Dimension/Version Data mapping to update a
slowly changing dimension table when you want to keep a full
history of dimension data in the table. Version numbers and
versioned primary keys track the order of changes to each
dimension.
58. How can you recognize whether or not newly added rows in
the source get inserted in the target?
For XML and file sources, the Informatica Server reads multiple files
concurrently. When loading the data, the Informatica Server creates a
separate file for each partition (of a source file). You can choose to
merge the targets.
Locking and reading the session: When the Informatica Server starts
a session, the load manager locks the session in the repository. Locking
prevents you from starting the same session again while it runs.
Pre- and post-session threads: These are created to perform pre- and
post-session operations.
ASCII mode
Unicode mode.
73. What are the output files that the Informatica Server creates
while running a session?
Session log file: The Informatica Server creates a session log file for each
session. It writes information about the session into the log file, such as
the initialization process, creation of SQL commands for reader and writer
threads, errors encountered, and the load summary. The amount of detail
in the session log file depends on the tracing level that you set.
Session detail file: This file contains load statistics for each target
in the mapping. Session details include information such as table
name and the number of rows written or rejected. You can view this file by
double-clicking the session in the Monitor window.
Reject file: This file contains the rows of data that the writer does
not write to targets.
Control file: The Informatica Server creates a control file and a target file
when you run a session that uses the external loader. The control file
contains information about the target flat file, such as the data format
and loading instructions for the external loader.
Indicator file: If you use a flat file as a target, you can configure the
Informatica Server to create an indicator file. For each target row, the
indicator file contains a number to indicate whether the row was
marked for insert, update, delete, or reject.
Yes. By using the Copy Session Wizard, you can copy a session into a different folder
or repository, but the target folder or repository must contain the mapping
used by that session.
If the target folder or repository does not have the mapping of the session being copied,
you have to copy that mapping first before you copy the session.
79. How many sessions can you create in a batch? Any number of
sessions.
81. What command is used to run a batch? pmcmd is used to start a
batch.
82. What are the different options used to configure sequential batches?
Two options:
Run the session only if the previous session completes successfully. Always run
the session.
83. In a sequential batch, can you run a session if the previous session fails?
89. How can you access a remote source in your session?
Heterogeneous: When your mapping contains more than one source type, the Server
Manager creates a heterogeneous session that displays source options for all types.
Joiner transformation: You cannot partition the master source for a Joiner
transformation.
Normalizer Transformation
XML targets.
Flat files: If your flat files are stored on a machine other than the
Informatica Server, move those files to the machine running the
Informatica Server.
Relational data sources: Minimize the connections to sources,
targets, and the Informatica Server to improve session performance.
Moving the target database onto the server system may improve session
performance.
Staging areas: If you use staging areas, you force the Informatica Server to
perform multiple data passes.
Removing staging areas may improve session performance.
Database connections
Global objects
Mappings
Mapplets
Multidimensional metadata
Reusable transformations
Sessions and batches
Shortcuts
Source definitions
Target definitions
Transformations
Server enhancements:
You can copy sessions across folders and repositories using the
Copy Session Wizard in the Informatica Server Manager.
100 .What is tracing level and what r the types of tracing level?
If you do not clear Perform Recovery, the next time you run the
session, the Informatica Server attempts to recover the previous
session.
If you do not configure a session in a sequential batch to stop on
failure, and the remaining sessions in the batch complete, recover
the failed session as a standalone session.
107. How to recover sessions in concurrent batches?
No. Informatica is not at all concerned with the back-end database. It
displays all the information that is stored in the repository. If you want
back-end changes reflected on the Informatica screens, you have to
import from the back end into Informatica again through a valid
connection, and you have to replace the existing files with the imported
files.
General Questions
1. Tell about yourself?
2. How many years of oracle experience?
3. How many years of warehouse tools experience?
4. How many years of informatica experience?
5. Have you involved in all the phases of the software?
6. What you have done in analysis and design phase in your
recent project?
7. How many years of data warehousing experience?
8. How do you mentor/train the employees?
9. What encouraged you to work in your recent project?
10. Did you get any problems while in analysis phase?
11. Have you followed any naming standards?
12. What kind of documents you have prepared?
13. What is your responsibility in your recent project?
14. With which version you started your career in
DataStage?
15. What is your operating system
environment? Is it NT or UNIX?
16. Have you written any Unix shell scripting?
17. How many databases you know?
18. Do you have any questions?
19. Can you explain about the last two projects you have
done?
20. What is your visa status?
21. How did you learn Informatica?
22. Do you drive?
23. When did your last project get over?
24. How did you commute to work?
25. Give me your functional objective of your last project.
26. How did you implement your latest project?
27. What phases did you go through in the project?
PL/SQL
4. what is a cursor?
Ans : A cursor is a variable whose declaration
specifies a set of tuples (as a query result) such
that the tuples can be processed in a tuple-oriented
way (i.e., one row at a time) using the fetch
statement.
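As a sketch of the definition above (the EMP table and its columns are illustrative, not from the source), an explicit cursor fetching one tuple at a time might look like:

```sql
-- Illustrative PL/SQL block; assumes a table EMP(ENAME, SAL) exists.
DECLARE
  CURSOR c_emp IS
    SELECT ename, sal FROM emp WHERE sal > 1000;  -- the declared query
  v_ename emp.ename%TYPE;
  v_sal   emp.sal%TYPE;
BEGIN
  OPEN c_emp;
  LOOP
    FETCH c_emp INTO v_ename, v_sal;   -- one row per fetch
    EXIT WHEN c_emp%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(v_ename || ': ' || v_sal);
  END LOOP;
  CLOSE c_emp;
END;
/
```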
Connected Lookup vs. Unconnected Lookup:

Connected: Receives input values directly from the pipeline.
Unconnected: Receives input values from the result of a :LKP expression in another transformation.

Connected: You can use a dynamic or static cache.
Unconnected: You can use a static cache.

Connected: Cache includes all lookup columns used in the mapping (that is, lookup table columns included in the lookup condition and lookup table columns linked as output ports to other transformations).
Unconnected: Cache includes all lookup/output ports in the lookup condition and the lookup/return port.

Connected: Can return multiple columns from the same row, or insert into the dynamic lookup cache.
Unconnected: Designate one return port (R). Returns one column from each row.

Connected: If there is no match for the lookup condition, the Informatica Server returns the default value for all output ports. If you configure dynamic caching, the Informatica Server inserts the row into the cache or leaves it unchanged.
Unconnected: If there is no match for the lookup condition, the Informatica Server returns NULL.

Connected: If there is a match for the lookup condition, the Informatica Server returns the result of the lookup condition for all lookup/output ports. If you configure dynamic caching, the Informatica Server either updates the row in the cache or leaves the row unchanged.
Unconnected: If there is a match for the lookup condition, the Informatica Server returns the result of the lookup condition into the return port.

Connected: Passes multiple output values to another transformation. Link lookup/output ports to another transformation.
Unconnected: Passes one output value to another transformation. The lookup/output/return port passes the value to the transformation calling the :LKP expression.

Connected: Supports user-defined default values.
Unconnected: Does not support user-defined default values.
14. Can you tell one scenario where you used a lookup
transformation?
38. IIF (ISNULL(A), NULL, IIF (ISNULL(B), 4, IIF (D = '1', 0,
-1)))
a. What if D = '1', B is null, and A is not null?
39. Did you do Error Handling? (Null Handling?)
40. How do you migrate Mappings and
Sessions from Development to QA or Testing?
41. For which transformations do sessions run
slowly? If so, how do you fix them?
42. What is pmcmd
43. Which Transformation is used to join
heterogeneous sources residing at different locations or
File Systems?
44. How do you truncate a table? What about the
truncate option on the target settings?
45. What are target load strategies? What you were
using in your latest project?
46. What is a router transformation?
47. Which tool do you use to perform unlocks? Repository
Manager
48. Added features of Informatica 6.0 Designer
49. What is a worklet?
50. Difference between using a joiner transformation
and SQL with multiple joins? Which do you prefer?
51. What does the normalizer transformation do?
52. Which transformations should not use in Mapplets?
53. What is pmrep command?
54. Write the syntax for an Unconnected Lookup where the
lookup name is "SATYAM" and two values are to be
passed, "SATYAM COMPUTERS" and "STC"?
55. IIF (ISNULL (A), DD_INSERT, DD_UPDATE), what
is the O/P?
56. How do you run the server on unix machines? Ans:
Using ‘pmcmd’ command.
57. You usually get flat files from legacy systems. They
can be joined with tables from relational sources using a
joiner transformation.
58. Have you used FTP connections? Ans : Yes, we
used to get flat files from legacy systems. We used to
create ftp connections to predefined paths on remote
systems and when the session is run Informatica gets
the file from the remote system.
59. The order in which Informatica server sends
records to various target definitions in the mapping is
known as?
60. What properties should be there for the shared
folder (shortcuts)?
Performance Questions
1. What are the performance issues you have come across? And
how did you handle them?
2. How can you increase the performance at the mapping level?
3. Performance tuning at session level.
4. What is Repository tuning?
5. If a session fails, who are all the people to whom you were
sending the email? And how did you do that?
6. How do you improve query performance?
1. How did you handle the rejected data? Ans: Open the log file
and the rejected file and analyze the reason for rejection of each
row, then modify the data in the rejected file, and then reload
the data into the target tables using the reject load utility.
2. In how many ways can you load the target data?
3. Can we create a target table dynamically? How?
4. How can we use the same mapping for extracting data from a
source, which comes with a different name every week without
modifying the mapping?
5. What is the difference between bulk load and normal load?
6. In how many ways can you load the target data?
7. Can we create a target table dynamically? How?
8. How can we use the same mapping for extracting data from a
source, which comes with a different name every week without
modifying the mapping.
9. There are three targets "X", "Y", "Z" in a mapping. How do I
view the mapping for target "X" only, without having "Y" and
"Z" on the screen? Ans) Select "Layout" from the toolbar, go
to the option "Arrange", and select the target "X".
10. There are three targets "X", "Y", "Z" in a mapping. How
do I set the load sequence so that the data gets loaded for "X",
then "Y", and then "Z"? Ans: Go to "Mappings" in the toolbar
and then select Target Load Plan.
12. How did you handle the data errors, say bad data?
13. How do you do a Test Load ?
14. If there is no primary key on the Target Table can we
update the Target Table? Ans : No
15. What is the difference between normal load and bulk
load?
16. How did you handle the rejected data? Ans: Open the
rejected file and analyze the reason for rejection of each row,
modify the data in the rejected file, and then reload the data
into the target tables using the reject load utility.
17. What are the different sources that Informatica can handle?
18. When can we run a Store Procedure?
Ans:
a. Normal - when the stored procedure is supposed to be
executed after each and every row of data.
Advantages of MDDB:
Retrieval is very fast because
The data corresponding to any combination of dimension
members can be retrieved with a single I/O.
Data is clustered compactly in a multidimensional array.
Values are calculated ahead of time.
The index is small and can therefore usually reside completely
in memory.
Storage is very efficient because
The blocks contain only data.
A single index locates the block corresponding to a
combination of sparse dimension numbers.
A data warehouse (or mart) is a way of storing data for later retrieval. This
retrieval is almost always used to support decision-making in the
organization. That is why many data warehouses are considered to be
DSS (Decision-Support Systems).
Both a data warehouse and a data mart are storage mechanisms for
read-only, historical, aggregated data.
By read-only, we mean that the person looking at the data won't be
changing it. If a user looks at yesterday's sales for a certain product,
they should not have the ability to change that number.
The “historical” part may be just a few minutes old, but usually it is at
least a day old. A data warehouse usually holds data that goes back a
certain period in time, such as five years. In contrast, standard OLTP
systems usually only hold data as long as it is “current” or active. An
order table, for example, may move orders to an archive table once they
have been completed, shipped, and received by the customer.
When we say that data warehouses and data marts hold aggregated
data, we need to stress that there are many levels of aggregation in a
typical data warehouse.
8. If the data source is in the form of an Excel spreadsheet, then how do you use it?
Ans: PowerMart and PowerCenter treat a Microsoft Excel source as a
relational database, not a flat file. Like relational sources,
the Designer uses ODBC to import a Microsoft Excel source. You do
not need database permissions to import Microsoft
Excel sources.
To import an Excel source definition, you need to complete the
following tasks:
Install the Microsoft Excel ODBC driver on your system.
Create a Microsoft Excel ODBC data source for each source file in
the ODBC 32-bit Administrator.
Prepare Microsoft Excel spreadsheets by defining ranges and
formatting columns of numeric data.
Import the source definitions in the Designer.
Once you define ranges and format cells, you can import the ranges in
the Designer. Ranges display as source definitions
when you import the source.
The main reason for using a canned query or report rather than creating
your own is that your chances of misinterpreting data or getting the
wrong answer are reduced. You are assured of getting the right data and
the right answer.
12. How many fact tables and how many dimension tables did you use? Which
table precedes what?
Ans: http://www.ciobriefings.com/whitepapers/StarSchema.asp
13. What is the difference between STAR SCHEMA & SNOW FLAKE
SCHEMA?
Ans: http://www.ciobriefings.com/whitepapers/StarSchema.asp
14. Why did u choose STAR SCHEMA only? What are the benefits
of STAR SCHEMA?
Ans: Because of its denormalized structure, i.e., the dimension tables are
denormalized. Why denormalize? The first (and often
only) answer is: speed. An OLTP structure is designed for data inserts,
updates, and deletes, but not data retrieval. Therefore,
we can often squeeze some speed out of it by denormalizing some of
the tables and having queries go against fewer tables.
These queries are faster because they perform fewer joins to retrieve
the same recordset. Joins are also confusing to many
end users. By denormalizing, we can present the user with a view of
the data that is far easier for them to understand.
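As an illustration of "fewer joins" (all table and column names here are hypothetical), a query against a star schema touches only the fact table and the denormalized dimensions it needs:

```sql
-- Hypothetical star schema: SALES_FACT plus denormalized dimension tables.
SELECT d.year,
       p.category,
       SUM(f.sales_amount) AS total_sales
FROM   sales_fact  f
JOIN   date_dim    d ON d.date_key    = f.date_key
JOIN   product_dim p ON p.product_key = f.product_key
GROUP BY d.year, p.category;
-- In a snowflake schema, PRODUCT_DIM would itself be normalized
-- (e.g. a separate CATEGORY table), adding at least one more join here.
```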
16. (i) What is FTP? (ii) How do u connect to remote? (iii) Is there
another way to use FTP without a special utility?
Ans: (i): The FTP (File Transfer Protocol) utility program is commonly used
for copying files to and from other computers. These
computers may be at the same site or at different sites thousands of
miles apart. FTP is a general protocol that works on UNIX
systems as well as other non-UNIX systems.
20. When do you create the Source Definition? Can I use this Source Definition
in any Transformation?
Ans: When working with a file that contains fixed-width binary data ,
you must create the source definition.
The Designer displays the source definition as a table,
consisting of names, datatypes, and constraints. To use a source
definition in a mapping, connect a source definition to a
Source Qualifier or Normalizer transformation. The Informatica
Server uses these transformations to read the source data.
25. What are the tasks that are done by Informatica Server?
Ans:The Informatica Server performs the following tasks:
Manages the scheduling and execution of sessions and batches
Executes sessions and batches
Verifies permissions and privileges
Interacts with the Server Manager and pmcmd.
The Informatica Server moves data from sources to targets based
on metadata stored in a repository. For instructions on how to
move and transform data, the Informatica Server reads a mapping
(a type of metadata that includes transformations and source and
target definitions). Each mapping uses a session to define
additional information and to optionally override mapping-level
options. You can group multiple sessions to run as a single unit,
known as a batch.
26. What are the two programs that communicate with the Informatica
Server?
Ans: Informatica provides Server Manager and pmcmd programs to
communicate with the Informatica Server:
Server Manager. A client application used to create and manage
sessions and batches, and to monitor and stop the Informatica Server.
You can use information provided through the Server Manager to
troubleshoot sessions and improve session performance.
pmcmd. A command-line program that allows you to start and stop
sessions and batches, stop the Informatica Server, and verify if the
Informatica Server is running.
27. When do u reinitialize Aggregate Cache?
Ans: Reinitializing the aggregate cache overwrites historical aggregate
data with new aggregate data. When you reinitialize the
aggregate cache, instead of using the captured changes in the source
tables, you typically need to use the entire source
table.
For example, you can reinitialize the aggregate cache if the source
for a session changes incrementally every day and
completely changes once a month. When you receive the new
monthly source, you might configure the session to reinitialize
the aggregate cache, truncate the existing target, and use the new
source table during the session.
28. (ii) What are the minimum conditions needed to
use the Target Load Order option in the Designer?
Ans: You need to have multiple Source Qualifier transformations.
To specify the order in which the Informatica Server sends data to
targets, create one Source Qualifier or Normalizer transformation for each
target within a mapping. To set the target load order, you then determine the
order in which each
Source Qualifier sends data to connected targets in the
mapping.
When a mapping includes a Joiner transformation, the
Informatica Server sends all records to targets connected to that
Joiner at the same time, regardless of the target load order.
To add a slight performance boost, you can also set the tracing level to
Terse, writing the minimum of detail to the session log
when running a session containing the transformation.
Data Warehouse:
A data warehouse is a central repository for all or significant parts of
the data that an enterprise's various business systems collect. The term
was coined by W. H. Inmon. IBM sometimes uses the term
"information warehouse."
Typically, a data warehouse is housed on an enterprise mainframe
server. Data from various online transaction processing (OLTP)
applications and other sources is selectively extracted and organized on
the data warehouse database for use by analytical applications and user
queries. Data warehousing emphasizes the capture of data from
diverse sources for useful analysis and access, but does not generally
start from the point-of-view of the end user or knowledge worker who
may need access to specialized, sometimes local databases. The latter
idea is known as the data mart.
data mining, Web mining, and a decision support system (DSS)
are three kinds of applications that can make use of a data warehouse.
34. How do you use DDL commands in a PL/SQL block, e.g. accept a table
name from the user and drop it if available, else display a message?
Ans: To invoke DDL commands in PL/SQL blocks we have to use
Dynamic SQL; the package used is DBMS_SQL.
35. What are the steps to work with Dynamic SQL?
Ans: Open a dynamic cursor, parse the SQL statement, bind input variables (if any),
execute the SQL statement of the dynamic cursor, and
close the cursor.
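A sketch tying questions 34 and 35 together (the procedure name is invented and error handling is minimal): drop a user-supplied table via DBMS_SQL, following the open/parse/execute/close steps. Note that in DBMS_SQL, DDL actually runs at parse time.

```sql
-- Hypothetical procedure: drop the named table, or display a message.
CREATE OR REPLACE PROCEDURE drop_if_exists (p_table IN VARCHAR2) IS
  v_cur   INTEGER;
  v_dummy INTEGER;
BEGIN
  v_cur := DBMS_SQL.OPEN_CURSOR;                 -- step 1: open
  DBMS_SQL.PARSE(v_cur, 'DROP TABLE ' || p_table,
                 DBMS_SQL.NATIVE);               -- step 2: parse (DDL executes here)
  v_dummy := DBMS_SQL.EXECUTE(v_cur);            -- step 3: execute (no-op for DDL)
  DBMS_SQL.CLOSE_CURSOR(v_cur);                  -- step 4: close
EXCEPTION
  WHEN OTHERS THEN
    IF DBMS_SQL.IS_OPEN(v_cur) THEN
      DBMS_SQL.CLOSE_CURSOR(v_cur);
    END IF;
    DBMS_OUTPUT.PUT_LINE('Table ' || p_table || ' not available: ' || SQLERRM);
END;
/
```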
36. Which package, procedure is used to find/check free space available for
db objects like table/procedures/views/synonyms…etc?
Ans: The Package is DBMS_SPACE
The Procedure is UNUSED_SPACE
The Table is DBA_OBJECTS
37. Does Informatica allow it if EmpId is the primary key in the target table and the
source data contains 2 rows with the same EmpId? If you use a lookup in the same
situation, does it allow you to load 2 rows or only 1?
Ans: => No, it will not; it generates a primary key constraint violation (it loads 1 row).
=> Even then no, if EmpId is the primary key.
Explained.
**************************
TO QUERY THE PLAN TABLE :-
**************************
SELECT RTRIM(ID)||' '||
       LPAD(' ', 2*(LEVEL-1))||OPERATION
       ||' '||OPTIONS
       ||' '||OBJECT_NAME STEP_DESCRIPTION
FROM PLAN_TABLE
START WITH ID = 0 AND STATEMENT_ID = 'PKAR02'
CONNECT BY PRIOR ID = PARENT_ID
AND STATEMENT_ID = 'PKAR02'
ORDER BY ID;

STEP_DESCRIPTION
----------------------------------------------------
0 SELECT STATEMENT
1   FILTER
2     SORT GROUP BY
3       TABLE ACCESS FULL EMP
==============================================================
Copying Mapping:
To copy the mapping, open a workbook.
In the Navigator, click and drag the mapping slightly to the right, without
dragging it into the workbook.
When asked if you want to make a copy, click Yes, then enter a new
name and click OK.
Choose Repository-Save.
Repository Copying: You can copy a repository from one database
to another. You use this feature before upgrading, to
preserve the original repository. Copying repositories provides a quick
way to copy all metadata you want to use as a basis for
a new repository.
If the database into which you plan to copy the repository contains an
existing repository, the Repository Manager deletes the existing
repository. If you want to preserve the old repository, cancel the copy.
Then back up the existing repository before copying the new repository.
To copy a repository, you must have one of the following
privileges:
Administer Repository privilege
Super User privilege
To copy a repository:
1. In the Repository Manager, choose Repository-Copy Repository.
2. Select a repository you wish to copy, then enter the following
information:
Copy Repository fields (all required):
Repository: Name for the repository copy. Each repository name must be
unique within the domain and should be easily distinguished from all
other repositories.
Database Username: Username required to connect to the database. This
login must have the appropriate database permissions to create the
repository.
Database Password: Password associated with the database username.
Must be in US-ASCII.
ODBC Data Source: Data source used to connect to the database.
Native Connect String: Connect string identifying the location of the
database.
Code Page: Character set associated with the repository. Must be a
superset of the code page of the repository you want to copy.
If you are not connected to the repository you want to copy, the
Repository Manager asks you to log in.
3. Click OK.
5. If asked whether you want to delete existing repository data in the
second repository, click OK to delete it. Click Cancel to preserve the
existing repository.
Copying Sessions:
In the Server Manager, you can copy stand-alone sessions within a
folder, or copy sessions in and out of batches.
To copy a session, you must have one of the following:
Create Sessions and Batches privilege with read and write
permission
Super User privilege
To copy a session:
1. In the Server Manager, select the session you wish to copy.
2. Click the Copy Session button or choose Operations-Copy Session.
The Server Manager makes a copy of the session. The Informatica
Server names the copy after the original session, appending a number,
such as session_name1.
When the object the shortcut references changes, the shortcut inherits
those changes. By using a shortcut instead of a copy,
you ensure each use of the shortcut exactly matches the original
object. For example, if you have a shortcut to a target
definition, and you add a column to the definition, the shortcut
automatically inherits the additional column.
For example, you might use a shell command to copy a file from one
directory to another. For a Windows NT server you would use the
following shell command to copy the SALES_ADJ file from the target
directory, L, to the source, H:
copy L:\sales\sales_adj H:\marketing\
For a UNIX server, you would use the following command line to
perform a similar operation:
cp sales/sales_adj marketing/
Note: You can only work within one version of a folder at a time.
50. How do you automate/schedule sessions/batches, and did you use any tool for
automating sessions/batches?
Ans: We scheduled our sessions/batches using the Server Manager.
You can either schedule a session to run at a given time or interval, or
you can manually start the session.
You need to have the Create Sessions and Batches privilege with Read and
Execute permissions, or the Super User privilege.
If you configure a batch to run only on demand, you cannot
schedule it.
51. What are the differences between 4.7 and 5.1 versions?
Ans: New transformations were added, like the XML transformation and the MQ Series
transformation, and PowerMart and PowerCenter
are the same from version 5.1.
53. How many values does it (the Informatica Server) return when it passes through
a Connected Lookup and an Unconnected Lookup?
Ans: A Connected Lookup can return multiple values, whereas an Unconnected
Lookup returns only one value, the return value.
When you enter the expression, you can use values available
through ports. For example, if the transformation has two input
ports representing a price and sales tax rate, you can calculate the
final sales tax using these two values. The ports used in the
expression can appear in the same transformation, or you can use
output ports in other transformations.
57. In the case of flat files (which come through FTP as a source), what
happens if a file has not arrived? Where do you set this option?
Ans: You get a fatal error, which causes the server to fail/stop the session.
You can set the Event-Based Scheduling option in Session
Properties, under the General tab --> Advanced options.
Indicator File to Wait For (Optional): Required to use event-based
scheduling. Enter the indicator file (or directory and file) whose
arrival schedules the session. If you do not enter a directory, the
Informatica Server assumes the file appears in the server variable
directory $PMRootDir.
58. What is the Test Load Option and when you use in Server
Manager?
Ans: When testing sessions in development, you may not need to
process the entire source. If this is true, use the Test Load
option (Session Properties --> General tab --> Target Options:
choose the Target Load option Normal, check the Test Load
check box, and enter the number of rows to test, e.g. 2000).
You can also click the Start button.
----------------------------------------------------------------------------------
59. SCD Type 2 and SGT difference?
67. Can you refresh the Repository in 4.7 and 5.1? And can you refresh
pieces of the repository (partially) in 4.7 and 5.1?
68. What is BI?
Ans: http://www.visionnet.com/bi/index.shtml
70. BI Faq
Ans: http://www.visionnet.com/bi/bi-faq.shtml
Oracle Questions
PL/SQL Questions
SQL Questions
Sebastian
CISCO
1 bulk bind
2 bind variable
8 snapshot too old error: what is it, have you encountered this error
in your project, and what is the solution?
10 Informatica: what all things have you done in your project?
11 what was your role in your last project?
16 what are hints? They are for query optimization; a hint is an option you
give to a query to choose how to execute a statement.
PL/SQL QUESTIONS:
2. What is a mutating table error and how can you get around
it?
Level: Intermediate
Expected answer: This happens with triggers. It occurs
because the trigger is trying to update a row it is currently
using. The usual fix involves either use of views or temporary
tables so the database is selecting from one while updating
the other.
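A minimal sketch of the error and one workaround (table and trigger names are invented for illustration): a row-level trigger on EMP that queries EMP raises ORA-04091, while the same query at statement level is legal because the table is no longer mutating at that point.

```sql
-- Raises ORA-04091 "table ... is mutating" at run time:
CREATE OR REPLACE TRIGGER emp_check
BEFORE UPDATE ON emp
FOR EACH ROW
DECLARE
  v_avg NUMBER;
BEGIN
  SELECT AVG(sal) INTO v_avg FROM emp;  -- reads the mutating table
END;
/
-- Workaround sketch: do the read at statement level instead.
CREATE OR REPLACE TRIGGER emp_check_stmt
AFTER UPDATE ON emp
DECLARE
  v_avg NUMBER;
BEGIN
  SELECT AVG(sal) INTO v_avg FROM emp;  -- legal: EMP is not mutating here
END;
/
```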
DBA:
12. What causes the "snapshot too old" error? How can this
be prevented or mitigated?
Level: Intermediate
Expected answer: This is caused by large or long running
transactions that have either wrapped onto their own rollback
space or have had another transaction write on part of their
rollback space. This can be prevented or mitigated by
breaking the transaction into a set of smaller transactions or
increasing the size of the rollback segments and their extents.
16. If you have an example table, what is the best way to get
sizing data for the production table implementation?
Level: Intermediate
Expected answer: The best way is to analyze the table and
then use the data provided in the DBA_TABLES view to get
the average row length and other pertinent data for the
calculation. The quick and dirty way is to look at the number of
blocks the table is actually using, work out the ratio of rows to
blocks in the example table, and scale that by the number of
expected rows.
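The quick-and-dirty ratio described above can be written out as simple arithmetic. The sketch below is illustrative only; the row and block counts are made-up numbers, not values from any real DBA_TABLES query:

```python
# Hypothetical quick-and-dirty sizing estimate: scale the blocks the
# example table uses by the ratio of expected rows to current rows.
def estimate_blocks(current_rows, current_blocks, expected_rows):
    """Estimate blocks needed for the production table."""
    rows_per_block = current_rows / current_blocks
    return expected_rows / rows_per_block

# Example: 10,000 rows currently fit in 100 blocks; we expect 1,000,000 rows.
print(estimate_blocks(10_000, 100, 1_000_000))  # 10000.0
```

The analyze-based method is still more accurate, since it uses the measured average row length rather than a block ratio.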
17. How can you find out how many users are currently
logged into the database? How can you find their operating
system id?
Level: high
Expected answer: There are several ways. One is to look at
the v$session or v$process views. Another way is to check
the current_logins parameter in the v$sysstat view. Another if
you are on UNIX is to run a "ps -ef | grep oracle | wc -l"
command, but this only works against a single-instance
installation.
18. A user selects from a sequence and gets back two values,
his select is:
SELECT pk_seq.nextval FROM dual;
What is the problem?
Level: Intermediate
Expected answer: Somehow two values have been inserted
into the dual table. This table is a single row, single column
table that should only have one value in it.
SQL/ SQLPLUS
1. How can variables be passed to a SQL routine?
Level: Low
Expected answer: By use of the & symbol. For passing in
variables the numbers 1-8 can be used (&1, &2,...,&8) to pass
the values after the command into the SQLPLUS session. To
be prompted for a specific variable, place the ampersanded
variable in the code itself:
"select * from dba_tables where owner=&owner_name;".
Use of double ampersands tells SQLPLUS to resubstitute the
value for each subsequent use of the variable; a single
ampersand will cause a reprompt for the value unless an
ACCEPT statement is used to get the value from the user.
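SQL*Plus performs this substitution itself before the statement is sent to the database. As a rough illustration of the mechanics only (not how SQL*Plus is implemented), a "&variable" substitution can be mimicked like this:

```python
import re

# Illustration: mimic SQL*Plus "&variable" substitution in Python.
# SQL*Plus itself prompts interactively; here we substitute from a dict.
def substitute(sql, values):
    return re.sub(r'&(\w+)', lambda m: values[m.group(1)], sql)

sql = "select * from dba_tables where owner=&owner_name;"
print(substitute(sql, {"owner_name": "SCOTT"}))
# select * from dba_tables where owner=SCOTT;
```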
5. You want to use SQL to build SQL, what is this called and
give an example
Level: Intermediate to high
Expected answer: This is called dynamic SQL. An example
would be:
set lines 90 pages 0 termout off feedback off verify off
spool drop_all.sql
select 'drop user '||username||' cascade;' from dba_users
where username not in ('SYS','SYSTEM');
spool off
Essentially you are looking to see that they know to include a
command (in this case DROP USER ... CASCADE;) and that
you need to concatenate the values selected from the
database using the '||' operator.
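The same "SQL builds SQL" idea can be sketched outside SQL*Plus for illustration. The user list below is hypothetical; in the real script the names come from DBA_USERS:

```python
# Illustration of "SQL builds SQL": generate one DROP USER statement
# per username, skipping the accounts in the exclude list.
def build_drop_statements(usernames, exclude=("SYS", "SYSTEM")):
    return [f"drop user {u} cascade;" for u in usernames if u not in exclude]

stmts = build_drop_statements(["SYS", "SYSTEM", "SCOTT", "HR"])
print(stmts)  # ['drop user SCOTT cascade;', 'drop user HR cascade;']
```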
11. You are joining a local and a remote table, the network
manager complains about the traffic involved, how can you
reduce the network traffic?
Level: High
Expected answer: Push the processing of the remote data to
the remote instance by using a view to pre-select the
information for the join. This will result in only the data
required for the join being sent across.
TUNING QUESTIONS:
10. Where can you get a list of all initialization parameters for
your instance? How about an indication if they are default
settings or have been changed?
Level: Low
Expected answer: You can look in the init<sid>.ora file for an
indication of manually set parameters. For all parameters,
their value and whether or not the current value is the default
value, look in the v$parameter view.
12. Discuss row chaining, how does it happen? How can you
reduce it? How do you correct it?
Level: high
Expected answer: Row chaining occurs when a VARCHAR2
value is updated and the length of the new value is longer
than the old value and won’t fit in the remaining block space.
This results in the row chaining to another block. It can be
reduced by setting the storage parameters on the table to
appropriate values. It can be corrected by export and import
of the affected table.
13. When looking at the estat events report you see that you
are getting busy buffer waits. Is this bad? How can you find
what is causing it?
Level: high
Expected answer: Buffer busy waits could indicate contention
in redo, rollback or data blocks. You need to check the
v$waitstat view to see what areas are causing the problem.
The value of the "count" column tells where the problem is,
the "class" column tells you with what. UNDO is rollback
segments, DATA is database buffers.
14. If you see contention for library caches how can you fix it?
Level: Intermediate
Expected answer: Increase the size of the shared pool.
15. If you see statistics that deal with "undo" what are they
really talking about?
Level: Intermediate
Expected answer: Rollback segments and associated
structures.
20. What can cause a high value for recursive calls? How can
this be fixed?
Level: High
Expected answer: A high value for recursive calls is caused by
improper cursor usage, excessive dynamic space
management actions, and/or excessive statement re-parses.
You need to determine the cause and correct it by either
relinking applications to hold cursors, using proper space
management techniques (proper storage and sizing), or
ensuring repeat queries are placed in packages for proper
reuse.
21. If you see a pin hit ratio of less than 0.8 in the estat library
cache report is this a problem? If so, how do you fix it?
Level: Intermediate
Expected answer: This indicates that the shared pool may be
too small. Increase the shared pool size.
22. If you see the value for reloads is high in the estat library
cache report is this a matter for concern?
Level: Intermediate
Expected answer: Yes, you should strive for zero reloads if
possible. If you see excessive reloads then increase the size
of the shared pool.
23. You look at the dba_rollback_segs view and see that there
is a large number of shrinks and they are of relatively small
size, is this a problem? How can it be fixed if it is a problem?
Level: High
Expected answer: A large number of small shrinks indicates a
need to increase the size of the rollback segment extents.
Ideally you should have no shrinks or a small number of large
shrinks. To fix this just increase the size of the extents and
adjust optimal accordingly.
24. You look at the dba_rollback_segs view and see that you
have a large number of wraps is this a problem?
Level: High
Expected answer: A large number of wraps indicates that the
extent size for your rollback segments is probably too small.
Increase the size of your extents to reduce the number of
wraps. You can look at the average transaction size in the
same view to get the information on transaction size.
INSTALLATION/CONFIGURATION
1. Define OFA.
Level: Low
Expected answer: OFA stands for Optimal Flexible
Architecture. It is a method of placing directories and files in
an Oracle system so that you get the maximum flexibility for
future tuning and file placement.
4. You have installed Oracle and you are now setting up the
actual instance. You have been waiting an hour for the
initialization script to finish, what should you check first to
determine if there is a problem?
Level: Intermediate to high
Expected Answer: Check to make sure that the archiver isn’t
stuck. If archive logging is turned on during install a large
number of logs will be created. This can fill up your archive log
destination causing Oracle to stop to wait for more space.
11. How many control files should you have? Where should
they be located?
Level: Low
Expected answer: At least 2 on separate disk spindles. Be
sure they stay on separate disks, not just file systems.
12. How many redo logs should you have and how should
they be configured for maximum recoverability?
Level: Intermediate
Expected answer: You should have at least three groups of
two redo logs with the two logs each on a separate disk
spindle (mirrored by Oracle). The redo logs should not be on
raw devices on UNIX if it can be avoided.
DATA MODELER:
1. Describe third normal form?
Level: Low
Expected answer: Something like: In third normal form all
attributes in an entity are related to the primary key and only
to the primary key.
3. What is an ERD?
Level: Low
Expected answer: An ERD is an Entity-Relationship-Diagram.
It is used to show the entities and relationships for a database
logical model.
UNIX:
9. What is an inode?
Level: Intermediate
Expected answer: An inode is a file status indicator. It is
stored both on disk and in memory and tracks file status. There
is one inode for each file on the system.
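A quick way to see inode numbers on any UNIX system, using a throwaway file so the example is self-contained (the exact inode number will of course vary):

```shell
f=$(mktemp)      # create a temporary file for the demo
ls -i "$f"       # prints "<inode-number> <path>"
ls -di /tmp      # directories have inodes too
rm -f "$f"
```

Hard links to the same file share a single inode, which is why the link count and inode number appear together in "ls -li" output.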
10. The system administrator tells you that the system hasn’t
been rebooted in 6 months, should he be proud of this?
Level: High
Expected answer: Maybe. Some UNIX systems don’t clean up
well after themselves. Inode problems and dead user
processes can accumulate causing possible performance and
corruption problems. Most UNIX systems should have a
scheduled periodic reboot so file systems can be checked and
cleaned and dead or zombie processes cleared out.
13. How can you find all the processes on your system?
Level: Low
Expected answer: Use the ps command
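For example, with POSIX options (the exact columns and counts depend on your system):

```shell
# List every process on the system, full format.
ps -ef
# Count them, skipping the header line.
ps -ef | tail -n +2 | wc -l
```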
ORACLE TROUBLESHOOTING:
Level: Low
ORA-06114: (Cnct err, can't get err txt. See Servr Msgs &
Codes Manual)
19. How did you handle the rejected data? Ans: Open the log
file and the rejected file, analyze the reason for rejection of
each row, modify the data in the rejected file, and then reload
the data into the target tables using the reject load utility.
20. In how many types you can load the target data?
21. Can we create target table dynamically? How?
22. How can we use the same mapping for extracting data
from a source, which comes with a different name every week
without modifying the mapping?
23. What is the difference between bulk load and normal
load?
24. In how many types you can load the target data?
25. Can we create target table dynamically? How?
26. How can we use the same mapping for extracting data
from a source, which comes with a different name every week
without modifying the mapping.
27. There are three Targets "X", "Y", "Z" in a mapping. How
do I view the mapping only for target "X", without having "Y"
and "Z" on the screen? Ans: Select "Layout" from the toolbar,
go to the option "Arrange", and select the target "X".
28. There are three Targets "X", "Y", "Z" in a mapping. How
do I set the load sequence so that the data gets loaded for
"X", then "Y", and then "Z"? Ans: Go to "Mappings" in the
toolbar and then select Target Load Plan.
30. How did you handle the data errors, say bad data?
31. How do you do a Test Load ?
32. If there is no primary key on the Target Table can we
update the Target Table? Ans : No
33. What is difference in between normal load and bulk
load?
34. How did you handle the rejected data? Ans: Open the
rejected file and analyze the reason for rejection of each row
and modify the data in the rejected file, then using reject load
utility reload the data into the target tables.
35. What the different sources that Informatica can handle?
36. When can we run a Store Procedure?
Ans:
f. Normal - when the stored procedure is supposed to be
executed after each and every row of data.
g. Pre-Load of the Source - before the session retrieves
the data from the source.
Designer Questions
79. IIF(ISNULL(A), NULL, IIF(ISNULL(B), 4, IIF(D='1', 0, -1)))
a. What is the result if D='1', B is null, and A is not null?
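One way to reason about question 79 is to model IIF with a plain conditional. This sketch is an illustration only, not Informatica code; it shows that in case (a) the B test fires first, so the result is 4:

```python
# Model of Informatica's IIF(condition, then, otherwise).
def iif(cond, then, otherwise):
    return then if cond else otherwise

# The nested expression from question 79.
def expr(A, B, D):
    return iif(A is None, None,
               iif(B is None, 4,
                   iif(D == '1', 0, -1)))

# Case (a): D='1', B is null, A is not null -> the ISNULL(B) branch wins.
print(expr(A='x', B=None, D='1'))  # 4
```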
80. What is SQL Override
81. Did you do Error Handling? (Null Handling?)
82. Explain the complex mapping you did?
83. Purpose of Source qualifier Transformation ?
84. How do we migrate the Mappings and Sessions
from Development to QA or Testing?
85. Can we use two tables from two different databases in a
joiner sql override in Source Qualifier Transformation? Ans :
NO
86. Have you created Stored Procedure ?
87. For which Transformations do the Sessions run slowly?
And how do you fix them?
88. What are the Various Kinds of Ports that are used in
different Transformations?
89. Designer lets you add local variables to which
transformations?
90. What is pmcmd
91. What are the two ways to validate a Mapping
92. Which Transformation is used to join heterogeneous
sources residing at different locations or File Systems?
93. Can you tell one Scenario where you used lookup
transformation
94. Have you used any of the advanced configuration options
regarding performance?
95. Can u send data into a relational table and flat file from
the same source?
96. What was the target database?
97. How do you debug a procedure (or how do I get process
details)?
98. How do you truncate a table? What about the truncate
option on the target settings?
99. What are target load strategies? What you were using in
your latest project?
100. What is a router transformation?
101. Can you have a maplet inside another maplet? No
102. Which tool do you use to perform unlocks? Repository
Manager
103. Added features of Informatica 6.0 Designer
104. What is a worklet?
105. Difference between using a joiner transformation and
SQL with multiple joins? Which do you prefer?
106. What does the normalizer transformation do?
107. Which transformations should not be used in Mapplets?
108. What is pmrep command?
109. Write the Syntax for Unconnected Lookup And the
lookup Name is "SATYAM" and two values are to be parsed
"SATYAM COMPUTERS" AND "STC"?
110. IIF (ISNULL (A), DD_INSERT, DD_UPDATE), what is
the O/P?
111. How do you run the server on unix machines? Ans:
Using ‘pmcmd’ command.
112. What is the warehouse designer in Informatica?
113. You usually get flat files from legacy systems. They can
be joined with tables from relational sources using a joiner
transformation.
114. Have you used FTP connections? Ans : Yes, we used to
get flat files from legacy systems. We used to create ftp
connections to predefined paths on remote systems and when
the session is run Informatica gets the file from the remote
system.
115. The order in which Informatica server sends records to
various target definitions in the mapping is known as?
116. What properties should be there for the shared folder
(shortcuts)?
117. Did u create stored procedure and what exactly did u
write?
118. Sorter transformation what it is for?
Teradata Questions
4. What are Hash Files? Why are they used? How to optimize
them for better performance?
Hash files are basically used as lookup references to make
fetches fast. To get the maximum performance, select only
the required columns when creating the file.
5. Is there any chance that hash file gets corrupted (hope OS isn’t
the source of Corruption)?
Yes, hash files can get corrupted, so always make sure that
you can re-create the hash file by re-running your job.
6. What are the best practices to handle a hash file? What is the
limitation in the size of the hash file in datastage 6.0?
The maximum size of a hash file is 2 GB by default. To create
a hash file larger than that, you need to create it with the
64-bit option.
26. What other inbuilt functions? Did you use any function
other than Iconv and Oconv?
There are many functions, such as Trim, Cast, and several
timestamp functions.
29. How many stages you have used in server jobs? Name
and explain them?
MetaRecon Questions
First and foremost, when you get to know that you are
scheduled to have a telephonic interview with a company, do
a little bit of research. Homework always comes in handy
here. So gather some information about the company and
note it down in your note pad.
Before you get your all important phone call, there are some
things to keep ready. Make sure you have a note pad and pen
with you, to jot down all relevant details about the company
and any queries that you might have to ask. According to
Appaji S.N. Rao V., an Assistant Professor at Gandhi Institute
of Technology & Management, Vizag, India, this is a very
important factor to keep in mind during a telephonic
interview.
When the phone call comes through, make sure you pick up
the call yourself and introduce yourself properly. Get your
interviewer's name right, as you wouldn't want to call
Mr. Allan King "Mr. Fig" or commit some similar inexcusable
faux pas.